# A note on the value distribution of
$f^{l}(f^{(k)})^{n}$
Yan Jiang and Bin Huang
Mathematics Subject Classification (2010): Primary 30D35
###### Abstract
Let $f$ be a transcendental meromorphic function in the complex plane
$\mathbb{C}$, and let $a$ be a nonzero complex number. We give quantitative
estimates for the characteristic function $T(r,f)$ in terms of
$N(r,1/(f^{l}(f^{(k)})^{n}-a))$ for integers $k$, $l$, $n$ greater than 1. We
conclude that $f^{l}(f^{(k)})^{n}$ assumes every nonzero finite value
infinitely often.
Keywords: Transcendental meromorphic function, deficiency.
## 1 Introduction
Let $f$ be a transcendental meromorphic function in the complex plane
$\mathbb{C}$. In this article, we use the standard notation of Nevanlinna
theory [7], such as $T(r,f)$, $N(r,f)$, $\bar{N}(r,f)$, $m(r,f)$, $S(r,f)$,
$\delta(a,f)$. In particular, $T(r,f)$ is the characteristic function and
$\bar{N}(r,f)$ is the counting function of the poles of $f$, ignoring
multiplicities. We use the symbol $S(r,f)$ to denote any quantity $v(r)$
satisfying $v(r)=o\left(T(r,f)\right)$ as $r\rightarrow\infty$, possibly
outside a set of finite linear measure. Throughout this paper, a small
function (with respect to $f$) means a function $\varphi(z)$ meromorphic in
$\mathbb{C}$ satisfying $T(r,\varphi)=S(r,f)$. In addition, we use another
type of error term, $S^{*}(r,f)$, which satisfies
$S^{*}(r,f)=o\left(T(r,f)\right)$ as $r\rightarrow\infty$, $r\not\in E$,
where $E$ is a set of logarithmic density 0.
A meromorphic function $f$ is rational if and only if $T(r,f)=O(\log r)$ (see
[6]). The quantity
$\delta(a,f)=\liminf_{r\to\infty}\frac{m(r,1/(f-a))}{T(r,f)}=1-\limsup_{r\to\infty}\frac{N(r,1/(f-a))}{T(r,f)}$
is called the deficiency of $f$ at the point $a$. A related deficiency is
defined by
$\Theta(a,f)=1-\limsup_{r\to\infty}\frac{\bar{N}(r,1/(f-a))}{T(r,f)}.$
Note that $0\leq\delta(a,f)\leq\Theta(a,f)\leq 1$.
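For a quick illustration of these quantities, consider the classical example
$f(z)=e^{z}$, for which $T(r,f)=r/\pi$. Since $e^{z}$ omits the values $0$
and $\infty$, while $N\left(r,1/(e^{z}-a)\right)=r/\pi+O(\log r)$ for every
$a\neq 0,\infty$, one has
$\delta(0,e^{z})=\delta(\infty,e^{z})=1,\qquad\delta(a,e^{z})=0\quad\text{for }a\neq 0,\infty.$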
The First Fundamental Theorem of value distribution theory, due to
Nevanlinna, is used frequently in this note. It states that
$T\left(r,\frac{1}{f-a}\right)=T(r,f)+O(1)$
for any constant $a$; details can be found in [6], for example. A root of the
equation $f(z)=a$ (of $1/f(z)=0$ when $a=\infty$) is called an $a$-point of
the function $f(z)$, for $a\in\mathbb{C}\cup\{\infty\}$. The value $a$ is
called a Picard exceptional value of a function $f(z)$ if the number of its
$a$-points in $\mathbb{C}$ is finite.
The aim of this paper is to obtain estimates involving
$f^{l}(f^{(k)})^{n}$. The following well-known estimate is due to Hayman [7,
Theorem 3.5].
###### Theorem A.
Let $f$ be a meromorphic and transcendental function in the plane, $l$ be a
positive integer, and $a$, $b$ be constants with $b\neq 0$. Then
$T(r,f)\leq\left(2+\frac{1}{l}\right)N\left(r,\frac{1}{f-a}\right)+\left(2+\frac{2}{l}\right)\bar{N}\left(r,\frac{1}{f^{(l)}-b}\right)+S(r,f).$
(1)
Hayman also deduced the following corollary from this inequality.
###### Corollary.
Under the same assumptions as in Theorem A, either $f$ assumes every finite
value infinitely often or $f^{(l)}$ assumes every finite value except possibly
zero infinitely often.
Moreover, Hayman conjectured that if $f$ is a transcendental meromorphic
function and $l\geq 1$, then $f^{l}f^{\prime}$ takes every finite nonzero
value infinitely often. This conjecture was confirmed by Hayman himself in
[7] for $l\geq 3$, by Mues [13] for $l=2$, and by Bergweiler and Eremenko [3]
for $l=1$. Over the past decades, a series of related results has been
obtained.
In 1982, Doeringer [4, Corollary 1] proved that for a transcendental
meromorphic function $f$, zero is the only possible Picard exceptional value
of the differential monomial $f^{l}(f^{(k)})^{n}$ when $l\geq 3$. In 1994,
Tse and Yang [15] gave an estimate of $T(r,f)$ for $l=1$ and $l=2$ and
confirmed that zero is the only possible Picard exceptional value in these
cases. In 1996, Yang and Hu [19, Theorem 2] proved that if
$\delta(0,f)>3/(3(l+n)+1)$ for positive integers $k$, $l$, $n$, then for a
nonzero finite complex number $a$, $f^{l}(f^{(k)})^{n}-a$ has infinitely many
zeros. In 2002, Li and Wu [12] showed that for a nonzero finite complex
number $a$ and positive integers $l$, $k$ with $l\geq 2$, there exists a
constant $M>0$ such that
$T(r,f)<M\bar{N}\left(r,\frac{1}{f^{l}f^{(k)}-a}\right)+S(r,f).$
In 2003, Wang [16] studied the zeros of $f^{l}f^{(k)}-\phi$ for a small
meromorphic function $\phi(z)\not\equiv 0$ and verified that for $l\geq 2$,
$f^{l}f^{(k)}-\phi$ has infinitely many zeros provided the poles of $f$ are
multiple. In 2004, Alotaibi [2] gave an estimate showing that
$f(f^{(k)})^{n}-\phi$ has infinitely many zeros for a small function
$\phi(z)\not\equiv 0$ when $n\geq 2$.
We next recall a result of Lahiri and Dewan [9, Theorem 3.2].
###### Theorem B.
Let $f$ be a transcendental meromorphic function and let $a$, $\alpha$ be
small functions of $f$, neither of them identically zero or infinite. If
$\psi=\alpha f^{l}(f^{(k)})^{n}$, where $l(\geq 0)$, $n(\geq 1)$, $k(\geq 1)$
are integers, then
$\displaystyle(l+n)T(r,f)\leq\bar{N}\left(r,f\right)+\bar{N}\left(r,\frac{1}{f}\right)+nN_{(k)}\left(r,\frac{1}{f}\right)+\bar{N}\left(r,\frac{1}{\psi-a}\right)+S(r,f),$
(2)
where $N_{(k)}(r,1/f)$ is the counting function of the zeros of $f$, in which
a zero of multiplicity $q$ is counted $\min\{q,k\}$ times.
Remark. Inequality (2) implies that for $l\geq 3$, $n\geq 1$, $k\geq 1$,
$T(r,f)\leq\frac{1}{l-2}N\left(r,\frac{1}{f^{l}(f^{(k)})^{n}-a}\right)+S(r,f),$
(3)
and consequently
$\displaystyle\delta(a,\psi)\leq\Theta(a,\psi)\leq 1-\frac{l-2}{nk+n+l}.$ (4)
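To make the derivation explicit (taking $\alpha\equiv 1$): each of
$\bar{N}(r,f)$, $\bar{N}(r,1/f)$ and $N_{(k)}(r,1/f)\leq N(r,1/f)$ is at most
$T(r,f)+O(1)$, so (2) gives
$(l+n)T(r,f)\leq(n+2)T(r,f)+\bar{N}\left(r,\frac{1}{\psi-a}\right)+S(r,f),$
that is, $(l-2)T(r,f)\leq\bar{N}(r,1/(\psi-a))+S(r,f)\leq
N(r,1/(\psi-a))+S(r,f)$, which is (3); inequality (4) then follows on
combining this with $T(r,\psi)\leq(nk+n+l)T(r,f)+S(r,f)$ (cf. (21) below).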
This result, however, leaves room for refinement. In the present paper, we
obtain an estimate for the case where $k$, $l$, $n$ are all greater than 1;
our proof uses an important inequality of Yamanoi.
###### Theorem C.
_[18, Yamanoi]_ Let $f$ be a transcendental meromorphic function in the
complex plane, let $k\geq 2$ be an integer, and let $A\subset\mathbb{C}$ be a
finite set of complex numbers. Then
$(k-1)\bar{N}(r,f)+\sum_{a\in A}N_{1}\left(r,\frac{1}{f-a}\right)\leq N\left(r,\frac{1}{f^{(k)}}\right)+S^{*}(r,f),$ (5)
where
$N_{1}\left(r,\frac{1}{f-a}\right)=N\left(r,\frac{1}{f-a}\right)-\bar{N}\left(r,\frac{1}{f-a}\right).$
Remark. This theorem corresponds to the famous Gol’dberg conjecture, which
asserts that for a transcendental meromorphic function $f$ and $k\geq 2$,
$\bar{N}(r,f)\leq N\left(r,1/{f^{(k)}}\right)+S(r,f)$. Taking $A$ to be the
empty set yields the special case of Yamanoi’s inequality that we use in our
proof:
$(k-1)\bar{N}(r,f)\leq N\left(r,\frac{1}{f^{(k)}}\right)+S^{*}(r,f).$ (6)
In this paper, we continue to consider the general form
$f^{l}(f^{(k)})^{n}-a$ for a nonzero constant $a$. The following theorem
improves Theorem B in some sense.
###### Theorem 1.1.
Let $f$ be a transcendental meromorphic function in $\mathbb{C}$, $l$, $n$,
$k$ be integers greater than 1 and $a$ be a nonzero constant. Then
$T(r,f)\leq\frac{1}{l-1}N\left(r,\frac{1}{f^{l}(f^{(k)})^{n}-a}\right)+S^{*}(r,f).$
(7)
Remark. When the differential monomial $f^{l}(f^{(k)})^{n}$ satisfies $l\geq
2$, $n\geq 2$, $k\geq 2$, inequality (7) is better than (3), except on a set
of logarithmic density 0; for instance, for $l=3$ the coefficient improves
from $1/(l-2)=1$ to $1/(l-1)=1/2$. In the cases $k=1$, $l\geq 3$ or $n=1$,
$l\geq 3$, inequality (3) remains the best choice so far.
Another important remark should be made here. For the general form
$f^{l}(f^{(k)})^{n}$, beyond the cases covered by Theorem B and Theorem 1.1,
two cases are inevitably excluded: $l=1$, $n\geq 1$, $k\geq 1$ and $l=2$,
$n\geq 1$, $k\geq 1$. We summarize the known estimates for these two cases.
For $l=2$, $n=k=1$, Zhang [20] obtained the quantitative result
$T(r,f)<6N(r,1/(f^{2}f^{\prime}-1))+S(r,f)$. For $l=2$, $n=1$, $k>1$, the
corresponding inequality is due to Huang and Gu [8]. For $l=1$, $n\geq 2$,
$k\geq 1$, Li and Yang [11] and Alotaibi [2] independently gave two different
estimates. For $l=n=1$, $k\geq 1$, Alotaibi [1] obtained an estimate under
the assumption that $N_{1)}\left(r,\frac{1}{f^{(k)}}\right)=S(r,f)$, where
$N_{1)}\left(r,\frac{1}{f^{(k)}}\right)$ is the counting function of the
simple zeros of $f^{(k)}$; Wang [16] also gave an estimate, but under the
additional conditions that all poles of $f$ have multiplicity at least $3$
and that $N_{1)}(r,1/f)\leq\lambda T(r,f)$ for some constant $\lambda<1/3$.
Although these cases are excluded from Theorem 1.1, in the cases it does
cover our estimate is stronger than the known results so far. Furthermore, it
is natural to estimate the deficiency of $f^{l}(f^{(k)})^{n}$ by means of
Theorem 1.1. This leads us to the following.
###### Theorem 1.2.
Let $f$ be a transcendental meromorphic function in $\mathbb{C}$, let $k$,
$l$, $n$ be integers all greater than 1, and let $a$ be a nonzero constant.
Then
$\delta(a,f^{l}(f^{(k)})^{n})\leq 1-\frac{l-1}{nk+n+l}.$
Remark. Since $\delta(a,f^{l}(f^{(k)})^{n})<1$ for every nonzero constant $a$
(for instance, $k=l=n=2$ gives the bound $1-1/8=7/8$), Theorem 1.2 also
implies that zero is the only possible Picard exceptional value of
$f^{l}(f^{(k)})^{n}$ for $k\geq 2$, $l\geq 2$, $n\geq 2$. We state these
results as a corollary.
###### Corollary 1.3.
Under the same conditions as Theorem 1.1, $f^{l}(f^{(k)})^{n}$ assumes every
finite value except possibly zero infinitely often.
Remark. In fact, results of this kind are not entirely new. Several known
results already imply that for any positive integers $k$, $l$, $n$, the
function $f^{l}(f^{(k)})^{n}$ assumes every finite value except possibly zero
infinitely often. We refer the reader to Lahiri and Dewan [9, 10], Steinmetz
[14], Wang [17], Alotaibi [2, 1], and Li and Wu [12] for further details.
The lemmas used in the proofs of Theorem 1.1 and Theorem 1.2 are presented in
Section 2. The proofs of Theorem 1.1 and Theorem 1.2 are given in Sections 3
and 4, respectively. In the last section, we give an application to the sum
of deficiencies.
## 2 Lemmas
Before we proceed to the proofs of the theorems, we need the following lemmas.
###### Lemma 2.1.
_[7, Theorem 3.1]_ Let $f$ be a non-constant meromorphic function in the
complex plane, let $l$ be a positive integer, and let $a_{0}(z)$, $a_{1}(z)$,
$\ldots$, $a_{l}(z)$ be meromorphic functions in the plane satisfying
$T\left(r,a_{\nu}(z)\right)=S(r,f)$ for $\nu=0,1,\ldots,l$ (as
$r\rightarrow+\infty$), and set
$\psi(z)=\sum_{\nu=0}^{l}a_{\nu}(z)f^{(\nu)}(z).$
Then
$m\left(r,\frac{\psi}{f}\right)=S(r,f).$
In particular, this lemma implies $m\left(r,f^{(l)}/f\right)=S(r,f)$ and
$m\left(r,f^{(l+1)}/f^{(l)}\right)=S(r,f^{(l)})$.
###### Lemma 2.2.
_[6, p. 99]_ Let $f$ be a non-constant meromorphic function in the complex
plane, $k$ be a positive integer. Then
$\displaystyle T(r,f^{(k)})\leq(k+1)T(r,f)+S(r,f).$ (8)
In particular, any quantity that is $S(r,f^{(k)})$ is also $S(r,f)$.
Inequality (8) will be used frequently in this note without further
reference.
###### Lemma 2.3.
Let $f$ be a transcendental meromorphic function in the plane. Then the
differential monomial
$\psi=f^{l}(f^{(k)})^{n}$
is transcendental, where $l$, $n$ and $k$ are positive integers.
Proof. Since
$\frac{1}{f^{l+n}}=\left(\frac{f^{(k)}}{f}\right)^{n}\frac{1}{\psi},$
we obtain from Lemma 2.1 and the First Fundamental Theorem that
$\displaystyle(l+n)T(r,f)\leq nT\left(r,\frac{f^{(k)}}{f}\right)+T\left(r,\frac{1}{\psi}\right)\leq nN\left(r,\frac{f^{(k)}}{f}\right)+T\left(r,\frac{1}{\psi}\right)+S(r,f)\leq nk\left[\bar{N}(r,f)+\bar{N}\left(r,\frac{1}{f}\right)\right]+T\left(r,\frac{1}{\psi}\right)+S(r,f).$ (9)
Since $\bar{N}(r,f)\leq\bar{N}(r,\psi)+S(r,f)$ and
$\bar{N}\left(r,\frac{1}{f}\right)\leq\bar{N}\left(r,\frac{1}{\psi}\right)+S(r,f)$,
we can simplify inequality (9) to
$(l+n)T(r,f)\leq(2nk+1)T\left(r,\frac{1}{\psi}\right)+S(r,f).$
Because $f$ is transcendental, we conclude that $\psi$ is transcendental.
###### Lemma 2.4.
Let $f$ be a transcendental meromorphic function in $\mathbb{C}$, let $k$,
$l$, $n$ be positive integers, and set
$g=f^{l}(f^{(k)})^{n}-1.$
Then,
$T(r,g)\leq O\left(T(r,f)\right),$
as $r\rightarrow\infty$, possibly outside a set of finite linear measure.
Proof. Note that $N\left(r,f^{l}(f^{(k)})^{n}\right)=O(N(r,f))$ and
$m\left(r,f^{(k)}/f\right)=S(r,f)$ by Lemma 2.1. Applying the First
Fundamental Theorem, we get
$\displaystyle T(r,g)=T\left(r,f^{l}(f^{(k)})^{n}-1\right)=N\left(r,f^{l}(f^{(k)})^{n}\right)+m\left(r,f^{l}(f^{(k)})^{n}\right)+O(1)\leq O\left(N(r,f)\right)+lm(r,f)+nm\left(r,f^{(k)}\right)+O(1)\leq O\left(N(r,f)\right)+lm(r,f)+nm(r,f)+nm\left(r,\frac{f^{(k)}}{f}\right)+O(1)=O\left(T(r,f)\right)+S(r,f).$
We also see that
$T(r,g^{\prime})\leq N(r,g)+\bar{N}(r,g)+m(r,g)+S(r,g)\leq 2T(r,g)+S(r,g).$
Hence
$T(r,g)\leq O\left(T(r,f)\right).$ ∎
## 3 Proof of Theorem 1.1.
Without loss of generality, we assume $a=1$ and set
$g=f^{l}(f^{(k)})^{n}-1$. By Lemma 2.3, we know that $g$ is not constant.
Since
$\frac{1}{f^{l+n}}=\left(\frac{f^{(k)}}{f}\right)^{n}-\frac{g^{\prime}}{f^{l+n}}\left(\frac{g}{g^{\prime}}\right),$
it follows that
$m\left(r,\frac{1}{f^{l+n}}\right)\leq m\left(r,\frac{g}{g^{\prime}}\right)+m\left(r,\frac{g^{\prime}}{f^{l+n}}\right)+S(r,f).$
Note that
$\frac{{g}^{\prime}}{f^{l+n}}=l\frac{{f}^{\prime}}{f}\left(\frac{f^{(k)}}{f}\right)^{n}+n\frac{f^{(k+1)}}{f}\left(\frac{f^{(k)}}{f}\right)^{n-1},$
which implies
$m\left(r,\frac{g^{\prime}}{f^{l+n}}\right)=S(r,f).$
Therefore, we have
$m\left(r,\frac{1}{f^{l+n}}\right)\leq m\left(r,\frac{g}{g^{\prime}}\right)+S(r,f).$
The poles of $g^{\prime}/g$ come from the zeros and poles of $g$, and they
are all simple. The poles of $g/g^{\prime}$ come from the zeros of
$g^{\prime}$ that are not zeros of $g$, with multiplicities preserved. Hence,
we get
$N\left(r,\frac{g^{\prime}}{g}\right)=\bar{N}\left(r,\frac{1}{g}\right)+\bar{N}(r,g),$
(10)
and
$N\left(r,\frac{g}{g^{\prime}}\right)=N\left(r,\frac{1}{g^{\prime}}\right)-\left(N\left(r,\frac{1}{g}\right)-\bar{N}\left(r,\frac{1}{g}\right)\right).$
(11)
By combining (10) with (11),
$N\left(r,\frac{g^{\prime}}{g}\right)-N\left(r,\frac{g}{g^{\prime}}\right)=\bar{N}(r,g)+N\left(r,\frac{1}{g}\right)-N\left(r,\frac{1}{g^{\prime}}\right).$
(12)
By Lemma 2.4, we know that
$m(r,g^{\prime}/g)=S(r,g)\leq S(r,f),\quad\quad\bar{N}(r,g)=\bar{N}(r,f).$
Applying the First Fundamental Theorem and (12),
$\displaystyle m\left(r,\frac{1}{f^{l+n}}\right)=(l+n)m\left(r,\frac{1}{f}\right)\leq N\left(r,\frac{g^{\prime}}{g}\right)-N\left(r,\frac{g}{g^{\prime}}\right)+m\left(r,\frac{g^{\prime}}{g}\right)+S(r,f)\leq N\left(r,\frac{g^{\prime}}{g}\right)-N\left(r,\frac{g}{g^{\prime}}\right)+S(r,f)=\bar{N}(r,f)+N\left(r,\frac{1}{g}\right)-N\left(r,\frac{1}{g^{\prime}}\right)+S(r,f).$ (13)
Adding $N(r,1/f^{l+n})$ to both sides of inequality (13), we obtain
$(l+n)T\left(r,\frac{1}{f}\right)\leq\bar{N}(r,f)+N\left(r,\frac{1}{g}\right)-N\left(r,\frac{1}{g^{\prime}}\right)+N\left(r,\frac{1}{f^{l+n}}\right)+S(r,f).$
(14)
Note that
$g^{\prime}=f^{l-1}\left(f^{(k)}\right)^{n-1}\left(lf^{(k)}f^{\prime}+nff^{(k+1)}\right),$
which implies
$(l-1)N\left(r,\frac{1}{f}\right)+(n-1)N\left(r,\frac{1}{f^{(k)}}\right)\leq N\left(r,\frac{1}{g^{\prime}}\right).$ (15)
Substituting (15) into (14), we get
$\displaystyle T\left(r,\frac{1}{f^{l+n}}\right)\leq\bar{N}(r,f)+N\left(r,\frac{1}{g}\right)+N\left(r,\frac{1}{f^{l+n}}\right)-(l-1)N\left(r,\frac{1}{f}\right)-(n-1)N\left(r,\frac{1}{f^{(k)}}\right)+S(r,f).$
Hence,
$(l+n)T(r,f)\leq N\left(r,\frac{1}{g}\right)+\bar{N}(r,f)+(n+1)N\left(r,\frac{1}{f}\right)-(n-1)N\left(r,\frac{1}{f^{(k)}}\right)+S(r,f).$ (16)
Inequality (6) implies that for $k\geq 2$,
$(k-1)\bar{N}(r,f)\leq N\left(r,\frac{1}{f^{(k)}}\right)+S^{*}(r,f).$ (17)
Now by combining inequality (16) and (17), we have
$\displaystyle(l+n)T(r,f)\leq N\left(r,\frac{1}{g}\right)+\frac{1}{k-1}N\left(r,\frac{1}{f^{(k)}}\right)+(n+1)N\left(r,\frac{1}{f}\right)-(n-1)N\left(r,\frac{1}{f^{(k)}}\right)+S(r,f)+S^{*}(r,f).$
Since $1/{(k-1)}-n+1\leq 0$ for $n\geq 2$ and $k\geq 2$, it follows that
$(l+n)T(r,f)\leq N\left(r,\frac{1}{g}\right)+\left(n+1\right)N\left(r,\frac{1}{f}\right)+S^{*}(r,f).$ (18)
Since $l-1>0$ and
$\left(n+1\right)N\left(r,1/f\right)\leq\left(n+1\right)T\left(r,f\right)+O(1)$,
we obtain
$T(r,f)\leq\frac{1}{l-1}N\left(r,\frac{1}{f^{l}(f^{(k)})^{n}-1}\right)+S^{*}(r,f).$ (19)
Replacing the value $1$ in $f^{l}(f^{(k)})^{n}-1$ by an arbitrary nonzero
constant $a$ yields inequality (7). The proof is complete. ∎
## 4 Proof of Theorem 1.2.
Set $\psi=f^{l}(f^{(k)})^{n}$. Inequality (19) states that
$T(r,f)\leq\frac{1}{l-1}N\left(r,\frac{1}{\psi-a}\right)+S^{*}(r,f)$ (20)
for $l\geq 2$, $n\geq 2$, $k\geq 2$. By the definition of $\delta(a,f)$ and
the First Fundamental Theorem, we obtain
$\displaystyle T(r,\psi)\leq(nk+n+l)T(r,f)+S(r,f)\leq\frac{nk+n+l}{l-1}N\left(r,\frac{1}{\psi-a}\right)+S^{*}(r,f).$ (21)
Combining inequalities (20) and (21), we have
$N\left(r,\frac{1}{\psi-a}\right)\geq\frac{l-1}{nk+n+l}T(r,\psi)-S^{*}(r,f).$
Since, by the proof of Lemma 2.3,
$(l+n)T(r,f)\leq(2nk+1)T(r,\psi)+S(r,f),$
we have $T(r,f)=O\left(T(r,\psi)\right)$, and therefore
$\liminf_{r\to\infty}\frac{S^{*}(r,f)}{T(r,\psi)}=\liminf_{r\to\infty}\frac{S^{*}(r,f)}{T(r,f)}\cdot\frac{T(r,f)}{T(r,\psi)}=0.$
Therefore, by the definition of deficiency,
$\displaystyle\delta(a,\psi)=1-\limsup_{r\to\infty}\frac{N\left(r,\frac{1}{\psi-a}\right)}{T(r,\psi)}\leq 1-\limsup_{r\to\infty}\frac{\frac{l-1}{nk+n+l}T(r,\psi)-S^{*}(r,f)}{T(r,\psi)}\leq 1-\frac{l-1}{nk+n+l}+\liminf_{r\to\infty}\frac{S^{*}(r,f)}{T(r,\psi)}=1-\frac{l-1}{nk+n+l}.$ ∎
## 5 An application
Since Yamanoi’s result was published in 2013, several results on deficiency
relations have appeared that make use of his theorem. Here we take a result
of Fang and Wang [5] as an example, and we follow their approach to obtain an
estimate for the sum of deficiencies of $f^{l}(f^{(k)})^{n}$.
###### Theorem D.
_[5, Proposition 2]_ Let $f$ be a transcendental meromorphic function in the
complex plane, let $k$ be a positive integer, and let $P$ be the set of all
polynomials. Then
$\sum_{b\in\mathbb{C}}\delta{(b,f^{(k)})}\leq 1-(k-1)\left(1-\Theta_{E}\left(\infty,f^{(k)}\right)\right),$
where
$\Theta_{E}\left(\infty,f^{(k)}\right)=1-\limsup_{r\to\infty,\,r\notin E}\frac{\bar{N}(r,f^{(k)})}{T(r,f^{(k)})},$
with $E=M(K)\cup E_{1}$, where $M(K)$ is a set of finite upper logarithmic
density and $E_{1}$ is a set of finite measure.
We need the following lemma for our calculation; it is also used in [5].
###### Lemma 5.5.
_[7, p. 33]_ Let $a_{1}$, $a_{2}$, $\ldots$, $a_{q}$, where $q>2$, be
distinct finite complex numbers. Then
$\sum_{i=1}^{q}m\left(r,\frac{1}{f-a_{i}}\right)\leq m\left(r,\sum_{i=1}^{q}\frac{1}{f-a_{i}}\right)+O(1).$
###### Theorem 5.6.
Let $f$ be a transcendental meromorphic function in $\mathbb{C}$, let $k$,
$l$, $n$ be integers all at least $2$, and let $a_{i}\in\mathbb{C}$,
$i=1,2,\ldots,q$, be distinct constants. Then
$\sum_{i=1}^{q}\delta(a_{i},f^{l}(f^{(k)})^{n})\leq 1+\frac{1}{nk+n+l}.$
Proof. In Nevanlinna theory, for distinct constants $a_{i}\in\mathbb{C}$, the
sum of deficiencies of $f$ satisfies
$\displaystyle\sum_{i=1}^{q}\delta{(a_{i},f)}\leq\liminf_{r\to\infty}\sum_{i=1}^{q}\frac{m\left(r,\frac{1}{f-a_{i}}\right)}{T\left(r,f\right)}.$ (22)
Let $\psi=f^{l}(f^{(k)})^{n}$. By Lemma 5.5, we have
$\displaystyle\sum_{i=1}^{q}m\left(r,\frac{1}{\psi-a_{i}}\right)\leq m\left(r,\sum_{i=1}^{q}\frac{1}{\psi-a_{i}}\right)+O(1)\leq m\left(r,\sum_{i=1}^{q}\frac{\psi^{\prime\prime}}{\psi-a_{i}}\right)+m\left(r,\frac{1}{\psi^{\prime\prime}}\right)+S(r,f)\leq T(r,\psi^{\prime\prime})-N\left(r,\frac{1}{\psi^{\prime\prime}}\right)+S(r,f)\leq N(r,\psi^{\prime\prime})+m(r,\psi^{\prime\prime})-N\left(r,\frac{1}{\psi^{\prime\prime}}\right)+S(r,f)\leq N(r,\psi)+2\bar{N}(r,\psi)+m(r,\psi)-N\left(r,\frac{1}{\psi^{\prime\prime}}\right)+S(r,f).$ (23)
By Yamanoi’s result (6), applied to $\psi$ with $k=2$, it follows from
inequality (23) that
$\displaystyle\sum_{i=1}^{q}m\left(r,\frac{1}{\psi-a_{i}}\right)\leq T(r,\psi)+2\bar{N}(r,\psi)-\bar{N}(r,\psi)+S^{*}(r,f)\leq T(r,\psi)+\bar{N}(r,f)+S^{*}(r,f)\leq T(r,\psi)+T(r,f)+S^{*}(r,f).$ (24)
By (22) and (24), together with Theorem 1.1 and Theorem 1.2, we obtain
$\displaystyle\sum_{i=1}^{q}\delta(a_{i},\psi)\leq\liminf_{r\to\infty}\sum_{i=1}^{q}\frac{m\left(r,\frac{1}{\psi-a_{i}}\right)}{T\left(r,\psi\right)}\leq 1+\liminf_{r\to\infty}\frac{T(r,f)}{T(r,\psi)}\leq 1-\frac{1}{l-1}\left(1-\limsup_{r\to\infty}\frac{N\left(r,\frac{1}{\psi-a}\right)}{T(r,\psi)}-1\right)=1-\frac{1}{l-1}\left(\delta\left(a,\psi\right)-1\right)\leq 1-\frac{1}{l-1}\left(1-\frac{l-1}{nk+n+l}-1\right)=1+\frac{1}{nk+n+l}.$ ∎
## 6 Acknowledgements
The authors are very grateful to Professor Toshiyuki Sugawa and all the
seminar members for their valuable suggestions and comments, which helped
greatly in improving the paper.
## References
* [1] A. Alotaibi, _On the zeros of $aff^{(k)}-1$_, Complex Var. Theory Appl. 49 (2004), no. 13, 977–989.
* [2] A. Alotaibi, _On the zeros of $af(f^{(k)})^{n}-1$ for $n\geq 2$_, Comput. Methods Funct. Theory 4 (2004), 227–235.
* [3] W. Bergweiler and A. Eremenko, _On the singularities of the inverse to a meromorphic function of finite order_ , Rev. Mat. Iberoamericana 11 (1995), no. 2, 355–373.
* [4] W. Doeringer, _Exceptional values of differential polynomials_ , Pacific J. Math. 98 (1982), no. 1, 55–62.
* [5] M.L. Fang and Y.F. Wang, _A note on the conjectures of Hayman, Mues and Gol’dberg_ , Comput. Methods Funct. Theory 13 (2013), no. 4, 533–543.
* [6] A.A. Goldberg and I.V. Ostrovskii, _Value distribution of meromorphic functions_ , Translations of Mathematical Monographs, vol. 236, American Mathematical Society, Providence, RI, 2008.
* [7] W.K. Hayman, _Meromorphic functions_ , Oxford University Press, 1964.
* [8] X.J. Huang and Y.X. Gu, _On the value distribution of $f^{2}f^{(k)}$_, J. Aust. Math. Soc. 78 (2005), no. 1, 17–26.
* [9] I. Lahiri and S. Dewan, _Inequalities arising out of the value distribution of a differential monomial_ , JIPAM. J. Inequal. Pure Appl. Math. 4 (2003), no. 2, Article 27, 6 pp. (electronic).
* [10] I. Lahiri and S. Dewan, _Value distribution of the product of a meromorphic function and its derivative_ , Kodai Math. J. 26 (2003), no. 1, 95–100.
* [11] P. Li and C.C. Yang, _On the value distribution of a certain type of differential polynomials_ , Monatsh. Math. 125 (1998), no. 1, 15–24.
* [12] W. Li and T.Y. Wu, _Value distribution of general differential monomials_ , J. Systems Sci. Math. Sci. 22 (2002), no. 1, 58–66.
* [13] E. Mues, _Über ein Problem von Hayman_ , Math. Z. 164 (1979), no. 3, 239–259.
* [14] N. Steinmetz, _Über die Nullstellen von Differentialpolynomen_ , Math. Z. 176 (1981), no. 2, 255–264.
* [15] C.K. Tse and C.C. Yang, _On the value distribution of $f^{l}(f^{(k)})^{n}$_, Kodai Math. J. 17 (1994), no. 1, 163–169.
* [16] J.P. Wang, _On the zeros of $f^{n}(z)f^{(k)}(z)-c(z)$_, Complex Var. Theory Appl. 48 (2003), no. 8, 695–703.
* [17] Y.F. Wang, C.C. Yang, and L. Yang, _On the zeros of $f(f^{(k)})^{n}-a$_, Kexue Tongbao 38 (1993), 2215–2218.
* [18] K. Yamanoi, _Zeros of higher derivatives of meromorphic functions in the complex plane_ , Proc. Lond. Math. Soc. (3) 106 (2013), no. 4, 703–780.
* [19] C.C. Yang and P.C. Hu, _On the value distribution of $ff^{(k)}$_, Kodai Math. J. 19 (1996), no. 2, 157–167.
* [20] Q.D. Zhang, _A growth theorem for meromorphic function_ , J. Chengdu Inst. Meteor. 20 (1992), 12–20.
Yan Jiang
Graduate School of Information Sciences,
Tohoku University, Sendai 980-8579, JAPAN
E-mail address: <EMAIL_ADDRESS>
Bin Huang
Department of Mathematics and Computing Science,
Changsha University of Science and Technology, Changsha 410076, P.R. China
E-mail address: <EMAIL_ADDRESS>
# ContextGPT: Infusing LLMs Knowledge into Neuro-Symbolic Activity Recognition
Models
Luca Arrotta, Claudio Bettini, Gabriele Civitarese, Michele Fiori
{luca.arrotta, claudio.bettini, gabriele.civitarese}<EMAIL_ADDRESS>
EveryWare Lab, Dept. of Computer Science, University of Milan, Milan, Italy
###### Abstract.
Context-aware Human Activity Recognition (HAR) is a hot research area in
mobile computing, and the most effective solutions in the literature are based
on supervised deep learning models. However, the actual deployment of these
systems is limited by the scarcity of labeled data that is required for
training. Neuro-Symbolic AI (NeSy) provides an interesting research direction
to mitigate this issue, by infusing common-sense knowledge about human
activities and the contexts in which they can be performed into HAR deep
learning classifiers. Existing NeSy methods for context-aware HAR rely on
knowledge encoded in logic-based models (e.g., ontologies) whose design,
implementation, and maintenance to capture new activities and contexts require
significant human engineering efforts, technical knowledge, and domain
expertise. Recent works show that pre-trained Large Language Models (LLMs)
effectively encode common-sense knowledge about human activities. In this
work, we propose ContextGPT: a novel prompt engineering approach to retrieve
from LLMs common-sense knowledge about the relationship between human
activities and the context in which they are performed. Unlike ontologies,
ContextGPT requires limited human effort and expertise. An extensive
evaluation carried out on two public datasets shows how a NeSy model obtained
by infusing common-sense knowledge from ContextGPT is effective in data
scarcity scenarios, leading to similar (and sometimes better) recognition
rates than logic-based approaches with a fraction of the effort.
human activity recognition, context-awareness, large language models, neuro-symbolic, knowledge infusion
## 1\. Introduction
The analysis of sensor data gathered from mobile and wearable devices for
Human Activity Recognition (HAR) has been extensively researched (Chen et al.,
2021; Gu et al., 2021). This is attributed to its high potential for
applications across various domains such as well-being, healthcare, and
sports, fueled by the widespread adoption of these devices.
Although the majority of the studies in this area focused on inertial sensors
only, context-aware HAR approaches also take into account the user’s
surrounding context as derived from the devices, user profiles or online
services (e.g., semantic location, temporal information, etc.) to further
improve the recognition rate and, at the same time, extend the set of
recognizable activities (Bettini et al., 2020).
In the last few years, a vast number of solutions based on deep learning
classifiers have been proposed (Wang et al., 2019). However, most of those
approaches rely on supervised learning and require large labeled datasets to
be trained. The need to acquire reliable labels from a large number of users
and for an extended set of activities is currently a major obstacle to the
deployment of effective sensor-based HAR systems.
Among the several solutions proposed in the general machine learning community
to tackle the labeled data scarcity problem, Neuro-Symbolic (NeSy) approaches
represent a promising research direction (Sarker et al., 2021). NeSy methods
aim at combining data-driven and knowledge-based approaches to reduce the
amount of labeled data needed to effectively train the model and, at the same
time, to improve its interpretability. In NeSy approaches, well-known domain
constraints are infused into the model, thus avoiding learning them from data
and simplifying the overall learning process.
Existing NeSy approaches proposed for context-aware HAR retrieve common-sense
knowledge from logic-based models (e.g., ontologies) manually designed and
implemented by human experts (Arrotta et al., 2022). Such knowledge models
encode the relationship between an activity and the context in which it can be
performed. For instance, they encode that biking is not a plausible activity
if the user’s semantic location is highway or museum. However, manually
building a comprehensive knowledge model that captures all possible context
situations is challenging, and, even relying on a domain expert, it does not
guarantee that all the possible situations in which an activity can be
performed are captured. Moreover, logic-based models are not scalable since
including new activities, new context conditions, or new constraints requires
further manual work and expertise.
This paper explores the idea of infusing common-sense knowledge into NeSy HAR
models from Large Language Models (LLMs) instead of ontologies. Indeed, LLMs
implicitly encode knowledge spanning a wide range of domains and recent
results suggest that they can be efficiently exploited to retrieve common-
sense knowledge about human activities (Takeda et al., 2023; Graule and Isler,
2023; Zhou et al., 2023; Kaneko and Inoue, 2023; Xia et al., 2023; Leng et
al., 2023).
In this paper, we propose ContextGPT, a novel prompt engineering approach
that, based on the user’s high-level context, leverages a pre-trained LLM to
determine the most plausible activities. Specifically, given a time
window of sensor data, ContextGPT converts high-level context information into
a natural language description. Thanks to a carefully designed system message,
ContextGPT generates a prompt by asking the LLM to determine the activities
that are consistent with the current context. The prompts generated by
ContextGPT also include a few examples (created by the prompt engineer)
depicting how the task should be carried out (i.e., few-shot prompting).
Our contributions are the following:
* •
We propose ContextGPT: a novel prompt engineering approach to retrieve common-
sense knowledge about the relationships between activities and high-level
contexts from pre-trained Large Language Models.
* •
We illustrate how this knowledge can be infused into a Neuro-Symbolic Context-
Aware model to mitigate labeled data scarcity.
* •
Our extensive experimental evaluation using two public datasets shows that
infusing ContextGPT knowledge leads to recognition rates similar to methods
based on logic-based models, while significantly reducing the cost in terms of
expert human effort.
## 2\. Related Work
### 2.1. Data scarcity in HAR
Most of the solutions proposed in the literature for sensor-based HAR on
mobile/wearable devices rely on supervised Deep Learning (DL) methods (Wang et
al., 2019; Chen et al., 2021). Even though the majority of these works only
focused on inertial sensors, several studies highlight how also including
high-level context information can significantly improve the recognition
rate (Bettini et al., 2020; Saguna et al., 2013).
Due to the unrealistic amount of training data required to train supervised
models, several research groups are proposing solutions to leverage small
amounts of labeled samples. Proposed methods to mitigate labeled data scarcity
are based on transfer learning (Sanabria et al., 2021; Soleimani and
Nazerfard, 2021), self-supervised learning (Haresamudram et al., 2022; Jain et
al., 2022; Hiremath et al., 2022), and semi-supervised learning approaches
(Abdallah et al., 2018). We believe that Neuro-Symbolic AI (NeSy) could be
coupled with such approaches to further mitigate data scarcity when fine-
tuning pre-trained HAR models with limited labeled data.
### 2.2. Neuro-Symbolic HAR
While several NeSy methods have been proposed in the computer vision and NLP
domains (Dash et al., 2022), only a few approaches have been proposed for HAR.
Existing NeSy methods for context-aware HAR retrieve common-sense knowledge
from logic-based models (e.g., ontologies). To the best of our knowledge,
three main strategies have been proposed so far to combine extracted knowledge
with deep learning models: a) using knowledge to refine the deep model’s
output (Bettini et al., 2020), b) including retrieved knowledge as additional
features in the latent space (Arrotta et al., 2020), and c) using a loss
function that penalizes predictions violating domain constraints (Arrotta et
al., 2023a; Xing et al., 2020). However, designing and implementing knowledge
models requires significant human effort, and those models may not capture
all the possible situations in which activities can be performed. While there
are information retrieval approaches for semi-automatically obtaining
common-sense knowledge from public external sources (e.g., images (Riboni and
Murtas, 2019), the web (Perkowitz et al., 2004), text (Yordanova, 2016)),
such methods still face challenges in creating comprehensive knowledge
models.
### 2.3. Using LLMs for Human Activity Recognition
The adoption of Large Language Models (LLMs) in sensor-based HAR is a recently
emerging trend, and we expect a surge of contributions in this area in the
near future. For instance, the work in (Takeda et al., 2023) takes advantage
of the GPT-2 model to predict sequences of sensor events in smart-home
settings. Specifically, since the GPT-2 model is pre-trained to predict the
next word in a sentence, it has been fine-tuned to predict the most likely
sequence of sensor events that are treated similarly to textual data. A
similar approach was also proposed in (Graule and Isler, 2023), leveraging the
LLaMA2 language model. The work in (Zhou et al., 2023) uses the CLIP pre-
trained model to align sensor data embeddings and text embeddings by means of
contrastive learning, with the goal of improving the overall recognition rate.
A few research groups recently proposed solutions based on the well-known
ChatGPT tool. In (Kaneko and Inoue, 2023), ChatGPT is prompted with the
description of a specific sensor-based HAR task and the current choice for
sensor positioning, with the objective of obtaining feedback about where to
install new sensors to further improve the recognition rate. The work in (Xia
et al., 2023) proposed an unsupervised approach for smart-home settings. The
approach consists of two steps: the former asks ChatGPT to provide a
description of the target activities, and the latter asks ChatGPT to infer the
performed activity given a temporal sequence of events describing interaction
with household items. Finally, based on the recent advances in generating
synthetic inertial sensor data from textual descriptions (Guo et al., 2022),
the work in (Leng et al., 2023) leverages ChatGPT to generate textual
descriptions of activities, which are then used to generate virtual inertial
sensor data.
To the best of our knowledge, we are the first to leverage LLMs to mitigate
data scarcity in context-aware HAR based on mobile/wearable devices. In
particular, we query LLMs to determine which activities are compatible with
the current user’s context, with the goal of infusing this knowledge into a
neuro-symbolic model.
## 3\. Methodology
### 3.1. A Neuro-Symbolic Framework for Context-Aware HAR
We consider a mobile and wearable computing scenario in which users perform
activities while carrying their devices (e.g., smartphone, smartwatch,
tracker).
Figure 1 provides an overview of the Neuro-Symbolic (NeSy) system taking as
input the continuous stream of sensor data generated by the user devices, and
providing as output the most likely performed activity.
Figure 1. A Neuro-Symbolic AI framework for context-aware HAR gathering
knowledge from ContextGPT
As a common practice in this field (Chen et al., 2021), data are partitioned
into temporal windows. For each window $w$ (representing $z$ seconds of
consecutive raw sensor data), we derive two subsets $w^{R}$ and $w^{C}$: the
former includes raw data that we consider appropriate for being directly
processed by a data-driven model (e.g., inertial sensors data), while the
latter involves raw sensor data that we consider useful for deriving high-
level contexts through reasoning and/or abstraction.
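As a minimal sketch of this partitioning (ours, not the paper’s
implementation; the sampling rate and channel layout are illustrative
assumptions):

```python
# A minimal windowing sketch; the 50 Hz rate and the channel layout
# (columns 0-8 inertial -> w^R, columns 9+ raw context such as GPS -> w^C)
# are hypothetical.
import numpy as np

def make_windows(stream: np.ndarray, rate_hz: int, z: float) -> np.ndarray:
    """Partition a (num_samples, num_channels) stream into windows of z seconds."""
    size = int(z * rate_hz)                       # samples per window
    num = stream.shape[0] // size
    return stream[: num * size].reshape(num, size, -1)

windows = make_windows(np.random.randn(10_000, 12), rate_hz=50, z=4.0)
w_R, w_C = windows[..., :9], windows[..., 9:]     # raw part and context part
```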
A high-level context provides information, at a certain point in time, about
the user, the environment surrounding them, and their interactions with
objects or other subjects. Let $C=\langle C_{1},\dots,C_{n}\rangle$ be a set
of possible contexts that are meaningful for the application domain (e.g.,
$C_{1}$ = it is raining, $C_{2}$ = location is a park, $C_{3}$ = current
user’s speed is high).
Note that $w^{R}$ and $w^{C}$ may have a non-empty intersection and
$w^{R}\cup w^{C}=w$. For instance, it may be appropriate to exclude raw GPS
data from $w^{R}$ since it may be difficult to find robust correlations
capable of generalizing between different users (especially in data scarcity
settings). On the other hand, raw GPS data can be included in $w^{C}$ to
obtain high-level contexts that are more easily correlated with activities
(e.g., semantic location: ”public park”).
The Context Aggregation module is in charge of deriving all the most likely
high-level contexts $C^{w}\subset C$ that occur in a window $w$ based on
$w^{C}$. Context Aggregation can be implemented using simple rules, available
services, and/or context-aware middlewares (Henricksen et al., 2005). For
example, raw GPS coordinates can be used to derive the semantic location by
querying a dedicated web service (e.g., by using Google Places APIs).
Context-aware HAR could be addressed by relying on machine learning models
taking $\langle w^{R},C^{w}\rangle$ only as input. However, a more effective
approach is to complement data-driven approaches with common-sense knowledge
about the relationships between activities and contexts (Riboni and Bettini,
2011). For example, based on common-sense knowledge, people typically run
outdoors in parks, on trails, or along sidewalks (preferably in dry weather)
and indoors on a treadmill. Moreover, running requires the user to move at a
positive speed. The relationships between activities and the typical
conditions in which they can be performed can be used in the HAR process to
reduce the amount of labeled data required to learn them.
The ContextGPT module (the main contribution of this work) is in charge of
reasoning on the contexts in $C^{w}$ to derive the activities that are
consistent with $C^{w}$. ContextGPT relies on a
Large Language Model (LLM) and it is described in detail in the rest of this
section. The information about context-consistent activities is infused into a
NeSy HAR model, which also receives as input raw data and high-level context
data (i.e. $\langle w^{R},C^{w}\rangle$). The output of the model is a
probability distribution $P=\langle p_{1},\dots,p_{k}\rangle$, where $p_{i}$
is the probability that the user performed the activity $a_{i}$.
### 3.2. ContextGPT architecture
Figure 2 illustrates the overall architecture of ContextGPT.
Figure 2. Overall architecture of ContextGPT
ContextGPT receives as input a temporal window of high-level context data
$C^{w}$. First, $C^{w}$ is provided to the Context2Text module to obtain a
natural language description of the user’s context. Since LLMs also benefit
from a few examples depicting how the required task should be carried out
(i.e., the so-called few-shot prompting (Brown et al., 2020)), the Example
selection module considers a pool of examples and includes in the prompt those
having their context similar to $C^{w}$. Each example depicts a context
situation and the activities that are consistent with that situation. Finally,
the Prompt Construction module generates the complete prompt that is composed
of:
* •
System Message: general instructions to the LLM about the task that it has to
carry out.
* •
Examples: the most similar examples to the input context, selected from a
pool.
* •
Context Description: the description in natural language of $C^{w}$.
Figure 3 shows an example of a typical prompt generated by ContextGPT.
Figure 3. An example of ContextGPT prompt
The prompt is provided as input to a pre-trained LLM, and the output is post-
processed to obtain the list of activities that are consistent with the
current context.
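As a sketch (ours, not the paper’s code), the three parts can be assembled
into the chat-message format used by OpenAI-style APIs, with the few-shot
examples rendered as alternating user/assistant turns; the dictionary keys
are hypothetical:

```python
# A sketch of prompt construction; rendering examples as alternating
# user/assistant turns is one plausible choice, and the field names
# are assumptions.
def build_prompt(system_message: str, examples: list[dict],
                 context_description: str) -> list[dict]:
    messages = [{"role": "system", "content": system_message}]
    for ex in examples:  # few-shot pairs: context description -> answer
        messages.append({"role": "user", "content": ex["context_text"]})
        messages.append({"role": "assistant", "content": ex["answer_text"]})
    messages.append({"role": "user", "content": context_description})
    return messages
```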
### 3.3. Prompt Engineering
While existing Neuro-Symbolic methods for Context-Aware HAR demand knowledge
engineers to manually build logic-based models, ContextGPT requires a Prompt
Engineer in charge of designing: i) the system message, ii) the translation of
context data into natural language descriptions, and iii) the examples in the
pool. Due to the sensitivity of LLMs to the specific wording in the prompt,
the Prompt Engineer currently plays a non-trivial role in the success (or
failure) of obtaining the desired output (Meyer et al., 2023). However, since
these tasks are based on natural language, there is no need to design
comprehensive and complex relationships between activities and contexts using
logic formalisms; hence, the required expertise and human effort are
significantly reduced. In the following, we describe each component of the
prompts of ContextGPT in detail.
#### 3.3.1. System Message
The system message defines the general description of the task the LLM should
complete. In our case, the task is determining the activities that are
consistent with a given context. Hence, we divided the system message into two
parts. The former instructs the LLM about the overall task and the list of
possible activities. The latter provides a detailed list of steps the LLM
should undertake to complete the task (i.e., Chain-Of-Thought approach (Wei et
al., 2022)).
The first step directs the LLM to focus on the input context. The second step
requires following an open-world assumption since, in our preliminary
experiments, we noticed instances where the model mistakenly excluded
activities not explicitly supported by the context description. For instance,
consider the following context description: “In the last 4 seconds the user
Bob was in an outdoor environment, where he was moving/traveling at speed
between 1 and 4 km/h, experiencing rainy weather, not following/close to a
public transportation route, and not experiencing elevation changes.”. Without
the second step, in our preliminary experiments, the LLM often excluded the
activity Moving by Car with the following motivation: “Not consistent as there
is no mention of being in a car or any other vehicle besides walking speed
movement”. While it is true that being in a car is not explicitly provided in
the context description, the Moving by car activity should still be considered
as possible. Indeed, it is impractical to monitor all the possible contextual
conditions through sensors in mobile/wearable devices. Finally, the last step
forces the LLM to provide context-consistent activities in a specific format
to simplify the automatic extraction of this information during post-
processing.
Figure 4 shows the system message of ContextGPT.
Figure 4. The system message of ContextGPT. The possible activities, in this
case, are the ones of the DOMINO (Arrotta et al., 2023b) dataset.
#### 3.3.2. Context2Text
In order to be ingested by the LLM, the Context2Text module transforms the
input context data $C^{w}$ into a natural language sentence describing it.
Each description starts with “In the last $z$ seconds, the user $u$” to
contextualize that the described context refers to a specific temporal window
of $z$ seconds and that it is associated with a specific user $u$. Then, the
sentence continues by describing in natural language each context variable.
Designing the specific mapping between context data and natural language
sentences is currently not trivial, since the prompt engineer has to
understand (through trial and error) how the LLM interprets specific words.
For instance, the context “the user is on a public transportation route” means
that the path that the user is following (e.g., as captured by a GPS trace) is
the one of a public transportation route. However, these words sometimes led
the model to mistakenly interpret it as “the user is on a public
transportation vehicle”, thus excluding activities like Moving by car and
Cycling. We observed that translating the same context information as “the
user is following/close to a public transportation route” significantly
reduced the instances where this issue occurs.
An example of application of Context2Text is depicted in Figure 5.
Figure 5. Context2Text applied to a specific context situation.
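A minimal sketch of how such a mapping could be implemented is given below;
the input dictionary keys and the template wording are assumptions modeled on
the example discussed above:

```python
# A minimal Context2Text sketch; keys and phrasing are hypothetical.
def context2text(ctx: dict, z: int, user: str) -> str:
    parts = [f"In the last {z} seconds the user {user} "
             f"was in an {ctx['environment']} environment"]
    parts.append(f"where he was moving/traveling at {ctx['speed']}")
    parts.append(f"experiencing {ctx['weather']} weather")
    prefix = "" if ctx["on_transport_route"] else "not "
    parts.append(f"{prefix}following/close to a public transportation route")
    return ", ".join(parts) + "."

print(context2text({"environment": "outdoor",
                    "speed": "speed between 1 and 4 km/h",
                    "weather": "rainy",
                    "on_transport_route": False}, z=4, user="Bob"))
```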
#### 3.3.3. Example pool
As previously mentioned, LLMs benefit from including a few examples in the
prompt, showing the output that should be associated with specific inputs. In
our case, an example represents a context situation and the corresponding
consistent activities. We assume that the Prompt Engineer is sufficiently
knowledgeable in the activity domain and, given a context description, they
are capable of determining which activities are consistent. For each activity
$a$, the Prompt Engineer populates the pool of examples $P$ with the following
strategy:
* •
Define a set of examples $E_{a}$ (i.e., a set of tuples $\langle context,
consistentActivities\rangle$) referring to context situations that are
uncommon but possible for the activity $a$ (the context may be common or
uncommon for the other consistent activities)
* •
For each example $e\in E_{a}$:
* –
Consider $e$ as an input context for the LLM (using the system message without
examples) and analyze the response about the set of consistent activities.
* –
If the response of the LLM is different from the consistent activities in the
example, the Prompt Engineer decides whether to include $e$ in $P$ to fill the
knowledge gap.
Considering one activity at a time significantly simplifies the Prompt
Engineer’s task of generating examples. The number of examples created by the
prompt engineer for each activity is not fixed: it depends on the activity,
the available contexts, and their experience in the activity domain. An
example is added to the pool only if the Prompt Engineer feels that the LLM is
not providing a satisfactory outcome, revealing a gap in the LLM knowledge. We
consider “uncommon” context situations in which an activity is rarely
performed but it is still possible; these are the cases most likely not
covered by the LLM. For instance, consider the activity running. While this
activity is usually performed outdoors, it is less common to consider that it
may also be performed at the gym (e.g., on a treadmill). In our preliminary
experiments, we observed that such uncommon context for running was often not
captured by the LLM, and including it as an example improved ContextGPT
outputs significantly.
#### 3.3.4. Example Selection
Since the number of examples in the pool is not fixed, including all of them
in the prompt may result in increased costs, especially when using external
services whose price depends on the prompt’s length. Moreover, there are often
limits to the LLM’s input size. Hence, if the number of examples is high, they
may exceed the model’s capacity. ContextGPT employs a method to include in the
prompt only those examples that are similar to the input context $C^{w}$, and
its pseudo-code is outlined in Algorithm 1.
$S\leftarrow\emptyset$;
$C^{w}\leftarrow$ current user’s context;
$T^{C}\leftarrow Context2Text(C^{w})$;
$E^{C}\leftarrow$ embeddings of $T^{C}$ computed with an LLM;
for all $e\in P$ do:
$\quad T^{e}\leftarrow Context2Text(e.context)$;
$\quad E^{e}\leftarrow$ embeddings of $T^{e}$ computed with an LLM;
$\quad s_{e}\leftarrow\frac{E^{C}\cdot E^{e}}{|E^{C}||E^{e}|}$;
$\quad$ if $s_{e}>k$ then $S\leftarrow S\cup\{e\}$;
end for
return $S$
Algorithm 1 Example Selection
Specifically, we use a pre-trained LLM to extract text embeddings from $C^{w}$
and all the examples in the pool using their description in natural language
obtained with Context2Text. Then, we use cosine similarity to compute the
similarity of each example with $C^{w}$, and we include in the prompt only the
examples with a similarity higher than a threshold $k$. This threshold is
determined empirically, and we will show its impact on the recognition rate in
Section 4.
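A sketch of this selection step, using the sentence-transformers model named
in Section 4.2.1, could look as follows (the example-pool format and the
default value of $k$ are assumptions):

```python
# A sketch of Algorithm 1; pool format and threshold are hypothetical,
# while the embedding model is the one named in Section 4.2.1.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def select_examples(context_text: str, pool: list[dict],
                    k: float = 0.8) -> list[dict]:
    target = encoder.encode(context_text)
    selected = []
    for ex in pool:
        emb = encoder.encode(ex["context_text"])
        cos = float(np.dot(target, emb) /
                    (np.linalg.norm(target) * np.linalg.norm(emb)))
        if cos > k:  # keep only examples similar enough to the input context
            selected.append(ex)
    return selected
```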
### 3.4. Post-processing
Figure 6 shows an example of LLM output in ContextGPT.
Figure 6. An example of LLM output
As required by the system message (Section 3.3.1), besides explaining the
reasoning process, the output includes the list of consistent activities in
square brackets. Hence, it is possible to easily extract this list $L$ using
regular expressions and transform it into a binary vector
$b=[b_{1},b_{2},\dots,b_{n}]$ where $b_{i}$ is $1$ if the activity $a_{i}\in
L$ (i.e., the activity $a_{i}$ is consistent with $C^{w}$ according to the
LLM), $0$ otherwise. In the following, we will refer to $b$ as the consistency
vector. Note that in the actual implementation of ContextGPT there is a cache
mechanism that avoids recomputing the consistency vector when $C^{w}$ has
already been processed. Consistency vectors are used to infuse knowledge
inside the NeSy model.
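A sketch of this post-processing step (the regular expression and the cache
layout are our assumptions) is:

```python
# A sketch of post-processing: extract the bracketed activity list and
# build the consistency vector b. In the actual system the cache would
# be consulted before querying the LLM; details here are hypothetical.
import re

_cache: dict[str, list[int]] = {}

def consistency_vector(llm_output: str, activities: list[str],
                       context_key: str) -> list[int]:
    if context_key in _cache:  # context already processed: skip recomputation
        return _cache[context_key]
    match = re.search(r"\[([^\]]*)\]", llm_output)
    consistent = ({a.strip().lower() for a in match.group(1).split(",")}
                  if match else set())
    b = [1 if a.lower() in consistent else 0 for a in activities]
    _cache[context_key] = b
    return b
```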
### 3.5. Infusing knowledge into Neuro-Symbolic model
While ContextGPT is agnostic with respect to the Neuro-Symbolic approach for
Context-Aware HAR, in this work we use a specific knowledge infusion approach
proposed in the general AI community (Sheth et al., 2019; Kursuncu et al.,
2019). This method relies on a hidden layer in the Deep Neural Network (DNN)
in charge of infusing knowledge in the latent space. An adaptation for
Context-Aware HAR named NIMBUS has been recently proposed in the literature,
exhibiting promising recognition rates (Arrotta et al., 2022). In NIMBUS,
knowledge is infused from a manually defined ontology of activity and
contexts. In this work, we adapted NIMBUS to fetch knowledge from ContextGPT.
The overall mechanism of NIMBUS is depicted in Figure 7. The consistency
vector obtained from ContextGPT is infused in the hidden layers of the Deep
Neural Network (DNN) through a dedicated layer named knowledge infusion layer.
This hidden layer makes it possible to learn correlations between the latent
representation of raw sensor data, high-level context data, and context-
consistent activities.
Figure 7. Infusing ContextGPT into the Symbolic Features approach
## 4\. Experimental Evaluation
### 4.1. Datasets
In the following, we describe the two publicly available datasets we used to
evaluate ContextGPT. Both datasets contain real data: the former was obtained
in a realistic but scripted setting, the latter in an in-the-wild setting.
They include two types of data collected from mobile devices: a) raw
inertial sensor data and b) pre-processed high-level context data.
#### 4.1.1. DOMINO
The DOMINO (Arrotta et al., 2023b) dataset includes data from $25$ subjects
wearing a smartwatch on the wrist of their dominant hand and a smartphone in
their pants front pocket. Both devices gathered raw sensor data from inertial
sensors (accelerometer, gyroscope, and magnetometer) and a wide variety of
high-level context data. This dataset includes nearly $9$ hours of labeled
data (approximately $350$ activity instances) covering $14$ distinct
activities: walking, running, standing, lying, sitting, stairs up, stairs
down, elevator up, elevator down, cycling, moving by car, sitting on
transport, standing on transport, and brushing teeth. The DOMINO dataset was
collected in a scripted setting: the volunteers were instructed to perform
sequences of indoor/outdoor activities, even though without specific guidance
on their execution.
DOMINO included the following pre-processed high-level context information:
* •
User’s height variations (discretized: negative, null, positive)
* •
User’s speed variations (discretized: null, low, medium, and high)
* •
Whether the user is indoors or outdoors
* •
Semantic locations (Home, Office, University, Mall, Station, Museum, Gym,
Shop, Bar, Restaurant, Barbershop, Bank, Church)
* •
Weather condition (Sunny, Rainy, Misty, Cloudy, Drizzly, Stormy)
* •
Whether the user is following a public transportation route
Overall, the DOMINO dataset includes data collected in $412$ unique context
conditions.
#### 4.1.2. ExtraSensory
ExtraSensory (Vaizman et al., 2018) was collected in an unscripted in-the-wild
setting from $60$ subjects. Similarly to DOMINO, users wore a smartwatch on
the wrist of their dominant hand and a smartphone in their pants front pocket.
ExtraSensory includes approximately $300{,}000$ minutes of labeled data,
including $51$ distinct labels self-reported by the users. These labels encode
both high-level context information (e.g., at home, with friends, phone in
bag, phone is charging) and performed activities (e.g., sitting, bicycling).
Considering inertial sensor data, each smartphone collected raw data from the
accelerometer, gyroscope, and magnetometer; on the other hand, the smartwatch
only collected raw data from the accelerometer.
Since it was collected in the wild, ExtraSensory is widely considered a
challenging benchmark. Existing works on this dataset report low recognition
rates even when considering small subsets of activities (Arrotta et al., 2023a;
Tarafdar and Bose, 2021; Cruciani et al., 2020).
In this paper, we pre-process the dataset consistently with previous works in
context-aware HAR (Arrotta et al., 2023a). We consider the following $7$
activities: bicycling, lying down, moving by car, on transport, sitting,
standing, and walking. All the context information in the dataset that could
be easily collected by mobile devices is provided as input to the Neuro-
Symbolic model (e.g., audio level, screen brightness, ringer mode, etc.).
However, based on preliminary experiments, we provide to the LLM only the
context information where common-sense knowledge can be practically used to
reason about the target physical activities:
* •
Whether the user is indoors or outdoors
* •
Semantic location (Home, Workplace, School, Gym, Restaurant, Shopping, Bar,
Beach)
* •
User’s speed (null, low, medium, and high)
* •
User’s movements diameter (null, low, medium, and high)
* •
Whether the user is following a public transportation route
Overall, the ExtraSensory data presents $144$ unique context conditions for
the LLM. This number is significantly lower compared to DOMINO, due to a
reduced number of context variables and target activities.
### 4.2. Experimental setup
We implemented a working prototype of ContextGPT as well as the Neuro-Symbolic
model explained in Section 3.5 using the Python language (version 3.12). We
ran our experiments on a departmental machine running Ubuntu 20.04.4 LTS,
equipped with an AMD EPYC x86-64 processor, an NVIDIA A100 GPU (80 GB), and
$43.5$ GB of allocated RAM. In the following, we provide details about our
experimental setup.
#### 4.2.1. Large Language Models
The pre-trained LLM we used in our experiments is gpt-3.5-turbo by OpenAI,
queried via its API using the Python openai package (version 0.28.1). We set
the temperature of the model to $0$ to ensure a mostly deterministic output,
since our task does not require leveraging the model's creativity. The
pre-trained model we adopted to compute example embeddings, and thus similarity
scores (see Section 3.3.4), is all-MiniLM-L6-v2
(https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2).
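For concreteness, the following is a minimal sketch of how both models can be
invoked with these settings; the prompt strings and the example pool below are
placeholders, not the actual ContextGPT prompts (which are described in
Section 3):

```python
# A minimal sketch of the two model calls; prompts are placeholders.
import openai  # version 0.28.1 (legacy API)
from sentence_transformers import SentenceTransformer, util

openai.api_key = "YOUR_API_KEY"

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    temperature=0,  # mostly deterministic output
    messages=[
        {"role": "system", "content": "You reason about which human activities are consistent with a context."},
        {"role": "user", "content": "The user is outdoors, moving at high speed, on a public transportation route."},
    ],
)
print(response["choices"][0]["message"]["content"])

# Embeddings for example selection (Section 3.3.4): rank pool examples by
# cosine similarity with the current context description.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
context_emb = encoder.encode("The user is outdoors, moving at high speed", convert_to_tensor=True)
pool_embs = encoder.encode(["The user is at home, not moving.",
                            "The user is at the gym, moving slowly."], convert_to_tensor=True)
print(util.cos_sim(context_emb, pool_embs))  # keep examples scoring above the threshold k
```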
#### 4.2.2. Generating examples
A member of our research group, without prior experience with the specific
datasets used in this work, created the examples following the strategy
described in Section 3.3.4. To simplify the example creation task and to
automatically generate examples in natural language, we developed a simple
example-creation tool (see Figure 8).
Figure 8. A screenshot of our tool to create examples
Since each dataset has different sets of activities and context variables,
examples were created separately for DOMINO and ExtraSensory. The output of
this process is $21$ examples for DOMINO and $12$ for ExtraSensory.
#### 4.2.3. Model and Hyper-parameters
For the NeSy model, we use the implementation of NIMBUS proposed in (Arrotta
et al., 2023a), where NIMBUS is named Symbolic Features. Smartphone inertial
sensor data are provided to a sequence of three convolutional layers, with
$32$, $64$, and $96$ filters and corresponding kernel sizes of $24$, $16$, and
$8$, respectively. These layers are
interleaved with max-pooling layers with a pool size of $4$. Then, the flow
continues with a global max pooling layer and a fully connected layer with
$128$ neurons. In parallel, on another channel, the smartwatch inertial sensor
data is processed by an identical sequence of layers, with one small
difference: the kernel sizes of its three convolutional layers are $16$, $8$,
and $4$, respectively. In parallel, high-level context data (represented with one-hot
encoding) is provided to a single fully connected layer with $8$ neurons. The
features extracted from these three parallel streams are merged with the
consistency vector from ContextGPT using a concatenation layer (i.e., the
knowledge-infusion layer presented in Section 3.5). Then, a dropout layer
(with a rate of $0.1$) is introduced for regularization, followed by a fully
connected layer of $256$ neurons to extract correlations among the features
from the different data sources and the consistency vector. Finally, a softmax
layer returns the probability distribution over the possible activities.
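A minimal Keras sketch of this architecture is shown below, assuming a
Keras/TensorFlow implementation. The input shapes are our assumptions (the
number of samples per window depends on the sampling rate, and the context and
activity dimensions depend on the dataset); the layer sequence and sizes follow
the description above.

```python
# A minimal sketch of the NeSy architecture described above; shapes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_ACTIVITIES = 7   # e.g., ExtraSensory as pre-processed above
WINDOW = 400         # assumed samples per 4-second window (sampling rate not fixed here)
CONTEXT_DIM = 64     # assumed size of the one-hot context vector

def inertial_branch(inp, kernels):
    """Three Conv1D blocks interleaved with max pooling, then global pooling."""
    x = inp
    for filters, kernel in zip((32, 64, 96), kernels):
        x = layers.Conv1D(filters, kernel, padding="same", activation="relu")(x)
        x = layers.MaxPooling1D(4)(x)
    x = layers.GlobalMaxPooling1D()(x)
    return layers.Dense(128, activation="relu")(x)

phone_in = tf.keras.Input((WINDOW, 9))              # phone: acc + gyro + mag, 3 axes each
watch_in = tf.keras.Input((WINDOW, 3))              # watch: accelerometer only
context_in = tf.keras.Input((CONTEXT_DIM,))         # one-hot high-level context
consistency_in = tf.keras.Input((NUM_ACTIVITIES,))  # ContextGPT consistency vector

phone = inertial_branch(phone_in, (24, 16, 8))
watch = inertial_branch(watch_in, (16, 8, 4))
context = layers.Dense(8, activation="relu")(context_in)

x = layers.Concatenate()([phone, watch, context, consistency_in])  # knowledge-infusion layer
x = layers.Dropout(0.1)(x)
x = layers.Dense(256, activation="relu")(x)
out = layers.Dense(NUM_ACTIVITIES, activation="softmax")(x)

model = Model([phone_in, watch_in, context_in, consistency_in], out)
```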
We used the same hyper-parameters proposed in (Arrotta et al., 2023a), applied
in the training sketch that follows this list:
* •
Segmentation window: $4$ seconds
* •
Optimizer: Adam
* •
Loss: Categorical cross-entropy
* •
Learning rate: $0.001$
* •
Batch size: $32$
* •
Epochs: $200$
* •
Early stopping patience on the validation loss: $5$
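Continuing the sketch above, the corresponding training configuration could
look as follows; random placeholder arrays stand in for the real, segmented
4-second windows.

```python
# Training configuration matching the hyper-parameters listed above.
import numpy as np

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="categorical_crossentropy")
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=5)

n = 64  # placeholder sample count; real inputs are 4-second sensor windows
X = [np.random.randn(n, WINDOW, 9).astype("float32"),
     np.random.randn(n, WINDOW, 3).astype("float32"),
     np.random.rand(n, CONTEXT_DIM).astype("float32"),
     np.random.rand(n, NUM_ACTIVITIES).astype("float32")]
y = tf.keras.utils.to_categorical(np.random.randint(0, NUM_ACTIVITIES, n), NUM_ACTIVITIES)

model.fit(X, y, validation_split=0.1,            # 90% training / 10% validation
          batch_size=32, epochs=200, callbacks=[early_stop])
```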
### 4.3. Evaluation methodology
#### 4.3.1. Baselines
In this paper, we compare the infusion of ContextGPT knowledge into a Neuro-
Symbolic HAR model with two alternatives:
* •
No knowledge: a purely data-driven baseline without knowledge infusion, where
high-level context data is only used as input of the deep learning model.
* •
Ontology: the NIMBUS approach originally proposed in (Arrotta et al., 2022),
where knowledge is infused from an ontology encoding relationships between
activities and contexts. Hence, we use the ontology adopted in that work for
the DOMINO dataset and its adaptation to the ExtraSensory dataset recently
proposed in (Arrotta et al., 2023a).
#### 4.3.2. Evaluation Strategy
We evaluated the recognition rate of the HAR models by using a
leave-$n$-users-out cross-validation technique. At each fold, $n$ users were
designated for the test set, while the remaining users were split between the
training ($90\%$) and validation ($10\%$) sets. Specifically, for the DOMINO
dataset we set $n=1$, while for the ExtraSensory dataset, consistently with
other works in the literature (Cruciani et al., 2020), we set $n=5$.
We simulated several data scarcity scenarios by downsampling the available
training data at each fold. At each iteration, we used the test set to
evaluate the recognition rate in terms of macro F1 score. To ensure robust
results, we conducted each experiment $5$ times, calculating the average F1
score.
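The evaluation loop can be sketched as follows; `train_and_predict` is a
placeholder stub standing in for training the NeSy model and predicting on the
test set.

```python
# A schematic leave-n-users-out loop (n=1 for DOMINO, n=5 for ExtraSensory).
import random
from sklearn.metrics import f1_score

def train_and_predict(train, val, test):
    """Placeholder: train the NeSy model on train/val, predict labels for test."""
    return [s[2] for s in test]  # stub; a real run returns model predictions

def leave_n_users_out(samples, users, n, scarcity=1.0, seed=0):
    """samples: list of (user_id, features, label); users: list of user ids."""
    rng = random.Random(seed)
    scores = []
    for i in range(0, len(users), n):
        test_users = set(users[i:i + n])
        test = [s for s in samples if s[0] in test_users]
        rest = [s for s in samples if s[0] not in test_users]
        rng.shuffle(rest)
        rest = rest[:int(scarcity * len(rest))]   # simulate data scarcity (e.g., 0.1)
        split = int(0.9 * len(rest))
        train, val = rest[:split], rest[split:]   # 90% training / 10% validation
        y_pred = train_and_predict(train, val, test)
        scores.append(f1_score([s[2] for s in test], y_pred, average="macro"))
    return sum(scores) / len(scores)              # averaged over folds
```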
### 4.4. Results
#### 4.4.1. Comparison with the baselines
Figures 9 and 10 show, for both datasets, the impact of infusing ContextGPT
knowledge (using the $k$ value leading to the best results) compared to the
baselines.
Figure 9. DOMINO: Knowledge infusion with ContextGPT compared to the
baselines. This plot shows the F1 score in different data scarcity scenarios.
Figure 10. ExtraSensory: ContextGPT compared to the baselines. This plot shows
the F1 score in different data scarcity scenarios.
As expected, infusing knowledge from ContextGPT significantly outperforms the
purely data-driven No knowledge baseline in data scarcity scenarios. At the
same time, ContextGPT reaches results competitive with the Ontology baseline,
with the great advantage of significantly reduced human effort. We also
observe that, when the training set includes a sufficient number of labeled
samples, the contribution of knowledge infusion is significantly reduced. In
these cases, it is likely that most domain constraints are learned from the
training data itself.
Consistently with existing works (Cruciani et al., 2020; Tarafdar and Bose,
2021; Arrotta et al., 2023a), the in-the-wild nature of ExtraSensory leads to
relatively low recognition rates. Moreover, on this dataset, increasing the
percentage of training data slightly degrades the recognition rate of both
knowledge infusion approaches. The reason may be due to the underlying NeSy
model, in which raw sensor data may overshadow the infused knowledge when the
training set is large. Since DOMINO is significantly smaller, we do not
observe the same phenomenon on this dataset.
For both datasets, knowledge infusion is particularly effective for those
activities that significantly depend on the context (e.g., using the elevator,
moving by car, moving on public transportation), while its benefits are
reduced for those activities where context influence is limited (e.g.,
sitting, walking).
#### 4.4.2. Impact of examples selection
Figures 11 and 12 illustrate the distribution of the number of selected
examples by varying $k$ for DOMINO and ExtraSensory, respectively.
Figure 11. DOMINO: Distribution of the number of selected examples by varying
$k$. Figure 12. ExtraSensory: Distribution of the number of selected examples
by varying $k$.
We observe that in both cases $k=0.75$ is the lowest threshold value
significantly restricting the number of selected examples, while lower values
lead to a higher number of examples.
In Figure 9, the best results on DOMINO were achieved with low values of $k$
(i.e., ranging from $0$ to $0.25$) and thus a high number of examples. This is
due to the fact that, in this dataset, there is a high number of possible
context conditions. Thus, the LLM benefits from more examples describing the
required task.
On the other hand, in Figure 10, the best results on the ExtraSensory dataset
are associated with higher values of $k$ (i.e., ranging from $0.25$ to $0.95$)
and thus selecting a smaller number of examples. In this case, since the
number of possible context conditions is significantly lower compared to
DOMINO, a few examples (similar to the input context) provide the LLM with
better guidance on the required task.
In the following, we show the impact of example selection on the recognition
rate. Figures 13 and 14 show how $k$ impacts the recognition rate on each
dataset when only $10\%$ of the training data is available. We observe
that the DOMINO dataset in this case benefits from $k=0$ (i.e., using all the
examples), while higher values of $k$ lead to worse recognition rates.
Nonetheless, it is interesting to note that even $k=1$ (i.e., no examples)
outperforms the No knowledge baseline by $\approx 4\%$.
On the other hand, the best recognition rates reached on the ExtraSensory
dataset in the same data scarcity scenario are associated with $k=0.5$ and
$k=0.75$. Notably, $k=0.75$ leads to a significantly smaller number of
examples than $k=0.5$ and to a similar recognition rate.
Our results on these datasets suggest that the higher the number of context
variables and activities, the higher the number of examples needed in the
prompt to maximize the recognition rate. We believe that this reliance on
examples may be mitigated by future more comprehensive LLMs and with fine-
tuning.
Figure 13. DOMINO: Impact of k on macro F1 with 10% of the training set Figure
14. ExtraSensory: Impact of k on macro F1 with 10% of the training set
While examples play a major role in improving the recognition rate for data
scarcity scenarios, they do not lead to significant improvement when
sufficient amounts of training data are available. This phenomenon is depicted
in Figures 15 and 16, where we show the impact of $k$ when $50\%$ of the
training data is available.
Figure 15. DOMINO: Impact of k on macro F1 with 50% of the training set Figure
16. ExtraSensory: Impact of k on macro F1 with 50% of the training set
When enough training data is available, the knowledge gaps in the LLM models
addressed by examples are probably learned from training data. Nonetheless,
infusing knowledge from ContextGPT still outperforms the No knowledge
baseline.
## 5\. LLM vs. Ontology for Neuro-Symbolic HAR
Our results show that by using pre-trained LLMs instead of ontologies in NeSy
HAR systems, we can reach similar (and sometimes better) recognition results.
One may then wonder why LLMs are a more promising approach for the future
real-world deployment of these systems, given that ontologies like the ones
used in our work already exist, while LLMs require some prompt engineering
work.
Considering the extension of these systems to a large class of human
activities and different context situations, ontologies have clear
limitations. To the best of our knowledge, there are no publicly available
ontologies offering comprehensive coverage of all possible human activities
and context situations. Hence, significant human effort would be required to
extend and adapt an ontology to new datasets with different sets of activities
and context data.
Indeed, extending an ontology means defining complex knowledge relationships
between activities and contexts; it requires skills in the underlying
logic-based formalism (e.g., OWL2) and significant expertise in the HAR
domain. This task is also usually assigned to a single knowledge engineer or a
small team, with a high risk of incompleteness.
In our case, adapting ContextGPT to a new dataset only requires using natural
language to adjust the system message on the target activities, extending the
Context2Text module to map new contexts to natural language descriptions, and
generating a new pool of examples.
However, a significant disadvantage of LLMs compared to ontologies is the
absence of real semantic reasoning, since the output is based on data-driven
text generation. Hence, there may be contradictions and/or model
hallucinations that we would not experience by using rigorous logic systems.
For instance, in one case where the user was moving slowly, the model
considered standing as possible, with the following motivation: “Possible, as
the user is moving at a relatively slow pace”. While hallucinations may be
mitigated by more advanced LLM models (e.g., GPT-4), we believe that knowledge
infusion needs to cope with possibly noisy information.
Finally, we investigate the discrepancies between the sets of consistent
activities obtained by ContextGPT and the ones generated by the ontology.
Specifically, given a context condition, we compute the set $L$ of activities
considered consistent by the LLM in ContextGPT and $O$ as the set of
activities considered consistent by the ontology. Hence, we define the
following metrics:
* •
$L2O$ Inclusion: the proportion of activities considered as consistent by
ContextGPT that are also consistent for the ontology, computed as
$\frac{|L\bigcap O|}{|L|}$.
* •
$O2L$ Inclusion: the proportion of activities considered as consistent by the
ontology that are also consistent for ContextGPT, computed as $\frac{|L\bigcap
O|}{|O|}$.
Note that low values of $L2O$ Inclusion imply that ContextGPT considers as
consistent several activities that are not considered consistent by the
ontology. Conversely, low values of $O2L$ Inclusion imply that ContextGPT does
not consider as consistent several activities that are deemed consistent by
the ontology.
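For a single context condition, the two metrics reduce to a few lines of
Python (a straightforward sketch):

```python
# The two inclusion metrics defined above, for one context condition.
def l2o_inclusion(L: set, O: set) -> float:
    """Share of LLM-consistent activities also consistent for the ontology."""
    return len(L & O) / len(L) if L else 0.0

def o2l_inclusion(L: set, O: set) -> float:
    """Share of ontology-consistent activities also consistent for the LLM."""
    return len(L & O) / len(O) if O else 0.0

# e.g., L = {"sitting", "standing"}, O = {"sitting", "walking"} -> 0.5 and 0.5
```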
Figure 17. DOMINO: Average $L2O$ and $O2L$ inclusion metrics varying $k$.
Figure 18. ExtraSensory: Average $L2O$ and $O2L$ inclusion metrics varying
$k$.
Figure 17 shows, for the DOMINO dataset, the average scores of $L2O$ and $O2L$
inclusion metrics at different values of $k$. By increasing $k$, we observe
that the $O2L$ metric exhibits the more drastic decrease, denoting that
considering only a small number of examples leads ContextGPT to be more
restrictive than the ontology about the consistent activities. Moreover, high
values of $k$ are also associated with lower values
of $L2O$. This decrease in both metrics indicates a significant discrepancy
between the activities considered consistent by ContextGPT and those
considered consistent by the ontology. We hypothesize this may be correlated
with the results depicted in Figure 13, where a substantial number of examples
was required by ContextGPT to achieve results comparable to those obtained by
Ontology on the DOMINO dataset.
Figure 18 shows the impact of the inclusion metrics on ExtraSensory. In this
case, increasing $k$ significantly impacts only the $O2L$ Inclusion metric,
with ContextGPT considering a restricted set of consistent activities compared
to the ontology. On the other hand, the $L2O$ Inclusion metric exhibits only a
slight decrease since the activities consistent for ContextGPT are often
consistent also for the ontology. We hypothesize that such high values of the
$L2O$ metric are because, on this dataset, reducing the number of examples
does not lead to significant LLM’s hallucinations. This seems to be consistent
with the results previously depicted in Figure 14.
## 6\. Conclusions and Future Work
In this paper, we introduced ContextGPT: a novel method based on Large
Language Models (LLMs) to retrieve common-sense knowledge about human
activities to be infused in Neuro-Symbolic (NeSy) context-aware HAR models. We
showed the effectiveness of ContextGPT in data scarcity scenarios with an
extensive experimental evaluation using a state-of-the-art NeSy model. Our
results show that LLMs may effectively replace logic-based models in NeSy
systems to reduce human effort.
We have several plans for future work. First, in this work we considered a
general LLM incorporating knowledge from many domains. We want to investigate
how to specialize the LLM in the HAR domain. This is challenging since it
requires identifying reliable sources to effectively fine-tune the model.
Besides fine-tuning, we will investigate personalization aspects. Different
individuals may have customized habits requiring ad-hoc models (e.g., “Bob
usually enjoys running at the park some late afternoons after work, even if
it’s raining”). Thus, we will investigate how to introduce personalized
aspects into the prompt, which may help capture individual relationships
between activities and contexts.
Furthermore, ContextGPT currently generates a list of activities consistent with
the current context without considering fuzziness. However, associating a
consistency score with each activity may lead to more effective knowledge
infusion (Arrotta et al., 2022). The main challenge in this direction is that
LLMs are generally not good at dealing with numbers and mathematical
operations (Frieder et al., 2023).
Finally, we will also investigate alternative LLM models (e.g., GPT-4, LLAMA2)
and NeSy approaches (e.g., the Semantic Loss method recently proposed in
(Arrotta et al., 2023a)) and their impact on the recognition rate.
## Acknowledgments
The authors want to thank Mohcen Laaroussi for his excellent work on software
implementation. This work was partially supported by Italian Ministry of
University and Research with project “MUSA-Multilayered Urban Sustainability
Action” (project ID: ECS_00000037).
## References
* Abdallah et al. (2018) Zahraa S Abdallah, Mohamed Medhat Gaber, Bala Srinivasan, and Shonali Krishnaswamy. 2018. Activity recognition with evolving data streams: A review. _ACM Computing Surveys (CSUR)_ 51, 4 (2018), 1–36.
* Arrotta et al. (2020) Luca Arrotta, Claudio Bettini, Gabriele Civitarese, and Riccardo Presotto. 2020. Context-aware data association for multi-inhabitant sensor-based activity recognition. In _2020 21st IEEE International Conference on Mobile Data Management (MDM)_. IEEE, 125–130.
* Arrotta et al. (2022) Luca Arrotta, Gabriele Civitarese, and Claudio Bettini. 2022. Knowledge Infusion for Context-Aware Sensor-Based Human Activity Recognition. In _2022 IEEE International Conference on Smart Computing (SMARTCOMP)_. IEEE, 1–8.
* Arrotta et al. (2023a) Luca Arrotta, Gabriele Civitarese, and Claudio Bettini. 2023a. Neuro-Symbolic Approaches for Context-Aware Human Activity Recognition. _arXiv preprint arXiv:2306.05058_ (2023).
* Arrotta et al. (2023b) Luca Arrotta, Gabriele Civitarese, Riccardo Presotto, and Claudio Bettini. 2023b. DOMINO: A Dataset for Context-Aware Human Activity Recognition using Mobile Devices. In _2023 24th IEEE International Conference on Mobile Data Management (MDM)_. IEEE, 346–351.
* Bettini et al. (2020) Claudio Bettini, Gabriele Civitarese, and Riccardo Presotto. 2020. Caviar: Context-driven active and incremental activity recognition. _Knowledge-Based Systems_ 196 (2020), 105816.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020\. Language models are few-shot learners. _Advances in neural information processing systems_ 33 (2020), 1877–1901.
* Chen et al. (2021) Kaixuan Chen, Dalin Zhang, Lina Yao, Bin Guo, Zhiwen Yu, and Yunhao Liu. 2021. Deep learning for sensor-based human activity recognition: Overview, challenges, and opportunities. _ACM Computing Surveys (CSUR)_ 54, 4 (2021), 1–40.
* Cruciani et al. (2020) Federico Cruciani, Anastasios Vafeiadis, Chris Nugent, Ian Cleland, Paul McCullagh, Konstantinos Votis, Dimitrios Giakoumis, Dimitrios Tzovaras, Liming Chen, and Raouf Hamzaoui. 2020. Feature learning for human activity recognition using convolutional neural networks: A case study for inertial measurement unit and audio data. _CCF Transactions on Pervasive Computing and Interaction_ 2, 1 (2020), 18–32.
* Dash et al. (2022) Tirtharaj Dash, Sharad Chitlangia, Aditya Ahuja, and Ashwin Srinivasan. 2022. A review of some techniques for inclusion of domain-knowledge into deep neural networks. _Scientific Reports_ 12, 1 (2022), 1040.
* Frieder et al. (2023) Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. 2023. Mathematical capabilities of chatgpt. _arXiv preprint arXiv:2301.13867_ (2023).
* Graule and Isler (2023) Moritz A Graule and Volkan Isler. 2023. GG-LLM: Geometrically Grounding Large Language Models for Zero-shot Human Activity Forecasting in Human-Aware Task Planning. _arXiv preprint arXiv:2310.20034_ (2023).
* Gu et al. (2021) Fuqiang Gu, Mu-Huan Chung, Mark Chignell, Shahrokh Valaee, Baoding Zhou, and Xue Liu. 2021. A survey on deep learning for human activity recognition. _ACM Computing Surveys (CSUR)_ 54, 8 (2021), 1–34.
* Guo et al. (2022) Chuan Guo, Shihao Zou, Xinxin Zuo, Sen Wang, Wei Ji, Xingyu Li, and Li Cheng. 2022. Generating diverse and natural 3d human motions from text. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 5152–5161.
* Haresamudram et al. (2022) Harish Haresamudram, Irfan Essa, and Thomas Plötz. 2022. Assessing the state of self-supervised human activity recognition using wearables. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_ 6, 3 (2022), 1–47.
* Henricksen et al. (2005) Karen Henricksen, Jadwiga Indulska, Ted McFadden, and Sasitharan Balasubramaniam. 2005. Middleware for distributed context-aware systems. In _OTM Confederated International Conferences” On the Move to Meaningful Internet Systems”_. Springer, 846–863.
* Hiremath et al. (2022) Shruthi K Hiremath, Yasutaka Nishimura, Sonia Chernova, and Thomas Plötz. 2022. Bootstrapping Human Activity Recognition Systems for Smart Homes from Scratch. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_ 6, 3 (2022), 1–27.
* Jain et al. (2022) Yash Jain, Chi Ian Tang, Chulhong Min, Fahim Kawsar, and Akhil Mathur. 2022. ColloSSL: Collaborative Self-Supervised Learning for Human Activity Recognition. _Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies_ 6, 1 (2022), 1–28.
* Kaneko and Inoue (2023) Haru Kaneko and Sozo Inoue. 2023. Toward Pioneering Sensors and Features Using Large Language Models in Human Activity Recognition. _arXiv preprint arXiv:2306.16017_ (2023).
* Kursuncu et al. (2019) Ugur Kursuncu, Manas Gaur, and Amit Sheth. 2019. Knowledge infused learning (k-il): Towards deep incorporation of knowledge in deep learning. _arXiv preprint arXiv:1912.00512_ (2019).
* Leng et al. (2023) Zikang Leng, Hyeokhyen Kwon, and Thomas Plötz. 2023. Generating Virtual On-body Accelerometer Data from Virtual Textual Descriptions for Human Activity Recognition. _arXiv preprint arXiv:2305.03187_ (2023).
* Meyer et al. (2023) Jesse G Meyer, Ryan J Urbanowicz, Patrick CN Martin, Karen O’Connor, Ruowang Li, Pei-Chen Peng, Tiffani J Bright, Nicholas Tatonetti, Kyoung Jae Won, Graciela Gonzalez-Hernandez, et al. 2023\. ChatGPT and large language models in academia: opportunities and challenges. _BioData Mining_ 16, 1 (2023), 20.
* Perkowitz et al. (2004) Mike Perkowitz, Matthai Philipose, Kenneth Fishkin, and Donald J Patterson. 2004. Mining models of human activities from the web. In _Proceedings of the 13th international conference on World Wide Web_. 573–582.
* Riboni and Bettini (2011) Daniele Riboni and Claudio Bettini. 2011. COSAR: hybrid reasoning for context-aware activity recognition. _Personal and Ubiquitous Computing_ 15, 3 (2011), 271–289.
* Riboni and Murtas (2019) Daniele Riboni and Marta Murtas. 2019. Sensor-based activity recognition: One picture is worth a thousand words. _Future Generation Computer Systems_ 101 (2019), 709–722.
* Saguna et al. (2013) Saguna Saguna, Arkady Zaslavsky, and Dipanjan Chakraborty. 2013. Complex activity recognition using context-driven activity theory and activity signatures. _ACM Transactions on Computer-Human Interaction (TOCHI)_ 20, 6 (2013), 1–34.
* Sanabria et al. (2021) Andrea Rosales Sanabria, Franco Zambonelli, and Juan Ye. 2021. Unsupervised Domain Adaptation in Activity Recognition: A GAN-Based Approach. _IEEE Access_ 9 (2021), 19421–19438.
* Sarker et al. (2021) Md Kamruzzaman Sarker, Lu Zhou, Aaron Eberhart, and Pascal Hitzler. 2021. Neuro-symbolic artificial intelligence: Current trends. _arXiv preprint arXiv:2105.05330_ (2021).
* Sheth et al. (2019) Amit Sheth, Manas Gaur, Ugur Kursuncu, and Ruwan Wickramarachchi. 2019. Shades of knowledge-infused learning for enhancing deep learning. _IEEE Internet Computing_ 23, 6 (2019), 54–63.
* Soleimani and Nazerfard (2021) Elnaz Soleimani and Ehsan Nazerfard. 2021. Cross-subject transfer learning in human activity recognition systems using generative adversarial networks. _Neurocomputing_ 426 (2021), 26–34.
* Takeda et al. (2023) Naoto Takeda, Roberto Legaspi, Yasutaka Nishimura, Kazushi Ikeda, Atsunori Minamikawa, Thomas Plötz, and Sonia Chernova. 2023. Sensor Event Sequence Prediction for Proactive Smart Home Support Using Autoregressive Language Model. In _2023 19th International Conference on Intelligent Environments (IE)_. IEEE, 1–8.
* Tarafdar and Bose (2021) Pratik Tarafdar and Indranil Bose. 2021. Recognition of human activities for wellness management using a smartphone and a smartwatch: a boosting approach. _Decision Support Systems_ 140 (2021), 113426.
* Vaizman et al. (2018) Yonatan Vaizman, Katherine Ellis, Gert Lanckriet, and Nadir Weibel. 2018. Extrasensory app: Data collection in-the-wild with rich user interface to self-report behavior. In _Proceedings of the 2018 CHI conference on human factors in computing systems_. 1–12.
* Wang et al. (2019) Jindong Wang, Yiqiang Chen, Shuji Hao, Xiaohui Peng, and Lisha Hu. 2019. Deep learning for sensor-based activity recognition: A survey. _Pattern Recognition Letters_ 119 (2019), 3–11.
* Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022\. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural Information Processing Systems_ 35 (2022), 24824–24837.
* Xia et al. (2023) Qingxin Xia, Takuya Maekawa, and Takahiro Hara. 2023. Unsupervised Human Activity Recognition through Two-stage Prompting with ChatGPT. _arXiv preprint arXiv:2306.02140_ (2023).
* Xing et al. (2020) Tianwei Xing, Luis Garcia, Marc Roig Vilamala, Federico Cerutti, Lance Kaplan, Alun Preece, and Mani Srivastava. 2020. Neuroplex: learning to detect complex events in sensor networks through knowledge injection. In _Proceedings of the 18th conference on embedded networked sensor systems_. 489–502.
* Yordanova (2016) Kristina Yordanova. 2016. From textual instructions to sensor-based recognition of user behaviour. In _Companion Publication of the 21st International Conference on Intelligent User Interfaces_. 67–73.
* Zhou et al. (2023) Yunjiao Zhou, Jianfei Yang, Han Zou, and Lihua Xie. 2023. TENT: Connect Language Models with IoT Sensors for Zero-Shot Activity Recognition. _arXiv preprint arXiv:2311.08245_ (2023).
Sorbonne Université, UMR 7589, LPTHE, F-75005, Paris, France
& CNRS, UMR 7589, LPTHE, F-75005, Paris, France
<EMAIL_ADDRESS>
The counting of partitions according to their genus is revisited. The case of
genus 0 (non-crossing partitions) is well known. Our approach relies on two
pillars: first, a functional equation between generating functions, originally
written in genus 0 and interpreted graphically by Cvitanovic, is generalized
to higher genus; secondly, we show that all partitions may be reconstructed
from the “(semi-)primitive” ones introduced by Cori and Hetyei. Explicit
results for the generating functions of all types of partitions are obtained
in genus 1 and 2. This gives a second-order interpolation between expansions
on ordinary and on free cumulants.
## 1 Introduction
### 1.1 Genus interpolation between usual and free cumulant expansions
Set partitions, say of the set $[\\![n]\\!]:=\\{1,2,\cdots,n\\}$, are
fundamental objects in combinatorics. Let ${\mathcal{P}}(n)$ denote their set.
Their census, subject to different conditions, has been and is still the
subject of an abundant literature. In particular, it is well known, as we
recall below in sect. 2.2, that any partition $\alpha$ may be assigned a genus
$g(\alpha)$ by a formula descending from Euler’s relation. Curiously, the
census of partitions according to their genus is still an open problem, in
spite of several fundamental contributions, [12, 16, 17, 3, 4]. Except for a
few particular cases, only the case of genus 0 is thoroughly known: the
non-crossing (or planar) partitions have been enumerated by Kreweras [12],
before reappearing in various contexts such as matrix integrals [1] and free
probability [15, 14].
The question also arises in probability theory and statistical mechanics.
There, it is common practice to associate cumulants to moments of random
variables. If $X$ is a random variable with moments $m_{n}=\mathbb{E}(X^{n})$
of arbitrary order, we decompose these moments on cumulants $\kappa_{m}$ and
their products associated with partitions $\alpha\in{\mathcal{P}}(n)$
$m_{n}=\sum_{\alpha\in{\mathcal{P}}(n)}\kappa_{\alpha}\,.$ (1)
Thus each term in (1) may be regarded as associated with a splitting of
$[\\![n]\\!]$ into parts described by the partition; in statistical mechanics,
the terms $\kappa_{\alpha}$ are dubbed the connected parts of the moment
$m_{n}$.
If $\alpha_{\ell}$ denotes the number of parts of cardinality $\ell$ in the
partition $\alpha$, with $\sum_{\ell=1}^{n}\ell\alpha_{\ell}=n$, then
(assuming that the moments and cumulants do not depend on extra variables such
as momenta)
$\kappa_{\alpha}:=\prod_{\ell=1}^{n}\kappa_{\ell}^{\alpha_{\ell}}$ (2)
depends only on the type $[\alpha]=[1^{\alpha_{1}}2^{\alpha_{2}}\cdots
n^{\alpha_{n}}]$ of the partition $\alpha$; $[\alpha]$ may be regarded as a
partition of the integer $n$. By a small abuse of notation, we use the same
letter $\kappa$ to denote elementary cumulants $\kappa_{\ell}$,
$\ell\in\mathbb{N}$; compound ones $\kappa_{\alpha}$,
$\alpha\in{\mathcal{P}}(n)$ as in (2); or $\kappa_{[\alpha]}=\kappa_{\alpha}$,
$[\alpha]\vdash n$, and we rewrite
$m_{n}=\sum_{[\alpha]\vdash n}c_{n,[\alpha]}\kappa_{[\alpha]}\,$ (3)
where $c_{n,[\alpha]}$ denotes the number of partitions of type $[\alpha]$ (a
coefficient of a Bell polynomial)
$c_{n,[\alpha]}=\frac{n!}{\prod_{\ell=1}^{n}\alpha_{\ell}!(\ell!)^{\alpha_{\ell}}}\,.$
(4)
Then, making use of the genus $g(\alpha)$ mentioned above, it is natural to
modify the expansion (3) by weighting the various terms acccording to their
genus. Introducing a parameter $\epsilon$, we write
$m_{n}(\epsilon)=\sum_{\alpha\in{\mathcal{P}}(n)}\epsilon^{g(\alpha)}\kappa_{\alpha}$
(5)
or
$m_{n}(\epsilon)=\sum_{[\alpha]\vdash n}\sum_{g=0}^{g_{\mathrm{max}}([\alpha])}C^{(g)}_{n,[\alpha]}\epsilon^{g}\kappa_{[\alpha]}\,,$ (6)
where $C^{(g)}_{n,[\alpha]}$ counts the number of partitions of type
$[\alpha]$ and of genus $g$.
For example,
$m_{4}(\epsilon)=\kappa_{4}+4\,\kappa_{3}\,\kappa_{1}+(2+\epsilon)\kappa_{2}^{2}+6\,\kappa_{2}\,{\kappa_{1}}^{2}+{\kappa_{1}}^{4}\,,$
see below.
Obviously $\sum_{g}C^{(g)}_{n,[\alpha]}=c_{n,[\alpha]}$, the coefficient in
(4), thus for $\epsilon=1$, we recover the usual expansion (3), whereas for
$\epsilon=0$, we have an expansion on non crossing (or free, or planar)
cumulants. Thus (6) provides an interpolation between the usual cumulant
expansion and that on non crossing ones.
In this paper, we try to determine the numbers $C^{(g)}_{n,[\alpha]}$. Or
alternatively, we strive to find relations between the (ordinary) generating
functions (GF) of the $m_{n}(\epsilon)$ and of the $\kappa_{\ell}$:
$Z(x,\epsilon)=1+\sum_{n\geq 1}m_{n}(\epsilon)x^{n}=\sum_{g\geq 0}Z^{(g)}(x)\epsilon^{g}\,,$ (7)
$W(x)=\sum_{\ell\geq 1}\kappa_{\ell}x^{\ell}\,.$ (8)
This will be achieved for genus 1 and 2, and the corresponding expressions of
$Z^{(g)}(x)$ are given by Theorem 1, (28), and Theorem 2, (45). Extension to
higher genera is in principle feasible, if the list of their primitive
diagrams is known.
### 1.2 Eliminating or reinserting singletons
In a partition, parts of size 1 are called singletons. It is natural and easy
to remove them in the counting, or to relate the countings of partitions with
or without singletons. Let us denote with a hat the GF of partitions without
singletons: $\hat{Z}^{(g)}(x)$, and derive the relation between
$\hat{Z}^{(g)}(x)$ and $Z^{(g)}(x)$. This is particularly easy in the language
of statistics, where discarding singletons amounts to going to a centered
variable: $X=\hat{X}+\mathbb{E}(X)=\hat{X}+m_{1}=\hat{X}+\kappa_{1}$
$m_{n}=\mathbb{E}(X^{n})=\mathbb{E}((\hat{X}+\kappa_{1})^{n})=\sum_{r=0}^{n}{n\choose r}\,\hat{m}_{n-r}\,\kappa_{1}^{r}$
and, since singletons do not affect the genus, see below sect. 2.6,
$C^{(g)}_{n,[\alpha^{\prime},1^{r}]}={n\choose
r}C^{(g)}_{n-r,[\alpha^{\prime}]}\,$ (9)
where the partition $\alpha^{\prime}$ is singleton free (s.f.). For example,
$m_{1}=\kappa_{1}$
$m_{2}=\kappa_{2}+\kappa_{1}^{2}$
$m_{3}=\kappa_{3}+3\kappa_{2}\kappa_{1}+\kappa_{1}^{3}$
$m_{4}=\kappa_{4}+(2+\epsilon)\kappa_{2}^{2}+4\kappa_{3}\kappa_{1}+6\kappa_{2}\kappa_{1}^{2}+\kappa_{1}^{4}$
$m_{5}=\kappa_{5}+5\,\kappa_{4}\kappa_{1}+5(1+\epsilon)\kappa_{3}\kappa_{2}+10\,\kappa_{3}\kappa_{1}^{2}+5(2+\epsilon)\,\kappa_{2}^{2}\kappa_{1}+10\,\kappa_{2}\kappa_{1}^{3}+\kappa_{1}^{5}\,,$
etc.
Then
$\displaystyle Z^{(g)}(x)=\sum_{n\geq 0}x^{n}\sum_{[\alpha]\atop\alpha\in{\mathcal{P}}(n)}C^{(g)}_{n,[\alpha]}\kappa_{[\alpha]}=\sum_{n\geq 0}x^{n}\sum_{r=0}^{n}\sum_{[\alpha^{\prime}]\atop\alpha^{\prime}\in{\mathcal{P}}(n-r),\,\mathrm{s.f.}}C^{(g)}_{n,[1^{r},\alpha^{\prime}]}\kappa_{[\alpha^{\prime}]}\kappa_{1}^{r}$
$\displaystyle=\sum_{n^{\prime}\geq 0}x^{n^{\prime}}\sum_{[\alpha^{\prime}]\atop\alpha^{\prime}\in{\mathcal{P}}(n^{\prime}),\,\mathrm{s.f.}}C^{(g)}_{n^{\prime},[\alpha^{\prime}]}\kappa_{[\alpha^{\prime}]}\sum_{r\geq 0}{n^{\prime}+r\choose r}\kappa_{1}^{r}x^{r}=\sum_{n^{\prime}\geq 0}\sum_{[\alpha^{\prime}]\atop\alpha^{\prime}\in{\mathcal{P}}(n^{\prime}),\,\mathrm{s.f.}}C^{(g)}_{n^{\prime},[\alpha^{\prime}]}\kappa_{[\alpha^{\prime}]}\frac{x^{n^{\prime}}}{(1-\kappa_{1}x)^{n^{\prime}+1}}$
$\displaystyle=\frac{1}{1-\kappa_{1}x}\,\hat{Z}^{(g)}\big{(}\frac{x}{1-\kappa_{1}x}\big{)}\,,$ (10)
and conversely
$\hat{Z}^{(g)}(u)=\frac{1}{1+\kappa_{1}u}\,Z^{(g)}\big{(}\frac{u}{1+\kappa_{1}u}\big{)}\,.$
(11)
## 2 Partitions and their genus
In this section, we recall some standard notions on partitions, show how to
associate a graphical representation to a partition and introduce its genus in
a natural way.
### 2.1 Parts of a partition
As explained in sect. 1, we are interested in partitions of the set
$[\\![n]\\!]$.
Note that when listing the parts of a partition
$\alpha=(\\{i_{1}\\},\cdots,\\{i_{\alpha_{1}}\\},\\{j_{1},j_{2}\\},\cdots)$,
(i) the ordering of elements in each part is immaterial, and we thus choose to
write them in increasing order;
(ii) the relative position of parts is immaterial.
For example, consider the partition
$(\\{1,3,4,6,7\\},\\{2,5,9\\},\\{8\\},\\{10\\})$ of $[\\![10]\\!]$. It is of
type $[1^{2},3,5]$ with two singletons $\\{8\\}$ and $\\{10\\}$. Clearly the
order of elements within each part is irrelevant, e.g. parts $\\{1,3,4,6,7\\}$
and $\\{3,4,1,7,6\\}$ describe the same subset of $[\\![10]\\!]$. One may thus
order the elements of each part. Likewise the relative order of the parts is
immaterial:
$(\\{1,3,4,6,7\\},\\{2,5,9\\},\\{8\\},\\{10\\})$ and
$(\\{2,5,9\\},\\{8\\},\\{1,3,4,6,7\\},\\{10\\})$ describe the same partition.
Figure 1: The partition $(\\{1,3,4,6,7\\},\\{2,5,9\\},\\{8\\},\\{10\\})$ of
$[\\![10]\\!]$. (a) and (b): two equivalent representations of the 10-vertex;
(c) the four other vertices; (d) a contribution to
$C^{(g)}_{10,[1^{2}\,3\,5]}$; (e) the double line version of (d), with three
faces and thus genus $g=$ 2; (f) the linear version of (d).
### 2.2 Combinatorial and graphical representations of a partition and its
genus
A general partition $\alpha$ of ${\mathcal{P}}(n)$ may be described in terms
of a pair of permutations $\sigma$ and $\tau$, both in $\mathcal{S}_{n}$:
$\sigma$ is the cyclic permutation $(1,2,\cdots,n)$; $\tau$ belongs to the
class $[\alpha]$ of $\mathcal{S}_{n}$, and its cycles are described by the
parts of $\alpha$, thus subject to the condition (i) above: each cycle is an
increasing list of integers.
The genus $g$ of the partition is then defined by [10]
$n+2-2g=\\#\mathrm{cy}(\tau)+\\#\mathrm{cy}(\sigma)+\\#\mathrm{cy}(\sigma\circ\tau^{-1})\,$
(12)
or in the present case,
$-2g=\sum\alpha_{\ell}-1-n+\\#\mathrm{cy}(\sigma\circ\tau^{-1})\,.$ (13)
since here $\\#\mathrm{cy}(\sigma)=1$ and
$\\#\mathrm{cy}(\tau)=\sum\alpha_{k}$. Since
$\\#\mathrm{cy}(\sigma\circ\tau^{-1})\geq 1$, we find an upper bound on $g$
$g\leq
g_{\mathrm{max}}:=\bigg{\lfloor}\frac{1}{2}(n-\sum\alpha_{k})\bigg{\rfloor}\,,$
(14)
see also [18]. We recall below why this definition of the genus is natural.
Example.
For the above partition of $[\\![10]\\!]$, $\sigma=(1,2,\cdots,10)$,
$\tau=(1,3,4,6,7)(2,5,9)(8)(10)$,
$\sigma\circ\tau^{-1}=(1,8,9,6,5,3,2,10)(4)(7)$. Thus $2g=11-4-3=4$, $g=2$,
while $g_{\mathrm{max}}=3$.
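This computation is easily mechanized. The following short Python sketch
encodes $\sigma$ and $\tau$ as dictionaries on $\\{1,\cdots,n\\}$ and
reproduces the example:

```python
# A minimal sketch computing the genus from eq. (12).
def num_cycles(perm):
    """Count the cycles of a permutation given as a dict {i: perm(i)}."""
    seen, count = set(), 0
    for start in perm:
        if start not in seen:
            count += 1
            j = start
            while j not in seen:
                seen.add(j)
                j = perm[j]
    return count

def genus(n, parts):
    sigma = {i: i % n + 1 for i in range(1, n + 1)}   # (1,2,...,n)
    tau = {}
    for part in parts:                                # increasing cycles
        p = sorted(part)
        for a, b in zip(p, p[1:] + p[:1]):
            tau[a] = b
    tau_inv = {b: a for a, b in tau.items()}
    comp = {i: sigma[tau_inv[i]] for i in sigma}      # sigma o tau^{-1}
    f = num_cycles(comp)
    return (n + 2 - num_cycles(tau) - 1 - f) // 2     # eq. (12), #cy(sigma) = 1

print(genus(10, [{1, 3, 4, 6, 7}, {2, 5, 9}, {8}, {10}]))  # -> 2, as above
```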
To a given partition, we may also attach a map: it has $\alpha_{\ell}$
$\ell$-valent vertices, in short $\ell$-vertices (recall that $\alpha_{\ell}$
are the multiplicities introduced in (2)), for $\ell=1,2,\cdots$, whose edges
are numbered clockwise by the elements of the partition, and a special
$n$-valent vertex, with its $n$ edges numbered anti-clockwise from 1 to $n$,
see Fig. 1a,c. Edges are connected pairwise by
matching their indices. Two maps are regarded as topologically equivalent if
they encode the same partition. In fact it is topologically equivalent and
more handy to attach $n$ points clockwise on a circle, and to connect them
pairwise by arcs of the circle, see Fig. 1b. Now the permutation $\sigma$
describes the connectivity of the $n$ points on the circle, while $\tau$
describes how these points are connected through the other vertices. It is
readily seen that the permutation $\sigma\circ\tau^{-1}$ describes the
circuits bounding clockwise the faces of the map. This is even more clearly
seen if one adopts a double line notation for each edge [9], thus transforming
the map into a “fat graph”, see Fig. 1e. Thus the number of cycles of
$\sigma\circ\tau^{-1}$ is the number $f$ of faces of the map. Since each face
is homeomorphic to a disk, gluing a disk to each face transforms the map into
a closed Riemann surface, to which we may apply Euler’s formula
$2-2g=\\#(\mathrm{vertices})-\\#(\mathrm{edges})+\\#(\mathrm{faces})=1+\sum_{\ell}\alpha_{\ell}-n+f$
(15)
with $f=\\#\mathrm{cy}(\sigma\circ\tau^{-1})$, and we have reproduced (13).
Remark 1. This coding of a map, or here of a partition, by a pair of
permutations, with a resulting expression of its genus, is an old idea
originating in the work of Jacques, Walsh and Lehman [10, 16, 17] and
rediscovered and used with variants by many authors since then [6].
Remark 2. The diagrammatic representation that we adopt here differs from that
of other authors [18, 4]: in fact it is the dual picture, with our vertices
corresponding to faces of these authors. Our preference for the former is due
to its analogy with Feynman diagrams…
### 2.3 Glossary
It may be useful to list some elements of terminology used below.
It is convenient to represent a partition of ${\mathcal{P}}(n)$ by a diagram.
It may be a circular diagram, with $n$ points equidistributed clockwise, as on
Fig. 1-d, and it has a genus as explained above. We distinguish the points on
the circle from the vertices which lie inside the disk.
Occasionally we use a linear diagram, with $n$ points labelled from 1 to $n$
on a line (or an arc), and vertices above the line.
Note that if we give each point of the circle a weight $x$ and each $k$-vertex
the weight $\kappa_{k}$, the sum of diagrams of genus $g$ builds the GF
$Z^{(g)}(x)$.
In a (circular) diagram, we call 2-line a pair of edges attached to a
2-vertex. In the following, the middle 2-vertex will be omitted on 2-lines, to
avoid overloading the figures. A 2-line is then just a straight line between
two points of the circle.
In a diagram, we call adjacent a pair of edges joining a vertex to adjacent
points on the circle. For example, on Fig. 2, the edges ending at 1 and 3 are
not adjacent, those ending at 3 and 4 are.
In the following discussion, it will be important to focus on a point on the
circle, say point 1, and see what it is connected to. We shall refer to it as
the marked point.
If $\alpha$ is a partition of ${\mathcal{P}}(n)$ of a given type, all its
conjugates by powers of the cyclic permutation $\sigma$ have the same type.
Counting partitions of a given type thus amounts to counting orbits of
diagrams under the action of $\sigma$, while recording the length
(cardinality) of each orbit. Diagrammatically, the point 1 being marked, we
list orbits under rotations of the inner pattern of vertices and edges by the
cyclic group $\mathbb{Z}_{n}$, and record the length of each orbit. An orbit
has a weight equal to its length $n/{\mathfrak{s}}$, where ${\mathfrak{s}}$ is
the order of the stabilizer of the diagram – a subgroup of the rotation group.
For example, the left-most diagram of Fig. 8 has ${\mathfrak{s}}=2$, the
right-most ${\mathfrak{s}}=8$, the others have ${\mathfrak{s}}=1$.
### 2.4 The coefficients $C_{n,[\alpha]}^{(g)}$
We now return to our problem of determining the coefficients
$C_{n,[\alpha]}^{(g)}$ in (6). From the previous discussion, if we denote
${\mathcal{O}}_{n}([\alpha])\subset{\mathcal{S}}_{n}$ the subset of
permutations of class $[\alpha]$, whose cycles involve only increasing
sequences of integers, we have
$C_{n,[\alpha]}^{(g)}=\\#\left\\{\tau\big{|}\tau\in{\mathcal{O}}_{n}([\alpha]),\
g=\frac{1}{2}\Big{(}n+1-\sum\alpha_{\ell}-\\#\mathrm{cy}(\sigma\circ\tau^{-1})\Big{)}\right\\}\,.$
(16)
Alternatively, one may use the diagrammatic language to write
$C_{n,[\alpha]}^{(g)}=\sum_{\mathrm{orbits}}\mathrm{length\ of\
orbit}=n\sum_{\mathrm{orbits}}\frac{1}{{\mathfrak{s}}}\,,$ (17)
with a sum over orbits of diagrams of type $[\alpha]$ and genus $g$.
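As a brute-force sanity check of (16)-(17), one may enumerate all partitions
of $[\\![n]\\!]$, compute each genus, and tally by type; the following sketch
confirms, for $n=5$, the genus-1 counts of types $[2\,3]$ and $[1\,2^{2}]$
appearing in $m_{5}$ of sect. 1.2.

```python
# Enumerate all set partitions of {1..n}, tally C^(g)_{n,[alpha]} by type.
from collections import Counter

def set_partitions(n):
    """Yield all partitions of {1..n} as tuples of increasing tuples."""
    if n == 0:
        yield ()
        return
    for smaller in set_partitions(n - 1):
        for i in range(len(smaller)):
            yield smaller[:i] + (smaller[i] + (n,),) + smaller[i + 1:]
        yield smaller + ((n,),)

def genus(n, parts):  # as in the sketch of sect. 2.2
    tau = {}
    for p in parts:
        for a, b in zip(p, p[1:] + p[:1]):
            tau[a] = b
    tau_inv = {b: a for a, b in tau.items()}
    comp = {i: tau_inv[i] % n + 1 for i in range(1, n + 1)}  # sigma o tau^{-1}
    seen, f = set(), 0
    for s in comp:
        if s not in seen:
            f += 1
            while s not in seen:
                seen.add(s)
                s = comp[s]
    return (n + 1 - len(parts) - f) // 2

tally = Counter()
for parts in set_partitions(5):
    tally[(genus(5, parts), tuple(sorted(len(p) for p in parts)))] += 1
print(tally[(1, (2, 3))], tally[(1, (1, 2, 2))])  # -> 5 5, cf. m_5 in sect. 1.2
```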
### 2.5 Remark on matrix integrals
As ’t Hooft’s double line notation [9] suggests, the coefficient
$C_{n,[\alpha]}(\epsilon)=\sum_{g}C_{n,[\alpha]}^{(g)}\epsilon^{g}$ (18)
could be defined and computed in matrix integrals
– (i) as the coefficient of $\prod_{\ell}\kappa_{\ell}^{\alpha_{\ell}}$ in the
computation of $\langle\frac{1}{N}:{\rm tr}\,M^{n}:\rangle_{rc}$ in a matrix
theory with action $S=-\frac{1}{2}N{\rm
tr}\,M^{2}+N\sum_{\ell}\kappa_{\ell}\,{\rm tr}\,M^{\ell}/\ell$; the notation
$:\ :$ and the subscript “rc” will be explained shortly;
– (ii) as the value of $\langle:\frac{1}{N}{\rm
tr}\,M^{n}:\,:\prod_{\ell}\frac{(N{\rm
tr}\,M^{\ell}/\ell)^{\alpha_{\ell}}}{\alpha_{\ell}!}:\rangle_{rc}$ in a
Gaussian matrix theory.
In both cases, $\epsilon=\frac{1}{N^{2}}$, if $N$ is the size of the
(Hermitian) matrices; $C_{n,[\alpha]}(N^{-2})$ is given by a sum of Feynman
diagrams (in fact, of “fat graphs”, or of maps) with
$1+\sum_{\ell}\alpha_{\ell}$ vertices, $n$ edges (“propagators”) joining the
$n$-vertex ${\rm tr}\,M^{n}$ to the other $\ell$-vertices, and $f$ faces
associated with each closed index circuit. The double dots $:\ :$ is a
standard notation in quantum field theory, where it denotes the normal or Wick
product, that forbids edges from a vertex to itself: here it forces all edges
to reach the $n$-vertex. The crucial point is that we impose a restricted
crossing (“rc”) condition: the edges connecting each $\ell$-vertex to the
$n$-vertex cannot cross one another, thus respecting their original cyclicity
and ordering. Only crossings of edges emanating from distinct vertices are
allowed.
It is that constraint, a direct consequence of rule 2.1 (i) above, that makes
the computation of the coefficients $C_{n,[\alpha]}^{(g)}$ by matrix integrals
or group theoretical techniques, and the writing of recursion formulae
between them, quite non-trivial. For partitions into doublets, however, one
deals only with 2-vertices, for which the constraint is irrelevant, and
$C_{n=2p,[2^{p}]}^{(g)}$ is computable by these techniques [16, 8, 13].
Figure 2: (a) Diagram for the partition of $[\\![10]\\!]$ into
$(\\{1,3,4,6,7\\},\\{2,5\\},\\{8,9,10\\})$, $f=6$ hence genus
$g=1-(3+1-10+6)/2=1$; (b) after removal of the three adjacent edges coming
from the “centipede” $\\{8,9,10\\}$, here a 3-vertex, now $n^{\prime}=7$,
$f^{\prime}=4$, $g^{\prime}=1$; (c) after reduction of two sets of adjacent
edges to points 3 and 4, and 6, 7 and 1: now $n^{\prime\prime}=4$,
$f^{\prime\prime}=1$, $g^{\prime\prime}=1+(2+1-4+1)/2=1$. Figure 3: Removing
the blue parallel pair of edges and the light blue face does not affect the
genus: Variations $\Delta n=-2$, $\Delta f=-1$, $\Delta\sum\alpha_{k}=-1$,
hence $\Delta g=0$.
### 2.6 Reducing the diagrams
In this subsection, we show that certain modifications of a diagram associated
with a partition do not modify its genus. This discussion follows closely that
of Cori and Hetyei [4].
(i) Removing singletons.
Removing $p$ singletons changes the number of parts $\sum\alpha_{k}$ by $-p$,
$n$ by $-p$ and the number of faces $f$ is unchanged, hence according to (15)
the genus remains unchanged.
(ii) Removing centipedes.
Definition. A centipede is a planar linear subdiagram made of a $p$-vertex,
all the edges of which are attached in a consecutive way to the outer circle,
Fig. 2. In other words, it corresponds to a part of the partition made of
consecutive integers (modulo $n$), $\\{j,j+1,\cdots,j+p-1\\}$. Removing it
changes the number of parts $\sum\alpha_{k}$ by $-1$, $n$ by $-p$ and the
number of faces $f$ by $-(p-1)$, see the figure, hence the genus remains
unchanged.
(iii) Removing adjacent edges.
If two edges emanating from a vertex go to two consecutive points of the
circle (an adjacent pair), see Fig. 2b-c, removing one of them does not change
$\sum\alpha_{k}$ but changes $n$ and $f$ by $-1$, hence does not change the
genus. One may iterate this operation on the same vertex until one meets a
crossing with an edge emanating from another vertex. (If no crossing occurs,
this means that the vertex and its edges formed a centipede in the sense of
(ii) and may be erased without changing the genus.) To allow an unambiguous
reconstruction of all diagrams later in the dressing process, we adopt the
following
Convention 1: in removing such adjacent edges, one keeps the edge attached to
the marked point 1, or the first edge encountered clockwise starting from 1,
and one removes the others.
See Fig. 2 for illustration.
(iv) Removing parallel lines.
Definition. Two pairs of edges joining two vertices respectively to points $i$
and $j+1$, and to points $i+1$ and $j$ on the circle are said to be parallel.
Note that this is equivalent to saying that they form a 2-cycle of the
permutation $\sigma\circ\tau^{-1}$. And conversely, any such 2-cycle is
associated with two parallel pairs of edges.
(a) If one of these two vertices is a 2-vertex, one may remove the
corresponding pair of edges and the 2-vertex without changing the genus, since
$\sum\alpha_{\ell}$ and $f$ have decreased by 1 and $n$ by 2, see Fig. 3 for
illustration. If both pairs of edges are attached to 2-vertices, we choose by
Convention 2 to keep the pair attached to the point of the circle of smallest
label. In particular, if one of the pairs is attached to the marked point 1,
it is kept and the other removed.
(b) If both pairs of edges are attached to vertices of valency larger than 2,
we keep them both. See Fig. 13 below for an example.
After carrying out these removals of parallel lines, we are left with
primitive or semi-primitive diagrams (or partitions), following Cori–Hetyei’s
terminology: in primitive diagrams, no parallel pair is left; therefore, by
the remark above, all cycles of $\sigma\circ\tau^{-1}$ have length larger than
2. Semi-primitive diagrams still have parallel pairs of type (b).
Now Cori and Hetyei have proved some fundamental results:
Proposition. To an arbitrary diagram corresponds a unique primitive (or semi-
primitive) diagram obtained by a sequence of reductions as above, and
independent of the order of these reductions.
Our new observation is that, conversely, any diagram may be recovered by
“dressing” a primitive (or semi-primitive) diagram, as we shall see below.
Moreover,
Proposition. [4] For a given genus, there are only a finite number of
primitive diagrams.
This follows from two inequalities:
$f=\\#\mathrm{cy}(\sigma\circ\tau^{-1})\leq\frac{n}{3}$, since in a primitive
diagram all cycles of $\sigma\circ\tau^{-1}$ are of length larger or equal to
three (see above); and $\sum\alpha_{i}\leq n/2$ after eliminating the
singletons. Hence plugging these inequalities in (13), we get for a primitive
diagram
$n\leq 6(2g-1)\,.$ (19)
As for the semi-primitive diagrams, it was shown in [4] that they are all
obtained by a finite number of operations from the primitive ones, hence are
themselves in finite number.
For example
Proposition [4]. The primitive diagrams of genus 1 are the two diagrams of
Fig. 4, which have two, resp. three, 2-lines. No semi-primitive diagram occurs
in genus 1.
The proof of that statement is given in [4], sect. 8, where the two primitive
partitions or diagrams are referred to as $\beta_{1}$ and $\beta_{2}$.
## 3 From genus 0 to genus 1 …
Figure 4: The two “primitive” diagrams of genus 1. The blue figure in the
middle is the weight of the diagram in (17), namely the length of its orbit.
### 3.1 Non crossing partitions and the genus 0 generating function
Recall first that in genus 0, the formula given by Kreweras [12] on the census
of non crossing partitions may be conveniently encoded in the following
functional relation between the genus 0 GF of moments $Z^{(0)}(x)$ and that of
cumulants $W(x)$ defined above (recall that this relation is equivalent to the
functional identity $P\circ G=\mathrm{id}$, where
$G(u):=u^{-1}Z^{(0)}(u^{-1})$ and $P(z):=z^{-1}W(z)$, and
$R(z)=P(z)-\frac{1}{z}$ is the celebrated Voiculescu $R$ function [15, 14]):
$Z^{(0)}(x)=1+W(xZ^{(0)}(x))\,.$ (20)
Indeed by application of Lagrange formula, one recovers Kreweras’ result
$C^{(0)}_{n,[\alpha]}=\frac{n!}{(n+1-\sum\alpha_{k})!\
\prod_{k}\alpha_{k}!}\,,$ (21)
as proved in [1].
There is a simple diagrammatical interpretation of the relation (20) due to
Cvitanovic [5], see Fig. 5, which reads: in an arbitrary planar (i.e., non-
crossing) diagram, the marked point 1 on the exterior circle is necessarily
connected to an $n$-vertex, $n\geq 1$, between the $n$ edges of which lie
arbitrary insertions of other (linear) diagrams of $Z^{(0)}$.
Our aim is to extend this kind of relation to higher genus.
Figure 5: A graphical representation of identity (20),
$Z^{(0)}(x)=1+W(x\,Z^{(0)}(x))$. Figure 6: Top: A graphical representation of
identity (24). Bottom: a schematic representation of the dressing of the (red)
2-line attached to the marked point 1; or to another (blue) 2-line. In the
latter case, according to Convention 1, additional edges may be “emitted” from
the central vertex to go to clockwise adjacent points on the circle, and their
contribution to the generating function is $X_{2}(x)$. For the red line, these
additional edges may connect to either side of the marked point, and they
contribute $Y_{2}(x)$ to the GF.
### 3.2 Dressing the genus 1 primitive diagrams
We have seen that genus 1 diagrams may be reduced to the two primitive ones of
Fig. 4. We now write a relation à la Cvitanovic between the generating
functions $W$, $Z^{(0)}$ and $Z^{(1)}$, depicted in Fig. 6
$Z^{(1)}(x)=\sum_{n\geq 1}\kappa_{n}nx^{n}(Z^{(0)})^{n-1}Z^{(1)}+\mathrm{sum\ of\ dressed\ diagrams\ of\ Fig.\ 4}\,,$ (22)
which reads: in a generic diagram of genus 1, the marked point 1 is attached
(a) either to an edge of an $n$-vertex, between the non-crossing edges of
which are inserted one (linear) subdiagram of genus 1 and $(n-1)$ subdiagrams
of genus 0 (remember that by convention $Z^{(0)}(x)$ starts with 1, hence
these subdiagrams of genus 0 may be trivial), (b) or to an edge of a dressed
primitive diagram of genus 1.
Let us concentrate on the case (b) and make explicit what is meant by
dressing.
The dressing consists in reinserting the elements removed in steps (iv)–(i) of
sect. 2.6, in that reverse order. First, additional 2-lines are introduced,
“parallel” to the two, resp. three 2-lines of the primitive diagrams of Fig.
4. Each of these 2-lines carries by definition a 2-vertex. Then to reinsert
“adjacent” edges removed in step (iii), each of these 2-vertices may be
transformed into a $k$-vertex, whose $k-2$ additional edges may fall, by
Convention 1, on either of the two arcs of the circle adjacent to the end
points of the 2-line and “clockwise downstream”, and without crossing one
another: there are $k-1$ partitions of $k-2$ into two parts, one of them
possibly empty, hence we attach a weight $X_{2}(x):=\sum_{k\geq
1}(k-1)\kappa_{k}x^{k}$ to each of these parallel lines. Since there is an
arbitrary number $r\geq 0$ of parallel lines, they contribute $X_{2}(x)^{r}$,
and their geometric series sums up to $1/(1-X_{2}(x))$. The same applies to
the original blue 2-lines of the primitive diagram of Fig. 6, which thus gives
each a factor $X_{2}(x)$. The red 2-line, which is the one attached to the
marked point 1,has a different weight, as the $k-2$ edges emanating from its
$k$ vertex may fall on either side of the marked point or on the rightmost
part of the diagram (see Convention 2 above): this is associated with a
partition of the $k-2$ edges into three parts (two of them possibly empty), in
number $k(k-1)/2$, which gives the red 2-line a weight
$Y_{2}(x)=\sum_{k}\frac{k(k-1)}{2}\kappa_{k}x^{k}$, while its dressing by
parallel lines leads to a factor $1/(1-X_{2}(x))^{2}$, because again, parallel
lines above or below the red 2-line are possible. The last step consists in
reinserting “centipedes” and (possibly) singletons, namely in changing
everywhere $x$ into $\tilde{x}=xZ^{(0)}(x)$.
In that way, we have reinstated all features that had been erased in the
reduction to primitive diagrams, and constructed the contribution to the GF
$Z^{(1)}(x)$ of all diagrams in which the marked point 1 is attached to an
edge that belongs to a dressed primitive diagram. Indeed in the resulting
diagrams, the marked point 1 may be attached to any of the edges, as it
should: this is clear whenever that edge is an edge of the primitive diagram;
this is also true if the edge is one of the parallel lines added, or one of
the added adjacent edges: that was the role of the factors in the definition
of $X_{2}$ or $Y_{2}$ to count these cases. It is thus clear that all possible
diagrams of type (b) contributing to $Z^{(1)}$ have been obtained by the
dressing procedure, and that they are generated once and only once, hence with
the right weight. Finally the cases (a) where 1 is not attached to a dressed
primitive, but to some genus 0 subdiagram, are accounted for by the first term
in equ.(22).
### 3.3 The genus 1 generating function
Define
$\tilde{x}=xZ^{(0)}(x)\,.$ (23)
Gathering all the contributions of sect. 3.2 we have
$Z^{(1)}(x)=\sum_{n\geq 1}\kappa_{n}nx^{n}(Z^{(0)}(x))^{n-1}Z^{(1)}(x)+\frac{Y_{2}(\tilde{x})X_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))^{3}}+\frac{Y_{2}(\tilde{x})X_{2}^{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))^{4}}\,,$
(24)
i.e.,
$(1-V(x))Z^{(1)}(x)=\frac{Y_{2}(\tilde{x})X_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))^{4}}$
where
$X_{2}(x)=\sum_{k\geq 2}(k-1)\kappa_{k}x^{k}=xW^{\prime}(x)-W(x)\,,$ (25)
$Y_{2}(x)=\sum_{k\geq 2}\frac{k(k-1)}{2}\kappa_{k}x^{k}=\frac{1}{2}x^{2}W^{\prime\prime}(x)\,,$ (26)
$V(x)=\sum_{k\geq 1}k\kappa_{k}x^{k}\big{(}Z^{(0)}(x)\big{)}^{k-1}=xW^{\prime}(\tilde{x})\,.$ (27)
This is summarized in the following theorem.
Theorem 1. If $\tilde{x}=xZ^{(0)}(x)$, the generating function of genus 1
partitions is given by
$Z^{(1)}(x)=\frac{X_{2}(\tilde{x})Y_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))^{4}\,(1-V(x))}\,.$
(28)
Alternatively, if we introduce
$\tilde{X}_{2}(x):=\frac{X_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))}\qquad{\tilde{Y}}_{2}(x):=\frac{Y_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))^{2}}$
(29)
we have the simple expression
$Z^{(1)}(x)=\frac{\tilde{Y}_{2}(x)\tilde{X}_{2}(x)(1+\tilde{X}_{2}(x))}{(1-V(x))}$
(30)
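Theorem 1 lends itself to a direct formal power series computation. The
following sympy sketch builds $Z^{(0)}$ by iterating (20) and then evaluates
(28); it recovers the genus-1 parts of $m_{4}$ and $m_{5}$ listed in sect.
1.2.

```python
# A formal power series check of Theorem 1 with symbolic cumulants.
import sympy as sp

x = sp.symbols('x')
N = 6                                      # work modulo x^(N+1)
kappa = sp.symbols(f'kappa1:{N + 2}')      # kappa_1, ..., kappa_{N+1}
W = sum(kappa[i] * x**(i + 1) for i in range(N))

def trunc(expr):
    return sp.expand(expr + sp.O(x**(N + 1))).removeO()

Z0 = sp.Integer(1)
for _ in range(N + 1):                     # fixed point of eq. (20): Z = 1 + W(x Z)
    Z0 = trunc(1 + W.subs(x, trunc(x * Z0)))

xt = trunc(x * Z0)                         # x~ = x Z^(0)(x)
Wp = sp.diff(W, x)
X2 = trunc((x * Wp - W).subs(x, xt))                    # eq. (25) at x~
Y2 = trunc((x**2 * sp.diff(W, x, 2) / 2).subs(x, xt))   # eq. (26) at x~
V = trunc(x * Wp.subs(x, xt))                           # eq. (27): V = x W'(x~)
Z1 = sp.expand(sp.series(X2 * Y2 / ((1 - X2)**4 * (1 - V)), x, 0, N + 1).removeO())

print(Z1.coeff(x, 4))   # expected: kappa2**2
print(Z1.coeff(x, 5))   # expected: 5*kappa2*kappa3 + 5*kappa1*kappa2**2
```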
### 3.4 Examples and applications
#### 3.4.1 $n=2p\to[2^{p}]$
If all $\kappa_{i}$ vanish but $\kappa_{2}=1$, i.e., if we consider partitions
of $n=2p$ into $p$ doublets, which is the celebrated case considered in [16,
8], we have $W(x)=x^{2}$, hence
$Z^{(0)}(x;\kappa_{2}=1,\kappa_{i\neq 2}=0)=\frac{1-\sqrt{1-4x^{2}}}{2x^{2}}$
(31)
as the solution of equ. (20). Then following Theorem 1, we find
$Z^{(1)}(x;\kappa_{2}=1,\kappa_{i\neq 2}=0)=\frac{x^{4}}{(1-4x^{2})^{5/2}}\,,$
(32)
in accordance with known results.
#### 3.4.2 $n=3p\to[3^{p}]$
In that case, we take $\kappa_{3}=1$, $W(x)=x^{3}$, hence $Z^{(0)}$ satisfies
the third degree equation,
$(xZ)^{3}-Z+1=0$ (33)
and it is the GF of Fuss–Catalan numbers. We may write it as
$Z^{(0)}(x;\kappa_{3}=1,\kappa_{i\neq
3}=0)=\frac{2}{\sqrt{3x^{3}}}\sin\Big{(}\frac{1}{3}\mathrm{Arcsin\,}\big{(}\frac{3}{2}\sqrt{3x^{3}}\big{)}\Big{)}\,.$
(34)
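As a quick check of ours that this closed form indeed solves (33), here is a numerical spot check (numerical, since symbolic simplification of the nested trigonometric expression is not guaranteed to succeed):

```python
# Spot check (ours) that the closed form (34) solves (xZ)^3 - Z + 1 = 0.
import sympy as sp

x = sp.symbols('x', positive=True)
Z = 2/sp.sqrt(3*x**3)*sp.sin(sp.asin(sp.Rational(3, 2)*sp.sqrt(3*x**3))/3)
residual = (x*Z)**3 - Z + 1
print(sp.N(residual.subs(x, sp.Rational(1, 10)), 30))  # vanishes to working precision
```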
Then following Theorem 1, one finds, after some algebra,
$Z^{(1)}(x;\kappa_{3}=1,\kappa_{i\neq
3}=0)=\frac{1152\,x^{3}\sin^{6}\left(\frac{1}{3}\mathrm{Arcsin\,}\big{(}\frac{3\sqrt{3x^{3}}}{2}\big{)}\right)}{\left(2\cos\left(\frac{1}{3}\mathrm{Arccos\,}\big{(}1-\frac{27x^{3}}{2}\big{)}\right)-1\right)\left(9\sqrt{x^{3}}-4\sqrt{3}\sin\left(\frac{1}{3}\mathrm{Arcsin\,}\big{(}\frac{3\sqrt{3x^{3}}}{2}\big{)}\right)\right)^{4}}$
(35)
with a Taylor expansion
$6x^{6}+102x^{9}+1212x^{12}+12330x^{15}+114888x^{18}+1011486x^{21}+8558712x^{24}+70324884x^{27}+564931230x^{30}+\cdots$
in agreement with direct calculation, see [2]. Note that the closest
singularity of $Z^{(1)}$ is at the vanishing point of the discriminant of
(33), namely $x^{3}=4/27$:
$Z^{(1)}(x;\kappa_{3}=1,\kappa_{i\neq
3}=0)\sim\frac{\mathrm{const.}}{(\frac{4}{27}-x^{3})^{5/2}}\,,$
when $x^{3}\to 4/27$, with the same exponent $5/2$ as in (32).
#### 3.4.3 Total number of partitions of genus 0 and 1
Let all $\kappa_{i}$ be equal to 1, resp. all $\kappa_{i}$ equal to 1 except
$\kappa_{1}=0$. Then the previous expressions yield the GFs of the numbers of
partitions of genus 0 or 1, with, resp. without, singletons:
$\displaystyle Z^{(0)}(x;\kappa_{i}=1)$ $\displaystyle=$
$\displaystyle\frac{1-\sqrt{1-4x}}{2x}$ (36)
$\displaystyle\hat{Z}^{(0)}(x):=Z^{(0)}(x;\kappa_{1}=0,\kappa_{i\geq 2}=1)$
$\displaystyle=$
$\displaystyle\frac{1-\sqrt{1-\frac{4x}{1+x}}}{2x}=\frac{1+x-\sqrt{1-2x-3x^{2}}}{2x(1+x)}\qquad\mathrm{no\
singleton}$ $\displaystyle Z^{(1)}(x;\kappa_{i}=1)$ $\displaystyle=$
$\displaystyle\frac{x^{4}}{(1-4x)^{5/2}}$ (37)
$\displaystyle\hat{Z}^{(1)}(x):=Z^{(1)}(x;\kappa_{1}=0,\kappa_{i\geq 2}=1)$
$\displaystyle=$
$\displaystyle\frac{x^{4}}{(1-2x-3x^{2})^{5/2}}\qquad\mathrm{no\ singleton}$
(38)
on which we may verify the relations (10-11) above.
Proof. If all $\kappa_{i}=1$, $W(x)=x/(1-x)$ as a formal series, and
$Z^{(0)}(x)$, the solution of $Z^{(0)}(x)=1+W(xZ^{(0)}(x))$ (equ. (20)) as a
formal series, is given by (36) (the GF of the Catalan numbers). Likewise, if
$\kappa_{1}=0$ and the others equal 1, $W(x)=x^{2}/(1-x)$, etc. For genus 1, we
then make use of Theorem 1 to derive (37-38). ∎
#### 3.4.4 Number of partitions with a fixed number of parts, in genus 0 and
1
Let all $\kappa$ be equal to $y$, then $W(x)=xy/(1-x)$, and
$Z^{(g)}(x,y)=\sum_{n,k}p^{(g)}(n,k)x^{n}y^{k}$ is the GF of the numbers
$p^{(g)}(n,k)$ of genus $g$ partitions of $n$ with $k$ parts. $Z^{(0)}$ is the
solution of equ. (20)
$Z^{(0)}(x,y)=\frac{1+x-xy-\sqrt{(1+x-xy)^{2}-4x}}{2x}\,,$ (39)
which is the GF of the Narayana numbers, and then we compute by (28)
$Z^{(1)}(x,y)=\frac{x^{4}y^{2}}{((1+x-xy)^{2}-4x)^{5/2}}$ (40)
which is the expression given by Yip [18], and Cori and Hetyei [3].
If we exclude singletons, $W(x;\kappa_{1}=0)=x^{2}y/(1-x)$, and the GFs now
read
$\displaystyle\hat{Z}^{(0)}(x,y):=Z^{(0)}(x,y;\kappa_{1}=0)$ $\displaystyle=$
$\displaystyle\frac{{1+x}-\sqrt{(1-x)^{2}-4x^{2}y}}{2x(1+xy)}$ (41)
$\displaystyle\hat{Z}^{(1)}(x,y):=Z^{(1)}(x,y;\kappa_{1}=0)$ $\displaystyle=$
$\displaystyle\frac{x^{4}y^{2}}{((1-x)^{2}-4x^{2}y)^{5/2}}\,.$
## 4 …to genus 2
### 4.1 Primitive and semi-primitive diagrams of genus 2
The list of primitive and semi-primitive diagrams of genus 2 is known, thanks
to the work of Cori and Hetyei [4]. This has been confirmed independently, in
the present work, by generating on the computer all partitions of genus 2 of a
given type, and then eliminating all those that involve adjacent or parallel
edges. By inequality (19) these primitive diagrams have at most 18 points
(i.e., $n\leq 18$), and either up to 9 2-vertices, or one or two 3-vertices,
or one 4-vertex. In Table 1, their numbers are listed for increasing total
number of points $n$. In Table 1 of [4] there is an unfortunate omission
of the 175 primitive diagrams with one 3-vertex (a 3-cycle in their
terminology), while those diagrams are properly taken into account in the
ensuing formulae. These missing diagrams are listed in Fig. 10.
| $n$ | 2-vertices | one 3-vertex | two 3-vertices | two 3-vertices, semi-prim. | one 4-vertex |
|---|---|---|---|---|---|
| 6 | 0 | 0 | 1 | 0 | 0 |
| 7 | 0 | 14 | 0 | 0 | 0 |
| 8 | 21 | 0 | 20 | 0 | 6 |
| 9 | 0 | 141 | 0 | 0 | 0 |
| 10 | 168 | 0 | 65 | 15 | 15 |
| 11 | 0 | 407 | 0 | 0 | 0 |
| 12 | 483 | 0 | 52 | 36 | 9 |
| 13 | 0 | 455 | 0 | 0 | 0 |
| 14 | 651 | 0 | 0 | 21 | 0 |
| 15 | 0 | 175 | 0 | 0 | 0 |
| 16 | 420 | 0 | 0 | 0 | 0 |
| 17 | 0 | 0 | 0 | 0 | 0 |
| 18 | 105 | 0 | 0 | 0 | 0 |

Table 1. Number of (semi-)primitive diagrams of genus 2.
Based on this list of primitive diagrams, we may now write an equation similar
to (24),
$Z^{(2)}(x)=\sum_{n}n\kappa_{n}x^{n}(Z^{(0)}(x))^{n-1}Z^{(2)}(x)+\mathrm{dressing\ of\ (semi-)primitive\ diagrams\ of\ genus\ 2}$ (42)
as illustrated in Fig. 7.
Remark. It might seem natural to also have in the r.h.s. of (42) a term with
two insertions of genus 1 subdiagrams. In fact such diagrams will be included
in the set of primitives and their dressings. An example is given by the first
diagram of Fig. 8.
Figure 7: A graphical representation of relation (42)
### 4.2 Dressing of primitive diagrams of genus 2
The dressing of primitive diagrams with only 2-lines (Column 2 of Table 1)
involves the same functions $\tilde{X}_{2}$ and $\tilde{Y}_{2}$ defined above
in sect. 3.3: $\tilde{Y}_{2}$ is assigned to the line attached to point 1,
while the other lines carry the weight $\tilde{X}_{2}$. Hence their
contribution to the r.h.s. of equ. (42) reads
$z_{2}=\tilde{Y}_{2}(x)\Big{(}21\tilde{X}_{2}^{3}(x)+168\tilde{X}_{2}^{4}(x)+483\tilde{X}_{2}^{5}(x)+651\tilde{X}_{2}^{6}(x)+420\tilde{X}_{2}^{7}(x)+105\tilde{X}_{2}^{8}(x)\Big{)}$
with the notations of (29).
For the dressing of primitive diagrams with 3- or 4-vertices, we must
introduce new functions that generalize $X_{2}$ and $Y_{2}$ defined in (25-26)
$\displaystyle X_{\ell}(x)=\sum_{k\geq\ell}{k-1\choose\ell-1}\kappa_{k}x^{k}\,,\qquad Y_{\ell}(x)=\sum_{k\geq\ell}{k\choose\ell}\kappa_{k}x^{k}\,,$ (43)
and, for $\ell>2$,
$\tilde{X}_{\ell}(x):=\frac{X_{\ell}(\tilde{x})}{(1-X_{2}(\tilde{x}))^{\ell}}\,;\qquad\tilde{Y}_{\ell}(x):=\frac{Y_{\ell}(\tilde{x})}{(1-X_{2}(\tilde{x}))^{\ell}}\,,$
with, as before, $\tilde{x}=xZ^{(0)}(x)$. (Beware that the power of
$(1-X_{2}(\tilde{x}))$ in the denominator of $\tilde{X}_{\ell}$ does not apply
to $\ell=2$; compare with (29).) These functions may also be expressed in
terms of derivatives of $W$: for example,
$Y_{3}(x)=\frac{1}{6}x^{3}W^{\prime\prime\prime}(x)$, etc.
Consider first a primitive diagram with a 3-vertex, like those depicted in
Fig. 9. Remember that all distinct rotated diagrams must be considered and
hence the marked point 1 may be attached to the 3-vertex or to any one of the
2-lines.
(i) In the case where the marked point 1 is attached to one of the 2-lines,
its 2-vertex may be changed into a $k$-vertex, $k>2$, and as in sect. 3.2 this
yields a weight $Y_{2}(x)/(1-X_{2}(x))^{2}$, while the lines emanating from
the 3-vertex or parallel to it contribute $X_{3}(x)/(1-X_{2}(x))^{3}$. And
again, a final change of $x$ into $\tilde{x}$ completes the dressing.
(ii) In the other case, where 1 is attached to the 3-vertex, this 3-vertex may
be promoted to a $k$-vertex, $k>3$, with $k-3$ lines ending on four different
arcs of the circle: there are ${k\choose 3}$ ways of distributing them, whence
a weight $Y_{3}(x)$. Then adding parallel lines may be done in 3 ways, whence
a weight $1/(1-X_{2}(x))^{3}$. The 2-lines, on the other hand, carry a weight
$X_{2}(x)/(1-X_{2}(x))$, just like in sect. 3.2. Finally, again as in sect.
3.2, the variable $x$ has to be replaced by the dressed one
$\tilde{x}=xZ^{(0)}$ to take into account all possible insertions of genus 0
subdiagrams.
(iii) There is, however, a case not yet accounted for by the previous
dressing. When the marked point 1 is attached to a 2-line parallel to a pair
of edges of the 3-vertex, that line has been erased in the reduction process
and must be restored. A weight $2Y_{2}(x)/(1-X_{2}(x))$ is attached to that
new line, where the factor 2 comes from the two ends of the 2-line, and a
single factor $1/(1-X_{2}(x))$ appears, as compared with sect. 3.2, because the
counting of parallel lines between the new line and the 3-vertex has already
been taken into account in the term $\tilde{X}_{3}(x)$.
Now each of the previous contributions must be weighted by its number of
occurrences when the diagram is rotated. For example, each of the two diagrams
of Fig. 9 contributes $4\tilde{Y}_{2}\tilde{X}_{2}\tilde{X}_{3}$ (since the
marked point 1 may be at any of the four end-points of the 2-lines), plus
$3\tilde{Y}_{3}\tilde{X}_{2}^{2}$ (3 ways of attaching point 1 to the 3-vertex),
plus $3\tilde{X}_{3}\tilde{X}_{2}^{2}\big(2Y_{2}(\tilde{x})/(1-X_{2}(\tilde{x}))\big)$
(when 1 lies on a line parallel to two edges of the 3-vertex). More generally,
for a primitive diagram of an orbit of symmetry order ${\mathfrak{s}}$, with
one 3-vertex and $p$ 2-lines, $n=3+2p$, the weight is
$\frac{1}{{\mathfrak{s}}}\left(2p\tilde{Y}_{2}\tilde{X}_{2}^{p-1}\tilde{X}_{3}+3\tilde{Y}_{3}\tilde{X}_{2}^{p}+3\tilde{X}_{3}\tilde{X}_{2}^{p}\big(2Y_{2}(\tilde{x})/(1-X_{2}(\tilde{x}))\big)\right)\,,$
where we write $\tilde{X}_{\ell}$ and $\tilde{Y}_{\ell}$ in short for
$\tilde{X}_{\ell}(x)$ and $\tilde{Y}_{\ell}(x)$. Thus the orbits of partitions
of $[\\![n]\\!]$ with a primitive diagram with a single 3-vertex contribute
$\sum_{\mathrm{orbits}}\frac{1}{{\mathfrak{s}}}\left((n-3)\tilde{Y}_{2}\tilde{X}_{2}^{\frac{n-5}{2}}\tilde{X}_{3}+3\tilde{X}_{2}^{\frac{n-3}{2}}\Big{(}\tilde{Y}_{3}+\tilde{X}_{3}\frac{2Y_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))}\Big{)}\right)\,.$
But as we saw in (17), for a given $n$,
$\sum_{\mathrm{orbits}}\frac{1}{{\mathfrak{s}}}=\frac{N}{n}$, where $N$ is the
number listed in Table 1, column 3, row $n$. In total the diagrams with a
single 3-vertex contribute to the r.h.s. of (42) the amount $z_{3}$ listed
below in (46)-(50).
The dressing of primitive diagrams with two 3-vertices or one 4-vertex
(columns 4 and 6 of Table 1) is done along similar lines. Thus for an orbit of
primitive diagram with two 3-vertices and $p$ 2-lines, with now $n=2p+6$, we
get
$\frac{1}{{\mathfrak{s}}}\left(2p\tilde{Y}_{2}\tilde{X}_{3}^{2}\tilde{X}_{2}^{p-1}+6\tilde{X}_{2}^{p}\tilde{X}_{3}\Big{(}\tilde{Y}_{3}+\tilde{X}_{3}\frac{2Y_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))}\Big{)}\right)$
(44)
and the total contribution $z_{33}$ of such diagrams is given in (46)-(50).
For a primitive diagram with one 4-vertex and $p$ 2-lines ($n=2p+4$), we
likewise get
$\frac{1}{{\mathfrak{s}}}\left(2p\tilde{Y}_{2}\tilde{X}_{4}\tilde{X}_{2}^{p-1}+4\tilde{X}_{2}^{p}\Big{(}\tilde{Y}_{4}+\tilde{X}_{4}\frac{2Y_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))}\Big{)}\right)$
and the total contribution $z_{4}$ is given in (50).
Finally, the dressing of semi-primitive diagrams (see a sample in Fig. 13)
requires special care to avoid double counting. Consider such a semi-primitive
diagram, thus with two 3-vertices and $p$ 2-lines, $n=2p+6$. First, when the
point 1 is attached to one of the 2-lines or one of the two 3-vertices, we
have a contribution like the first two terms in (44), but multiplied by
$(1-X_{2}(\tilde{x}))$ so as not to count twice the set of lines between the two
parallel lines. Moreover, when the point 1 is attached to an added line
parallel to one of the branches of the two 3-vertices, there are 5 locations
for that line, whence a contribution
$\frac{5}{{\mathfrak{s}}}\tilde{X}_{3}^{2}\tilde{X}_{2}^{p}\times
2Y_{2}(\tilde{x})$, with no further factor $1/(1-X_{2}(\tilde{x}))$. In total,
a semi-primitive diagram contributes
$\frac{1}{{\mathfrak{s}}}\left((1-X_{2}(\tilde{x}))\Big{(}2p\tilde{Y}_{2}\tilde{X}_{3}^{2}\tilde{X}_{2}^{p-1}+6\tilde{Y}_{3}\tilde{X}_{3}\tilde{X}_{2}^{p}\Big{)}+5\tilde{X}_{2}^{p}\tilde{X}_{3}^{2}(2Y_{2}(\tilde{x}))\right)$
and the total from semi-primitive diagrams appears as $z_{33s}$ in (46)-(50).
Remark. As noticed by Cori and Hetyei [4], the semi-primitive diagrams may be
obtained from the primitive ones by “splitting” a vertex of valency larger
than 3. For example the three diagrams of Fig. 13 may be obtained from those
of Fig. 14 by splitting their 4-vertex as in Fig. 15. One might thus consider
only primitive diagrams and include the splitting operation in the dressing
procedure. The benefit is that primitive diagrams are easy to characterize:
they are such that the permutation $\tau$ has no 1-cycle and
$\sigma\circ\tau^{-1}$ no 2-cycle.
### 4.3 General case of genus 2
Collecting all the contributions of the previous subsection, we can now make
equation (42) more explicit in the form of
Theorem 2. The generating function of genus 2 partitions is given by
$\qquad\qquad Z^{(2)}(x)(1-V(x))=z_{2}+z_{3}+z_{33}+z_{33s}+z_{4}$ (45)
where $V(x)$ has been given in (27) and $z_{2},\cdots,z_{4}$ are the
contributions of dressing the (semi-)primitive diagrams listed in Table 1.
$\displaystyle z_{2}$ $\displaystyle=$
$\displaystyle\tilde{Y}_{2}(21\tilde{X}_{2}^{3}+168\tilde{X}_{2}^{4}+483\tilde{X}_{2}^{5}+651\tilde{X}_{2}^{6}+420\tilde{X}_{2}^{7}+105\tilde{X}_{2}^{8})\,;$
(46) $\displaystyle z_{3}$ $\displaystyle=$
$\displaystyle\tilde{X}_{3}\tilde{Y}_{2}(8\tilde{X}_{2}+94\tilde{X}_{2}^{2}+296\tilde{X}_{2}^{3}+350\tilde{X}_{2}^{4}+140\tilde{X}_{2}^{5})$
$\displaystyle\qquad+\tilde{X}_{2}(6\tilde{X}_{2}+47\tilde{X}_{2}^{2}+111\tilde{X}_{2}^{3}+105\tilde{X}_{2}^{4}+35\tilde{X}_{2}^{5})\Big{(}\tilde{Y}_{3}+\tilde{X}_{3}\frac{2Y_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))}\Big{)}\,;$
$\displaystyle z_{33}$ $\displaystyle=$
$\displaystyle\tilde{X}_{3}^{2}\tilde{Y}_{2}(5+26\tilde{X}_{2}+26\tilde{X}_{2}^{2})$
$\displaystyle\qquad+\tilde{X}_{3}(1+15\tilde{X}_{2}+39\tilde{X}_{2}^{2}+26\tilde{X}_{2}^{3})\Big{(}\tilde{Y}_{3}+\tilde{X}_{3}\frac{2Y_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))}\Big{)}\,;$
$\displaystyle z_{33s}$ $\displaystyle=$
$\displaystyle\tilde{Y}_{2}\tilde{X}_{3}^{2}\tilde{X}_{2}(6+18\tilde{X}_{2}+12\tilde{X}_{2}^{2})(1-X_{2}(\tilde{x}))$
$\displaystyle\qquad+\tilde{Y}_{3}\tilde{X}_{3}\tilde{X}_{2}^{2}(9+18\tilde{X}_{2}+9\tilde{X}_{2}^{2})(1-X_{2}(\tilde{x}))+\tilde{X}_{3}^{2}\tilde{X}_{2}^{2}(15+30\tilde{X}_{2}+15\tilde{X}_{2}^{2})Y_{2}(\tilde{x})\,;$
$\displaystyle z_{4}$ $\displaystyle=$
$\displaystyle\tilde{Y}_{2}\tilde{X}_{4}(3\tilde{X}_{2}+9\tilde{X}_{2}^{2}+6\tilde{X}_{2}^{3})+(3\tilde{X}_{2}^{2}+6\tilde{X}_{2}^{3}+3\tilde{X}_{2}^{4})\Big{(}\tilde{Y}_{4}+\tilde{X}_{4}\frac{2Y_{2}(\tilde{x})}{(1-X_{2}(\tilde{x}))}\Big{)}\,,$
(50)
and we recall that $\tilde{X}_{\ell}$ and $\tilde{Y}_{\ell}$ stand for
$\tilde{X}_{\ell}(x)$ and $\tilde{Y}_{\ell}(x)$ defined in (43).
The resulting expressions for the numbers $C_{n,[\alpha]}^{(2)}$ have been
tested up to $n=15$ and all $[\alpha]$ against direct enumeration using
formulae (16) or (17), and for some higher values of $n$ for a few particular
cases.
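Such checks are easy to reproduce: the sketch below is ours (independent of the Mathematica computations mentioned in sect. 4.4) and implements Theorem 2 on truncated power series, in the same conventions as the genus-1 sketch of sect. 3.3. For (43) we use the rewritings $Y_{\ell}(x)=\frac{x^{\ell}}{\ell!}W^{(\ell)}(x)$ and $X_{\ell}(x)=\frac{x^{\ell}}{(\ell-1)!}\big(W(x)/x\big)^{(\ell-1)}$, which follow directly from the binomial coefficients in (43).

```python
# Sketch (ours) of Theorem 2, i.e. (45)-(50), on truncated power series.
import sympy as sp

x = sp.symbols('x')

def trunc(f, n):
    return sp.series(f, x, 0, n).removeO()

def genus0(W, n):
    Z = sp.Integer(1)
    for _ in range(n):
        Z = trunc(1 + W.subs(x, sp.expand(x*Z)), n)
    return Z

def genus2(W, n=12):
    Z0 = genus0(W, n)
    xt = trunc(x*Z0, n)
    # X_l(x~), Y_l(x~) of (43), via derivatives of W:
    Xl = lambda l: trunc((x**l/sp.factorial(l-1)*sp.diff(W/x, x, l-1)).subs(x, xt), n)
    Yl = lambda l: trunc((x**l/sp.factorial(l)*sp.diff(W, x, l)).subs(x, xt), n)
    X2, Y2 = Xl(2), Yl(2)
    d = 1 - X2                                   # 1 - X2(x~)
    X2t, Y2t = X2/d, Y2/d**2                     # tildes of (29)
    X3t, Y3t = Xl(3)/d**3, Yl(3)/d**3            # tildes of (43), l > 2
    X4t, Y4t = Xl(4)/d**4, Yl(4)/d**4
    B = 2*Y2/d                                   # recurring 2 Y2(x~)/(1 - X2(x~))
    z2 = Y2t*(21*X2t**3 + 168*X2t**4 + 483*X2t**5
              + 651*X2t**6 + 420*X2t**7 + 105*X2t**8)
    z3 = (X3t*Y2t*(8*X2t + 94*X2t**2 + 296*X2t**3 + 350*X2t**4 + 140*X2t**5)
          + X2t*(6*X2t + 47*X2t**2 + 111*X2t**3 + 105*X2t**4 + 35*X2t**5)
            *(Y3t + X3t*B))
    z33 = (X3t**2*Y2t*(5 + 26*X2t + 26*X2t**2)
           + X3t*(1 + 15*X2t + 39*X2t**2 + 26*X2t**3)*(Y3t + X3t*B))
    z33s = (Y2t*X3t**2*X2t*(6 + 18*X2t + 12*X2t**2)*d
            + Y3t*X3t*X2t**2*(9 + 18*X2t + 9*X2t**2)*d
            + X3t**2*X2t**2*(15 + 30*X2t + 15*X2t**2)*Y2)
    z4 = (Y2t*X4t*(3*X2t + 9*X2t**2 + 6*X2t**3)
          + (3*X2t**2 + 6*X2t**3 + 3*X2t**4)*(Y4t + X4t*B))
    V = trunc(x*sp.diff(W, x).subs(x, xt), n)    # equ. (27)
    zs = sum(trunc(z, n) for z in (z2, z3, z33, z33s, z4))
    return trunc(zs/(1 - V), n)

print(genus2(x**2))        # 21*x**8 + 483*x**10 + ...  cf. equ. (51)
print(genus2(x/(1 - x)))   # x**6 + 28*x**7 + ...       cf. sect. 4.4.3
```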
Figure 8: The primitive diagrams of order 8, type $[2^{4}]$ and genus 2, with their weight in blue.
Figure 9: The primitive diagrams of order 7, type $[2^{2}\,3]$ and genus 2, with the sum of weights equal to 14.
Figure 10: The primitive diagrams of order 15, type $[2^{6}\,3]$ and genus 2, with the sum of weights equal to $175$.
Figure 11: The primitive diagram of order 6, type $[3^{2}]$ and genus 2, of weight 1.
Figure 12: The primitive diagrams of order 8, type $[2\,3^{2}]$ and genus 2, of total weight 20.
Figure 13: The 3 semi-primitive diagrams of order 10, type $[2^{2}\,3^{2}]$, and genus 2, with the sum of weights equal to 15.
Figure 14: The 2 primitive diagrams of order 8, type $[2^{2}\,4]$, and genus 2, with the sum of weights equal to 6.
Figure 15: The splitting procedure, by which here a 4-vertex is split into two 3-vertices.
### 4.4 Particular cases
#### 4.4.1 Genus 2 partitions of $n=2p$ into $p$ doublets
In the simplest case where only $\kappa_{2}\neq 0$ (and set equal to 1 with no
loss of generality), the primitive diagrams are of order $n\leq 18$; a sample
is shown in Fig. 8. All genus 2 primitive and semi-primitive
diagrams may be found at
https://www.lpthe.jussieu.fr/~zuber/Z_UnpubPart.html. They involve only
2-lines and their dressing is given by the expression (46) above. Thus
$Z^{(2)}(x;\kappa_{2}=1,\kappa_{i\neq 2}=0)=\frac{\tilde{Y}_{2}(x)}{1-2x^{2}Z^{(0)}(x)}\Big{(}21\tilde{X}_{2}^{3}(x)+168\tilde{X}_{2}^{4}(x)+483\tilde{X}_{2}^{5}(x)+651\tilde{X}_{2}^{6}(x)+420\tilde{X}_{2}^{7}(x)+105\tilde{X}_{2}^{8}(x)\Big{)}$
with the notations of (29). After some substantial algebra (carried out by
Mathematica), one finds
$Z^{(2)}(x;\kappa_{2}=1,\kappa_{i\neq
2}=0)=\frac{21x^{8}(1+x^{2})}{(1-4x^{2})^{11/2}}$ (51)
in agreement with the results of [8].
#### 4.4.2 Genus 2 partitions of $n=3p$ into $p$ triplets
We now assume as in sect. 3.4.2 that only $\kappa_{3}\neq 0$ (and equals 1
with no loss of generality). Let
$s:=\sin\left(\frac{1}{3}\sin^{-1}\left(\frac{3}{2}\sqrt{3}x^{3/2}\right)\right)$.
Then, following (45), $Z^{(2)}$ takes the fairly cumbersome form
$Z^{(2)}(x;\kappa_{3}=1;\kappa_{i\neq 3}=0)=\frac{192s^{6}x^{6}\left(8s^{3}\left(128\left(11264s^{9}+8676\sqrt{3}s^{6}x^{3/2}+3105s^{3}x^{3}\right)+9315\sqrt{3}x^{9/2}\right)+729x^{6}\right)}{\left(2\cos\left(\frac{1}{3}\mathrm{Arccos\,}\big{(}1-\frac{27x^{3}}{2}\big{)}\right)-1\right)\left(9\sqrt{x^{3}}-4\sqrt{3}\sin\left(\frac{1}{3}\mathrm{Arcsin\,}\big{(}\frac{3\sqrt{3x^{3}}}{2}\big{)}\right)\right)^{10}}$
(compare with the denominator of $Z^{(1)}$ in (35)). The first terms of the
series expansion read
$x^{6}+144x^{9}+6046x^{12}+149674x^{15}+2771028x^{18}+42679084x^{21}+\cdots$
One finds again a singular behaviour of the form
$Z^{(2)}(x;\kappa_{3}=1;\kappa_{i\neq
3}=0)\sim\frac{\mathrm{const.}}{(\frac{4}{27}-x^{3})^{11/2}}\,.$
#### 4.4.3 Total number of genus 2 partitions
Taking all $\kappa$’s equal to 1 (and possibly $\kappa_{1}=0$), as in sect.
3.4.3, so that $W(x)=x/(1-x)$ or $\widehat{W}(x)=x^{2}/(1-x)$, we compute by (7)
the GF of the total number of genus 2 partitions (with or without singletons),
and we recover the result of Cori and Hetyei [4]
$Z^{(2)}(x;\kappa_{i}=1)=\frac{x^{6}(1+6x-19x^{2}+21x^{3})}{(1-4x)^{11/2}}\,,$
and also
$Z^{(2)}(x;\kappa_{1}=0;\kappa_{i>1}=1)=\frac{x^{6}(1+10x+5x^{2}+5x^{3}+9x^{4})}{(1-2x-3x^{2})^{11/2}}$
in accordance with (10).
#### 4.4.4 Genus 2 partitions into $r$ parts
The two-variable GF of the number of genus 2 partitions into a given number of
parts is obtained as in sect. 3.4.4 by setting all $\kappa_{i}=y$. Theorem 2
leads to
$\displaystyle Z^{(2)}(x,y)$ $\displaystyle=$
$\displaystyle\frac{x^{6}y^{2}\,p(x,y)}{((1+x-xy)^{2}-4x)^{11/2}}$ (52)
$\displaystyle p(x,y)$ $\displaystyle=$ $\displaystyle
1-x(4-10y)+x^{2}(6-10y-15y^{2})-x^{3}(4+10y-39y^{2}+4y^{3})$
$\displaystyle\qquad+x^{4}(1+10y-15y^{2}-4y^{3}+8y^{4})$
as first derived by Cori–Hetyei [4].
The counting of genus 2 partitions into $r$ parts is then obtained by
extracting the coefficient of $y^{r}$ in (52). For example, for $r=2$
(partitions into two parts, with or without singletons)
$\displaystyle Z^{(2)}(x;r=2)$ $\displaystyle=$
$\displaystyle\frac{x^{6}}{(1-x)^{7}}$ $\displaystyle=$
$\displaystyle\sum_{n\geq 6}{n\choose 6}x^{n}$ $\displaystyle=$
$\displaystyle\sum_{n\geq 6}x^{n}\sum_{p=2}^{n-2}\frac{n}{6}{p-1\choose
2}{n-p-1\choose 2}$
in agreement with a general result for $r=2$ and arbitrary genus [2]. For
$r=3$ (partitions into three parts without singleton)
$Z^{(2)}(x;r=3)=\frac{14x^{7}(1+2x)}{(1-x)^{9}}=14\sum_{n\geq 7}{n\choose 7}\frac{3n-13}{8}x^{n}\,.$
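Assuming the `genus2` helper from the sketch in sect. 4.3 (together with its symbol `x`), these coefficient extractions can be spot-checked directly; the snippet below is our illustration, with $y$ carried along as a parameter.

```python
# Spot check (ours): genus-2 partitions refined by number of parts;
# genus2 and the symbol x come from the sketch in sect. 4.3.
import sympy as sp

y = sp.symbols('y')
Zxy = sp.expand(genus2(x*y/(1 - x), n=10))
print(Zxy.coeff(y, 2))   # cf. x^6/(1-x)^7 = sum_{n>=6} C(n,6) x^n
print(Zxy.coeff(y, 3))   # cf. the r = 3 formula above
```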
## 5 Conclusion and perspectives
In principle the method could be extended to higher genus, but at the price of
an increasing number of (semi-)primitive diagrams, whose set remains to be
listed, with an Ansatz of the form
$Z^{(g)}(x)=\frac{\sum\mathrm{dressing\ of\ (semi-)primitive\ diagrams\ of\
genus}\ g}{1-\sum_{n}n\kappa_{n}x^{n}(Z^{(0)}(x))^{n-1}}\,.$ (53)
For instance, in genus 3, primitive diagrams may occur up to $n=30$ and they
start at order $n=12$. An Ansatz for partitions into doublets (i.e., of type
$[2^{p}]$), for $g=3$ is thus
$Z^{(3)}(x;\kappa_{2}=1,\kappa_{i\neq
2}=0)=\frac{\tilde{Y}_{2}(x)\tilde{X}_{2}^{5}(x)}{(1-2x^{2}Z^{(0)}(x))}\sum_{j=0}^{9}a_{j}\tilde{X}_{2}^{j}(x)$
in which the numerical coefficients $a_{j}$ count the primitives of type
$[2^{j+6}]$ and may be determined by matching against the known result of [8]
$Z^{(3)}(x;\kappa_{2}=1,\kappa_{i\neq
2}=0)=\frac{11x^{12}(135+558x^{2}+158x^{4})}{(1-4x^{2})^{17/2}}\,,$ (54)
whence
$\displaystyle Z^{(3)}(x;\kappa_{2}=1,\kappa_{i\neq 2}=0)$ $\displaystyle=$
$\displaystyle\frac{11\tilde{Y}_{2}(x)\tilde{X}_{2}^{5}(x)}{(1-2x^{2}Z^{(0)}(x))}\,\Big{(}135+2313\tilde{X}_{2}(x)+15728\tilde{X}_{2}^{2}(x)+57770\tilde{X}_{2}^{3}(x)$
$\displaystyle\qquad+128985\tilde{X}_{2}^{4}(x)+183955\tilde{X}_{2}^{5}(x)+169078\tilde{X}_{2}^{6}(x)+97188\tilde{X}_{2}^{7}(x)+31850\tilde{X}_{2}^{8}(x)+4550\tilde{X}_{2}^{9}(x)\Big{)}\,.$
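Assuming again the `trunc`/`genus0` helpers from the sketch in sect. 4.3, this determination of the $a_{j}$ can be spot-checked by series expansion; the code below is our illustration.

```python
# Spot check (ours) of the genus-3 doublet formula against equ. (54);
# trunc, genus0 and the symbol x come from the sketch in sect. 4.3.
n = 18
Z0 = genus0(x**2, n)
xt = trunc(x*Z0, n)
X2 = trunc(xt**2, n)                  # for W = x^2 one has X2 = Y2 = x~^2
d = 1 - X2
X2t, Y2t = X2/d, X2/d**2              # tilde X2, tilde Y2 of equ. (29)
a = [135, 2313, 15728, 57770, 128985, 183955, 169078, 97188, 31850, 4550]
Z3 = 11*Y2t*X2t**5/(1 - 2*x**2*Z0)*sum(aj*X2t**j for j, aj in enumerate(a))
print(trunc(Z3, n))                   # 1485*x**12 + ... ; cf. the series of (54)
```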
We end this paper with a few remarks on some intriguing issues.
There is some evidence of a universal singular behaviour of all generating
functions,
$Z^{(g)}(x)\sim(x_{0}-x)^{\frac{1}{2}-3g}$ (55)
as can be seen on the partitions into doublets (32,51,54), and for $g=1,2$ on
other cases. This would imply a large $n$ behaviour of coefficients
$C_{n,[\alpha]}^{(g)}$ (for appropriately rescaled patterns $\alpha$) of the
form
$C_{n,[\alpha]}^{(g)}\sim
x_{0}^{-n-3g+\frac{1}{2}}\,n^{3g-\frac{1}{2}}\qquad\mathrm{as}\
n,[\alpha]\mathrm{\ grow\ large}\,.$
The “critical exponent” $\frac{1}{2}-3g$ is familiar to physicists in the
context of boundary loop models and Wilson loops [11]. Such a connection is
natural in the case of partitions into doublets, since it is known that in
that case, the counting amounts to computing the expectation value of ${\rm
tr}\,M^{n}$ in a Gaussian matrix integral, hence for large $n$, of a large
loop. That the same singular or asymptotic behaviour takes place in (all?)
other cases seems to indicate that an effective Gaussian theory takes place in
that limit. (I’m grateful to Ivan Kostov for discussions on that point.)
A natural question is whether the Topological Recurrence of Chekhov, Eynard
and Orantin [7] is relevant for the counting of partitions and is related to
or independent of the approach of this paper.
As mentioned in the introduction, the formulae derived in this paper yield an
interpolation between expansions on ordinary and on free cumulants. What is
the relevance of this interpolation? How does it compare with other existing
interpolations?
All these questions are left for future investigation.
Acknowledgements. It is a pleasure to thank Philippe Di Francesco, Elba
Garcia-Failde and Ivan Kostov for discussions and comments, and Colin
McSwiggen for suggesting amendments of this paper. I’m particularly grateful
to Robert Coquereaux for a careful reading of a first draft of the manuscript
and for providing me with very efficient Mathematica codes.
## References
* [1] É. Brézin, C. Itzykson, G. Parisi, J.-B. Zuber, Planar diagrams, Comm. Math. Phys. 59 (1978) 35–51
* [2] R. Coquereaux and J.-B. Zuber, Counting partitions by genus. II. A compendium of results, to appear
* [3] R. Cori and G. Hetyei, Counting genus one partitions and permutations, Sém. Lothar. Combin. 70 (2013) [B70e], http://arxiv.org/abs/1306.4628
* [4] R. Cori and G. Hetyei, Counting partitions of a fixed genus, The Electronic Journal of Combinatorics 25 (4) (2018) #P 4.26, http://arxiv.org/abs/1710.09992
* [5] P. Cvitanovic, Planar perturbation expansion, Phys. Lett. 99B (1981) 49–52
* [6] J.-M. Drouffe, as cited in D. Bessis, C. Itzykson and J.-B. Zuber, Quantum field theory techniques in graphical enumeration, Adv. Appl. Math. 1 (1980) 109–157
* [7] B. Eynard, Counting Surfaces, Progress in Mathematical Physics 70, Birkhäuser 2016
* [8] J. Harer and D. Zagier, The Euler characteristic of the moduli space of curves, Invent. Math. 85 (1986) 457–485
* [9] G. ’t Hooft, A Planar Diagram Theory for Strong Interactions, Nucl. Phys. B 72 (1974) 461–473
* [10] A. Jacques, Sur le genre d’une paire de substitutions, C. R. Acad. Sci. Paris 267 (1968), 625–627.
* [11] I. Kostov, Boundary Loop Models and 2D Quantum Gravity, in Exact Methods in Low-Dimensional Statistical Physics and Quantum Computing, Les Houches Summer School 2008, J. Jacobsen, S. Ouvry, V. Pasquier, D. Serban and L. Cugliandolo eds., Oxford U. Press
* [12] G. Kreweras, Sur les partitions non croisées d’un cycle, Discrete Math., 1 (1972) 333–350
* [13] S.K. Lando and A.K. Zvonkin, Graphs on Surfaces and Applications, with an appendix by D. Zagier, Encycl. of Math. Sci. 141 (2004)
* [14] R. Speicher, Multiplicative functions on the lattice of non-crossing partitions and free convolution, Math. Annalen, 298 (1994) 611–628
* [15] D.V. Voiculescu, Addition of non-commuting random variables, J. Operator Theory 18 (1987) 223–235
* [16] T. R. S. Walsh and A. B. Lehman, Counting rooted maps by genus I, J. Combinatorial Theory B 13 (1972), 192–218
* [17] T. R. S. Walsh and A. B. Lehman, Counting rooted maps by genus II, J. Combinatorial Theory B 13 (1972), 122–141
* [18] M. Yip, Genus one partitions, PhD thesis, University of Waterloo, 2006
# Gelfand-Fuchs cohomology for affine superspaces $\mathbb{A}^{m,n}$
Slava Pimenov
###### Contents
1. 0 Introduction
2. 1 Notations and recollections
3. 2 Cohomology of $\mathfrak{gl}(m,n)$
4. 3 Cohomology of $\mathcal{V}_{m,n}$
## 0 Introduction
Let $\mathbb{A}^{m,n}$ be the affine superspace of even dimension $m$ and
odd dimension $n$ over an algebraically closed field $\mathbf{k}$ of
characteristic $0$. Consider the Lie superalgebra $\mathcal{V}_{m,n}$ of vector
fields in the formal neighborhood of $0\in\mathbb{A}^{m,n}$. This is a
topological Lie superalgebra with $\mathbf{k}$-linear topology induced by the
defining ideal of point $0\in\mathbb{A}^{m,n}$. We are interested in the
continuous cohomology groups $H^{\bullet}(\mathcal{V}_{m,n},\mathbf{k})$.
Previously established results cover the cases $0\leqslant m\leqslant n$, as well
as $n=0$ and $n=1$ with arbitrary $m\geqslant 0$ ([GF], [Fuk], [Ko], [AF],
[Pi1]). These results are collected below in theorem 1.2. The main result of
this paper is the following theorem that has been stated as a conjecture in
the previous paper ([Pi1]).
###### Theorem.
For any $m\geqslant n\geqslant 0$ we have an isomorphism
$H^{\bullet}(\mathcal{V}_{m,n},\mathbf{k})\ \simeq\
H^{\bullet}(\SS^{2n}X_{2(m-n)},\mathbf{k}).$
Here $\SS$ denotes the topological suspension functor, and $X_{2(m-n)}$ is the
pullback of the tautological $GL(m,\mathbb{C})$-torsor over
$BGL(m,\mathbb{C})$ to the $2(m-n)$-dimensional skeleton of
$BGL(m,\mathbb{C})$ consisting of cells of dimensions up to $2(m-n)$.
Combined with the previously established results this completely settles the
question of local Gelfand-Fuchs cohomology for super-manifolds.
The main tool in the calculation is the following theorem regarding cohomology
of Lie superalgebras $\mathfrak{gl}(m,n)$. Let $V$ be the standard
representation of $\mathfrak{gl}(m,n)$, and denote $\Sigma^{\lambda}(V)$ the
Schur functor corresponding to a diagram $\lambda$. We will write
$\mathcal{H}_{m,n}$ for the set of diagrams contained in a thick hook with $m$
rows and $n$ columns.
###### Theorem.
Let $\mathfrak{g}=\mathfrak{gl}(m,n)$ with $m\geqslant n\geqslant 0$, and
$\lambda\in\mathcal{H}_{m-n+k,k}-\mathcal{H}_{m-n+k-1,k-1}$
for some $0\leqslant k\leqslant n$. Then
$H^{\bullet}(\mathfrak{g},\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*}))\
\simeq\
\mathbf{k}[e_{1},\ldots,e_{2m-1}]\mathop{\otimes}\limits\mathbf{k}[e^{\prime}_{2(n-k)+1},\ldots,e^{\prime}_{2n-1}].$
This theorem appears to be a new result and may be of interest beyond the
Gelfand-Fuchs cohomology theory.
[Outline of the paper.] In section 1 we recall the relevant notations and
results that are used in this paper.
Section 2 is dedicated to the proof of the theorem on the cohomology of
$\mathfrak{gl}(m,n)$ stated above (theorem 2.4 in the text). We
proceed by induction on the number of odd variables $n$ and use the spectral
sequence for the Lie subalgebra
$\mathfrak{gl}(m,n-1)\oplus\mathfrak{gl}(1)\hookrightarrow\mathfrak{gl}(m,n)$
to reduce the question to $\mathfrak{gl}(m,n-1)$.
First we observe that the first layer of this spectral sequence has a
universal structure, which allows us to compare spectral sequences for a fixed
diagram $\lambda$ but different values of $m$ and $n$. We combine this with
the special case of this theorem for $n=1$ that was established
in ([Pi1]) to identify all the diagrams contributing to the second layer of
the spectral sequence. Then by direct examination of the second and third
layers we establish the required isomorphism.
We would like to point out that this does not provide an independent proof of
the theorem for $n=1$, as we use this result to greatly simplify the
analysis of the first layer of the spectral sequence. Combining the proof in
this paper and the proof of the special case for $n=1$, the overall process
looks as follows. We start with $\mathfrak{gl}(1,1)$ as the base of induction,
then we grow the number of even variables to get to $\mathfrak{gl}(m,1)$, and
then we grow the number of odd variables and arrive at $\mathfrak{gl}(m,n)$.
In the case of $\mathfrak{gl}(1,1)$ we have a description of the
indecomposable components of the coefficient module
$\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})$ that give
rise to the cohomology groups in the theorem above. Presently, we do
not have a similar description for other $\mathfrak{gl}(n,n)$.
Section 3 deals with the proof of the main theorem. It follows
the general process developed in ([Pi1]) which in turn is a refinement of the
proof of Gelfand and Fuchs in the classical case. We consider the spectral
sequence for the Lie subalgebra
$\mathfrak{gl}(m,n)\hookrightarrow\mathcal{V}_{m,n}$ and identify all the
diagrams contributing to its first layer. We observe that up to transposition
these diagrams are the same as in the spectral sequence for
$\mathfrak{gl}(n-1,m+1)\hookrightarrow\mathcal{V}_{n-1,m+1}$, which allows us
to compare these two spectral sequences.
In the latter case we have $n-1<m+1$, so this is covered by the previous
result of Astashkevich and Fuchs ([AF]) which says that the cohomology
$H^{\bullet}(\mathcal{V}_{n-1,m+1},\mathbf{k})$ is isomorphic to the
cohomology of $(2m+1)$-dimensional sphere. This makes the analysis of the
original spectral sequence for $\mathcal{V}_{m,n}$ much simpler and allows us
to establish the required isomorphism.
The author would like to thank BIMSA (Beijing Institute of Mathematical
Sciences and Applications) for providing excellent working conditions during
preparation of this paper.
## 1 Notations and recollections
We will retain conventions and notations from [Pi1]. Here we will briefly
recall them and state the relevant results that will be used in this paper.
[Young diagrams and Schur functors.] Let $\lambda$ be a Young diagram of size
$d$, in other words it is an unordered partition
$\lambda=(\lambda_{1},\ldots,\lambda_{k})$ of $d$, with
$\lambda_{1}\geqslant\lambda_{2}\geqslant\ldots\geqslant\lambda_{k}>0$ and
$\sum_{i=1}^{k}\lambda_{i}=d$. We will refer to $k$ as the height of $\lambda$
and write $k=\mathrm{ht}(\lambda)$, and $d=|\lambda|$. We will denote
$\lambda^{\prime}$ the transposed Young diagram, specifically we put
$\lambda^{\prime}_{j}=\max\\{i\mid\lambda_{i}\geqslant j\\}$.
For any diagram $\lambda$ we construct a truncated diagram
$\overline{\lambda}$ obtained from $\lambda$ by removing the first column. In
other words we put $\overline{\lambda}_{i}=\max\\{\lambda_{i}-1,0\\}$.
Furthermore, we construct an extended diagram $\widetilde{\lambda}$ by adding
to $\lambda$ the first column of height $d=|\lambda|$. Formally, we put
$\widetilde{\lambda}_{i}=\lambda_{i}+1$ for $1\leqslant i\leqslant d$.
For any $m,n\geqslant 0$ we consider a subset $\mathcal{H}_{m,n}$ of Young
diagrams of arbitrary size, consisting of diagrams contained in a thick hook
with $m$ rows and $n$ columns. More precisely $\lambda\in\mathcal{H}_{m,n}$ if
and only if $\lambda_{i}\leqslant n$ whenever $i>m$. By convention, if either
$m$ or $n$ is negative we put $\mathcal{H}_{m,n}$ to be the empty set.
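These combinatorial definitions translate directly into code; the following helpers are our own illustration, with a partition represented as a weakly decreasing list of positive integers.

```python
# Illustrative helpers (ours) for the diagram operations defined above.
def transpose(lam):
    """Transposed diagram: lam'_j = max{ i : lam_i >= j }."""
    return [sum(1 for li in lam if li >= j)
            for j in range(1, (lam[0] if lam else 0) + 1)]

def truncate(lam):
    """bar(lambda): remove the first column."""
    return [li - 1 for li in lam if li > 1]

def extend(lam):
    """tilde(lambda): add a first column of height d = |lambda|."""
    d = sum(lam)
    return [(lam[i] if i < len(lam) else 0) + 1 for i in range(d)]

def in_hook(lam, m, n):
    """lambda in H_{m,n}: every row beyond the m-th has length <= n."""
    return m >= 0 and n >= 0 and all(li <= n for li in lam[m:])

assert transpose([3, 1]) == [2, 1, 1]
assert truncate([3, 1]) == [2]
assert extend([2, 1]) == [3, 2, 1]   # here d = |lambda| = 3
assert in_hook([3, 1], 1, 1) and not in_hook([3, 2], 1, 1)
```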
For any partition $\lambda$ we will denote $\Sigma^{\lambda}$ the
corresponding Schur functor acting on the symmetric monoidal category of super
vector spaces. For a super vector space $V=(V_{0},V_{1})$ of dimension
$(m,n)$, with $m=\mathop{\mathrm{dim}}\nolimits V_{0}$ and
$n=\mathop{\mathrm{dim}}\nolimits V_{1}$, the Schur functor
$\Sigma^{\lambda}(V)$ is non-zero if and only if
$\lambda\in\mathcal{H}_{m,n}$.
We will denote by $S^{n}$ and $\Lambda^{n}$ the functors of symmetric and
exterior powers respectively and recall that for any two super vector spaces
$V$ and $W$ we have isomorphisms of
$(\mathfrak{gl}(V)\times\mathfrak{gl}(W))$-modules
(1.0.1) $S^{n}(V\mathop{\otimes}\limits
W)=\bigoplus_{|\lambda|=n}\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(W),\quad\quad\Lambda^{n}(V\mathop{\otimes}\limits
W)=\bigoplus_{|\lambda|=n}\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda^{\prime}}(W).$
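As an illustration (our code, for purely even $V$ and $W$, where $\Sigma^{\lambda}$ reduces to the classical Schur functor), the first identity in (1.0.1) can be verified numerically on dimensions, using the hook content formula.

```python
# Numerical check (ours) of the first identity in (1.0.1) for purely even
# V, W of dimensions m, n: dim S^d(V (x) W) = sum_{|l|=d} dim S^l(V) dim S^l(W).
from math import comb

def partitions(d, max_part=None):
    """All partitions of d with parts bounded by max_part."""
    max_part = d if max_part is None else max_part
    if d == 0:
        yield ()
        return
    for first in range(min(d, max_part), 0, -1):
        for rest in partitions(d - first, first):
            yield (first,) + rest

def schur_dim(lam, m):
    """dim of the Schur functor on C^m by the hook content formula."""
    if not lam:
        return 1
    lamc = [sum(1 for li in lam if li > j) for j in range(lam[0])]
    num, den = 1, 1
    for i, li in enumerate(lam):
        for j in range(li):
            num *= m + j - i                  # content factor
            den *= li - j + lamc[j] - i - 1   # hook length of cell (i, j)
    return num // den

d, m, n = 4, 2, 3
lhs = comb(m*n + d - 1, d)   # dim S^4 of a 6-dimensional space
rhs = sum(schur_dim(l, m)*schur_dim(l, n) for l in partitions(d))
print(lhs, rhs)              # 126 126
```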
[Lie superalgebra $\mathfrak{gl}(m,n)$.] Let $V$ be a super vector space of
dimension $(m,n)$, then we write $\mathfrak{gl}(m,n)$ for the Lie superalgebra
of endomorphisms $\mathrm{End}(V)\simeq V\mathop{\otimes}\limits V^{*}$. We
will refer to $V$ as the standard representation of $\mathfrak{gl}(m,n)$. We
would like to point out here that even though Lie superalgebras
$\mathfrak{gl}(m,n)$ and $\mathfrak{gl}(n,m)$ are isomorphic, their standard
representations are different. Specifically, the standard representation $W$
of $\mathfrak{gl}(n,m)$ is obtained from $V$ by the change of parity
$W\simeq\Pi(V)$. Therefore, up to change of parity we have isomorphisms
$\Sigma^{\lambda}(W)=\Sigma^{\lambda}(\Pi
V)\simeq\Sigma^{\lambda^{\prime}}(V)$.
Let $\mathfrak{g}=\mathfrak{gl}(m,n)$ and $V$ its standard representation; we
are interested in the cohomology spaces
$H^{\bullet}(\mathfrak{g},\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*}))$.
In [Pi1] we have established the following result.
###### Theorem 1.1.
Let $\mathfrak{g}=\mathfrak{gl}(m,1)$, $V$ the standard representation of
$\mathfrak{g}$ and $\lambda\in\mathcal{H}_{m,1}$, then
$H^{\bullet}(\mathfrak{g},\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*}))=\begin{cases}\mathbf{k}[e_{1},\ldots,e_{2m-1}],&\text{if
$\mathrm{ht}(\lambda)\leqslant m-1$,}\\\
\mathbf{k}[e_{1},\ldots,e_{2m-1},e^{\prime}_{1}],&\text{otherwise},\end{cases}$
where generators $e_{i}$ are of cohomological degree $i$ and $e^{\prime}_{1}$
is of degree $1$.
Notice that the condition $\mathrm{ht}(\lambda)\leqslant m-1$ here can be
rewritten as $\lambda\in\mathcal{H}_{m-1,0}$.
[Lie superalgebra $\mathcal{V}_{m,n}$.] Consider a (super)commutative
superalgebra
$\mathcal{O}_{\mathbb{A}^{m,n}}=\mathbf{k}[x_{1},\ldots,x_{m},\xi_{1},\ldots,\xi_{n}]$
of algebraic functions on the affine superspace $\mathbb{A}^{m,n}$. We assume
that variables $x_{i}$ are even and $\xi_{j}$ are odd. We denote
$\widehat{\mathcal{O}}_{\mathbb{A}^{m,n}}=\mathbf{k}[[x_{1},\ldots,x_{m},\xi_{1},\ldots,\xi_{n}]]$
its completion at zero, equipped with the inverse limit topology. We are
interested in the Lie superalgebra of continuous derivations
$\mathcal{V}_{m,n}=\mathrm{Der}_{\mathrm{cont}}(\widehat{\mathcal{O}}_{\mathbb{A}^{m,n}}).$
Explicitly, $\mathcal{V}_{m,n}$ is formed by elements $\sum
f_{i}{\partial_{x_{i}}}+\sum g_{j}{\partial_{\xi_{j}}}$, with
$f_{i},g_{j}\in\widehat{\mathcal{O}}_{\mathbb{A}^{m,n}}$. The bracket is given
by the action of derivations $\partial_{x_{i}}$ and $\partial_{\xi_{j}}$ on
functions. It contains $\mathfrak{gl}(m,n)$ as a subalgebra spanned by
elements with linear coefficients:
$\\{x_{i}\partial_{x_{j}},x_{i}\partial_{\xi_{j}},\xi_{i}\partial_{x_{j}},\xi_{i}\partial_{\xi_{j}}\\}$.
We will write $H^{\bullet}(\mathcal{V}_{m,n},\mathbf{k})$ for the continuous
cohomology spaces with respect to topology on $\mathcal{V}_{m,n}$ induced by
the inverse limit topology on $\widehat{\mathcal{O}}_{\mathbb{A}^{m,n}}$. We
recall the previously established results regarding this cohomology, for
details we refer to [Fuk], [AF], [Pi1].
The cohomology of $\mathcal{V}_{m,n}$ will be related to the cohomology of
various topological spaces that can be constructed using the following
procedure. Consider the topological group $GL(m,\mathbb{C})$, and let $BGL(m)$
be its classifying space. Denote by $p\colon EGL(m)\to BGL(m)$ the
tautological principal $GL(m)$-bundle over the classifying space. Let us write
$\mathop{\mathrm{sk}}\nolimits_{d}BGL(m)$ for the $d$-dimensional skeleton of
$BGL(m)$, i.e. the subspace formed by all cells of dimension up to $d$. The
spaces that will be of interest to us are
$X_{d}=p^{-1}(\mathop{\mathrm{sk}}\nolimits_{d}BGL(m))$ for various $d$, i.e.
the pullbacks of the tautological bundle to the $d$-dimensional skeleta.
Furthermore, denote by $\SS$ the topological suspension functor.
###### Theorem 1.2.
We have the following isomorphisms.
1. a)
[Fuk] If $n=0$, then
$H^{\bullet}(\mathcal{V}_{m,0},\mathbf{k})\ \simeq\
H^{\bullet}(X_{2m},\mathbf{k}).$
2. b)
[AF] If $m<n$, then
$H^{\bullet}(\mathcal{V}_{m,n},\mathbf{k})\simeq
H^{\bullet}(S^{2n-1},\mathbf{k}).$
3. c)
[AF] If $m=n$, then
$H^{\bullet}(\mathcal{V}_{n,n},\mathbf{k})\ \simeq\
H^{\bullet}(\SS^{2n}GL(n,\mathbb{C}),\mathbf{k}).$
4. d)
[Pi1] If $n=1$, then
$H^{\bullet}(\mathcal{V}_{m,1},\mathbf{k})\ \simeq\
H^{\bullet}(\SS^{2}X_{2(m-1)},\mathbf{k}).$
[Spectral sequence.] The main tool used in calculations in this paper is the
spectral sequence relating the cohomology of a Lie superalgebra $\mathfrak{g}$
and its subalgebra $\mathfrak{h}$. For any $\mathfrak{g}$-module $M$ we have
an increasing filtration of the chain complex
$\Lambda^{\bullet}\mathfrak{g}\mathop{\otimes}\limits M$ by the number of
elements from $\mathfrak{h}$. It induces a decreasing filtration on the
cochain complex $C^{\bullet}(\mathfrak{g},M)$, giving rise to a spectral
sequence
$E_{1}^{pq}=H^{q}(\mathfrak{h},\
\mathrm{Hom}(\Lambda^{p}(\mathfrak{g}/\mathfrak{h}),M)\ )\Rightarrow
H^{p+q}(\mathfrak{g},M).$
We will use cohomological indexing convention for the spectral sequence: on
the layer $E_{r}$ we have differentials
$d_{r}\colon E_{r}^{p,q}\to E_{r}^{p+r,q-r+1}.$
## 2 Cohomology of $\mathfrak{gl}(m,n)$
Let $\mathfrak{g}=\mathfrak{gl}(m,n)$, and $V$ its standard representation.
The dimension of the super vector space $V$ is $(m,n)$. Consider a direct sum
decomposition of $V$ into two subspaces $V=W\oplus E$, such that
$\mathop{\mathrm{dim}}\nolimits W=(m,n-1)$ and $\mathop{\mathrm{dim}}\nolimits
E=(0,1)$. Denote by $\mathfrak{h}$ the subalgebra of $\mathfrak{g}$ that
preserves this decomposition, in other words
$\mathfrak{h}=\mathrm{End}(W)\oplus\mathrm{End}(E)\simeq\mathfrak{gl}(m,n-1)\oplus\mathfrak{gl}(1)\hookrightarrow\mathfrak{g}.$
We identify $W$ with the standard representation of $\mathfrak{gl}(m,n-1)$ and
$E$ with the standard representation of
$\mathfrak{gl}(0,1)\simeq\mathfrak{gl}(1)$. The quotient space
$\mathfrak{g}/\mathfrak{h}$ is isomorphic to $W\mathop{\otimes}\limits
E^{*}\oplus W^{*}\mathop{\otimes}\limits E$. We have
$\Lambda^{p}(\mathfrak{g}/\mathfrak{h})\ \simeq\
\bigoplus_{i+j=p}\Lambda^{i}(W\mathop{\otimes}\limits
E^{*})\mathop{\otimes}\limits\Lambda^{j}(W^{*}\mathop{\otimes}\limits E).$
Since $E$ is of odd dimension $1$, we can rewrite this as follows
(2.0.1) $\Lambda^{p}(\mathfrak{g}/\mathfrak{h})\ \simeq\
\bigoplus_{i+j=p}S^{i}(W)\mathop{\otimes}\limits
S^{j}(W^{*})\mathop{\otimes}\limits\Lambda^{i}(E^{*})\mathop{\otimes}\limits\Lambda^{j}(E).$
We are interested in the cohomology of $\mathfrak{g}$ with coefficients in
$\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})$. Using
decomposition $V=W\oplus E$ we can write
(2.0.2) $\Sigma^{\lambda}(V)\ \simeq\
\bigoplus_{\mu}\Sigma^{\mu}(W)\mathop{\otimes}\limits\Lambda^{p}(E),$
where the sum is taken over all diagrams $\mu$ obtained from $\lambda$ by
removing at most one box from each row, and $p=|\lambda|-|\mu|$ is the total
number of removed boxes. We also have a similar expansion for
$\Sigma^{\lambda}(V^{*})$.
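To make the index set of (2.0.2) concrete, the diagrams $\mu$ can be enumerated mechanically; the sketch below is our illustration.

```python
# Enumeration (ours) of the diagrams mu in (2.0.2): remove at most one box
# from each row of lambda, keeping a valid (weakly decreasing) diagram.
from itertools import product

def remove_vertical_strip(lam):
    out = set()
    for bits in product((0, 1), repeat=len(lam)):
        mu = [li - b for li, b in zip(lam, bits)]
        if all(mu[i] >= mu[i + 1] for i in range(len(mu) - 1)):
            out.add(tuple(v for v in mu if v > 0))
    return sorted(out, reverse=True)

# For lambda = (2, 1): mu runs over (2,1), (2), (1,1), (1),
# with p = |lambda| - |mu| boxes removed in each case.
print(remove_vertical_strip([2, 1]))
```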
Let us consider the spectral sequence $E$ for the Lie subalgebra
$\mathfrak{h}\hookrightarrow\mathfrak{g}$ and coefficients
$\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})$:
$E_{1}^{pq}=H^{\bullet}(\mathfrak{gl}(m,n-1)\oplus\mathfrak{gl}(1),\Lambda^{p}(\mathfrak{g}/\mathfrak{h})^{*}\mathop{\otimes}\limits\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})).$
Combining (2.0.1) and (2.0.2) we find that the diagrams $\mu$ contributing to the
first layer of this spectral sequence are obtained from $\lambda$ by first
removing say $i$ boxes in such a way that from each row we remove at most one
box, and then adding say $j$ boxes in such a way that in each column we add at
most one box. In particular, we immediately see that the diagrams $\mu$
appearing in $E_{1}$ are of height at most $\mathrm{ht}(\lambda)+1$.
Furthermore, the weight with respect to the action of the subalgebra
$\mathfrak{gl}(1)\hookrightarrow\mathfrak{h}$ of a component corresponding to
a diagram $\mu$ is $|\lambda|-|\mu|$, in other words it depends only on the
diagram $\mu$ itself, and not on the specific way it was obtained from
$\lambda$ by the procedure described above. We have a similar picture on the
dual side with components containing $\Sigma^{\nu}(W^{*})$, however since the
cohomology
$H^{\bullet}(\mathfrak{gl}(m,n-1),\Sigma^{\mu}(W)\mathop{\otimes}\limits\Sigma^{\nu}(W^{*}))$
vanishes unless $\mu=\nu$ it is sufficient to keep track only of the diagrams
$\mu$. We will refer to such components as components of type $\mu$.
The differential on the first layer of the spectral sequence is induced by
maps between the coefficients of $H^{\bullet}(\mathfrak{h},-)$, which in turn
correspond to the action of $\mathfrak{g}/\mathfrak{h}$ on the coefficients
$\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})$. In terms
of the above decomposition a differential between two components of type $\mu$
corresponds to two different ways of obtaining diagram $\mu$ from $\lambda$.
More precisely, let $\nu$ be the intermediate diagram in the process of
constructing $\mu$ from $\lambda$, i.e. $\nu$ is a common subdiagram of $\mu$
and $\lambda$ such that $\lambda-\nu$ has at most one box in each row, and
$\mu-\nu$ has at most one box in each column.
We say that a box of the diagram $\nu$ is flippable if the diagram $\nu_{1}$
obtained from $\nu$ by removing this box provides another valid way of
obtaining $\mu$ from $\lambda$. The differential in $E_{1}$ is a linear
combination of maps into components obtained by flipping a box (either for
$\Sigma^{\mu}(W)$ or for $\Sigma^{\mu}(W^{*})$). It is clear from the
construction that for each $\mu$ the subcomplex of components of type $\mu$ is
bounded.
[Universal complex.] The previous discussion of the first layer of the
spectral sequence can be summarized by saying that it is “universal” in a
certain sense. We will make this statement more precise. Let $S_{d}$ be a
symmetric group on $d$ elements and $S_{\bullet}$ denote the collection of all
$S_{d}$ for $d\geqslant 0$. An $S_{\bullet}$-module $M$ is a direct sum
$M=\bigoplus_{d\geqslant 0}M_{d}$, where each $M_{d}$ is an $S_{d}$-module.
Since the category of $S_{\bullet}$-modules is semisimple, and simple modules
correspond to partitions $\lambda$ of arbitrary size, we can further decompose
$M\ =\
\bigoplus_{\lambda}L_{\lambda}\mathop{\otimes}\limits\mathrm{Hom}_{S_{\bullet}}(L_{\lambda},M),$
where $L_{\lambda}$ is the simple module corresponding to a partition
$\lambda$. To simplify notation we will write
$M_{\lambda}=\mathrm{Hom}_{S_{\bullet}}(L_{\lambda},M)$.
For a super vector space $V$ the Schur functor $\Sigma(M,V)$ is defined by
$\Sigma(M,V)\ =\ \bigoplus_{d\geqslant
0}M_{d}\mathop{\otimes}\limits_{\mathbf{k}[S_{d}]}V^{\mathop{\otimes}\limits
d}\ \simeq\ \bigoplus_{\lambda}\Sigma^{\lambda}(V)\mathop{\otimes}\limits
M_{\lambda},$
where $S_{d}$ acts on the tensor power $V^{\mathop{\otimes}\limits d}$ by
permuting factors. Of course, as was already mentioned in the previous
section, the dimension of $V$ imposes a restriction on what terms contribute to
the direct sum. If $\mathop{\mathrm{dim}}\nolimits V=(m,n)$, then the
contribution comes only from diagrams $\lambda\in\mathcal{H}_{m,n}$.
The above discussion of the first layer of the spectral sequence can be stated
as the following lemma.
###### Lemma 2.1.
There exists a complex of $S_{\bullet}$-modules
$\mathbb{E}=\mathbb{E}(\lambda)$, such that for any $(m,n)$ the first layer of
the spectral sequence associated to the Lie subalgebra
$\mathfrak{gl}(m,n-1)\oplus\mathfrak{gl}(1)\hookrightarrow\mathfrak{gl}(m,n)$
and coefficients
$\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})$ has the
form
$E_{1}^{pq}\ \simeq\ \bigoplus_{\mu\atop
i+j=q}H^{i}(\mathfrak{gl}(m,n-1),\Sigma^{\mu}(W)\mathop{\otimes}\limits\Sigma^{\mu}(W^{*}))\mathop{\otimes}\limits
H^{j}(\mathfrak{gl}(1),\mathbf{k})\mathop{\otimes}\limits\mathbb{E}^{p}_{\mu},$
where $W$ is the standard representation of $\mathfrak{gl}(m,n-1)$, and the
sum is over $\mu\in\mathcal{H}_{m,n-1}$.
Moreover, the differentials in $E_{1}$ are induced by the differentials in
$\mathbb{E}$, so the second layer $E_{2}$ (without differentials) has the form
$E_{2}^{pq}\ \simeq\ \bigoplus_{\mu\atop
i+j=q}H^{i}(\mathfrak{gl}(m,n-1),\Sigma^{\mu}(W)\mathop{\otimes}\limits\Sigma^{\mu}(W^{*}))\mathop{\otimes}\limits
H^{j}(\mathfrak{gl}(1),\mathbf{k})\mathop{\otimes}\limits
H^{p}(\mathbb{E}_{\mu}).$
$\square$
Because of the universal nature of the complex $\mathbb{E}$ we will be able to
obtain information about $H^{\bullet}(\mathbb{E})$ by comparing spectral
sequences for different values of $(m,n)$. First, we will need the following
simple lemma.
###### Lemma 2.2.
Consider a spectral sequence $E_{\bullet}$ concentrated in the quadrant with
$p,q\geqslant 0$, such that $E_{2}^{0,q}$ is an algebra isomorphic to
$A\mathop{\otimes}\limits\mathbf{k}[e_{1}]$ for some algebra $A$, and
$\mathop{\mathrm{deg}}\nolimits e_{1}=1$. Assume that the second layer is a
free $A\mathop{\otimes}\limits\mathbf{k}[e_{1}]$-module, with generators in
degrees $(p,0)$, and the differential on $E_{2}$ is compatible with the module
structure.
1. a)
If $E_{\infty}^{0,q}\simeq E_{2}^{0,q}$ and $E_{\infty}^{pq}=0$ for $p>0$,
then $E_{2}^{pq}=0$ for $p>0$.
2. b)
If $E_{\infty}^{0,q}\simeq A\hookrightarrow E_{2}^{0,q}$ and
$E_{\infty}^{pq}=0$ for $p>0$, then
$E_{2}^{pq}\simeq
A\mathop{\otimes}\limits\mathbf{k}[e_{1}]\mathop{\otimes}\limits\mathbf{k}[c_{1}]$
as a $A\mathop{\otimes}\limits\mathbf{k}[e_{1}]$-module. Here
$\mathop{\mathrm{deg}}\nolimits c_{1}=(2,0)$ and the differential in $E_{2}$
sends $e_{1}$ to $c_{1}$.
Proof: For part (a) let us assume that $E_{2}^{pq}\neq 0$ for some $p>0$, and
let us denote $p_{0}$ the minimal such $p$. By our assumption $E_{2}$ is
generated by elements in degrees $(p,0)$, therefore we must have
$E_{2}^{p_{0},0}\neq 0$. Since these elements do not survive to $E_{\infty}$
and $p_{0}$ is minimal, we must have non-zero differentials starting from the
first column. But this is impossible since $E_{\infty}^{0,q}\simeq
E_{2}^{0,q}$.
For part (b) observe that since $e_{1}$ doesn’t survive until $E_{\infty}$ it
must be killed by some differential; however, since our spectral sequence has
non-zero terms only for $p,q\geqslant 0$, we see that the differential of
$E_{2}$ doesn’t vanish on $e_{1}$. Denote by $c_{1}\in E_{2}^{2,0}$ its image.
Since by assumption the differential is compatible with the module structure
and $A$ is contained in the kernel of the differential, this completely
determines restriction of $d_{2}$ to the first column. Furthermore, since
$E_{2}$ is a free $A\mathop{\otimes}\limits\mathbf{k}[e_{1}]$-module, we see
that the image $\mathop{\mathrm{Im}}\nolimits d_{2}\colon E_{2}^{0\bullet}\to
E_{2}^{2\bullet}$ is identified with
$Ac_{1}\subset(A\mathop{\otimes}\limits\mathbf{k}[e_{1}])c_{1}$, and it is
surjective on $E_{2}^{2,0}$.
The element $e_{1}c_{1}$ does not survive to $E_{\infty}$, therefore as
before, $d_{2}$ doesn’t vanish on it and we identify $d_{2}(e_{1}c_{1})$ with
$c_{1}^{2}$. Repeating this argument we obtain the required isomorphism.
$\square$
These two lemmas allow us to establish the following result concerning the
universal complex $\mathbb{E}(\lambda)$.
###### Lemma 2.3.
Let $\mathbb{E}=\mathbb{E}(\lambda)$, then
1. a)
for all $p>0$ we have $H^{2p}(\mathbb{E})\simeq L_{\mu}$, for some $\mu$ with
$\mathrm{ht}(\mu)=\mathrm{ht}(\lambda)+1$,
2. b)
$H^{0}(\mathbb{E})\simeq L_{\overline{\lambda}}$, where $\overline{\lambda}$
is the truncation of $\lambda$,
3. c)
for all $p\geqslant 0$ we have $H^{2p}(\mathbb{E})\simeq L_{(\lambda/p+1)}$,
where $(\lambda/k)$ denotes the diagram obtained from $\lambda$ by adding one
box in each of the first $k-1$ columns and removing the $k$’th column. In other words
$(\lambda/k)^{\prime}=(\lambda^{\prime}_{1}+1,\ldots,\lambda^{\prime}_{k-1}+1,\lambda^{\prime}_{k+1},\ldots).$
Proof: Consider the Lie superalgebra $\mathfrak{g}=\mathfrak{gl}(k,1)$ and its
subalgebra $\mathfrak{h}=\mathfrak{gl}(k)\oplus\mathfrak{gl}(1)$. Now, as
usual, let $W$ be the standard representation of $\mathfrak{gl}(k)$ then for
any diagram $\mu$ with $\mathrm{ht}(\mu)\leqslant k$ we have
$H^{\bullet}(\mathfrak{gl}(k),\Sigma^{\mu}(W)\mathop{\otimes}\limits\Sigma^{\mu}(W^{*}))\simeq\mathbf{k}[e_{1},\ldots,e_{2k-1}].$
We also have $H^{\bullet}(\mathfrak{gl}(1))\simeq\mathbf{k}[e_{1}]$. Using
lemma 2.1 we see that the spectral sequence for the Lie subalgebra
$\mathfrak{h}\hookrightarrow\mathfrak{g}$ satisfies conditions of lemma 2.2.
First, let $k\geqslant\mathrm{ht}(\lambda)+1$. As we already saw, all the
diagrams $\mu$ appearing in the universal complex $\mathbb{E}$ have
$\mathrm{ht}(\mu)\leqslant\mathrm{ht}(\lambda)+1$. Using theorem 1.1 we find
that the spectral sequence converges to $\mathbf{k}[e_{1},\ldots,e_{2k-1}]$.
Therefore, we are in the situation of lemma 2.2(b).
On the other hand, let $k=\mathrm{ht}(\lambda)$. Then again using theorem 1.1
we find that the spectral sequence converges to
$\mathbf{k}[e_{1},\ldots,e_{2k-1},e^{\prime}_{1}]$, so we are in the situation
of lemma 2.2(a). The difference between these two spectral sequences is that
in the latter case we have a restriction imposed on the diagrams $\mu$ by the
dimension of $W$. Namely, we lose diagrams $\mu$ with $\mathrm{ht}(\mu)>k$.
Therefore, diagrams $\mu$ contributing to $H^{p}(\mathbb{E})$ for $p>0$ must
be of height $\mathrm{ht}(\lambda)+1$. Combining this with the isomorphism from
lemma 2.2(b) we prove part (a).
For part (b), notice that since
$H^{0}(\mathfrak{gl}(k,1),\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*}))=\mathbf{k}$
for any $k$, the cohomology $H^{0}(\mathbb{E})$ is isomorphic to a simple
module. Furthermore, from the construction of the first layer $E_{1}$ we see
that it must be isomorphic to $L_{\mu}$ for the smallest possible diagram
$\mu$ in the decomposition (2.0.2). It is straightforward to see that this
diagram is precisely the truncated diagram $\overline{\lambda}$.
Clearly, $(\lambda/1)=\overline{\lambda}$, so part (c) for $p=0$ is just a
reformulation of part (b). Let us consider the rest of the diagrams $\mu$
contributing to $H^{\bullet}(\mathbb{E}(\lambda))$. According to part (a) they
are of height $\mathrm{ht}(\lambda)+1$, so they have $\mathrm{ht}(\lambda)+1$
boxes in the first column. This is only possible if in the process of
obtaining $\mu$ from $\lambda$ no box was removed from the first column and
exactly one box was added to it. This implies that for any diagram $\mu$ of
height $\mathrm{ht}(\lambda)+1$ appearing in the universal complex
$\mathbb{E}(\lambda)$ we have the truncated diagram $\overline{\mu}$ appearing
in $\mathbb{E}(\overline{\lambda})$. More precisely, for any such $\mu$ and
$p\geqslant 2$ we have
$\mathbb{E}(\lambda)^{p}_{\mu}\simeq\mathbb{E}(\overline{\lambda})^{p-2}_{\overline{\mu}}$
and therefore
$H^{p}(\mathbb{E}(\lambda)_{\mu})\simeq
H^{p-2}(\mathbb{E}(\overline{\lambda})_{\overline{\mu}}).$
Thus, we reduced the question to the structure of the universal complex
$\mathbb{E}(\overline{\lambda})$, for a diagram $\overline{\lambda}$ that
contains one less column than $\lambda$. Using induction on the number of
columns we immediately see that this identification gives us the required
isomorphism of part (c). The base of induction is the diagram $\lambda$ of
size zero, in which case the statement follows from the decomposition (2.0.1).
$\square$
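For concreteness, the operation $(\lambda/k)$ is easily implemented with the `transpose` helper from the sketch in section 1; the lines below are our illustration.

```python
# (lambda/k) of lemma 2.3(c), via the transposed diagram (our code; transpose
# is the helper from the sketch in section 1).
def slash(lam, k):
    lt = transpose(lam)
    return transpose([c + 1 for c in lt[:k - 1]] + lt[k:])

print(slash([2, 1], 1))   # (lambda/1) = bar(lambda) = [1]
print(slash([2, 1], 2))   # add a box to column 1, drop column 2: [1, 1, 1]
```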
We are now ready to prove the main theorem of this section.
###### Theorem 2.4.
Let $\mathfrak{g}=\mathfrak{gl}(m,n)$ with $m\geqslant n\geqslant 0$, and
$\lambda\in\mathcal{H}_{m-n+k,k}-\mathcal{H}_{m-n+k-1,k-1}$
for some $0\leqslant k\leqslant n$. Then
$H^{\bullet}(\mathfrak{g},\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*}))\
\simeq\
\mathbf{k}[e_{1},\ldots,e_{2m-1}]\mathop{\otimes}\limits\mathbf{k}[e^{\prime}_{2(n-k)+1},\ldots,e^{\prime}_{2n-1}],$
where $\mathop{\mathrm{deg}}\nolimits e_{i}=\mathop{\mathrm{deg}}\nolimits
e^{\prime}_{i}=i$. The generators $e_{1},\ldots,e_{2m-1}$ are the images of
the standard generators of $H^{\bullet}(\mathfrak{gl}(m),\mathbf{k})$ under
the composition
(2.4.1)
$H^{\bullet}(\mathfrak{gl}(m),\mathbf{k})\ \xleftarrow[\mathrm{res}]{\simeq}\ H^{\bullet}(\mathfrak{g},\mathbf{k})\ \xrightarrow{\mathrm{coev}}\ H^{\bullet}(\mathfrak{g},\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})).$
Here $\mathrm{res}$ is the map induced by restriction to the Lie subalgebra
$\mathfrak{gl}(m)\hookrightarrow\mathfrak{g}$, and $\mathrm{coev}$ is induced
by the coevaluation map
$\mathbf{k}\hookrightarrow\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})$.
Proof: We consider the Lie subalgebra
$\mathfrak{h}=\mathfrak{gl}(m,n-1)\oplus\mathfrak{gl}(1)\hookrightarrow\mathfrak{gl}(m,n)$
and prove the theorem by induction on $n$. When $n=0$ this is the classical
purely even case and the result is well known. When $n=1$ the statement of the
theorem is a reformulation of theorem 1.1. From now on, we will assume that
$n\geqslant 2$ and the theorem holds for all $\mathfrak{gl}(m,n^{\prime})$
with $n^{\prime}<n$. According to lemma 2.1 we have the spectral sequence
$E_{2}\ \simeq\
\bigoplus_{\mu}H^{\bullet}(\mathfrak{gl}(m,n-1),\Sigma^{\mu}(W)\mathop{\otimes}\limits\Sigma^{\mu}(W^{*}))\mathop{\otimes}\limits\mathbf{k}[e^{\prime\prime}_{1}]\mathop{\otimes}\limits
H^{\bullet}(\mathbb{E}_{\mu})\Rightarrow
H^{\bullet}(\mathfrak{g},\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})).$
[Case $k=0$.] First, we consider the case when $k=0$ separately. The condition
$\lambda\in\mathcal{H}_{m-n,0}$ is equivalent to
$\mathrm{ht}(\lambda)\leqslant m-n$. From lemma 2.3 we see that all diagrams
$\mu$ contributing to the second layer of the spectral sequence have
$\mathrm{ht}(\mu)\leqslant m-n+1$, i.e. $\mu\in\mathcal{H}_{m-(n-1),0}$, hence
by the inductive assumption
$H^{\bullet}(\mathfrak{gl}(m,n-1),\Sigma^{\mu}(W)\mathop{\otimes}\limits\Sigma^{\mu}(W^{*}))\
\simeq\ \mathbf{k}[e_{1},\ldots,e_{2m-1}].$
Furthermore, again by lemma 2.3 we have
$E_{2}\ \simeq\
\mathbf{k}[e_{1},\ldots,e_{2m-1}]\mathop{\otimes}\limits\mathbf{k}[e^{\prime\prime}_{1}]\mathop{\otimes}\limits\mathbf{k}[c_{1}]$
and the differential on $E_{2}$ sends $e^{\prime\prime}_{1}$ to $c_{1}$. Therefore,
we find that the spectral sequence converges to
$H^{\bullet}(\mathfrak{g},\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*}))\
\simeq\ \mathbf{k}[e_{1},\ldots,e_{2m-1}].$
The statement regarding the classes $e_{i}$ is a tautology for $n=0$. Assume that
it holds for all $n^{\prime}<n$. The coevaluation map
$\mathbf{k}\to\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*})$
induces a morphism of spectral sequences. By inductive assumption this
morphism is an isomorphism on the second layer, therefore it also induces an
isomorphism $H^{\bullet}(\mathfrak{g},\mathbf{k})\simeq
H^{\bullet}(\mathfrak{g},\Sigma^{\lambda}(V)\mathop{\otimes}\limits\Sigma^{\lambda}(V^{*}))$.
This completes the proof in the case $k=0$.
[Case $k>0$.] According to lemma 2.3(c) the contribution to the second layer
of the spectral sequence in the column $2p$ comes from the diagram
$(\lambda/p+1)$. To simplify notation, let us write
$\mathcal{H}^{\circ}(n,k)=\mathcal{H}_{m-n+k,k}-\mathcal{H}_{m-n+k-1,k-1}.$
We omit $m$ in the notation since within the scope of this proof the number of
even variables $m$ never changes. It is straightforward to check that, since $\lambda\in\mathcal{H}^{\circ}(n,k)$, for $0\leqslant p\leqslant k-1$ we have
$(\lambda/p+1)\in\mathcal{H}^{\circ}(n-1,k-1),$
and for $p\geqslant k$
$(\lambda/p+1)\in\mathcal{H}^{\circ}(n-1,k).$
Therefore, by inductive assumption for $0\leqslant p\leqslant k-1$ the $2p$’th
column of $E_{2}$ is isomorphic to
$E_{2}^{2p,\bullet}\simeq\mathbf{k}[e_{1},\ldots,e_{2m-1}]\otimes\mathbf{k}[e^{\prime}_{2(n-k)+1},\ldots,e^{\prime}_{2n-3}]\otimes\mathbf{k}[e^{\prime\prime}_{1}].$
For $p\geqslant k$ we consider two cases. First, if $k<n$, then again by
inductive assumption we have
$E_{2}^{2p,\bullet}\simeq\mathbf{k}[e_{1},\ldots,e_{2m-1}]\otimes\mathbf{k}[e^{\prime}_{2(n-k)-1},\ldots,e^{\prime}_{2n-3}]\otimes\mathbf{k}[e^{\prime\prime}_{1}].$
If on the other hand, $k=n$, then
$(\lambda/p+1)\in\mathcal{H}^{\circ}(n-1,n)$, hence the Schur functor
$\Sigma^{(\lambda/p+1)}(W)=0$, and all columns in $E_{2}$ starting from column
$2k$ vanish.
Since the differential on $E_{2}$ sends the generator $e^{\prime\prime}_{1}\in E_{2}^{2p,1}$ to the basis element in $E_{2}^{2(p+1),0}$, we see that on the third layer the spectral sequence has only two non-zero columns: the column $0$, and the column $2k$ if $k<n$, or the column $2(k-1)$ if $k=n$. Specifically,
$E_{3}^{0,\bullet}\simeq\mathbf{k}[e_{1},\ldots,e_{2m-1}]\otimes\mathbf{k}[e^{\prime}_{2(n-k)+1},\ldots,e^{\prime}_{2n-3}],$
and if $0<k<n$, then
$E_{3}^{2k,\bullet}\simeq\mathbf{k}[e_{1},\ldots,e_{2m-1}]\otimes\mathbf{k}[e^{\prime}_{2(n-k)+1},\ldots,e^{\prime}_{2n-3}]\otimes\mathbf{k}e^{\prime}_{2(n-k)-1},$
and finally if $k=n$, then
$E_{3}^{2(k-1),\bullet}\simeq\mathbf{k}[e_{1},\ldots,e_{2m-1}]\otimes\mathbf{k}[e^{\prime}_{1},\ldots,e^{\prime}_{2n-3}]\otimes\mathbf{k}e^{\prime\prime}_{1}.$
Let us show that starting from $E_{3}$ all differentials in the spectral
sequence vanish. First, consider generators $e_{i}$. The coevaluation map
$\mathbf{k}\to\Sigma^{\lambda}(V)\otimes\Sigma^{\lambda}(V^{*})$
induces a map from the spectral sequence for the trivial coefficients
$F_{\bullet}$ to our spectral sequence $E_{\bullet}$. By inductive assumption
the classes $e_{i}$ in the first column of $F_{\bullet}$ map to corresponding
classes $e_{i}$ in $E_{\bullet}$. As we have seen for the trivial coefficients
all the differentials vanish on $e_{i}$, hence they must also vanish in
$E_{\bullet}$.
Now consider the generators $e^{\prime}_{j}$, and assume first that $k<n$. The differential can only be non-zero on the layer $E_{2k}$, and it sends the generator $e^{\prime}_{j}$ to $E_{2k}^{2k,j-2k+1}$. However, since $j\leqslant 2n-3$ we have $j-2k+1\leqslant 2(n-k)-2$, while column $2k$ has non-zero terms only for $q\geqslant 2(n-k)-1$.
Finally, for $k=n$, the differential can only be non-zero on the layer $E_{2(n-1)}$, and it sends the generator $e^{\prime}_{j}$ to $E_{2(n-1)}^{2(n-1),j-2n+3}$. Again, since $j\leqslant 2n-3$ we have $j-2n+3\leqslant 0$, while all non-zero terms are in degree $q\geqslant 1$.
Observe that for $k<n$ the total degree of the generator $e^{\prime}_{2(n-k)-1}$ in the column $2k$ is $2k+2(n-k)-1=2n-1$, and similarly for $k=n$ the total degree of $e^{\prime\prime}_{1}$ in the column $2(n-1)$ is $2(n-1)+1=2n-1$. Renaming this generator to $e^{\prime}_{2n-1}$, we obtain the required isomorphism. The identification (2.4.1) follows immediately from the preceding discussion.
This concludes the proof of the theorem.
$\square$
## 3 Cohomology of $\mathcal{V}_{m,n}$
The calculation of cohomology $H^{\bullet}(\mathcal{V}_{m,n},\mathbf{k})$
follows the general argument originally developed for the classical case of
$\mathcal{V}_{m,0}$ by Gelfand and Fuchs with some refinements that were
needed to apply it to $\mathcal{V}_{m,1}$. Here we briefly recall the major
steps of this procedure, for details we refer to [Fuk] and [Pi1].
Consider the Lie subalgebra $\mathfrak{gl}(m,n)\hookrightarrow\mathcal{V}_{m,n}$, and let $V$ be the standard representation of $\mathfrak{gl}(m,n)$. The continuous dual space decomposes as
$\mathrm{Hom}(\mathcal{V}_{m,n},\mathbf{k})\ \simeq\ \bigoplus_{i\geqslant 0}\left(S^{i}(V^{*})\otimes V\right).$
Therefore, in the spectral sequence for the Lie subalgebra
$\mathfrak{gl}(m,n)\hookrightarrow\mathcal{V}_{m,n}$
$E_{1}^{pq}=H^{q}(\mathfrak{gl}(m,n),\mathrm{Hom}(\Lambda^{p}(\mathcal{V}_{m,n}/\mathfrak{gl}(m,n)),\mathbf{k}))\Rightarrow
H^{p+q}(\mathcal{V}_{m,n},\mathbf{k}),$
the coefficients of the cohomology groups of $\mathfrak{gl}(m,n)$ can be
written as
$\bigoplus_{\sum p_{i}=p}\bigotimes_{i}\Lambda^{p_{i}}\left(S^{i}(V^{*})\otimes V\right),$
where $i\geqslant 0$ and $i\neq 1$. This can be simplified by observing that
the contributions to the first layer of the spectral sequence can only come
from terms of the form
$\Lambda^{p}(V)\otimes\Lambda^{p}(S^{2}(V^{*})\otimes V).$
By expanding the second exterior power and using calculus of Schur functors
one then shows that the first layer of the spectral sequence has the form
(3.0.1) $E_{1}^{2p,q}\ \simeq\ \bigoplus_{|\lambda|=p}H^{\bullet}(\mathfrak{gl}(m,n),\Sigma^{\widetilde{\lambda}}(V)\otimes\Sigma^{\widetilde{\lambda}}(V^{*})),$
where $\widetilde{\lambda}$ is obtained from $\lambda$ by adding to it one more column with $|\lambda|$ boxes in it; in other words, $\widetilde{\lambda}_{i}=\lambda_{i}+1$ for $1\leqslant i\leqslant p$ (for example, $\lambda=(2,1)$ gives $\widetilde{\lambda}=(3,2,1)$).
###### Theorem 3.1.
For any $m\geqslant n\geqslant 0$ we have an isomorphism
$H^{\bullet}(\mathcal{V}_{m,n},\mathbf{k})\ \simeq\
H^{\bullet}(\SS^{2n}X_{2(m-n)},\mathbf{k}).$
Proof: First of all, notice that if $\lambda$ is a diagram in (3.0.1), then its transpose $\lambda^{\prime}$ is a diagram appearing in the analogous spectral sequence for the Lie subalgebra $\mathfrak{gl}(n-1,m+1)\hookrightarrow\mathcal{V}_{n-1,m+1}$. To simplify
notation we put
$\mathcal{H}^{\circ}(m,n,k)=\mathcal{H}_{m-n+k,k}-\mathcal{H}_{m-n+k-1,k-1}.$
Clearly, if $\lambda\in\mathcal{H}^{\circ}(m,n-1,k)$ for $k\geqslant 1$, then
$\widetilde{\lambda}\in\mathcal{H}^{\circ}(m,n,k+1)$. If $k=0$, then there are
two possibilities: if $|\lambda|\leqslant m-n$, then
$\widetilde{\lambda}\in\mathcal{H}(m,n,0)$, otherwise
$\widetilde{\lambda}\in\mathcal{H}^{\circ}(m,n,1)$.
Denote by $\widehat{\lambda}$ the diagram obtained from $\lambda$ by adding
one more row with $|\lambda|$ boxes, in other words
$\widehat{\lambda}=(|\lambda|,\lambda_{1},\lambda_{2},\ldots)=\left(\widetilde{\lambda^{\prime}}\right)^{\prime}.$
If $\lambda\in\mathcal{H}^{\circ}(m,n-1,k)$ for any $k\geqslant 0$, then
$\widehat{\lambda}\in\mathcal{H}^{\circ}(m+1,n-1,k)$.
Let us denote by $F_{\bullet}$ the spectral sequence (3.0.1) for
$\mathcal{V}_{n-1,m+1}$. On the first layer $F_{1}$ the term corresponding to
the diagram $\lambda\in\mathcal{H}^{\circ}(m,n-1,k)$ for $0\leqslant
k\leqslant n-1$ is isomorphic to
$(F_{1})_{\lambda}\simeq\mathbf{k}[e_{1},\ldots,e_{2m+1}]\otimes\mathbf{k}[e^{\prime}_{2(n-k)-1},\ldots,e^{\prime}_{2n-3}].$
And the corresponding term in the spectral sequence $E_{1}$ is isomorphic to
$(E_{1})_{\lambda}\simeq\begin{cases}\mathbf{k}[e_{1},\ldots,e_{2m-1}]\otimes\mathbf{k}[e^{\prime}_{2(n-k)-1},\ldots,e^{\prime}_{2n-1}],&\text{if $|\lambda|>m-n$},\\ \mathbf{k}[e_{1},\ldots,e_{2m-1}],&\text{if $|\lambda|\leqslant m-n$}.\end{cases}$
Since $n-1<m+1$ the cohomology of $\mathcal{V}_{n-1,m+1}$ is covered by
theorem 1.2(b). Therefore, the spectral sequence $F_{\bullet}$ converges to
$H^{\bullet}(S^{2m+1},\mathbf{k})=\mathbf{k}[e_{2m+1}]$. Moreover, as was
shown in [AF] this class $e_{2m+1}$ maps to the corresponding class in
$H^{\bullet}(\mathcal{V}_{0,m+1},\mathbf{k})$ under the restriction map to the
Lie subalgebra $\mathcal{V}_{0,m+1}\hookrightarrow\mathcal{V}_{n-1,m+1}$. And
from the discussion in [Pi1] section 3, it follows that this class further
maps to $e_{2m+1}\in H^{\bullet}(\mathfrak{gl}(n-1,m+1),\mathbf{k})$ under the
restriction map to the Lie subalgebra
$\mathfrak{gl}(n-1,m+1)\hookrightarrow\mathcal{V}_{n-1,m+1}$. Hence, all the differentials in $F_{\bullet}$ vanish on the generator $e_{2m+1}$, and by degree considerations $e_{2m+1}$ does not appear in the image of the differentials of any other generator $e_{i}$ or $e^{\prime}_{j}$. So we obtain a sub-spectral sequence of $F_{\bullet}$, which we denote by
$G_{\bullet}$, such that
$F_{\bullet}\simeq G_{\bullet}\otimes\mathbf{k}[e_{2m+1}].$
The component of $G_{1}$ corresponding to a diagram
$\lambda\in\mathcal{H}^{\circ}(m,n-1,k)$ for $0\leqslant k\leqslant n-1$ is
$(G_{1})_{\lambda}\simeq\mathbf{k}[e_{1},\ldots,e_{2m-1}]\otimes\mathbf{k}[e^{\prime}_{2(n-k)-1},\ldots,e^{\prime}_{2n-3}],$
and $G_{\bullet}$ converges to $\mathbf{k}$ (in degree $0$).
Let us compare spectral sequences $E_{\bullet}$ and $G_{\bullet}$. We
introduce an intermediate spectral sequence $\widetilde{E}_{\bullet}$ by
adding to $E_{\bullet}$ the “missing” classes $e_{2n-1}$ to all the small
diagrams $\lambda$ with $|\lambda|\leqslant m-n$. The differentials in the
spectral sequence $E_{\bullet}$ can be described as follows. For generators
$e_{2i-1}$ the differentials $d_{r}$ vanish up to layer $r=2i$, and on the
layer $E_{2i}$ they send
$(e_{2i-1}z_{\lambda})\mapsto\sum_{\mu\in\lambda\cdot c_{i}}z_{\mu},$
where the sum is taken over diagrams $\mu$ in the decomposition of the product of $\lambda$ and the Chern class $c_{i}$, i.e., $\mu$ is obtained from $\lambda$ by adding $i$ boxes such that no more than one box is added in each row. Here $z_{\lambda}$ denotes the generator of the component $(E_{1})_{\lambda}$.
Similarly, for generators $e^{\prime}_{2j-1}$ differentials vanish up to layer
$r=2j$ and on that layer they send
$e^{\prime}_{2j-1}z_{\lambda}\mapsto\sum_{\nu\in\lambda\cdot s_{j}}z_{\nu},$
where $\nu$ is a diagram in the decomposition of the product of $\lambda$ and the Segre class $s_{j}$, i.e., $\nu$ is obtained from $\lambda$ by adding $j$ boxes such that no more than one box is added in each column.
We define the spectral sequence $\widetilde{E}_{\bullet}$ by setting
$(\widetilde{E}_{1})_{\lambda}=\begin{cases}(E_{1})_{\lambda},&\text{if $|\lambda|>m-n$},\\ (E_{1})_{\lambda}\otimes\mathbf{k}[e_{2n-1}],&\text{if $|\lambda|\leqslant m-n$}.\end{cases}$
The differentials are defined as described above.
In fact one can construct a filtered complex for this spectral sequence
$\widetilde{E}_{\bullet}$. We start from the cochain complex
$C^{\bullet}=C^{\bullet}(\mathcal{V}_{m,n},\mathbf{k})$ with the filtration
induced by the Lie subalgebra $\mathfrak{gl}(m,n)$ as described in 1.2. In
every degree $p\geqslant 0$ this is a bounded filtration of $C^{p}$, therefore
the filtration of the entire complex is both complete and cocomplete. In this case we can construct a bicomplex $B^{\bullet\bullet}$, so that its
totalization equipped with one of the natural filtrations of the bicomplex
(say in the vertical direction) is filtered quasi-isomorphic to the cochain
complex $C^{\bullet}$. This can be seen as a special case of Koszul duality
between filtered complexes, that are identified via Rees construction with
complexes of (flat) $\mathbf{k}[u]$-modules, and on the dual side complexes of
$\mathbf{k}[\varepsilon]$-modules, which are identified with bicomplexes, where the horizontal differential is given by $d$ and the vertical differential by the action of $\varepsilon$ (for a brief summary we refer to [Pi2] section 1.2,
for a detailed discussion see for example [Po]).
Now, since the totalization $\mathop{\mathrm{Tot}}\nolimits
B^{\bullet\bullet}$ is filtered quasi-isomorphic to $C^{\bullet}$ the first
layers of the corresponding spectral sequences are isomorphic. We construct
bicomplex $\widetilde{B}^{\bullet\bullet}$ by putting
$\widetilde{B}^{p\bullet}\simeq\begin{cases}B^{p\bullet},&\text{if $p>m-n$},\\ B^{p\bullet}\otimes\mathbf{k}[e_{2n-1}],&\text{if $p\leqslant m-n$}.\end{cases}$
We set both the horizontal and vertical differentials to zero on $e_{2n-1}$. We will denote by $\widetilde{C}^{\bullet}$ the totalization $\mathop{\mathrm{Tot}}\nolimits\widetilde{B}^{\bullet\bullet}$ equipped with the filtration in the vertical direction.
Notice that $\widetilde{E}_{1}\simeq G_{1}\otimes\mathbf{k}[e_{2n-1}]$. In the spectral sequence
$G_{\bullet}$ the roles of generator classes $e$ and $e^{\prime}$ are
reversed; however, the diagrams appearing in $G_{\bullet}$ are the transposes of those appearing in $E_{\bullet}$, so the differentials in $G_{\bullet}$ have the same description as above. Therefore, since the spectral sequence $G_{\bullet}$ converges to $\mathbf{k}$, all potential targets for the differentials starting from the new class $e_{2n-1}$ are already killed in $G_{\bullet}$, and we find that $\widetilde{E}_{\bullet}$ converges to $\mathbf{k}[e_{2n-1}]$.
Finally, consider the short exact sequence
$C^{\bullet}\to\widetilde{C}^{\bullet}\to Q^{\bullet}\otimes\mathbf{k}e_{2n-1}$. Here the spectral
sequence for $Q^{\bullet}$ has only contributions from diagrams $\lambda$ with
$|\lambda|\leqslant m-n$ and
$(Q_{1})_{\lambda}\simeq\mathbf{k}[e_{1},\ldots,e_{2m-1}].$
This, as in the classical case, is isomorphic to the spectral sequence for the
fiber product
$X_{2(m-n)}=\mathop{\mathrm{sk}}\nolimits_{2(m-n)}BGL(m)\times_{BGL(m)}EGL(m).$
Since $\widetilde{E}_{\bullet}$ converges to $\mathbf{k}[e_{2n-1}]$, from the
long exact sequence we find that
$H^{0}(\mathcal{V}_{m,n},\mathbf{k})=\mathbf{k},$
$H^{i}(\mathcal{V}_{m,n},\mathbf{k})=0\quad\text{for $1\leqslant i\leqslant 2n$},$
$H^{i}(\mathcal{V}_{m,n},\mathbf{k})\simeq H^{i-2n}(X_{2(m-n)},\mathbf{k})\quad\text{for $i>2n$}.$
$\square$
## References
* [AF] A. Astashkevich, D. Fuchs. On the cohomology of the Lie superalgebra $W(m|n)$. Unconventional Lie algebras (1993).
* [BAF] I. Basdouri, M. Ben Ammar, N. Ben Fraj, M. Boujelbene, K. Kaouthar. Cohomology of the Lie Superalgebra of Contact Vector Fields on $\mathbb{R}^{1|1}$ and Deformations of the Superspace of Symbols. Journal of Nonlinear Mathematical Physics 16 (2009), 373–409.
* [ESS] I. Entova-Aizenbud, V. Serganova, A. Sherman. It takes two spectral sequences. https://arxiv.org/abs/2307.06156.
* [FK] A. Faouzi, K. Kaouthar. About the Cohomology of the Lie Superalgebra of Vector Fields on $\mathbb{R}^{n|n}$. Communications in Algebra 37 (2009), 2679–2687.
* [FKV] J. M. Figueroa-O’Farrill, T. Kimura, A. Vaintrob. The universal Vassiliev invariant for the Lie superalgebra $\mathfrak{gl}(1|1)$. Comm. Math. Phys. 185 (1997), 93–127.
* [FL] D. Fuchs, D. Leites. Cohomology of Lie superalgebras. C. R. Acad. Bulgare Sci., 37 (1984), 1595–1596.
* [Fuk] D. B. Fuks. Cohomology of Infinite-Dimensional Lie Algebras. Monographs in Contemporary Mathematics (1986).
* [Ful] W. Fulton. Young tableaux. With applications to representation theory and geometry. London Mathematical Society Student Texts 35, Cambridge University Press (1997).
* [GF] I. Gelfand, D. Fuks. Cohomology of Lie algebras of tangent vector fields of a smooth manifold. Funkts. Anal. Prilozhen. 3 (1969), 32–52.
* [HK] B. Hennion, M. Kapranov. Gelfand-Fuchs cohomology in algebraic geometry and factorization algebras. https://arxiv.org/abs/1811.05032.
* [Kl] A. Kleshchev. Linear and Projective Representations of Symmetric Groups. Cambridge Univ. Press (2005).
* [Ko] J.-L. Koszul. Les superalgèbres des Lie W(n) et leur représentations. Géométrie différentielle (Paris 1986). Travaux en Cours, 33, Hermann, Paris (1988), 161–171.
* [Mu] I. Musson. Lie Superalgebras and Enveloping Algebras. Graduate Studies in Mathematics 131. Amer. Math. Soc. (2012).
* [Pi1] S. Pimenov. Gelfand-Fuchs cohomology for affine superspaces $\mathbb{A}^{n,1}$. https://arxiv.org/abs/2210.16585.
* [Pi2] S. Pimenov. Monadicity of localization for Lie super-algebras $\mathfrak{gl}(m,n)$. https://arxiv.org/abs/2110.00802.
* [Po] L. Positselski. Two kinds of derived categories, Koszul duality, and comodule-contramodule correspondence. https://arxiv.org/abs/0905.2621.
Akanksha Agrawal [1], John Augustine [1], David Peleg [2], Srikkanth Ramachandran [1]
[1] Indian Institute of Technology Madras
[2] Weizmann Institute of Science, Israel
Recurrent Problems in the LOCAL Model
The paper considers the $\SUPPORTED$ model of distributed computing introduced by Schmid and Suomela [HotSDN'13], generalizing the $\LOCAL$ and $\CONGEST$ models. In this framework, multiple instances of the same problem, differing
from each other by the subnetwork to which they apply, recur over time,
and need to be solved efficiently online. To do that, one may rely on
an initial preprocessing phase for computing some useful information.
This preprocessing phase makes it possible, in some cases, to obtain improved
distributed algorithms, overcoming locality-based time lower bounds.
A first contribution of the current paper is expanding the spectrum of problem types to which the $\SUPPORTED$ model applies. In addition to subnetwork-defined recurrent problems, we also introduce recurrent problems of two additional types: (i) instances defined by partial client sets, and (ii) instances defined by partially fixed outputs.
Our second contribution is exploring and illustrating the versatility and applicability of the framework by examining new recurrent variants of three classical graph problems.
The first problem is Minimum Client Dominating Set ($\CDS$), a recurrent version of the classical dominating set problem in which each recurrent instance requires us to dominate a partial client set. We provide a constant time approximation scheme for the problem on trees and planar graphs, overcoming the locality-based $\Omega(\log^*n)$ lower bound.
The second problem is Color Completion ($\PCC$), a recurrent version of the coloring problem in which each recurrent instance comes with a partially fixed coloring (of some of the vertices) that must be completed.
We study the minimum number of new colors and the minimum total number of colors necessary for completing this task.
We show that it is not possible to find a constant time approximation scheme for the minimum number of additional colors required to complete the precoloring. On the positive side, we provide an algorithm that computes a $2$-approximation for the total number of colors used in the completed coloring (including the set of pre-assigned colors), as well as a one round algorithm for color completion that uses an asymptotically optimal number of colors.
The third problem we study is a recurrent version of Locally Checkable Labellings (LCL) on paths of length $n$. We show that such problems have complexities that are either $\Theta(1)$ or $\Theta(n)$, extending the results of
Foerster et al. [INFOCOM'19].
§ INTRODUCTION
The area of distributed network algorithms concerns the development and analysis of distributed algorithms operating on a network of processors interconnected by communication links. In particular, a substantial body of research has been dedicated to the development of various graph algorithms for problems whose input consists of the network topology. Examples of such problems are finding a maximal independent set (MIS) of the network, finding a maximal or maximum matching (MM), a minimum dominating set (MDS), a proper coloring with few colors, and so on, and considerable efforts were invested in developing sophisticated and highly efficient algorithms for them. Such algorithms are particularly significant in settings where the distributed network at hand is dynamic, and its topology keeps changing at a high rate.
The observation motivating the current study is that in many practical settings, the network itself may be static, or change relatively infrequently. In such settings, problems depending solely on the graph structure need be solved only once. In contrast, there are a variety of other problems, related to computational processes that occur repeatedly in the network, which need to be solved at a much higher frequency, and whose input consists of the network topology together with some other (varying) elements. For such problems, the traditional $\LOCAL$ model might not provide a satisfactory solution, in the sense that it may be unnecessarily expensive to solve the entire problem afresh for each instance. Rather, it may be possible to derive improved algorithmic solutions that take advantage of the fact that the network topology is static.
We refer to such problems as recurrent problems.
We envision that depending on the desired problems that the network needs to support, one can compute and store additional information about the topology of the network within each node, to enable recurrent problems to be solved faster. Inherently, this captures an aspect of network design: when a network is built, it may be useful to compute useful information about its topology, keeping in mind the recurrent problems that it must support during its lifetime.
This framework has already been studied in the literature as the $\SUPPORTED$ model <cit.>, wherein the recurrent problems are simply instances of the original problem on an (edge-induced) subgraph of the original graph. Edges of the original graph remain valid communication links.
We believe that the $\SUPPORTED$ model (as mentioned in <cit.>) does not fully capture all recurrent problems. To demonstrate this, we study a couple of natural extensions of the classical local problems of coloring and dominating set.
§.§ Recurrent Problems
We consider graph-related optimization problems each of whose instances $\langle G,\InS\rangle$ consists of a network topology $G=(V,E)$, on which the distributed algorithm is to be run, and some problem-specific input $\InS$.
The term "recurrent problem" refers to a setting where the network $G$ is fixed, and is the same in all instances (hence we often omit it). Formally, there is a stream of instances
that arrive from time to time and must be solved efficiently.
The optimization problem itself may be a variant of some classical graph optimization problem, except it has some additional restrictions, specified in each instance $\InS$.
Two concrete types of restrictions that are of particular interest are
partial client set (PCS) and
partially fixed output (PFO).
Partial client set (PCS)
An instance $\InS$ restricted in this manner specifies a subset $C\subseteq V$ of client vertices to which the problem applies. The rest of the vertices are not involved (except in their capacity as part of the network).
For example, consider the maximum matching problem. In the PCS variant of this problem, a PCS-restricted instance will specify a vertex subset $C$ such that the matching is only allowed (and required) to connect vertex pairs of $C$.
Partially fixed output (PFO)
An instance $\InS$ restricted in this manner specifies a part of the output. The rest of the output must be determined by the algorithm.
For example, consider the $k$-centers problem (where the goal is to select a subset $C$ of $k$ vertices serving as centers, so as to minimize the maximum distance from any vertex of $V$ to $C$). In the PFO variant of the $k$-centers problem, a PFO-restricted instance will specify a vertex subset $C_{pre}$ of $k'$ vertices that were already pre-selected as centers, and the task left to the algorithm is to select the remaining $k-k'$ centers.
Naturally, some recurrent problems may involve other novel restrictions as well as hybrids, thereby opening up the possibility for rich theory to be developed.
§.§.§ Two representative examples: CDS and PCC
In this paper, we will focus on two concrete examples for recurrent problems of practical significance, and use them for illustration. The first of these two example problems, named $\CDS$,
serves to illustrate a recurrent problem with PCS-restricted instances (where the set of clients changes in each instance). The second problem, named $\PCC$, illustrates a recurrent problem with PFO-restricted instances (where parts of the output are fixed in advance in each instance).
Minimum client-dominating set ($\CDS$)
In certain contexts, a dominating set $D$ in a network $G$ (i.e., such that every vertex $v\in V$ either belongs to $D$ or has a neighbor in $D$) is used for placing servers providing some service to all the vertices in the network (interpreted as clients), in settings where it is required that each vertex is served by a server located either locally or at one of its neighbors. The minimum dominating set (MDS) problem requires finding the smallest possible dominating set for $G$.
We consider the recurrent variant of the $\CDS$ problem with PCS-restricted instances. This problem arises in settings where the set of clients in need of service does not include all the vertices of $G$, but rather varies from one instance to the next. In such settings, the network $G$ is static, and from time to time a set of clients $C\subseteq V$, formed in an ad-hoc manner due to incoming user requests, asks to select and establish a (preferably small) subset $D$ of vertices from among their neighbors, which will provide them some service. In other words, the set $D$ is required to dominate the vertices in $C$. On the face of it, solving the minimum dominating set problem once on $G$ may be useful, but does not guarantee optimal results for each recurrent instance $\InS$; rather, for each instance $\InS$, it may be necessary to solve the specialized problem once the request appears in the network. Hereafter, we refer to this problem as minimum client-dominating set ($\CDS$).
Note that one may also consider a generalized problem that includes also a PFO component, by specifying in each instance $\InS$ also a partial set $D'$ of vertices that were pre-selected as servers (or dominators). Our results are presented for the $\CDS$ problem (without PFO restrictions), but it should be clear that they can easily be extended to the generalized problem with PFO restrictions[Essentially, for this problem, the pre-selected vertices of $D'$ can be used to satisfy all the clients that neighbor them, leaving us with a smaller set $C'$ of unsatisfied clients that need to be covered.].
Color Completion ($\PCC$)
In certain contexts, a proper coloring of a distributed network is used for purposes of scheduling various mutually exclusive tasks over the processors of the network. For example, suppose that performing a certain task by a processor requires it to temporarily lock all its adjacent links for its exclusive use, preventing their use by the processor at the other end. Employing a proper coloring as the schedule (in which all the processors colored by color $t$ operate simultaneously at round $t$) enables such mutually exclusive operation. Naturally, it is desirable to use as few colors as possible, in order to maximize parallelism.
We consider the recurrent variant of the coloring problem with PFO-restricted instances.
From time to time we may receive a partial (collision-free) coloring assignment to some subset $C\subseteq V$ of the vertices, representing processors constrained to operate on some particular time slots. We are required to color all the remaining vertices in $V\setminus C$ properly and consistently with the initial coloring. Typically, using colors already occurring in the precoloring (i.e., used by some vertices in the set $C$) is fine, since these time slots are already committed for the task at hand. However, it is desirable to use as few new time slots (or new colors) as possible, to minimize the overall time spent on the task.
Note that one may also consider a generalized problem that includes also a PCS component, by specifying in each instance $\InS$ also a partial set $V'$ of vertices that are interested in being scheduled, and hence need to be colored. Our results are presented for the $\PCC$ problem (without PCS restrictions), but it should be clear that they can easily be extended to the generalized problem with PCS restrictions[Essentially, for this problem, the vertices of $V\setminus V'$, which do not require coloring, can simply avoid participating in the coloring process.].
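To make the completion task concrete, the following is a minimal sequential greedy sketch (our illustration, not the distributed algorithm of Section 3; `adj` maps each vertex to its neighbors, `precolor` is the partial proper coloring, and colors are assumed to be integers):

```python
def complete_coloring(adj, precolor):
    """Greedily extend a partial proper coloring, preferring colors that
    already occur in the precoloring over opening new ones."""
    coloring = dict(precolor)
    palette = sorted(set(precolor.values()))   # time slots already committed
    next_new = (max(palette) + 1) if palette else 0
    for v in adj:
        if v in coloring:
            continue
        used = {coloring[u] for u in adj[v] if u in coloring}
        free = [c for c in palette if c not in used]
        if free:
            coloring[v] = free[0]      # reuse a committed slot if possible
        else:
            coloring[v] = next_new     # open a genuinely new time slot
            palette.append(next_new)
            next_new += 1
    return coloring
```

Preferring colors from the existing palette directly reflects the objective above: time slots already committed are free, and a new color is opened only when every committed slot conflicts.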
§.§ The model
The $\SUPPORTED$ model is an extension of the well-studied $\LOCAL$ and $\CONGEST$ models with an additional preprocessing phase.
Specifically, the solution to a problem in the $\SUPPORTED$ model consists of two stages: (i) a preprocessing stage and (ii) an online stage.
* In the preprocessing stage, run an algorithm $\cA_{pre}(G)$ on the topology of the network $G$ and obtain information $\Inf(G)$ to be stored at the network vertices (different vertices may of course store different information).
* During runtime, a stream of instances arrives. Whenever a new instance $\InS$ arrives, run an algorithm $\cA(\InS,\Inf(G))$ to solve this problem instance.
In view of the fact that the preprocessing stage takes place only once, the particulars of the preprocessing algorithm are less important to us, and we allow it to be arbitrary (even oracular).
For the scope of this paper, in the upper bounds that we show, all our preprocessing phases are decidable, whereas the lower bounds hold for any arbitrary preprocessing.
In the online stage, we insist that the computations performed by each node in a single round must be polynomial in the size of the graph. Therefore even knowledge of the complete network for each node might not be sufficient, as underlying information about the topology (such as chromatic number) might not be computable in polynomial time.
For a given problem $\Pi$ on a graph $G$, one may seek to optimize several parameters. For the scope of this paper, we consider only two: (i) the round complexity of the online algorithm, i.e., the number of synchronous rounds required to solve each recurrent instance, and (ii) the size of the output to each node in the preprocessing phase, i.e., the amount of additional information that needs to be stored in each node of the graph from the preprocessing phase.
We use $\SUPTIME(\Pi, G)$ to denote the worst case online round complexity of any deterministic algorithm across all instances of $\Pi$. We use $\SUPSPACE(\Pi, G)$ to denote the optimal size of the output to each node in the preprocessing phase that enables $\Pi$ to be solved in $\SUPTIME(\Pi, G)$ rounds in the online stage.
We use $\LTIME(\Pi, p)$ to denote the worst case round complexity for $\Pi$ in the classical local model on all graphs with given parameter $p$. Depending on the problem, $\LTIME(\Pi)$ may be described by a combination of different parameters of the input graph, such as the number of nodes $n$ or maximum degree $\Delta$.
§.§ Our Contributions
In Section 2, we study the $\CDS$ problem. We first show that even on a path, it is not possible to optimally solve $\CDS$ in $o(n)$ time. We next look at $1 + \epsilon$ approximations. We show that on trees and planar graphs, one can obtain a $1 + \epsilon$ approximation in $O(\frac{1}{\epsilon})$ and $\tilde{O}\left(\left(\frac{1}{\epsilon}\right)^{\log_{24/23}{3}}\right)$ rounds, respectively. To achieve these bounds, we only need to store $O(1)$ bits per node as the output of the preprocessing phase.
In Section 3, we study the $\PCC$ problem. We provide an algorithm to complete a given coloring using at most $\chi (\Delta + 1) / k$ new colors in $k$ rounds. We show that for $k = 1$, the number of colors used is asymptotically tight in the worst case.
In Section 4, we study a generic class of problems called Locally Checkable Labellings (LCL). We show that on a path, every LCL problem either has worst case complexity $\Theta(1)$ or $\Theta(n)$. In the specific case of recurrent problems where the online instances are a specific LCL on a sub-path of the given path (as considered in prior works such as <cit.>), we provide an efficient centralized algorithm to classify the LCL into one of the two cases and also construct the distributed algorithm to solve an LCL given its description, thereby extending the results in <cit.>. In our construction, the preprocessing phase requires only $O(1)$ additional bits to be stored per node.
Finally, in Section 5, we provide some partial results on sub-graph maximal matching and sub-graph maximal independent set that could potentially be useful in finding optimal solutions for these problems in the $\SUPPORTED$ model.
§.§ Related Work
The $\SUPPORTED$ model was first proposed by Schmid and Suomela <cit.>. Foerster et al. <cit.> provide several results, including lower bounds for problems such as sinkless orientation and approximating independent set. For global network optimization problems, such as minimum spanning tree, near optimal universal lower and upper bounds have been shown in <cit.>.
We stress that in all related prior work above, the problems to be solved are the same as the traditional problems, but on a subgraph of the given graph. Most of our solutions here are adaptations of existing algorithms for the relevant problems in the $\LOCAL$ model.
Dominating Set. Czygrinow et al. <cit.> provided an $O_{\epsilon}(\log^* n)$ round algorithm for a $1 + \epsilon$ approximation of the dominating set problem, and it was later extended to bounded genus graphs by Amiri et al. <cit.>. Foerster et al. <cit.> briefly discuss extending these results to the $\SUPPORTED$ model.
Coloring. Color completion has been one of the methods used for $(\Delta + 1)$-coloring graphs in $\log^*n + f(\Delta)$ rounds. Existing algorithms decide on a coloring of a subgraph of the given graph and then recursively complete the chosen coloring. Barenboim <cit.> provided the first algorithm with round complexity sublinear in $\Delta$. The current best known algorithm has round complexity $\log^*n + O(\sqrt{\Delta \log \Delta})$ (see <cit.>). Maus <cit.> also provided a smooth tradeoff between the number of colors and the round complexity: specifically, in $k + \log^* n$ rounds, graphs can be properly colored using $O(\Delta^2 / k)$ colors, for any $1 \leq k \leq \sqrt{\Delta}$. We note that Maus's algorithm does not provide a $\Delta + 1$ coloring but rather an $O(\Delta)$ coloring.
LCL. Locally Checkable Labellings (LCLs) were first proposed by Naor and Stockmeyer <cit.>. Chang et al. <cit.> showed gaps in the deterministic complexity of LCLs: the worst case deterministic round complexity of LCLs on any hereditary graph class is either $\omega(\log_{\Delta} n)$ or $O(\log^* n)$. They also showed that for paths, there is no LCL with complexity that is both $o(n)$ and $\omega(\log^* n)$. Later, Chang et al. <cit.> showed that on trees, the deterministic worst case complexity of every LCL is either $\Theta(1)$, $\Theta(\log^* n)$, $\Theta(\log n)$ or $n^{\Theta(1)}$. They also provide examples of LCLs with complexity $\Theta(n^{1/k})$ for any integer $k$. More recently, Balliu et al. <cit.> showed that for a more restricted class of LCL problems called homogenous LCL problems on rooted trees, there is a centralized algorithm that takes as input the description of the LCL and decides which of the above complexity classes it belongs to. Given an LCL, deciding its distributed complexity class on trees was shown to be hard by Chang <cit.>.
§ DOMINATING SETS
§.§ Client Dominating Set
Given a graph $G$ and a subset of its vertices $C \subseteq V(G)$, called the client set, we say that a subset $D$ is a client dominating set of $G, C$ if for every client $c \in C$, there exists $v \in D$ such that either $v = c$ or $v$ is a neighbor of $c$.
Given a graph $G$ and a subset of its vertices $C \subseteq V(G)$, called the client set, find a client dominating set of minimum size.
The $\CDS$ problem is of course a generalization of the Dominating Set problem, as the latter is precisely the case $C = V(G)$. It is also possible to reduce the $\CDS$ problem to an instance of the Dominating Set problem. Given a graph $G$ and a client set $C$, we can construct a graph $G_C$, obtained by adding a path on two vertices, $P_2$, to $G$ and connecting every non-client vertex (i.e., every vertex of $V(G) \setminus C$) to one end of the path $P_2$. See Figure <ref>(a).
[Figure: (a) PTAS-preserving reduction; (b) locality-preserving reduction. Black vertices are clients; thick edges and gray vertices are added.]
Given a graph $G$ and a client set $C \subseteq V(G)$, consider the graph $G_C$ with
* $V(G_C) = V(G) \cup \{u_1, u_2\}$ where $u_1, u_2 \not\in V(G)$ are two new vertices
* $E(G_C) = E(G) \cup \{(u_1, v) \mid v \in V(G) \setminus C\} \cup \{(u_1, u_2)\}$
For any $D \subseteq V(G_C)$,
$D \cap V(G)$ is a client dominating set of $G, C$ if and only if $D \cup \{u_1\}$ is a dominating set of $G_C$.
($\Rightarrow$) Suppose $D \cap V(G)$ is a client dominating set of $G, C$; then every vertex in $C$ has a dominator in $D \cap V(G)$. Now consider which vertices of $G_C$ are dominated by $D \cap V(G)$: the only vertices possibly left undominated in $G_C$ are the non-clients $V(G) \setminus C$ and the two vertices $u_1, u_2$, and $u_1$ dominates all of them. Therefore $(D \cap V(G)) \cup \{u_1\}$ dominates $G_C$. The set $D \cup \{u_1\}$ coincides with it up to possibly containing $u_2$ in addition; since a superset of a dominating set is dominating, $D \cup \{u_1\}$ must dominate $G_C$.
($\Leftarrow$) Suppose $D \cup \{u_1\}$ is a dominating set for $G_C$. The vertex $u_1$ dominates only the vertices in $V(G) \setminus C$ together with $u_1, u_2$. The dominators of the remaining vertices (i.e., of $C$) must thus lie solely in $V(G)$, i.e., they belong to $(D \cup \{u_1\})\cap V(G) = D \cap V(G)$.
Notice that given a dominating set $D$ of $G_C$, one can replace $u_2$ (if it occurs in the solution) with $u_1$, and then by Claim <ref>, $D \cap V(G)$ is a client dominating set. If $D$ is optimal, then $D \cap V(G)$ must be an optimal client dominating set. Furthermore, suppose a $1 + \epsilon$ approximation of the dominating set is known for $G_C$; then using Claim <ref> we obtain a client dominating set of size $(1 + \epsilon)(|D^*| + 1) - 1 = (1 + \epsilon)|D^*| + \epsilon \leq (1 + 2\epsilon) |D^*|$.
The above reduction is useful only for centralized algorithms: since it does not preserve locality (non-clients that are far apart in $G$ may be close in $G_C$), a distributed algorithm for dominating set does not immediately imply a distributed algorithm for $\CDS$ with the same round complexity. While we are unable to provide a locality-preserving reduction for $1 + \epsilon$ approximating a dominating set, we shall discuss one attempt, which is a slight modification of the above: to each non-client, connect a distinct path of length $2$, instead of the same path as we have done here (see Figure <ref>(b)). While the new reduction is locality-preserving and one can obtain an optimal solution via it, it does not seem straightforward to obtain an approximation. The reason is that the size of the dominating set for $G_C$ exceeds that of the corresponding client dominating set by an additive $|V(G)| - |C|$ term. Thus if $D^*$ is a client dominating set, the corresponding dominating set in $G_C$ has size $(1 + \epsilon) (|D^*| + |V(G)| - |C|) - (|V(G)| - |C|) = (1 + \epsilon) |D^*| + \epsilon (|V(G)| - |C|)$. The additive term $\epsilon (|V(G)| - |C|)$ is too expensive and does not lead to even a constant approximation, as $|D^*|$ could be arbitrarily small (even $1$) compared to $\epsilon (|V(G)| - |C|)$.
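The reduction is easy to test mechanically on tiny graphs; the following sketch (ours, using plain adjacency-set dictionaries and hypothetical vertex names "u1", "u2") builds $G_C$ and checks domination:

```python
def reduce_to_dominating_set(adj, clients):
    """Build G_C from the claim: add a two-vertex path u1-u2 and connect
    every non-client of G to u1 (adj: vertex -> set of neighbors)."""
    g = {v: set(nbrs) for v, nbrs in adj.items()}
    g["u1"], g["u2"] = {"u2"}, {"u1"}
    for v in adj:
        if v not in clients:
            g[v].add("u1")
            g["u1"].add(v)
    return g

def dominates(adj, targets, D):
    """Check that every target vertex is in D or has a neighbor in D."""
    return all(t in D or adj[t] & D for t in targets)
```

On tiny graphs, enumerating all candidate sets $D$ with these two helpers confirms both directions of Claim <ref> by brute force.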
§.§ Lower Bound for Paths
We establish two lower bounds for $\CDS$ on a path.
First, we argue that, regardless of the preprocessing, the online runtime of every (exact) deterministic distributed algorithm for the $\CDS$ problem must be $\Omega(D)$ on networks of diameter $D$.
Second, we show that the online runtime of every deterministic distributed approximation algorithm for $\CDS$ with ratio $1+\epsilon$ must be $\Omega(1/\epsilon)$ on some input.
Let $\cA$ be a deterministic distributed local algorithm for $\CDS$ with arbitrary preprocessing. Then there exists some input for which $\cA$ requires $\Omega(D)$ time.
We prove the statement by contradiction. Suppose there exists a deterministic algorithm $\cA$ whose worst case run time is $o(D)$.
Consider a path $P=(v_1, v_2, \dots, v_n)$, where $n=4k+2$ for even $k$, and the following two instances of clients (see Figure <ref>):
* $C_1 = \{v_2, v_4, \dots v_{4k}\}$, i.e., every vertex at an odd distance from the leftmost vertex except $v_n$.
* $C_2 = \{v_4, v_6, \dots v_{4k+2}\}$, i.e., every vertex at an odd distance from the leftmost vertex except $v_{2}$.
[Figure: the two instances $C_1$ (a) and $C_2$ (b), differing in $v_2$ and $v_n$. Red double circles denote clients; the blue node is $v_{2k+1}$.]
Both these instances have unique optimal solutions that are disjoint. For $C_1$, the optimal solution is to place the dominators at $v_3, v_7, \dots v_{4k-1}$, whereas for $C_2$ the optimal solution is to place them at $v_5, v_9, \dots v_{4k+1}$. Consider the vertex $v_{2k+1}$. It must be chosen as a dominator in exactly one of the two given instances. Since $\cA$ operates in $t = o(D) = o(k)$ rounds, the inputs in the $t$-neighborhood of $v_{2k+1}$, which are observable to $v_{2k+1}$ during the execution, are identical in both instances, and hence the output of $v_{2k+1}$ must be identical as well, yielding the desired contradiction.
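The construction is small enough to verify by brute force for a fixed even $k$; the following sketch (ours) recovers the two disjoint optimal solutions claimed above:

```python
from itertools import combinations

def min_cds_on_path(n, clients):
    """Brute-force minimum client-dominating set on the path v_1..v_n
    (tiny n only) -- used to sanity-check the instances from the proof."""
    for size in range(n + 1):
        for D in map(set, combinations(range(1, n + 1), size)):
            if all(c in D or c - 1 in D or c + 1 in D for c in clients):
                return D

k = 2                                   # any even k; n = 4k + 2
n = 4 * k + 2
C1 = set(range(2, 4 * k + 1, 2))        # v_2, v_4, ..., v_{4k}
C2 = set(range(4, 4 * k + 3, 2))        # v_4, v_6, ..., v_{4k+2}
print(min_cds_on_path(n, C1))           # {3, 7}  = {v_3, ..., v_{4k-1}}
print(min_cds_on_path(n, C2))           # {5, 9}  = {v_5, ..., v_{4k+1}}
```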
Let $\cA$ be a deterministic distributed local approximation algorithm for $\CDS$,
with arbitrary preprocessing, whose online runtime on every path and every instance is at most $k=4\ell+1$ for some integer $\ell\ge 1$.
There exists a network and a set of clients for which the approximation ratio
of $\cA$ is at least $1+1/(k+2)$.
[Figure: an illustration of the instance $(P,S)$ for $\ell=1$; here $k=5$. The client vertices of the set $S$ are drawn as double red circles. The vertices included in the optimal dominating set $D^{*}$ for $S$ are marked by $*$, and those in the optimal dominating set $D^{*'}$ for the modified instance $S'$ are marked by $*'$.]
Consider an algorithm $\cA$ as specified in the theorem.
Let $P=(v_1,v_2,\ldots,v_{4k+7})$ be a path with $ID(v_i)=i$ for every $i$.
For $i\le j$, denote by $P[v_i,v_j]$ the subpath of a path $P$ from $v_i$ to $v_j$.
Assume an arbitrary preprocessing stage took place, providing the vertices
of $P$ with some additional information.
Let the client set $S$ consist of all the odd-indexed vertices on $P$.
Consider the execution of $\cA$ on $P$ and $S$.
Partition $P$ into three subpaths, $A=P[v_{1},v_{k+3}]$, $B=P[v_{k+4},v_{3k+5}]$, and $C=P[v_{3k+6},v_{4k+7}]$. (See Fig. <ref> for an illustration.)
Let $D$ be the set of vertices chosen to the dominating set by the algorithm.
For $X\in\{A,B,C\}$,
let $S[X]=S\cap X$ be the set of clients in the subpath $X$,
and $D[X]=D\cap X$ be the set of dominators selected in $X$.
There are three cases to consider.
Case (1): $|D[B]| \ge 2\ell+2$.
Note that no matter where the dominators of $D[B]$ are placed within
the subpath $B$, at least $\ell+1$ dominators must be selected
in the subpath $C$ in order to dominate all the clients of $S[C]$.
In particular, this holds even if some dominator in $D[B]$
dominates the leftmost client in $C$, $v_{3k+6}$
(node 21 in Fig. <ref>).
Similarly, at least $\ell+1$ dominators must be selected
in the subpath $A$ in order to dominate all the clients of $S[A]$.
(Here, the dominators in $D[B]$ cannot help.)
Altogether, $|D|\ge 4\ell+4$.
On the other hand, note that the unique optimum solution for this instance,
consists of $4\ell+3$ dominators (see Fig. <ref>).
Hence in this case, the approximation ratio of $\cA$ is no better than $(4\ell+4)/(4\ell+3)$.
Case (2): $|D[B]| = 2\ell+1$ but $D[B]$ does not dominate all of $S[B]$.
In this case, some of the clients of $S[B]$ must be dominated by dominators
outside the subpath $B$. Inspecting the structure, it is clear that
the only client in $B$ that may be dominated by a dominator outside $B$
is the leftmost client, $v_{k+4}$ (node 9 in Fig. <ref>),
and the only way to do that is by selecting $v_{k+3}$, the rightmost node in $A$,
to $D$.
It is also clear that despite such a selection, $D[A]$ must contain at least
$\ell+1$ additional dominators in order to dominate all the clients
of $S[A]$.
Also, $|D[C]|\ge \ell+1$ is necessary to dominate $S[C]$.
Hence again, overall $|D|\ge 4\ell+4$, yielding the same approximation ratio
as in case (1).
Case (3): $|D[B]| = 2\ell+1$ and $D[B]$ dominates all of $S[B]$.
Note that in this case, the unique choice is $D[B]=\{v_{k+5},v_{k+9},\ldots,v_{3k+3}\}$, with each dominator covering exactly two clients of $S[B]$.
Define another instance consisting of
the client set $S'=S\setminus\{v_1,v_{4k+7}\}$, namely, $S$ with the first
and last vertices omitted, and consider the execution of algorithm $\cA$ on this instance.
Notice that in a $k$-round distributed execution, each node is exposed only
to information collected from its distance-$k$ neighborhood.
This implies that the vertices of $B$ see exactly the same view in this new
execution on $S'$ as in the execution on $S$, so their output must be the same.
Hence $D'[B]=D[B]$ (and hence $|D'[B]|=2\ell+1$).
Also note that despite the fact that each of $A$ and $C$ now have
one client fewer than in $S$,
we must have $|D'[A]|\ge \ell+1$ and $|D'[C]|\ge \ell+1$
in order to dominate all the clients of $S'[A]$ and $S'[C]$, respectively.
Hence in total $|D'| \ge 4\ell+3$. However, for this instance the optimum
solution $D^{*'}=\{v_4,v_8,\ldots,v_{4k+4}\}$ is smaller,
consisting of only $k+1=4\ell+2$ vertices (see Fig. <ref>).
Hence in this case, the approximation ratio of $\cA$ is
$(4\ell+3)/(4\ell+2)$ or higher.
In summary over all cases, the approximation ratio of $\cA$ is
$\displaystyle \min\left\{\frac{4\ell+3}{4\ell+2}~,~ \frac{4\ell+4}{4\ell+3}\right\} ~=~ \frac{k+3}{k+2} ~=~ 1+\frac{1}{k+2}$.
§.§ A CTAS for Trees
In this section we describe the CTAS for $\CDS$ on trees,
prove its correctness and analyze its complexity.
Like the CTAS on a path, the algorithm for trees is based on a preprocessing stage in which the tree is partitioned into subtrees of depth $O(k)$ for an integer parameter $k$. Each recurrent instance is then solved by computing a local near-optimal CDS on each subtree, while taking care to ensure that the resulting solutions combine into a $1+4/(k-1)$ approximation of the optimal global solution.
The “interface” between adjacent subtrees is more difficult to handle than in the case of paths, as making a single change in the selection in one subtree (e.g., in one of its leaves) might affect several adjacent subtrees, which makes both the algorithm and its analysis somewhat more complex.
Let us first describe the preprocessing stage, which is applied to the network tree $T$.
The algorithm has an integer parameter $\ell\ge 1$ and sets $k=4\ell+1$.
Root the tree $T$ at a root vertex $r_0$, and mark each vertex $v$ by
its layer $\layer(v)$, namely, its distance from $r_0$
(by definition $\layer(r_0)=0$).
Partition the tree $T$ into subtrees by taking every vertex $v$
with $\layer(v)=pk$ for integer $p\ge 0$ as a root and defining $T[v]$ as
the subtree of depth $k$ rooted at $v$. See Fig. <ref>(a).
For notational convenience, we sometimes use $T[v]$ to denote also the
vertex set of the subtree $T[v]$.
Also, for any subtree $T[v]$ and vertex set $X\subseteq T$,
we denote $X[v] = X \cap T[v]$.
[Figure: (a) a decomposition of the tree $T$ into subtrees for $\ell=1$, $k=5$; layer-leaves are marked by a blue dashed circle, and real leaves by a green double circle. (b) a cut-tree, $k=5$; the client vertices of $S$ are drawn as double red circles, the cuts along root-to-root paths are marked by blue dashed ellipses, and the peak-tree $\tT[v]$ is marked by a purple dashed curve.]
The leaves of a subtree $T[v]$ can be classified into real leaves and layer-leaves, namely, leaves of $T[v]$ that are internal nodes in $T$. A subtree that has no other subtree below it (namely, all of whose leaves are real) is called a leaf-subtree, or simply a leaf-tree. Otherwise, it is called an internal-subtree. (See Fig. <ref>(a).)
We partition the vertices of $T$ into two subsets. Let $\lleaves$ be the set of all layer-leaves, and $\main$ be the set of all remaining vertices. This induces a partition of the vertices of each subtree $T[v]$ into $\lleaves[v]$ and $\main[v]$. (For a leaf-tree, $\lleaves[v]=\emptyset$.)
During the recurrent stage, each instance consists of a set $S$ of clients. This induces additional distinctions on the tree structure.
Internal subtrees are classified into two types.
An internal subtree $T[v]$ is called a cut-tree if on every path from $v$ to a root $w$ hanging from a layer-leaf of $T[v]$ there are two consecutive vertices that do not belong to $S$. See Fig. <ref>(b).
The figure also illustrates the fact that in a cut-tree $T[v]$ one can identify a subtree $\tT[v]$, referred to as the peak of $T[v]$, with the property that for every edge $(x,y)$ connecting a vertex $x\in\tT[v]$ to a child $y\notin\tT[v]$, both $x,y\not\in S$. This implies that nodes below $\tT[v]$ cannot help in dominating clients in $\tT[v]$, namely, taking them into $D$ cannot dominate client vertices in $\tT[v]$.
$T[v]$ is a full-tree if it is not a cut-tree, namely, there is at least one path from $v$ to a root $w$ hanging from some layer-leaf of $T[v]$ with no two consecutive vertices of $V\setminus S$.
The idea behind the approximation scheme is as follows.
Our algorithm solves the $\CDS$ problem separately,
in an optimal manner, on each subtree $T[v]$ of depth at most $k$
for the client set $S[v]$.
This can be done in time $O(k)$, but might entail inaccuracies.
As illustrated in the lower bound of Sect. <ref>,
the main hindrance to the accuracy of a local distributed algorithm for $\CDS$
stems from long paths with a periodic occurrence of client vertices.
Such a path, threading its way along some root-to-leaf path in $T$,
might be handled poorly by the local computations.
Our goal is to bound the loss by at most 1 per subtree in the decomposition.
This is justified for full-trees, since in a full-tree the optimum solution $D^*$ must also use $\Omega(k)$ dominators to cover all the clients, so the inaccuracy ratio is just $1/\Omega(k)$.
This approach is made complicated by the fact that some subtrees are not full, and may require only a small number of dominators. For such subtrees (specifically, leaf-trees and cut-trees), we cannot allow the algorithm to “waste” more than the optimum solution. Hence, when comparing the number of dominators used by the algorithm to that of the optimum $D^*$, we must use an accounting method that equates the costs over leaf-trees and cut-trees, and charges all the “waste” to full-trees.
This is done as follows. In a first phase, we locally solve the problem optimally in each leaf-tree and cut-tree. This is only used in order to decide, for each such subtree $T[v]$, whether the root's parent, denoted $\parent(v)$, must belong to the dominating set. This is important since these vertices cover the “interference layers” between adjacent subtrees.
For the full-trees, an optimal local solution cannot be computed. Therefore, we artificially impose a “waste” in every full-tree $T[v]$, by selecting the parent of its root, $\parent(v)$, as a dominator, whether or not it is necessary. As explained above, this “waste” is justified by the fact that $D^*$ must also use $\Omega(k)$ dominators in these subtrees.
As a result, when we compute a dominating set for the remaining undominated clients in the second phase of the algorithm, the size of the solution computed by the algorithm on each subtree $T'$ is no greater than the number of dominators of $D^*$ in $T'$.
§.§.§ Optimal procedure $\PROC$
A simple procedure $\PROC$ we use is an optimal algorithm for $\CDS$ on rooted trees, which runs in time $O(\mathsf{depth}(T))$ on a tree $T$.
The algorithm starts with an empty set of dominators $D$ and works its way
from the leaves up, handling each node $w$ only after it finishes handling
all its children.
It adds $w$ to the set $D$ in one of the following two cases:
(1) Some of $w$'s children are clients and are not yet dominated, or
(2) $w$ itself is an undominated client and is the root.
It is easy to verify that this algorithm yields a minimum cardinality solution
for $\CDS$. It is also easy to implement this greedy algorithm
as an $O({\sf depth}(T))$ time distributed protocol.
Modification for subtrees:
When applying this procedure to a subtree $T[v]$ of $T$ where $v$
is not the root of $T$, we make the following small but important modification:
When the procedure reaches $v$ itself, if $v\in S$ and $v$ is still
non-dominated, then we add $\parent(v)$ instead of $v$ to the solution.
(This can be done since $\parent(v)$ belongs to the tree $T$,
although it is outside the subtree $T[v]$.)
§.§.§ Approximation algorithm $\AA$
For every subtree $T[v]$:
  Decide if it is a leaf tree, a cut tree or a full tree; set $D^{\lleaves}[v] \gets \emptyset$
$\RLT \gets \{v \mid T[v]~\mbox{ is a leaf tree}\}$ (* leaf-tree roots *)
$\RCT \gets \{v \mid T[v]~\mbox{ is a cut tree}\}$ (* cut-tree roots *)
$\RFT \gets \{v \mid T[v]~\mbox{ is a full tree}\}$ (* full-tree roots *)
$\RT \gets \RLT\cup\RCT\cup\RFT$ (* all subtree roots *)
(* First dominator selection phase *)
For every leaf tree $T[v]$:
  Apply Procedure $\PROC$ to $(T[v],S[v])$
  If $\parent(v)\in T[w]$ was selected as a dominator,
    let $D^{\lleaves}[w] \gets D^{\lleaves}[w] \cup \{\parent(v)\}$
For every cut tree $T[v]$:
  Apply Procedure $\PROC$ to the peak-tree $(\tT[v],S\cap\tT[v])$
  If $\parent(v)\in T[w]$ was selected as a dominator,
    let $D^{\lleaves}[w] \gets D^{\lleaves}[w] \cup \{\parent(v)\}$
For every full tree $T[v]$, with $\parent(v)\in T[w]$:
  $D^{\lleaves}[w] \gets D^{\lleaves}[w] \cup \{\parent(v)\}$
$D^{\lleaves} \gets \bigcup_{v\in\RT} D^{\lleaves}[v]$
let $S'$ be the set of all vertices that are dominated by $D^{\lleaves}$
$S''\gets S\setminus S'$ (* Remaining undominated clients *)
(* Second dominator selection phase *)
For every subtree $T[v]$:
Apply Procedure $\PROC$ to $(T[v],S''[v])$
Let $D^{\main}[v]$ be its output set of dominators (* these are all internal nodes *)
$D^{\main} \gets \bigcup_{v\in\RT} D^{\main}[v]$
$D^{\cA} \gets D^{\lleaves} \cup D^{\main}$
§.§.§ Analysis
For an instance $(T,S)$ of $\CDS$, a set $D$ is said to be an upmost dominating set if it has the following property:
For every $w\in D$, replacing $w$ by $\parent(w)$ results in a non-dominating set.
(This property also suggests a natural bottom-up process for transforming
a solution $D$ into an upmost solution $D'$ of the same size.)
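For illustration, this bottom-up transformation can be sketched in Python as follows (a centralized rendering under our own map-based tree representation; each successful lift moves a dominator strictly closer to the root, so the process terminates):

def make_upmost(D, parent, children, clients):
    # Repeatedly replace a dominator w by parent(w) whenever all clients
    # remain dominated; |D| is preserved. parent[root] is None. For a
    # minimal D, lifting onto a node already in D can never keep domination,
    # so such lifts are skipped.
    D = set(D)

    def dominates_all(Dset):
        return all(c in Dset or parent.get(c) in Dset
                   or any(u in Dset for u in children.get(c, []))
                   for c in clients)

    changed = True
    while changed:
        changed = False
        for w in list(D):
            p = parent.get(w)
            if p is not None and p not in D:
                lifted = (D - {w}) | {p}
                if dominates_all(lifted):
                    D, changed = lifted, True
    return D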
Denote the optimum solution by $D^*$. Without loss of generality we may assume that $D^*$ is an upmost dominating set.
The following is immediate from the definition of upmost dominating sets.
Consider an instance $(T,S)$ of $\CDS$ and an upmost dominating set $D$ for it.
If $v\in D$, then there exists
some child $v'$ of $v$
in $T$ such that $v'\in S$ and $v$ is its only dominator
(or in other words, no child of $v'$ is in $D$).
For any instance $(T,S)$ of $\CDS$, the dominating set selected by Procedure
$\PROC$ is equal to the unique optimum upmost solution $D^*$.
We further partition the dominators of $D^*[v]$ into subsets,
according to whether they are layer-leaves or internal nodes,
and identify also the set of all external dominators,
namely, dominators that are either outside $T[v]$ or layer-leaves.
$D^{*,\lleaves}[v] = D^* \cap \lleaves[v]$,
$D^{*,\main}[v] = D^* \cap \main[v]$,
$D^{*,ext}[v] = D^* \setminus D^{*,\main}[v]$,
$D^{*,\lleaves} = \bigcup_{v\in\RT} D^{*,\lleaves}[v]$,
$D^{*,\main} = \bigcup_{v\in\RT} D^{*,\main}[v]$.
We also partition the vertices in each set $D^{\lleaves}[v]$ into two subsets.
\begin{align*}
D_{C,L}^{\lleaves}[v] = & \{ w \mid w ~\mbox{ was added to }~ D^{\lleaves}[v]
~\mbox{ in Steps <ref> and <ref> of the algorithm} \},
\\
D_F^{\lleaves}[v] = & \{ w \mid w ~\mbox{ was added to }~ D^{\lleaves}[v]
~\mbox{ in Step <ref> of the algorithm} \},
\end{align*}
$D_{C,L}^{\lleaves} = \bigcup_{v\in\RT} D_{C,L}^{\lleaves}[v]$,
$D_{F}^{\lleaves} = \bigcup_{v\in\RT} D_{F}^{\lleaves}[v]$.
For every $v\in\RLT$, where $z=\parent(v)\in T[w]$,
(a) $D^{\lleaves}[v] = D^{*,\lleaves}[v] = \emptyset$, and
(b) $D^{\main}[v] = D^{*,\main}[v]$.
Claim (a) follows trivially since leaf trees have no layer-leaves,
so $\lleaves[v]=\emptyset$.
Claim (b) follows from the observation that for a leaf tree $T[v]$,
both $D$ and $D^*$
induce optimum upmost dominating sets for $T[v]$, and these induced
dominating sets, $\tilde D$ and ${\tilde D}^*$,
are identical by Obs. <ref>.
It may be instrumental to pause and make the following observation
concerning the sets $\tilde D$ and ${\tilde D}^*$ discussed in the above proof.
For the purpose of dominating the clients of $S[v]$, either both sets contain
$z=\parent(v)$ or both do not. One might hope that this will establish that
$D^{*,\lleaves}[w] = D^{\lleaves}[w]$.
However, this argument is false,
since we need to account for the possibility that one of the dominating sets
($D$ or $D^*$) includes $z$ in order to dominate another client child $v'$,
other than $v$, while the other dominates $v'$ in some other way,
and does not include $z$.
Nevertheless, we can prove the following weaker properties, which suffice for our purpose.
$D_{C,L}^{\lleaves}[v] ~\subseteq~ D^{*,\lleaves}[v] ~\subseteq~ D^{\lleaves}[v]$
for every $v\in\RT$.
To prove the second containment, suppose $z \in D^{*,\lleaves}[v]$.
As $D^*$ is an upmost dominating set for $T$,
by Obs. <ref>,
$z$ must have some child $v'$ such that $v'\in S$ and $v'$ is not dominated
by any of its children in $T$.
We argue that this $v'$ forces $z$ to be in $D^{\lleaves}[v]$.
To see this, consider the following three cases.
* If $v'\in\RLT$, then both $D\cap(T[v']\cup\{z\})$ and $D^*\cap(T[v']\cup\{z\})$
are optimum upmost dominating sets for $T[v']$, hence they are identical
by Obs. <ref>, and therefore $z\in D$, implying
that $z \in D^{\lleaves}[v]$.
* If $v'\in\RCT$, then both $D\cap(\tT[v']\cup\{z\})$ and
$D^*\cap(\tT[v']\cup\{z\})$ are optimum upmost dominating sets for $\tT[v']$,
and $z \in D^{\lleaves}[v]$ by the same argument.
* If $v'\in\RFT$, then $z=\parent(v') \in D^{\lleaves}[v]$ by Step <ref>
of the algorithm.
To prove the first containment, suppose $z \in D_{C,L}^{\lleaves}[v]$.
Then $z$ was added in Step <ref> or <ref> of the algorithm
to the set $D^{\lleaves}[v]$ since it belonged to the dominating set $D[v']$
generated by Procedure $\PROC$ for the subtree $T[v']$ for some vertex $v'$ with $\parent(v')=z$.
As the procedure always generates an upmost dominating set,
it follows that $v'\in S$, and after selecting $D[v']$,
$v'$ is still not dominated.
As $D^*$ also induces an upmost dominating set for $T[v']$,
the same holds for $D^*[v']$, hence $\parent(v')=z$ must be in $D^*$ as well,
i.e., $z \in D^{*,\lleaves}[v]$.
For every $v\in\RCT$, $D\cap \tT[v] = D^*\cap \tT[v]$.
The claim follows from the observation that for a cut tree $T[v]$,
both $D$ and $D^*$
induce optimum upmost dominating sets for $\tT[v]$, and these induced
dominating sets, $\tilde D$ and ${\tilde D}^*$,
are identical by Obs. <ref>.
We make use of the following straightforward monotonicity property.
For every rooted tree $T$ and two client sets $S_1\subseteq S_2$,
the corresponding optimum dominating sets $D_1$ and $D_2$,
for $(T,S_1)$ and $(T,S_2)$ respectively, satisfy
$|D_1| \le |D_2|$.
$|D^{\main}[v]| \le |D^{*,\main}[v]|$
for every $v\in\RFT\cup\RCT$.
Recall that $S'[v]$ is the set of clients from $S[v]$ that were dominated by
the set of dominators $D^{\lleaves}$ selected in the first phase.
Let $S^{*'}[v]$ be the set of clients from $S[v]$ that are dominated by
the external dominators $D^{*,ext}[v]$. We claim that
$S^{*'}[v] ~\subseteq~ S^{'}[v]$.
Consider some client $z\in S^{*'}[v]$, which is dominated by some external
dominator in $D^{*,ext}[v]$. We need to show that $z$ is also dominated
by some external dominator in $D^{\lleaves}$, so $z\in S^{'}[v]$.
Figure: The potential external dominators of $\main[v]$.
Note that the only internal nodes of $\main[v]$ that can potentially
be dominated by external dominators are the root $v$,
which can be dominated by $\parent(v)$,
and the parents of nodes in $\lleaves[v]$, which can be dominated by
their children.
Hence there are two cases to consider.
(1) $z=v$:
Then $z$ is dominated in $D^{*,ext}[v]$ by $\parent(z)$, which is its only
potential external dominator.
This implies that $\parent(z) \in D^{*,\lleaves}[w]$ for some $w$.
By the second containment in Lemma <ref>,
also $\parent(z)\in D^{\lleaves}[w]$, so $z\in S^{'}[v]$.
(2) $z=\parent(w)$ for some $w\in D^{*,\lleaves}[v]$:
Then $z$ is dominated in $D^{*,ext}[v]$ by $w$.
By the second containment in Lemma <ref>,
also $w\in D^{\lleaves}[v]$,
so $D^{\lleaves}$ dominates $z$ as well, hence $z\in S^{'}[v]$.
Recall that $S''[v]=S[v] \setminus S'[v]$, the clients from $S[v]$
that were not dominated by the end of the first phase.
Letting $S^{*''}[v]=S[v] \setminus S^{*'}[v]$,
we have by Claim <ref> that
$$S^{''}[v] ~\subseteq~ S^{*''}[v].$$
The clients of $S^{*''}[v]$ must be dominated by $D^{*,\main}[v]$,
and the clients of $S^{''}[v]$ are dominated by $D^{\main}[v]$.
The lemma now follows from Obs. <ref>.
Lemma <ref> and Obs. <ref>(b) yield
$|D^{\main}| \le |D^{*,\main}|$.
Combining with the first containment in Lemma <ref>,
$$|D^{\cA}| - |D_{F}^{\lleaves}| ~=~ |D^{\main}| + |D_{C,L}^{\lleaves}|
~\le~ |D^{*,\main}| + |D^{*,\lleaves}| = |D^*|.$$
Denote by $t_f$ (respectively, $t_c$) the number of full trees
(resp., cut trees) in the decomposition of $T$.
Noting that $|D_F^{\lleaves}| = t_f$, we get that
$|D^{\cA}| \le |D^*|+t_f$.
Since a full tree contains a path on which, of any two consecutive vertices, at least one is in $S$,
it is immediate that
$|D^*[v]| \ge (k-1)/4$ for every full tree $T[v]$,
and therefore
$|D^*| \ge t_f \cdot (k-1)/4$.
It follows that the approximation ratio of the algorithm satisfies
$$\rho ~\le~ \frac{|D^{\cA}|}{|D^*|} ~\le~ \frac{|D^*|+t_f}{|D^*|}
~=~ 1 + \frac{t_f}{|D^*|} ~\le~ 1 + \frac{t_f}{t_f \cdot (k-1)/4}
~=~ 1 + \frac{4}{k-1}~.$$
We get the following result.
For every positive integer $k$, there exists a deterministic distributed local approximation algorithm for $\CDS$, with preprocessing allowed, whose online runtime on every $n$-vertex tree and every instance is at most $O(k)$ with approximation ratio of at most $1 + \frac{4}{k-1}$.
§.§ A CTAS for $\CDS$ on Planar Graphs
§.§.§ Constant Approximation for $\CDS$ on Planar Graphs
The state-of-the-art algorithm for constant-round planar dominating set approximation in the $\LOCAL$ model achieves an approximation ratio of $20$, by a recent work of Heydt et al. <cit.>. Their algorithm and analysis extend to the client dominating set problem with slight modifications. See Algorithm <ref> for the pseudocode.
Constant Approximation for MCDS in Planar Graphs
$C \gets$ client set
For every $A \subseteq V(G)$, define $N_C(A) = \{w \mid w \in C \text{ and } (w, v) \in E(G) \text{ for some } v \in A \}$
$N_C[A] = N_C(A) \cup (A \cap C)$
$D_1 \gets \{v \in V(G) \mid \forall A \subseteq V(G) \setminus \{v\}, N_C[A] \supseteq N_C(v) \Rightarrow |A| \geq 4\}$
For every $v \in V(G)$, compute $B_v ~=~ \left\{w \in V(G) \setminus D_1 ~\bigm|~ | N_C(v) \cap N_C(w) | \geq 10 \right\}$
$D_2 \gets \left\{v \in V(G) \setminus D_1 \bigm| B_v \neq \emptyset \right\}$
$D_3 \gets C \setminus N_C[D_1 \cup D_2]$
Return $D_1 \cup D_2 \cup D_3$
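To help parse the set definitions, a centralized brute-force Python rendering of the algorithm follows; note that the $D_1$ test only needs candidate sets $A$ of size at most $3$, we read $B_v$ as ranging over $w \neq v$, and the adjacency-map interface is our own assumption (this sketch favors clarity over efficiency):

from itertools import combinations

def mcds_constant_approx(V, adj, C):
    # V: set of vertices; adj: dict v -> set of neighbors; C: set of clients.
    NC = {v: adj[v] & C for v in V}            # client neighbors N_C(v)

    def nc_closed(A):                          # N_C[A] = N_C(A) plus clients in A
        return set().union(*(NC[a] for a in A)) | (set(A) & C)

    def coverable_by_3(v):                     # some A, |A| <= 3, covering N_C(v)?
        others = V - {v}
        return any(NC[v] <= nc_closed(A)
                   for r in range(4) for A in combinations(others, r))

    D1 = {v for v in V if not coverable_by_3(v)}
    D2 = {v for v in V - D1
          if any(len(NC[v] & NC[w]) >= 10 for w in V - D1 - {v})}
    D3 = C - nc_closed(D1 | D2)                # remaining undominated clients
    return D1 | D2 | D3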
Algorithm <ref> provides a $39$-approximation for the MCDS problem in planar graphs.
The proof outline is almost the same as that in <cit.>. Let $D^*= \{b_1, b_2, \dots, b_{|D^*|}\}$ be some optimal solution for a given MCDS instance. Define the set
\begin{equation}
\label{eq:def hatD}
\hat{D} ~~=~~ \{v \in V(G) ~~\bigm|~~ \mbox{ for every }~ A \subseteq D^* \setminus \{v\},~~ N_C[A] \supseteq N_C(v) \Rightarrow |A| \geq 4 \}.
\end{equation}
Observe that $\hat{D}$ is defined similarly to the set $D_1$ constructed in the algorithm, except that $V(G)$ is replaced with $D^*$. Every element in $D_1$ must also belong to $\hat{D}$, so
\begin{equation}
\label{eq:D1 Dhat}
D_1 \subseteq \hat{D}
\end{equation}
$|\hat{D} \setminus D^*| < 4 |D^*|$
Suppose, for the sake of contradiction, that $|\hat{D} \setminus D^*| \geq 4 |D^*|$. Then there exists an independent set of size at least $|D^*|$ in the graph induced by $\hat{D} \setminus D^*$, as every subgraph of a planar graph is $4$-colorable.
Let $I = \{a_1, a_2, \dots a_{|D^*|}\}$ be an arbitrary such independent set.
For every client $c \in C$, let $f(c)$ be the smallest integer such that $b_{f(c)}$ dominates $c$.
Let $G'$ be the graph obtained by contracting all edges $(c,~b_{f(c)})$ in $G$, for every $c \in C \setminus (I \cup D^*)$. The underlying simple graph induced by $I \cup D^*$ in the graph $G'$ is bipartite, and every vertex in $I$ has degree at least $4$.
Denoting the number of vertices and edges in this bipartite graph by $n$ and $m$, respectively,
we get $m \geq 4|I| \geq 4 \cdot \frac{n}{2} = 2n$. However, every simple planar bipartite graph satisfies $m < 2n$, yielding the desired contradiction.
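The edge bound on simple planar bipartite graphs used here is the standard consequence of Euler's formula; for completeness, for a connected simple bipartite plane graph with $n \ge 3$ vertices, $m$ edges and $f$ faces,
$$n - m + f = 2 \quad\text{and}\quad 2m \,\ge\, 4f \qquad\Longrightarrow\qquad m \,\le\, 2n - 4 \,<\, 2n,$$
where $2m \ge 4f$ holds because every face is bounded by at least four edges.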
$|D_2 \setminus (D^* \cup \hat{D})| \leq 3 |D^*|$
Consider any vertex $v \in D_2$ such that $v \not \in D^*$. By definition (see Line 6 in Algorithm <ref>), $v \not \in D_1$, so by the definition of $D_1$ there must exist a set of size at most $3$ that dominates the client neighbors of $v$. Let $A_v = \{y_1, y_2, y_3\}$ be any such set, for some $y_1, y_2, y_3$ that need not be distinct.
For every $v$, the set $B_v$ computed by the algorithm satisfies $B_v \subseteq A_v$
Suppose, for the sake of contradiction, that there exists some $w \not \in A_v$ belonging to $B_v$. By the definition of $B_v$, $w$ and $v$ share (at least) $10$ common clients, $C''$. Note that $C''$ does not include $w$ and $v$. Moreover, $C''$ must also be dominated by the vertices of $A_v$, hence at least one of the vertices in $A_v$ must dominate at least $\lceil 10/3 \rceil = 4$ of these $10$ clients. Suppose this vertex is $y_1$. By the above discussions, we must have $|N_C(v) \cap N_C(w) \cap N_C(y_1)| \geq 3$, which implies the existence of $K_{3, 3}$ as a subgraph, contradicting the planarity of the graph.
The $v-w$ relation $v \in B_w$ is symmetric, so we can split $D_2$ as,
\begin{eqnarray*}
D^1_2 &=& \bigcup_{v \in D^* \setminus D_1} \{v\}\cup B_v~,
\\
D^2_2 &=& \bigcup_{v \in \hat{D} \setminus (D^* \cup D_1)} \{v\} \cup B_v~, \mbox{ and}
\\
D^3_2 &=& \bigcup_{v \not \in (\hat{D} \cup D^* \cup D_1)} \{v\} \cup B_v~.
\end{eqnarray*}
$D^3_2 \subseteq D^1_2$
Consider a vertex $v' \in D^3_2$. Then
there exists some vertex $v$ such that $v \not \in (\hat{D} \cup D^* \cup D_1)$ and $v'\in \{v\} \cup B_v$.
Since $v\not\in \hat{D}$, by Eq. (<ref>) there exists a set $A_v = \{b_1, b_2, b_3\} \subseteq D^*$ that dominates $N_C(v)$. By symmetry, if $b_i \in B_v$ then $v \in B_{b_i}$ and therefore $v$ and $B_v$ are included in $D^1_2$, so $v' \in D^1_2$.
$D^2_2 \setminus \hat{D} = \emptyset$
Suppose, for sake of contradiction, that there exists some $w \in D^2_2 \setminus \hat{D}$. There must exist $v \in \hat{D}\setminus(D^* \cup D_1)$ such that $w \in B_v$. By symmetry $v \in B_w$. As $w \not\in \hat{D}$, there exists a set $A_w \subseteq D^*$ with $|A_w| \leq 3$ that dominates $N_C(w)$. From Claim <ref>, $B_w \subseteq A_w \subseteq D^*$. This implies that $v \in D^*$ which is a contradiction.
Finally we have $D_2 \setminus (D^* \cup \hat{D}) \subseteq \cup_{v \in D^* \setminus D_1} B_v$ and since $|B_v| \leq |A_v| \leq 3$, we have
$|D_2 \setminus (D^* \cup \hat{D})| \leq 3 |D^*|$,
completing the proof of Lemma <ref>.
If $v \not \in D_1 \cup D_2$, then $|N_C(v)| \leq 30$
Suppose, for the sake of contradiction, that there is some vertex $v\not\in D_1 \cup D_2$ such that $|N_C(v)| \geq 31$. By the definition of $D_1$, as $v \not \in D_1$, there exists a set $A \subseteq V(G) \setminus \{v\}$ of size at most $3$ that dominates all clients of $v$, and therefore at least one vertex $w\in A$ dominates at least $\lceil 31 / 3 \rceil = 11$ of them. We must have $|N_C(v) \cap N_C(w)| \geq 10$ and therefore $v \in D_2$, leading to a contradiction.
The above lemma shows that after removing clients that are dominated by $D_1 \cup D_2$, every other vertex can dominate at most $30$ clients. Therefore, the set $D_3$ constructed in the last step of the algorithm, which takes all the remaining undominated clients into the dominating set, has size at most $31$ times the optimal, i.e., $|D_3 \setminus D^*| \leq 31 |D^*|$. Putting the lemmas together, we can bound the size of $D = D_1 \cup D_2 \cup D_3$ as $|D| \leq |D^*| + |\hat{D}\setminus D^*| + |D_2 \setminus(D^* \cup \hat{D})| + |D_3 \setminus D^*| \leq (1 + 4 + 3 + 31)|D^*| = 39 |D^*|$, proving Theorem <ref>.
§.§.§ A $(1+\epsilon)$-Approximation
We adapt the distributed $(1 + \epsilon)$-approximation scheme of Czygrinow et al. <cit.>, whose round complexity is $O(\left(\frac{1}{\epsilon}\right)^c\log ^* n)$, where $c = \log_{24/23} 3$. We first provide a high-level overview of their algorithm and of the major differences and difficulties in adapting it to the recurrent $\CDS$ problem with preprocessing.
Fast $\LOCAL$ Algorithm. The graph $G$ is partitioned into several disjoint connected components (called clusters) such that (i) each cluster has diameter at most $d$, and (ii) the total number of edges crossing between clusters is at most $E$. All the cross edges are then removed, and the dominating set problem is solved optimally and independently within each cluster. If the cluster diameter $d$ is small enough, then the previous step requires only $O(d)$ rounds, as each entire cluster can be collected at some designated leader, who can then solve the problem locally. If the number of cross edges $E$ is small enough, then we get a good approximation of the dominating set.
For finding a good clustering, the algorithm first makes use of a constant approximation which can be obtained in constant rounds. Clustering around the dominators computed by the constant approximation results in clusters with diameter $d \leq 2$. Each cluster is then contracted into a single node. Let $G_0$ be the obtained underlying simple graph. Observe that $G_0$ has at most $39 |D^*|$ vertices, where $D^{*}$ is the optimal client dominating set. The graph $G_0$ is initially weighted with each edge having weight $1$.
Now suppose we are able to cluster $G_0$ into connected components $G'_1, G'_2, \dots, G'_s$ so that the total weight of edges crossing clusters, $|E_{\mathsf{cross}}|$, is at most an $\epsilon$ fraction of its initial total (which is at most $3 \cdot 39|D^*|$ by planarity). Let $D$ be the union of the sets of dominators obtained by solving each graph $G'_i$ independently and optimally. We argue that $D$ is a $(1 + \epsilon')$-approximation for $\epsilon' = 234\epsilon$. Let $D_2$ be the constant approximation obtained in the first step. Consider the set $\hat{D} = D^{*} \cup \{v \mid (u, w) \in E_{\mathsf{cross}} \text{ and } v \text{ dominates } w\}$. $\hat{D}$ is a valid dominating set, and moreover $\hat{D} \cap V(G'_i)$ dominates all clients in $V[G'_i]$ for every $i$.
Since $D$ was obtained by solving each $G'_i$ optimally, we have $|D \cap V(G'_i)| \leq |\hat{D} \cap V(G'_i)|$. Adding up over all clusters, we get $|D| \leq |\hat{D}|$.
However $|\hat{D}| \leq |D^*| + 2 \cdot (\epsilon \cdot (3 \cdot 39 |D^*|))$.
Plugging in we get $|D| \leq (1 + 234 \epsilon) |D^*|$.
A clustering of $G_0$ is computed by repeatedly applying a contraction process. The contraction process for a weighted graph $G$ is as follows. A large-weight subset of the edges of $G$ is chosen and then oriented such that every node has out-degree at most $1$. Such oriented graphs are called pseudo-forests. For a planar graph, it is possible to choose in one round a pseudo-forest that carries at least $\frac{1}{6}$ of the total weight of all edges. The pseudo-forest is then $3$-colored using the Cole-Vishkin algorithm. The $3$-coloring is used to split the forest into disjoint stars (graphs with diameter at most $2$), while retaining at least a quarter (in weight) of the edges of the pseudo-forest. Each star is then contracted into a single node. After contraction, it is possible that the graph has multiple edges. All multiple edges between a pair of nodes are replaced by a single edge whose weight is set to equal their total weight.
The above contraction process is applied repeatedly until the weight of the cross edges reduces to an $\epsilon$ fraction of the initial total. Since each contraction removes at least $\frac{1}{24}$ of the remaining edge weight, it is sufficient to repeat the process $t = O(\log_{24/23}{\frac{1}{\epsilon}})$ times.
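As a quick check of this iteration count: retaining at most a $\frac{23}{24}$ fraction of the weight per contraction gives
$$\wt(G_t) ~\le~ \left(\frac{23}{24}\right)^{t} \wt(G_0) ~\le~ \epsilon \cdot \wt(G_0) \qquad\text{once}\qquad t ~\ge~ \log_{24/23}\frac{1}{\epsilon}\,.$$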
Let the final graph obtained be $G_t$. $G_t$ provides a clustering of the original graph $G$, which can be obtained by uncontracting all the edges. The number of cross edges of the clustering is the weight of $G_t$. Each time a star is contracted, the diameter of the corresponding clusters increases by a multiplicative factor of at most $3$ and so the diameter
of each cluster given by $G_t$
is $O(3^t)$.
The most time consuming step in this process is that of $3$-coloring the pseudo-forest, which takes $O(3^{i} \log^*n)$ rounds during the $i^{th}$ iteration of the contraction process. The other operations take $O(3^i)$ rounds. The total round complexity is $O(\sum\limits_{i=0}^t 3^i \log^* n) = O(3^t \log^* n)$.
Adapting to $\CDS$. First, we remove the edges that are not incident on any client. These edges do not contribute to the criteria for a set to be a dominating set, and they can be ignored. We then compute a constant approximation $\tilde{D}$ as per Algorithm <ref>. The initial clustering is obtained by choosing for each client $c$ an arbitrary dominator of $c$ from $\tilde{D}$ and contracting the edge between them. Additionally, every vertex that is neither a client nor a dominator chooses an arbitrary neighboring client and the edge between them is merged. The remaining steps are identical to the previous procedure.
Speeding up using a preprocessing phase. One potential preprocessing operation that may improve the round complexity of the online stage might be to compute a proper $4$-coloring of the planar graph. Unfortunately, while a coloring of any graph remains valid after the removal of edges or vertices, it does not remain valid after contractions. An arbitrary precomputed coloring might not be of much use in coloring the contracted graphs that arise from repeated contractions. To accommodate contractions, we precompute a non-repetitive coloring of $G$ (which is the only output of our preprocessing phase). A non-repetitive coloring is a coloring of the graph such that for any even-length simple path, the ordered set of colors in the first half of the path is different from that of the second half. Non-repetitive colorings were first proposed by Alon et al. <cit.>. The minimum number of colors required to realize a non-repetitive coloring is called the Thue number of the graph and is denoted by $\pi(G)$. Dujmović et al. <cit.> showed recently that $\pi(G) \leq 768$ for all planar graphs $G$.
Suppose we have a pseudo-forest $F$ that needs to be $3$-colored, and suppose $F$ is obtained from $G_t$, i.e., after $t$ iterations of the contraction process. Let $\out(v)$ denote the other end of the outgoing edge of $v$ in $F$. In order to $3$-color the forest, it is sufficient to choose colors in such a way that $\out(v)$ and $v$ have different colors, for every $v$. We can associate with each node $v$ of $G_t$ a connected component (denoted $G_v$) in the original graph $G$ that contains the ends of all edges that were contracted to $v$. Choose any edge $e$ that crosses $G_v$ and $G_{\out(v)}$. Construct a spanning tree of $G_v$ and root it at the endpoint $r(v)$ of $e$ that lies in $G_v$. We now color $v$ with the ordered set of non-repetitive colors traced on the unique path from $r(v)$ to $r(\out(v))$, excluding $r(\out(v))$, in the graph $G_v \cup G_{\out(v)} \cup \{e\}$. We enumerate these colors from $1$ to $768^{d+1}$, where $d$ is the maximum diameter of the clusters. Let the computed path be $P_v$.
Observe that whenever $\out(\out(v)) \neq v$, the paths $P_v$ and $P_{\out(v)}$ can be concatenated to form a simple path in the graph $G$. If $P_v$ and $P_{\out(v)}$ have different lengths, then the colors assigned to them are different. Otherwise, by the property of a non-repetitive coloring, the ordered sets of colors of $P_v$ and $P_{\out(v)}$ must be different. When $\out(\out(v)) = v$, we have a 2-cycle. In this case we color one of the nodes $\{v, \out(v)\}$ (whichever has higher id, say $v$) with its own non-repetitive color and redefine $P_{v} = \{r_v\}$. Now the paths $P_v$ and $P_{\out(v)}$ may be concatenated to obtain a simple path $P$. See Algorithm <ref> for the pseudocode.
We now have a $768^{d+1}$-coloring of the pseudo-forest $F$, which can then be reduced to a $3$-coloring using the Cole-Vishkin algorithm. The complexity is $O(d \log^* {768^{d+1}}) = O(d \log^* d)$. This leads us to our main lemma:
Given a clustering of the graph $G$, Algorithm <ref> provides a $3$-coloring of the graph obtained by contracting each cluster into a single vertex. Moreover this algorithm can be implemented as an $O(d \log^* d)$ round $\LOCAL$ protocol, where $d$ is the maximum diameter amongst the induced components of the clustering.
Algorithm <ref> is the main new ingredient in our adaptation of Czygrinow et al.'s algorithm. Plugging this component into their algorithm directly leads to an $O_{\epsilon}(1)$-round $\LOCAL$ algorithm. For concreteness, the complete clustering procedure is described in Algorithm <ref>, with some minor changes to account for the clients. Once clustering is done, we proceed in the same way, i.e., solve the $\CDS$ problem optimally and independently within each cluster. Solving $\CDS$ exactly requires solving NP-hard problems in the online phase, which may be undesirable. This can be fixed by replacing the optimal solution with a PTAS for the $\CDS$ problem in planar graphs, by a similar adaptation of Baker's algorithm <cit.>.
$3$-coloring pseudo-forest
Input:
(i) $\col : V(G) \rightarrow [768]$, a non-repetitive coloring of the given planar graph $G$
(ii) $\cluster : V(G) \rightarrow \mathbb{N}$, describing a partitioning of the vertices of $G$ that induces connected components of diameter at most $d$
(iii) $G_t$: the graph where every cluster is contracted to a single node
(iv) $\out : V(G_t) \rightarrow V(G_t)$, describing a pseudo-forest in the graph $G_t$; $\out(v)$ is the other end of the unique outgoing edge from $v$
Output: $\col_f : V(G_t) \rightarrow [3]$, a proper $3$-coloring of the given pseudo-forest
For all clusters $v \in V(G_t)$ (in parallel):
  $p \gets \out(v)$, the parent of $v$ in the pseudo-forest of $G_t$
  Let $G_v, G_p$ be the connected components of $G$ that are contracted to $v, p$ in $G_t$
  $e_v \gets$ any edge in $G$ that crosses $G_v, G_p$, and $r_v \gets$ the end of $e_v$ in $G_v$
  $T_v \gets$ any spanning tree of $G_v$, rooted at $r_v$
For all clusters $v \in V(G_t)$ (in parallel):
  If $\out(p) \neq v$ or $v < p$: (* detect cycles of length $2$ *)
    $\path(v) \gets$ the unique path from $r_v$ to $r_p$ in the graph $T_v \cup T_p \cup \{e_v\}$
  Else: (* treat the cycle of length $2$ separately *)
    $\path(v) \gets \{r_v\}$
  $\col_f(v) \gets$ the ordered set of colors in $\path(v)$
Enumerate $\col_f(v)$ using integers from $1$ to $768^{d + 1}$
Reduce $\col_f(v)$ to a $3$-coloring using the Cole-Vishkin algorithm
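Since the Cole-Vishkin reduction is invoked above as a black box, we include a minimal Python sketch of a single reduction step on a pseudo-forest (the dictionary-based interface is our own assumption); iterating the step $O(\log^* K)$ times brings $K$ colors down to $O(1)$, after which standard shift-down steps reach $3$ colors:

def cole_vishkin_step(color, out):
    # One Cole-Vishkin step: a proper coloring with K colors becomes a
    # proper coloring with O(log K) colors. color: node -> color (int >= 0);
    # out: node -> unique out-neighbor, or None for a root.
    new = {}
    for v, c in color.items():
        p = out.get(v)
        pc = color[p] if p is not None else c + 1   # roots use a dummy value
        x = c ^ pc                                  # nonzero: coloring is proper
        i = (x & -x).bit_length() - 1               # lowest differing bit index
        new[v] = 2 * i + ((c >> i) & 1)             # encode (index, own bit)
    return new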
For every planar graph $G$,
* $\SUPTIME(\CDS_{\epsilon}, G)$ is $O(\left(\frac{1}{\epsilon}\right)^{c} \log^*{\left(\frac{1}{\epsilon}\right)})$, where $c = \log_{24/23} 3$.
* Realizing the above round complexity requires only $O(1)$ (i.e., a constant independent of both $\epsilon$ and $G$) additional bits to be stored in each node of $G$.
Clustering for Planar CDS
Input: client set $C$, a non-repetitive coloring of $G$, and $\epsilon$.
Output: a $(1+\epsilon)$-approximation of the optimal set dominating $C$.
Phase 1: Finding a good initial clustering.
  Remove all edges that do not have a client incident on them.
  Remove the vertices isolated by the previous step.
  Compute a constant approximation $D^{\star}$ for $C$ using Algorithm <ref>.
  For all nodes $v \in V(G) \setminus D^{\star}$ (in parallel):
    If $v$ has a neighbor in $D^{\star}$:
      $u \gets$ any neighbor in $D^{\star}$
    Else:
      $u \gets$ any neighbor in $C$, or $\perp$ if no such node exists
    Contract the edge $e = (u, v)$, if $u$ exists
    (* done in parallel and implicitly, i.e., contracted vertices know their neighbors *)
Phase 2: Improving the clustering.
  $G_0 \gets$ underlying simple graph obtained at the end of Phase 1
  Set $\mathsf{wt}(e) \gets 1$ for all $e \in E(G_0)$
  For $t = 0, 1, \dots, \lceil \log_{24/23} \frac{234}{\epsilon} \rceil$:
    $\mathsf{out}(u) \gets$ any neighbor $v$ such that $\mathsf{wt}((u, v))$ is maximized
    $H \gets$ subgraph induced by the edges $\{(\mathsf{out}(u), u) \mid u \in G_t\}$ (* heavy pseudo-forest *)
    $\mathsf{col} \gets$ $3$-coloring of $H$ obtained using Algorithm <ref>
    For all $u \in H$ with $\mathsf{col}(u) = 1$ (in parallel):
      $I_u, O_u \gets \{(u, v) \mid u = \mathsf{out}(v) \}, \{(u, v) \mid v = \mathsf{out}(u)\}$
      Remove either $I_u$ or $O_u$ from $H$, whichever has smaller total weight
    For all $u \in H$ with $\mathsf{col}(u) = 2$ (in parallel):
      $I_u, O_u \gets \{(u, v) \mid u = \mathsf{out}(v), \mathsf{col}(v) = 3 \}, \{(u, v) \mid v = \mathsf{out}(u), \mathsf{col}(v) = 3\}$
      Remove either $I_u$ or $O_u$ from $H$, whichever has smaller total weight
    (* $H$ now consists of connected components with diameter at most $10$ *)
    $F \gets$ rooted spanning forest of $H$
    $E_F, O_F \gets$ edges of $F$ at even and odd depths, respectively
    Remove either $E_F$ or $O_F$, whichever has smaller total weight
    For all edges $e \in E(H)$, contract $e$ in $G_t$
    $G_{t+1} \gets$ underlying simple graph obtained after the contractions
    For all edges $e = (u, v) \in G_{t+1}$, set $\mathsf{wt}(e) \gets$ the total weight of the multi-edges between $u, v$ after contracting the edges of $H$
As mentioned previously, we adapt the scheme of Czygrinow et al. The high-level idea is to carefully cluster the graph into components with small diameter and essentially solve the $\CDS$ problem independently within each cluster (i.e., ignoring or removing the cross edges) in a brute-force manner.
The clustering procedure is outlined in Algorithm <ref>. We go through the procedure and analyze it below.
Phase 1: The first observation to be made is that edges with no incident client on them can be ignored. The existence or absence of these edges does not affect the correctness of any candidate solution to the $\CDS$ instance. After this removal we may get several disconnected components, which we can solve separately.
In the initial clustering (Lines 1-11 of Algorithm <ref>), each cluster has diameter at most $4$. This is easy to see, as every vertex has a path of length at most $2$ to some vertex in $D^{\star}$: clients are directly dominated by some vertex in $D^{\star}$, and non-clients either have a neighboring client adjacent to them or are isolated. Each vertex in $D^{\star}$ is present in its own unique cluster.
Phase 2: The objective of this phase is to improve the clustering in Phase 1. Let $G_0$ be the contracted graph obtained at the end of Phase 1. By planarity we have, $|E(G_0)| \leq 3 |V(G_0)| \leq 3 \cdot 39 |D_{\mathsf{opt}}|$.
By definition, we have $\mathsf{wt}(G_0) = |E(G_0)| \leq 117 |D_{\mathsf{opt}}|$. Here we use $\wt(G)$ to denote the total weight of all edges in $G$.
We now describe the clustering procedure of Phase 2 (Lines 15-32). Line 16 obtains a heavy-weight pseudo-forest subgraph of $G_t$ by a simple local greedy procedure: each node chooses an arbitrary incident edge of maximum weight (Line 15).
$\mathsf{wt}(H) \geq \frac{1}{6} \mathsf{wt}(G_t)$
We make use of the Nash-Williams theorem: since $G_t$ is planar, it can be decomposed into three forests $F_1, F_2, F_3$. In each of these forests, there exists an orientation such that every node has out-degree at most $1$. Let the outgoing edges of $u$ in the three forests be $\mathsf{out}_1(u), \mathsf{out}_2(u), \mathsf{out}_3(u)$, and let $\mathsf{out}(u)$ be the outgoing edge chosen in Line 15. WLOG, let $F_1$ be the forest with the highest weight among the three. By the pigeonhole principle, $\wt(F_1) = \sum_u \wt((\mathsf{out}_1(u), u)) \geq \frac{1}{3} \wt(G_t)$.
The choice of edges in Lines 15-16 is made for each node independently. While for the forests $F_1, F_2, F_3$ each $\mathsf{out}_i(u)$ corresponds to a unique edge, this is not necessarily the case for the pseudo-forest $H$ chosen in Line 16. In particular, we could have $\mathsf{out}(u) = v$ and $\mathsf{out}(v) = u$, and therefore it is not the case that $\wt(H) = \sum_u \wt((\mathsf{out}(u), u))$. However, each edge $(\mathsf{out}(u), u)$ is counted at most twice in the summation, from which we get
\begin{equation*} \begin{split}
\wt(H) &\geq \frac{1}{2} \sum_u \wt((\mathsf{out}(u), u)) \\
&\geq \frac{1}{2} \sum_u \wt((\mathsf{out}_1(u), u)) \ \ \ \ [\text{By greedy choice of } \mathsf{out}(u)] \\
& \geq \frac{1}{2} \wt(F_1) \geq \frac{1}{6} \wt(G_t)
\end{split} \end{equation*}
We next address Lines 18-25. This part of the algorithm breaks the forest $H$ down into small-diameter components. This is done in two steps. In the first step, for each node with color $1$, either all its incoming edges or its unique outgoing edge are removed (whichever has smaller total weight). The second step does the same with nodes of color $2$, except that it ignores edges leading to or coming from nodes with color $1$. Observe that at most half the total weight of the edges is lost in these two steps. Hence after this step we have $\wt(H) \geq \frac{1}{12} \wt(G_t)$.
In Line 26, every connected component in $H$ has diameter at most $10$.
Orient every edge from $u$ to $\mathsf{out}(u)$. We show that there is no directed path of length at least $6$ in $H$. Because the out-degree is at most $1$, on any path in $H$ the direction of the edges can change at most once; therefore the diameter is at most $10$.
Suppose, for the sake of contradiction, that there exists a directed path of length at least $6$. None of the nodes in the middle of the path can have color $1$, since these nodes have non-zero in-degree and out-degree. The middle nodes can thus only be colored $2$ or $3$, and since the coloring is proper, the colors along this stretch alternate; hence at least one middle node has color $2$ with non-zero in-degree and out-degree leading to nodes of color $3$. This contradicts the fact that at least one of these edges must have been removed in Line 24.
In Line 30, $H$ consists of vertex-disjoint stars of total weight at least $\frac{1}{24} \wt(G_t)$
Since the diameter of $H$ is at most $10$, we can compute a spanning forest of $H$ in $O(1)$ rounds. Subsequently, either all the even-depth or all the odd-depth edges are removed, so that the diameter of each connected component of $H$ is at most $2$. By the greedy choice, at most half the weight of $H$ is lost during this procedure.
We now analyze the correctness of the algorithm.
We have $\wt(G_{t+1}) \leq \frac{23}{24} \wt(G_t)$. The value of $T$ is chosen such that $\wt(G_T) \leq \frac{\epsilon}{234} \wt(G_0)$.
Let $D$ be the $\CDS$ solution computed independently (and optimally) on the clusters given by $G_T$ and let $\Dopt$ be any optimal solution to the given instance. For a node $u \in G_T$, let $V_u$ be the set of vertices of $G_0$ that were contracted to $u$. Define $\Dopt_u = \Dopt \cap V_u$ and $D_u = D \cap V_u$. Let $W_u$ be the vertices of $G[V_u]$ that have an incident edge of $G_0$ leading to a vertex not in $V_u$. We have that $\Dopt_u \cup W_u$ dominates all clients in $G[V_u]$. Since $D_u$ is an optimal solution, we get,
\begin{equation*}\begin{split}
|D_u| &\leq |\Dopt_u \cup W_u| \\
\Rightarrow ~ \sum\limits_u |D_u| &\leq \sum\limits_u |\Dopt_u \cup W_u| \\
\Rightarrow ~ |D| &\leq |\Dopt| + 2\,\wt(G_T) \\
\Rightarrow ~ |D| &\leq |\Dopt| + \frac{\epsilon}{117}\, \wt(G_0) \\
\Rightarrow ~ |D| &\leq (1 + \epsilon) |\Dopt|
\end{split} \end{equation*}
using $\wt(G_T) \leq \frac{\epsilon}{234} \wt(G_0)$ and $\wt(G_0) \leq 117 |\Dopt|$.
We now analyze the round complexity. We leave it to the reader to verify that Phase 1 can be implemented as an $O(1)$-round distributed protocol (essentially, each line requires only one round of communication with neighbors).
Except for Line 17, all of Lines 15-32 can be implemented with $O(1)$ round complexity in the graph $G_t$. Let $d_t$ be the maximum diameter of the clusters given by $G_t$. Any $\LOCAL$ algorithm in $G_t$ can be simulated in $G$ with a factor-$d_t$ overhead in rounds (collect each cluster and simulate). It was already shown that Line 17 takes $O(d_t \log^* d_t)$ time.
Since $G_{t+1}$ is obtained by contracting stars, we have $d_{t+1} \leq 3 d_t + 2$. This gives $d_t = O(3^t)$. The overall round complexity of Phase 2 is thus $O(\sum_{t=0}^{T} d_t \log^* d_t) = O(3^T \log^* 3^T) = O\left(\left(\frac{1}{\epsilon}\right)^c \log^*{\frac{1}{\epsilon}}\right)$, where $c = \log_{24/23} 3$.
§ COLOR COMPLETION PROBLEMS
Consider a graph $G(V,E)$ and a coloring $c : V \rightarrow \{1,\dots,k\}$.
The vertex $v$ is properly colored if each of its neighbors has a color different from that of $v$.
The classical vertex coloring problem requires deciding if there exists a coloring for which all vertices are properly colored.
When some of the vertices are already assigned a predefined coloring, the resulting recurrent problem is referred to as color completion ($\PCC$).
We use the following measures
for evaluating the number of colors used in any valid solution.
* Let $\cP_{pc}$ be the set of colors used by the precolored vertices, and denote $\chi_{pc} = |\cP_{pc}|$.
* Let $\cP_{\UN}$ be the set of colors used for the uncolored vertices;
denote $\chi_{\UN} = |\cP_{\UN}|$.
* Let $\cP_{new} = \cP_{\UN} \setminus \cP_{pc}$ be the
new colors
used for the uncolored vertices; denote $\chi_{new} = |\cP_{new}|$.
* Let $\cP_{all} = \cP_{pc} \cup \cP_{new}$ be the final set of colors
of all vertices; denote $\chi_{all} = |\cP_{all}|$.
For a given instance of $\PCC$, let $\chi^*_{\UN}$ (respectively, $\chi^*_{new}$, $\chi^*_{all}$) be the smallest possible value of $\chi_{\UN}$ (resp., $\chi_{new}$, $\chi_{all}$) over all possible proper color completions of the precoloring.
Additionally, for a given algorithm $\cA$, let $\chi^{\cA}_{\UN}$ (respectively, $\chi^{\cA}_{new}$, $\chi^{\cA}_{all}$) be the value of $\chi_{\UN}$ (resp., $\chi_{new}$, $\chi_{all}$) in the solution computed by $\cA$ for the instance.
The efficiency of an algorithm for $\PCC$ can be measured by two parameters of interest, namely, $\chi_{new}$ and $\chi_{all}$. The difference between them becomes noticeable in instances where the colors in $\cP_{pc}$ are not contiguous.
We denote by $\PCC_{new}(\chi)$ (resp. $\PCC_{all}(\chi)$) the problem of color completion such that $\chi_{new}$ (resp. $\chi_{all}$) is at most $\chi$.
§.§ Single Round Color Completion
We first consider what can be done when the online algorithm is restricted to a single round of communication.
Consider a graph $G$ with maximum degree $\Delta=\Delta(G)$ and chromatic number $\chi=\chi(G)$ with $\Delta > 0$.
We have $\SUPTIME(\PCC_{new}(\chi \cdot \Delta), G) = 1$.
The algorithm uses the color palette
$$\cP ~=~ \{(i,j) \mid 1\le i\le \chi, ~~ 1\le j\le \Delta\}.$$
In the preprocessing stage, compute a proper default coloring $dc$ of the graph using the color palette $\cP^{def} = \{i \mid 1\le i\le \chi\}$, and let each vertex $v$ store its default color $dc(v)$ for future use. These values are not used as colors in the final coloring.
In the recurrent stage,
we are given an arbitrary precoloring $c(w) \in \cP$ for some nodes, and need to complete it to a proper coloring by selecting a color $c(v)$ for each non-precolored node $v$.
(It is assumed that the precoloring itself is proper, i.e., no two precolored neighboring vertices use the same color.)
The algorithm requires a single round of communication. Each precolored node $w$ informs its neighbors about its color $c(w)$. Now consider a non-precolored node $v$. If all neighbors of $v$ are colored, then $v$ chooses a free color from the color palette. As $\chi \cdot \Delta \geq 2 \Delta \geq \Delta + 1$, such a color is guaranteed to exist.
Otherwise, $v$ finds a free color of the form $(dc(v),j)$ for $1\le j\le\Delta$ satisfying
$c(w) \ne (dc(v),j)$ for all precolored neighbors $w$ of $v$. The node $v$ then selects $c(v) \gets (dc(v),j)$.
By this algorithm, the color $(i,j)$ selected by $v$, where $i = dc(v)$, is different from the color of any precolored neighbor of $v$. Also, $(i,j)$ cannot be the selected color of any non-precolored neighbor $w$ of $v$. This is because the default color $dc(w)=i'$ of $w$ satisfies $i'\ne i$, and therefore the selected color $c(w)$ of $w$ is of the form $(i',k)$ for some $k$, which must differ from $(i,j)$ at least on the first component. Thus, the coloring $c$ is proper.
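The following sequential Python simulation of the one-round algorithm may clarify the palette bookkeeping; the map-based inputs and the function name are illustrative assumptions, and each node's decision uses only information available after one round, namely the precolors of its neighbors:

def single_round_pcc(dc, precolor, neighbors, chi, delta):
    # dc: node -> default color in [1..chi] (from preprocessing);
    # precolor: node -> pair color (i, j), only for precolored nodes;
    # neighbors: node -> list of neighbors. Colors are pairs (i, j).
    c = dict(precolor)
    for v in dc:
        if v in c:
            continue
        taken = {precolor[w] for w in neighbors[v] if w in precolor}
        if all(w in precolor for w in neighbors[v]):
            # all neighbors precolored: at most Delta palette colors blocked
            c[v] = next((i, j) for i in range(1, chi + 1)
                        for j in range(1, delta + 1) if (i, j) not in taken)
        else:
            # stay in row dc(v); at most Delta-1 precolored neighbors block it
            c[v] = next((dc[v], j) for j in range(1, delta + 1)
                        if (dc[v], j) not in taken)
    return c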
In the absence of any preprocessing, Linial <cit.> showed that we require
$\Omega(\log^* n)$ rounds to color the graph even if it is just a path.
To complement this, Linial also provides an $O(\log^* n)$-round algorithm that colors graphs of maximum degree $\Delta$ with
$O(\Delta^2)$ colors.
The algorithm works by repeatedly reducing a given proper coloring with $n$ colors to one with at most $\lceil 5 \Delta^2 \log_2 n \rceil$ colors. The same algorithm can be adapted to $\PCC$ with a small change, yielding at most $\lceil 23 \Delta^2 \log_2 n \rceil$ new colors (see Section <ref>). A consequence of the above is that one can readily adapt existing solutions for graph coloring to color completion. For example, the results of Maus <cit.> and Barenboim et al. <cit.> can be extended to $\PCC$, with the number of colors used replaced by $\chi_{new}$ and with the same round complexities.
We complement the result of Thm. <ref> with the following lower bound.
For every pair of integers $\chi, \Delta$, there exists a graph $G$ with chromatic number $\chi$ and maximum degree $\Delta$ such that, for every single-round deterministic distributed algorithm $\mathcal{A}$, the total number of colors used by $\mathcal{A}$ over all recurrent instances of $\PCC$ is at least $\chi \cdot (\Delta - \chi + 2)$, even after an arbitrary preprocessing of $G$.
Figure: A graph whose single-round color completion assigns at least $\chi \cdot (\Delta - \chi + 1)$ different colors across all instances. In this example $\chi = 6$ and $\Delta = 9$.
The lower bound graph is obtained by taking the clique $K_{\chi}$ and attaching $\Delta - \chi + 1$ distinct new nodes to each node of $K_{\chi}$ (see Figure <ref>). Let the vertices of the clique be $v_1, v_2, \dots, v_{\chi}$, and let $v_{ij}$ denote the $j^{th}$ attached neighbor of $v_i$ for each $1 \leq i \leq \chi$ and $0 \leq j \leq \Delta-\chi$.
Let $\mathcal{A}$ be any single-round deterministic distributed algorithm that solves $\PCC$. We construct $\chi \cdot (\Delta - \chi + 2)$ instances, namely $I_{i, j}$ for each $1 \leq i \leq \chi$ and $0 \leq j \leq \Delta - \chi + 1$, as follows. We define $I_{i, 0}$ to be the instance where none of the nodes are precolored. Let $\col_{i, j}$ be the solution to the instance $I_{i, j}$ given by algorithm $\mathcal{A}$. We construct $I_{i, j}$ (for $j > 0$) from $I_{i, j-1}$ and $\col_{i, j-1}$: $I_{i, j}$ is the same as the instance $I_{i, j-1}$, except that vertex $v_{i, j-1}$ is precolored with $\col_{i, j-1}(v_i)$.
We shall now argue that the following $\chi \cdot (\Delta - \chi + 2)$ colors, namely $\col_{i, j}(v_i)$ for $1 \leq i \leq \chi$ and $0 \leq j \leq \Delta-\chi+1$, are all distinct, which proves the theorem.
Consider $\col_{a, b}(v_a)$ and $\col_{c, d}(v_c)$ for some $1 \leq a < c \leq \chi$ and $0 \leq b, d \leq \Delta - \chi + 1$. To argue that these colors are different, we construct a new instance $I$ wherein $v_{a, j}$ is precolored with $\col_{a, j}(v_a)$ for every $0 \leq j < b$, and $v_{c, k}$ is precolored with $\col_{c, k}(v_c)$ for every $0 \leq k < d$.
Since $\mathcal{A}$ operates in a single round, for node $v_a$, the instance $I$ is indistinguishable from instance $I_{a, b}$. Therefore, the color assigned to $v_a$ by $\mathcal{A}$ for instance $I$ must be $\col_{a, b}(v_a)$.
Similarly, with respect to node $v_c$, the instances $I_{c, d}$ and $I$ are indistinguishable, and thus $v_c$ is assigned $\col_{c, d}(v_c)$ by $\mathcal{A}$ for instance $I$.
Since $\mathcal{A}$ assigns a proper coloring to $G$ for instance $I$ and $v_a, v_c$ are adjacent in $G$, we have $\col_{a, b}(v_a) \neq \col_{c, d}(v_c)$.
The only pairs left to consider are of the form $\col_{a, b}(v_a)$ and $\col_{a, d}(v_a)$ for some $b < d$. To see that these are different, consider instance $I_{a, d}$. The vertex $v_{a, b}$ is precolored with $\col_{a, b}(v_a)$, and $v_a$ is assigned $\col_{a, d}(v_a)$ by $\mathcal{A}$. Since $v_a, v_{a, b}$ are adjacent, it follows that $\col_{a, b}(v_a) \neq \col_{a, d}(v_a)$.
§.§ CC with Enough New Colors
We now describe how the single-round color completion algorithm can be extended to multiple rounds.
Consider a graph $G$ with maximum degree $\Delta=\Delta(G)$ and chromatic number $\chi=\chi(G)$ with $\Delta > 0$ and let $k$ be any integer with $1 \leq k \leq \chi$.
We have,
$$\SUPTIME(\PCC_{new}(\max(\lceil \frac{\chi}{k} \rceil \cdot \Delta, \Delta + 1)), G) \leq k$$
The preprocessing stage is the same as that of the single-round algorithm: we precompute a proper $\chi$-coloring of the graph. Let $dc(v)$ be the color of $v$. In the recurrent stage, each precolored node $w$ sends its assigned precolor $c(w)$ to all its neighbors during the first round.
Consider the same color palette $\mathcal{P}$ used for the single round color completion, except when $k = \chi$. In case $k = \chi$, add another color $(1, \Delta+1)$ to the palette.
During round $i$ ($1 \leq i \leq k$), the nodes $v$ with $dc(v) \equiv i \pmod k$ decide on their colors. If node $v$ has all neighbors precolored, then it chooses any free color of the form either (i) $(1, j)$ for some $1 \leq j \leq \Delta$, or (ii) $(1, \Delta + 1)$ if $\chi = k$ and $(2, 1)$ otherwise. If any neighbor of $v$ is not precolored, then it selects any free color of the form $(\lceil \frac{dc(v)}{k} \rceil, j)$ for some $1 \leq j \leq \Delta$. At least one free color is guaranteed to exist, as the number of neighboring vertices that have already fixed a color before round $i$ is at most $\Delta - 1$. The node finalizes the chosen color as $c(v)$ and, if $i < k$, sends $c(v)$ to all its neighbors.
We now argue that the coloring assigned is proper. It is sufficient to show that whenever a node $v$ adopts a color $c(v)$, $c(v)$ is different from $c(w)$ for all neighbors $w$ of $v$. We always choose $c(v)$ so that it is different from the colors of all neighbors $c(w)$ where $w$ was colored at a previous round. It remains to consider those neighbors of $v$ that are colored in the same round as $v$. Let $w$ be an arbitrary such neighbor. We have $dc(v) \neq dc(w)$ as $dc$ is a proper coloring. Since $dc(w) \equiv dc(v) \pmod{k}$, we must have $\lceil \frac{dc(v)}{k} \rceil \neq \lceil \frac{dc(w)}{k} \rceil$ and therefore the chosen colors must be different.
We compare Theorem <ref> with the algorithm of Maus <cit.> that colors a graph using $O(\Delta^{1 + \epsilon})$ colors within $\Delta^{\frac{1}{2} - \frac{\epsilon}{2}}$ rounds, i.e. the algorithm uses at most $\frac{c \Delta^2}{k^2}$ colors in $k$ rounds for some constant $c$ and every $k$ with $1 \leq k \leq \sqrt{\Delta}$.
Comparing the number of colors, the algorithm of Maus uses fewer colors whenever $\sqrt{\Delta} > k > \frac{c \Delta}{\chi}$.
§.§ CC Without Preprocessing
The classical algorithm of Linial <cit.> computes a proper coloring in one round with at most $\lceil 5 \Delta^2 \log n \rceil$ colors. The proof is based on the existence of a family of sets that pairwise intersect in “few elements”. The existence of such a family of sets is shown with the help of a probabilistic argument. Specifically, for any given pair of integers $n, \Delta$, there exist $n$ sets $F_1, F_2, \dots F_n$, each a subset of $[m]$ for some integer $m \leq \lceil 5 \Delta^2 \log n \rceil$, that satisfy the following property:
\begin{gather*}
\mathbf{P_0:} \hskip 2em \forall \{i_0, i_1, i_2, \dots i_{\Delta}\} \subseteq [n], \ \ \left|F_{i_0} \setminus \bigcup_{j=1}^{\Delta} F_{i_j}\right| > 0.
\end{gather*}
The existence of these sets implies a distributed $1$-round algorithm
for classical coloring, since a vertex $v$ with a unique identifier $id(v)$ can choose any color from $F_{id(v)} \setminus \bigcup_{u \in \Gamma(v)} F_{id(u)}$. The coloring is proper since the sets satisfy the given property, and the maximum color chosen is $m \leq \lceil 5 \Delta^2 \log n \rceil$.
To adapt this algorithm to Color Completion, it is sufficient to modify the property constraint as follows:
\begin{gather*}
\mathbf{P_\Delta:} \hskip 2em \forall \{i_0, i_1, i_2, \dots i_{\Delta}\} \subseteq [n], \ \ \left|F_{i_0} \setminus \bigcup_{j=1}^{\Delta} F_{i_j}\right| > {\mathbf{\Delta}}.
\end{gather*}
Applying the same probabilistic argument, we can show the following.
For sufficiently large $n$, there exists an integer $m \leq \lceil 23 \Delta^2 \log_2 n \rceil$ and
sets $F_1, F_2, \dots F_n \subset [m]$, that satisfy property $\mathbf{P_\Delta}$.
Given $n$ and $m$ as in the lemma, select the sets $F_i$ randomly as follows.
For each integer $x = 1, 2, \dots m$ and each $i = 1, 2, \dots n$, add $x$ to $F_i$ with probability $1/\Delta$.
For a given set $\{i_0, i_1, \dots i_{\Delta}\} \subseteq [n]$, the probability that a particular $x \in [m]$ belongs to $F_{i_0}$ but not the remaining $\Delta$ sets is $\frac{1}{\Delta} \cdot \left(1 - \frac{1}{\Delta}\right)^{\Delta} \geq \frac{1}{4\Delta}$.
Hence, the probability that fewer than $\Delta + 1$ of the elements in $[m]$ belong to $F_{i_0}$ but not the remaining $\Delta$ sets is at most
$\sum_{j=1}^{\Delta} \binom{m}{j} \left(1 - \frac{1}{4 \Delta}\right)^{m-j}$.
As long as $m > 2\Delta$, the terms are increasing, i.e., $\binom{m}{j+1} x^{m-j-1} > \binom{m}{j} x^{m-j}$. Therefore, we can bound the summation by
\begin{equation}
\sum_{j=1}^{\Delta} \binom{m}{j} \left(1 - \frac{1}{4 \Delta}\right)^{m-j} \leq \Delta \binom{m}{\Delta} \left( 1 - \frac{1}{4 \Delta}\right)^m.
\end{equation}
Finally, the probability that the chosen sets do not satisfy the property for at least one of the subsets $\{i_0, i_1 \dots i_{\Delta}\}$ is at most
\begin{equation}
\binom{n}{\Delta + 1} \cdot (\Delta + 1) \cdot \Delta \cdot \binom{m}{\Delta} \cdot (1 - \frac{1}{4 \Delta})^m \leq n^{\Delta + 1} \cdot m^{\Delta} \cdot e^{-\frac{m}{4\Delta}} \cdot \frac{1}{\Delta!}~.
\end{equation}
If the final expression above is strictly less than $1$, then the existence is guaranteed. This occurs whenever $m > 4 \Delta(\Delta + 1) \ln n + 4 \Delta^2 \ln m$. To find such a value of $m$, suppose $c_1 \Delta^2 \ln n < m < c_2 \Delta^2 \ln n$, then $\ln m < \ln{c_2} + \ln{\Delta^2\ln n} < \ln{c_2} + 3\ln n$, using which we can get a weaker (and easily solvable) lower bound for $m$,
\begin{equation*} \begin{split}
4\Delta(\Delta + 1) \ln n + 4 \Delta^2 \ln m &< 4 \Delta(\Delta + 1) \ln n + 4 \Delta^2 \ln{c_2} + 12 \Delta^2 \ln{n} \\
&< 20 \Delta^2 \ln n + 4 \Delta^2 \ln c_2
\end{split} \end{equation*}
Therefore, if we can choose $c_1, c_2$ so that $20 + 4 \frac{\ln{c_2}}{\ln n} < c_1 < c_2$ we are done. Considering $n \geq 3$, we can choose any $c_2$ such that $c_2 - (20 + \frac{4 \ln c_2}{\ln 3})$ exceeds $0$. The smallest such value is around $c_2 = 33$, therefore an upper bound on $m$ (and also the maximum number of colors) is at most $33 \Delta^2 \ln n \approx 23 \Delta^2 \log_2{n} $.
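As a numeric sanity check of this choice (with $n \ge 3$): $20 + \frac{4 \ln 33}{\ln 3} \approx 20 + 12.7 = 32.7 < 33$, so $c_2 = 33$ indeed satisfies the constraint.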
Color Completion can be solved with $\chi_{new} \leq \chi_{all} \leq \lceil 23 \Delta^2 \log_2{n} \rceil$ colors in one LOCAL round.
§.§ CC with Fewer than $\Delta + 1$ Colors
We next discuss coloring algorithms based on a preprocessing stage, which use fewer than $\Delta+1$ colors when possible.
§.§.§ A recurrent algorithm
Our main result is an algorithm that, for a graph $G$ with chromatic number $\chi$, uses preprocessing, and in the recurrent stage solves any instance of $\PCC$ with at most $\chi$ new colors in $\chi$ rounds.
The algorithm operates as follows.
The preprocessing stage computes a proper $\chi$-coloring of the graph $G$.
This is stored implicitly, i.e., each node $v$ stores a single color (a positive integer) $dc(v)$. We call this coloring the initial coloring of $G$.
Online algorithm. We call the algorithm the “priority recoloring” algorithm. The set of nodes with the same initial coloring form an independent set which implies that nodes belonging to this set may be colored independently. We use the standard greedy algorithm to simultaneously color nodes with the same initial color in a single round. The initial colors are only computed to partition the original set of nodes into $\chi$ independent sets.
The input of each recurrent instance is a subset $S$ of the nodes that were precolored, i.e., each $v \in S$ has a precolor $c(v)$. For convenience, consider $c(v) = 0$ for all $v \not \in S$.
The required output is a color completion of the precoloring: each node $v \not \in S$ outputs a color $c(v) \in \mathbb{N}$ such that the colors assigned to all vertices form
a proper coloring of the graph $G$.
The online algorithm $\cA$ operates as follows.
* For $r = 1, 2, \dots \chi$ rounds, do
* If $dc(v) = r$ and $c(v) = 0$ then, $c(v) \gets \min(\mathbb{N} \setminus \Gamma(v))$, where
$\Gamma(v) = \{c(w) | (w, v) \in E(G)\}$
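A compact Python simulation of this online algorithm is given below (the input maps are our own illustrative interface); within each round the nodes of default color $r$ form an independent set, so the sequential inner loop faithfully simulates their parallel action:

def priority_recoloring(dc, precolor, neighbors, chi):
    # dc: node -> default color in [1..chi]; precolor: node -> positive color,
    # or 0 when not precolored; neighbors: node -> list of neighbors.
    c = dict(precolor)
    for r in range(1, chi + 1):
        for v in dc:
            if dc[v] == r and c.get(v, 0) == 0:
                used = {c.get(w, 0) for w in neighbors[v]}
                # min(N \ Gamma(v)): smallest positive color unused by neighbors
                c[v] = next(x for x in range(1, len(neighbors[v]) + 2)
                            if x not in used)
    return c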
Recall the optimal quantities $\chi^*_{\UN}$, $\chi^*_{new}$, $\chi^*_{all}$ and the algorithm's quantities $\chi^{\cA}_{\UN}$, $\chi^{\cA}_{new}$, $\chi^{\cA}_{all}$ defined earlier, where $\cA$ is the priority recoloring algorithm.
For any coloring, $\chi_{all} = \chi_{pc} + \chi_{new}$.
In particular,
$\chi^*_{all} = \chi_{pc} + \chi^*_{new}$ and
$\chi^{\cA}_{all} = \chi_{pc} + \chi^{\cA}_{new}$.
$\chi^{\cA}_{new} \le \chi$.
For every integer $k\ge 1$, let $\mathbb{N}_k=\{1,\ldots,k\}$.
Let $M=\max\cP_{pc}$, and let $FREE=\mathbb{N}_{M+\chi} \setminus\cP_{pc}$ be the set of free colors (not used in the precoloring) up to $M+\chi$.
Note that the cardinality of the set $FREE$ is at least $\chi$.
Let $\hat F = \{f_1,\ldots,f_{\chi}\}$ consist of the smallest $\chi$ integers in the set $FREE$.
By induction on $k$ from 1 to $\chi$, one can verify that during iteration $k$ of the algorithm, the colors the algorithm uses for the uncolored vertices of default color $dc(v)=k$ are taken from $\cP_{pc} \cup \{f_1,\ldots,f_k\}$. Hence $\cP^{\cA}_{\UN} \subseteq \cP_{pc} \cup \hat F$, implying that $\chi^{\cA}_{new} \le |\hat F| = \chi$.
Consider a graph $G$ with chromatic number $\chi = \chi(G)$. With preprocessing allowed, there exists an algorithm $\cA$ that can solve an instance of $\PCC$ with $\chi_{all}^{\cA} \leq \chi + \chi^*_{all}-1$ colors and with $\chi_{new}^{\cA} \leq \chi$ in $\chi$ units of time.
§.§.§ Hard examples and negative results
A natural question is how tight these bounds are.
Note first that the priority recoloring algorithm does not necessarily yield a good approximation for $\chi_{new}$ (i.e., a bound of the form $\chi^{\cA}_{new} \le \rho\cdot\chi^*_{new}$ for some approximation ratio $\rho$). To see this, consider the example of Fig. <ref>. In this example, $\chi^{\cA}_{new} =4$ while $\chi^*_{new}=0$.
Poor approximation for $\chi_{new}$.
Black numbers denote the optimal coloring in the preprocessing stage ($\chi=4$).
The red numbers represent the precoloring ($\chi_{pc}=10$).
The green numbers are a coloring of the clique nodes that optimizes the number of new colors (yielding $\chi^*_{new}=0$).
Note that the priority algorithm will use the new colors 7, 8, 9, 10, so $\chi^{\cA}_{new}=4$.
In this example, the problem can be attributed in part to the fact that the precoloring uses two non-contiguous blocks of colors, namely, $\{1,\ldots,6\}\cup\{11,\ldots,14\}$.
However, it is possible to construct an example where the priority coloring algorithm performs poorly despite the fact that the precoloring uses a single contiguous block of colors. Consider the graphs constructed recursively as shown in Figures <ref> and <ref>.
Initial coloring: The numbers on the graphs show the initial coloring. Note that the initial colors of the nodes in the cliques $K_{\chi-2}$ are not specified; they must be completed so that they are consistent with those shown in the figure.
Precoloring: The nodes in the cliques ($K_{\chi-2}$) are precolored with colors from $1, \dots, \chi - 2$.
For the graph $G_{\chi - 2}$, the priority recoloring algorithm uses $2\chi - 2$ total colors and $\chi - 2$ new colors, whereas the optimal solution uses only $\chi$ total colors and $2$ new colors. The optimal solution can in fact be obtained by the priority recoloring algorithm under a different initial coloring: replace each initial color $x$ by $\chi + 1 - x$ in the same graph; for that initial coloring, the priority recoloring algorithm yields an optimal solution.
Figure: Constructing $G_2$ from $G_0, G_1$ (initial coloring).
Figure: Constructing $G_k$ from $G_0, G_1, \dots, G_{k-1}$ (numbers denote the initial coloring).
(a) The following precoloring instance is bad: color all the $K_{\chi - 2}$ cliques with colors from $1, 2, \dots, \chi-2$ and leave the rest uncolored.
However, combining Lemma <ref> and Obs. <ref> we get the following.
$\chi^{\cA}_{all} \le \chi_{pc} + \chi$.
Since $\chi^*_{all} \ge \max\{\chi_{pc},\chi\}$, this yields $\chi^{\cA}_{all} \le \chi_{pc} + \chi \le 2\max\{\chi_{pc},\chi\} \le 2\chi^*_{all}$, i.e., an approximation of ratio 2 for $\chi_{all}$.
$\chi^{\cA}_{all} \le 2\chi^*_{all}$.
(Lower bound for $\chi^{\cA}_{new}$).
For every deterministic distributed algorithm $\cA$ that solves $\PCC$ with the guarantee that $\chi^{\cA}_{new} < \chi^{*}_{new} + \chi$, there exists a graph $G$ such that even with preprocessing allowed, there exists an instance of $\PCC$ for which $\cA$ takes $\Omega(D)$ units of time, where $D$ is the diameter of the graph $G$.
Figure: Lower bound graph for $\PCC$. $K_{\chi}$ denotes a clique of size $\chi$, and one node of $K_{\chi}$ is connected to one end of a path with $t$ vertices; the other end of the path is adjacent to the vertices precolored $c_1, c_2, \dots, c_{\chi}$.
Consider the graph $G$ shown in Figure <ref>. The labels given to the nodes denote the precoloring; none of the nodes in the clique $K_{\chi}$ are precolored. The diameter of the graph is $t + 2$.
Consider the set of instances where the precolors $c_1, c_2, \dots, c_{\chi}$ are chosen to be distinct integers from the set $S = \{3, 4, \dots, 2\chi + 2\}$. There are in total $\binom{2\chi}{\chi}$ different instance precolorings.
For any deterministic algorithm $\cA$ that runs in $o(t)$ time, the output, consisting of the colors chosen by $\cA$ for the nodes in the clique $K_{\chi}$, must be the same for each of the $\binom{2\chi}{\chi}$ instances described above. Let these colors be $\cP_{\UN} = \{\gamma_1, \gamma_2, \dots, \gamma_{\chi}\}$. Since $|S| = 2\chi$, we have $|S \setminus \cP_{\UN}| \geq \chi$, which implies that there exists an instance (with the colors $c_1, c_2, \dots$ chosen from $S \setminus \cP_{\UN}$) such that $\cP_{c} \cap \cP_{\UN} = \emptyset$; consequently, for that instance, $\chi^{\cA}_{new} = |\cP_{\UN}| = \chi$.
However, it is optimal to color the nodes of the clique with the colors $c_1, c_2, \dots, c_{\chi}$, which gives $\chi^*_{new} = 0$.
Thus there exists an instance for which $\chi^{\cA}_{new} = \chi^*_{new} + \chi$.
Note that the proof also shows that for any such algorithm $\cA$, there are instances for which $\chi^*_{all} = \chi+2$ but $\chi^{\cA}_{all} = 2\chi+2$, and therefore there cannot exist a deterministic CTAS to minimize $\chi_{new}$.
Randomization also does not help. In the graph constructed above, for any randomized algorithm that takes $o(t)$ rounds, the distribution of the colors assigned to the vertices of the clique $K_{\chi}$ must be independent of the values of $c_1, c_2, \dots, c_{\chi}$. Furthermore, there must exist a set of $\chi$ colors $T$ such that the probability that the algorithm chooses $T$ is no more than $\frac{1}{\binom{2\chi}{\chi}}$. For the input where $c_1, c_2, \dots, c_{\chi}$ are chosen from $S \setminus T$, the same bounds for $\chi^{\mathcal{A}}_{all}$ and $\chi^{*}_{all}$ are obtained. Therefore any algorithm that operates in $o(D)$ rounds and uses fewer than $\chi^*_{all} + \chi$ colors cannot succeed with probability more than $\frac{1}{\binom{2\chi}{\chi}}$. This implies the following.
There is no deterministic CTAS for the $\PCC$ problem that minimizes $\chi_{new}$. Furthermore, there is no randomized CTAS that succeeds with any fixed probability.
Another implication of Thm. <ref> is that without preprocessing, solving $\PCC$ with $\chi^{\cA}_{new} < \chi^{*}_{new} + \chi$ requires time $\Omega(D)$.
For every integer $\chi \geq 2$ and every deterministic algorithm $\mathcal{A}$ that solves $\PCC$ with the guarantee that $\chi^{\mathcal{A}}_{new} \leq \chi^*_{new} + 1$, there exists a graph $G$ with chromatic number $\chi$ and a precoloring of $G$ for which $\mathcal{A}$ takes at least $\chi$ units of time, even with arbitrary preprocessing allowed.
Figure: Lower bound graph when $\chi = 4$: a sequence of $4$-cliques in which every vertex of one clique is joined to all vertices of the next clique except the corresponding one.
Consider a series of $l$ cliques of size $\chi$. Let $v_{i,j}$ be the $i^{th}$ vertex of the $j^{th}$ clique for $1 \leq i \leq \chi$ and $1 \leq j \leq l$.
In addition to the $l \cdot \binom{\chi}{2}$ edges within the cliques, add an edge between $v_{i, j}$ and $v_{k, j+1}$ for all $1 \leq j < l$ and $i \neq k$. In particular, all pairs of vertices of clique $j$ and clique $j+1$ are adjacent, except $v_{i, j}$ and $v_{i, j+1}$. See Figure <ref> for an example with $\chi = 4$.
It is easy to verify that the graph has chromatic number $\chi$: the color assignment $c(v_{i,j}) = i$ is a proper $\chi$-coloring.
The diameter of the graph is $l-1$; the graph resembles a path with $l$ vertices, except that each vertex is replaced by a clique and, between consecutive cliques, the maximum number of edges is added subject to the chromatic number of the graph remaining $\chi$.
The only way to color the graph using $\chi$ colors is to assign the vertices $v_{i, 1}, v_{i, 2}, \dots, v_{i, l}$ the same color, for every $1 \leq i \leq \chi$.
Consider a precoloring where only the vertices of clique $1$ are colored (assume $l > \chi$). Since $\mathcal{A}$ runs in fewer than $\chi$ units of time, the colors of the vertices in clique $l$ must be the same regardless of the precolors assigned to the vertices of clique $1$. Suppose $c_i$ is the color assigned to $v_{i, l}$, and consider the precoloring instance with $c_{pre}(v_{i, 1}) = c_{(i\bmod{\chi})+1}$ (the next color in cyclic order). There is no way to complete the coloring in $o(D)$ time without using an additional color.
Suppose the algorithm assigns the color $\chi + 1$ somewhere. Then between any two adjacent cliques $j, j+1$, there can be at most one $i$ such that $c(v_{i,j}) \neq c(v_{i, j+1})$. Therefore at least one of the cliques $\chi+1, \chi+2, \dots, l$ must have a different output when the input is changed. However, this cannot occur if the algorithm takes fewer than $\chi$ units of time.
§ RECURRENT LOCALLY CHECKABLE LABELLINGS (LCL)
Locally Checkable Labellings (LCL) were first proposed by Naor and Stockmeyer <cit.>. Informally, an LCL problem on a graph $G$ asks for an assignment $\Gamma_{out}$ of labels to the vertices of $G$ that satisfies a set of rules verifiable “locally". These are problems whose solutions can be verified by an $O(1)$ round distributed algorithm in the $\LOCAL$ model. Whenever the solution is incorrect, at least one of the nodes in the graph detects this (not necessarily all of them).
An LCL problem for a graph $G$ is described by a 5-tuple $(r, \Sigma_{in}, \Sigma_{out}, \Gamma_{in}, \mathcal{C})$ where
* $\Sigma_{in}$ is a set of input labels,
* $\Gamma_{in} : V(G) \rightarrow \Sigma_{in}$ is an assignment of input labels to each vertex of $G$,
* $\Sigma_{out}$ is a set of output labels,
* $\mathcal{C}$ is a set of rules. Each element of $\mathcal{C}$ is a labelled centered graph $H$ with a designated center $w\in V(H)$ and a labelling $\Gamma: V(H) \rightarrow \Sigma_{in} \times \Sigma_{out}$; the distance of every node in $H$ from $w$ is at most $r$.
For a given vertex $u \in V(G)$, let $G_r(u)$ be the graph induced by vertices $v$ of $G$ that are at a distance at most $r$ from $u$.
A given labelling $\Gamma_{out} : V(G) \rightarrow \Sigma_{out}$ is valid if and only if for every vertex $u \in V(G)$, there is a graph $H \in \mathcal{C}$ and an isomorphism $\phi : V(G_r(u)) \rightarrow V(H)$ such that
* $\phi(u)$ is the designated center of $H$,
* $(\Gamma_{in}(v), \Gamma_{out}(v)) = \Gamma(\phi(v))$ for every $v \in V(G_r(u))$.
Problems such as computing (an arbitrary) Dominating Set, Vertex Cover, Maximal Matching, or $\Delta + 1$ Coloring can be represented as LCLs. These examples do not require input labels (i.e., we can construct LCL's where every vertex has the same input label). Problems such as finding
a client dominating set or a color completion (i.e., variants of the classical problems with PFO or PCS instances) can also be captured by the above definition; however, they crucially require input labels, i.e., $|\Sigma_{in}| > 1$ for these LCL's.
To realise the Client Dominating Set as an LCL, take $\Sigma_{in}$ to be $\{\textsf{client}, \textsf{non-client}\}$ and $\Sigma_{out} = \{\textsf{server}, \textsf{non-server}\}$. The input labelling $\Gamma_{in}$ assigns the input labels according to the client set $C$ given by the $\CDS$ instance. The set of rules $\mathcal{C}$ consists of all centered graphs with radius $1$ and degree at most $\Delta(G)$ wherein one of the following holds: (i) the center is labelled a $\textsf{server}$, (ii) one of the neighbors of the center is labelled as a $\textsf{server}$, or (iii) the center has input label $\textsf{non-client}$. Restricting $\Sigma_{in} = \{\textsf{client}\}$ captures the classical Dominating Set problem. Note that LCL's are typically not optimisation problems: one cannot in general minimize or maximize a set of labels, as such objectives are not locally verifiable.
§.§ Subgraph LCL's without Input Labels on Paths
In this section we consider a subset of recurrent LCL's, named subgraph LCL's without input labels, which were studied by Foerster et al. <cit.>.
In subgraph LCL's, the online instances ask for a valid labelling for some (edge induced) subgraph of the given graph $G$.
This class of LCL's is easier to solve, but already captures several classical problems, such as finding a dominating set, maximal matching, maximal independent set, $(k, l)$-ruling sets etc.
We consider subgraph LCL's on a path $P_n$.
Before getting to the solution, we first remark that one may consider, without loss of generality, only LCL's with radius $1$: given an LCL problem of radius $r$, one may construct an equivalent LCL with radius $1$ at the cost of increasing the output label size and the set of rules.
From prior work (Theorem 3 in Foerster et al. <cit.>), we may infer that if the round complexity of $\Pi$ in the $\LOCAL$ model is $o(n)$, then it must be $O(1)$ in the $\SUPPORTED$ model. This result is non-constructive: it asserts that a $o(n)$ round distributed algorithm can be transformed into an $O(1)$ round algorithm, without exhibiting the transformation. Additionally, it does not help categorize LCL problems that are $\Theta(n)$ in the $\LOCAL$ model. Some LCL problems (such as $2$-coloring) are $\Theta(n)$ in the $\LOCAL$ model, but clearly $O(1)$ in the $\SUPPORTED$ model. One can also construct LCL's that remain $\Theta(n)$ in the $\SUPPORTED$ model. Furthermore, the proof offers no insight into the additional amount of memory per node needed for the preprocessing stage. The following theorem addresses these questions. Note that, as in prior work, we treat the size of the description of $\Pi$ as constant in the round complexity (in particular, $|\Sigma_{out}|$ and $|\Sigma_{in}|$ are constants).
Let $\Pi$ be a subgraph LCL with $|\Sigma_{in}| = 1$ and let $P_n$ be a path on $n$ vertices. Then:
* $\SUPTIME(\Pi, P_n)$ is either $\Theta(1)$ or $\Theta(n)$,
* $\SUPSPACE(\Pi, P_n)$ is $O(1)$,
* $\SUPTIME(\Pi, P_n)$ and an optimal solution for $\Pi$ can be found by a centralized algorithm in time polynomial in the size of $\Pi$.
As remarked earlier, we may assume that the radius $r$ of the LCL problem is $1$. Therefore, on a path we can represent $\mathcal{C}$ as consisting of centered paths on 1, 2 or 3 vertices, whose (ordered) label tuples form a subset of $\Sigma_{out} \cup \Sigma^2_{out} \cup \Sigma^3_{out}$ (recall that $|\Sigma_{in}| = 1$, so input labels can be ignored).
Note that the tuples in $\mathcal{C}$ are ordered; in particular, $(a, b)$ is different from $(b, a)$.
For tuples of length $2$, we assume the first element is the label of the center, and for tuples of length $3$, we assume that the middle element is the center. For example, $(a, b) \in \mathcal{C}$ represents a path on $2$ vertices with the center labeled $a$. Similarly, $(a, b, c)$ represents a path on $3$ vertices with the center labelled $b$ and the endpoint vertices labelled $a$ and $c$.
Construct a directed graph $G_d$ defined as follows.
Its vertex set is $V(G_d)=\Sigma^2_{out}$, and $E(G_d)$ contains a directed edge from $(a, b)$ to $(b, c)$ if and only if $(a, b, c) \in \mathcal{C}$.
Define the starting and terminal vertices of $G_d$ as
\begin{eqnarray*}
S &=& \{(a,b)\in \Sigma^2_{out} \mid (a,b)\in\mathcal{C}\},
\\
T &=& \{(a,b)\in \Sigma^2_{out} \mid (b,a)\in\mathcal{C}\}.
\end{eqnarray*}
The key observation underlying our proof is that finding a solution to the LCL problem on a path with $n$ vertices is equivalent to finding a walk on $n-1$ vertices of $G_d$ that begins at a starting vertex and ends at a terminal vertex.
For every path $P_n$ with $n \geq 3$, an assignment of output labels $(s_0, s_1, \dots s_{n-1})$ is valid if and only if $(s_0, s_1), (s_1, s_2) \dots (s_{n-2}, s_{n-1})$ is a walk in $G_d$ that begins at a starting vertex in $S$ and ends at a terminal vertex in $T$.
($\Rightarrow$) By correctness of the solution we must have $(s_0, s_1), (s_{n-1}, s_{n-2}) \in \mathcal{C}$. By definition, $(s_0, s_1) \in S$ and $(s_{n-2}, s_{n-1}) \in T$.
By correctness of the LCL we have $(s_{i-1}, s_{i}, s_{i+1}) \in \mathcal{C}$, and therefore there is an edge from $(s_{i-1}, s_i)$ to $(s_i, s_{i+1})$, for every $1 \le i \le n-2$. Hence the given sequence is a walk in $G_d$.
($\Leftarrow$) As the starting and terminal vertices are in $S$ and $T$, respectively, we have $(s_0, s_1) \in \mathcal{C}$ and $(s_{n-1}, s_{n-2}) \in \mathcal{C}$. Therefore the rules are satisfied at the ends of the path. For the intermediate vertices, there is an edge from $(s_{i-1}, s_i)$ to $(s_i, s_{i + 1})$ for every $1 \le i \le n-2$, and therefore $(s_{i-1}, s_i, s_{i+1}) \in \mathcal{C}$. Hence the sequence is a valid assignment of labels.
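To illustrate the reduction, the following Python sketch (our own illustration, not code from the paper) builds the ingredients of $G_d$ from a rule set $\mathcal{C}$ and checks a labelling via the walk criterion of Claim <ref>:

```python
def build_Gd(C):
    """C: rule set of ordered tuples over Sigma_out of length 2 or 3.
    Returns the allowed triples (which encode E(G_d)) and the sets S, T."""
    triples = {t for t in C if len(t) == 3}
    pairs = {t for t in C if len(t) == 2}
    S = set(pairs)                    # (a, b) is a starting vertex iff (a, b) in C
    T = {(b, a) for (a, b) in pairs}  # (a, b) is terminal iff (b, a) in C
    return triples, S, T

def is_valid(labels, C):
    """Check a labelling s_0..s_{n-1} of P_n (n >= 3) via the walk criterion."""
    triples, S, T = build_Gd(C)
    if (labels[0], labels[1]) not in S or (labels[-2], labels[-1]) not in T:
        return False
    return all((labels[i-1], labels[i], labels[i+1]) in triples
               for i in range(1, len(labels) - 1))

# Toy rule set encoding maximal independent set on a path with labels {0, 1}:
# no two adjacent 1's, and every 0 must have a neighbor labelled 1.
C = {(1, 0), (0, 1), (0, 1, 0), (1, 0, 0), (0, 0, 1), (1, 0, 1)}
print(is_valid([1, 0, 0, 1, 0, 1], C))  # True
```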
Given a directed graph $G_d$ and two vertices $u, w \in V(G_d)$, define $\mathrm{walkspan}(u, w)$ as the set of lengths of walks in $G_d$ that start at $u$ and end at $w$. We extend the definition to subsets $U, W \subseteq V(G_d)$ in the natural way, i.e., $\mathrm{walkspan}(U, W) = \bigcup_{u \in U, w \in W} \mathrm{walkspan}(u, w)$.
Let $\alpha=|\Sigma_{out}|^2$. For a set of integers $S$ and a positive integer $k$, let $S/k = \{j \pmod k \mid j \in S\}$.
If $\SUPTIME(\Pi, P_n) = o(n)$ then $G_d$ contains a cycle $C$ and a vertex $v \in C$ such that
\begin{equation}\label{eqn:criteria}
\mathrm{walkspan}(S, v) / |C| ~=~ \mathrm{walkspan}(v, T) / |C| ~=~ \{0, 1, \dots |C|-1\}.
\end{equation}
Let $\mathcal{A}$ be any distributed algorithm for the online phase that solves $\Pi$ in $o(n)$ rounds (recall that $\mathcal{A}$ can use any information obtained from an arbitrary preprocessing phase).
Let the given path be $P_n = (v_1, v_2, \dots, v_n)$, ordered from its left end to its right end. We assume $n>6\alpha$.
Consider the subpath $Q = (v_{n/2-\alpha/2}, \dots, v_{n/2 + \alpha/2 + 1})$, i.e., a path of length at least $\alpha + 1$ around the center of $P_n$.
Now construct $\alpha$ instances for the online phase, namely, $I_1, I_2, \dots I_{\alpha}$, where
$I_i=(v_i, v_{i+1}, \dots v_{n-i+1})$ for $1 \le i \le \alpha$, namely, each $I_i$ is obtained from $P_n$ by removing the $i-1$ first and last vertices.
Note that since $n>6\alpha$, the first (respectively, last) vertex of $Q$ is at distance $\Omega(n)$ from $v_{\alpha}$ (resp., $v_{n-\alpha+1}$). Hence, each of the instances $I_i$ fully contains the subpath $Q$, and moreover, its start segment (from $v_i$ to the first vertex of $Q$) and end segment (from the last vertex of $Q$ to $v_{n-i+1}$) are of length $\Omega(n)$.
This implies that during the execution of the online algorithm on any given recurrent instance $I_i$, the vertices in $Q$ cannot distinguish between any of the instances constructed above (i.e., they will see exactly the same inputs, and consequently perform exactly the same steps, on each of these instances). Consequently, for every vertex $v_j$ in $Q$, the output of $\mathcal{A}$ is the same for every instance $I_i$. Let the output on $Q$ be $\bar\psi = (s_0, s_1, \dots, s_{|Q|-1})$.
As $|Q| > \alpha$, there exists a subpath of $Q$, say $\bar Q$, whose assigned labels
$s_t, s_{t+1}, \dots s_{j-1}, s_j$ form a simple cycle, i.e., such that $s_{j-1}=s_t$ and $s_j = s_{t+1}$.
By the correctness of these labels, we have that $(s_t, s_{t+1}), (s_{t+1}, s_{t+2}) \dots (s_{j-1}, s_{j})$ is a cycle in $G_d$. Denote this cycle by $C$ and let the first vertex of $\bar Q$ be $v_{\ell}$. We show that $C$ and $(s_t, s_{t+1})$ are the desired cycle and vertex satisfying the properties of the lemma.
Note that in all the instances $I_1, I_2, \dots I_{\alpha}$, the labels of the vertices $v_{\ell}, v_{\ell+1}$ assigned by $\mathcal{A}$ remain the same (i.e., $s_t, s_{t+1}$ respectively).
Consider instance $I_i=(v_i, v_{i+1}, \dots v_{n-i+1})$.
Let the labels assigned by $\mathcal{A}$ to this path be $\psi=(s'_1, s'_2, \dots, s'_{\ell-i+1}, s'_{\ell-i+2}, \dots, s'_{n-i+1})$. We have $s'_{\ell-i+1} = s_t$ and $s'_{\ell-i+2} = s_{t+1}$ (here $s'_{\ell-i+1}, s'_{\ell-i+2}$ are the labels of $v_{\ell}, v_{\ell+1}$, respectively). Note that $\psi$ is valid. Therefore, by Claim <ref>, $(s'_1, s'_2) \in S$ and $(s'_1, s'_2), (s'_2, s'_3), \dots, (s'_{\ell-i+1}, s'_{\ell-i+2})$ is a walk of length $\ell-i+1$ in $G_d$ that ends at $(s'_{\ell-i+1}, s'_{\ell-i+2})=(s_t, s_{t+1})$. It follows that for every $i = 1, 2, \dots, \alpha$, there exists a walk that (i) starts from some vertex in $S$, (ii) ends at the vertex $(s_t, s_{t+1})$, and (iii) is of length $\ell-i+1$. We have shown that $\mathrm{walkspan}(S, (s_t, s_{t+1}))$ contains $\alpha \geq |C|$ contiguous integers and hence $\mathrm{walkspan}(S, (s_t, s_{t+1})) / |C| = \{0, 1, \dots |C|-1\}$. By a symmetric argument, $\mathrm{walkspan}((s_t, s_{t+1}), T) / |C| = \{0, 1, \dots |C|-1\}$.
If $G_d$ contains a cycle $C$ and a vertex $v \in C$ that satisfies Equation (<ref>), then $\Pi$ is solvable in $O(\alpha^2) = O(|\Sigma_{out}|^4)$ rounds in the $\SUPPORTED$ model.
We first compute the shortest length walks of each congruence class in $\mathrm{walkspan}(S, v)$ and $\mathrm{walkspan}(v, T)$ modulo $|C|$. We show that the shortest such walk has length at most $2 \alpha^2$.
Consider any walk $W = (w_0, w_1, \dots)$ in $G_d$.
Decompose the walk into an alternating sequence of simple paths and cycles, i.e., $W = P_0 \circ C_0 \circ P_1 \circ C_1 \dots$. Such a decomposition can be obtained by finding the smallest prefix of the walk that contains a simple cycle, say $P_0 \circ C_0$; remove the vertices of $P_0 \circ C_0$ except the last vertex, and repeat recursively on the remaining part of the walk.
We would like to shorten the walk $W$, while maintaining two invariants: (i) the remainder $|W|\pmod{|C|}$ obtained when the length of the walk is divided by $|C|$, and (ii) the fact that $W$ starts at a vertex from $S$ and ends at a vertex in $T$. We first observe that removing cycles in the walk does not affect invariant (ii). To achieve (i) we use the following well known number-theoretic fact.
For any sequence of $n$ (not necessarily distinct) integers $a_1, a_2, \dots a_n$, there exists a subset of these integers whose sum is divisible by $n$.
Define $s_i = (a_1 + a_2 + \dots + a_i) \pmod n$ for $i = 1, 2, \dots n$ and define $s_0 = 0$. By the pigeon-hole principle, there exist $0 \leq i < j \leq n$ such that $s_i = s_j$. The desired set is $\{a_{i+1}, a_{i+2}, \dots, a_j\}$.
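The proof is constructive; a short Python sketch of it (ours) returns a contiguous block whose sum is divisible by $n$:

```python
def block_divisible_by_n(a):
    """Return 0-indexed (i, j) with a[i] + ... + a[j] divisible by n = len(a),
    following the prefix-sum pigeonhole argument above."""
    n = len(a)
    seen = {0: -1}               # residue of prefix sum -> index of that prefix
    s = 0
    for j, x in enumerate(a):
        s = (s + x) % n
        if s in seen:            # two equal residues: the block between works
            return seen[s] + 1, j
        seen[s] = j
    raise AssertionError("unreachable: n + 1 prefixes, only n residues")

print(block_divisible_by_n([3, 1, 4, 1, 5]))  # (1, 2): 1 + 4 = 5 is divisible by 5
```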
Apply the following shortening process to $W$. While there are at least $|C|$
cycles in the walk decomposition, choose any subset of the cycles whose total length is divisible by $|C|$ and remove them. At the end of this process, we are left with a sequence of simple paths and cycles with at most $|C| - 1 < \alpha$ cycles and at most $|C| \leq \alpha$ paths.
Each simple cycle and path contains at most $\alpha$ vertices and therefore the length of the final shortened walk $W$ is at most $2 \alpha^2$.
We are now ready to describe the distributed recurrent algorithm for solving $\Pi$, consisting of a preprocessing stage and an online procedure.
Preprocessing Stage. In the preprocessing phase, we first compute a candidate cycle and vertex pair $C, v$ satisfying Equation (<ref>). Since $\Pi$ is global knowledge, $C, v$ can be reconstructed by each node in the online stage, as long as they use the same deterministic algorithm to find it. We only require the length of the cycle, $|C|$. Split the path into blocks of size exactly $|C|$, except possibly the last block. Color each node using two colors $0, 1$ such that two adjacent nodes have the same color if and only if they belong to the same block. Let this coloring be $\psi$. In addition to the above decomposition, we also orient the edges such that every node has outdegree at most $1$. This gives a consistent left to right orientation to the nodes of the path. We require only $1$ bit to be stored in each node, namely which of its neighbors has the outgoing edge. In total we have only two bits of information given to each node during the preprocessing stage, one bit for orientation and another bit for the block decomposition.
Online Stage. Each node computes a candidate pair $C, v$ that satisfies Equation (<ref>) using the same deterministic algorithm. We also compute the shortest length walks $L_1, L_2, \dots, L_{|C|}$ from a vertex in $S$ to $v$ and the walks $R_1, R_2, \dots, R_{|C|}$ from $v$ to a vertex in $T$ such that $|L_i| \equiv |R_i| \equiv i \pmod {|C|}$. We discuss later how all of the above information can be obtained by centralized algorithms that run in time polynomial in the size of $\Pi$ (this bound does not affect the round complexity, but shows that nodes only perform computation that is polynomial in the size of $\Pi$).
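One standard way to compute these shortest walk lengths per congruence class, sketched below in Python (our own illustration; the text only asserts that a polynomial-time centralized computation exists), is a breadth-first search over the product of $V(G_d)$ with the residues modulo $k=|C|$; running it from $S$ gives the lengths $|L_i|$, and running it from $v$ (taking minima over $T$) gives the $|R_i|$:

```python
from collections import deque

def shortest_walks_mod_k(nodes, out_neighbors, sources, k):
    """BFS over the layered graph (node, walk length mod k).
    Returns dist[(u, r)] = length of a shortest walk from `sources` to u
    that is congruent to r (mod k), or None if no such walk exists."""
    dist = {(u, r): None for u in nodes for r in range(k)}
    queue = deque()
    for s in sources:
        dist[(s, 0)] = 0
        queue.append((s, 0))
    while queue:
        u, r = queue.popleft()
        for w in out_neighbors.get(u, ()):
            state = (w, (r + 1) % k)
            if dist[state] is None:
                dist[state] = dist[(u, r)] + 1
                queue.append(state)
    return dist
```

Since the layered graph has at most $\alpha \cdot |C| \le \alpha^2$ states, any residue that is realizable at all is realized by a walk of length at most $\alpha^2$, consistent with the $2\alpha^2$ bound above.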
Let $I$ be the online instance, which is a set of subpaths of the path $P$. We solve each subpath independently. Let $P_s$ be a subpath in $I$. We may assume that $P_s$ has at least $4\alpha^2 + 2\alpha$ nodes; otherwise the instance can be solved by a single node that collects the entire subpath $P_s$.
Figure: A path $P_n$ decomposed into blocks of size $k=3$, and a subpath $P_s$ chosen for some online instance. The labels $s_i$ are obtained from the cycle $C$ of the corresponding graph $G_d$.
Decompose the subpath $P_s$ into blocks by removing edges whose ends (say $u, v$) have different colors ($\psi(u) \neq \psi(v)$). This decomposes $P_s$ into blocks of size exactly $k = |C|$, except possibly the first and last blocks (i.e., those containing the ends of the subpath). Each block is also oriented from left to right (using the orientation remembered from the preprocessing phase). Let the labels of the cycle $C$ be $(s_1, s_2), (s_2, s_3), \dots, (s_k, s_1), (s_1, s_2)$, with $(s_1, s_2)$ denoting the vertex $v$. Label the $i^{th}$ vertex of each block (numbered from the left) with $s_i$. This is already “almost" a valid labelling for $P_s$: except near the ends of the subpath $P_s$, all nodes see a graph from $\mathcal{C}$ in their local $1$-neighborhood. Let the numbers of vertices in the first and last blocks of $P_s$ be $a', b'$ respectively, and let $a, b$ be the integers such that $1 \leq a, b \leq k$, $a \equiv a'+1 \pmod{k}$, and $b \equiv b'-1 \pmod{k}$. Write $L_a = ((t_1, t_2), (t_2, t_3), \dots, (t_{|L_a|-1}, s_1), (s_1, s_2))$. Replace the labels of the first $|L_a|-1$ nodes of $P_s$ with $t_1, t_2, \dots, t_{|L_a|-1}$. Similarly, replace the labels of the last $|R_b|-1$ vertices with the labels obtained from the walk $R_b$. Both can be done distributively in $|L_a| + |R_b| = O(\alpha^2)$ rounds by having the ends of the subpath $P_s$ relay this information to the nodes. As the length of the path is at least $4 \alpha^2 + 2\alpha$, the first $|L_a|-1$ and last $|R_b|-1$ vertices of the subpath $P_s$ are disjoint. The resulting labelling traces a walk in $G_d$ from a vertex in $S$ to a vertex in $T$, and by Claim <ref> it is a valid labelling for the LCL $\Pi$.
Figure <ref> shows an example of an LCL whose cycle, $C = \{(s_1, s_2), (s_2, s_3), (s_3, s_1)\}$, contains three vertices. The online subpath $P_s$ is such that the precomputed block decomposition splits its first and last blocks into sizes $a'=2, b'=1$ respectively, so the desired walk lengths are $a=3$ and $b=3$. In this example $L_3$ and $R_3$ are both walks on $6$ vertices: $L_3 = \{(t_1, t_2), (t_2, t_3), \dots, (t_5, s_1), (s_1, s_2)\}$, and similarly $R_3$ is the walk $\{(s_1, s_2), (s_2, u_5), (u_5, u_4), \dots, (u_2, u_1)\}$.
We conclude by justifying the algorithm and analyzing its time complexity.
First, notice that we do not need to precompute the cycle $C$ in the preprocessing stage. Given the description of $\Pi$, we can verify in the online execution whether there exists a cycle $C$ and vertex $v$ satisfying Equation (<ref>). This can be done in a single round (in which the number of computational steps performed locally at each vertex is polynomial in the size of $\Pi$),
as $G_d$ has $\alpha = |\Sigma_{out}|^2$ vertices.
To compute (online) the desired walks (or establish that they do not exist), note that we only need to consider walks of length at most $2\alpha^2 = O(|\Sigma_{out}|^4)$.
# Kernel Density Estimation with Linked Boundary Conditions
Matthew J. Colbrook, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK.
Zdravko I. Botev, School of Mathematics and Statistics, The University of New South Wales, Sydney, NSW 2052, Australia.
Karsten Kuritz, Institute for Systems Theory and Automatic Control, University of Stuttgart, 70569 Stuttgart, Germany.
Shev MacNamara, ARC Centre of Excellence for Mathematical and Statistical Frontiers, School of Mathematical and Physical Sciences, University of Technology Sydney, NSW 2007, Australia.
###### Abstract
Kernel density estimation on a finite interval poses an outstanding challenge
because of the well-recognized bias at the boundaries of the interval.
Motivated by an application in cancer research, we consider a boundary
constraint linking the values of the unknown target density function at the
boundaries. We provide a kernel density estimator (KDE) that successfully
incorporates this linked boundary condition, leading to a non-self-adjoint
diffusion process and expansions in non-separable generalized eigenfunctions.
The solution is rigorously analyzed through an integral representation given
by the unified transform (or Fokas method). The new KDE possesses many
desirable properties, such as consistency, asymptotically negligible bias at
the boundaries, and an increased rate of approximation, as measured by the
AMISE. We apply our method to the motivating example in biology and provide
numerical experiments with synthetic data, including comparisons with state-of-the-art KDEs (which currently cannot handle linked boundary constraints). Results suggest that the new method is fast and accurate. Furthermore, we demonstrate how to build statistical estimators of the boundary conditions satisfied by the target function without a priori knowledge. Our analysis can
also be extended to more general boundary conditions that may be encountered
in applications.
Keywords: density estimation, diffusion, unified transform, linked boundary
conditions, boundary bias, biological cell cycle.
## 1 Introduction and Background
Suppose we are given an independent and identically distributed sample
$X_{1},\ldots,X_{n}$ from some unknown density function $f_{X}$. Throughout,
we will use a subscript $X$ in $f_{X}$ to indicate that $f_{X}$ is the
probability density function of the random variable $X$. We will also denote
expectation and variance with respect to $f_{X}$ by $\mathbb{E}_{f_{X}}$ and
$\mathrm{Var}_{f_{X}}$ respectively. Estimating the density $f_{X}$ is one of
the most common problems for discovering patterns in statistical data [7, 62,
63]. When the support of $f_{X}$ is the whole real line, a simple and popular
non-parametric method for estimating $f_{X}$ is the kernel density estimator
(KDE)
$\widehat{f}(x;t)=\frac{1}{n\sqrt{t}}\sum_{k=1}^{n}\varphi\left(\frac{x-X_{k}}{\sqrt{t}}\right),$
(1)
with a kernel $\varphi(x)$. A common choice is a Gaussian kernel
$\varphi(x)=\exp(-x^{2}/2)/\sqrt{2\pi}$. Here $\sqrt{t}$ is the so-called
bandwidth parameter that controls the smoothness of the estimator (see, for
example, [68, 67, 62, 63] and references therein). Another viewpoint is to
connect kernel density estimation to a diffusion equation, an approach
pioneered by the second author in [5]. Our goal in this article is to extend
this analysis to linked boundary conditions. A key tool in our analysis is the
unified transform (also known as the Fokas method), a novel transform for
analyzing boundary value problems for linear (and integrable non-linear)
partial differential equations [25, 26, 28, 27, 66, 17, 16, 12, 11, 10, 13, 9,
61]. An excellent pedagogical review of this method can be found in the paper
of Deconinck, Trogdon & Vasan [18].
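For reference, the baseline estimator (1) with a Gaussian kernel takes only a few lines; the following Python sketch (our illustration, not the authors' code) evaluates $\widehat{f}(x;t)$ at a vector of points:

```python
import numpy as np

def gaussian_kde(data, x, t):
    """Evaluate the Gaussian KDE (1) with bandwidth sqrt(t).
    data: sample X_1..X_n; x: evaluation points; t: squared bandwidth."""
    z = (np.asarray(x)[:, None] - np.asarray(data)[None, :]) / np.sqrt(t)
    return np.exp(-z**2 / 2).mean(axis=1) / np.sqrt(2 * np.pi * t)
```

The Gaussian kernel is the choice under which (1) coincides with the solution of the heat equation on the whole real line started from the empirical measure, which is the diffusion viewpoint extended in this article.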
It is well-known that $\widehat{f}(x;t)$ is not an appropriate kernel
estimator when $f_{X}$ has compact support [29], which (without loss of
generality) we assume to be the unit interval $[0,1]$. The main reason for
this is that $\widehat{f}(x;t)$ exhibits significant boundary bias at the endpoints of the interval. For example, with a Gaussian kernel, no matter how
small the bandwidth parameter, $\widehat{f}(x;t)$ will have non-zero
probability mass outside the interval $[0,1]$. Various solutions have been
offered to cope with this boundary bias issue, which may be classified into
three main types:
1. (a)
Using special (non-Gaussian) kernels with support on $[0,1]$ or on
$[0,\infty)$, as in [6, 42, 56];
2. (b)
Adding bias-correction terms to $\widehat{f}(x;t)$ as in [14, 37];
3. (c)
Employing domain transformations [29, 44], which work by mapping the data to
$(-\infty,\infty)$, constructing a KDE on the whole real line, and finally
mapping the estimate back to $[0,1]$.
Additionally, sometimes we not only know that $f_{X}$ has support on $[0,1]$,
but also have extra information about the values of $f_{X}$ at the boundaries.
One example of this situation is what we will refer to as a _linked boundary
condition_, where we know a priori that
$f_{X}(0)=rf_{X}(1)$
for some known given parameter $r\geq 0$. Most of our analysis also carries
over to complex $r$, as long as $r\neq-1$ (the PDE (2) is degenerate irregular
and the problem ill-posed when $r=-1$), but we focus on $r\geq 0$ since in
statistics $f_{X}\geq 0$. An example that motivated the current article arises
in the field of biology [39, 38], in particular cell cycle studies in cancer
research. The cell cycle itself is one of the fundamentals of biology and
knowledge about its regulation is crucial in the treatment of various
diseases, most prominently cancer. Cancer is characterized by an uncontrolled
cell growth and commonly treated with cytotoxic drugs. These drugs interfere
with the cell cycle and in this way cause cancer cells to die. By studying the
effect of chemicals on the cell cycle one can discover new drugs, identify
potential resistance mechanisms or evaluate combinatorial therapy. These kinds of studies have benefited from continued improvement in cell population
analysis methods like fluorescence microscopy, flow cytometry, CyTOF or
single-cell omics, where the abundance of up to thousands of cellular
components for every individual cell in a population is measured. In such an
experiment, cells in an unsynchronized cell population are spread over all
stages of the cell cycle. Trajectory inference algorithms then reduce the
dimensionality to a pseudotime scale by ordering cells in the population based
on their similarity in the dataset [53]. Subsequently, mathematical methods
based on ergodic principles infer molecular kinetics in the cell cycle from
the distribution of cells in pseudotime. The value at the left boundary of
this distribution must, because of cell division, be double the value at the
right boundary. In other words, we have linked boundary conditions with the
constant $r=2$, but otherwise, we do not know the value of the density at the
boundaries of the domain. The problem is described in more detail in Section
5.2, where we also demonstrate the estimator with linked boundary condition on
a real dataset. In particular, for this example, respecting the linked
boundary condition is crucial for generating the correct kinetics due to a
certain mapping between pseudotime and real time. See also [39, 38], for
example, for further motivation and discussion. In other applications, even if
we do not know the value of $r$, one can approximate the true value of $r$
which, together with the methods proposed in this article, leads to an
increase in the rate of approximation of $f_{X}$ as the sample size $n$
becomes large (we do this for an example in Section 5.1, see also §3 for some
results in this direction).
Unfortunately, to the best of our knowledge, all of the currently existing
kernel density estimation methods, bias-correcting or not, cannot
satisfactorily handle the linked boundary condition. Figure 1 shows a typical
example of what can go wrong when a standard density estimator is applied to
real biological data. The result is a smooth density with two unacceptable
features:
* •
The domain $x\in[0,1]$ is not respected, and instead the solution has positive
density for negative values of $x$, and also for $x>1$, which are physically
unreasonable. This problem can be addressed using existing bias-correction
methods and is not the challenge that we had to overcome in this article.
* •
The density does not respect the important biological constraint of the linked
boundary condition (that the left value should be double the right, in this
particular application), and instead the density decays to zero as $|x|$
becomes large. Existing bias-correction methods do not address this problem.
Figure 1: A typical example of output from a KDE (ksdensity from MATLAB)
applied to our real biological data. This does not respect the domain, and it
also does not respect the important linked boundary conditions. The methods
that we propose in this article address those issues simultaneously, with
results for this data set shown in Figure 7.
The purpose of this article is to describe a new KDE that can handle the more
general problem of linked boundary conditions with an arbitrary value of $r$;
the situation of interest in the biological application where $r=2$ is then
solved as an important special case. Figure 7 (C) shows a successful
application of our proposed method. The MAPiT toolbox for single-cell data
analysis [38] applies our new KDE with linked boundary conditions to analyze
cell cycle dependent molecular kinetics.
Our proposed estimator is of type (a), that is, we construct a special kernel
with support on $[0,1]$, and such that the linked boundary condition is
incorporated into the resulting estimator. Our kernel is inspired by the
solution of a diffusion-type PDE [1, 5, 45, 55, 69]. In particular, we modify
the diffusion model in [5] so that it satisfies the linked boundary
conditions. Unlike the case in [5], the non-self-adjoint initial-boundary
problem that arises cannot be diagonalized, meaning the solution cannot be
expressed as a series solution of eigenfunctions of the spatial differential
operator in the usual sense. Instead, we use the unified transform, which
provides an algorithmic recipe for solving these types of problems via an
integral solution. This was the way we first found the solution formula to our
diffusion model, and the integral representation simplifies many of the proofs
in our analysis. So far, the only case of our problem considered in the
literature on this method has been $r=1$ [66] (periodic). For the heat
equation with oblique Robin boundary conditions/non-local boundary conditions
we refer the reader to [43, 47, 51] and for interface problems we refer the
reader to [59, 60, 58]. Recently linked boundary conditions have been
considered for the Schrödinger equation in [50] (however, in [50], the
parameters were chosen such that the characteristic values were simple, in
other words the eigenvalues were simple, making the analysis easier and
leading to a series solution in terms of bona fide eigenfunctions).
We then construct a series expansion in non-separable generalized
eigenfunctions of the spatial derivative operator by deforming the contours in
the integral representation and applying Cauchy’s residue theorem. This formal
solution is then rigorously verified and studied via a non-symmetric heat
kernel. Each of these representations (integral and series) is beneficial for
different analyses. For instance, the integral representation is much easier
to construct and makes it easier to study regularity properties, as well as
some parts of the behavior as $t\downarrow 0$, whereas the kernel
representation is useful for proving conservation of mass (the solution
generates a true probability measure) and studying the asymptotic mean
integrated squared error (AMISE). Although it is not the goal of the present
article, we envisage that the method that we demonstrate here can also be
generalized to the multivariate case and to scenarios where other types of
boundary conditions (such as linked derivatives or non-local boundary
conditions) arise or can be estimated. In these situations, we recommend using
the unified transform to find the solution of the resulting PDE. For numerical
implementation of the unified transform, we refer the reader to [15].
We also consider the discrete counterpart of the continuous model for two
reasons. First, it is a numerical approximation to the continuous model and a
useful way to compute the solution of the PDE. Second, the discrete model is
relevant when we deal with data which is already pre-binned.
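To preview what such a discretization can look like, here is a minimal explicit finite-difference sketch in Python (our own illustration under the stated linked boundary conditions; the paper's actual discrete model and its convergence proof appear later in the article):

```python
import numpy as np

def diffuse_linked(f0, r, T, m=200):
    """Explicit Euler scheme for f_t = f_xx / 2 on [0, 1] up to time T,
    with linked BCs f(0) = r * f(1) and matching one-sided slopes.
    f0: initial values on the grid x_i = i / m (array of length m + 1)."""
    h = 1.0 / m
    dt = 0.5 * h**2                   # satisfies the stability limit dt <= h^2
    f = np.array(f0, dtype=float)
    for _ in range(int(np.ceil(T / dt))):
        f[1:-1] += 0.5 * dt * (f[2:] - 2.0 * f[1:-1] + f[:-2]) / h**2
        # Enforce f[1] - f[0] = f[m] - f[m-1] and f[0] = r * f[m];
        # solving this 2x2 system for the boundary values gives:
        f[-1] = (f[1] + f[-2]) / (1.0 + r)
        f[0] = r * f[-1]
    return f
```

By construction this sketch maintains the ratio $f(0,t)=rf(1,t)$ at every step, and for large $T$ it should flatten toward a straight line, mirroring the oversmoothing behavior described in Section 2.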
The rest of the article is organized as follows. In the next section, we
describe the continuous model for the application at hand. Our results provide
the necessary assurances that the PDE model is a valid and accurate density
estimator. We then discuss the issue of choosing an optimal bandwidth
(stopping time for the PDE model), including pointwise bias, asymptotic
properties and the AMISE. We briefly discuss numerical methods for calculating
the estimator and, in particular, a discretized version of the continuous PDE,
which we prove converges to the unique continuous solution. Finally, the new
method is applied to a real dataset from a biological application in Section
5.2, and we also provide an illustrative set of examples with synthetic
datasets. We compare our new estimator to several well-known methods and these
results suggest that our new method is typically more accurate and faster, and
that it does not suffer from boundary bias. We finish with a short conclusion.
All technical analysis and proofs are moved to the appendices to ensure that
the presentation flows more easily. Freely available code for the new kernel
methods is also provided at https://github.com/MColbrook/Kernel-Density-Estimation-with-Linked-BCs.
## 2 The Continuous Linked–Boundaries Model
In this section, we present the continuous diffusion model that satisfies the
linked boundary condition and discuss the analytical properties of its
solution. Our proposed diffusion model for a linked-boundary KDE is the
solution of the formal PDE system:
$\begin{split}\frac{\partial f}{\partial
t}&=\frac{1}{2}\frac{\partial^{2}f}{\partial x^{2}},\qquad
x\in[0,1],\;\;\;t>0,\\\ \mathrm{IC:}\quad\lim_{t\downarrow
0}f(\cdot,t)&=f_{0},\\\ \mathrm{BCs:}\quad f(0,t)&=rf(1,t),\quad\frac{\partial
f}{\partial x}(0,t)=\frac{\partial f}{\partial x}(1,t),\quad\forall
t>0.\end{split}$ (2)
The boundary condition $\frac{\partial f}{\partial x}(0,t)=\frac{\partial
f}{\partial x}(1,t)$ is enforced so that the solution at any time $t\geq 0$
gives a probability measure (see Theorem 4). When considering the setup
described in the introduction, the initial condition is given by
$\textstyle f_{0}=\frac{1}{n}\sum_{k=1}^{n}\delta_{X_{k}},$ (3)
the empirical measure of the given sample $X_{1},\ldots,X_{n}$. In other
words, $f_{0}$ is a sum of Dirac delta distributions. However, in our analysis
we also consider more general initial conditions. Many of the existence and
uniqueness theorems carry over from the well-known $r=1$ (periodic) case. In
particular, the boundary conditions and PDE make sense when the initial data
is given by a finite Borel measure, which we also denote by $f_{0}$. Sometimes
we will also refer to a function $g$ as a measure through the formula
$g(U)=\int_{U}g(x)dx$ for Borel sets $U$. Therefore, since the initial data is
a distribution, we need to be precise by what we mean when writing
$\lim_{t\downarrow 0}f(\cdot,t)=f_{0}$.
###### Definition 1.
Denote the class of finite Borel measures on $[0,1]$ by $M([0,1])$ and equip
this space with the vague topology (i.e. weak∗ topology). We let
$C^{w}(0,T;M([0,1]))$ denote the space of all continuous maps
$\displaystyle\mu:[0,T)\rightarrow M([0,1]),$ $\displaystyle\mu(t)=\mu_{t}.$
In other words, $\mu_{t}$ is continuous as a function of $t$ in the vague
topology, meaning that for any given function $g$ that is continuous on the
interval $[0,1]$, the integral $\int_{0}^{1}g(x)d\mu_{t}(x)$ is continuous as
a function of $t\in[0,T)$.
We look for weak solutions of (2). In terms of notation, we will denote the
derivative with respect to $x$ by $g_{x}$ and use $\mu(g)$ to denote the
integration of a function $g$ against a measure $\mu$. The following adjoint
boundary conditions are exactly those that arise from formal integration by
parts.
###### Definition 2.
Let $\mathcal{F}(r)$ denote all $g\in C^{\infty}([0,1])$ satisfying the
adjoint linked boundary conditions
$g(1)=g(0),\quad g_{x}(1)=rg_{x}(0).$ (4)
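To see why (4) is the natural adjoint condition, a short formal computation (our own, spelling out the integration by parts mentioned above) gives

$\frac{d}{dt}\int_{0}^{1}fg\,dx=\frac{1}{2}\int_{0}^{1}f_{xx}g\,dx=\frac{1}{2}\left[f_{x}g-fg_{x}\right]_{0}^{1}+\frac{1}{2}\int_{0}^{1}fg_{xx}\,dx,$

and, using $f(0,t)=rf(1,t)$ and $f_{x}(0,t)=f_{x}(1,t)$, the boundary term (up to the factor $\frac{1}{2}$) equals

$f_{x}(1)\left(g(1)-g(0)\right)-f(1)\left(g_{x}(1)-rg_{x}(0)\right),$

which vanishes for all such $f$ precisely when $g(1)=g(0)$ and $g_{x}(1)=rg_{x}(0)$, i.e., when $g\in\mathcal{F}(r)$.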
###### Definition 3 (Weak Solution).
Let $f_{0}\in M([0,1])$ such that $f_{0}(\\{0\\})=rf_{0}(\\{1\\})$. We say
that $\mu\in C^{w}(0,T;M([0,1]))$ is a weak solution to (2) for $t\in[0,T)$ if
$\mu_{0}=f_{0}$ and for all $g\in\mathcal{F}(r)$, $\mu_{t}(g)$ is
differentiable for $t>0$ with
$\frac{d}{dt}\mu_{t}(g)=\frac{1}{2}\mu_{t}(g_{xx}).$ (5)
We can now precisely state the well-posedness of (2).
###### Theorem 1 (Well-Posedness).
Assume our initial condition $f_{0}$ lies in $M([0,1])$ and satisfies
$f_{0}(\\{0\\})=rf_{0}(\\{1\\})$. Then there exists a unique weak solution to
(2) for $t\in[0,T)$ for any $T\in(0,\infty]$, which we denote by $f(\cdot,t)$.
For $t>0$ this weak solution is a function that is smooth in $t$ and real
analytic as a function of $x$. Furthermore, the solution has the following
properties which generalize the classical periodic case of $r=1$:
1. 1.
If $f_{0}\in C([0,1])$ (the space of continuous functions on $[0,1]$), then
for any $x\in(0,1)$, $f(x,t)$ converges to $f_{0}(x)$ as $t\downarrow 0$. If
$f_{0}(0)=rf_{0}(1)$ then $f(\cdot,t)$ converges to $f_{0}$ as $t\downarrow 0$
uniformly over the whole closed interval $[0,1]$.
2. 2.
If $1\leq p<\infty$ and $f_{0}\in L^{p}([0,1])$, then $f$ is the unique weak
solution in $C(0,T;L^{p}([0,1]))$ and $f(\cdot,t)$ converges to $f_{0}$ as
$t\downarrow 0$ in $L^{p}([0,1])$.
###### Proof.
See Appendix A.2. ∎
The system (2) is a natural candidate for density estimation with such a
linked boundary condition. Whilst Theorem 1 is expected and analogous to the
$r=1$ case, due to the non-self-adjoint boundary conditions, it is not
immediately obvious what properties solutions of (2) have. For example, one
question is whether or not the solution is a probability density for $t>0$,
and what its asymptotic properties are. Moreover, we would like to be able to
write down an explicit solution formula (and ultimately use this to
numerically compute the solution), a formal derivation of which is given in
Appendix A.1 using the unified transform.
### 2.1 Solution formula and consistency of KDE at boundaries
If we ignore the constant $r$ in the boundary conditions of (2) (and replace
it by the special case $r=1$), then we would have the simple diffusion
equation with periodic boundary conditions. One can successfully apply Fourier
methods, separation-of-variables or Sturm–Liouville theory to solve the
periodic version of this PDE [24, 30]. However, when $r\neq 1$, making the
ansatz that a solution is of the ‘rank one’, separable form $f(x,t)=g(x)h(t)$
leads to a non-complete set of functions and separation of variables fails.
The differential operator associated with the evolution equation in (2) is
regular in the sense of Birkhoff [3] but not self-adjoint when $r\neq 1$, due
to the boundary conditions. Nevertheless, it is possible to generalize the
notion of eigenfunctions of the differential operator [8] and these
generalized eigenfunctions form a complete system in $L^{2}([0,1])$ [49, 40]
(and in fact form a Riesz basis). This allows us to obtain a series expansion
of the solution. The easiest way to derive this is through the unified
transform, which also generates a useful integral representation.
###### Theorem 2 (Integral and Series Representations of Diffusion
Estimator).
Suppose that the conditions of Theorem 1 hold. Then the unique solution of
(2) has the following representations for $t>0$.
Integral representation:
$\begin{split}&2\pi
f(x,t)=\int_{-\infty}^{\infty}{\exp(ikx-k^{2}t/2)}\hat{f}_{0}(k)dk\\\
&-\textstyle\int_{\partial
D^{+}}\frac{\exp(ikx-k^{2}t/2)}{{\Upsilon(k)}}\left\\{\hat{f}_{0}(k)[(1+r)\exp(ik)-2r]+\hat{f}_{0}(-k)(1-r)\exp(-ik)\right\\}dk\\\
&-\textstyle\int_{\partial
D^{-}}\frac{\exp(ik(x-1)-k^{2}t/2)}{\Upsilon(k)}\left\\{\hat{f}_{0}(k)[2\exp(ik)-(1+r)]+\hat{f}_{0}(-k)(1-r)\right\\}dk.\end{split}$
(6)
Here the contours $\partial D^{\pm}$ are shown in Figure 8 and are
deformations of the boundaries of
$D^{\pm}=\\{k\in\mathbb{C}^{\pm}:\mathrm{Re}(k^{2})<0\\}$. The determinant
function is given by $\Upsilon(k)=2(1+r)(\cos(k)-1)$ and
$\hat{f}_{0}(k):=\int_{0}^{1}\exp(-ikx)f_{0}(x)dx.$
Series representation:
$\begin{split}f(x,t)=&\frac{2}{(1+r)}\hat{c}_{0}(0)\phi_{0}(x)\\\
&+\sum_{n\in\mathbb{N}}\frac{4\exp(-k_{n}^{2}t/2)}{(1+r)}\big{\\{}\hat{c}_{0}(k_{n})\phi_{n}(x)-k_{n}t(1-r)\hat{c}_{0}(k_{n})\sin(k_{n}x)\\\
&\quad\quad\quad\quad\quad+[\hat{s}_{0}(k_{n})-(1-r)\hat{s}_{1}(k_{n})]\sin(k_{n}x)\big{\\}},\end{split}$
(7)
where $k_{n}=2n\pi$ and
$\displaystyle\phi_{n}(x)=\left(r+(1-r)x\right)\cos(k_{n}x),$
$\displaystyle\hat{s}_{0}(k)=\int_{0}^{1}\sin(kx)f_{0}(x)dx,$
$\displaystyle\hat{c}_{0}(k)=\int_{0}^{1}\cos(kx)f_{0}(x)dx,$
$\displaystyle\hat{s}_{1}(k)=\int_{0}^{1}\sin(kx)xf_{0}(x)dx.$
###### Proof.
See Appendix A.2. ∎
In the case where $r\neq 1$, in addition to the usual separable solutions
$\exp(ik_{n}x-k_{n}^{2}t/2)$, the series expansion also includes the non-separable solutions $\exp(ik_{n}x-k_{n}^{2}t/2)(x+ik_{n}t)$. We can understand
these as being generalized eigenfunctions in the following sense (see the
early papers [41, 65]). Define the operator
$\mathbb{A}=-\frac{d^{2}}{dx^{2}},\quad\mathcal{D}(\mathbb{A})=\\{u\in
H^{2}([0,1]):u(0)=ru(1),u_{x}(0)=u_{x}(1)\\},$ (8)
where $\mathcal{D}(\mathbb{A})$ denotes the domain of $\mathbb{A}$. We use
$\mathcal{N}$ to denote the null space, which is often termed the
kernel, of an operator, i.e. $\mathcal{N}(S)$ is the space of all vectors $v$
with $S(v)=0$. It is then easily checked that
$\phi_{n}\in\mathcal{N}((\mathbb{A}-k_{n}^{2}I)^{2})$. In particular, both
$\phi_{n}$ and $(\mathbb{A}-k_{n}^{2}I)\phi_{n}$ satisfy the required boundary
conditions. These functions block diagonalize the operator in an analogous
form to the Jordan normal form for finite matrices. If we consider any
generalized eigenspace $\mathcal{N}((\mathbb{A}-k_{n}^{2}I)^{2})$
corresponding to $k_{n}^{2}=4\pi^{2}n^{2}$ with $n>0$ and choose the basis
$\\{\sin(k_{n}x),\phi_{n}(x)/(2k_{n})\\}$, the operator acts on this subspace
as the matrix
$\begin{pmatrix}k_{n}^{2}&1-r\\\ 0&k_{n}^{2}\end{pmatrix},$
which cannot be diagonalized for $r\neq 1$.
For our purposes of kernel density estimation, we define an integral kernel
$K$ so that we can write the solution as
$f(x,t)=\int_{0}^{1}K(r;x,y,t)f_{0}(y)dy.$
After some residue calculus (see (34) in the Appendix), this is given by the
somewhat complicated expression:
$\begin{split}K(r;x,y,t)&=\sum_{n\in\mathbb{Z}}{\exp(ik_{n}x-k_{n}^{2}t/2)}\Big{[}\exp(-ik_{n}y)+\frac{1-r}{1+r}(x+ik_{n}t)\exp(-ik_{n}y)\\\
&+\frac{1-r}{1+r}(x+ik_{n}t-1)\exp(ik_{n}y)+\frac{1-r}{1+r}y(\exp(ik_{n}y)-\exp(-ik_{n}y))\Big{]},\end{split}$
(9)
which can be re-expressed in terms of the more common $r=1$ kernel and its
derivative, as in (40). For the initial data (3) this gives the density
estimate
$f(x,t)=\frac{1}{n}\sum_{k=1}^{n}K(r;x,X_{k},t),$
a generalization of (1). A key consequence of the solution from Theorem 2 is
that the pointwise bias of the corresponding diffusion estimator vanishes if
$f_{X}$ is continuous with $f_{X}(0)=rf_{X}(1)$. Namely, we have the
following.
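For numerical evaluation, the series (9) can simply be truncated; the following Python sketch (our own illustration; the article's recommended numerics are discussed later and in [15]) sums the modes $|n|\le N$ and takes the real part:

```python
import numpy as np

def linked_kernel(r, x, y, t, N=50):
    """Truncated evaluation of the linked-boundary kernel K(r; x, y, t) in (9),
    summing the modes k_n = 2*pi*n for n = -N..N. Intended for t > 0, where
    the Gaussian factor makes the series converge rapidly."""
    c = (1 - r) / (1 + r)
    K = 0.0
    for n in range(-N, N + 1):
        k = 2 * np.pi * n
        K += np.exp(1j * k * x - k**2 * t / 2) * (
            np.exp(-1j * k * y)
            + c * (x + 1j * k * t) * np.exp(-1j * k * y)
            + c * (x + 1j * k * t - 1) * np.exp(1j * k * y)
            + c * y * (np.exp(1j * k * y) - np.exp(-1j * k * y)))
    return K.real

def f_hat(r, x, data, t):
    """Density estimate f(x, t) = (1/n) * sum_k K(r; x, X_k, t)."""
    return np.mean([linked_kernel(r, x, Xk, t) for Xk in data])
```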
###### Theorem 3 (Consistency of Estimator at Boundaries).
Suppose that the initial data is given by (3) and that $f_{X}\in C([0,1])$
with $f_{X}(0)=rf_{X}(1)$. Then the solution of the PDE (2) satisfies
$\lim_{t\downarrow 0}\mathbb{E}_{f_{X}}(f(x,t))=f_{X}(x),$ (10)
uniformly in $x$. Further, if in addition $f_{X}\in C^{1}([0,1])$ and
$x_{t}=x+\mathcal{O}(\sqrt{t})$, then our estimator satisfies
$\left|\mathbb{E}_{f_{X}}(f(x_{t},t))-f_{X}(x)\right|\leq C(f_{X})\sqrt{t},$
with $C(f_{X})$ a constant independent of $x\in[0,1]$, but dependent on the
true $f_{X}$.
###### Proof.
See Appendix A.3. ∎
###### Remark 1.
For consistency in $L^{p}$ spaces, we refer the reader to Proposition 3 in
Appendix A.3.
### 2.2 Conservation of probability and non-negativity
In addition to establishing that the behavior of the PDE solution near the
boundaries is satisfactory, we also want the PDE solution to be a proper bona
fide probability density — a non-negative function integrating to unity. The
main tool in the proof of this is the Maximum Principle [24, 30] for parabolic
PDEs. The Maximum Principle states that a solution of the diffusion equation
attains a maximum on the ‘boundary’ of the two-dimensional region in space
$x\in[0,1]$ and time $t\geq 0$. If our initial condition is given by a
continuous function, then the maximum principle gives the following.
###### Proposition 1 (Bounds on Diffusion Estimator).
Suppose that the conditions of Theorem 1 hold and that $f_{0}$ is a continuous
function with $f_{0}(0)=rf_{0}(1)$ and non-negative with $0\leq a\leq
f_{0}(x)\leq b$ for all $x\in[0,1]$. Then for any $t>0$ and $x\in[0,1]$ we
have
$\textstyle\min\left\\{\frac{2r}{1+r},\frac{2}{1+r}\right\\}a\leq
f(x,t)\leq\max\left\\{\frac{2r}{1+r},\frac{2}{1+r}\right\\}b.$ (11)
In particular, $f$ remains bounded away from $0$ if $a>0$ and $r>0$.
###### Proof.
See Appendix A.4. ∎
However, we also want this to hold when $f_{0}$ is given by (3). Furthermore,
if we start with a probability measure as our initial condition, then we want
the solution to be the density function of a probability distribution for any
$t>0$. In the context of density estimation, this essential property
corresponds to conservation of probability. This is made precise in the
following theorem, which does not require continuous initial data.
###### Theorem 4 (A Bona Fide Kernel Density Estimator).
Suppose that the conditions of Theorem 1 hold and that the initial condition
$f_{0}$ is a probability measure. Then,
1. 1.
$\int_{0}^{1}f(x,t)dx=1$, for $t>0$,
2. 2.
$f(x,t)\geq 0$ for $t>0$ and $x\in[0,1]$.
###### Proof.
See Appendix A.5. ∎
From the solution formula (7), we can also characterize the behavior of the
solution for large bandwidths (large $t$), that is, when the estimator
oversmooths the data. An example of this is given in Figure 2.
###### Corollary 1 (Oversmoothing Behavior with Large Bandwidth).
Suppose that the conditions of Theorem 1 hold, then as $t\rightarrow\infty$,
$f$ converges uniformly on $[0,1]$ to the linear function
$\textstyle f_{\infty}(x):=\frac{2}{(1+r)}\hat{c}_{0}(0)\phi_{0}(x).$ (12)
This linear function is the unique stationary function that obeys the boundary
conditions and has the same integral over $[0,1]$ as $f_{0}$.
Figure 2: An example of the solution of the continuous PDE (2) at three time
points, with $f_{0}(x)=\frac{6}{11}(-2x^{2}+x+2)$. The values at the
boundaries change with time, but the ratio remains a constant with
$f(0,t)=2f(1,t)$. As $t\rightarrow\infty$, the solution converges to a
straight line.
## 3 Asymptotic Properties and Bandwidth Choice
An important issue in kernel density estimation is how to choose the bandwidth
parameter or, equivalently, the final or stopping time $T$ at which we compute
the solution of the PDE. This issue has already received extensive attention
in the literature [34, 57, 19, 35]. We now give a brief summary of that issue
and point to two known methods that are already available. After that, we
address the issue specifically in the context of our linked boundaries model.
At one extreme, if we choose $T=0$, then we recover the initial condition,
which is precisely the raw data: an estimator with zero bias but infinite
variance. At the other extreme, if we let $T\rightarrow\infty$, then we obtain
a stationary density that is a straight line (see Corollary 1), which contains
no information whatsoever about the raw data (other than the empirical mean),
giving an estimator of zero variance but significant bias. For intermediate
times $0<T<\infty$, we have some smoothing effect while also retaining some
information from the original data, and the goal is an optimal balance between
the variance and the bias of the estimator.
One would also like a consistent estimator — as more and more data are
included, it must converge to the true density (for instance, in the mean
squared sense). Various proposals for the stopping times and their properties
are available. One of the most common choices is ‘Silverman’s rule of thumb’
[62], which works very well when the data is close to being normally
distributed. We expect that this choice is fine for the simpler datasets and
examples that we consider in this article. Another possible approach is to use
the output from the freely available software of one of the authors:
https://au.mathworks.com/matlabcentral/fileexchange/14034-kernel-density-
estimator. This is expected to be a better choice than Silverman’s rule in
situations where there are many widely separated peaks in the data. In
particular, [5] introduced a non-parametric selection method that avoids the
so-called _normal reference rules_ that may adversely affect plug-in
estimators of the bandwidth.
We now give a more precise treatment of the choice of smoothing bandwidth for
the linked boundaries model, as well as discussing the Mean Integrated Squared
Error (MISE) defined by
$\displaystyle\mathrm{MISE}\{f\}(t)$
$\displaystyle=\mathbb{E}_{f_{X}}\left\{\int_{0}^{1}[f(x,t)-f_{X}(x)]^{2}dx\right\}$
(13)
$\displaystyle=\int_{0}^{1}\{\mathbb{E}_{f_{X}}[f(x,t)]-f_{X}(x)\}^{2}dx+\int_{0}^{1}\mathrm{Var}_{f_{X}}[f(x,t)]dx.$
(14)
Often one is interested in the asymptotic approximation to the MISE, denoted
AMISE, under the requirements that $t=t_{n}\downarrow 0$ and
$n\sqrt{t_{n}}\rightarrow\infty$, which ensure consistency of the estimator.
The asymptotically optimal bandwidth is then the minimizer of the AMISE. For
our continuous model of kernel density estimation we have the following result
(proven in Appendix B) which gives the same $\mathcal{O}(n^{-4/5})$ rate of
convergence as the Gaussian KDE on the whole real line.
###### Theorem 5 (Asymptotic Bias and Variance of Diffusion Estimator).
Let $t_{n}$ be such that $\lim_{n\rightarrow\infty}t_{n}=0$ and
$\lim_{n\rightarrow\infty}n\sqrt{t_{n}}=\infty$ and suppose that $f_{X}\in
C^{2}([0,1])$ (twice continuously differentiable) with $f_{X}(0)=rf_{X}(1)$.
Then the following hold as $n\rightarrow\infty$:
1. The integrated variance has the asymptotic behavior
$\int_{0}^{1}\mathrm{Var}_{f_{X}}[f(x,t_{n})]dx\sim\frac{1}{2n\sqrt{\pi t_{n}}}.$ (15)
2. If $f_{X}^{\prime}(0)=f_{X}^{\prime}(1)$ then the integrated squared bias is
$\int_{0}^{1}\left\{\mathbb{E}_{f_{X}}[f(x,t_{n})]-f_{X}(x)\right\}^{2}dx\sim t_{n}^{2}\int_{0}^{1}\frac{1}{4}\left[f_{X}^{\prime\prime}(x)\right]^{2}dx.$ (16)
3. If $f_{X}^{\prime}(0)\neq f_{X}^{\prime}(1)$ then the integrated squared bias is
$\int_{0}^{1}\{\mathbb{E}_{f_{X}}[f(x,t_{n})]-f_{X}(x)\}^{2}dx\sim t_{n}^{3/2}\frac{4-2\sqrt{2}}{3\sqrt{\pi}}\frac{r^{2}+1}{(1+r)^{2}}[f_{X}^{\prime}(1)-f_{X}^{\prime}(0)]^{2}.$ (17)
###### Proof.
See Appendix B. ∎
A direct consequence of this result is that we can select the stopping time
$t$ or bandwidth to minimize the AMISE.
###### Corollary 2 (Asymptotically Optimal Bandwidth Choices).
Combining the leading order bias and variance terms gives the asymptotic
approximation to the MISE:
1. If $f_{X}^{\prime}(1)=f_{X}^{\prime}(0)$ then
$\mathrm{AMISE}\{f\}(t_{n})=\frac{1}{2n\sqrt{\pi t_{n}}}+t_{n}^{2}\int_{0}^{1}\frac{1}{4}\left[f_{X}^{\prime\prime}(x)\right]^{2}dx.$ (18)
Hence, the square of the asymptotically optimal bandwidth is
$t^{*}=(2n\sqrt{\pi}\|f_{X}^{\prime\prime}\|_{L^{2}}^{2})^{-2/5}$
with the minimum value
$\min_{t}\mathrm{AMISE}\{f\}(t)=\frac{5\|f_{X}^{\prime\prime}\|_{L^{2}}^{2/5}}{2^{14/5}\pi^{2/5}}n^{-4/5}.$
2. If $f_{X}^{\prime}(1)\neq f_{X}^{\prime}(0)$ then, writing $A(r):=\frac{(4-2\sqrt{2})(r^{2}+1)}{\sqrt{\pi}(1+r)^{2}}$,
$\begin{split}\mathrm{AMISE}\{f\}(t_{n})&=\frac{1}{2n\sqrt{\pi t_{n}}}+t_{n}^{3/2}\frac{4-2\sqrt{2}}{3\sqrt{\pi}}\frac{r^{2}+1}{(1+r)^{2}}[f_{X}^{\prime}(1)-f_{X}^{\prime}(0)]^{2}\\
&=\frac{1}{2n\sqrt{\pi t_{n}}}+t_{n}^{3/2}\frac{A(r)}{3}\left[f_{X}^{\prime}(1)-f_{X}^{\prime}(0)\right]^{2}.\end{split}$ (19)
Hence, the square of the asymptotically optimal bandwidth is
$t^{*}=(2n\sqrt{\pi}A(r))^{-1/2}\left|f_{X}^{\prime}(1)-f_{X}^{\prime}(0)\right|^{-1}$
with the minimum value
$\min_{t}\mathrm{AMISE}\{f\}(t)=\frac{2^{5/4}\sqrt{\left|f_{X}^{\prime}(1)-f_{X}^{\prime}(0)\right|}}{3\pi^{3/8}}A(r)^{1/4}n^{-3/4}.$
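For the reader's convenience, here is the short calculus check (ours) behind case 1: writing $c:=\|f_{X}^{\prime\prime}\|_{L^{2}}^{2}$ and setting the $t$-derivative of (18) to zero gives
$-\frac{1}{4n\sqrt{\pi}}t^{-3/2}+\frac{c}{2}\,t=0\quad\Longrightarrow\quad t^{5/2}=\frac{1}{2n\sqrt{\pi}\,c}\quad\Longrightarrow\quad t^{*}=\big(2n\sqrt{\pi}\,c\big)^{-2/5},$
and substituting $t^{*}$ back into (18) yields the stated minimum; case 2 follows in the same way.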
A few remarks are in order. First, it is interesting to note that in the case
of $f_{X}^{\prime}(1)=f_{X}^{\prime}(0)$, the optimum choice $t^{*}$ and the
minimum AMISE do not depend on $r$, and are the same as the more familiar
‘whole line’ situation — in other words, we can confidently use existing
methods in the literature (such as recommended above) to choose a stopping
time. Second, it seems plausible that we could estimate
$f_{X}^{\prime}(1)-f_{X}^{\prime}(0)$ (or the value of $r$) adaptively and
change the boundary conditions in the model (2) accordingly. A full discussion
of solving the heat equation with linked boundary conditions for the first
spatial derivative is beyond the scope of this paper but can be done using the
same methods we present here. Future work will aim to incorporate an adaptive
estimate of the true boundary conditions (both for the density function and
its first derivative; we do this for the density function in §5.1) and the
resulting adaptive boundary conditions. We mention a result in this direction
which will appear when we compare our model to that of [5], whose code is
based around the discrete cosine transform, the continuous version of which
solves the heat equation subject to the boundary conditions
$f^{\prime}_{c}(0)=f_{c}^{\prime}(1)=0.$ We have used the subscript $c$ to
avoid confusion with our solution $f$ to (2). The analogue of Theorem 5 is the
following theorem, which can be proven using the same techniques; we therefore
omit the proof. One can then derive the optimal choice of $t$ and the minimum
AMISE, which is $\mathcal{O}(n^{-3/4})$ (a slower rate) whenever
$(f_{X}^{\prime}(1),f_{X}^{\prime}(0))\neq(0,0)$.
###### Theorem 6 (Boundary Effects on Asymptotic Bias).
Let $t_{n}$ be such that $\lim_{n\rightarrow\infty}t_{n}=0$ and also
$\lim_{n\rightarrow\infty}n\sqrt{t_{n}}=\infty$. Suppose that $f_{X}\in
C^{2}([0,1])$. Then the following hold as $n\rightarrow\infty$:
1. The integrated variance has the asymptotic behavior
$\int_{0}^{1}\mathrm{Var}_{f_{X}}[f_{c}(x,t_{n})]dx\sim\frac{1}{2n\sqrt{\pi t_{n}}}.$ (20)
2. If $f_{X}^{\prime}(0)=f_{X}^{\prime}(1)=0$ then
$\int_{0}^{1}\{\mathbb{E}_{f_{X}}[f_{c}(x,t_{n})]-f_{X}(x)\}^{2}dx\sim t_{n}^{2}\int_{0}^{1}\frac{1}{4}\left[f_{X}^{\prime\prime}(x)\right]^{2}dx.$ (21)
3. If $f_{X}^{\prime}(0)\neq 0$ or $f_{X}^{\prime}(1)\neq 0$ then
$\int_{0}^{1}\{\mathbb{E}_{f_{X}}[f_{c}(x,t_{n})]-f_{X}(x)\}^{2}dx\sim t_{n}^{3/2}\frac{4-2\sqrt{2}}{3\sqrt{\pi}}[f_{X}^{\prime}(1)^{2}+f_{X}^{\prime}(0)^{2}].$ (22)
## 4 Numerical Approximations of the PDE Estimator
Before giving numerical examples with the new estimator, we consider practical
methods for solving the PDE (2) in order to evaluate the KDE
$f(x,t)=\frac{1}{n}\sum_{k=1}^{n}K(r;x,X_{k},t)$ on a regular grid. There are
two practical computational methods to compute the density estimator based on
the PDE (2):
1. Series Expansion: essentially solving the continuous model (2) via the series or contour integral representation in Theorem 2.
2. Backward Euler method: solving a discretized or binned version of (2), as explained in the rest of this section. In Theorem 7, we show that this binned estimator converges to the continuous PDE estimator.
The two methods have complementary advantages and disadvantages. The backward
Euler method is only first-order accurate in time (though, as argued below,
this is not a problem in practice), but it is simple and easy to use,
especially if the initial data is already discretely binned. The backward
Euler method also maintains the key property of positivity and satisfies the
same maximum principle properties as the continuous solution (see Appendix C
and Lemma 3). The reason for not using second order methods such as
Crank–Nicolson is that for large time steps this would not preserve non-
negativity of the solution. In other words, the discrete solution can no
longer be interpreted as a probability distribution (a well-known result says
that any general linear method that is unconditionally positivity preserving
for all positive ODEs must have order $\leq 1$ [4]). However, methods such as
Crank–Nicolson can also easily be used for the discrete model if desired, but
for brevity we do not discuss such methods further. The series expansion of
the continuous PDE model is typically highly accurate for $t>0$, but less easy
to implement. We provide MATLAB codes for both methods:
https://github.com/MColbrook/Kernel-Density-Estimation-with-Linked-Boundary-
Conditions.
To derive the appropriate time-stepping method, we do the following:
1. We approximate the exact solution $f$ by a vector $\bm{u}$. That is, $u(x_{i};\cdot)\approx f(x_{i},\cdot)$. Here $x_{i}=ih$ is the $i$th grid point on the grid of $m+2$ equally spaced points in the domain $[0,1]$, for $i=0,1,\ldots,m,m+1$. The spacing between two consecutive grid points is $h=\frac{1}{m+1}$. Note here that $m$ is typically smaller than $n$, the number of samples that form the empirical measure.
2. The two boundary conditions in (2) give two equations involving values at the two boundary nodes, i.e. at node $0$ and at node $m+1$. That is,
$\displaystyle u_{0}$ $\displaystyle=$ $\displaystyle ru_{m+1},$ (23)
$\displaystyle u_{1}-u_{0}$ $\displaystyle=$ $\displaystyle u_{m+1}-u_{m}.$ (24)
Solving this pair motivates the following definitions for the boundary nodes:
$u_{0}\stackrel{\mathrm{def}}{=}\frac{r}{r+1}(u_{1}+u_{m}),\qquad u_{m+1}\stackrel{\mathrm{def}}{=}\frac{1}{r+1}(u_{1}+u_{m}).$ (25)
We are left with a set of $m$ equations involving the $m$ unknown values $u_{1},\ldots,u_{m}$ at the $m$ interior nodes $1,\ldots,m$, where we use a standard second-order finite difference approximation of the (spatial) second derivative.
3. We consider the corresponding $m\times m$ _four-corners matrix_ with the following structure:
$\mathbf{A}=\begin{pmatrix}2-\frac{r}{r+1}&-1&&&-\frac{r}{r+1}\\ -1&2&-1&&\\ &\ddots&\ddots&\ddots&\\ &&-1&2&-1\\ -\frac{1}{r+1}&&&-1&2-\frac{1}{r+1}\end{pmatrix}.$ (26)
Given a time $T$ at which we wish to evaluate the solution, we consider a time
step $\Delta t=2h^{2}$. For ease of the analysis, we assume that $T$ is a
multiple of $\Delta t$, though this can be avoided by making the last time
step smaller if needed. Using a superscript $k$ to denote the solution at time
$k\Delta t$ (i.e. the $k$th step), the backward Euler method can be written as
$\bm{u}^{k+1}=\left(\mathbf{I}+\mathbf{A}\right)^{-1}\bm{u}^{k},\quad
k=0,\ldots,T/\Delta t-1,$ (27)
where $\mathbf{I}$ denotes the $m\times m$ identity matrix. The matrix inverse
can be applied in $\mathcal{O}(m)$ operations using the fact that $\mathbf{A}$
is a rank-one perturbation of a tridiagonal matrix. Even though we take small
time steps, the total time $T=\mathcal{O}(n^{-2/5})$ is small, so only
$T/\Delta t=\mathcal{O}(m^{2}n^{-2/5})$ steps are needed. It follows that the
total complexity is $\mathcal{O}(m^{3}n^{-2/5})$, giving an error (in the
interior) of order $\mathcal{O}(\Delta t)=\mathcal{O}(h^{2})=\mathcal{O}(m^{-2})$. The error of the
continuous model scales as $\mathcal{O}(n^{-2/5})$. If there is freedom in
selecting the number of bins $m+2$, this suggests choosing
$m=\mathcal{O}(n^{1/5})$ which leads to a modest
$\mathcal{O}(n^{1/5})=\mathcal{O}(m)$ complexity. A key property of the matrix
(26) is that each of its columns sums to zero, its off-diagonal entries are
non-positive, and its main diagonal entries are positive. This allows us to
interpret (27) as a discrete-time Markov process.
theorem for completeness (using explicit formulae for the eigenvalues and
eigenvectors of $\mathbf{A}$).
###### Theorem 7 (Convergence of Binned to Diffusion Estimator).
The solution of the binned estimator (27) with the four corner matrix in (26)
converges to the solution of the continuous problem (2) as
$m\rightarrow\infty$:
$\sup_{\epsilon\leq t\leq T}\sup_{0\leq k\leq m+1}|u(k/(m+1);t)-f(k/(m+1),t)|\rightarrow 0,\qquad m\rightarrow\infty.$
###### Proof.
See Appendix C. ∎
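For concreteness, steps 1–3 and the iteration (27) amount to only a few lines of code. The following MATLAB sketch is our own illustrative implementation (it assumes $m\geq 3$ and that $T$ is close to a multiple of $\Delta t$), not the released code:

```matlab
% Backward Euler for the binned estimator (27) with the four-corners matrix (26).
% u0: m-vector of binned data at the interior nodes; r: boundary ratio; T: stopping time.
function u = linked_backward_euler(u0, r, T)
    u0 = u0(:);  m = numel(u0);
    h  = 1/(m+1);
    dt = 2*h^2;                                  % time step, so that (27) reads u = (I+A)\u
    e  = ones(m,1);
    A  = spdiags([-e 2*e -e], -1:1, m, m);       % interior second-difference part
    A(1,1) = 2 - r/(r+1);   A(1,m) = -r/(r+1);   % corner corrections from (25)
    A(m,m) = 2 - 1/(r+1);   A(m,1) = -1/(r+1);
    B  = speye(m) + A;
    u  = u0;
    for k = 1:round(T/dt)
        u = B \ u;   % zero column sums of A mean sum(u) is conserved (cf. Theorem 4)
    end
end
```

Since $\mathbf{A}$ is a rank-one perturbation of a tridiagonal matrix, each solve can be done in $\mathcal{O}(m)$ operations (e.g. via the Sherman–Morrison formula); above we simply use backslash for brevity.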
Further interesting properties of the discrete system are discussed in
Appendix C. In Theorem 7, we have restricted $t\geq\epsilon>0$ to allow for
the possibility that the initial condition may not be a proper function, but
an empirical measure. We finally remark that sometimes the solution is needed at
later times (e.g. $\mathcal{O}(1)$), for example when querying the solution at
various times $t$ as part of minimizing least squares cross validation to
determine a good choice of $T$. In that case, we recommend computing the
matrix exponential
$\textstyle\bm{u}(t)=\exp\left(-\frac{t}{2h^{2}}\mathbf{A}\right)\bm{u}(0).$
There are many possible methods to compute the matrix exponential [48], such
as MATLAB’s expm code based on [31, 2].
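A minimal sketch of this alternative (with $\mathbf{A}$, `u0` and the grid spacing `h` constructed as in the backward Euler sketch above, here at script level; the query time `t` is illustrative):

```matlab
% One-shot evaluation at a later time t via the matrix exponential,
% following the formula above (expm is MATLAB's built-in).
t = 1;
u = expm(-(t/(2*h^2)) * full(A)) * u0(:);
```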
## 5 Numerical Experiments
### 5.1 Numerical examples with synthetic data
First, we test the estimator on examples where the true density $f_{X}$ is
known. We begin with the trimodal distribution shown in Figure 3. We will
demonstrate two versions of the method: first, where the exact value of $r$ is
known (labelled “Linked 1”), and second, where we estimate the value of $r$ by
$r_{\mathrm{est}}=\frac{\sum_{j=1}^{n}\chi_{<n^{-1/2}}(X_{j})}{\sum_{j=1}^{n}\chi_{>1-n^{-1/2}}(X_{j})}$
(labelled “Linked 2”). We expect both to perform similarly for sufficiently
large $n$. For stopping times, we have used the software that adaptively
chooses the bandwidth, discussed in Section 3. In other words, we do not give
our algorithms any information other than the given sample. We compare with
three other methods. The first is the density estimation proposed in [5] based
on the discrete cosine transform (labelled “Cosine”). The second is the well-
known and arguably state-of-the-art beta kernel method of [6], which we label
“Beta” in the plots. This method is free from boundary bias, at the cost of an
increased boundary variance. Finally, we also compare with a method which uses
copula kernels [33] and which has been found to be competitive with the beta
kernel approach of [6]. This method has an automatic bandwidth selector which
we shall use, and we label it “Copula” in the plots. The latter two methods
are freely available in the R package evmix [32] which can be found at
https://CRAN.R-project.org/package=evmix.
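For reference, the boundary-ratio estimate $r_{\mathrm{est}}$ used by “Linked 2” is a one-liner; a sketch (the sample `X` here is a placeholder we generate for illustration):

```matlab
% Illustrative computation of r_est: the ratio of the counts of samples
% within n^{-1/2} of the two boundaries.
X = rand(1e4, 1);                  % X would be the given sample in [0,1]
n = numel(X);
r_est = sum(X < n^(-1/2)) / sum(X > 1 - n^(-1/2));
```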
We estimate the error using the $L^{2}$ and $L^{\infty}$ norms at the points
$l\times 10^{-3}$ for $l=0,...,10^{3}$. The only change is when considering
the copula method, where we take $l=1,...,10^{3}-1$ instead since we found
this method to be unstable near the boundaries. Figure 3 shows a typical
approximation of the distribution function using our proposed method and the
other methods for a sample size of $n=10^{4}$. Our proposed method is more
accurate near the boundaries of the domain (see magnified section of plots)
and behaves similarly in the middle of the domain. We found that using the
estimate $r_{\mathrm{est}}$ instead of the exact value of $r$ did not have a
great effect on the error. In other words, we can apply our model without
needing to know the value of $r$.
Figure 4 (left) shows the $L^{2}$ measure of error averaged over $100$
independent samples for each $n$. The $L^{2}$ errors for both “Linked” methods
and the “Cosine” method agreed almost perfectly with the minimum AMISE and the
analysis in Section 3 for large $n$. Using our model with an estimate of $r$
increases the convergence rate from $\mathcal{O}(n^{-3/4})$ to
$\mathcal{O}(n^{-4/5})$. Both “Linked” methods and the “Cosine” method are
found to be more accurate than the “Beta” and “Copula” methods. The tailing-
off convergence for the “Copula” method was due to a need to implement a lower
bound for the bandwidth. Below this limit, we found the “Copula” method to be
unstable. Figure 4 (right) shows the same plot but now for the $L^{\infty}$
measure of error. Here we see a more pronounced difference between the
methods, with both “Linked” methods producing much smaller errors than the
other methods. We found the same behavior in these plots for a range of other
tested distributions. Finally, we comment on the CPU times for each method,
shown in Figure 5 (averaged over the 100 samples for each $n$). In order to
produce a fair comparison, we have included the CPU time taken for automatic
bandwidth selection when using the “Linked” methods. All methods appear to
have CPU times that grow linearly with $n$. The “Cosine” method in fact scales
like $\mathcal{O}(n\log(n))$ due to the use of the discrete cosine transform.
The linked estimator is faster by about an order of magnitude than the other
methods. This is due to the exponential decay of the series for $t>0$: only
a small number of terms need to be summed in order to get very accurate
results.
Figure 3: Example of different methods for a sample size $n=10^{4}$. The
proposed diffusion model (“Linked”) is much more accurate near the boundaries
than the cosine model (“Cosine”) as highlighted by the magnified sections. The
method “Copula” is found to be unstable near the boundaries.
Figure 4: Left: $L^{2}$ errors of methods averaged over $100$ samples for each
$n$. Right: $L^{\infty}$ errors of methods averaged over $100$ samples for
each $n$. The $L^{2}$ errors agree well with the minimum AMISE from Section 3,
whereas the increased accuracy gained near the boundary by using the linked
boundary model is highlighted by the $L^{\infty}$ errors.
Figure 5: CPU times for each method averaged over $100$ samples for each $n$.
Experiments were performed on a basic four-year-old laptop. Each method
appears to grow almost linearly (up to logarithmic factors), with the linked
boundary estimator an order of magnitude faster than the other methods.
$a$ | $1.1$ | $1.2$ | $1.3$ | $1.4$ | $1.5$
---|---|---|---|---|---
Linked | $2.98\times 10^{-3}$ | $1.33\times 10^{-3}$ | $6.82\times 10^{-4}$ | $3.22\times 10^{-4}$ | $2.38\times 10^{-4}$
LC | $1.05\times 10^{-3}$ | $1.14\times 10^{-3}$ | $1.26\times 10^{-3}$ | $1.38\times 10^{-3}$ | $1.52\times 10^{-3}$
LCS | $1.23\times 10^{-3}$ | $1.03\times 10^{-3}$ | $9.42\times 10^{-4}$ | $1.04\times 10^{-3}$ | $1.19\times 10^{-3}$
$a$ | $1.6$ | $1.7$ | $1.8$ | $1.9$ | $2$
---|---|---|---|---|---
Linked | $1.58\times 10^{-4}$ | $1.13\times 10^{-4}$ | $8.01\times 10^{-5}$ | $5.96\times 10^{-5}$ | $5.05\times 10^{-5}$
LC | $1.65\times 10^{-3}$ | $1.80\times 10^{-3}$ | $1.94\times 10^{-3}$ | $2.09\times 10^{-3}$ | $2.27\times 10^{-3}$
LCS | $1.30\times 10^{-3}$ | $1.40\times 10^{-3}$ | $1.66\times 10^{-3}$ | $1.74\times 10^{-3}$ | $2.16\times 10^{-3}$
Table 1: Mean $L^{2}$ squared error over 10 simulations for different $a$.
$a$ | $1.1$ | $1.2$ | $1.3$ | $1.4$ | $1.5$
---|---|---|---|---|---
Linked | $7.32\times 10^{-2}$ | $4.19\times 10^{-2}$ | $2.52\times 10^{-2}$ | $1.31\times 10^{-2}$ | $7.97\times 10^{-3}$
LC | $5.34\times 10^{-1}$ | $6.40\times 10^{-1}$ | $7.51\times 10^{-1}$ | $8.71\times 10^{-1}$ | $1.00$
LCS | $1.84\times 10^{-1}$ | $1.26\times 10^{-1}$ | $1.42\times 10^{-1}$ | $1.72\times 10^{-1}$ | $1.95\times 10^{-1}$
$a$ | $1.6$ | $1.7$ | $1.8$ | $1.9$ | $2$
---|---|---|---|---|---
Linked | $4.42\times 10^{-3}$ | $2.85\times 10^{-3}$ | $1.18\times 10^{-3}$ | $4.78\times 10^{-4}$ | $2.39\times 10^{-4}$
LC | $1.14$ | $1.28$ | $1.44$ | $1.60$ | $1.78$
LCS | $2.27\times 10^{-1}$ | $2.47\times 10^{-1}$ | $2.82\times 10^{-1}$ | $3.22\times 10^{-1}$ | $3.70\times 10^{-1}$
Table 2: Mean $L^{\infty}$ squared error over 10 simulations for different $a$.
Figure 6: Typical estimates for $n=10^{4}$ and $a=1.1$, $a=2$. We used the R
package logcondens for the log-concave projection method.
Next, we consider the case when $f_{X}$ is log-concave and not necessarily
smooth. Denoting the PDF of the beta distribution with parameters
$(\alpha,\beta)$ by $b(\alpha,\beta;x)$, we let
$f_{X}(x)=\frac{b(1,2;x)+2b(a,1;x)}{3}.$
The parameter $a$ controls the smoothness of $f_{X}$ near $x=0$. We have
compared our method to a method that computes log-concave maximum likelihood
estimators [20, 21]. This seeks to compute the log-concave projection of the
empirical distribution through an active set approach. Code is freely
available in logcondens [22] which can be found at
https://CRAN.R-project.org/package=logcondens. Details on such methods can be
found in [54], with a study of the more involved case of censored data in
[23]. Tables 1 and 2 show the mean squared $L^{2}$ and $L^{\infty}$ errors
respectively over $10$ simulations for $n=10^{5}$, as we vary $a$ for the
linked boundary diffusion estimator and the log-concave projection method
(abbreviated to LC), as well as its smoothed version (LCS). In each case, we
have shaded the most accurate estimator. The linked boundary diffusion
estimator performs much better when measured in the uniform norm but is
slightly worse in the $L^{2}$ sense when the distribution function becomes
less smooth. This is demonstrated in Figure 6 for a typical estimation using
$n=10^{4}$. To produce the tables, the linked boundary diffusion estimator
took about 0.5s on average per simulation, the log-concave projection took
about 5s, but its smoothed version was much slower, taking about 73s.
### 5.2 Numerical example with cell data
Figure 7: (A) Schematic cell cycle with geminin expression starting at the end
of G1. (B) DNA and geminin signal from individual cells can be used to obtain
a pseudo-temporal ordering of the population. An average cell follows the
indicated path (red) through the dataset. (C) Pseudotime values (gray), binned
data (blue) and kernel density estimate (red). The kernel density estimate was
obtained by solving our continuous PDE (2) by our discrete numerical method
with the ‘four corners matrix’ in (26). The stopping time, $t=0.00074$, came
from the stopping time software of one of the authors:
https://au.mathworks.com/matlabcentral/fileexchange/14034-kernel-density-
estimator.
This section demonstrates the application of the methods that we propose to a
problem in biology with the data taken from [39]. As mentioned in the
introduction, Figure 1 shows an example of what goes wrong when current
methods are applied. Figure 7 C demonstrates our proposed method, which
successfully incorporates the desired linked boundary condition.
This example originates from the study of biological processes, in particular,
cell cycle studies in cancer research (Figure 7 A). A recently developed
theory [36, 39] which relies on the distribution of cells along the cell cycle
enables the study of entire cell cycle progression kinetics. The method
utilizes data from single cell experiments like flow cytometry or single cell
RNA sequencing, where the abundance of up to thousands of cellular components
for every individual cell in a population is measured. Cells in an
unsynchronized cell population are spread over all stages of the cell cycle,
which can be seen in the exemplary dataset where levels of DNA and geminin in
single cells were measured by flow cytometry (Figure 7 B). The red curve in
Figure 7 B indicates the path that the average cell takes when it goes through
the cell cycle. Pseudotime algorithms perform a dimensionality reduction by
assigning a pseudotime value to each cell, which can be interpreted as its
position on the average curve. In this example, the pseudotime is a
quantitative value of the progression through the cell cycle. However, it is
in general not equal to real time. As the number of cells in a particular
stage is related to the average transit time through that stage, one can
derive a mapping from pseudotime to real time based on ergodic principles [36,
39, 38]. This mapping relies on the distribution of cells on the pseudotime
scale. As mentioned in the introduction, the distribution at the beginning and
the end of the cell cycle are linked due to cell division by
$f(0,t)=2\,f(1,t)\;.$ (28)
Ignoring this fact when estimating the density on the pseudotime scale results
in an erroneous transformation and thus inaccurate kinetics. The KDE with
linked boundary condition ($r=2$) produces a distribution that satisfies the
conditions (28) on the density due to cell division (Figure 7 C). The MAPiT
toolbox for single-cell data analysis [38] applies our new KDE with linked
boundary conditions to analyze cell cycle dependent molecular kinetics.
## 6 Conclusion
Our study was motivated by a dataset from a biological application, which
required a density estimation method that can handle linked boundaries; these
are crucial for recovering the correct kinetics. More broadly, boundary bias
issues are known to be a difficult
problem in the context of kernel density estimation. To our knowledge, the
linked boundary conditions that we handle here have not been previously
addressed. We have proposed a new diffusion KDE that can successfully handle
the linked boundary conditions. By using the unified transform, we obtained an
explicit solution. In particular, we proved that this diffusion estimator is a
bona fide probability density, which is also a consistent estimator at the
linked boundaries, and derived its asymptotic integrated squared bias and
variance (which shows an increase in the rate of convergence with sample
size).
We also proposed two numerical methods to compute the estimator — one is based
on its series or integral representation and the other on the backward Euler
method. We proved that the discrete/binned estimator converges to the
continuous estimator. We found the new method competes well with other
existing methods, including state-of-the-art methods designed to cope with
boundary bias, both in terms of speed and accuracy. In particular, the new
method is more accurate close to the boundary. Our new KDE with linked
boundary conditions is now used in the MAPiT toolbox for single-cell data
analysis [38] to analyze cell cycle dependent molecular kinetics.
There remain some open questions regarding the proposed models. First, the
methods in this paper may be adapted to multivariate distributions. Second,
they may be adapted to other types of boundary conditions, such as
constraints on the moments of the distribution (and other
non-local conditions). In this regard, we expect that the flexibility of the
unified transform in PDE theory will be useful in designing smoothing kernel
functions with the desired statistical properties.
#### Acknowledgments & Contributions:
MJC was supported by EPSRC grant EP/L016516/1. ZIB was supported by ARC grant
DE140100993. KK was supported by DFG grant AL316/14-1 and by the Cluster of
Excellence in Simulation Technology (EXC 310/2) at the University of
Stuttgart. SM was supported by the ARC Centre of Excellence for Mathematical
and Statistical Frontiers (ACEMS). MJC performed the theoretical
PDE/statistical analysis of both the continuous and discrete models, and the
numerical tests. SM developed and tested the binned version of the estimator.
ZIB proposed the PDE model and assisted MJC and SM in writing the paper. KK
provided the cell data and assisted in the writing of the numerical section.
MJC is grateful to Richard Samworth, Tom Trogdon and David Smith for comments,
and to Arieh Iserles for introducing him to the problem. The authors are
grateful to the referees for comments that improved the manuscript.
## Appendix A Proofs of Results in Section 2
### A.1 Formal derivation of solution formula
We begin with a formal description of how to obtain the solution formulae in
Theorem 2. The most straightforward way to construct the solution is via the
unified transform, and the following steps provide a formal solution which we
must then rigorously prove is indeed a solution.
The first step is to write the PDE in divergence form:
$[\exp(-ikx+k^{2}t/2)f]_{t}-\frac{1}{2}[\exp(-ikx+k^{2}t/2)(f_{x}+ikf)]_{x}=0,\quad
k\in\mathbb{C}.$
We will employ Green’s theorem,
$\textstyle\iint_{\Omega}\Big{(}\frac{\partial F}{\partial x}-\frac{\partial
G}{\partial y}\Big{)}dxdy=\int_{\partial\Omega}\big{(}Gdx+Fdy\big{)},$ (29)
over the domain $(0,1)\times(0,t)$. Here one must assume a priori estimates on
the smoothness of the solution $f$, which will be verified later using the
candidate solution. Define the transforms:
$\displaystyle\textstyle\hat{f}_{0}(k):=\int_{0}^{1}\exp(-ikx)f_{0}(x)dx,\quad$
$\displaystyle\textstyle\hat{f}(k,t):=\int_{0}^{1}\exp(-ikx)f(x,t)dx,$
$\displaystyle\textstyle\tilde{g}(k,t):=\int_{0}^{t}\exp(k\tau)f(1,\tau)d\tau,\quad$
$\displaystyle\textstyle\tilde{h}(k,t):=\int_{0}^{t}\exp(k\tau)f_{x}(1,\tau)d\tau,$
where again we assume these are well defined. Green’s theorem and the boundary
conditions imply (after a small amount of algebra) the so-called ‘global
relation’, coupling the transforms of the solution and initial data:
$\begin{split}\hat{f}(k,t)\exp(k^{2}t/2)=&\textstyle\hat{f}_{0}(k)-\frac{1}{2}[\tilde{h}(k^{2}/2,t)+ikr\tilde{g}(k^{2}/2,t)]\\\
&+\textstyle\frac{\exp(-ik)}{2}[\tilde{h}(k^{2}/2,t)+ik\tilde{g}(k^{2}/2,t)],\quad
k\in\mathbb{C}.\end{split}$ (30)
The next step is to invert via the inverse Fourier transform, yielding
$\begin{split}f(x,t)=\textstyle\frac{1}{2\pi}\int_{-\infty}^{\infty}&\exp(ikx-k^{2}t/2)\big{\\{}\hat{f}_{0}(k)-\frac{1}{2}[\tilde{h}(k^{2}/2,t)+ikr\tilde{g}(k^{2}/2,t)]\\\
&+\textstyle\frac{\exp(-ik)}{2}[\tilde{h}(k^{2}/2,t)+ik\tilde{g}(k^{2}/2,t)]\big{\\}}dk.\end{split}$
(31)
However, this expression contains the unknown functions $\tilde{g}$ and
$\tilde{h}$. To get rid of these we use some complex analysis and symmetries
of the global relation (30). Define the domains
$D^{+}=\\{k\in\mathbb{C}^{+}:\mathrm{Re}(k^{2})<0\\},\quad
D^{-}=\\{k\in\mathbb{C}^{-}:\mathrm{Re}(k^{2})<0\\},\quad D=D^{+}\cup D^{-}.$
(32)
These are shown in Figure 8. A quick application of Cauchy’s theorem and
Jordan’s lemma means we can re-write our solution as
$\begin{split}f(x,t)=&\textstyle\frac{1}{2\pi}\int_{-\infty}^{\infty}\exp(ikx-k^{2}t/2)\hat{f}_{0}(k)dk\\\
&\textstyle-\frac{1}{2\pi}\int_{\partial
D^{+}}\frac{\exp(ikx-k^{2}t/2)}{2}[\tilde{h}(k^{2}/2,t)+ikr\tilde{g}(k^{2}/2,t)]dk\\\
&\textstyle-\frac{1}{2\pi}\int_{\partial
D^{-}}\frac{\exp(ik(x-1)-k^{2}t/2)}{2}[\tilde{h}(k^{2}/2,t)+ik\tilde{g}(k^{2}/2,t)]dk.\end{split}$
(33)
We now use the symmetry under $k\rightarrow-k$ of the global relation (30) and
the fact that the argument in each of $\tilde{g}$ and $\tilde{h}$ is $k^{2}/2$
to set up the linear system:
$\begin{split}\frac{1}{2}\begin{pmatrix}[\exp(-ik)-1]&ik[\exp(-ik)-r]\\\
[\exp(ik)-1]&-ik[\exp(ik)-r]\end{pmatrix}\begin{pmatrix}\tilde{h}(\frac{k^{2}}{2},t)\\\
\tilde{g}(\frac{k^{2}}{2},t)\end{pmatrix}=\begin{pmatrix}\hat{f}(k,t)\exp(\frac{tk^{2}}{2})-\hat{f}_{0}(k)\\\
\hat{f}(-k,t)\exp(\frac{tk^{2}}{2})-\hat{f}_{0}(-k)\end{pmatrix}.\end{split}$
Defining the determinant function $\Upsilon(k)=2(1+r)(\cos(k)-1),$ solving the
linear system leads to the relations:
$\displaystyle\textstyle\frac{\tilde{h}(k^{2}/2,t)+ikr\tilde{g}(k^{2}/2,t)}{2}$
$\displaystyle=\textstyle\frac{1}{\Upsilon(k)}\Big\{\hat{f}_{0}(k)[(1+r)\exp(ik)-2r]+\hat{f}_{0}(-k)(1-r)\exp(-ik)$
$\displaystyle\qquad\qquad-\textstyle\exp(k^{2}t/2)\hat{f}(k,t)[(1+r)\exp(ik)-2r]-\exp(k^{2}t/2)\hat{f}(-k,t)(1-r)\exp(-ik)\Big\},$
$\displaystyle\textstyle\frac{\tilde{h}(k^{2}/2,t)+ik\tilde{g}(k^{2}/2,t)}{2}$
$\displaystyle=\frac{1}{\Upsilon(k)}\Big\{\hat{f}_{0}(k)[2\exp(ik)-(1+r)]+\hat{f}_{0}(-k)(1-r)$
$\displaystyle\qquad\qquad-\exp(k^{2}t/2)\hat{f}(k,t)[2\exp(ik)-(1+r)]-\exp(k^{2}t/2)\hat{f}(-k,t)(1-r)\Big\}.$
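As a quick check (ours) that $\Upsilon$ is indeed the essential determinant factor of the $2\times 2$ system above, note that
$\det\frac{1}{2}\begin{pmatrix}\exp(-ik)-1&ik[\exp(-ik)-r]\\ \exp(ik)-1&-ik[\exp(ik)-r]\end{pmatrix}=-\frac{ik}{4}\big[(\exp(-ik)-1)(\exp(ik)-r)+(\exp(-ik)-r)(\exp(ik)-1)\big]=\frac{ik}{4}\Upsilon(k),$
since the bracket expands to $2(1+r)-(1+r)(\exp(ik)+\exp(-ik))=-\Upsilon(k)$.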
Since $\Upsilon(k)$ is zero whenever $\cos(k)=1$, before we substitute these
relations into our integral solution we deform the contours $\partial D^{+}$
and $\partial D^{-}$ as shown in Figure 8 to avoid the poles of
$\Upsilon(k)^{-1}$ along the real line.
Figure 8: Left: The domains $D^{\pm}$ as well as the orientation of the
boundaries $\partial D^{\pm}$. Right: The deformed contours to avoid the
singularity at $k=0$. The bold arrow shows a path on which both the $x$ and
$t$ exponential parts of the integrand are exponentially decaying which can be
used for efficient numerical evaluation.
Upon substitution, we are still left with unknown contributions proportional
to
$\displaystyle\textstyle I_{1}(x,t):=\int_{\partial
D^{+}}\frac{\exp(ikx)}{\Upsilon(k)}\big{\\{}\hat{f}(k,t)[(1+r)\exp(ik)-2r]\\!+\hat{f}(-k,t)(1-r)\exp(-ik)\big{\\}}dk$
$\displaystyle\textstyle I_{2}(x,t):=\int_{\partial
D^{-}}\frac{\exp(ik(x-1))}{\Upsilon(k)}\big{\\{}\hat{f}(k,t)[2\exp(ik)-(1+r)]+\hat{f}(-k,t)(1-r)\big{\\}}dk.$
We will argue that the integral $I_{1}(x,t)$ along $\partial D^{+}$ vanishes
and the argument for $I_{2}(x,t)$ follows the same reasoning. First observe
that as $k\rightarrow\infty$ in $\mathbb{C}^{+}$,
$\Upsilon(k)^{-1}\sim\exp(ik)/(1+r)$. Also, we must have that
$\displaystyle\textstyle\exp(ik)\hat{f}(k,t)=\int_{0}^{1}\exp(ik(1-x))f(x,t)dx$
is bounded in $\mathbb{C}^{+}$. $\hat{f}(-k,t)$ is also bounded in
$\mathbb{C}^{+}$ and hence the function
$\textstyle\frac{\hat{f}(k,t)[(1+r)\exp(ik)-2r]+\hat{f}(-k,t)(1-r)\exp(-ik)}{\Upsilon(k)}$
is bounded in $\mathbb{C}^{+}$. It follows that we can close the contour in
the upper half plane and use Jordan’s lemma to see that $I_{1}(x,t)$ vanishes.
We then obtain the integral form of the solution in Theorem 2.
To obtain the series form we can write
$2\exp(ik)-(1+r)=-\exp(ik)\Upsilon(k)+\exp(ik)[(1+r)\exp(ik)-2r],$ which
implies
$\displaystyle\textstyle\int_{\partial
D^{-}}\frac{\exp(ik(x-1)-k^{2}t/2)}{\Upsilon(k)}\hat{f}_{0}(k)[2\exp(ik)-(1+r)]dk=$
$\displaystyle\textstyle\int_{\partial
D^{-}}\exp(ikx-k^{2}t/2)\hat{f}_{0}(k)dk-\\!\\!\int_{\partial
D^{-}}\\!\\!\\!\frac{\exp(ikx-k^{2}t/2)}{{\Upsilon(k)}}\hat{f}_{0}(k)[(1+r)\exp(ik)-2r]dk.$
Taking into account the orientation of $\partial D^{-}$, upon deforming the
first of these integrals back to the real line, we see that it cancels the
first integral in (6). Hence we have
$\textstyle 2\pi f(x,t)=-\int_{\partial
D}\\!\\!\\!\frac{e^{ikx-k^{2}t/2}}{{\Upsilon(k)}}\big{\\{}\hat{f}_{0}(k)[(1+r)e^{ik}-2r]+\hat{f}_{0}(-k)(1-r)e^{-ik}\big{\\}}dk.$
(34)
Define the function
$\displaystyle\textstyle
F(x,t;k):=\frac{e^{ikx-k^{2}t/2}\big{\\{}\hat{f}_{0}(k)[(1+r)e^{ik}-2r]+\hat{f}_{0}(-k)(1-r)e^{-ik}\big{\\}}}{2(1+r)}.$
The integrand in (34) has double poles at $k_{n}=2n\pi$, $n\in\mathbb{Z}$, so
we deform the contour $\partial D$ to $\partial\tilde{D}$ shown in Figure 9.
Cauchy’s residue theorem then implies that
$\textstyle f(x,t)=-\frac{1}{2\pi}\int_{\partial D}\frac{F(x,t;k)}{\cos(k)-1}dk=\sum_{n\in\mathbb{Z}}-2iF^{\prime}(x,t;k_{n}),$
(35)
where ${}^{\prime}$ denotes differentiation with respect to $k$. It is then
straightforward to check the equality of (35) and (7).
Figure 9: Deformation of the contour to circle the poles. The contributions
along the real line between these circles cancel.
### A.2 Proof of Theorems 1 and 2
###### Proof of Theorems 1 and 2.
For $t>0$, it is clear that the function $f$ given by (6) is smooth in $x,t$
and real analytic in $x$, as well as solving the heat equation. This follows
from being able to differentiate under the integral sign due to the
$\exp(-k^{2}t/2)$ factor and the fact that extending $x$ to a complex argument
yields an analytic function. Note also that the argument in Section A.1 does
rigorously show equivalence between the series and integral forms of $f$. It
is easy to check via the series (7) that the function $f$ satisfies the
required boundary conditions and hence (5) also holds by simple integration by
parts. Regarding the convergence properties as $t\downarrow 0$ when extra
regularity of the initial condition is assumed, Proposition 2 deals with the
case of continuous $f_{0}$, whilst Proposition 3 deals with $f_{0}\in
L^{p}([0,1])$ for $1\leq p<\infty$.
Hence there are two things left to prove; the fact that
$\mu_{t}:=f(\cdot,t)dx$ lies in $C^{w}(0,T;M([0,1]))$ as well as uniqueness in
$C^{w}(0,T;M([0,1]))$ (and $C(0,T;L^{p}([0,1]))$ for $1\leq p<\infty$).
To prove that $\mu_{t}\in C^{w}(0,T;M([0,1]))$, let $g\in C([0,1])$ and
consider the integral kernel defined by (9). By Fubini’s theorem we have
$\textstyle\int_{0}^{1}f(x,t)g(x)dx=\int_{0}^{1}\int_{0}^{1}K(r;x,y,t)g(x)dxdf_{0}(y).$
By Proposition 2 the integral
$\textstyle\int_{0}^{1}K(r;x,y,t)g(x)dx$
converges for all $y$ and is uniformly bounded as $t\downarrow 0$. We will use
the explicit calculation of the endpoint limits at $y=0,1$. By the dominated
convergence theorem, we have
$\displaystyle\textstyle\lim_{t\downarrow 0}$
$\displaystyle\textstyle\int_{0}^{1}f(x,t)g(x)dx=\big{(}\frac{r}{1+r}g(0)+\frac{1}{1+r}g(1)\big{)}f_{0}(\\{0\\})$
$\displaystyle+\textstyle\big{(}\frac{r}{1+r}g(0)+\frac{1}{1+r}g(1)\big{)}f_{0}(\\{1\\})+\int_{x\in(0,1)}g(x)df_{0}(x)$
$\displaystyle=\textstyle
f_{0}(g)+\frac{g(1)-g(0)}{1+r}[f_{0}(\\{0\\})-rf_{0}(\\{1\\})]=f_{0}(g),$
which proves the required weak continuity.
To prove uniqueness, suppose that there exists $\mu_{t},\tau_{t}\in M([0,1])$
which are both weak solutions with $\mu_{0}=\tau_{0}=f_{0}$. Set
$m_{t}=\mu_{t}-\tau_{t}$. We will consider expansions of functions in the
generalized eigenfunctions of the adjoint problem. It is straightforward to
check that the adjoint problem (with the boundary conditions in (4)) is
Birkhoff regular and hence the generalized eigenfunctions are complete in
$L^{2}([0,1])$. In fact we can show that any continuous function $g\in
C([0,1])$ of bounded variation with $g(0)=g(1)$ can be approximated uniformly
by linear combinations of these functions. This follows by either arguing as
we did in the proof of Proposition 2 (the case of non-matching derivatives
holds but is more involved) or follows from Theorem 7.4.4 of [46]. Now suppose
that $\lambda$ lies in the spectrum of the adjoint $\mathbb{A}^{*}$ defined by
$\textstyle\mathbb{A}^{*}=-\frac{d^{2}}{dx^{2}},\quad\mathcal{D}(\mathbb{A}^{*})=\\{u\in
H^{2}([0,1]):u_{x}(1)=ru_{x}(0),u(0)=u(1)\\}.$ (36)
In our case, the generalized eigenfunctions associated with $\lambda$
correspond to a basis of $\mathcal{N}((\mathbb{A}-\lambda I)^{l})$ where $l=1$
or $2$. If $l=2$, and the nullity of $(\mathbb{A}-\lambda I)^{2}$ is greater
than $\mathbb{A}-\lambda I$, we can choose a basis $\\{g_{1},g_{2}\\}$ such
that $(\mathbb{A}-\lambda I)g_{2}=g_{1}$. For the general case and chains of
generalized eigenfunctions, we refer the reader to [46]. Now suppose that
$g\in\mathcal{N}(\mathbb{A}-\lambda I)$, then $g$ must be smooth on $[0,1]$.
It follows that for $t>0$
$\textstyle 2\frac{d}{dt}m_{t}(g)=-\lambda m_{t}(g).$
Note that $m_{0}(g)=0$ and hence we must have that $m_{t}(g)=0$ for all $t\geq
0$. Similarly, suppose that
$\\{g_{1},g_{2}\\}\subset\mathcal{N}((\mathbb{A}-\lambda I)^{2})$ with
$(\mathbb{A}-\lambda I)g_{2}=g_{1}$. Then by the above reasoning we have
$m_{t}(g_{1})=0$ for all $t\geq 0$ and hence
$\textstyle 2\frac{d}{dt}m_{t}(g_{2})=-\lambda
m_{t}(g_{2})-m_{t}(g_{1})=-\lambda m_{t}(g_{2}).$
Again we see that $m_{t}(g_{2})=0$ for all $t\geq 0$. Though we don’t have to
consider it in our case, it is clear that the same argument would work for
chains of longer lengths. The expansion theorem discussed above together with
the dominated convergence theorem shows that if $g\in C([0,1])$ of bounded
variation with $g(0)=g(1)=0$, then $m_{t}(g)=0$ for all $t\geq 0$. This
implies that if $U\subset(0,1)$ is open then $m_{t}(U)=0$ for all $t\geq 0$.
In particular, we must have
$m_{t}=a(t)\delta_{0}+b(t)\delta_{1}$
with $a,b$ continuous. In fact, for any $f\in\mathcal{F}(r)$ we have
$\textstyle\frac{d}{dt}[a(t)+b(t)]f(1)=\frac{a(t)}{2}f_{xx}(0)+\frac{b(t)}{2}f_{xx}(1),$
from which we easily see that $a=b=0$ and hence uniqueness follows. This also
shows uniqueness in the space $C(0,T;L^{p}([0,1]))$, where no
argument at the endpoints is needed. ∎
### A.3 Proof of Theorem 3
The proof requires that we study the solution of the PDE as $t\downarrow 0$.
We break down the proof into a number of smaller results, which allows us to
use them elsewhere. Recall the definition in (9). We shall also need the
function
$\textstyle K_{1}(x,t):=\sum_{n\in\mathbb{Z}}{\exp(ik_{n}x-k_{n}^{2}t/2)},$
(37)
defined for $t>0$. Using the Poisson summation formula, we can write $K_{1}$
as
$\textstyle K_{1}(x,t)=\frac{1}{\sqrt{2\pi
t}}\sum_{n\in\mathbb{Z}}\exp\Big{(}-\frac{(x-n)^{2}}{2t}\Big{)},$
a periodic summation of the heat kernel. The following lemma is well-known and
hence stated without proof.
###### Lemma 1.
Let $w\in C([0,1])$ then
$\textstyle\int_{0}^{1}K_{1}(x-y,t)w(y)dy$ (38)
is bounded by $\|w\|_{\infty}$ and converges pointwise to $w(x)$ for any
$x\in(0,1)$ and to $(w(0)+w(1))/2$ for $x=0,1$ as $t\downarrow 0$. If
$w(0)=w(1)$ then (38) converges to $w(x)$ uniformly over the interval $[0,1]$.
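Both forms of $K_{1}$ are easy to compare numerically; a quick sanity check of ours (with illustrative parameter values):

```matlab
% The Fourier form (37) of K_1 and its periodized-Gaussian form agree.
x = 0.3;  t = 0.05;  N = 200;
ns = -N:N;  kn = 2*pi*ns;
K_fourier = real(sum(exp(1i*kn*x - kn.^2*t/2)));
K_gauss   = sum(exp(-(x - ns).^2/(2*t))) / sqrt(2*pi*t);
abs(K_fourier - K_gauss)           % of the order of machine precision
```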
We will also need the following.
###### Lemma 2.
Let $f_{0}\in L^{1}([0,1])$, then
$\textstyle
t\sum_{n\in\mathbb{N}}\left|\exp(-k_{n}^{2}t/2)k_{n}\hat{f}_{0}(k_{n})\right|\rightarrow
0\quad\text{ as }\quad t\downarrow 0.$
###### Proof.
By the Riemann–Lebesgue lemma, we have that
$\lim_{n\rightarrow\infty}\hat{f}_{0}(k_{n})=0$. So given $\epsilon>0$, let
$N$ be large such that if $n\geq N$ then
$\left|\hat{f}_{0}(k_{n})\right|\leq\epsilon$. Then
$\textstyle
t\sum_{n>N}\left|\exp(-k_{n}^{2}t/2)k_{n}\hat{f}_{0}(k_{n})\right|\leq\frac{t\epsilon}{2\pi}\sum_{n>N}\exp(-2n^{2}\pi^{2}t)4n\pi^{2}.$
Let $h=2\pi$. The sum is an approximation of the integral
$\int_{h(N+1)}^{\infty}\exp(-y^{2}t/2)\,y\,dy$ and we have
$t\sum_{n>N}\left|\exp(-k_{n}^{2}t/2)k_{n}\hat{f}_{0}(k_{n})\right|\leq\frac{\epsilon}{2\pi}\int_{0}^{\infty}\exp(-y^{2}t/2)\,t(y+2h)\,dy<\tilde{C}\epsilon,$
for some constant $\tilde{C}$. It follows that
$\limsup_{t\downarrow
0}t\sum_{n\in\mathbb{N}}\left|\exp(-k_{n}^{2}t/2)k_{n}\hat{f}_{0}(k_{n})\right|\leq\tilde{C}\epsilon.$
Since $\epsilon>0$ was arbitrary, the lemma follows. ∎
The following Proposition then describes the limit properties of our
constructed solution as $t\downarrow 0$ in the case of continuous initial
data.
###### Proposition 2.
Let $f_{0}\in C([0,1])$ and $K$ be given by (9). For $t\in(0,1]$ define
$\textstyle f(x,t):=\int_{0}^{1}K(r;x,y,t)f_{0}(y)dy,\quad
q(x,t):=\int_{0}^{1}K(r;y,x,t)f_{0}(y)dy,$
(note the interchange of $x,y$ as arguments of $K$ for the definition of $q$).
Then there exists a constant $C$ (dependent on $r$) such that
$\sup_{x\in[0,1],t\in(0,1]}\max\\{\left|f(x,t)\right|,\left|q(x,t)\right|\\}\leq
C\|f_{0}\|_{\infty}.$ (39)
Furthermore,
$\displaystyle\lim_{t\downarrow 0}f(x,t)$
$\displaystyle=\begin{cases}f_{0}(x),\quad x\in(0,1)\\\
\frac{r}{1+r}[f_{0}(0)+f_{0}(1)],\quad x=0\\\
\frac{1}{1+r}[f_{0}(0)+f_{0}(1)],\quad x=1\end{cases}$
$\displaystyle\lim_{t\downarrow 0}q(x,t)$
$\displaystyle=\begin{cases}f_{0}(x),\quad x\in(0,1)\\\
\frac{1}{1+r}[rf_{0}(0)+f_{0}(1)],\quad x=0,1\end{cases}.$
Finally, in the case that $f_{0}(0)=rf_{0}(1)$, $f(x,t)$ converges to
$f_{0}(x)$ uniformly over $x\in[0,1]$ as $t\downarrow 0$.
###### Proof.
We can write
$\begin{split}\textstyle K(r;x,y,t)&=\textstyle
K_{1}(x-y,t)\big{[}1+(x-y)\frac{1-r}{1+r}\big{]}+K_{1}(x+y,t)(x+y-1)\frac{1-r}{1+r}\\\
&\quad\quad\quad\quad+\frac{t(1-r)}{1+r}\big{[}K_{1}^{\prime}(x+y,t)+K_{1}^{\prime}(x-y,t)\big{]}.\end{split}$
(40)
Here ′ means the derivative with respect to the spatial variable.
To study the limit as $t\downarrow 0$, we note that we can ignore the terms
with a factor of $t$ using Lemma 2. By a change of variable we have
$\textstyle\int_{0}^{1}K_{1}(x+y,t)(x+y-1)f_{0}(y)dy=\int_{0}^{1}K_{1}(x-y,t)(x-y)f_{0}(1-y)dy.$
The bound (39) now follows from Lemma 1, as do the pointwise limits from a
straightforward, if somewhat tedious, calculation.
Now suppose that $f_{0}(0)=rf_{0}(1)$ and split the initial data as follows:
$f_{0}(x)=f_{0}(0)+x(1-r)f_{0}(1)+p_{0}(x).$ (41)
Then $p_{0}\in C([0,1])$ with the crucial property that $p_{0}(0)=p_{0}(1)=0$.
Arguing as above and using Lemma 1, we see that the following limit holds
uniformly
$\textstyle\lim_{t\downarrow 0}\int_{0}^{1}K(r;x,y,t)p_{0}(y)dy=p_{0}(x).$
So it only remains to show that
$\textstyle\int_{0}^{1}K(r;x,y,t)[f_{0}(0)+y(1-r)f_{0}(1)]dy=f_{0}(0)+x(1-r)f_{0}(1).$
(42)
Let $l(x)=f_{0}(0)+x(1-r)f_{0}(1)$ and set $a=f_{0}(1)$. An explicit
calculation yields
$\textstyle\hat{l}(k)=\frac{i}{k}(\exp(-ik)-r)a+\frac{1}{k^{2}}(\exp(-ik)-1)a(1-r).$
We then have
$\displaystyle\textstyle\hat{l}(k)[(1+r)\exp(ik)-2r]+\hat{l}(-k)(1-r)\exp(-ik)=$
$\displaystyle\textstyle-a\big[\frac{ri}{k}+\frac{1-r}{k^{2}}\big]\Upsilon(k).$
We can then apply the residue theorem to the representation (34) to obtain
(42). ∎
In the case where the true density is not continuous but belongs to
$L^{p}([0,1])$ for $p\geq 1$, we have the following.
###### Proposition 3.
Let $1\leq p<\infty$, $f_{0}\in L^{p}([0,1])$ and $K$ be given by (9). For
$t\in(0,1]$ define
$\displaystyle f(x,t):=$
$\displaystyle\textstyle\int_{0}^{1}K(r;x,y,t)f_{0}(y)dy.$
Then $f(\cdot,t)$ converges to $f_{0}$ in $L^{p}([0,1])$ as ${t\downarrow 0}$.
###### Proof.
Note that the case $r=1$ is well-known. The fact that $f_{0}\in L^{1}([0,1])$
(by Hölder’s inequality) together with Lemma 2 shows that we can ignore the
parts multiplied by $t$ in the kernel representation (40). The fact that
$yf_{0}(y)\in L^{p}([0,1])$ implies the convergence by simply summing the
parts in (40) and using the $r=1$ case with a change of variable for the
$K_{1}(x+y,t)$ part. ∎
###### Proof of Theorem 3.
We have that
$\mathbb{E}_{f_{X}}(f(x,t))=\frac{1}{n}\sum_{k=1}^{n}\int_{0}^{1}K(r;x,y,t)f_{X}(y)dy=\int_{0}^{1}K(r;x,y,t)f_{X}(y)dy.$
The first part of the theorem therefore follows from Proposition 2. For the
second part, assume that $f_{X}\in C^{1}([0,1])$ and
$x_{t}=x+\mathcal{O}(\sqrt{t})$. The relation (49) implies the result since we
have that
$\int_{0}^{t}\left|K_{1}(x,s)\right|ds\leq C\sqrt{t}$
for some $C$ independent of $x$ and
$\left|f_{X}(x)-f_{X}(x_{t})\right|=\mathcal{O}(\sqrt{t})$. ∎
### A.4 Proof of Proposition 1
###### Proof of Proposition 1.
We first show that in this case the solution is continuous on
$[0,1]\times[0,T)$ for any $T\in(0,\infty]$. The case of continuity at points
$t>0$ has already been discussed so suppose that
$(x_{n},t_{n})\rightarrow(x,0)$ then
$\left|f(x_{n},t_{n})-f_{0}(x)\right|\leq\left|f_{0}(x_{n})-f_{0}(x)\right|+\left|f(x_{n},t_{n})-f_{0}(x_{n})\right|.$
The first term converges to zero by continuity of $f_{0}$ whilst the second
term converges to zero by the proven uniform convergence as $t\downarrow 0$.
Using the large-time limit given by Corollary 1, we will take $T=\infty$
without loss of generality.
Since the solution is regular in the interior and continuous on the closure,
this immediately means that we can apply the maximum principle to deduce that
$\sup_{(x,t)\in\overline{\Omega}}f(x,t)=\sup_{(x,t)\in\partial{}\Omega}f(x,t),$
where $\Omega=(0,1)\times(0,T)$. A similar result holds for the infimum.
Evaluating (35) at $x=0$ leads to
$\textstyle
f(0,t)=\frac{2r}{1+r}\sum_{n\in\mathbb{Z}}{\exp(-k_{n}^{2}t/2)}\hat{f}_{0}(k_{n})=\frac{r\sqrt{2}}{(1+r)\sqrt{\pi t}}\int_{-\infty}^{\infty}\exp(-y^{2}/(2t))f_{0}(y)dy,$
where we have used the function $K_{1}$ defined by (37) and extended $f_{0}$
periodically (values at the endpoints contributed nothing). Hence
$\textstyle\frac{2ar}{1+r}\leq f(0,t)\leq\frac{2br}{1+r}.$
Similar calculations yield
$\textstyle\frac{2a}{1+r}\leq f(1,t)\leq\frac{2b}{1+r}.$
The fact that $\min\{2r/(1+r),2/(1+r)\}\leq 1\leq\max\{2r/(1+r),2/(1+r)\}$
(the two values sum to $2$ for any $r\geq 0$) finishes the proof. ∎
### A.5 Proof of Theorem 4
###### Proof of Theorem 4.
We have that
$\int_{0}^{1}f(x,t)dx=\int_{0}^{1}\int_{0}^{1}K(r;x,y,t)df_{0}(y)dx=\int_{0}^{1}\int_{0}^{1}K(r;x,y,t)dxdf_{0}(y)$
by Fubini’s theorem. Using the series representation (40) and integrating term
by term (justified due to the exponential decaying factors) we have
$\int_{0}^{1}K(r;x,y,t)dx=1+\frac{1-r}{1+r}\sum_{n\in\mathbb{Z}}\exp(-k_{n}^{2}t/2)\int_{0}^{1}\big{[}x\exp(ik_{n}(x-y))+(x-1)\exp(ik_{n}(x+y))\big{]}dx.$
All other terms vanish since the integral of $\exp(ik_{n}x)$ is $0$ unless
$n=0$. We can change variables for the second term in the integrand to see
that the above is equal to
$\displaystyle
1+\frac{1-r}{1+r}\sum_{n\in\mathbb{Z}}\exp(-k_{n}^{2}t/2)\int_{0}^{1}\big{[}x\exp(ik_{n}(x-y))-x\exp(-ik_{n}(x-y))\big{]}dx$
$\displaystyle=$ $\displaystyle
1+\frac{1-r}{1+r}\sum_{n\in\mathbb{Z}}\exp(-k_{n}^{2}t/2)\int_{0}^{1}2ix\sin(k_{n}(x-y))dx=1,$
where we have used the fact that $\sin(k_{n}(x-y))$ is odd in $k_{n}$ and
$\exp(-k_{n}^{2}t/2)$ is even in the last equality. Since $f_{0}$ is a
probability measure, it follows that $\int_{0}^{1}f(x,t)dx=1$, i.e. part (1)
holds.
We next show that the integral kernel $K(r;x,y,t)$ is non-negative for $r\geq
0,t>0$ and $x,y\in[0,1]$. Suppose this were false for some
$(x_{0},y_{0})\in[0,1]^{2}$. The Poisson summation formula gives
$K(r;0,y,t)=\frac{2r}{1+r}K_{1}(y,t)>0,\quad
K(r;1,y,t)=\frac{2}{1+r}K_{1}(y,t)>0,$
and hence $(x_{0},y_{0})\in(0,1)^{2}$. Choose a continuous $f_{0}=u_{n}$ that
integrates to $1$, with $u_{n}(y)\geq 0$ and $u_{n}(y)=0$ unless
$|y-y_{0}|\leq 1/n$. Then for large $n$, $u_{n}$ satisfies the required
boundary conditions (it vanishes in a neighborhood of the endpoints) and we
must have that
$f_{n}(x_{0},t):=\int_{0}^{1}K(r;x_{0},y,t)u_{n}(y)dy\geq 0,$
by Proposition 1. But it clearly holds by continuity of the integral kernel
that $\lim_{n\rightarrow\infty}f_{n}(x_{0},t)=K(r;x_{0},y_{0},t)<0$, a
contradiction. This proves part (2) of the theorem. ∎
## Appendix B Proof of Theorem 5
###### Proof.
We begin with the proof of 1. Recall that
$\textstyle\mathrm{Var}_{f_{X}}[f(x,t)]=\frac{\mathbb{E}_{f_{X}}[K(x,Y,t)^{2}]}{n}-\frac{\mathbb{E}_{f_{X}}[K(x,Y,t)]^{2}}{n},$
where $K$ is the kernel given by (40). The second of these terms is bounded by
a constant multiple of $1/n$ so we consider the first. Recall the
decomposition (40):
$\begin{split}\textstyle K(r;x,y,t)&=\textstyle
K_{1}(x-y,t)\big{[}1+(x-y)\frac{1-r}{1+r}\big{]}+K_{1}(x+y,t)(x+y-1)\frac{1-r}{1+r}\\\
&\quad\quad\quad\quad+\frac{t(1-r)}{1+r}\big{[}K_{1}^{\prime}(x+y,t)+K_{1}^{\prime}(x-y,t)\big{]},\end{split}$
where $K_{1}$ is the standard periodic heat kernel. For $x,y\in[0,1]$ we have
that
$\displaystyle K_{1}(x-y,t)$ $\displaystyle\textstyle\sim_{t\downarrow
0}\frac{1}{\sqrt{2\pi
t}}\big{[}\exp\big{(}-\frac{(x-y)^{2}}{2t}\big{)}+\exp\big{(}-\frac{(x-y-1)^{2}}{2t}\big{)}+\exp\big{(}-\frac{(x-y+1)^{2}}{2t}\big{)}\big{]}$
$\displaystyle K_{1}(x+y,t)$ $\displaystyle\textstyle\sim_{t\downarrow
0}\frac{1}{\sqrt{2\pi
t}}\big{[}\exp\big{(}-\frac{(x+y)^{2}}{2t}\big{)}+\exp\big{(}-\frac{(x+y-1)^{2}}{2t}\big{)}+\exp\big{(}-\frac{(x+y-2)^{2}}{2t}\big{)}\big{]},$
with the rest of the expansion exponentially small as $t\downarrow 0$ and the
asymptotics valid upon taking derivatives. Using this, it is straightforward
to show that we can write
$\textstyle K(r;x,y,t)=\frac{1}{\sqrt{2\pi t}}G(r;x,y,t),$
where $G$ is bounded. From the above asymptotic expansions, we can write
$\displaystyle G(r;x,y,t)$
$\displaystyle=\textstyle\exp\Big{(}-\frac{(x-y)^{2}}{2t}\Big{)}+h_{1}(x,y,t)\exp\Big{(}-\frac{(x-y-1)^{2}}{2t}\Big{)}$
$\displaystyle\quad\textstyle+h_{2}(x,y,t)\exp\Big{(}-\frac{(x-y+1)^{2}}{2t}\Big{)}+h_{3}(x,y,t)\exp\Big{(}-\frac{(x+y)^{2}}{2t}\Big{)}$
$\displaystyle\quad\quad\textstyle+h_{4}(x,y,t)\exp\Big{(}-\frac{(x+y-2)^{2}}{2t}\Big{)}+E(x,y,t),$
where the $h_{i}$ are bounded and the error term $E(x,y,t)$ is exponentially
small as $t\downarrow 0$ uniformly in $x,y$. Furthermore, we have
$\textstyle\lim_{t\downarrow
0}\int_{0}^{1}\int_{0}^{1}f_{X}(y)K(r;x,y,t)h_{1}(x,y,t)\exp\Big{(}-\frac{(x-y-1)^{2}}{2t}\Big{)}dydx=0$
by the dominated convergence theorem (by considering the inner integral as a
function of $x$). Similar results hold for the other $h_{i}$ multiplied by
their relative Gaussian functions. Similarly, we have
$\textstyle\lim_{t\downarrow
0}\frac{1}{\sqrt{t}}\int_{0}^{1}\int_{0}^{1}\exp\Big{(}-\frac{(x-y)^{2}}{2t}\Big{)}f_{X}(y)h_{1}(x,y,t)\exp\Big{(}-\frac{(x-y-1)^{2}}{2t}\Big{)}dydx=0$
and likewise for the other $h_{i}$ multiplied by their relative Gaussian
functions. The integral
$\textstyle\frac{1}{\sqrt{\pi
t}}\int_{0}^{1}f_{X}(y)\exp\Big{(}-\frac{(x-y)^{2}}{t}\Big{)}dy$
is bounded and converges pointwise for almost all $x\in[0,1]$ to $f_{X}(x)$.
It follows that
$\textstyle\int_{0}^{1}\frac{\mathbb{E}_{f_{X}}[K(x,Y,t)^{2}]}{n}dx=\frac{1}{2n\pi t}\int_{0}^{1}\int_{0}^{1}f_{X}(y)\exp\Big(-\frac{(x-y)^{2}}{t}\Big)dydx+o\Big(\frac{1}{n\sqrt{t}}\Big).$
(43)
The rate (15) now follows.
We now prove 2 and 3. Define the function $p_{0}(x)$ via
$f_{X}(x)=f_{X}(0)+x(1-r)f_{X}(1)+p_{0}(x)$, then the proof of Proposition 2
showed that
$\mathbb{E}_{f_{X}}[f(x,t)]-f_{X}(x)=\int_{0}^{1}K(r;x,y,t)[p_{0}(y)-p_{0}(x)]dy.$
(44)
Define the function
$w(x,y)=p_{0}(y)-p_{0}(x)+(x-y)\frac{1-r}{1+r}(p_{0}(y)+p_{0}(1-y)),$ (45)
then (44) and (40) imply that $\mathbb{E}_{f_{X}}[f(x,t)]-f_{X}(x)$ is equal
to
$\textstyle\int_{0}^{1}K_{1}(x-y,t)w(x,y)dy+\frac{t(1-r)}{1+r}\int_{0}^{1}K_{1}(x-y,t)[p_{0}^{\prime}(y)-p_{0}^{\prime}(1-y)]dy,$
(46)
where we have integrated by parts for the last term and used
$p_{0}(0)=p_{0}(1)=0$. Define the function
$\textstyle F(x,t)=\int_{0}^{1}K_{1}(x-y,t)w(x,y)dy,$ (47)
then taking the partial derivative with respect to time, integrating by parts
and using $w(x,0)=w(x,1)$ (both equal $-p_{0}(x)$) together with the
periodicity of $K_{1}$, we have
$\textstyle\frac{\partial F}{\partial
t}(x,t)=K_{1}(x,t)[p_{0}^{\prime}(1)-p_{0}^{\prime}(0)]\frac{rx-
x-r}{1+r}+\frac{1}{2}\int_{0}^{1}K_{1}(x-y,t)\frac{\partial^{2}w}{\partial
y^{2}}(x,y)dy.$ (48)
First we assume that $p_{0}^{\prime}(0)\neq p_{0}^{\prime}(1)$. In this case
the above shows that
$\textstyle\mathbb{E}_{f_{X}}[f(x,t)]-f_{X}(x)=\frac{rx-
x-r}{1+r}[p_{0}^{\prime}(1)-p_{0}^{\prime}(0)]\int_{0}^{t}K_{1}(x,s)ds+\mathcal{O}(t),$
(49)
where the $\mathcal{O}(t)$ is uniform in $x$. Using the above asymptotics for
$K_{1}(x,t)$ and the dominated convergence theorem, it follows that
$\begin{split}&\textstyle\int_{0}^{1}\big{\\{}\mathbb{E}_{f_{X}}[f(x,t)]-f_{X}(x)\big{\\}}^{2}dx\\\
&\sim_{t\downarrow
0}\textstyle\frac{[p_{0}^{\prime}(1)-p_{0}^{\prime}(0)]^{2}}{2\pi(1+r)^{2}}\int_{0}^{1}(rx-
x-r)^{2}\Big{[}\int_{0}^{t}\frac{\exp\big{(}-\frac{x^{2}}{2s}\big{)}}{\sqrt{s}}+\frac{\exp\big{(}-\frac{(x-1)^{2}}{2s}\big{)}}{\sqrt{s}}ds\Big{]}^{2}dx.\end{split}$
(50)
Let
$\tau(x)=\exp(-x^{2})-\sqrt{\pi}\left|x\right|\mathrm{erfc}(\left|x\right|)$;
then we can evaluate the integral in the square brackets in terms of $\tau$ to
yield
$\displaystyle\textstyle\int_{0}^{1}\big{\\{}\mathbb{E}_{f_{X}}[f(x,t)]-f_{X}(x)\big{\\}}^{2}dx$
(51) $\displaystyle\textstyle\sim_{t\downarrow
0}t\Big{\\{}\frac{2[p_{0}^{\prime}(1)-p_{0}^{\prime}(0)]^{2}}{\pi(1+r)^{2}}\int_{0}^{1}(rx-
x-r)^{2}\big{[}\tau(\frac{x}{\sqrt{2t}})+\tau(\frac{x-1}{\sqrt{2t}})\big{]}^{2}dx\Big{\\}}$
(52) $\displaystyle\textstyle\sim_{t\downarrow
0}t^{3/2}\Big{\\{}\frac{2[p_{0}^{\prime}(1)-p_{0}^{\prime}(0)]^{2}}{\pi(1+r)^{2}}(r^{2}+1)\sqrt{2}\int_{0}^{\infty}\tau(y)^{2}dy\Big{\\}}.$
(53)
To finish the proof in this case, we use the identity
$\textstyle\int_{0}^{\infty}\tau(y)^{2}dy=\frac{1}{3}(\sqrt{2}-1)\sqrt{\pi}.$
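This identity can be checked numerically. The following minimal Python sketch (our aside, not part of the proof; it assumes SciPy is available) compares a quadrature of $\tau^{2}$ against the closed form.

```python
# Numerical sanity check of the identity used above, with
# tau(x) = exp(-x^2) - sqrt(pi) * |x| * erfc(|x|).
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

def tau(x):
    return np.exp(-x**2) - np.sqrt(np.pi) * np.abs(x) * erfc(np.abs(x))

numeric, _ = quad(lambda y: tau(y) ** 2, 0, np.inf)
closed_form = (np.sqrt(2) - 1) * np.sqrt(np.pi) / 3
print(numeric, closed_form)  # the two values agree to quadrature accuracy
```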
Next suppose that $p_{0}^{\prime}(0)=p_{0}^{\prime}(1)$. In this case we have
$\displaystyle\textstyle\frac{\partial F}{\partial t}(x,t)$
$\displaystyle=\textstyle\frac{1}{2}\int_{0}^{1}K_{1}(x-y,t)\frac{\partial^{2}w}{\partial
y^{2}}(x,y)dy$ (54)
$\displaystyle=\textstyle\frac{1}{2}\frac{\partial^{2}w}{\partial
y^{2}}(x,x)+U(x,t),$ (55)
for some bounded function $U(x,t)$ which converges to $0$ as $t\downarrow 0$
for almost all $x\in[0,1]$. It follows that
$\textstyle F(x,t)=t\frac{1}{2}\frac{\partial^{2}w}{\partial
y^{2}}(x,x)+t\tilde{F}(x,t),$ (56)
for some bounded function $\tilde{F}(x,t)$ which converges to $0$ as
$t\downarrow 0$ for almost all $x\in[0,1]$. Similarly, we have
$\textstyle\frac{t(1-r)}{1+r}\int_{0}^{1}K_{1}(x-y,t)[p_{0}^{\prime}(y)-p_{0}^{\prime}(1-y)]dy=\frac{t(1-r)}{1+r}[p_{0}^{\prime}(x)-p_{0}^{\prime}(1-x)]+tV(x,t),$
(57)
for some bounded function $V(x,t)$ which converges to $0$ as $t\downarrow 0$
for almost all $x\in[0,1]$. It follows from (46) that
$\textstyle\mathbb{E}_{f_{X}}[f(x,t)]-f_{X}(x)=t\Big{\\{}\frac{1}{2}\frac{\partial^{2}w}{\partial
y^{2}}(x,x)+\frac{(1-r)}{1+r}[p_{0}^{\prime}(x)-p_{0}^{\prime}(1-x)]\Big{\\}}+tW(x,t),$
(58)
for some bounded function $W(x,t)$ which converges to $0$ as $t\downarrow 0$
for almost all $x\in[0,1]$. But we have
$\textstyle\frac{\partial^{2}w}{\partial y^{2}}(x,x)=f_{X}''(x)-2\frac{1-r}{1+r}[p_{0}^{\prime}(x)-p_{0}^{\prime}(1-x)].$
The dominated convergence theorem then implies that
$\textstyle\int_{0}^{1}\{\mathbb{E}_{f_{X}}[f(x,t)]-f_{X}(x)\}^{2}dx\sim_{t\downarrow 0}t^{2}\int_{0}^{1}\frac{1}{4}[f_{X}''(x)]^{2}dx.$ (59)
∎
## Appendix C Four Corners Matrix and Proof of Theorem 7
The ‘Four Corners Matrix’ (26) is a non-symmetric example of a ‘tridiagonal
Toeplitz matrix with four perturbed corners’ [70, 64]. Although we do not
pursue it further, one can also show (by extending the techniques of [64])
that every function of (26) is the sum of (i) a Toeplitz part, which can be
thought of as the solution without boundary conditions; and (ii) a Hankel
part, which is precisely the correction due to the boundary conditions. Exact
and explicit formulas for the eigenvalues and eigenvectors are available, and
we will use these to prove Theorem 7.
There is a unique zero eigenvalue, corresponding to the stationary density as
$t\rightarrow\infty$. The stationary density is an affine function in the
continuous PDE setting. In the discrete setting the components of the
stationary eigenvector $\bm{v}$ are equally spaced, in the sense that
$v_{i}-v_{i+1}=v_{j}-v_{j+1}=\textrm{constant}$ for all $i,j$. All non-zero
eigenvalues of $\mathbf{A}$ are positive and we are in the setting of [70,
Theorem 3.2 (i)]. In the case that $r\neq 1$, we can group the spectral data
into two classes with eigenvalues:
$\lambda_{k}=2-2\cos\theta_{k},\qquad k=1,\ldots,m,$
where
$\theta_{k}=\begin{cases}k\frac{2\pi}{m}&\mbox{ if }1\leq
k\leq\left\lfloor\frac{m-1}{2}\right\rfloor\\\
(k-\left\lfloor\frac{m-1}{2}\right\rfloor-1)\frac{2\pi}{m+1}&\mbox{ if
}\left\lfloor\frac{m-1}{2}\right\rfloor+1\leq k\leq m.\end{cases}$
The zero eigenvalue, when $k=\left\lfloor\frac{m-1}{2}\right\rfloor+1$, has
already been discussed. Other eigenvalues correspond to eigenvectors with
components (listed via subscripts)
$\begin{cases}v^{k}_{j}=r\sin((j-1)\theta_{k})-\sin(j\theta_{k})&\mbox{ if
}1\leq k\leq\left\lfloor\frac{m-1}{2}\right\rfloor\\\
w^{k-\lfloor\frac{m-1}{2}\rfloor-1}_{j}=\sin(j\theta_{k})&\mbox{ if
}\left\lfloor\frac{m-1}{2}\right\rfloor+2\leq k\leq m.\end{cases}$ (60)
Some properties of the discrete model and its links to the continuous model
are:
* •
All eigenvalues of the Four Corners Matrix are purely real, as are the
eigenvalues of the operator in the continuous model. This is perhaps
surprising since the matrix is not symmetric and the operator is not
self-adjoint.
* •
The Four Corners Matrix $\mathbf{A}$ is diagonalizable. In contrast, the
operator for the continuous PDE is not diagonalizable; instead, the analogue
of the Jordan normal form from linear algebra applies to the operator.
Despite this, the following still hold:
1. 1.
The eigenvalues of the discrete model matrix $\mathbf{A}$ converge to those of
the continuous model (including algebraic multiplicities). This holds, for
example, in the Attouch–Wets topology; the convergence is locally uniform.
2. 2.
The eigenvectors converge to the generalized eigenfunctions of the continuous
operator. Letting $j=\lfloor(m+1)x\rfloor$ we have ($k\neq 0$)
$\displaystyle\lim_{m\rightarrow\infty}w^{k}_{j}=\sin(2\pi kx)$
$\displaystyle\lim_{m\rightarrow\infty}\frac{m}{4\pi^{2}k^{2}}\big{[}(r-1)w^{k}_{j}-v^{k}_{j}\big{]}=\phi_{k}(x).$
We prove Theorem 7 by invoking the celebrated Lax Equivalence Theorem, which
states that ‘stability and consistency imply convergence’ [52]. We will take
consistency for granted. Typically when proofs in the literature use the Lax
Equivalence Theorem, it is also taken for granted that the PDE is well-posed.
Fortunately, we have already established that the PDE is indeed well-posed in
Theorem 1. It remains only to show stability. Even though the matrix
$\mathbf{A}$ has non-negative eigenvalues, this does not immediately imply
stability of the backward Euler method since $\mathbf{A}$ is not normal, i.e.
$\mathbf{A}$ does not commute with $\mathbf{A}^{*}$. We establish stability
for our problem by showing that bounds for the continuous model in Proposition
1 have corresponding bounds in the discrete model as follows, where we use a
subscript $l^{p}$ to denote the operator $l^{p}$ norm. In particular, Lemma 3
shows the discrete model is stable in the maximum norm. Convergence then
follows from the Lax Theorem.
###### Lemma 3 (Stability and Well-posedness of Discrete Approximation).
Let $m\geq 2$, then the backward Euler method (27) preserves probability
vectors and satisfies the bound
$\left\|\left(\mathbf{I}+\mathbf{A}\right)^{-K}\right\|_{l^{\infty}}\leq\max\Big{\\{}\frac{2r}{1+r},\frac{2}{1+r}\Big{\\}},\quad\forall
K\in\mathbb{N}.$
As a result of this lemma, we also gain stability in any $p$–norm via
interpolation.
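The bound can also be probed numerically. Since (26) is defined earlier in the paper and not reproduced here, the sketch below (our aside) uses a stand-in: for $r=1$ the description above is consistent with $\mathbf{A}$ being the path-graph Laplacian (zero column sums, positive diagonal, negative off-diagonals, constant kernel vector), for which the Lemma 3 bound equals $1$.

```python
# Numerical probe of Lemma 3 (a sketch with an assumed stand-in for (26)).
# For r = 1 we take A to be the path-graph Laplacian: tridiagonal Toeplitz
# with corner entries adjusted so that every column sums to zero.
import numpy as np

m = 10
A = 2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)
A[0, 0] = 1.0
A[-1, -1] = 1.0

B = np.linalg.inv(np.eye(m) + A)  # one backward Euler step

# Probability vectors are preserved: B is entrywise non-negative and its
# columns sum to one.
assert np.all(B >= -1e-12) and np.allclose(B.sum(axis=0), 1.0)

# The l-infinity operator norm of B^K respects the Lemma 3 bound, which is
# max{2r/(1+r), 2/(1+r)} = 1 when r = 1.
M = np.eye(m)
for K in range(1, 50):
    M = M @ B
    assert np.linalg.norm(M, ord=np.inf) <= 1.0 + 1e-10
```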
###### Proof of Lemma 3.
Since the column sums of $\mathbf{A}$ are zero, it follows that
$\sum_{j=1}^{m}u_{j}^{k+1}=\sum_{j=1}^{m}u_{j}^{k}.$
Hence to prove the first part, it suffices to show that $\bm{u}^{k+1}$ is
non-negative whenever $\bm{u}^{k}$ is. Suppose this were false, and let
$j\in\{1,\dots,m\}$ be such that $u^{k+1}_{j}<0$ is the smallest component of
$\bm{u}^{k+1}$. We have that
$u_{j}^{k}=(1+\mathbf{A}_{jj})u_{j}^{k+1}+\sum_{l\neq
j}\mathbf{A}_{j,l}u_{l}^{k+1}.$
By the choice of $j$, and since $\mathbf{A}_{jj}$ is positive, the
off-diagonal entries of $\mathbf{A}$ are negative and the $j$th column of
$\mathbf{A}$ sums to zero, it follows that
$\mathbf{A}_{jj}u_{j}^{k+1}+\sum_{l\neq j}\mathbf{A}_{j,l}u_{l}^{k+1}\leq 0.$
But this then implies that $u_{j}^{k}\leq u_{j}^{k+1}$, the required
contradiction.
To prove the second part, let $\bm{u}\in\mathbb{R}_{\geq 0}^{m}$ be any
initial vector with $\|\bm{u}\|_{\infty}\leq 1$ and let $\mathbbm{1}$ denote
the vector with $1$ in all entries. The eigenvector in the kernel of
$\mathbf{A}$ is a scalar multiple of $w^{0}$, defined by
$\textstyle w^{0}_{j}=1+\frac{1-r}{1+rm}(j-1).$
Define the vector $x$ via
$\frac{1-r}{1+r}\,x=\mathbbm{1}-\frac{2(1+rm)}{(m+1)(r+1)}\,w^{0}$. This has components
$\textstyle x_{j}=\frac{m+1-2j}{m+1}.$
Extend this vector to have $x_{0}=0$, then an application of the discrete
Fourier transform implies that we can write for $j\neq 0$
$\textstyle x_{j}=\frac{1}{m+1}\sum_{k=1}^{m}G_{m}(k)\exp\left(\frac{2\pi
ikj}{m+1}\right),$
where
$\textstyle G_{m}(k)=\frac{\exp(4\pi ik/(m+1))-1}{(\exp(2\pi
ik/(m+1))-1)^{2}}=\frac{-i}{2}\frac{\sin(2\pi
k/(m+1))}{\sin^{2}(k\pi/(m+1))}.$
Hence we have that
$\displaystyle x_{j}$
$\displaystyle=\textstyle\frac{1}{m+1}\sum_{k=1}^{m-\lfloor\frac{m-1}{2}\rfloor-1}\left[G_{m}(k)\exp\left(\frac{2\pi
ikj}{m+1}\right)-\overline{G_{m}(k)}\exp\left(-\frac{2\pi
ikj}{m+1}\right)\right]$
$\displaystyle=\textstyle\frac{1}{m+1}\sum_{k=1}^{m-\lfloor\frac{m-1}{2}\rfloor-1}\frac{\sin(2\pi
k/(m+1))}{\sin^{2}(k\pi/(m+1))}\sin\left(\frac{2\pi kj}{m+1}\right).$
This implies that we can write $1$ as a linear combination of eigenvectors:
$\textstyle
1=\frac{2(1+rm)w^{0}_{j}}{(m+1)(1+r)}+\frac{1-r}{(m+1)(1+r)}\sum_{k=1}^{m-\lfloor\frac{m-1}{2}\rfloor-1}\frac{\sin(2\pi
k/(m+1))}{\sin^{2}(k\pi/(m+1))}w^{k}_{j}.$
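As a quick sanity check on the discrete Fourier identity above, the following sketch (our numerical aside, not part of the proof) evaluates $G_{m}(k)$ and compares the reconstructed $x_{j}$ with $(m+1-2j)/(m+1)$.

```python
# Verify x_j = (1/(m+1)) * sum_{k=1}^{m} G_m(k) exp(2*pi*i*k*j/(m+1))
# against x_j = (m+1-2j)/(m+1) for j = 1..m.
import numpy as np

m = 11
jj = np.arange(1, m + 1)  # component index j
kk = np.arange(1, m + 1)  # frequency index k

G = (np.exp(4j * np.pi * kk / (m + 1)) - 1) \
    / (np.exp(2j * np.pi * kk / (m + 1)) - 1) ** 2
E = np.exp(2j * np.pi * np.outer(jj, kk) / (m + 1))  # E[j, k]
x_dft = (E @ G) / (m + 1)
x_exact = (m + 1 - 2 * jj) / (m + 1)
print(np.max(np.abs(x_dft - x_exact)))  # ~1e-15: the identity holds
```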
Define $\bm{Q}^{K}=\left(\mathbf{I}+\mathbf{A}\right)^{-K}\mathbbm{1}$, and
$\bm{q}^{K}=\left(\mathbf{I}+\mathbf{A}\right)^{-K}\bm{u}$. In particular,
using the eigenvalue decomposition we have
$\displaystyle Q_{j}^{K}$
$\displaystyle=\textstyle\frac{2(1+rm)w^{0}_{j}}{(m+1)(1+r)}$
$\displaystyle+\frac{1-r}{(m+1)(1+r)}\textstyle\sum_{k=1}^{m-\lfloor\frac{m-1}{2}\rfloor-1}\frac{\sin(2\pi
k/(m+1))}{\sin^{2}(k\pi/(m+1))}\sin\left(\frac{2\pi
kj}{m+1}\right)\left(3-2\cos\left(\frac{2\pi k}{m+1}\right)\right)^{-K}.$
Using similar arguments to the first part of the proof, it is easy to prove
the Discrete Maximum Principle:
$\sup_{K\in\mathbb{N}\cup\\{0\\}}\left\|\bm{Q}^{K}\right\|_{\infty}=\max\left\\{\sup_{K\in\mathbb{N}\cup\\{0\\}}Q_{1}^{K},\sup_{K\in\mathbb{N}\cup\\{0\\}}Q_{m}^{K},1\right\\}.$
Explicitly, we have that
$\displaystyle Q_{1}^{K}$ $\displaystyle=\textstyle\frac{2(1+rm)}{(m+1)(1+r)}$
$\displaystyle+\frac{1-r}{(m+1)(1+r)}\textstyle\sum_{k=1}^{m-\lfloor\frac{m-1}{2}\rfloor-1}\frac{\sin^{2}(2\pi
k/(m+1))}{\sin^{2}(k\pi/(m+1))}\left(3-2\cos\left(\frac{2\pi
k}{m+1}\right)\right)^{-K}.$
This is monotonic in $K$ with limit $2(1+rm)/[(m+1)(1+r)]$. Similarly, we have
$\displaystyle Q_{m}^{K}$ $\displaystyle=\textstyle\frac{2(m+r)}{(m+1)(1+r)}$
$\displaystyle-\frac{1-r}{(m+1)(1+r)}\textstyle\sum_{k=1}^{m-\lfloor\frac{m-1}{2}\rfloor-1}\frac{\sin^{2}(2\pi
k/(m+1))}{\sin^{2}(k\pi/(m+1))}\left(3-2\cos\left(\frac{2\pi
k}{m+1}\right)\right)^{-K},$
which is monotonic in $K$ with limit $2(m+r)/[(m+1)(1+r)]$. Now, we must have
that each entry of $\bm{Q}^{K}\pm\bm{q}^{K}$ is non-negative since this is
true for $K=0$. It follows that
$\left\|\bm{q}^{K}\right\|_{\infty}\leq\left\|\bm{Q}^{K}\right\|_{\infty}=\max\left\\{\frac{2(1+rm)}{(m+1)(1+r)},\frac{2(m+r)}{(m+1)(1+r)}\right\\}.$
Since the $l^{\infty}$ operator norm of a real matrix is independent of
whether the underlying field is $\mathbb{R}$ or $\mathbb{C}$, the lemma now
follows by taking suprema over $m$. ∎
## References
* [1] N. Agarwal and N. R. Aluru. A data-driven stochastic collocation approach for uncertainty quantification in MEMS. International Journal for Numerical Methods in Engineering, 83(5):575–597, 2010.
* [2] A. H. Al-Mohy and N. J. Higham. A new scaling and squaring algorithm for the matrix exponential. SIAM Journal on Matrix Analysis and Applications, 31(3):970–989, 2010.
* [3] G. D. Birkhoff. Boundary value and expansion problems of ordinary linear differential equations. Transactions of the American Mathematical Society, 9(4):373–395, 1908.
* [4] C. Bolley and M. Crouzeix. Conservation de la positivité lors de la discrétisation des problèmes d’évolution paraboliques. RAIRO. Analyse numérique, 12(3):237–245, 1978.
* [5] Z. I. Botev, J. F. Grotowski, and D. P. Kroese. Kernel density estimation via diffusion. The Annals of Statistics, 38(5):2916–2957, 2010.
* [6] S. X. Chen. Beta kernel estimators for density functions. Computational Statistics & Data Analysis, 31(2):131–145, 1999.
* [7] E. Chevallier, E. Kalunga, and J. Angulo. Kernel density estimation on spaces of Gaussian distributions and symmetric positive definite matrices. SIAM Journal on Imaging Sciences, 10(1):191–215, 2017.
* [8] E. A. Coddington and N. Levinson. Theory of ordinary differential equations. Tata McGraw-Hill Education, 1955.
* [9] M. J. Colbrook. Extending the unified transform: curvilinear polygons and variable coefficient PDEs. IMA Journal of Numerical Analysis, 40(2):976–1004, 2020.
* [10] M. J. Colbrook and L. J. Ayton. A spectral collocation method for acoustic scattering by multiple elastic plates. Journal of Sound and Vibration, 461:114904, 2019.
* [11] M. J. Colbrook, L. J. Ayton, and A. S. Fokas. The unified transform for mixed boundary condition problems in unbounded domains. Proceedings of the Royal Society A, 475(2222):20180605, 2019.
* [12] M. J. Colbrook, N. Flyer, and B. Fornberg. On the Fokas method for the solution of elliptic problems in both convex and non-convex polygonal domains. Journal of Computational Physics, 374:996–1016, 2018.
* [13] M. J. Colbrook, A. S. Fokas, and P. Hashemzadeh. A hybrid analytical-numerical technique for elliptic PDEs. SIAM Journal on Scientific Computing, 41(2):A1066–A1090, 2019.
* [14] J. Dai and S. Sperlich. Simple and effective boundary correction for kernel densities and regression with an application to the world income and Engel curve estimation. Computational Statistics & Data Analysis, 54(11):2487–2497, 2010.
* [15] F. P. J. de Barros, M. J. Colbrook, and A. S. Fokas. A hybrid analytical-numerical method for solving advection-dispersion problems on a half-line. International Journal of Heat and Mass Transfer, 139:482–491, 2019\.
* [16] B. Deconinck, Q. Guo, E. Shlizerman, and V. Vasan. Fokas’s uniform transform method for linear systems. arXiv preprint arXiv:1705.00358, 2017.
* [17] B. Deconinck, B. Pelloni, and N. E. Sheils. Non-steady-state heat conduction in composite walls. Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 470(2165):20130605, 2014.
* [18] B. Deconinck, T. Trogdon, and V. Vasan. The method of Fokas for solving linear partial differential equations. SIAM Review, 56(1):159–186, 2014.
* [19] D. Devroye, J. Beirlant, R. Cao, R. Fraiman, P. Hall, M. C. Jones, G. Lugosi, E. Mammen, J. S. Marron, C. Sánchez-Sellero, J. de Una, F. Udina, and L. Devroye. Universal smoothing factor selection in density estimation: theory and practice. Test, 6(2):223–320, 1997.
* [20] L. Dümbgen, A. Hüsler, and K. Rufibach. Active set and EM algorithms for log-concave densities based on complete and censored data. arXiv preprint arXiv:0707.4643, 2007.
* [21] L. Dümbgen and K. Rufibach. logcondens: Computations related to univariate log-concave density estimation. Journal of Statistical Software, 39, 2010.
* [22] L. Dümbgen and K. Rufibach. logcondens: Computations related to univariate log-concave density estimation. Journal of Statistical Software, 39(6):1–28, 2011.
* [23] L. Dümbgen, K. Rufibach, and D. Schuhmacher. Maximum-likelihood estimation of a log-concave density based on censored data. Electronic Journal of statistics, 8(1):1405–1437, 2014.
* [24] L. C. Evans. Partial differential equations. American Mathematical Society, 2010.
* [25] A. S. Fokas. A unified transform method for solving linear and certain nonlinear PDEs. In Proc. R. Soc. A, volume 453, pages 1411–1443. The Royal Society, 1997.
* [26] A. S. Fokas. Integrable nonlinear evolution equations on the half-line. Communications in mathematical physics, 230(1):1–39, 2002.
* [27] A. S. Fokas. A unified approach to boundary value problems. SIAM, 2008.
* [28] A. S. Fokas and B. Pelloni. A transform method for linear evolution PDEs on a finite interval. IMA journal of applied mathematics, 70(4):564–587, 2005.
* [29] G. Geenens. Probit transformation for kernel density estimation on the unit interval. Journal of the American Statistical Association, 109(505):346–358, 2014.
* [30] D. Gilbarg and N. S. Trudinger. Elliptic partial differential equations of second order. Springer, 2015.
* [31] N. J. Higham. The scaling and squaring method for the matrix exponential revisited. SIAM Journal on Matrix Analysis and Applications, 26(4):1179–1193, 2005.
* [32] Y. Hu and C. Scarrott. evmix: An R package for extreme value mixture modeling, threshold estimation and boundary corrected kernel density estimation. Journal of Statistical Software, 84(5):1–27, 2018.
* [33] M. C. Jones and D. A. Henderson. Kernel-type density estimation on the unit interval. Biometrika, 94(4):977–984, 2007.
* [34] M. C. Jones, J. S. Marron, and S. J. Sheather. Progress in data-based bandwidth selection for kernel density estimation. Department of Statistics [University of North Carolina at Chapel Hill], 1992.
* [35] M. C. Jones, J. S. Marron, and S. J. Sheather. A brief survey of bandwidth selection for density estimation. Journal of the american statistical association, 91(433):401–407, 1996.
* [36] R. Kafri, J. Levy, M. B. Ginzberg, S. Oh, G. Lahav, and M. W. Kirschner. Dynamics extracted from fixed cells reveal feedback linking cell growth to cell cycle. Nature, 494(7438):480–483, 2013.
* [37] R. J. Karunamuni and T. Alberts. On boundary correction in kernel density estimation. Statistical Methodology, 2(3):191–212, 2005.
* [38] K. Kuritz, D. Stöhr, D. S. Maichl, N. Pollak, M. Rehm, and F. Allgöwer. Reconstructing temporal and spatial dynamics from single-cell pseudotime using prior knowledge of real scale cell densities. Nature Scientific Reports, 10(1):3619, 2020.
* [39] K. Kuritz, D. Stöhr, N. Pollak, and F. Allgöwer. On the relationship between cell cycle analysis with ergodic principles and age-structured cell population models. Journal of Theoretical Biology, 414:91–102, 2017.
* [40] J. Locker. Spectral theory of non-self-adjoint two-point differential operators. American Mathematical Soc., 2000.
* [41] M. Machover. A generalized eigenfunction expansion of the Green’s function. Proceedings of the American Mathematical Society, 16(3):348–352, 1965.
* [42] P. Malec and M. Schienle. Nonparametric kernel density estimation near the boundary. Computational Statistics & Data Analysis, 72:57–76, 2014.
* [43] D. Mantzavinos and A. S. Fokas. The unified method for the heat equation: I. non-separable boundary conditions and non-local constraints in one dimension. European Journal of Applied Mathematics, 24(6):857–886, 2013.
* [44] J. S. Marron and D. Ruppert. Transformations to reduce boundary bias in kernel density estimation. Journal of the Royal Statistical Society. Series B (Methodological), pages 653–671, 1994.
* [45] R. Mehmood, G. Zhang, R. Bie, H. Dawood, and H. Ahmad. Clustering by fast search and find of density peaks via heat diffusion. Neurocomputing, 208:210–217, 2016.
* [46] R. Mennicken and M. Möller. Non-self-adjoint boundary eigenvalue problems, volume 192. Elsevier, 2003.
* [47] P. D. Miller and D. A. Smith. The diffusion equation with nonlocal data. Journal of Mathematical Analysis and Applications, 466(2):1119–1143, 2018.
* [48] C. Moler and C. Van Loan. Nineteen dubious ways to compute the exponential of a matrix, twenty-five years later. SIAM review, 45(1):3–49, 2003.
* [49] M. A. Naimark. Linear Differential Operators. Harrap, London, 1967. Translated by E. R. Dawson from the Russian original, 1952.
* [50] P. Olver, N. E. Sheils, and D. Smith. Revivals and fractalisation in the linear free space Schrödinger equation. Quarterly of Applied Mathematics, 2019.
* [51] B. Pelloni and D. A. Smith. Nonlocal and multipoint boundary value problems for linear evolution equations. Studies in Applied Mathematics, 141(1):46–88, 2018.
* [52] R. D. Richtmyer and K. W. Morton. Difference methods for initial-value problems. Krieger Publishing Co., Malabar, FL, 2nd edition, 1994.
* [53] W. Saelens, R. Cannoodt, H. Todorov, and Y. Saeys. A comparison of single-cell trajectory inference methods. Nature Biotechnology, 37:547–554, 2019.
* [54] R. J. Samworth. Recent progress in log-concave density estimation. arXiv preprint arXiv:1709.03154, 2017.
* [55] D. Santhosh and V. V. Srinivas. Bivariate frequency analysis of floods using a diffusion based kernel density estimator. Water Resources Research, 49(12):8328–8343, 2013.
* [56] O. Scaillet. Density estimation using inverse and reciprocal inverse gaussian kernels. Nonparametric statistics, 16(1-2):217–226, 2004.
* [57] S. J. Sheather and M. C. Jones. A reliable data-based bandwidth selection method for kernel density estimation. Journal of the Royal Statistical Society: Series B (Methodological), 53(3):683–690, 1991.
* [58] N. E. Sheils. Interface Problems using the Fokas Method. PhD thesis, 2015.
* [59] N. E. Sheils and B. Deconinck. Heat conduction on the ring: Interface problems with periodic boundary conditions. Applied Mathematics Letters, 37:107–111, 2014.
* [60] N. E. Sheils and B. Deconinck. Interface problems for dispersive equations. Studies in Applied Mathematics, 134(3):253–275, 2015.
* [61] N. E. Sheils and B. Deconinck. The time-dependent Schrödinger equation with piecewise constant potentials. European Journal of Applied Mathematics, 31(1):57–83, 2020.
* [62] B. W. Silverman. Density estimation for statistics and data analysis. Routledge, 2018.
* [63] J. S. Simonoff. Smoothing methods in Statistics. Springer Science & Business Media, 2012.
* [64] G. Strang and S. MacNamara. Functions of difference matrices are Toeplitz plus Hankel. SIAM Review, 56(3):525–546, 2014.
* [65] J. Tamarkin. Some general problems of the theory of ordinary linear differential equations and expansion of an arbitrary function in series of fundamental functions. Mathematische Zeitschrift, 27(1):1–54, 1928.
* [66] T. Trogdon and B. Deconinck. The solution of linear constant-coefficient evolution PDEs with periodic boundary conditions. Applicable Analysis, 91(3):529–544, 2012.
* [67] A. B. Tsybakov. Introduction to nonparametric estimation. Springer Science & Business Media, 2008.
* [68] M. P. Wand and M. C. Jones. Kernel smoothing. Chapman and Hall/CRC, 1994.
* [69] X. Xu, Z. Yan, and S. Xu. Estimating wind speed probability distribution by diffusion-based kernel density method. Electric Power Systems Research, 121:28–37, 2015.
* [70] W.-C. Yueh and S. S. Cheng. Explicit eigenvalues and inverses of tridiagonal Toeplitz matrices with four perturbed corners. ANZIAM Journal, 49(03):361–387, 2008.
# Multimodal sensor fusion in the latent representation space

Robert J. Piechocki, Xiaoyang Wang, Mohammud J. Bocus

School of Computer Science, Electrical and Electronic Engineering, and Engineering Maths, University of Bristol, Bristol, BS8 1UB, UK. Email: <EMAIL_ADDRESS>
###### Abstract
A new method for multimodal sensor fusion is introduced. The technique relies
on a two-stage process. In the first stage, a multimodal generative model is
constructed from unlabelled training data. In the second stage, the generative
model serves as a reconstruction prior and the search manifold for the sensor
fusion tasks. The method also handles cases where observations are accessed
only via subsampling, i.e., compressed sensing. We demonstrate the method's
effectiveness and excellent performance on a range of multimodal fusion
experiments, such as multisensory classification, denoising, and recovery
from subsampled observations.
## Introduction
_Controlled hallucination_ [1] is an evocative term referring to the Bayesian
brain hypothesis [2]. It posits that perception is not merely a function of
sensory information processing, capturing the world as is. Instead, the brain
is a predictive machine: it attempts to infer the causes of sensory inputs.
To achieve this, the brain builds and continually refines its world model. The
world model serves as a prior and, when combined with the sensory signals,
produces the best guess for their causes. Hallucination (uncontrolled) occurs
when the sensory inputs cannot be reconciled with, or contradict, the prior
world model. This might occur in our model, and when it does, it manifests
itself at the fusion stage with the stochastic gradient descent procedure
getting trapped in a local minimum. The method presented in this paper is
somewhat inspired by the Bayesian brain hypothesis, but it also builds upon
multimodal generative modelling and deep compressed sensing.
Multimodal data fusion attracts academic and industrial interests alike [3]
and plays a vital role in several applications. Automated driving is arguably
the most challenging industrial domain [4]. Automated vehicles use a plethora
of sensors: Lidar, mmWave radar, video and ultrasonic, and attempt to perform
some form of sensor fusion for environmental perception and precise
localization. A high-quality final fusion estimate is a prerequisite for
safe driving. Amongst other application areas, eHealth and Ambient Assisted
Living (AAL) deserve a notable mention. These new paradigms are contingent
on gathering information from various sensors around the home to monitor and
track the movement signatures of people. The aim is to build a long-term
behavioral sensing machine which also affords privacy. Such platforms rely on
an array of environmental and wearable sensors, with sensor fusion being one
of the key challenges.
In this contribution, we focus on a one-time snapshot problem (i.e., we are not
building temporal structures). However, we explore the problem of multimodal
sensor fusion from a new perspective, essentially a Bayesian viewpoint. The
concept is depicted in Fig. 1, alongside the two main groups of approaches to
sensor fusion. Traditionally, sensor fusion for classification tasks has been
performed at the decision level, as in Fig. 1(a). Assuming that conditional
independence holds, a pointwise product of the final pmfs (probability mass
functions) across all modalities is taken. Feature fusion, as depicted in
Fig. 1(b), has become very popular with the advent of deep neural networks
[3], and can produce very good results. Fig. 1(c) shows our technique during
the fusion stage (Stage 2). Blue arrows indicate the direction of
backpropagation gradient flow during fusion.
Figure 1: Multimodal Sensor Fusion: (a) Decision fusion, (b) Feature fusion,
(c) Our technique: fusion in the latent representation with optional
compressed sensing measurements; $F$ features, $p(z)$ prior model, $\bf{G}$
generators, $X$ complete data, $Y$ subsampled data. For clarity $M=2$
modalities are shown, the concept generalises to any $M$.
Contributions:
* •
A novel method for multimodal sensor fusion is presented. The method attempts
to find the best estimate (_maximum a posteriori_) for the causes of observed
data. The estimate is then used to perform specific downstream fusion tasks.
* •
The method can fuse the modalities under lossy data conditions, i.e., when the
data is subsampled, lost and/or noisy. Such phenomena occur in real-world
situations such as the wireless transmission of information, or intentional
subsampling to expedite the measurement (e.g., rapid MRI imaging and radar).
* •
It can leverage one modality against another. A strong modality can be used to
aid the recovery of another modality that is lossy or less informative (a weak
modality). This is referred to as asymmetric Compressed Sensing.
## Related Work
In this section, we review the state-of-the-art in three areas directly
relevant to our contribution: multimodal generative modeling, sensor fusion,
and compressed sensing. One of the main aims of Multimodal Variational
Autoencoders (MVAEs) is to learn shared representation across different data
types in a fully self-supervised manner, thus avoiding the need to label a
huge amount of data, which is time-consuming and expensive [5]. It is indeed a
challenge to infer the low-dimensional joint representation from multiple
modalities, which can ultimately be used in downstream tasks such as self-
supervised clustering or classification. This is because the modalities may
vastly differ in characteristics, including dimensionality, data distribution,
and sparsity [6]. Recently, several methods have been proposed to combine
multimodal data using generative models such as Variational Autoencoders
(VAEs) [7, 8, 5, 9, 10, 11]. These methods aim to learn a joint distribution
in the latent space via inference networks and try to reconstruct modality-
specific data, even when one modality is missing. In these works, a modality
can refer to natural images, text, captions, labels or visual and non-visual
attributes of a person. JMVAE (Joint Multimodal Variational Autoencoder) [9]
makes use of a joint inference network to learn the interaction between two
modalities; it addresses the issue of a missing modality by training an
individual (unimodal) inference network for each modality as well as a bimodal
inference network to learn the joint posterior, based on the product-of-
experts (PoE). It consequently minimizes the distance between the unimodal and
multimodal latent distributions. On the other hand, MVAE [7], which is also
based on PoE, considers only a partial combination of observed modalities,
thereby reducing the number of parameters and improving the computational
efficiency. Reference [8] uses the Mixture-of-Experts (MoE) approach to learn
the shared representation across multiple modalities. The latter two models
essentially differ in their choices of joint posterior approximation
functions. MoPoE (Mixture-of-Products-of-Experts)-VAE [5] aims to combine the
advantages of both approaches, MoE and PoE, without incurring significant
trade-offs. DMVAE (Disentangled Multimodal VAE) [10] uses a disentangled VAE
approach to split up the private and shared (using PoE) latent spaces of
multiple modalities, where the latent factor may be of both continuous and
discrete nature. CADA (Cross- and Distribution Aligned)-VAE [11] uses a cross-
modal embedding framework to learn a latent representation from image features
and classes (labels) using aligned VAEs optimized with cross- and
distribution- alignment objectives.
In terms of multimodal/sensor fusion for human activity sensing using Radio-
Frequency (RF), inertial and/or vision sensors, most works have considered
either decision-level fusion or feature-level fusion. For instance, the work
in [12] performs multimodal fusion at the decision level to combine the
benefits of WiFi and vision-based sensors using a hybrid deep neural network
(DNN) model to achieve good activity recognition accuracy for 3 activities.
The model essentially consists of a WiFi sensing module (dedicated
Convolutional Neural Network (CNN) architecture) and a vision sensing module
(based on the Convolutional 3D model) for processing WiFi and video frames for
unimodal inference, followed by a multimodal fusion module. Multimodal fusion
is performed at the decision level (after both WiFi and vision modules have
made a classification) because this framework is stated to be more flexible
and robust to unimodal failure compared to feature level fusion. Reference
[13] presents a method for activity recognition, which leverages four sensor
modalities, namely, skeleton sequences, inertial and motion capture
measurements and WiFi fingerprints. The fusion of signals is formulated as a
matrix concatenation. The individual signals of different sensor modalities
are transformed and represented as an image. The resulting images are then fed
to a two-dimensional CNN (EfficientNet B2) for classification. The authors of
[14] proposed a multimodal HAR system that leverages WiFi and wearable sensor
modalities to jointly infer human activities. They collect Channel State
Information (CSI) data from a standard WiFi Network Interface Card (NIC),
alongside the user’s local body movements via a wearable Inertial Measurement
Unit (IMU) consisting of an accelerometer, gyroscope, and magnetometer
sensors. They compute the time-variant Mean Doppler Shift (MDS) from the
processed CSI data and magnitude from the inertial data for each sensor of the
IMU. Then, various time and frequency domain features are separately extracted
from the magnitude data and the MDS. The authors apply a feature-level fusion
method which sequentially concatenates feature vectors that belong to the same
activity sample. Finally, supervised machine learning techniques are used to
classify four activities: walking, falling, sitting, and picking up an object
from the floor.
Compared to the aforementioned works [12, 13, 14] which consider supervised
models with feature-level fusion or decision-level fusion, our technique, in
contrast, performs multimodal sensor fusion in the latent representation space
leveraging a self-supervised generative model. Our method is different from
current multimodal generative models such as those proposed in [7, 8, 5, 9] in
the sense that it can handle cases where observations are accessed only via
subsampling (i.e. compressed sensing with significant loss of data and no data
imputation). And crucially our technique attempts to directly compute the MAP
(_maximum a posteriori_) estimate.
The presented method is related to and builds upon Deep Compressed Sensing
(DCS) techniques [15, 16]. DCS, in turn, is inspired by Compressed Sensing (CS)
[17, 18]. In CS, we attempt to solve what appears to be an underdetermined
linear system, yet the solution is possible with an additional prior sparsity
constraint on the signal: minimizing its $L_{0}$ norm. Since $L_{0}$ is
non-convex, $L_{1}$ is used instead to provide a convex relaxation, which also
promotes sparsity and allows for computationally efficient solvers. DCS, in
essence, replaces the $L_{0}$ prior with a low-dimensional manifold, which is
learnable from the data using generative models. Concurrently with DCS, Deep
Image Prior [19] was
proposed. It used un-trained CNNs to solve a range of inverse problems in
computer vision (image inpainting, super-resolution, denoising).
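To make the classical CS baseline concrete, the following minimal sketch (our illustration, not code from [17, 18]) recovers a sparse signal from a random underdetermined linear system via ISTA (iterative soft-thresholding) applied to the $L_{1}$-regularized objective.

```python
# ISTA sketch for L1-regularized compressed sensing:
# minimize 0.5 * ||y - A x||^2 + lam * ||x||_1, with A of size m x n, m < n.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                      # signal dim, measurements, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true                            # noiseless subsampled observations

lam = 0.01
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(2000):
    z = x - (A.T @ (A @ x - y)) / L       # gradient step on the quadratic term
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print(np.linalg.norm(x - x_true))         # small, up to the lam-induced bias
```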
## Methods
Assume a data generative process in which the latent common cause $Z$ gives
rise to $X_{m}$, which in turn produces the observed $Y_{m}$, i.e.
$Z\rightarrow X_{m}\rightarrow Y_{m}$ forms a Markov chain. Here, $X_{m}$ is the full data
pertaining to $m^{th}$ modality, $m\in\\{1,\dots,M\\}$. Crucially, the
modalities collect data simultaneously “observing” the same scene. As an
example, in this work, we consider the different views (obtained via multiple
receivers) from the opportunistic CSI WiFi radar as different modalities. The
variable $Z$ encodes the semantic content of the scene and is typically of
central interest. Furthermore, $X_{m}$ is not accessed directly, but is
observed via a _subsampled_ $Y_{m}$. This is a compressed sensing setup:
$Y_{m}=\chi_{m}(X_{m})$, where $\chi_{m}$ is a deterministic and known
(typically many-to-one) function. The only condition we impose on $\chi_{m}$ is to be
Lipschitz continuous. With the above, the conditional independence between
modalities holds (conditioned on $Z$). Therefore, the joint density factors
as:
$p\left(z,x_{1:M},y_{1:M}\right)=p\left(z\right)\prod_{m=1}^{M}{p(y_{m}|x_{m})p(x_{m}|z)}.$
(1)
The main task in this context is to produce the best guess for latent $Z$, and
possibly, to recover the full signal(s) $X_{m}$, given subsampled data
$Y_{1:M}$. We approach the problem in two stages. First we build a joint model
which approximates equation (1), and will be instantiated as a Multimodal
Variational Autoencoder (M-VAE). More specifically, the M-VAE will provide an
approximation to $p_{\phi_{1:M},\psi_{1:M}}(z,x_{1:M})$, parameterized by deep
neural networks $\\{\phi_{1},\dots,\phi_{M}\\}$,
$\\{\psi_{1},\dots,\psi_{M}\\}$, referred to as _encoders_ and _decoders_ ,
respectively. The trained M-VAE will then be appended with
$p_{\chi_{m}}(y_{m}|x_{m})$ for each modality $m$:
$\\{\chi_{1},\dots,\chi_{M}\\}$ referred to as _samplers_. In the second
stage, we use the trained M-VAE and $\chi_{1:M}$ to facilitate the fusion and
reconstruction tasks. Specifically, our sensor fusion problem amounts to
finding the maximum a posteriori (MAP) estimate $\hat{z}_{MAP}$ of the latent
cause for a given ($i^{th}$) data point $Y_{1:M}=y^{(i)}_{1:M}$:
$\hat{z}_{MAP}=\arg\max_{z}p\left(z|Y_{1:M}=y^{(i)}_{1:M}\right),$ (2)
where,
$p\left(z|Y_{1:M}=y^{(i)}_{1:M}\right)\propto
p\left(z\right)\prod_{m=1}^{M}{\int_{X_{m}}p(Y_{m}=y^{(i)}_{m}|x_{m})p(x_{m}|z)\,dx_{m}}.$
(3)
The above MAP estimation problem is hard, and we will resort to approximations
detailed in the sections below.
### Multimodal VAE
The first task is to build a model of equation (1). As aforementioned, this
will be accomplished in two steps. Firstly, during the training stage we
assume access to full data $X_{1:M}$, therefore training an approximation to
$p_{\phi_{1:M},\psi_{1:M}}(z,x_{1:M})$ is a feasible task. The marginal data
log-likelihood for the multimodal case is:
$\displaystyle\ \log p(x_{1:M})$
$\displaystyle=D_{KL}(q(z|x_{1:M})\,\|\,p(z|x_{1:M}))$ (4)
$\displaystyle+\left[\sum_{m=1}^{M}{\mathbb{E}_{z\sim q(z|x_{1:M})}\log
p(x_{m}|z)}-\mathbb{E}_{z\sim
q(z|x_{1:M})}\log\frac{q(z|x_{1:M})}{p(z)}\right],$ (5)
where $D_{KL}$ is the Kullback–Leibler (KL) divergence. The first summand in
equation (5), i.e. the sum over modalities follows directly from the
conditional independence. And since KL is non-negative, equation (5)
represents the lower bound (also known as Evidence Lower Bound - ELBO) on the
log probability of the data (and its negative is used as the loss for the
M-VAE). There exist a body of work on M-VAEs, the interested reader is
referred to [7, 8, 5, 9] for details and derivation. The key challenge in
training M-VAEs is the construction of variational posterior $q(z|x_{1:M})$.
We dedicate a section in the Supplementary Information document S1 to the
discussion on choices and implications for the approximation of variational
posterior. Briefly, we consider two main cases: a missing data case – i.e.
where particular modality data might be missing
($X_{m}=x_{m}^{(i)}=\emptyset$); and the full data case. The latter is
straightforward and is tackled by enforcing a particular structure of the
encoders. For the former case variational Product-of-Experts (PoE) is used:
$q_{\Phi}(z|x_{1:M})=p(z)\prod_{m=1}^{M}{q_{\phi_{m}}(z|x_{m})}.$ (6)
Should the data be missing for any particular modality,
$q_{\phi_{m}}(z|x_{m})=1$ is assumed. Derivation of equation (6) can be found
in the Supplementary Information document S1.
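For Gaussian experts the product in equation (6) has a closed form: precisions add and means are precision-weighted. A minimal sketch of this standard PoE combination is given below (our illustration with hypothetical per-modality posterior parameters; a missing modality simply contributes no factor, matching the convention above).

```python
# Product-of-Experts fusion of Gaussian posteriors, as in equation (6).
# Each available modality contributes N(mu_m, var_m); the prior is N(0, I).
import numpy as np

def poe(experts, latent_dim):
    """experts: list of (mu, var) pairs, or None for a missing modality."""
    prec = np.ones(latent_dim)        # prior N(0, I) contributes precision 1
    mu_prec = np.zeros(latent_dim)    # prior mean is zero
    for e in experts:
        if e is None:
            continue                  # q(z|x_m) = 1 for missing data
        mu_m, var_m = e
        prec += 1.0 / var_m
        mu_prec += mu_m / var_m
    var = 1.0 / prec
    return mu_prec * var, var         # joint posterior mean and variance

mu1, var1 = np.array([1.0, 0.0, 0.0, 0.0]), np.full(4, 0.5)
mu2, var2 = np.array([0.8, 0.2, 0.0, 0.0]), np.full(4, 2.0)
print(poe([(mu1, var1), (mu2, var2)], 4))   # both modalities present
print(poe([(mu1, var1), None], 4))          # modality 2 missing
```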
### Fusion on the M-VAE prior
Recall the sensor fusion problem as stated in equation (2). The prior $p(z)$ is
forced to be an isotropic Gaussian by the M-VAE, and the remaining densities are
assumed to be Gaussian. Furthermore, we assume that
$p(x_{m}|z)=\delta(x_{m}-\psi_{m}(z))$. Therefore equation (2) becomes:
$\hat{z}_{MAP}=\arg\max_{z}p\left(z|Y_{1:M}=y^{(i)}_{1:M}\right)\propto\exp{(-\|z\|^{2})}\prod_{m=1}^{M}{\exp{(-\frac{1}{2\sigma_{m}^{2}}\|y_{m}^{(i)}-\chi_{m}(\psi_{m}(z))\|^{2})}}.$
(7)
Hence, the objective to minimize becomes:
$\mathcal{L}(z)=\lambda_{0}{\|z\|^{2}}+\sum_{m=1}^{M}{\lambda_{m}\|y_{m}^{(i)}-\chi_{m}(\psi_{m}(z))\|^{2}}.$
(8)
Recall that the output of the first stage is $p(z)$ together with the decoders
$\prod_{m}{p_{\psi_{m}}(x|z)}$ parametrized by $\{\psi_{1:M}\}$; the
$\{\lambda_{0:M}\}$ are constants. The MAP estimation procedure consists of
backpropagating through the sampler $\chi_{m}$ and decoder $\psi_{m}$ using
Stochastic Gradient Descent (SGD). In this step $\{\psi_{1:M}\}$ are
non-learnable, i.e. together with $\chi_{m}$ they are known non-linear (but
differentiable) functions.
$z\leftarrow
z-\eta_{0}\nabla_{z}({{\|z\|^{2}}})-\sum_{m=1}^{M}{\eta_{m}\nabla_{z}(\|y_{m}^{(i)}-\chi_{m}(\psi_{m}(z))\|^{2})}.$
(9)
The iterative fusion procedure is initialized by taking a sample from the
prior $z^{0}\sim p(z)$, $\\{\eta_{0:M}\\}$ are learning rates. One or several
SGD steps are taken for each modality in turn. The procedure terminates upon
convergence; see Algorithm 1. In general, the optimization problem as set out
in equation (8) is non-convex. Therefore, there are no guarantees of
convergence to the optimal point ($\hat{z}_{MAP}$). We deploy several
strategies to minimize the risk of getting stuck in a local minimum. We
consider multiple initialization points (a number of points sampled from the
prior with Stage 2 replicated for all points). In some cases it might be
possible to sample from: $z^{0}\sim p\left(z\right)\prod
p\left(z\left|X=\check{x}_{m}^{(j)}\right.\right)$. Depending on modality,
this might be possible with data imputation ($\check{x}_{m}$ are imputed
data). The final stage will depend on a particular task (multisensory
classification/reconstruction), but in all cases it will take $\hat{z}_{MAP}$
as an input. In our experiments, we observe that the success of Stage 2 is
crucially dependent on the quality of M-VAE.
Algorithm 1 Multimodal Sensor Fusion in the Latent Representation Space (SFLR)
1:Training data: $\mathcal{D_{T}}\equiv\\{X_{1:M}^{(1:I)}\\}$, Test data
$\mathcal{D_{P}}\equiv\\{X_{1:M}^{(1:J)}\\}$, Samplers $\\{\chi_{1:M}\\}$
2:Stage 1: Train M-VAE using $\mathcal{D_{T}}$
3:Output: $p(z)$, Encoders $\\{\phi_{1:M}\\}$, Decoders $\\{\psi_{1:M}\\}$
4:Stage 2: Fusion
5:$y_{1:M}^{(i)}\sim\mathcal{D_{P}}$
6:Sample the initial point $z^{0}\sim p(z)$
7:while not converged do
8: $z\leftarrow
z-\eta_{0}\nabla_{z}({{\|z\|^{2}}})-{\eta_{1}\nabla_{z}(\|y_{1}^{(i)}-\chi_{1}(\psi_{1}(z))\|^{2})}$
9: $z\leftarrow
z-\eta_{0}\nabla_{z}({{\|z\|^{2}}})-{\eta_{2}\nabla_{z}(\|y_{2}^{(i)}-\chi_{2}(\psi_{2}(z))\|^{2})}$
10: $\vdots$
11: $z\leftarrow
z-\eta_{0}\nabla_{z}({{\|z\|^{2}}})-{\eta_{M}\nabla_{z}(\|y_{M}^{(i)}-\chi_{M}(\psi_{M}(z))\|^{2})}$
12:end while
13:$\hat{z}_{MAP}\leftarrow z$
14:Downstream tasks: $\hat{x}_{m}=\psi_{m}(\hat{z}_{MAP})$, classification
tasks $K$-NN$(\hat{z}_{MAP})$
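A minimal, self-contained sketch of Stage 2 follows (our PyTorch illustration: toy linear decoders and random subsampling matrices stand in for the trained $\psi_{m}$ and the samplers $\chi_{m}$, and for brevity the per-modality gradient steps of lines 8–11 are combined into a single update).

```python
# Stage 2 of Algorithm 1: gradient search for z_MAP over the latent prior.
# Toy stand-ins: frozen linear decoders psi_m and fixed random samplers chi_m.
import torch

torch.manual_seed(0)
M, latent_dim, data_dim, meas_dim = 2, 4, 32, 8

psi = [torch.nn.Linear(latent_dim, data_dim) for _ in range(M)]
chi = [torch.randn(meas_dim, data_dim) for _ in range(M)]
for net in psi:
    net.requires_grad_(False)          # decoders are non-learnable in Stage 2

z_true = torch.randn(latent_dim)       # simulate subsampled observations y_m
y = [chi[m] @ psi[m](z_true) for m in range(M)]

z = torch.randn(latent_dim, requires_grad=True)   # z0 ~ p(z)
opt = torch.optim.SGD([z], lr=1e-2)
lam0, lam = 1e-3, [1.0, 1.0]           # the constants lambda_{0:M} of eq. (8)

for _ in range(5000):
    opt.zero_grad()
    loss = lam0 * z.square().sum()     # prior term ||z||^2
    for m in range(M):                 # data terms of equation (8)
        loss = loss + lam[m] * (y[m] - chi[m] @ psi[m](z)).square().sum()
    loss.backward()
    opt.step()

z_map = z.detach()
x_hat = [psi[m](z_map) for m in range(M)]   # downstream reconstructions
```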
## Experiments
In this work, we investigate the performance of the proposed method on two
datasets for multimodal sensor fusion and recovery tasks: i) a synthetic “toy
protein” dataset and ii) a passive WiFi radar dataset intended for Human
Activity Recognition (HAR).
### Synthetic toy protein dataset
A synthetic dataset containing two-dimensional (2D) protein-like data samples
with two modalities is generated. The latent distribution
$p(z),z\in\mathds{R}^{4}$ is a Gaussian mixture model with 10 components,
simulating 10 different “classes” for samples. For each modality, the data
generative model $p(x_{m}|z),x_{m}\in\mathds{R}^{N}$ is a one-layer multilayer
perceptron (MLP) with random weights. Here $m=1,2$ represents two modalities.
10,000 pairs of samples are generated using the generative model, with the
protein size $N=32$. Fig. 2(a) shows an instance of the 2D protein data with
$N=64$.
Figure 2: (a) Generated toy proteins examples ($N=64$) and (b) reconstruction
from compressed sensing observations. With 2 out of 64 measurements (3.125%),
near perfect reconstruction is possible even though the modalities are
individually subsampled.
### Passive WiFi radar dataset
We use the OPERAnet [20] dataset which was collected with the aim to evaluate
human activity recognition (HAR) and localization techniques with measurements
obtained from synchronized Radio-Frequency (RF) devices and vision-based
sensors. The RF sensors captured the changes in the wireless signals while six
daily activities were being performed by six participants, namely, sitting
down on a chair ("sit"), standing from the chair ("stand"), laying down on the
floor ("laydown"), standing from the floor ("standff"), upper body rotation
("bodyrotate"), and walking ("walk"). We convert the raw time-series CSI data
from the WiFi sensors into the image-like format, namely, spectrograms using
signal processing techniques. More details are available in Section S2 of the
Supplementary Information document. 2,906 spectrogram samples (each of 4s
duration window) were generated for the 6 human activities and 80% of these
were used as training data while the remaining 20% as testing data (random
train-test split).
## Results and Discussion
### Classification results of WiFi CSI spectrograms for HAR
In this section, we evaluate the HAR sensor fusion classification performance
under a few-shot learning scenario, with 1, 5 and 10 labelled examples per
class. These correspond to 0.05%, 0.26% and 0.51% of labelled training
samples, respectively. We randomly select 80% of the samples in the dataset as
the training set and the remaining 20% is used for validation. The average
$F_{1}$-macro scores for the HAR performance are shown in Table 1 for
different models. To allow for a fair comparison, the same random seed was
used in all experiments with only two modalities (processed spectrogram data
obtained from two different receivers).
Prior to training our model (see Supplementary Fig. S1), the spectrograms were
reshaped to typical image dimensions of size $(1\times 224\times 224)$. Our
model was trained for 1,000 epochs using the training data with a fixed KL
scaling factor of $\beta=0.02$. The encoders comprised of the ResNet-18
backbone with the last fully-connected layer dimension having a value of 512.
For the decoders, corresponding CNN deconvolutional layers were used to
reconstruct the spectrograms from each modality with the same input dimension.
The latent dimension, batch size, and learning rate are set at 64, 64, and
0.001, respectively. In the second stage, the generative model serves as a
reconstruction prior and the search manifold for the sensor fusion tasks.
Essentially, in this stage, we obtain the maximum a posteriori estimate of
$\hat{z}_{MAP}$ through the process described in Algorithm 1. The final
estimate of the class is produced by $K$-NN in the latent representation
space, with labelled examples sampled from the training set.
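The classification step itself is simple once $\hat{z}_{MAP}$ is available: a $K$-NN classifier fitted on a handful of labelled latent codes. A minimal sketch (our illustration with scikit-learn; array contents are placeholders):

```python
# K-NN classification in the latent representation space (a sketch).
# z_labelled holds MAP latents with known labels; z_map holds fused test latents.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
z_labelled = rng.standard_normal((60, 64))   # e.g. 10 examples x 6 classes
labels = np.repeat(np.arange(6), 10)
z_map = rng.standard_normal((20, 64))        # MAP estimates from Algorithm 1

knn = KNeighborsClassifier(n_neighbors=5)    # K is a free design choice
knn.fit(z_labelled, labels)
pred = knn.predict(z_map)                    # predicted activity classes
```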
To benchmark our technique, we investigate the performance of other
state-of-the-art sensor fusion techniques. Feature fusion is represented by
CNN models (1-channel CNN, 2-channel CNN, dual-branch CNN). All are trained in
a conventional supervised fashion from scratch using the ResNet-18 backbone,
with a linear classification head appended on top consisting of a hidden
linear layer of 128 units and a linear output layer of 6 nodes (for
classifying 6 human activities). The dual-branch CNN refers to the case where
the embeddings from the two modalities’ CNNs are concatenated, and a
classification head is then added (as illustrated in Fig. 1(b)). The
“Probability Fusion” (decision fusion) model refers to a score-level fusion
method where the classification probabilities ($P_{1}$ and $P_{2}$) from each
modality are computed independently (using an output SoftMax layer) and then
combined using the product rule (this is optimal given conditional
independence). These models are fine-tuned with labelled samples over 200
epochs, with a batch size of 64 and the Adam optimizer was used with learning
rate of 0.0001, weight decay of 0.001 and $\beta_{1}=0.95$, $\beta_{2}=0.999$.
It can be observed from Table 1 that our method significantly outperforms all
other conventional feature and decision fusion methods. The confusion matrix
for HAR classification using our SFLR (Sensor Fusion in the Latent
Representation space) model is shown in Fig. S8 in the Supplementary
Information document for the case when only ten labelled examples are used at
the (classification) fusion stage.
Figure 3: Illustration of spectrogram recovery (for sitting down activity)
using compressed sensing with measurements as low as 784 out of 50,176
(1.56%). No additive white Gaussian noise is considered. The left column shows
the true spectrogram sample, the middle column shows reconstruction with an
initial guess (no optimization) while the right column shows reconstruction
with $\hat{Z}_{MAP}$.
### Sensor fusion from subsampled observations
Next, we evaluate the recovery performance of spectrograms under different
numbers of compressed sensing measurements. The measurement function
$\chi_{m}$ is a matrix initialized randomly and we assume that there is no
additive Gaussian noise. The Adam optimizer is used to optimize
$\hat{z}_{MAP}$ with a learning rate of 0.01. The algorithm is run for 10,000
iterations. After the loss in equation (8) has converged during the
optimization process, the samples are decoded/recovered for modality 1 and
modality 2 using their respective decoders
$\hat{x}_{m}=\psi_{m}(\hat{z}_{MAP})$. Table 2 shows the compressed sensing
results when a batch of 50 images is taken from the testing dataset and
evaluated under different numbers of measurements (without noise). It can be
observed that the samples can be recovered with very low reconstruction error
when the number of measurements is as low as 196 (0.39%). An illustration is
also shown in Fig. 3 where very good reconstruction is observed for the case
when the number of measurements is equal to 784. More illustrations are shown
in Fig. S7 in the Supplementary Information document, with further
experimental results in Sections S4, S5, S6.
### Toy protein classification
Similarly to the experiments on the OPERAnet dataset, we perform two tasks,
classification and sensor fusion from compressed sensing observations, on the
synthetic toy protein dataset.
As mentioned previously, the toy protein dataset contains 10 classes. The
dataset is split into a training set and a test set, containing 80% and 20% of
samples, respectively. We evaluate the classification performance under a few-
shot learning setting, using 1, 5 or 10 labelled samples per class. The few-
shot classification via the SFLR model consists of two stages. In the first
stage, the M-VAE is trained in an unsupervised manner using the training set.
Using the maximum a posterior $\hat{z}_{MAP}$ and a few labels, the $K$-NN
classifier is applied to the latent representation space. Here the encoder and
decoder in M-VAE are two-layer MLPs, with 16 neurons in the hidden layer.
We compare the SFLR method with 4 baseline models. The single modality model
only considers one modality without sensor fusion. The probability fusion
model independently computes the classification probability for each modality,
which is a representative model for decision-fusion (Fig. 1(a)). The dual-
branch feature fusion model concatenates the embeddings of the two modalities
before the classification layer, which is a feature fusion method (Fig. 1(b)).
All baseline models are trained in a supervised manner, with the same neural
network structure as the encoder. Table 3 shows the $F_{1}$-macro scores for
different methods on the test set. On the 10-class protein dataset, SFLR
outperforms the other sensor fusion models using limited labelled samples.
### Sensor fusion from subsampled toy proteins
Another advantage of the proposed SFLR model is that it can fuse modalities in
subsampled cases. We use a set of samplers $\chi_{1:M}$ to simulate the
subsampled observations. The measurement function $\chi_{m}$ is a matrix
initialized randomly. Here we use 10 initialization points to reduce the risk
of getting trapped in a local minimum (points sampled from the prior with
Stage 2 replicated for all of them). Fig. 2(b) shows the recovered protein
from subsampled observations, with only 2 measurements for each modality. Both
modalities are successfully recovered from the latent representation space,
even though the initial guess $z^{0}$ is far from the true value. Note that
the proteins in Fig. 2 have a higher dimension than in the dataset, showing
the robustness of the SFLR method. Table 4 shows the average reconstruction
error on the synthetic protein dataset using different subsamplers. The
reconstruction error is reduced significantly with just 2 measurements for
each modality, demonstrating strong sensor fusion abilities.
The Supplementary Information document (see Section S7) contains additional
experiments, including tasks showcasing the ability to leverage between
modalities, where a strong modality can be used to aid the recovery of a weak
modality. It also presents the performance under subsampled and noisy
conditions.
## Conclusions and Broader Impacts
The paper presents a new method for sensor fusion. Specifically, we
demonstrate its effectiveness on classification and reconstruction tasks from
radar signals. The intended application area is human activity recognition,
which serves a vital role in the E-Health paradigm. New healthcare
technologies are the key ingredient to battling spiralling costs of
provisioning health services that beset a vast majority of countries. Such
technologies in a residential setting are seen as a key requirement in
empowering patients and imbuing a greater responsibility for their own health
outcomes. However, we acknowledge that radar and sensor technologies also find
applications in a military context. Modern warfare technologies (principally
defensive) could potentially become more apt if they were to benefit from
much-improved sensor fusion. We firmly believe that, on balance, it is of
benefit to society to continue research in this area in the public eye.
## Data availability
The raw dataset used in the Passive WiFi radar experiments is available from
[20]. The toy protein dataset is not publicly available at this time but can
be made available from the authors upon reasonable request.
## References
* [1] Seth, A. _Being You: A New Science of Consciousness (The Sunday Times Bestseller)_ (Faber & Faber, 2021).
* [2] Doya, K., Ishii, S., Pouget, A. & Rao, R. P. N. _Bayesian Brain: Probabilistic Approaches to Neural Coding_ (The MIT Press, 2007).
* [3] Gao, J., Li, P., Chen, Z. & Zhang, J. A survey on deep learning for multimodal data fusion. _Neural Computation_ 32, 829–864, DOI: 10.1162/neco_a_01273 (2020).
* [4] Wang, Z., Wu, Y. & Niu, Q. Multi-sensor fusion in automated driving: A survey. _IEEE Access_ 8, 2847–2868, DOI: 10.1109/ACCESS.2019.2962554 (2020).
* [5] Sutter, T. M., Daunhawer, I. & Vogt, J. E. Generalized multimodal ELBO. Preprint at https://arxiv.org/abs/2105.02470 (2021).
* [6] Minoura, K., Abe, K., Nam, H., Nishikawa, H. & Shimamura, T. A mixture-of-experts deep generative model for integrated analysis of single-cell multiomics data. _Cell Reports Methods_ 1, 100071, DOI: https://doi.org/10.1016/j.crmeth.2021.100071 (2021).
* [7] Wu, M. & Goodman, N. Multimodal generative models for scalable weakly-supervised learning. In _Proceedings of the 32nd International Conference on Neural Information Processing Systems_ , NIPS’18, 5580–5590 (Curran Associates Inc., Red Hook, NY, USA, 2018).
* [8] Shi, Y., N, S., Paige, B. & Torr, P. Variational mixture-of-experts autoencoders for multi-modal deep generative models. In Wallach, H. _et al._ (eds.) _Advances in Neural Information Processing Systems_ , vol. 32 (Curran Associates, Inc., 2019).
* [9] Suzuki, M., Nakayama, K. & Matsuo, Y. Joint multimodal learning with deep generative models. Preprint at https://arxiv.org/abs/1611.01891 (2016).
* [10] Lee, M. & Pavlovic, V. Private-shared disentangled multimodal VAE for learning of latent representations. In _IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2021, virtual, June 19-25, 2021_ , 1692–1700, DOI: 10.1109/CVPRW53098.2021.00185 (Computer Vision Foundation / IEEE, 2021).
* [11] Schönfeld, E., Ebrahimi, S., Sinha, S., Darrell, T. & Akata, Z. Generalized zero- and few-shot learning via aligned variational autoencoders. In _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_ , 8239–8247, DOI: 10.1109/CVPR.2019.00844 (2019).
* [12] Zou, H. _et al._ WiFi and vision multimodal learning for accurate and robust device-free human activity recognition. In _2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)_ , 426–433, DOI: 10.1109/CVPRW.2019.00056 (2019).
* [13] Memmesheimer, R., Theisen, N. & Paulus, D. Gimme signals: Discriminative signal encoding for multimodal activity recognition. In _2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , 10394–10401, DOI: 10.1109/IROS45743.2020.9341699 (2020).
* [14] Muaaz, M., Chelli, A., Abdelgawwad, A. A., Mallofré, A. C. & Pätzold, M. WiWeHAR: Multimodal human activity recognition using Wi-Fi and wearable sensing modalities. _IEEE Access_ 8, 164453–164470, DOI: 10.1109/ACCESS.2020.3022287 (2020).
* [15] Bora, A., Jalal, A., Price, E. & Dimakis, A. G. Compressed sensing using generative models. In Precup, D. & Teh, Y. W. (eds.) _Proceedings of the 34th International Conference on Machine Learning_ , vol. 70 of _Proceedings of Machine Learning Research_ , 537–546 (PMLR, 2017).
* [16] Wu, Y., Rosca, M. & Lillicrap, T. Deep compressed sensing. In _Proceedings of the 36th International Conference on Machine Learning_ , vol. 97 (PMLR, 2019).
* [17] Candès, E. J., Romberg, J. K. & Tao, T. Stable signal recovery from incomplete and inaccurate measurements. _Communications on Pure and Applied Mathematics_ 59, 1207–1223, DOI: 10.1002/cpa.20124 (2006).
* [18] Donoho, D. Compressed sensing. _IEEE Transactions on Information Theory_ 52, 1289–1306, DOI: 10.1109/TIT.2006.871582 (2006).
* [19] Ulyanov, D., Vedaldi, A. & Lempitsky, V. Deep image prior. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ (2018).
* [20] Bocus, M. J. _et al._ OPERAnet: A multimodal activity recognition dataset acquired from radio frequency and vision-based sensors. Preprint at https://arxiv.org/abs/2110.04239 (2021).
* [21] Xie, Y., Li, Z. & Li, M. Precise power delay profiling with commodity WiFi. In _Proceedings of the 21st Annual International Conference on Mobile Computing and Networking_ , MobiCom ’15, 53–64, DOI: 10.1145/2789168.2790124 (ACM, New York, NY, USA, 2015).
* [22] Halperin, D., Hu, W., Sheth, A. & Wetherall, D. Tool release: Gathering 802.11n traces with channel state information. _SIGCOMM Comput. Commun. Rev._ 41, 53, DOI: 10.1145/1925861.1925870 (2011).
* [23] Bocus, M. J. _et al._ Translation resilient opportunistic WiFi sensing. In _2020 25th International Conference on Pattern Recognition (ICPR)_ , 5627–5633, DOI: 10.1109/ICPR48806.2021.9412263 (2021).
## Acknowledgements
This work was performed as a part of the OPERA Project, funded by the UK
Engineering and Physical Sciences Research Council (EPSRC), Grant
EP/R018677/1. This work has also been funded in part by the Next-Generation
Converged Digital Infrastructure (NG-CDI) Project, supported by BT and
Engineering and Physical Sciences Research Council (EPSRC), Grant ref.
EP/R004935/1.
## Author contributions statement
All authors, R.P., X.W. and M.B., contributed equally to this work. The main
tasks involved conceiving and conducting the experiments, algorithm
implementation, analysis, validation and interpretation of results, and
finally preparing and reviewing the manuscript.
## Additional information
### Competing interests
The authors declare no competing interests.
## Figure legends
1.
Multimodal Sensor Fusion: (a) Decision fusion, (b) Feature fusion, (c) Our technique: fusion in the latent representation with optional compressed sensing measurements; $F$ features, $p(z)$ prior model, $\bf{G}$ generators, $X$ complete data, $Y$ subsampled data. For clarity $M=2$ modalities are shown; the concept generalises to any $M$.
2.
(a) Generated toy protein examples ($N=64$) and (b) reconstruction from compressed sensing observations. With 2 out of 64 measurements (3.125%), near-perfect reconstruction is possible even though the modalities are individually subsampled.
3.
Illustration of spectrogram recovery (for sitting down activity) using
compressed sensing with measurements as low as 784 out of 50,176 (1.56%). No
additive white Gaussian noise is considered. The left column shows the true
spectrogram sample, the middle column shows reconstruction with an initial
guess (no optimization) while the right column shows reconstruction with
$\hat{Z}_{MAP}$.
Table 1: Few-shot learning sensor fusion classification results ($F_{1}$ macro) for Human Activity Recognition. Model | 1 example | 5 examples | 10 examples
---|---|---|---
2-channel CNN | 0.427272 | 0.570888 | 0.618501
1-channel CNN (Modality 1) | 0.349084 | 0.451328 | 0.504462
1-channel CNN (Modality 2) | 0.446554 | 0.600084 | 0.605678
Probability fusion (product rule) | 0.440414 | 0.584726 | 0.641922
Dual-branch CNN | 0.508243 | 0.568795 | 0.575914
SFLR (ours) | 0.652699 | 0.718180 | 0.737507
Table 2: Compressed sensing mean reconstruction error over a batch of 50 WiFi spectrogram data samples (No additive Gaussian noise). An illustration is shown in Fig. 3. No. of measurements | Modality 1 | Modality 2
---|---|---
1 (0.002%) | 0.03118854 | 0.15024841
10 (0.02%) | 0.00938917 | 0.02824161
196 (0.39%) | 0.00348606 | 0.00613665
784 (1.56%) | 0.00305005 | 0.00505758
1,568 (3.125%) | 0.00284343 | 0.00489433
Table 3: Few-shot learning sensor fusion classification results ($F_{1}$ macro) for synthetic proteins. Model | 1 example | 5 examples | 10 examples
---|---|---|---
Single modality (Modality 1) | 0.3188 | 0.4342 | 0.5843
Single modality (Modality 2) | 0.3221 | 0.4849 | 0.5555
Probability fusion (product rule) | 0.2256 | 0.3736 | 0.3836
Dual-branch feature fusion | 0.3769 | 0.4973 | 0.5953
SFLR (ours) | 0.4183 | 0.5501 | 0.6120
Table 4: Compressed sensing mean reconstruction error over a batch of 100 protein samples. No. of Measurements | Modality 1 ($10^{-5}$) | Modality 2 ($10^{-5}$)
---|---|---
1 (3.125%) | 4,622.4 | 4,923.5
2 (6.250%) | 22.5 | 27.9
4 (12.500%) | 7.1 | 7.4
8 (25.000%) | 2.3 | 2.7
## Appendix
## S1 Approximations to variational posterior
Given the objective, the variational joint posterior $q_{\phi}(z|x_{1:M})$ can
be learned by training a single encoder network that takes all modalities
$X_{1:M}$ as input to explicitly parametrize the joint posterior. This is our
baseline model and an example for $M=2$ modalities is given in Fig. S1.
However, this approach requires all modalities to be present at all times,
thus making cross-modal generation difficult. Alternatively, the joint
variational posterior can be modelled using the following approaches:
### Variational Product of Experts
In this section we reproduce the arguments from [7]. The first option is to
approximate the joint variational posterior as a product:
$q_{\phi}(z|x_{1:M})\equiv p(z)\prod_{m=1}^{M}{q_{\phi_{m}}(z|x_{m})}.$ (S10)
In the case of a missing expert, we set $q_{\phi_{m}}(z|x_{m})=1$ so that it drops out of the product.
For a system of $N$ modalities, $2^{N}$ inference networks need to be
specified, $q(z|X)$ for each subset of modalities
$X\subseteq\\{X_{1},X_{2},\dots,X_{N}\\}$. The optimal inference network
$q(z|x_{1},\dots,x_{N})$ would be the true posterior $p(z|x_{1},\dots,x_{N})$.
The conditional independence assumptions in the generative model imply a
relation among joint- and single-modality posteriors [7]:
$\displaystyle p(z|x_{1},\dots,x_{N})$
$\displaystyle=\frac{p(x_{1},\dots,x_{N}|z)p(z)}{p(x_{1},\dots,x_{N})}$
$\displaystyle=\frac{p(z)}{p(x_{1},\dots,x_{N})}\prod_{i=1}^{N}p(x_{i}|z)$
$\displaystyle=\frac{p(z)}{p(x_{1},\dots,x_{N})}\prod_{i=1}^{N}\frac{p(z|x_{i})p(x_{i})}{p(z)}$
$\displaystyle=\frac{\prod_{i=1}^{N}p(z|x_{i})}{\prod_{i=1}^{N-1}p(z)}\cdot\frac{\prod_{i=1}^{N}p(x_{i})}{p(x_{1},\dots,x_{N})}$
$\displaystyle\propto\frac{\prod_{i=1}^{N}p(z|x_{i})}{\prod_{i=1}^{N-1}p(z)}.$
(S11)
If we approximate $p(z|x_{i})$ with $q(z|x_{i})\equiv\tilde{q}(z|x_{i})p(z)$,
where $\tilde{q}(z|x_{i})$ is the underlying inference network, the quotient
term can be omitted [7]:
$\displaystyle p(z|x_{1},\dots,x_{N})$
$\displaystyle\propto\frac{\prod_{i=1}^{N}p(z|x_{i})}{\prod_{i=1}^{N-1}p(z)}$
$\displaystyle\approx\frac{\prod_{i=1}^{N}[\tilde{q}(z|x_{i})p(z)]}{\prod_{i=1}^{N-1}p(z)}$
$\displaystyle=p(z)\prod_{i=1}^{N}\tilde{q}(z|x_{i}).$ (S12)
Equation (S12) implies that we can use a Product-of-Experts (PoE), including a
“prior expert” (e.g., spherical Gaussian), as the approximating distribution
for the joint-posterior. This derivation is easily extended to any subset of
modalities yielding $q(z|X)\propto p(z)\prod_{x_{i}\in X}\tilde{q}(z|x_{i})$.
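For Gaussian experts the product in (S12) is available in closed form. The following numpy sketch (our illustration, assuming hypothetical diagonal-Gaussian experts) fuses expert means and variances, including the "prior expert":

```python
import numpy as np

def poe_fuse(mus, variances):
    """Fuse diagonal-Gaussian experts with a N(0, I) prior expert (S12).

    The product of Gaussians is Gaussian: the joint precision is the sum
    of the expert precisions and the joint mean is the precision-weighted
    average of the expert means. mus/variances: lists of (latent_dim,) arrays.
    """
    precisions = [np.ones_like(mus[0])] + [1.0 / v for v in variances]
    weighted = [np.zeros_like(mus[0])] + [m / v for m, v in zip(mus, variances)]
    joint_var = 1.0 / np.sum(precisions, axis=0)
    joint_mu = joint_var * np.sum(weighted, axis=0)
    return joint_mu, joint_var

# A missing modality is handled by omitting its expert from the lists,
# consistent with setting q(z|x_m) = 1 in the product above.
```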
### Variational Mixture of Experts
$q_{\phi}(z|x_{1:M})\equiv\sum_{m=1}^{M}{\frac{1}{M}q_{\phi_{m}}(z|x_{m})},$
(S13)
where the above assumes uniform weighting of the experts; non-uniform weights
can also be used. A missing expert is handled by setting
$q_{\phi_{m}}(z|x_{m})=0$, i.e. dropping its component from the mixture.
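A minimal sketch of sampling from the uniform mixture (S13), again assuming hypothetical diagonal-Gaussian experts (an illustration, not the authors' implementation):

```python
import numpy as np

def moe_sample(mus, variances, rng=None):
    """Draw one latent sample from the uniform mixture of experts (S13)."""
    rng = rng or np.random.default_rng()
    m = rng.integers(len(mus))                  # pick an expert uniformly
    return mus[m] + np.sqrt(variances[m]) * rng.standard_normal(mus[m].shape)
```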
Figure S1: M-VAE for a full data case: Single encoder network takes all
modalities.
## S2 Signal processing pipelines for passive WiFi radar
A typical scenario for opportunistic passive WiFi Radar is depicted in Figure
S2. This is an extremely challenging problem since the WiFi waveform was not
specifically designed to lend itself to Radio-Frequency (RF) imaging. In
addition, commercial WiFi chipsets have noisy RF chains and tend to suffer
from phase drifts. The WiFi backscatter does contain information about the
dynamic changes in the radio channel, which is encapsulated in the Channel
State Information (CSI). Dedicated tools need to be used to extract the CSI
from WiFi network interface cards such as Atheros [21] or Intel 5300 (IWL5300)
[22]. The raw CSI data is obtained as a 3-dimensional (3D) matrix per
transmitted packet, with $n_{t}$$\times$$n_{r}$$\times$$N_{\text{sc}}$ complex
values, where $n_{t}$ is the number of transmit antennas, $n_{r}$ is the
number of receive antennas and $N_{\text{sc}}$ is the number of subcarriers.
Since the raw CSI data is very noisy in nature, the Discrete Wavelet Transform
(DWT) technique can be used to filter out in-band noise and preserve the high
frequency components, thus avoiding the distortion of the signal [23].
Afterwards, median filtering can be used to remove any undesired transients in
the CSI measurements which are not due to human motion. The Intel 5300 chipset
has a 3$\times$3 antenna configuration and reports only 30 subcarriers. Thus
the number of complex values per packet is equal to
$3$$\times$$3$$\times$$30=270$. Considering a packet rate as high as 1.6 kHz,
this results in a significant amount of data that needs to be processed.
Therefore, we also apply Principal Component Analysis (PCA) to reduce the
computational complexity of such high dimensional data. PCA identifies the
time-varying correlations between the CSI streams which are optimally combined
to extract only a few components that represent the variations caused by human
activities. Finally, we convert the resultant data into spectrograms (time-
frequency domain) using Short Time Fourier Transform (STFT), which are similar
to those generated by Doppler radars. The CSI is highly sensitive to the
surrounding environment and signal reflections from the human body result in
different frequencies when performing different activities. The Doppler
spectrogram generated from STFT helps to identify the change of frequencies
over time. The generated spectrograms can be directly fed to CNNs to
automatically identify a set of features, which can ultimately be used in
downstream tasks. The CSI system consisted of two receivers. For more details
on the experimental setup of the data collection, the interested reader is
kindly referred to [20]. Each receiver can be seen as another view of the
human activity being performed in the environment.
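The processing chain just described can be summarized in a short sketch. This is an illustrative outline only, not the exact pipeline of [20, 23]; the wavelet family, threshold rule, filter size, and STFT window below are our assumptions.

```python
import numpy as np
import pywt
from scipy.signal import medfilt, stft
from sklearn.decomposition import PCA

def csi_to_spectrogram(csi, fs=1600, n_components=3):
    """csi: (n_packets, n_streams) CSI amplitudes, e.g. 3*3*30 = 270 streams."""
    # 1. DWT denoising: soft-threshold the detail coefficients of each stream.
    denoised = np.empty_like(csi)
    for j in range(csi.shape[1]):
        coeffs = pywt.wavedec(csi[:, j], "db4", level=4)
        thr = np.median(np.abs(coeffs[-1])) / 0.6745      # robust noise scale
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised[:, j] = pywt.waverec(coeffs, "db4")[: csi.shape[0]]
    # 2. Median filtering to suppress transients not caused by human motion.
    denoised = medfilt(denoised, kernel_size=(5, 1))
    # 3. PCA across streams: keep a few motion-dominated components.
    components = PCA(n_components=n_components).fit_transform(denoised)
    # 4. STFT of the leading component -> Doppler-like spectrogram.
    f, t, Z = stft(components[:, 0], fs=fs, nperseg=256, noverlap=224)
    return f, t, np.abs(Z)
```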
Figure S2: Opportunistic Passive WiFi Radar.
## S3 Latent representation of WiFi spectrogram data
Our base model is shown in Fig. S1. The latent spaces trained with different
latent dimensions are shown in Fig. S3; they exhibit distinct clusters under
UMAP (Uniform Manifold Approximation and Projection) visualization. Although
the model was trained in a self-supervised fashion, the six clusters
corresponding to the six different human activities are clearly visible.
Figure S3: UMAP projection of trained latent space using our model on real
WiFi CSI spectrogram data: (a) latent dimension=16, (b) latent dimension=64,
and (c) latent dimension=128.
## S4 Sensor fusion under noisy conditions (WiFi spectrogram data)
In this experiment, we analyze the sensor fusion performance when the data
samples from the test dataset are affected by different amounts of additive
Gaussian noise. In this case, the measurement matrices are initialized as
identity matrices with dimensions 50,176$\times$50,176. No noise was injected
into the input data from the two modalities during training. The SFLR (Sensor
Fusion in the Latent Representation space) algorithm (see Algorithm 1 in the
manuscript) is run for 1,000 iterations and the corresponding results are
shown in Fig. S4 for one sample from the test dataset. It can be observed that
even under extremely noisy conditions, the noisy samples are denoised
efficiently. These results are further validated in Table S1, where it can be
seen that the fusion error remains essentially constant across different noise
standard deviation values for a batch of 50 images from the test dataset. A
minimal sketch of the underlying recovery step is given below.
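The sketch below (our illustrative reconstruction of the idea, not the authors' Algorithm 1; the generator interface, optimizer, step size and prior weight are assumptions) performs MAP estimation of a shared latent by gradient descent on the measurement misfit plus a Gaussian prior penalty:

```python
import torch

def sflr_map(generators, measure_ops, ys, latent_dim,
             n_iters=1000, lr=1e-2, prior_weight=1e-3):
    """MAP estimate of a shared latent z from multimodal measurements.

    generators:  list of trained, frozen decoders G_m(z) -> x_m
    measure_ops: list of measurement operators A_m(x_m) -> y_m (masks, etc.)
    ys:          list of observed measurements y_m
    """
    z = torch.zeros(latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        loss = prior_weight * z.pow(2).sum()           # spherical Gaussian prior
        for G, A, y in zip(generators, measure_ops, ys):
            loss = loss + (A(G(z)) - y).pow(2).sum()   # measurement misfit
        loss.backward()
        opt.step()
    with torch.no_grad():
        return [G(z) for G in generators], z.detach()  # fused reconstructions
```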
Figure S4: Impact of additive Gaussian noise on sensor fusion from the two modalities: (a) Std Dev=0.05, (b) Std Dev=0.2, (c) Std Dev=0.4, (d) Std Dev=0.6, and (e) Std Dev=0.8. Left column shows noisy spectrogram sample, middle column shows fusion with initial guess (no optimization) while right column shows fusion with $\hat{z}_{MAP}$. Figure S5: Impact of missing pixels on spectrogram recovery from the two modalities (no additive Gaussian noise): (a) missing pixel ratio=0.1, (b) missing pixel ratio=0.2, (c) missing pixel ratio=0.4, (d) missing pixel ratio=0.6, and (e) missing pixel ratio=0.8. Left column shows spectrogram sample with missing pixels, middle column shows reconstruction with initial guess (no optimization) while right column shows reconstruction with $\hat{z}_{MAP}$. Very good recovery performance is observed in all cases. Table S1: Noisy measurements mean reconstruction error over a batch of 50 WiFi spectrogram data samples (full measurements considered). Noise standard deviation | Modality 1 | Modality 2
---|---|---
0.01 | 0.00347822 | 0.00573636
0.05 | 0.00348499 | 0.00574255
0.20 | 0.00349233 | 0.00569815
0.40 | 0.00354153 | 0.0058348
0.60 | 0.00357407 | 0.00594901
0.80 | 0.00367526 | 0.00608692
## S5 Sensor fusion performance with missing pixels (WiFi spectrogram data)
In this experiment, we evaluate the fusion performance of the samples under
different ratios of missing pixels. The results are shown in Fig. S5, where a
true data sample is randomly chosen from the test dataset and a randomly
generated binary mask is applied to it to simulate different missing pixel
ratios. Each measurement corresponds to an observed pixel; the measurement
matrices are therefore diagonal, with diagonal entries given by the mask
elements (1’s and 0’s), as sketched below. From Fig. S5, it can
be observed that the recovered samples are very close to the true ones, even
when the missing pixel ratio for both modalities is as high as 0.8. In fact,
as shown in Table S2, the reconstruction error remains essentially constant
with increasing missing pixel ratio for both modalities.
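A minimal construction of such a mask-based measurement operator (our illustration; the seed and ratio are arbitrary) is:

```python
import numpy as np

rng = np.random.default_rng(0)                 # arbitrary seed
n_pixels = 50_176                              # 224 x 224 spectrogram, flattened
missing_ratio = 0.8
mask = (rng.random(n_pixels) >= missing_ratio).astype(np.float32)  # 1 = observed

# Conceptually the measurement matrix is diag(mask); in practice the
# 50,176 x 50,176 matrix is never materialised and one simply applies
measure = lambda x: mask * x.reshape(-1)       # y = diag(mask) @ x
```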
Table S2: Missing pixel mean reconstruction error over a batch of 50 WiFi spectrogram data samples. Illustrations of spectrogram fusion under different missing pixel ratios are shown in Fig. S5. Missing pixel ratio | Modality 1 | Modality 2
---|---|---
0.1 | 0.00351316 | 0.00573596
0.2 | 0.00345235 | 0.00575154
0.4 | 0.00347854 | 0.00575382
0.6 | 0.003477 | 0.00571794
0.8 | 0.00349462 | 0.00579462
## S6 Sensor fusion from asymmetric Compressed Sensing (WiFi spectrogram
data)
In this experiment, we analyse the reconstruction of the WiFi spectrogram
samples under two different scenarios, where we want to demonstrate the
benefits of having multiple modalities. We are interested in recovering one
modality that is subsampled (loss of data) and noisy; we refer to it as the
weak modality. Using the SFLR method, we leverage the second modality, which
suffers neither loss of information nor noise (the strong modality), to
improve the recovery of the modality of interest, i.e., the weak modality.
In the first case, only modality 1 (subsampled and
noisy) is considered in the reconstruction process. In the second case, the
good modality 2 is added in the iterative fusion process to improve the
reconstruction quality of modality 1.
The results are tabulated in Table S3, where additive Gaussian noise with a
standard deviation of 0.1 is considered. The results show the mean
reconstruction errors (over 50 WiFi spectrogram samples) when modality 1 is
subsampled to different extents. The reconstruction error generally decreases
as the number of measurements increases. It can be
observed that the samples can be recovered with very low reconstruction error
when the number of measurements is as low as 196 (0.39%). Furthermore, from
Table S3, we observe that when only modality 1 is considered in the
reconstruction process, the reconstruction errors are high when the number of
measurements is equal to 1 (0.002%) and 10 (0.02%). However, by leveraging the
good modality 2, the reconstruction quality is greatly improved for the same
number of measurements, demonstrating the clear benefit of having multiple
modalities. An illustration of the reconstruction quality is depicted in Fig.
S6, where it can be observed that the unimodal reconstruction of modality 1 is
far from the true sample. On the other hand, the reconstruction quality of
modality 1 is improved by leveraging the good modality data.
Table S3: Mean reconstruction error over 50 WiFi spectrogram data samples. Noise standard deviation: $0.1$ | No. of Measurements | Modality 1 | Modality 2
---|---|---|---
Modality 1 only (compressed sensing) | 1 (0.002%) | 0.0246185 | -
10 (0.02%) | 0.01075371 | -
196 (0.39%) | 0.00258467 | -
784 (1.56%) | 0.00195997 | -
1,568 (3.125%) | 0.00184247 | -
Modality 1 (compressed sensing) aided by Modality 2 (full information) | 1 (0.002%) | 0.00892453 | 0.00380795
10 (0.02%) | 0.00798366 | 0.00420512
196 (0.39%) | 0.0034269 | 0.00460956
784 (1.56%) | 0.0030373 | 0.00466936
1,568 (3.125%) | 0.0028537 | 0.00469946
Figure S6: Reconstruction examples showing the benefit of multimodal system
compared to a unimodal system. Modality 1 is subsampled data with 1 single
measurement while modality 2 has full information (no noise and no loss of
data). Additive Gaussian noise with a standard deviation of 0.1 is considered
in this example: (a) reconstruction with modality 1 only, (b) reconstruction
with both modalities 1 and 2. Left column shows true spectrogram sample,
middle column shows reconstruction with initial guess (no optimization) while
right column shows reconstruction with $\hat{z}_{MAP}$. Adding modality 2
during reconstruction stage helps in the sample recovery of modality 1. Figure
S7: Compressed sensing performance on different number of measurements without
additive Gaussian noise: (a) 1 measurement out of 50,176 (0.002%), (b) 10
measurements out of 50,176 (0.02%), (c) 196 measurements out of 50,176
(0.39%), (d) 784 measurements out of 50,176 (1.56%), and (e) 1,568
measurements out of 50,176 (3.125%). Left column shows true spectrogram
sample, middle column shows reconstruction with initial guess (no
optimization) while right column shows reconstruction with $\hat{z}_{MAP}$.
## S7 Toy protein dataset: additional results
### Sensor fusion from subsampled and noisy toy proteins
In this section, we present the sensor fusion results for toy protein
reconstruction under subsampled and noisy observations, as an extension to
Section "Sensor fusion from subsampled toy proteins" in the main document.
Table S4 shows the mean reconstruction error of subsampled toy protein
samples, with different levels of additive Gaussian noise. The proposed SFLR
method recovers both modalities from as few as 4 subsampled and noisy
observations.
Table S4: Compressed sensing mean reconstruction error over a batch of 100 protein samples, with different noise levels. Noise standard deviation | No. of Measurements | Modality 1 ($10^{-3}$) | Modality 2 ($10^{-3}$)
---|---|---|---
0.05 | 1 (3.125%) | 52.301 | 55.382
2 (6.250%) | 11.678 | 9.200
4 (12.500%) | 0.834 | 0.715
8 (25.000%) | 0.387 | 0.450
0.1 | 1 (3.125%) | 36.611 | 50.118
2 (6.250%) | 20.267 | 14.638
4 (12.500%) | 3.413 | 2.769
8 (25.000%) | 2.386 | 2.411
0.2 | 1 (3.125%) | 43.466 | 48.271
2 (6.250%) | 17.864 | 19.435
4 (12.500%) | 1.528 | 1.435
8 (25.000%) | 5.005 | 5.063
### Sensor fusion from asymmetric Compressed Sensing of toy proteins
We show the results of sensor fusion from asymmetric compressed sensing,
which relates to the third contribution of this paper. We claim that a strong
modality can be used to aid the recovery of another modality that is lossy or
less informative (weak modality). Table S5 shows the recovery results in two
cases. In the first case, the subsampled modality 1 with additive Gaussian
noise is observed and recovered. In the second case, the noise-free modality 2
with full observation is used to help the sensor fusion. We can see that
modality 2 significantly helps with the recovery of modality 1, especially
when the number of observations is relatively small.
Table S5: Mean reconstruction error over 100 samples with asymmetric compressed sensing. Noise standard deviation: $0.1$. | No. of Measurements | Modality 1 | Modality 2
---|---|---|---
Modality 1 only (compressed sensing) | 1 (3.125%) | 0.0542 | -
2 (6.250%) | 0.0366 | -
4 (12.500%) | 0.0205 | -
8 (25.000%) | 0.0021 | -
Modality 1 (compressed sensing) aided by Modality 2 (full information) | 1 (3.125%) | 0.0076 | 0.0073
2 (6.250%) | 0.0067 | 0.0062
4 (12.500%) | 0.0023 | 0.0024
8 (25.000%) | 0.0033 | 0.0031
Figure S8: Confusion matrix of Human Activity Recognition (HAR) classification
using our SFLR model (with compressed sensing). Ten labelled examples per
class are considered (refer to Table 1 in main manuscript for classification
results in terms of macro $F_{1}$ score).
|
# Kink propagation in the Artificial Axon
Xinyi Qi and Giovanni Zocchi <EMAIL_ADDRESS> Department of Physics and
Astronomy, University of California - Los Angeles
###### Abstract
The Artificial Axon is a unique synthetic system, based on biomolecular
components, which supports action potentials. Here we consider, theoretically,
the corresponding space extended system, and discuss the occurrence of
solitary waves, or kinks. In contrast to action potentials, stationary kinks
are possible. We point out an analogy with the interface separating two
condensed matter phases, though our kinks are always non-equilibrium,
dissipative structures, even when stationary.
Introduction. The Artificial Axon (AA) is a synthetic structure designed to
support action potentials, thus generating these excitations for the first
time outside the living cell. The system is based on the same microscopic
mechanism as that operating in neurons, the basic components being: a
phospholipid bilayer with embedded voltage gated ion channels, and an ionic
gradient as the energy source. However, while a real axon has at least two ion
channel species and opposite ionic gradients across the cell membrane, the AA
has only one. In the experiments, a current limited voltage clamp (CLVC) takes
the role of a second ionic gradient [1, 2]. The experimental system in [2] is
built around a $\sim 100\,\mu m$ size black lipid membrane. As a dynamical
system for the voltage, it operates in zero space dimensions (similar to the
”space clamp” setup with real axons [3, 4]). That is, each side of the
membrane is basically an equi-potential surface (the name Artificial Axon,
while a misnomer in this respect, is historical [1] and we propose to keep it
for the original and future versions). Inspired by this system, here we
consider - theoretically - the corresponding space extended dynamical system.
We focus on the existence of solitary wave solutions, or propagating kinks (we
will use the two terms interchangeably, to mean a front which propagates
keeping its shape). Kinks appear in many areas of condensed matter physics
[5], from domain walls in magnetic materials [6, 7] to pattern forming
chemical reactions [8]. Our particular nonlinear structures come from a
dissection, so to speak, of the mechanism of action potential generation. We
show the existence of travelling kinks in our system, and study numerically
their characteristics in relation to the control parameters, which are the
command voltage and the conductance of the CLVC. Then we discuss a ”normal
form” for this class of dynamical systems, highlighting the relation with
other kinks separating two condensed matter phases, such as the nematic -
isotropic interface in liquid crystals. The nonlinearities which thus arise
retrace the development of simplified models of the Hodgkin-Huxley axon [9],
such as introduced 60 years ago by Fitzhugh [10] and Nagumo et al [11].
Looking at kinks thus provides a somewhat different perspective on a classic
topic in the study of excitable media.
Results. We consider the AA in one space dimension. The physical system we
have in mind is a $\sim 1\,cm$ long, $\sim 100\,\mu m$ wide supported strip of
lipid bilayer with one species of voltage gated ion channels embedded. The
bilayer might be anchored to the solid surface so as to leave a sub-micron gap
(the “inside” of the axon) in between. At present, the stability of the
bilayer stands in the way of a practical realization, but this problem is not
insurmountable. Since the bilayer acts essentially like the dielectric in a
parallel-plate capacitor, the local charge density is related to the
voltage by $(\partial/\partial t)\rho(x,t)=c\,(\partial/\partial t)V(x,t)$
where $c$ and $\rho$ are capacitance and charge per unit length, respectively.
The current inside the axon follows Ohm’s law: $j=-(1/r)(\partial V/\partial
x)$ where $r$ is the resistance per unit length; then charge conservation
leads to the diffusion equation for the potential: $(\partial V(x,t)/\partial
t)-(1/(rc))(\partial^{2}V(x,t)/\partial x^{2})=0$ . In the AA, an ionic
gradient (of $K^{+}$ ions) across the membrane leads to an equilibrium
(Nernst) potential $V_{N}=(T/|e|)\,ln([K^{+}]_{out}/[K^{+}]_{in})$ , but the
system is held off equilibrium by the current injected through a current
limited voltage clamp (CLVC) [1]. The active elements are voltage gated
potassium channels inserted in the membrane: these are molecular pores which,
in the open state, selectively conduct $K^{+}$ ions. The KvAP channel used in
[2, 12] has three functionally distinct states: open, closed, and inactive;
the presence of the inactive state allows the system to generate action
potentials. Here we consider the simpler case of a “fast” channel with no
inactivation. Then the channels can be described by an equilibrium function
$P_{O}(V)$ which gives the probability that the channel is open if the local
voltage is $V$. Introducing the current sources in the diffusion equation
above one arrives at the following $(1+1)D$ dynamical system:
$\frac{\partial V(x,t)}{\partial t}-\frac{1}{rc}\frac{\partial^{2}V}{\partial x^{2}}\,=\,\frac{\chi}{c}P_{O}(V)[V_{N}-V(x,t)]+\frac{\chi_{c}}{c}[V_{c}-V(x,t)]$ (1)
V is the voltage inside the axon (referred to the grounded outside), and we
assume a distributed “space clamp” for the CLVC (this would be provided by an
electrode along the axon). Eq. (1) is of the general form of a reaction -
diffusion system; these are usually studied in the context of pattern forming
chemical reactions. For us it represents a continuum limit, i.e. we consider a
uniform, distributed channel conductance instead of discrete, point-like ion
channels. This is a mean field approximation which neglects correlations
between nearby channels. The first term on the RHS of (1), when multiplied by
$c$, is the channel current, proportional to the driving force $(V_{N}-V)$ ;
$V_{N}$ is the Nernst potential, $\chi$ the conductance (per unit length) with
channels open (i.e. $\chi=n\chi_{0}$ , $\chi_{0}$ single channel conductance,
$n$ number of channels per unit length). The second term is the current
injected by the clamp; $V_{c}$ is the clamp voltage (which is a control
parameter in the experiments), $\chi_{c}$ the clamp conductance (per unit
length), which is a second control parameter. The function $P_{O}(V)$ is a
Fermi - Dirac distribution:
$P_{O}(V)\,=\,\frac{1}{exp[-q(V-V_{0})/T]+1}$ (2)
where $q$ is an effective (positive) gating charge and $V_{0}$ the midpoint
voltage where $P_{O}(V_{0})=1/2$. To fix ideas, we will use parameters
consistent with the AA in [12] :
$V_{N}=50\,mV$ , $\chi/c=100\,s^{-1}$ , $\chi_{c}/c=5\,s^{-1}$ ,
$(1/rc)=1\,cm^{2}/s$ , $V_{0}=-10\,mV$ , $q/T=0.08\,(mV)^{-1}$. We use
Gaussian units except that we express voltages in $mV$ : this is more
convenient to relate to experimental systems. Also, the temperature in (2) and
elsewhere is in energy units; thus at room temperature $T/|e|\approx
25\,mV$, where $e$ is the charge of the electron.
Figure 1: The traveling kink solution $V(x,t)$ for (1), (2). The plot shows
snapshots of the kink at different times; the initial condition ($t=0$) is a
hyperbolic tangent. Parameter values are those given in the text, with a clamp
voltage $V_{c}=-200\,mV$. The dotted horizontal lines show the fixed points
$V_{1}$ and $V_{3}$. Notice that the shape of the kink shifts from the initial
condition at t = 0.0s to a stable shape afterwards.
The possibility of travelling kink solutions of (1) and (2) arises because,
with the clamp at a negative voltage, say $V_{c}=-100\,mV$, there is a fixed
point of (1) (a uniform, time independent solution) with $V(x,t)\approx V_{N}$
and open channels ($P_{O}(V)\approx 1$), namely $V=V_{1}\approx(\chi
V_{N}+\chi_{c}V_{c})/(\chi+\chi_{c})$. A second fixed point is
$V(x,t)=V_{3}\approx V_{c}$ and channels closed ($P_{O}(V)\approx 0$). A
stable kink solution exists, asymptotically connecting these two stable fixed
points (a third fixed point is unstable and will be discussed later). The
essential parameters in (1) are the diffusion constant $D\equiv 1/(rc)$ and
$\chi/c$ ; from these we can form a characteristic length scale
$\Delta=1/\sqrt{r\chi}$ which gives the scale of the width of the kink
solution, and a characteristic velocity $v=D/\Delta=(1/c)\sqrt{\chi/r}$ which
similarly gives the scale for the kink velocity. With the parameters above,
$\Delta\approx 1\,mm$ and $v\approx 10\,cm/s$. Fig. 1 shows snapshots of a
travelling kink obtained by integrating (1) , (2) using the parameters above
and $V_{c}=-200\,mV$. The kink was launched with a hyperbolic tangent initial
condition ($t=0$ trace in Fig. 1); it is found to quickly (on a time scale
$\sim c/\chi$) attain a stable limiting shape and thereafter travel at
constant velocity. The velocity depends on the clamp voltage $V_{c}$, as shown
in Fig. 2. We measure it by tracking the inflection point of the solution
$V(x,t)$. The solitary wave solution exists only for $V_{c}$ within certain
bounds; correspondingly there is a maximum velocity of the kink, while the
minimum velocity is zero, as we show below.
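For concreteness, a minimal explicit finite-difference integration of (1), (2) with the parameter values above and a hyperbolic-tangent initial condition might read as follows; the grid, time step, domain size, and boundary treatment are our choices, subject only to the explicit-scheme stability condition $D\,dt/dx^{2}\leq 1/2$:

```python
import numpy as np

# Parameter values from the text (voltages in mV, lengths in cm, times in s)
VN, V0, Vc = 50.0, -10.0, -200.0
chi_over_c, chic_over_c = 100.0, 5.0        # channel / clamp conductance over c
D, q_over_T = 1.0, 0.08                     # 1/(rc) and q/T

P_open = lambda V: 1.0 / (np.exp(-q_over_T * (V - V0)) + 1.0)

# Approximate fixed points connected by the kink
V1 = (chi_over_c * VN + chic_over_c * Vc) / (chi_over_c + chic_over_c)
V3 = Vc

L, nx = 4.0, 800                            # 4 cm domain, dx = 50 um
dx = L / nx
dt = 0.4 * dx**2 / D                        # within the stability bound
x = np.linspace(-L / 2, L / 2, nx)
V = V3 + (V1 - V3) * 0.5 * (1.0 - np.tanh(x / 0.1))   # tanh initial front

for _ in range(20000):                      # integrate to t ~ 0.2 s
    lap = (np.roll(V, 1) - 2.0 * V + np.roll(V, -1)) / dx**2
    V = V + dt * (D * lap + chi_over_c * P_open(V) * (VN - V)
                  + chic_over_c * (Vc - V))
    V[0], V[-1] = V[1], V[-2]               # crude no-flux boundaries

front = x[np.argmin(np.gradient(V))]        # inflection point, for the velocity
```

Tracking `front` over successive runs of the loop gives the kink velocity once the limiting shape is attained.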
Figure 2: Plot of kink velocity vs clamp voltage. Parameter values are those
given in the text. The velocity is determined by tracking the minimum of the
first derivative of $V(x,t)$, which corresponds to the inflection point of the
kink-shaped wave front. The leftmost and rightmost data points are close to
the values of $V_{c}$ beyond which the kink solution disappears. The graph is
asymmetric with respect to right moving and left moving kinks.
Let us now analyze these solitary wave solutions (see e.g. [5]). Eq. (1) is of
the form:
$\frac{\partial V(x,t)}{\partial t}-\frac{\partial^{2}V}{\partial
x^{2}}\,=\,g(V)$ (3)
where we have changed to non-dimensional variables using
$\Delta=1/\sqrt{r\chi}$ , $\tau=c/\chi$ , $V_{N}$ as the units of length,
time, and potential, respectively. Then,
$\begin{cases}g(V)\,=\,P_{O}(V)[1-V]+\frac{\chi_{c}}{\chi}\left[\frac{V_{c}}{V_{N}}-V\right]\\ P_{O}(V)\,=\,\left\\{\exp[-\frac{qV_{N}}{T}(V-\frac{V_{0}}{V_{N}})]+1\right\\}^{-1}\end{cases}$
(4)
We look for a travelling wave solution $V(x,t)=\varphi(x-ut)=\varphi(z)$,
$z\equiv x-ut$; then from (3):
$\varphi^{\prime\prime}+u\,\varphi^{\prime}\,=\,-\frac{d}{d\varphi}F(\varphi)$
(5)
where $F$ is the primitive of $g$ , i.e. $g(\varphi)=dF/d\varphi$ . We may
interpret (5) as the equation of motion of a unit mass in a potential energy
$F$ , subject to a frictional force proportional to the velocity. The
dissipation parameter $u$ is the velocity of the kink. In Fig. 3 we plot the
function $F$ obtained from integrating $g$ in (4); the analytic expression,
which involves the polylogarithm function, is readily obtained with Mathematica.
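Alternatively (our sketch), $F$ can be evaluated by direct numerical quadrature of $g$ in the non-dimensional variables of (4):

```python
import numpy as np
from scipy.integrate import quad

# Dimensionless parameters from the text:
# chi_c/chi = 5/100, Vc/VN = -200/50, q*VN/T = 0.08*50, V0/VN = -10/50
chic_over_chi, Vc_over_VN = 0.05, -4.0
qVN_over_T, V0_over_VN = 4.0, -0.2

def g(V):
    Po = 1.0 / (np.exp(-qVN_over_T * (V - V0_over_VN)) + 1.0)
    return Po * (1.0 - V) + chic_over_chi * (Vc_over_VN - V)

F = np.vectorize(lambda v: quad(g, 0.0, v)[0])   # F(V) = int_0^V g, up to a constant
Vgrid = np.linspace(-4.5, 1.2, 300)
Fgrid = F(Vgrid)                                 # reproduces the curves of Fig. 3
```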
Figure 3: The function $F(\varphi)$ obtained from (4) vs the (dimensional)
membrane voltage, for clamp voltages of $-100\,mV$ and $-200\,mV$. Parameters
are as given in the text. The fixed points $V_{1}$, $V_{2}$, $V_{3}$ shown
refer to the yellow ($V_{C}=-200\,mV$) curve. As $V_{C}$ is decreased below
$-200\,mV$ the global maximum becomes the secondary maximum and vice-versa.
Increasing $V_{c}$ above $-100\,mV$, the secondary maximum eventually
disappears, at which point there is no kink solution.
The kink solution displayed in Fig. 1 corresponds, in terms of (5), to the
particle (of coordinate $\varphi$) starting with zero velocity at the maximum
$\varphi=V_{1}$ and arriving (after an infinite time) at the secondary maximum
$\varphi=V_{3}$ , also with zero velocity. The value of the dissipation
parameter $u$ for which this is possible corresponds to the propagation
velocity of the kink. Different velocities are possible transiently, for
example, a kink initially steeper than the asymptotic shape will initially
travel faster, and slow down as it attains the stable shape and velocity. This
“shaping” of the signal expresses the existence of a stable, unique solitary
wave solution. It motivated the electronic realization of an axon, and the
corresponding influential dynamical system model, by Nagumo et al [11].
Varying the clamp voltage $V_{c}$ modifies the potential $F$ , and the kink
velocity $u$ changes correspondingly, as shown in Fig. 2. For increasing
$V_{c}$ , the difference $F(V_{1})-F(V_{3})$ increases, while the secondary
maximum at $V=V_{3}$ becomes less pronounced (Fig. 3). Correspondingly, the
kink velocity increases. At a critical clamp value $V_{c}\approx-92.8\,mV$ the
secondary maximum disappears (the minimum at $V_{2}$ becomes an inflection
point, then reverses curvature), so no kink solution exists for higher clamp
voltages. Conversely, as $V_{c}$ is decreased, the difference
$F(V_{1})-F(V_{3})$ decreases, goes through zero and becomes negative.
Correspondingly the kink velocity also goes through zero and then reverses
sign. In short, $F(V_{1})-F(V_{3})$ increases monotonically with increasing
$V_{c}$ , as does the kink velocity $u$. There is a maximum positive velocity
and a maximum negative velocity (the two are not the same). There is a
particular clamp voltage ($V_{c}\approx-244.0\,mV$ with our parameters) such
that the kink is stationary ($u=0$). Trivially, for each right-moving kink
there is an identical mirror-image left-moving kink, if one inverts the
boundary conditions at infinity. From Fig. 3 we also see that two more kink
solutions exist, one connecting the maximum at $V_{1}$ with the minimum at
$V_{2}$ (evidently travelling at a faster speed compared to the kink
connecting $V_{1}$ and $V_{3}$), and a third one connecting $V_{3}$ and
$V_{2}$. These solutions are linearly unstable, because the fixed point at
$V_{2}$ is unstable; thus they would not be observed experimentally. However,
they can still be “observed” numerically, as we see below.
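Numerically, the velocity eigenvalue can be found by a standard shooting method on (5): integrate from the maximum at $V_{1}$ and adjust $u$ until the trajectory lands on $V_{3}$. The sketch below is our construction, reusing $g$ from the previous sketch; the root brackets are chosen by inspection:

```python
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# V1 and V3 are the outer zeros of g (the maxima of F).
V1 = brentq(g, 0.5, 1.0)                      # channels-open fixed point, ~0.76
V3 = brentq(g, -4.5, -3.0)                    # channels-closed fixed point, ~-4

def miss(u, z_max=200.0, eps=1e-6):
    """Shoot from the maximum of F at V1; signed miss distance at V3."""
    stop = lambda z, y: y[0] - (V3 - 1.0)     # halt runaway trajectories
    stop.terminal = True
    sol = solve_ivp(lambda z, y: [y[1], -u * y[1] - g(y[0])],
                    [0.0, z_max], [V1 - eps, 0.0],
                    events=stop, rtol=1e-10, atol=1e-12)
    return sol.y[0].min() - V3                # >0: too much friction; <0: overshoot

u_kink = brentq(miss, -3.0, 3.0)              # dimensionless kink velocity
```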
It is interesting to put this problem in a “normal form”, and see the
connection to other kinks in condensed matter physics. The simplest function
$F$ in (5) which supports a kink solution of (3) has a maximum and a minimum,
i.e. a cubic non-linearity. A kink solution exists connecting the maximum and
the minimum, but it is unstable as the minimum is an unstable fixed point. The
next simplest case is that $F$ has three extrema; assuming a single control
parameter, we may write:
$F(V)\,=\,a\,[2(1-\alpha)V^{2}+\frac{4}{3}\alpha V^{3}-V^{4}]$ (6)
$a>0$ , $\alpha\leq 1$ where we put one stable fixed point at $V_{1}=1$ and
the unstable fixed point (the minimum of $F$) at $V_{2}=0$. The third (stable)
fixed point is at $V_{3}=(\alpha-1)$. This is not the most general form: the
choice $V_{2}=0$ forces $F$ to be an even function at the “coexistence point”
$\alpha=0$, as we discuss below; however, this choice allows us to discuss
unstable kink solutions as well. Apart from this difference, this situation
corresponds to (4); the parameter $\alpha$ has the role of $V_{c}/V_{N}$, if
$\chi_{c}/\chi$ is fixed. For $-1<\alpha\leq 1$ a stable kink with
$V(x\rightarrow-\infty)=V_{1}$ and $V(x\rightarrow+\infty)=V_{3}$ exists,
travelling with a speed $u$ which increases monotonically with increasing
$\alpha$. The stationary kink is obtained for $\alpha=0$; for $\alpha>0$ the
kink travels to the right and for $\alpha<0$ to the left. The simplest stable
kink is thus a solution of:
$\frac{\partial V(x,t)}{\partial t}-\frac{\partial^{2}V}{\partial
x^{2}}\,=\,4a\,[(1-\alpha)V+\alpha V^{2}-V^{3}]$ (7)
The cubic nonlinearity is a feature of several reduced-parameter models of
nerve excitability, notably Fitzhugh’s “BVP model” [10], and indeed of the
original Van der Pol relaxation oscillator [13], in appropriate coordinates.
Two further kink solutions of (7) exist, connecting $V_{1}$ and $V_{2}$ , and
$V_{3}$ and $V_{2}$. These are linearly unstable, but they can still be
obtained numerically, with the trick of arranging for the unstable fixed point
to be at $V=0$, as we did in (6). In this way, one can even discuss collisions
between different kinks: the only non-trivial example stemming from (6) is
shown in Fig. 4.
Figure 4: A 3D plot showing the collision of two different kinks. They are
obtained integrating (7) with $a=0.5$ , $\alpha=0.5$, and appropriate initial
conditions. Notice the velocity change after the collision. However, these
kinks are linearly unstable and so would not be observed experimentally.
Namely, the kink connecting $V_{1}$ and $V_{2}$ collides with the kink
connecting $V_{3}$ and $V_{2}$ travelling in the opposite direction, resulting
in the stable kink connecting $V_{1}$ and $V_{3}$ in the final state.
To recapitulate: the fixed points of (3) are uniform, time-independent
solutions which we might call “phases”. Two fixed points can be connected by a
kink. The fixed points are zeros of $g$, i.e. extrema of $F$, but the stable
fixed points are maxima of $F$ while the unstable ones are minima. For the
purpose of classifying, $F$ is analogous to minus the free energy of a Landau
theory describing a corresponding phase transition. The stationary kink
($\alpha=0$ in (6)) is the interface separating two coexisting phases. For
$\alpha\neq 0$ , one of the two phases is more stable and grows at the expense
of the other (i.e. the kink moves). However, we must remember that our system
is never in thermodynamic equilibrium. Even when the kink is stationary, there
are macroscopic currents in the system (the clamp current and the channel
current), and detailed balance is violated. The function $F$ derived from (4),
which is shown in Fig. 3 , has the same general form as (minus) the mean field
free energy which describes the nematic - isotropic transition in liquid
crystals [5], or also the liquid - gas transition. For the former, and
following the notation in [5], the free energy $f$ as a function of the order
parameter $S$ is:
$f\,=\,\frac{1}{2}a(T-T^{*})S^{2}-wS^{3}+uS^{4}$ (8)
where $S=\langle P_{2}(\cos\theta)\rangle$ , $P_{2}$ the Legendre polynomial of order 2 and
$\theta$ the angle between the molecular axis and the director vector. For
fixed $V_{c}$ , the evolution of $-F$ for varying $\chi_{c}/\chi$ (where $F$
is the primitive of (4)) mirrors the evolution of (8) for varying temperature
$T$. Namely, for small values of $\chi_{c}/\chi$ there is a global minimum at
positive $V$ (i.e. channels essentially open) and a secondary minimum at
negative $V$ (channels essentially closed). Increasing $\chi_{c}/\chi$ one
reaches a coexistence point where $-F$ has the same value at the two minima,
after which the global minimum is at negative $V$ and the secondary minimum at
positive $V$ (Fig. 3), i.e. the stable phase is with channels essentially
closed. As in (8) there are limits of meta-stability where the secondary
minimum disappears. If we allow $V_{c}$ as a second control parameter, we find
a coexistence line in the $V_{c}$ \- $\chi_{c}/\chi$ plane ending in a
critical point, i.e. the phenomenology of a liquid - gas transition. For
parameter values on the coexistence line, the kink is stationary.
For the case of the stationary kink, one can write an implicit formula for the
shape: with $u=0$, multiplying (5) by $\varphi^{\prime}$ and integrating from
$-\infty$ to $x$ , with the boundary conditions $\varphi^{\prime}\rightarrow
0$ , $\varphi\rightarrow\varphi_{1}$ for $x\rightarrow-\infty$ one finds
$\frac{d\varphi}{\sqrt{-F(\varphi)+F(\varphi_{1})}}\,=\,-\sqrt{2}\,dx$ (9)
For the stationary kink of (7), which occurs for $\alpha=0$ , we have
$F(\varphi)=a(2\varphi^{2}-\varphi^{4})$ , the maxima of $F$ are at
$\varphi=\pm 1$ , and integrating (9) we find $\varphi(x)=\tanh(-\sqrt{2a}\,x)$
. This is the same kink as in the mean field theory of the Ising ferromagnet,
separating two domains of opposite magnetization [5]. It has a special
symmetry (inversion about its center), stemming from the symmetry of this
particular $F$ , which is an even function at the coexistence point $\alpha=0$
. The function $F$ derived for the Artificial Axon from (4) has no such
symmetry, and correspondingly the stationary kink is not inversion symmetric
about its center, as Fig. 1 shows. For this kink too an analytic expression
can be obtained from (9) in terms of special functions.
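The tanh profile is easily checked symbolically; a short verification (our addition) that $\varphi(x)=\tanh(-\sqrt{2a}\,x)$ satisfies the stationary form of (5) with $F=a(2\varphi^{2}-\varphi^{4})$:

```python
import sympy as sp

x = sp.symbols("x", real=True)
a = sp.symbols("a", positive=True)
p = sp.symbols("p")
F = a * (2 * p**2 - p**4)                    # F at the coexistence point alpha = 0
phi = sp.tanh(-sp.sqrt(2 * a) * x)
residual = sp.diff(phi, x, 2) + sp.diff(F, p).subs(p, phi)
print(sp.simplify(residual))                 # -> 0, i.e. phi'' = -dF/dphi
```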
Conclusions. We have discussed the occurrence of travelling kink solutions in
a dynamical system which represents a space extended Artificial Axon. We
considered the simplest limit: “fast” channels described by an equilibrium
opening probability $P_{O}(V)$. Even so, the velocity of the kink represents a
non-trivial eigenvalue problem (5). More generally, introducing channel
dynamics increases the dimensionality of the dynamical system and leads to
more structure (oscillations, limit cycles i.e. action potentials) as is well
known. We point out a connection to similar kinks in other areas of condensed
matter physics: some questions which can be asked of these systems are
similar, for instance, effects beyond mean field [14, 6]. For us, this means
replacing the uniform channel conductance with a spatial distribution of
point-like channels, possibly interacting, possibly mobile. Introducing channel
dynamics (see e.g. [15, 16]), it may be interesting to extend this study to
pattern formation in 2 space dimensions. In general, this system may inspire
the construction of new reaction - diffusion systems [17] with interesting
spatio - temporal dynamics.
###### Acknowledgements.
This work was supported by NSF grant DMR - 1809381.
## References
* Ariyaratne and Zocchi [2016] A. Ariyaratne and G. Zocchi, J. Phys. Chem. B 120, 6255 (2016).
* Vasquez and Zocchi [2017] H. G. Vasquez and G. Zocchi, EPL 119, 48003 (2017).
* Marmont [1949] G. Marmont, J Cell Comp. Physiol. 34, 351 (1949).
* Koch [1999] C. Koch, _Biophysics of Computation_ (Oxford University Press, 1999).
* Chaikin and Lubensky [1995] P. Chaikin and T. Lubensky, _Principles of condensed matter physics_ (Cambridge University Press, 1995).
* Buijnsters _et al._ [2014] F. Buijnsters, A. Fasolino, and M. Katsnelson, Phys. Rev. Lett. 113, 217202 (2014).
* Kolar _et al._ [1996] H. Kolar, J. Spence, and H. Alexander, Phys. Rev. Lett. 77, 4031 (1996).
* Rotermund _et al._ [1991] H. Rotermund, S. Jakubith, A. von Oertzen, and G. Ertl, Phys. Rev. Lett. 66, 3083 (1991).
* Hodgkin and Huxley [1952] A. L. Hodgkin and A. F. Huxley, J. Physiol. (Lond.) 117, 500 (1952).
* Fitzhugh [1961] R. Fitzhugh, Biophys. J. 1, 445 (1961).
* Nagumo _et al._ [1962] J. Nagumo, S. Arimoto, and S. Yoshizawa, Proceedings of the IRE 50, 2061 (1962).
* Vasquez and Zocchi [2019] H. G. Vasquez and G. Zocchi, Bioinspiration and Biomimetics 14, 016017 (2019).
* Van der Pol [1926] B. Van der Pol, Phil. Mag. 2, 978 (1926).
* Buijnsters _et al._ [2003] F. Buijnsters, A. Fasolino, and M. Katsnelson, Nature 426, 812 (2003).
* Morris and Lecar [1981] C. Morris and H. Lecar, Biophys. J. 35, 193 (1981).
* Pi and Zocchi [2020] Z. Pi and G. Zocchi, arXiv:2012.00221 (2020).
* Vanag and Epstein [2004] V. K. Vanag and I. R. Epstein, Phys. Rev. Lett. 92, 128301 (2004).
|
where $X_{\rm i0}/X_{\rm j0}=y_{\rm i}/y_{\rm j}$. The value of $T$ could in
this case be found by substituting into Equation (4.16) the yield ratio (given
by nucleosynthesis theory) and the abundance ratio in the early Solar System
(obtained from the abundances of decay products in meteorites). The age of the
Galaxy, which is simply $T$ plus the age of the Sun, could then be determined.
Of course, the elements were probably synthesized continually during the
period $T$ from first star formation in the Galaxy until the Solar System
condensed, so Equation (4.16) is unrealistic. As first shown by Schramm &
Wasserburg (1970), one can define a useful age parameter, here denoted $T_{\rm
ij}$, analogously to the solution of (4.16) for $T$:
$T_{\rm ij}\equiv\frac{1}{\lambda_{\rm i}-\lambda_{\rm
j}}\ln\left[\frac{y_{\rm i}/y_{\rm j}}{X_{\rm i}(T)/X_{\rm j}(T)}\right],$
(4.17)
which can, in principle, be evaluated from meteoritic and nuclear data
independently of Galactic evolution. In the limit of long-lived elements,
($\lambda_{\rm i}T\ll 1,\>\lambda_{\rm j}T\ll 1$), $T_{\rm ij}$ is just the
mean age of elements in the gas, $T_{Z}$, at the time when the Solar System
formed (Tinsley, 1975c).
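As a concrete illustration, Equation (4.17) can be evaluated in a few lines; the numbers below are hypothetical stand-ins for the yield ratio (from nucleosynthesis theory) and the early-Solar-System abundance ratio (from meteorites):

```python
import numpy as np

def age_parameter(lam_i, lam_j, yield_ratio, abundance_ratio):
    """Equation (4.17); time units are those of 1/lambda."""
    return np.log(yield_ratio / abundance_ratio) / (lam_i - lam_j)

# Hypothetical pair: mean lives 6.5 and 20 Gyr, production ratio 1.6,
# early-Solar-System abundance ratio 0.8 (illustrative numbers only).
print(age_parameter(1 / 6.5, 1 / 20.0, 1.6, 0.8))   # ~ 6.7 Gyr
```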
The relation between the mean age ($T_{Z}$) and the elapsed time ($T$) is
model-dependent, so estimates of $T_{\rm ij}$ for long-lived pairs of elements
do not give $T$ directly. Different possibilities include the following.
1.
If essentially all nucleosynthesis of the relevant elements took place in an
initial burst, then $T=T_{Z}$.
2.
The simple model for chemical evolution gives the intuitive result that the
mean age is half the elapsed time, $T_{Z}=T/2$; however, because this model is
discrepant with stellar metallicities, one cannot assume that $T$ is given
simply by $2T_{Z}$ in reality.
3.
In extreme infall models, the ISM and heavy elements in it have a mean age
${\sim}M_{\rm g}/f$ at all times greater than $M_{\rm g}/f$; so if chemical
evolution in the disk was strongly affected by infall before the Solar System
formed, the value of $T_{Z}$ obtained from meteoritic abundances may reflect
only the timescale for infall, independently of the age $T$.
4.
There are consistent models for the Solar neighborhood, with some infall
and/or some early enrichment, that have values of $T_{Z}\simeq T/2$ (as
emphasized by Hainebach & Schramm, 1977), but since not all plausible models
have this relation it cannot be used confidently (as emphasized by Tinsley,
1977c).
In summary, there is a large uncertainty in any age estimate of the Galaxy
derived from nucleochronology, except of course that the age of the Solar
System is a reliable lower limit!
The initial Solar System abundances of short-lived radioactive elements are
sensitive to rates of nucleosynthesis at times immediately preceding the
solidification of the meteorites. Their abundances suggest that the
nucleosynthesis of most elements ceased ${\sim}10^{8}\>\rm yr$ before the
solidification time, but some material was produced only ${\sim}10^{6}\>\rm
yr$ earlier. Interpretations of these timescales include passage of the pre-
Solar material through spiral arms at $10^{8}$-yr intervals; enrichment by
fresh supernova ejecta each $10^{6}\>\rm yr$, which could result from the
average supernova rate in the Solar neighborhood; a last-minute supernova that
triggered the formation of the Solar System; and locking of radioactive
elements, with their decay products, into grains long before the Solar System
formed. These possibilities are reviewed briefly by Podosek (1978), and
discussed in detail by several authors in a conference proceedings edited by
Gehrels (1978). As yet there is no consensus on the interpretation of short-
lived radioactivities in the early Solar System, but ultimately they should
provide valuable information on star formation and interstellar chemistry.
## 5 Chemical Evolution of Galaxies
For other galaxies, and for regions of our own outside the Solar neighborhood,
there is much less detailed information on abundance distributions, but there
are some striking trends that call for explanations involving the formation
and later dynamical evolution of galaxies. A few relevant properties have been
mentioned in Section 1.1, and now further details will be described and some
of the theoretical models reviewed. Other general reviews of this subject
include those by Audouze & Tinsley (1976), Pagel (1978b), and Trimble (1975).
### 5.1 Outline of Relevant Data
Abundances are very often found to decrease outward in galaxies: gradients
have been observed in the H ii regions of disks, in disk stars, and in the
stars of spheroidal systems including elliptical galaxies and the bulge-halo
components of spirals.
In the Galactic disk, H ii regions within a few kpc of the Sun have an average
gradient $d{\rm[O/H]}/dr\simeq-0.1\>\rm kpc^{-1}$ (where $r$ is the
galactocentric distance), while stars of intermediate age have a gradient
$d{\rm[Fe/H]}/dr\simeq-0.05\>\rm kpc^{-1}$; an open cluster about $10\>\rm
kpc$ from the Sun in the anticenter direction, of age only $\lesssim
10^{8}\>\rm yr$, is apparently as metal-poor as $\rm[Fe/H]<-1$ (Christian &
Janes, 1979). (Oxygen and iron abundances are quoted for the ISM and stars,
respectively, because these are the best observed elements). The uncertainties
are such that the apparent age dependence of the gradient may or may not be
real. These data are reviewed by Janes (1977), Mayor (1977), and Peimbert
(1977). H ii regions in external galaxies generally show gradients of a
similar magnitude (e.g. Smith, 1975; Shields & Searle, 1978; and references
therein). However, only a marginal gradient appears in the Large Magellanic
Cloud and none in the Small Cloud (Pagel et al., 1978).
The most obvious explanation for these gradients would be that the outer
regions of disks are less chemically evolved than the inner regions, in the
sense of having converted a smaller fraction of their gas to stars. The simple
model, for example, would predict a $Z$ gradient given by $Z=y\ln\mu^{-1}$
arising from a gradient in the gas fraction $\mu$ (Section 3.2.1). However,
the best studied galaxies (the Milky Way and M101) probably do not have a
sufficient gradient in $\mu$ for this explanation to suffice. Gordon & Burton
(1976) found that the combined surface densities of atomic and molecular
hydrogen lead to a nearly constant gas fraction ($\mu\sim 0.05$) at $R>4\>\rm
kpc$ in the Galaxy, and Shields & Searle (1978) noted a similar problem in
M101. The amount of ISM interior to the Sun could be overestimated, since the
$\rm H_{2}$ density is derived from observations of CO molecules on the
assumption of a constant abundance ratio $\rm CO/H_{2}$; if in fact $\rm
CO/H_{2}$ increases inward because of a $\rm C/H$ abundance gradient, then
there is less $\rm H_{2}$ than had been thought at small radii (Peimbert,
1977). With this correction, the Galaxy probably has some gradient in $\mu$,
which could account in part for the interstellar composition gradient. Of
course, the simple model is known to be invalid in the Solar neighborhood, so
we do not expect the formula $Z=y\ln\mu^{-1}$ to explain gradients in
detail. Other ways of generating gradients will be mentioned in Section 5.5.
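To see quantitatively why a nearly constant gas fraction falls short, the simple-model prediction can be differentiated: $d\ln Z/dr=-(d\mu/dr)/(\mu\ln\mu^{-1})$. A short check with illustrative values (our assumptions) is:

```python
import numpy as np

def simple_model_gradient(mu, dmu_dr):
    """d log10(Z)/dr implied by Z = y ln(1/mu); the yield y cancels out."""
    dlnZ_dr = -dmu_dr / (mu * np.log(1.0 / mu))
    return dlnZ_dr / np.log(10.0)            # in dex per unit of r

# Illustrative values: mu ~ 0.05 with a mild gradient of 0.005 per kpc
print(simple_model_gradient(0.05, 0.005))    # ~ -0.014 dex/kpc, well short of -0.1
```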
The Galactic halo stars also have a metallicity gradient. Studies of
individual stars in globular clusters show that a spread of metallicities of
$\rm[Fe/H]\sim-2$ to $0$ occurs in the innermost $5\>\rm kpc$ (measured from
the Galactic center, at any angle to the disk), while the upper limit drops to
${\sim}{-1}$ at greater radii (Cowley et al., 1978; Searle & Zinn, 1978). It
is not clear whether a systematic decline in iron and/or CNO abundances
persists further out (McClure, 1979; Kraft, 1978).
Many elliptical and S0 galaxies have gradients of integrated colors and line
strengths in their spectra that are best interpreted as composition gradients.
The same quantities also vary systematically with the absolute magnitudes of E
and S0 galaxies, as illustrated in Figure 10, indicating that the brighter
galaxies are more metal-rich. A thorough review of this subject is given by
Faber (1977). The analysis and calibration of abundance effects in the
integrated light of a galaxy are much more complicated than for individual
stars, because line strengths are strongly affected by the mixture of stellar
temperatures and luminosities, and because the whole population in the HR
diagram is shifted by effects of metallicity on the interiors and atmospheres
of stars (Section 2.4.5). Until recently, it was not clear whether all
elements heavier than helium enter the composition trends in E and S0
galaxies, or whether a few with strong spectral features (N, Mg, Na), are
mainly responsible. Cohen (1978) has now made a detailed observational study
of some lines of Ca, Na, Mg, and Fe, together with approximate theoretical
predictions of how their strengths should vary with the composition of a
population of stars; she finds no evidence against the abundances of all of
these elements varying in the same proportions.
Figure 10: A color–magnitude diagram for elliptical and S0 galaxies in several
clusters of galaxies (Visvanathan & Sandage, 1977). The color $(u-V)_{\rm c}$
is corrected for reddening, redshift, and aperture effects; magnitudes are
adjusted to the distance of the Virgo cluster using redshift ratios.
(_Crosses_ denote Virgo galaxies not used by Visvanathan & Sandage, 1977 in
their linear fit to the data). The _straight lines_ are a linear fit, with
$\pm 2\sigma$ boundaries. Despite the excellent linear fit over a large range
of magnitudes, the brightest few points for the Virgo cluster alone (_filled
circles_) and for other clusters (_open circles_) show a tendency to level off
in color.
The color–magnitude relation for elliptical galaxies is linear over a wide
range of magnitudes (Figure 10), which suggests a power-law relation
between metallicity and luminosity. A tentative calibration, subject to
revision when both the data and the theoretical color–metallicity relation are
more secure, is
$Z_{\rm s}\propto L_{\rm B}^{0.3}$ (5.1)
(Tinsley, 1978b), where $Z_{\rm s}$ is the average metallicity of stars in an
elliptical galaxy of blue luminosity $L_{\rm B}$. This relation was derived
from population models differing only in metallicity, so that the stars in all
cases had the same IMF and age (as outlined in Section 6.1.2). Such models
also predict that the mass-to-luminosity ratio should depend on metallicity,
yielding a relation $M_{\rm s}/L_{\rm B}\propto L_{\rm B}^{0.13}$. A relation
very similar to this has been obtained observationally by Schechter & Gunn
(1979) for the cores of elliptical galaxies. Since these relations imply
$M_{\rm s}\propto L_{\rm B}^{1.13}$ and hence $Z_{\rm s}\propto M_{\rm s}^{0.3/1.13}$,
Equation (5.1) corresponds to a tentative metallicity–mass relation,
$Z_{\rm s}\propto M_{\rm s}^{0.25},$ (5.2)
where $M_{\rm s}$ is the mass of stars (not any extended hidden halo
material), and the main uncertainties in the exponent are due to the
color–magnitude data and to the color–metallicity calibration. There is some
evidence that the color–magnitude relation levels off at the magnitudes of the
brightest cluster galaxies, as can be seen, for example, from the brightest
few points in Figure 10.
Figure 11: (a) Star formation rates and (b) metallicities of newly formed
stars (i.e., $Z$ of the gas), at several radii in a collapse model for the
formation of a spherical galaxy (Larson, 1974a). The radius in pc is marked on
each curve, and the three ticks indicate the times at which star formation is
$10\%$, $50\%$, and $90\%$ complete (relative to the final mass of stars) at
that radius.
Heavy elements have been detected in intergalactic gas, and since these
elements almost certainly come from galactic stars they are relevant to the
chemical evolution of galaxies. A feature due to iron has been observed in the
diffuse X-ray emission spectra of several rich clusters of galaxies; the
interpretation is that these clusters contain hot gas (${\sim}10^{8}\>\rm K$),
emitting by thermal Bremsstrahlung, with approximately Solar abundances of
iron. The mass of intergalactic gas inferred from the data is model-dependent
(e.g. Bahcall & Sarazin, 1977; Perrenod, 1978), and is roughly between 1 and
30 times the total luminosity of cluster galaxies, in solar units. Now the
mass of stars in galaxies is ${\sim}10$ times their total luminosity, in solar
units (this mass is not to be confused with the virial masses of clusters,
which are ${\sim}100$ times the total luminosity and which provide the main
evidence for hidden non-stellar matter in association with galaxies), and
their average metallicity is approximately Solar, so the intergalactic medium
(IGM) in the rich clusters apparently contains about the same total mass of
iron as do the stars themselves. These observations suggest that galaxies
sometimes lose enough metal-rich gas to affect their own chemical evolution
substantially.
Another striking observation of metal-rich IGM is absorption due to Ca and Mg
in the spectrum of a quasar (3C 232) lying $1.9^{\prime}$ on the sky from a
foreground spiral galaxy (NGC 3067); the absorption redshift matches that of
the galaxy, and neutral hydrogen absorption has been detected at the same
redshift in the radio spectrum of the quasar (Haschick & Burke, 1975). The
line strengths and positions of the objects imply that there is gas with
roughly Solar abundances at least $17\>\rm kpc$ from the center of the galaxy
(Boksenberg & Sargent, 1978).
### 5.2 Abundance Gradients in Spheroidal Systems
Most stars in elliptical galaxies and in the bulge-halo components of spirals
were probably formed within a few times $10^{9}\>\rm yr$ at a time
${\sim}10^{10}\>\rm yr$ ago. The abundance gradients in these systems
therefore reflect processes occurring during the time of star formation in a
protogalaxy; several such processes have been suggested as possible causes of
gradients.
#### 5.2.1 Dissipative collapse
The most extensive models exploring the effects of star formation during the
collapse of an initially gaseous protogalaxy are those reviewed by Larson
(1976a). In the spheroidal components of model disk galaxies, and in model
ellipticals, star formation becomes increasingly concentrated toward the
center as the density builds up. This effect is illustrated in Figure 11 (a),
which shows the SFR as a function of time at several radii in a spherical
collapse model. Stars formed at a given radius remain in orbits with little
net inward motion, but the gas sinks further in because it is dissipative
(i.e., its kinetic energy of radial motion is partly lost via collisionally
induced radiation). Thus the metals ejected by evolving stars are carried
inward by the gas, and an abundance gradient develops in the gas. As stars
continue to form, their compositions reflect this gaseous abundance gradient.
Figure 11 (b) shows the evolution of metallicity of newly formed stars (i.e.,
$Z$ of the gas) at several radii in a spherical model, and the rapid
development of a gradient is clear. The same process of dissipation produces a
central concentration in the gas density, which leads to a condensed nucleus
of stars.
If there were no dissipation, the stars and gas would collapse together and
the metals would not be concentrated inward. Thus in the outer parts of some
of these models, where the protogalactic density is too low for dissipation to
be effective, no stellar abundance gradient appears. The possible lack of a
gradient in metallicities of globular clusters beyond ${\sim}10\>\rm kpc$ from
the Galactic center has therefore been interpreted as showing that the
collapse of the Galaxy began with the stars and gas in free-fall; conversely,
the gradient at smaller radii is interpreted as showing the effects of
dissipation at a later stage of the collapse (Larson, 1977a).
#### 5.2.2 A gradient in the IMF
Aside from any effects of gas flows, negative metallicity gradients could be
produced by gradients in the IMF that led to a yield decreasing outward. Since
the yield (Equation 3.12) depends on the relative numbers of metal-producing
stars, possibilities would be a steeper slope for massive stars or more low-
mass “inert” stars, at larger radii. In the latter case, the stars that
survive to the present would still have a radial gradient in their mass
function, with an interesting consequence: most of the luminosity of an old
population of stars comes from dwarfs near the MS turnoff and evolving giants,
while most of the mass is in less massive objects that contribute little
light; thus the $M/L$ ratio increases with the proportion of very low-mass
stars, and the postulated gradient in the IMF would lead to an outward
increase of $M/L$. Such a trend is indeed observed, although it is sometimes
ascribed to an extended “halo” of non-stellar condensed objects that formed
too soon to affect chemical evolution (Section 2.2.2). van den Bergh (1975)
has suggested that the IMF tends to have more massive stars in regions of high
density, and this view of the origin of metallicity gradients is part of his
evidence. The hypothesis of a gradient in the IMF in spheroidal systems has no
convincing theoretical basis, and the trends it would explain can arise in
other ways, but nevertheless systematic variations in the IMF could be as
important as they are hard to verify.
#### 5.2.3 Finite stellar lifetimes
The timescale for star formation in a protogalaxy could be comparable to the
lifetimes of some metal-producing stars, in which case stars formed early
would be relatively unenriched. Thus if the outermost stars formed before most
of the central ones, there would be a negative metallicity gradient. In the
models in Section 5.2.1 (Larson, 1976a) it is assumed that all metals are
produced by stars with lifetimes $<3\times 10^{7}\>\rm yr$, so this effect is
negligible; but iron production by Type I supernovae could in fact be
significant on longer timescales (Section 2.4.4). What timescales are
relevant? The minimal collapse time for a protogalaxy is the free-fall time,
$t_{\rm ff}=1.7\times 10^{6}\left(\frac{M}{10^{11}\>\rm
M_{\odot}}\right)^{-\frac{1}{2}}\left(\frac{R}{1\>\rm
kpc}\right)^{\frac{3}{2}}\>\rm yr,$ (5.3)
where $M$ and $R$ are the mass and radius. For example, a galaxy with
$M=2\times 10^{11}\>\rm M_{\odot}$ and $R=15\>\rm kpc$ has $t_{\rm ff}=7\times
10^{7}\>\rm yr$; and a protogalaxy of the same mass collapsing from $R=50\>\rm
kpc$ has $t_{\rm ff}=4\times 10^{8}\>\rm yr$. Much longer timescales for star
formation are possible if the dissipation is slow, and the collapse time of
the system can be much longer if its boundary initially expands with the
Universe (Gunn & Gott, 1972). At least the outer parts of large galaxies could
therefore be metal-poor partly because of the finite lifetimes of metal-
producing stars.
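As a quick numerical check of Equation (5.3) and the two examples just given, here is a minimal sketch; the function name and layout are illustrative, not from the original.

```python
# Free-fall time from Equation (5.3), in years; inputs in M_sun and kpc.
def t_freefall_yr(mass_msun, radius_kpc):
    return 1.7e6 * (mass_msun / 1e11) ** -0.5 * radius_kpc ** 1.5

print(f"{t_freefall_yr(2e11, 15.0):.1e} yr")  # ~7e7 yr, as quoted above
print(f"{t_freefall_yr(2e11, 50.0):.1e} yr")  # ~4e8 yr, as quoted above
```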
A potential test is to look for variations in relative abundances. For
example, if oxygen comes only from very massive stars but iron comes partly
from stars of intermediate mass (Section 2.4; Section 4.4.1), then iron should
be more deficient than oxygen in the outermost stars. The hypothesis of a
gradient in the IMF of massive stars would predict the opposite trend in
relative abundances. Current data do not detect any gradients in relative
abundances, but oxygen itself has not been studied, nor have the faint
outer regions of elliptical galaxies.
It is quite possible that all of the processes discussed above were effective
in producing abundance gradients in spheroidal systems, so clear choices among
the theories are not to be expected.
### 5.3 The Metallicity–Mass Relation for Elliptical Galaxies
The correlation between metallicity and mass (color and luminosity) of
elliptical galaxies has been explained in several ways, of which two will be
reviewed here. Each involves dynamical effects during galaxy formation that lead to less complete conversion of the protogalactic gas into stars, and so to a smaller final mean stellar metallicity, in smaller systems. One could,
of course, invoke differences in the IMFs of galaxies as a function of their
mass, but there is no independent evidence for a trend of the required form.
#### 5.3.1 Supernova-driven winds
Star formation and chemical enrichment are cut off in a protogalaxy if the
residual gas is lost, and a possible loss mechanism is a galactic wind
energized by supernova explosions. Galactic winds were first analyzed for a
steady-state case by Johnson & Axford (1971) and Mathews & Baker (1971);
similar analyses have been made for nuclear bulges of spirals by Faber &
Gallagher (1976) and for bulge-disk systems by Bregman (1978). A galaxy
sustains a steady-state wind if the supernova rate divided by the rate of
supply of gas (from evolving stars) gives the gas enough energy to escape from
the galactic potential well. For protogalaxies, we are interested not in the
steady state, but in conditions for the initiation of a wind that can remove
essentially all of the residual gas. Larson (1974b) discussed possible effects
of supernovae in heating the gas, and adopted a simple approximation as the
condition for its removal: the residual gas is assumed to be lost suddenly
when the total heat input from supernovae has provided the gas with the escape
energy, assuming uniform conditions throughout the protogalaxy. This
approximation is plausible enough to suggest how the loss condition scales
with the relevant parameters, but there are unavoidably large uncertainties in
the astrophysics involved so the results are not very secure. The scaling can
be derived as follows.
Let $E$ be the thermal energy imparted to the ISM by supernovae when a unit
mass of stars is formed; $E$ is proportional to the fraction of stars that
become supernovae, to the mean kinetic energy of material ejected in a
supernova explosion, and to the efficiency with which this energy is
transferred to the ISM as heat. (The last factor is the most uncertain). As an
approximation, let $E$ be treated as a constant, despite finite stellar
lifetimes and complicated effects of the clumpiness of the ISM, its chemical
composition, etc. Let us consider a spherical protogalaxy of mass $M$ that has
formed a mass $M_{\rm s}$ of stars and has residual gas mass $M_{\rm
g}=M-M_{\rm s}$. The condition for gas to escape can be written
$\rm Potential\>energy\>of\>gas=Energy\>from\>supernovae,$
i.e.,
$K\frac{GMM_{\rm g}}{R}=EM_{\rm s},$ (5.4)
where $K$ depends on the density distribution in the galaxy and will be
assumed constant as another simplification. Large elliptical galaxies are
observed to be more tightly bound than small ones, so a greater fraction of
their gas must be converted to stars before the condition (5.4) is satisfied;
therefore, their surviving stars have a greater mean metallicity. Other
consequences of this scenario are that the more massive galaxies collapse more
extensively before star formation is cut off, so they are predicted to have
more condensed nuclei and steeper metallicity gradients than smaller galaxies
(Section 5.2.1). These trends are observed, lending support to this type of
origin for the increase of metallicity with mass.
The form of the metallicity–mass relation can be accounted for using the same
approximate model. Let the initial mass–radius relation for protogalaxies have
the form
$M\propto R^{\alpha},$ (5.5)
so Equation (5.4) can be written
$M^{1-\frac{1}{\alpha}}\left(M-M_{\rm s}\right)\propto M_{\rm s}.$ (5.6)
Asymptotic equations for the mean metallicity of stars can be derived from
very general considerations: the mass of metals synthesized and ejected is
$yM_{\rm s}$, so at early stages of evolution when $M\simeq M_{\rm g}$, we
have approximately
$Z_{\rm s}\propto\frac{yM_{\rm s}}{M_{\rm g}}\simeq\frac{yM_{\rm
s}}{M},\>\>\left(Z_{\rm s}\ll y\right).$ (5.7)
At late stages, Equation (3.21) predicts that in all cases where mass is
conserved,
$Z_{\rm s}\rightarrow y,\>\>{\rm as}\>\>M_{\rm s}\rightarrow M.$ (5.8)
The results from numerical collapse models verify these relations in cases of
interest here. Substituting Equation (5.7) into (5.6), we find the stellar
metallicity–mass relation,
$Z_{\rm s}\propto M_{\rm s}^{\frac{\alpha-1}{2\alpha-1}},\>\>\left(Z_{\rm
s}\ll y\right).$ (5.9)
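For the record, the algebra connecting Equations (5.6), (5.7), and (5.9): with $M\simeq M_{\rm g}$, Equation (5.6) becomes $M^{2-\frac{1}{\alpha}}\propto M_{\rm s}$, i.e., $M\propto M_{\rm s}^{\frac{\alpha}{2\alpha-1}}$; Equation (5.7) then gives $Z_{\rm s}\propto M_{\rm s}/M\propto M_{\rm s}^{1-\frac{\alpha}{2\alpha-1}}=M_{\rm s}^{\frac{\alpha-1}{2\alpha-1}}$.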
The tentative empirical relation (5.2) is obtained if $\alpha=1.5$, which
agrees fairly well with the observed mass–radius relation for elliptical
galaxies if one considers how the stellar system must swell (to conserve
energy) when the gas is lost (Tinsley, 1978b). According to Equation (5.8),
the power-law relation between $Z_{\rm s}$ and $M_{\rm s}$ must level off at
large masses, with $Z_{\rm s}\rightarrow y$ in the limit when essentially all
the original material is converted to stars; this behavior agrees with the
levelling of the color–magnitude relation at the magnitudes of the brightest
cluster galaxies.
The critical parameter $E$ can plausibly have a value that would give the
right scale for the $Z_{\rm s}$–$M_{\rm s}$ relation (Larson, 1974b), but its
value is very uncertain so the success of this theory must be considered
tentative. The interaction between supernovae and the ISM could, in fact, be
so weak as to drive a wind in only the very smallest protogalaxies.
#### 5.3.2 Bursts of star formation in merging subsystems
Since the largest elliptical galaxies are the most metal-rich, a natural
hypothesis is that chemical enrichment accompanied the _growth_ of galaxies by
successive mergers among small subsystems. As noted in Section 1.2, gaseous
protogalaxies probably consisted of many dense lumps, so it is only a change
of viewpoint to consider these as merging subsystems rather than as a
collapsing unit. Moreover, extrapolations backward in time from the incidence
of strongly interacting galaxies in sight today suggest that collisions and
coalescence were common processes in the early lives of galaxies (Toomre,
1977; Vorontsov-Velyaminov, 1977). A property of colliding galaxies most
relevant to chemical evolution is that they often appear to be undergoing
intense bursts of star formation induced by the dynamical disturbance (Section
7.2), so it is reasonable to assume that star formation was caused in the past
by coalescence of subsystems in a protogalaxy. A qualitative model of chemical
enrichment by this process has been proposed by Tinsley & Larson (1979):
elliptical galaxies form by a hierarchical sequence of mergers among
subsystems, starting from small unenriched gas clouds; a burst of star
formation occurs at each merger, so at each stage of growth the fraction of
the total mass in stars increases and the mean metallicities of stars and gas
increase. In this picture, the final mass of an elliptical galaxy is
determined by the total mass of the small pieces initially present in its
surroundings. When these have all been mopped up, efficient star formation
stops. Any residual gas may get swept away if the system is moving through an
ambient IGM, or possibly blown out in a wind; if it remains bound to the
system, it could settle to a disk and leave the “elliptical galaxy” as the
central bulge of a spiral.
The resulting $Z_{\rm s}$–$M_{\rm s}$ relation depends on the _efficiency_ of
star formation as a function of the mass of the system (i.e., the system that
has been built after a given number of mergers), where efficiency is defined
as the mass of stars formed (in a given burst) per unit mass of gas. An
approximately power-law relation between $Z_{\rm s}$ and $M_{\rm s}$ can be
obtained only if the efficiency increases with the total mass of the system,
i.e., with successive mergers. For example, a relation
${\rm Efficiency\>of\>star\>formation\propto(Total\>mass)}^{p},$ (5.10)
where $p$ is a constant, leads to
$Z_{\rm s}\propto M_{\rm s}^{\frac{p}{1+p}},\>\>\left(Z_{\rm s}\ll y\right),$
(5.11)
with the usual limit $Z_{\rm s}\rightarrow y$ when all the gas is consumed.
The relation (5.10) can be justified qualitatively by considerations of gas
compression during collisions and mergers of subsystems. To reproduce the
tentative empirical relation (5.2), Equation (5.11) needs $p=1/3$, which is
consistent with the compression arguments. Equation (5.11) results from (5.10)
independently of such details as the mass distribution of merging pieces, and
it can be understood as follows: Equation (5.7) is true in any models with
mass conservation (including here conservation of the total mass of merging
pieces), while Equation (5.10) gives, dimensionally,
$\frac{M_{\rm s}}{M_{\rm g}}\propto M^{p},$
so that
$M_{\rm s}\propto M^{1+p}\>\>{\rm when}\>M_{\rm g}\simeq M\>\>\left(M_{\rm
s}\ll M\right);$ (5.12)
Equation (5.11) then follows from (5.7) and (5.12). The power law is again
predicted to level off, with $Z_{\rm s}\rightarrow y$ at high masses,
according to Equation (5.8).
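Spelled out: Equation (5.12) gives $M\propto M_{\rm s}^{1/(1+p)}$ at early stages, and Equation (5.7) then gives $Z_{\rm s}\propto M_{\rm s}/M\propto M_{\rm s}^{1-\frac{1}{1+p}}=M_{\rm s}^{\frac{p}{1+p}}$, which is Equation (5.11).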
As a theory for the origin of the metallicity–mass relation, this model has
the advantage of invoking processes that can be studied in nearby interacting
galaxies, but it remains to be seen whether the structural properties of
elliptical galaxies are fully consistent with its dynamical implications.
#### 5.3.3 Mergers of stellar systems
The color–magnitude (metallicity–mass) relation for elliptical galaxies is
apparently affected in a way that has nothing to do with chemical evolution:
central cluster galaxies accrete their neighbors, by the process of dynamical
friction. There is no star formation during these mergers, because the
galaxies involved are ellipticals or S0s with almost no gas. Thus the growth
in luminosity is not accompanied by chemical enrichment, and it can make the
growing system bluer because the surrounding galaxies that it accretes are
generally smaller than the central giant. Galactic cannibalism by dynamical
friction was first proposed by Ostriker & Tremaine (1975), and later papers
(e.g. Hausman & Ostriker, 1978, and references therein) have developed its
implications for cosmological tests, the origin of core–halo structure of cD
galaxies, the luminosity function of galaxies in clusters, and the
color–magnitude relation itself. The process obviously tends to make the
color–magnitude relation turn over toward bluer colors at the bright end. This
effect has been proposed as a test for the occurrence of cannibalism in
clusters, but the results are not unambiguous because there is an intrinsic
upper limit, $Z_{\rm s}\rightarrow y$, to the average stellar metallicity in
the models discussed above, that leads to a flattening of the relation anyway.
Strong evidence that galaxies in the centers of clusters _do_ merge with each
other is given by the lumpy appearance of the central supergiant (cD) members
of some clusters; the lumps are interpreted as recently swallowed galaxies,
and the timescale for them to merge into a smooth system is generally
$<10^{9}\>\rm yr$ (Gunn, 1977).
### 5.4 The Intergalactic Medium and Gas Lost from Galaxies
Loss of interstellar gas from galaxies can both affect their own evolution, as
discussed for example in Section 5.3 above, and be a significant source of
metal-enriched IGM.
#### 5.4.1 Loss of metals from galaxies
The mass of metals lost from an elliptical galaxy can be estimated by the
following argument, which is independent of the method of gas loss. The mass
of metals ever made by stars in the galaxy is ${\sim}yM_{\rm s}$ (by the
definition of the yield, Equation 3.12), and the mass of metals presently in
its stars is $Z_{\rm s}M_{\rm s}$, so the mass lost to the IGM at some stage
is ${\sim}(y-Z_{\rm s})M_{\rm s}$. This reasoning was used by Larson &
Dinerstein (1975) to predict a substantial metal-rich IGM in clusters of
galaxies, and a number of models with similar results have been advanced since
the iron X-ray line was discovered. An essentially model-independent estimate
can be made as follows.
Let $\phi(M_{\rm s})$ be the mass function of elliptical galaxies in a
cluster. Then the total mass of metals they have supplied to the IGM is
$M_{\rm Zi}=\int\left[y-Z_{\rm s}\left(M_{\rm s}\right)\right]M_{\rm s}\phi\left(M_{\rm s}\right)\ dM_{\rm s},$ (5.13)
where $Z_{\rm s}(M_{\rm s})$ is a function derivable from the color–magnitude
relation. In practice, $M_{\rm s}$ is expressed in terms of luminosity, and
$\phi(M_{\rm s})$ is obtained from the luminosity function. The value of $y$
should be taken as the maximum $Z_{\rm s}$ of an elliptical galaxy, which is
hard to obtain since the extensive outer regions that are probably metal-poor
are seldom observed; setting $y$ equal to the mean metallicity of local stars
(a little under $\rm Z_{\odot}$) is equivalent if elliptical galaxies have the
local IMF. In a calculation equivalent to the one just outlined, Tinsley &
Larson (1979) found that a cluster of elliptical galaxies would contain a mass
${\sim}(2-5)\>\rm M_{\odot}Z_{\odot}$ of intergalactic metals per solar
luminosity.
This is a very significant quantity of metals. For example, about $1/3$ of the
luminosity of the Coma cluster is due to its elliptical galaxies, so they
would provide a mass ${\sim}1\>\rm M_{\odot}Z_{\odot}$ of metals per solar
luminosity of the cluster, corresponding to ${\sim}0.1\>\rm Z_{\odot}$ of
metals per unit mass of galaxies (the ordinary stellar mass). If the bulges of
spiral and S0 galaxies also lost their metals to the IGM, this estimate
could be doubled, but some of those metals may be in the disks (cf. Section
4.3.1). Iron can be considered as a representative metal in this calculation,
so we predict ${\sim}1\times\rm(Fe/H)_{\odot}\times M_{\odot}$ of iron in the
IGM per solar luminosity of the cluster. The actual mass of iron is quite
uncertain, and could be equal to the predicted amount.
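For readers who want to experiment with the budget in Equation (5.13), the following minimal sketch evaluates it numerically; the Schechter-like mass function, its parameters, and the normalization of $Z_{\rm s}(M_{\rm s})$ are all illustrative assumptions, not values from the text.

```python
import numpy as np

y = 0.02                                    # yield, taken as roughly Z_sun (assumed)
M_star = 1e11                               # characteristic stellar mass, M_sun (assumed)

Ms = np.logspace(8, 12.5, 2000)             # stellar masses, M_sun
phi = (Ms / M_star) ** -1.25 * np.exp(-Ms / M_star)  # mass function, arbitrary norm.
Zs = np.minimum(y, y * (Ms / M_star) ** 0.25)        # Eq. (5.2), levelling off at y

dM = np.gradient(Ms)                        # trapezoid-like mass weights
M_Zi = np.sum((y - Zs) * Ms * phi * dM)     # intergalactic metals, Eq. (5.13)
M_s_tot = np.sum(Ms * phi * dM)             # total stellar mass in galaxies
print(f"metals lost per unit stellar mass: {M_Zi / M_s_tot / y:.2f} * y")
```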
#### 5.4.2 Overall gas loss from galaxies
The total mass of gas lost from elliptical galaxies is a less predictable
quantity, depending on gas flows within the galaxies and gain or loss of gas
during the time of star formation. Nevertheless, some estimates are
interesting.
In order of magnitude, almost any model will predict a mean metallicity of the
lost gas that exceeds the mean metallicity of stars, since the gas has the
composition of the last and most metal-rich stars formed; i.e., $Z_{\rm
i}\gtrsim Z_{\rm s}$. The mass of gas lost is therefore $M_{\rm gi}=M_{\rm
Zi}/Z_{\rm i}\lesssim M_{\rm Zi}/Z_{\rm s}$. With $Z_{\rm s}\lesssim y\sim\rm
Z_{\odot}$ for the mean of a typical cluster of galaxies, we therefore expect
that $M_{\rm gi}\sim M_{\rm Zi}/\rm Z_{\odot}$, very roughly. This implies
that the elliptical galaxies in the Coma cluster have ejected ${\sim}1\>\rm
M_{\odot}$ of gas per solar luminosity of the cluster, which is at the lower
end of the range of estimates of the cluster gas content, from X-ray data
(Section 5.1). Various specific calculations using the models discussed in
Sections 5.3.1 and 5.3.2 lead to similar results within a factor ${\sim}3$. It
is therefore uncertain whether gas loss from ellipticals could be the entire
source of gas in a cluster like Coma, or whether some of the IGM is simply
primordial material that was never in a galaxy. (In the latter case, the
intergalactic metals could nevertheless have been supplied by galaxies).
An interesting comment on the origin of the cluster IGM has been made by
Ostriker (1977b): the distribution of morphological types of galaxies in
clusters like Coma differs from the field distribution in having a much
smaller fraction of spirals, many more S0s, and somewhat more ellipticals
(Oemler, 1974). If one “corrects” the cluster galaxies by adding disk matter
until the overall ratio (disks)/(elliptical galaxies + bulges) equals that of
the field, the mass of extra disk matter required is ${\sim}50\%$ of the
(ordinary) mass of galaxies in the cluster. This in turn is a significant
fraction of the mass of cluster IGM. Ostriker (1977b) therefore proposes that
some of the IGM is material that would have been made into disks in a less
dense environment, but instead was swept up into a hot ambient IGM. This idea
ties in with several scenarios for the formation of disk galaxies, in which
the disk is made from diffuse gas that is accreted after the formation of a
spheroidal component. For example, in the model of Ostriker & Thuan (1975), a
significant fraction of the disk is shed by halo stars; and in the picture of
Tinsley & Larson (1979), the disk forms from diffuse gas after denser pieces
have merged to make the bulge.
#### 5.4.3 Ejection from evolving stars in elliptical galaxies
The stellar population in elliptical galaxies (and S0 galaxies and the
bulge–halo components of spirals) is predominantly very old, and the light of
these galaxies is dominated by red giants. It is almost certain that such
stars lose a few tenths of a solar mass, between the MS turnoff and the end of
their lives, to die as white dwarfs of typically $0.7\>\rm M_{\odot}$ (Section
2.4.1). This mass has been included in the total mass loss considered above,
but it is interesting to calculate also the present _rate_ of mass loss by
stars in elliptical galaxies.
For an analytical estimate, let us assume that all the stars in the system
formed at the same time, $t=0$. Let $M_{0}$ be the mass of stars formed, and
let $\phi(m)$ be the IMF; the mass of stars formed in the mass interval
$(m,\>m+dm)$ is therefore
$n(m)\ dm=M_{0}\phi(m)\ dm,$ (5.14)
by Equation (2.1). Now imagine these stars peeling off the MS and dying soon
afterward as they reach their lifetimes $\tau_{\rm m}$. The number of stars
dying per unit time is clearly
$D(t)=n\left(m_{\rm t}\right)\left|\frac{dm}{d\tau_{\rm m}}\right|_{\tau_{\rm
m}=t},$ (5.15)
where $m_{\rm t}$ is the turnoff mass ($\tau_{\rm m}=t$). The stellar
mass–lifetime relation can be approximated by a power law,
$\frac{m}{m_{1}}=\left(\frac{\tau_{\rm m}}{\tau_{1}}\right)^{-\theta},$ (5.16)
where $\tau_{1}$ is the lifetime of a fiducial mass $m_{1}$ and $\theta\simeq
0.25$ in the mass range of interest ($m_{\rm t}\sim 1\>\rm M_{\odot}$). It is
convenient to use a power-law IMF, Equation (2.3), normalized to
$\phi(m_{1})\equiv\phi_{1}$; masses in only the small range ${\sim}0.5-1\>\rm
M_{\odot}$ are relevant to the following calculation, so this IMF may be a
reasonable approximation even if a single power law would not apply to all
masses. The ejection rate can be obtained by multiplying $D(t)$ by $(m_{\rm
t}-w_{\rm m})$, the mass lost per star with remnant mass $w_{\rm m}$, with the
result
$E(t)=M_{0}\phi_{1}\theta\frac{m_{1}}{\tau_{1}}\left(m_{\rm t}-w_{\rm
m}\right)\left(\frac{t}{\tau_{1}}\right)^{-1+\theta x}.$ (5.17)
Since $(m_{\rm t}-w_{\rm m})$ changes slowly with time and $\theta x$ is
probably only a few tenths, this expression shows that the ejection rate
$E(t)$ varies approximately as $t^{-1}$. In Section 6.2.4, an analytical
expression is derived for the luminosity of stars in this model, and it is
shown that Equation (5.17) leads to a ratio of ejection rate to integrated
blue luminosity of the population,
$\frac{E}{L_{\rm B}}\sim 0.02\>\rm M_{\odot}\ L_{B\odot}^{-1}\ Gyr^{-1}$
(5.18)
at a present time ${\sim}10^{10}\>\rm yr$.
Most elliptical galaxies have a neutral-hydrogen mass less than 0.1 times their luminosity, in solar units, and many better-studied ellipticals have less than 0.01 times their luminosity in neutral hydrogen (Knapp et al.,
1978). Faber & Gallagher (1976) argue that significant amounts of ISM cannot
be hiding in elliptical galaxies in ionized or molecular form. Thus the
ejection rate given by Equation (5.18) would provide more than the observed
amount of gas in a few Gyr, or even in less than $1\>\rm Gyr$. Possible fates
for this gas have been thoroughly discussed by Faber & Gallagher (1976); they
note that star formation at the rate in Equation (5.18) would be detectable
(unless only low-mass stars form), so they conclude that the gas ejected from
stars is being continually lost from the galaxies. On the other hand, Oemler &
Tinsley (1979) argue that star formation at the rate required to use up this
gas could have escaped detection in most ellipticals, and could account for
their supernova rate.
### 5.5 Abundance Gradients in Disks
Abundance gradients in disk stars and gas cannot be fully accounted for by
gradients in the gas fraction (Section 5.1), so it is of interest to see
whether dynamical processes analogous to those discussed in Section 5.2, for
spheroidal systems, could be responsible. A gradient in the IMF could again be
invoked, but this mechanism will not be discussed further.
Figure 12: (a) Star formation rates at several radii in the equatorial plane
of a collapse model for the formation of a disk galaxy (Larson, 1976c). The
radius in pc is marked on each curve, and the three ticks indicate the times
at which star formation is $10\%$, $50\%$, and $90\%$ complete (relative to
the final mass of stars) at that radius. (b) Metal abundances in the gas
(relative to the yield) in the equatorial plane of the same model (Tinsley &
Larson, 1978). In this Figure, the radii are given in kpc, ticks have the same
meaning as before, and open circles denote the time of maximum gas density at
each radius.
#### 5.5.1 Effects of infall
The idea that disks of galaxies form by accretion, incorporating metals from
the young halo, has been suggested by dynamical models and supported by the
metallicities of stars in the Solar neighborhood (Section 4.3). The properties
of the accretion process that affect chemical evolution are the metallicity of
infalling gas ($Z_{\rm f}$) and the ratio of SFR to infall rate ($\psi/f$). If
there is a radial gradient in these quantities, there must be a corresponding
metallicity gradient in the disk. In particular, the metallicity of the gas at
any time tends to a value given by Equation (4.6), $Z\rightarrow ky+Z_{\rm
f}$, where $k=(1-R)\psi/f$. Thus the disk gas has about this metallicity at
the present time, in regions where infall is at all effective, i.e., where $k$
is not so large that $Z$ takes too long to approach its asymptotic value; the
stars in turn reflect the metallicity of the gas at the time when they formed.
In the dynamical models studied by Tinsley & Larson (1978), the value of
$\psi/f$ decreases outward in the disk at all times, because star formation is
less efficient at low densities; $Z_{\rm f}$ is negligible at late times at
all radii, but it has a significant negative gradient at early stages when
metals from the young halo (central bulge) were most important near the
center. Figure 12 (a) illustrates the SFR versus time at several radii in the
equatorial plane of one of these models, and Figure 12 (b) gives the
corresponding metallicities in the gas. At small radii, most stars form early
from metal-rich infalling gas, while the outer regions experience star
formation on a much longer timescale and are still relatively metal-poor. The
gas at the present time thus has a negative metallicity gradient due mainly to
the gradient in $\psi/f$, while the gradient in stellar abundances is due
partly to the early gradient in $Z_{\rm f}$. The sizes of the model gradients
are comparable to observed values, so this very schematic model may possibly
be showing some effects that occur in real disk galaxies.
Generalizing these results, we can conclude as in Section 4.3 that the
chemical properties of disks could plausibly be strongly affected by gas flows
that constitute the formation of the disk itself.
#### 5.5.2 Effects of radial gas flows
Radial inflow of gas in disks possibly occurs as a result of transfer of
angular momentum by viscosity, loss of angular momentum from the gas to spiral
or bar-like density waves, and other mechanisms (e.g. Kalnajs, 1978; Ostriker,
1977a). These processes are rather speculative, since the inflow in many
models could be a numerical artifact, but it is interesting to see how
chemical evolution could be affected by flow velocities of a plausible
magnitude.
The metals are concentrated inward by this process, as by gaseous dissipation
in a collapsing spheroidal system (Section 5.2.1). Let us consider an annulus
of a galaxy between radii $r$ and $r+\delta r$, measured in the disk. The
chemical evolution of this annulus can be studied in the instantaneous
recycling approximation, using Equations (3.17) and (3.19). Let $M_{\rm g}$
and $\psi$ in those equations be replaced by $2\pi rM_{\rm g}\delta r$ and
$2\pi r\psi\delta r$, respectively, where $M_{\rm g}$ and $\psi$ now denote
the corresponding surface densities; let $f$ be replaced by the net rate of
inflow into the annulus, i.e., $F(r)-F(r+\delta r)=-(\partial F/\partial
r)\delta r$, where $F$ is a flow rate (in $\rm M_{\odot}\ yr^{-1}$) with a
positive sign for outward motion; and let $Z_{\rm f}f$ be replaced by the net
rate of inflow of metals, which is $Z(r)F(r)-Z(r+\delta r)F(r+\delta
r)=-Z(\partial F/\partial r)\delta r-(\partial Z/\partial r)F\delta r$. The
equations then reduce to
$\frac{\partial M_{\rm g}}{\partial t}=-(1-R)\psi-\frac{1}{2\pi
r}\frac{\partial F}{\partial r},$ (5.19)
and
$M_{\rm g}\frac{\partial Z}{\partial t}=y(1-R)\psi-\frac{1}{2\pi
r}\frac{\partial Z}{\partial r}F.$ (5.20)
Equation (5.20) shows that the radial flow is consistent with a steady-state
abundance gradient,
$\frac{\partial(Z/y)}{\partial r}\sim 2\pi r\frac{(1-R)\psi}{F},$ (5.21)
which is negative if the flow is inward. The flow causes $Z$ to change on a
timescale
$\tau_{\rm F}\sim\frac{2\pi r^{2}M_{\rm g}}{\lvert F\rvert}.$ (5.22)
$F$ can be expressed in terms of the flow velocity $v$, where $\lvert
F\rvert=$ (mass of gas in the annulus)/(time for gas to flow across $\delta
r$) $=2\pi rM_{\rm g}\lvert v\rvert$. The timescale for radial flow to be
effective is thus
$\tau_{\rm F}\sim\frac{r}{\lvert v\rvert},$ (5.23)
and the corresponding gradient can be written
$\frac{\partial(Z/y)}{\partial\ln r}\sim\frac{\tau_{\rm F}}{\tau_{*}},$ (5.24)
where $\tau_{*}\equiv M_{\rm g}/(1-R)\psi$ is the timescale for star formation
to use up the gas. These relations show that rapid inflow, with a timescale
less than that for star formation, quickly obliterates any radial metallicity
gradient, while slow inflow can lead to a significant one.
Substituting values of $\psi$, $M_{\rm g}$, and $r$ for the Solar
neighborhood, it is found that the interstellar abundance gradient (Section
5.1) is consistent with inflow at a few $\rm km\ s^{-1}$, carrying a flux
${\sim}1\>\rm M_{\odot}\ yr^{-1}$; the timescale for the gradient to change is
a few Gyr. There is no strong evidence for the occurrence of systematic gas
flows of this magnitude in the Galaxy, but nor can they be ruled out.
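The orders of magnitude behind that statement can be reproduced from Equations (5.23) and (5.24); every input value in the sketch below is an illustrative assumption, not a measurement from the text.

```python
import math

r_kpc = 10.0        # Galactocentric radius (assumed)
sigma_gas = 8.0     # gas surface density M_g, M_sun / pc^2 (assumed)
psi = 4.0           # SFR surface density, M_sun / pc^2 / Gyr (assumed)
R = 0.2             # returned fraction (assumed)
grad = 1.0          # assumed |d(Z/y) / d ln r|

tau_star = sigma_gas / ((1.0 - R) * psi)    # gas-consumption timescale, Gyr
tau_F = grad * tau_star                     # Eq. (5.24), inverted
v_kms = r_kpc / tau_F / 1.02                # Eq. (5.23); 1 km/s ~ 1.02 kpc/Gyr

v_pc_per_yr = v_kms * 1.02e-6               # unit conversion
F = 2.0 * math.pi * (r_kpc * 1e3) * sigma_gas * v_pc_per_yr  # inflow rate, M_sun/yr
print(f"tau_F ~ {tau_F:.1f} Gyr, v ~ {v_kms:.1f} km/s, F ~ {F:.1f} M_sun/yr")
```

With these inputs the sketch returns an inflow of a few $\rm km\ s^{-1}$ carrying ${\sim}2\>\rm M_{\odot}\ yr^{-1}$, in line with the rough figures above.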
Sanders (1977) has suggested that the deep minimum in the surface density of
gas in the Galaxy, in an annulus between $0.6$ and $4\>\rm kpc$, could be due
to inflow into the central $600\>\rm pc$, where the total quantity of ISM is
enough to fill the depleted annulus. If so, inflow could perhaps be fueling
the strong star formation at the Galactic center.
## 6 Approaches to Photometric Evolution
Evolution of stars in galaxies affects not only their chemical compositions,
but also their integrated luminosities, colors, and spectra. Photometric and
chemical evolution can be studied separately, because they depend largely on
complementary properties of a galaxy: the colors at a given time are governed
strongly by the current rate of star formation relative to its past average
value, whereas the chemical composition depends mainly on the integrated past
star formation relative to the gas content and on the ratio of the SFR to gas
flow rates. In studying photometric evolution, we can ignore the effects of
ISM, except in correcting colors for any reddening or gaseous emission, and we
can avoid assumptions relating the SFR to the gas supply. Of course, a
complete understanding of the properties of a galaxy would include the
relations among its history of star formation, gas content and gas flows, and
chemical composition (cf. Figure 1), but more can be learned by tackling
pieces of the puzzle separately first.
### 6.1 Aims and Methods
Models for the stellar population of a galaxy address three related questions:
what types of stars are present, what history of star formation produced them,
and what were the population and its photometric properties in the past?
answers to these questions have many applications, such as interpreting in
terms of star formation the correlations between photometric and morphological
properties of galaxies, and predicting changes on cosmological timescales.
Methods of constructing population models can be divided into three
categories: “population synthesis” with no explicit evolution, evolutionary
models, and analytical approximations.
#### 6.1.1 Population synthesis
This approach is to find the “best” mixture of stars to match the colors of
the galaxy under study. The inferred distribution of stars in the HR diagram
then contains information on their past formation rate and the IMF, and it is
often possible to judge the mean chemical composition and even to detect minor
components of high or low metallicity. The procedure is to observe the colors
of the galaxy and a variety of nearby stars, generally including narrow-band
photoelectric colors and indices giving the strengths of spectral features
that are sensitive to stellar temperature, luminosity, or composition. Then
synthetic colors are computed for various mixtures of stars and compared with
the galaxy colors. The search for an optimal mixture can be made in many ways,
ranging from trial-and-error to elaborate computer algorithms; the method
generally used, quadratic programming, was introduced to the field by Faber
(1972). Because of observational errors, and because the available nearby
stars do not include all types in the galaxy under study, a perfect fit is
seldom found, and the solution that (formally) minimizes the errors is not
necessarily the most plausible. The choice of a “best” synthetic population
must therefore be based on imposed astrophysical constraints; these include
such requirements as a smooth MS luminosity function, and a distribution of
subgiants and giants that could plausibly arise from evolution off the MS. The
lack of an objectively defined best fit means that the final solution depends
strongly on the imposed constraints, as emphasized by Williams (1976).
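As a minimal sketch of the quadratic-programming step, the snippet below finds non-negative weights on a small library of stellar types whose summed colors best match a set of observed galaxy colors; the "library" and "observations" are random placeholders, not real stellar data, and a real synthesis would add the astrophysical constraints just discussed.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_indices, n_types = 12, 6
A = rng.uniform(0.0, 1.0, (n_indices, n_types))    # color indices of each stellar type
w_true = np.array([0.5, 0.0, 0.2, 0.0, 0.3, 0.0])  # hypothetical true mixture
c = A @ w_true + rng.normal(0.0, 0.01, n_indices)  # noisy "galaxy" colors

w_fit, resid = nnls(A, c)                          # least squares subject to w >= 0
print("fitted weights:", np.round(w_fit, 2), " residual:", round(resid, 3))
```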
There are often several astrophysically acceptable synthetic populations that
match the galaxy colors equally well but correspond to significantly different
histories of star formation. An example of such ambiguity appears in models
for elliptical galaxies, reviewed by Faber (1977). All studies agree that
_most_ of the light of these galaxies, from blue through infrared wavelengths,
comes from an old stellar population with a distribution in the HR diagram
like an old open cluster plus an extended giant branch (cf. Figure 5).
However, such models almost always fail by a few percent to account for the
light around $3500\>\rm\AA$ (e.g. $U-B$ is predicted to be too
red by ${\sim}0.1\>\rm mag$), so they are lacking some hot stellar component
that is present in the real galaxies. To date it has been impossible to
determine whether the hot stars are a few upper-main-sequence stars, implying
ongoing star formation, or a minor population of old objects such as
horizontal-branch stars or blue stragglers (which can be seen in the
color–magnitude diagram for the old cluster M67, Figure 5). Obviously, it
would be very interesting to know if typical elliptical galaxies really are
still making stars at a slow residual rate! For the central bulge of M31,
which is optically indistinguishable from an elliptical galaxy, there are
broadband colors down to $1550\>\rm\AA$, but even these data have not resolved the ambiguity (Wu et al., 1980). A new avenue has been opened by a demonstration that the integrated light of stars in a nuclear bulge region of our own Galaxy matches exactly the integrated light of
comparable regions of spirals and ellipticals (Whitford, 1978). The brighter
stars are individually observable in the Galactic bulge, so a star-by-star
synthesis of their contribution to the light is possible. Perhaps in the end
this approach will tell whether an old population alone can account for the
ultraviolet light.
Another general problem is that even the best defined regions in the HR
diagram cannot be interpreted uniquely in terms of a past history of star
formation. The models are insensitive to many details of the IMF and SFR, for
two basic reasons:
1.
the integrated light of galaxies is dominated by regions of the HR diagram
that depend theoretically on rather few parameters of star formation; and
2.
some types of stars, such as red giants, may have evolved from a wide range of
MS masses (Section 2.4.2), so they cannot be traced uniquely back to an
initial mass and time of star formation.
The second of these problems is avoided in the evolutionary method described
next, but the first remains and will be discussed below.
#### 6.1.2 Evolutionary models
This approach relies primarily on stellar evolution theory to suggest
allowable populations, as follows. Theoretical tracks (or isochrones) of stars
in the HR diagram are used to compute the stellar population that would arise,
at a given age, from a given SFR and IMF, with a given chemical composition;
the integrated colors are then calculated, using observed colors of stars in
appropriate parts of the HR diagram, and the results are compared with the
colors of the galaxy under study. The aim is to derive from a series of models
the SFR, IMF, age, and composition(s) that best match the galaxy, and thereby
to learn not only about its present stellar population but also about its past
history and past photometric properties.
In practice, stellar evolution is not well enough understood for fully
theoretical models to be reliable. The main problems are related to late
stages of evolution, including particularly the giant branches in old stellar
populations, whose effects on models for elliptical galaxies are reviewed by
Faber (1977). These problems are alleviated by using statistical studies of
nearby giants to provide semi-empirical evolutionary tracks (Section 2.4.1),
and by allowing the most uncertain types of stars to be present in numbers
that are treated as adjustable parameters. This method thus closely resembles
some non-evolutionary population syntheses in which the constraints are chosen
to represent stellar evolutionary tracks (O’Connell, 1976).
The evolutionary approach has several advantages. The best established aspects
of stellar evolution theory are incorporated, so the resulting population is a
_possible_ one as far as can be determined. Uncertainties cannot be formally
calculated, but from trials with a variety of assumptions one can estimate
subjectively the allowable range of parameters. Often this range is small
enough to lead to useful conclusions about the past history of star formation,
and to predictions of photometric changes of cosmological interest. For
example, it is possible to determine the slope of the IMF in elliptical
galaxies closely enough to be sure that their integrated luminosity declines
with increasing age (Section 6.2).
Uncertainties in the conclusions from this method arise partly from
uncertainties in stellar evolution, and partly from the intrinsic
insensitivity of integrated colors to many parameters of interest – a problem
found earlier with population syntheses. Two parts of the HR diagram tend to
dominate the integrated light, as illustrated spectroscopically by the work
of, e.g., Morgan & Mayall (1957) and Morgan & Osterbrock (1969): B stars on
the upper main sequence, and late G through early M giants. These dominant
regions are extended out to O stars in ultraviolet light and to late M giants
in the infrared. If young stars are absent, low-mass giants dominate at visual
and longer wavelengths, so the colors depend much more on stellar evolution
than on the IMF or past SFR; at shorter wavelengths, however, turnoff stars
are seen so the colors give some information on the age of the system. If
young stars are present, the light at short wavelengths is dominated by OB
stars, whose relative numbers depend on the IMF and whose total numbers
(relative to red stars) depend on the ratio of the present SFR to its
integrated past value. Stars with lifetimes from a few times $10^{8}\>\rm yr$
to just below the age of the system (usually A and F stars) contribute
relatively little light, so there is little information on either their part
of the IMF or the detailed time-dependence of the SFR. In Section 7.1, models
will be discussed that illustrate the dominance of the upper main sequence
and/or low-mass giants, depending on the SFR.
Programs for constructing evolutionary models have been described by Tinsley
(1968, 1972a, 1978b), Searle et al. (1973), Tinsley & Gunn (1976a), and Larson
& Tinsley (1978). The mechanical details are far less troublesome than the
input “data” representing stellar tracks, and it is easy to obtain numerical
accuracy far exceeding the astrophysical certainty of the calculations. There are two variants of the technique.
1.
The first method is to supply the computer with evolutionary tracks in the HR
diagram for stars with a series of discrete masses, or with isochrones for a
series of discrete ages; separate stellar data are used for each chemical
composition of interest. Then, for a given IMF and SFR, the calculation yields
the numbers of stars on a large grid of points in the HR diagram, as a
function of the age of the system.
2.
The second method uses the first type of program once only for each IMF and
composition, to give the integrated colors at a series of ages of a model
whose SFR consists of a single initial burst. These are then regarded as the
colors of “generations” of stars with a given age (and IMF and composition).
A model with any prescribed SFR can then be treated, at each age, as the sum
of such generations in proportions given by the SFR. The number of generations
whose properties must be combined to obtain the integrated colors of any model
is much smaller than the number of points in the HR diagram that are referred
to directly in the first method, so the second approach is more economical. In
either method, it is clearly possible to add arbitrary numbers of stars of
undetermined evolutionary status, in the spirit of population synthesis.
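A minimal sketch of the second method's bookkeeping: the integrated luminosity of a model with an arbitrary SFR is the SFR-weighted sum of single-burst generations of every age. The burst fading law and the SFR below are toy assumptions, not output of a real population model.

```python
import numpy as np

t_now = 15.0                             # age of the model, Gyr (assumed)
ages = np.linspace(0.01, t_now, 1500)    # ages of the generations, Gyr
dt = ages[1] - ages[0]

L_burst = ages ** -0.8                   # toy per-unit-mass burst luminosity vs age
sfr = np.exp(-(t_now - ages) / 5.0)      # toy SFR at each generation's formation time

L_total = np.sum(sfr * L_burst) * dt     # sum over generations
print(f"integrated luminosity at t = {t_now} Gyr: {L_total:.2f} (arbitrary units)")
```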
While models with Solar metallicity can rely on nearby stars to provide colors
and semi-empirical evolutionary tracks, there is no such convenient sample for
other compositions. In making models for non-Solar metallicities, it is often
most convenient to change the “standard” models differentially, rather than
starting from scratch with tracks and colors for each set of stars. Faber
(1973) first used the metallicity effects discussed in Section 2.4.5 to
estimate differential changes in the integrated colors of elliptical galaxies,
as a function of metallicity, and her methods have been adapted by others
subsequently. Recent results for elliptical galaxies have been cited in
Section 5.1. The calculations of metallicity effects in integrated light are
still much less secure than one would like, and there is a need for more basic
work on stellar evolution and atmospheres at non-Solar compositions, including
non-Solar abundance ratios among elements heavier than helium (Faber, 1977).
#### 6.1.3 Analytical approximations
Some of the results from evolutionary models can be understood qualitatively
using analytical approximations. These have proved particularly tractable for
models in which all the stars form in a single initial burst, which is a first
approximation to the population in elliptical galaxies. Such models will be
considered next.
### 6.2 Evolution of a Single Generation of Stars
Many numerical models designed to match detailed photometry of elliptical
galaxies have shown that nearly all the light at visual and longer wavelengths
can be accounted for by a very old population, with a turnoff near the Sun’s
position on the MS. The metallicities of the dominant stars appear to be
within a factor of two of Solar in wide-aperture observations of giant
ellipticals, although their centers may be more metal-rich and small
ellipticals are metal-poor (Section 5.1). Reviews by van den Bergh (1975) and
Faber (1977) cover the history and recent status of this subject, and a few
subsequent developments have been referred to in Section 6.1. The implications
of a predominantly very old population for the evolution of elliptical
galaxies are best understood using analytical approximations.
#### 6.2.1 Content and luminosity
Let us consider a single generation of stars, formed with total mass $M_{0}$
in a short burst (as in Section 5.4.3), with a fixed chemical composition near
Solar. The population evolves by peeling off the MS, as can be visualized from
Figures 2 and 3.
The IMF will be taken to be a power law, normalized to
$\phi(m_{1})\equiv\phi_{1}$, where $m_{1}$ is the turnoff mass at a fiducial
time $\tau_{1}$. The power-law approximation need only hold over a small mass
interval, since the light at present comes almost entirely from stars between
$0.4\>\rm M_{\odot}$ and turnoff, and the turnoff mass at ages of interest,
${\sim}5-20\>\rm Gyr$, lies in the small range ${\sim}0.9-1.2\>\rm M_{\odot}$.
At a time $t$ after star formation, the MS stars present have masses from the
lower limit at formation, $m_{\rm L}$, up to the turnoff mass $m_{\rm t}$,
which is given by substituting $\tau_{\rm m}=t$ in Equation (5.16). Thus the
number of dwarfs with masses in the interval $(m,\ m+dm)$ is, by Equation
(2.3),
$n_{\rm d}(m)\ dm=M_{0}\phi(m)\ dm=M_{0}\phi_{1}\left(\frac{m}{m_{1}}\right)^{-(1+x)}\ dm,\>\>m_{\rm L}\leq m\leq m_{\rm t}.$ (6.1)
Stars slightly more massive than $m_{\rm t}$ are present as giants, and their
total number is the number of stars that were on the MS with lifetimes between
$t$ and $t-\tau_{\rm g}$, where $\tau_{\rm g}$ is the duration of post-MS
evolution for masses ${\sim}m_{\rm t}$. (The term “giants” is used loosely
here to mean all post-MS stars; the analysis can easily be modified to refer
to any portion of post-MS evolution). The number of giants is therefore
$n_{\rm g}(t)=M_{0}\phi(m_{\rm t})\left|\frac{dm}{d\tau_{\rm m}}\right|_{\tau_{\rm m}=t}\tau_{\rm g}=M_{0}\phi_{1}\theta\frac{m_{1}}{\tau_{1}}\tau_{\rm g}\left(\frac{t}{\tau_{1}}\right)^{-1+\theta x}.$ (6.2)
The luminosity of individual dwarfs in the mass range of interest can be
approximated by a power law,
$\ell_{\rm d}(m)=\ell_{1}\left(\frac{m}{m_{1}}\right)^{\alpha},$ (6.3)
where $\alpha\simeq 5$. For giants, an average luminosity $\ell_{\rm g}$ is
defined so that the product $\ell_{\rm g}\tau_{\rm g}$ gives correctly the
integrated light output during post-MS evolution. The values of $\ell_{1}$,
$\alpha$, and $\ell_{\rm g}$ of course depend on the wavelength interval of
interest, and so do the results below relating to luminosities. (For
bolometric light, the product $\ell_{\rm g}\tau_{\rm g}$ is proportional to
the amount of nuclear energy used, but it has no such interpretation in
restricted wavelength bands).
The integrated luminosities and masses of dwarfs and giants can now be derived
from Equations (6.1) – (6.3) and Equation (5.16). It will be assumed in the
integrals that $m_{\rm L}\ll m_{1}$. The total mass of dwarfs at time $t$
depends critically on whether the slope of the IMF $(x)$ is less than or
greater than 1:
$M_{\rm d}(t)\simeq\left\{\begin{array}{lll}\frac{M_{0}\phi_{1}m_{1}^{2}}{x-1}\left(\frac{m_{\rm L}}{m_{1}}\right)^{-x+1},&x>1,&(6.4a)\\ M_{0}\phi_{1}m_{1}^{2}\ \ln\left(\frac{m_{\rm t}}{m_{\rm L}}\right),&x=1,&(6.4b)\\ \frac{M_{0}\phi_{1}m_{1}^{2}}{1-x}\left(\frac{t}{\tau_{1}}\right)^{-\theta(1-x)},&x<1.&(6.4c)\end{array}\right.$
Giants have a total mass ${\sim}m_{\rm t}n_{\rm g}(t)$, and one can quickly
verify that the mass ratio of giants to dwarfs is greatest in the case $x<1$,
and is at most ${\sim}\tau_{\rm g}/t\sim 0.1$; the contribution of giants to
the total mass will therefore be neglected. The integrated luminosity of
dwarfs is
$L_{\rm d}(t)=\int_{m_{\rm L}}^{m_{\rm t}}\ell_{\rm d}(m)n_{\rm d}(m)\
dm=\frac{M_{0}\phi_{1}m_{1}\ell_{1}}{\alpha-x}\left(\frac{t}{\tau_{1}}\right)^{-\theta(\alpha-x)},$
(6.5)
on the assumption $x<\alpha$, which is justified below. Finally, the
integrated luminosity of giants is
$L_{\rm g}(t)=\ell_{\rm g}n_{\rm
g}(t)=M_{0}\phi_{1}\theta\frac{m_{1}}{\tau_{1}}\ell_{\rm g}\tau_{\rm
g}\left(\frac{t}{\tau_{1}}\right)^{-1+\theta x}.$ (6.6)
The above relations will be used to derive some interesting properties of this
single generation of stars.
#### 6.2.2 Remnants of dead stars
There may be a significant dark mass in the form of remnants of stars
initially above $m_{\rm t}$, especially if the IMF has a fairly shallow slope
so these stars were relatively numerous. Although it is probably a very poor
approximation to extrapolate the IMF to high masses with the slope $x$ used
near $1\>\rm M_{\odot}$, the equations will be written to show how the
contributions of remnants can be estimated in the simplest cases. (These
results can easily be modified to allow for a variable slope). In this
approximation, it will be assumed that all remnants have the same mass $w$,
and that all stars above $m_{\rm t}$ are dead. Then the total mass of remnants
is $w$ times the number of stars formed with masses between $m_{\rm t}$ and
the upper limit $m_{\rm U}$:
$M_{\rm w}(t)=w\int_{m_{\rm t}}^{m_{\rm U}}M_{0}\phi(m)\
dm=\frac{M_{0}\phi_{1}m_{1}w}{x}\left(\frac{t}{\tau_{1}}\right)^{\theta x},$
(6.7)
assuming $m_{\rm U}\gg m_{\rm t}$ and $x>0$. The relative mass of remnants is
potentially greatest if $x<1$, and then Equation (6.4c) shows that $M_{\rm
w}/M_{\rm d}\sim w/m_{\rm t}$, which could be close to unity. This result is
obviously strongly dependent on the assumption of a single power law for the
whole IMF, which would exaggerate the mass of remnants if, for example,
elliptical galaxies have a curved IMF like the function in the Solar
neighborhood (Figure 4). It may be concluded that dead remnants could possibly
affect the total mass by a factor ${\sim}2$, which cannot be predicted with
any confidence from constraints on the slope of the IMF at turnoff.
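For reference, the step behind the statement $M_{\rm w}/M_{\rm d}\sim w/m_{\rm t}$: dividing Equation (6.7) by Equation (6.4c) and using $m_{\rm t}=m_{1}(t/\tau_{1})^{-\theta}$ gives $M_{\rm w}/M_{\rm d}=\frac{1-x}{x}\frac{w}{m_{1}}\left(\frac{t}{\tau_{1}}\right)^{\theta}=\frac{1-x}{x}\frac{w}{m_{\rm t}}$, which is of order $w/m_{\rm t}$ for $x\sim 0.5$.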
#### 6.2.3 The ratio of giants to dwarfs in the light
Some spectral features in the integrated light of elliptical galaxies depend
sensitively on the relative amounts of light contributed by giant and dwarf
stars at the feature wavelength. Examples are an iron hydride band at
$0.99\>\rm\upmu m$, known as the Wing-Ford band, which Whitford (1977) has
found to be extremely strong in late dwarfs but weak in late giants; and a
carbon monoxide band at $2.2\>\rm\upmu m$, studied especially by Frogel et al.
(1978, and earlier papers cited therein), which has the opposite behavior,
being much stronger in late giants than in late dwarfs. Since the light of
elliptical galaxies at those wavelengths must be dominated by late-type stars,
the galaxies should show a weak FeH band and a strong CO band if giants
outshine dwarfs, and vice versa. As the following analysis shows, the relative
luminosities of giants and dwarfs give important information on the slope of
the IMF, which in turn affects many other properties of elliptical galaxies
including the rate of evolution of total luminosity; it is the significance of
this effect for cosmological tests (Section 6.2.6) that has motivated much of
the analysis of spectral features.
Equations (6.5) and (6.6) together give an approximate expression for the
relative luminosities of giants and dwarfs:
$G(t)\equiv\frac{L_{\rm g}(t)}{L_{\rm d}(t)}=\theta(\alpha-x)\frac{\ell_{\rm
g}\tau_{\rm
g}}{\ell_{1}\tau_{1}}\left(\frac{t}{\tau_{1}}\right)^{\theta\alpha-1}.$ (6.8)
An alternative expression is obtained by substituting Equations (5.16) and
(6.3):
$G(t)=\theta(\alpha-x)\frac{\ell_{\rm g}\tau_{\rm g}}{\ell_{\rm d}\left(m_{\rm
t}\right)t}.$ (6.9)
The term $\ell_{\rm g}\tau_{\rm g}$ is the amount of energy radiated (at a
given wavelength) by a star of approximately turnoff mass after it leaves the
MS, while $\ell_{\rm d}\left(m_{\rm t}\right)t$ is the energy radiated during
MS evolution. Thus Equation (6.9) says that the value of $G$ in bolometric
light is, in order of magnitude, equal to the ratio of nuclear fuel consumed
after leaving the MS to that consumed on the MS; since stars near $1\>\rm
M_{\odot}$ burn the hydrogen in only $10\%$ of their mass while on the MS but
in $70\%$ before they die (Section 2.4.1), this fuel ratio is ${\sim}6$. This
high value is the underlying reason why giants can outshine dwarfs in the
integrated light of a galaxy, despite their very short lifetimes. Giants tend
to be especially dominant at long wavelengths, because most of the energy from
the giant branch as a whole comes from red giants.
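To attach rough numbers to Equation (6.9): with $\theta\simeq 0.25$, $\alpha\simeq 5$, and $x\simeq 1$, the prefactor $\theta(\alpha-x)\simeq 1$, so in bolometric light $G\sim\ell_{\rm g}\tau_{\rm g}/[\ell_{\rm d}(m_{\rm t})\,t]\sim 6$ from the fuel ratio just quoted; the smaller blue-light value $G\simeq 1$ used in Section 6.2.4 reflects the relatively larger contribution of turnoff dwarfs at short wavelengths.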
The fuel burning ratio is not the only factor affecting $G$, however. The term
$(\alpha-x)$ in Equation (6.9) introduces a dependence on $x$, the slope of
the IMF. A larger value of $x$ reduces the contribution of giants simply by
reducing the number of stars in the mass range of giants (just above turnoff)
relative to those still on the MS. The dependence of $G$ on $x$ is of great
practical importance, since it allows spectroscopic criteria to set
constraints on $x$. The work of Whitford (1977) and Frogel et al. (1978) shows
that the red–infrared light of elliptical galaxies is strongly dominated by
giants, to an extent that $x$ must be less than 2, and possibly less than 1.
These constraints are consistent with the IMF in the Solar neighborhood, which
has $x<1$ in the relevant mass range (Figure 4 and Equation 2.9).
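As a rough numerical illustration of Equation (6.9), the minimal sketch below uses the representative parameter values quoted in Section 6.2.4 ($\theta\simeq 0.25$, $\alpha\simeq 5$) together with the fuel ratio of ${\sim}6$ estimated above; all values are approximate, and the point is only the scaling of $G$ with the IMF slope $x$:

```python
# Illustrative evaluation of Equation (6.9): G = theta * (alpha - x) * fuel_ratio,
# where fuel_ratio stands for l_g*tau_g / (l_d(m_t)*t), the ratio of nuclear fuel
# burned after the MS to that burned on the MS (~6; Section 2.4.1).
theta, alpha = 0.25, 5.0   # representative values quoted in Section 6.2.4
fuel_ratio = 6.0           # (70% - 10%) / 10% of the stellar mass burned as hydrogen

for x in (0.0, 1.0, 2.0):
    G = theta * (alpha - x) * fuel_ratio
    print(f"x = {x:.0f}:  G ~ {G:.1f}")
# x = 1 gives G ~ 6: giants outshine dwarfs in bolometric light, while a
# steeper IMF (larger x) suppresses G through the factor (alpha - x).
```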
The infrared spectra of elliptical galaxies set constraints not only on the
IMF but also on the relative numbers of M giants of different spectral types
that populate the giant branch. As noted in Section 6.1.2, these numbers are
not firmly predicted by stellar evolution theory, so studies of galaxy spectra
can add to an understanding of late stages in the lives of low-mass stars.
This application of galaxy models is discussed by Faber (1977), Tinsley
(1978b), and references therein.
#### 6.2.4 The stellar mass loss rate relative to luminosity
An expression for the rate of mass loss from stars has been derived in Section
5.4.3, but Equation (5.17) is not in a useful form for comparing with
observable quantities. It is possible to obtain a useful equation for the
ejection rate per unit integrated luminosity, because both quantities scale
with the populations of stars near turnoff.
From Equations (6.5) and (6.8), the total luminosity can be written
$L(t)=\left[1+G(t)\right]L_{\rm d}(t)=\frac{M_{0}\phi_{1}m_{1}\ell_{1}}{\alpha-x}(1+G)\left(\frac{t}{\tau_{1}}\right)^{-\theta(\alpha-x)}.$ (6.10)
Then, with Equation (5.17), the ratio of ejection rate to luminosity is
$\frac{E(t)}{L(t)}=\frac{\theta(\alpha-x)}{\ell_{1}\tau_{1}}\frac{m_{\rm t}-w_{\rm m}}{1+G}\left(\frac{t}{\tau_{1}}\right)^{\theta\alpha-1},$ (6.11)
which shows that the ratio depends only slowly on time. A more useful relation
for finding the present ratio is given by substituting Equations (5.16) and
(6.3) to eliminate $\ell_{1}$ and $\tau_{1}$, with the result
$\frac{E(t)}{L(t)}=\theta(\alpha-x)\frac{m_{\rm t}-w_{\rm m}}{1+G}\frac{1}{\ell_{\rm d}\left(m_{\rm t}\right)t}.$ (6.12)
This ratio can be estimated for present-day ellipticals as follows. From
spectroscopic studies in _blue_ light, $G\simeq 1$ (the value of $G$ is
greater in red or bolometric light); and approximate values of the other
quantities are $\alpha\simeq 5$, $\theta\simeq 0.25$, $m_{\rm t}\simeq 1\>\rm
M_{\odot}$, $w_{\rm m}\simeq 0.7\>\rm M_{\odot}$, $\ell_{\rm d}\simeq 1\>\rm
L_{\rm B\odot}$, $t\simeq 10\>\rm Gyr$, $x\simeq 1$. The result from Equation
(6.12) is then $E/L_{\rm B}\simeq 0.015\>\rm M_{\odot}\ L_{B\odot}^{-1}\ Gyr^{-1}$, the significance of which was discussed in Section 5.4.3.
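As a check on the arithmetic, the following sketch evaluates Equation (6.12) with exactly the approximate values listed above:

```python
# Arithmetic check of Equation (6.12) with the approximate values listed above.
theta, alpha, x = 0.25, 5.0, 1.0
m_t, w_m = 1.0, 0.7        # turnoff mass and remnant mass, in M_sun
G = 1.0                    # giant-to-dwarf light ratio in blue light
l_d, t = 1.0, 10.0         # l_d(m_t) in L_B_sun; age in Gyr

E_over_L = theta * (alpha - x) * (m_t - w_m) / ((1.0 + G) * l_d * t)
print(f"E/L_B ~ {E_over_L:.3f} M_sun L_B_sun^-1 Gyr^-1")   # prints 0.015
```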
#### 6.2.5 The mass-to-luminosity ratio
An analytical estimate of $M_{\rm s}/L$ can be made using the mass of stars
$M_{\rm s}\simeq M_{\rm d}(t)$ (neglecting the small contribution of giants
and the very uncertain contribution of dead remnants), and the total
luminosity $L(t)$. From Equations (6.4a) – (6.4c), it is clear that the result
depends strongly on whether $x\lessgtr 1$. Moreover, it depends critically on
the assumption that $x$ is constant down to $m_{\rm L}$, since the least
massive stars (or sub-stellar objects) can be numerous enough to dominate the
mass while contributing negligibly to the light. If $x<1$, the result from
Equations (6.4c) and (6.10) is
$\frac{M_{\rm s}}{L}=\frac{\alpha-x}{1-x}\frac{1}{1+G}\frac{m_{\rm t}}{\ell_{\rm d}\left(m_{\rm t}\right)},\>\>x<1,$ (6.13)
which is proportional to the mass-to-luminosity ratio of turnoff stars. If
$x>1$ (or if $x$ increases from a value below 1 at turnoff to above 1 at
smaller masses) Equation (6.4a) shows that $M_{\rm s}/L$ increases in
proportion to $m_{\rm L}^{-(x-1)}$, so it is sensitive to a quantity that
cannot be determined photometrically.
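For orientation, a minimal sketch of Equation (6.13) using the Section 6.2.4 values and an assumed $x=0.5$ (a hypothetical slope, chosen only to satisfy $x<1$; the output is illustrative arithmetic, not a fitted value):

```python
# Equation (6.13): M_s/L = (alpha - x)/(1 - x) * 1/(1 + G) * m_t/l_d, valid for x < 1.
alpha, G = 5.0, 1.0
m_t, l_d = 1.0, 1.0        # solar units, as in Section 6.2.4
x = 0.5                    # hypothetical slope; must be < 1 for Eq. (6.13)

M_over_L = (alpha - x) / (1.0 - x) / (1.0 + G) * m_t / l_d
print(f"photometric M_s/L ~ {M_over_L:.1f} (solar units)")  # ~ 4.5 for these inputs
```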
In all cases, photometric data (star counts, population syntheses,
spectroscopic estimates of $x$) yield only a _lower limit_ to the true mass-
to-luminosity ratio $(M/L)$ of a galaxy, since any amount of mass could be
present in hidden form. When the masses of galaxies are determined
dynamically, the empirical $M/L$ values often increase to such large values in
the outer regions that a large amount of hidden mass must indeed be present
(e.g. Spinrad et al., 1978). For this reason, values of $M/L$ determined from
population syntheses and equivalent methods are sometimes called “photometric
$M/L$ ratios” to distinguish them from the ratios of actual mass (defined
dynamically) to luminosity.
#### 6.2.6 Evolution of luminosity and the Hubble diagram
Figure 13: Schematic Hubble diagram showing how both the deceleration
parameter ($q_{0}$) and evolution of galaxies affect the departure from
linearity. _Lines_ are schematic “theoretical” curves for two values of
$q_{0}$, _dots_ are hypothetical data points, and _arrows_ indicate
qualitatively how they should be corrected if the net effect of evolution is
to make distant galaxies intrinsically brighter than nearby ones. More
precisely, evolution (in this sense) at the rate of a few percent of a
galaxy’s luminosity per Gyr makes the true value of $q_{0}$ smaller by about
unity than the value inferred from the uncorrected data points.
In one of the classic cosmological tests, the Hubble diagram, logarithmic
redshifts of galaxies are plotted against their apparent magnitudes,
as illustrated schematically in Figure 13. For a sample with a well-defined
mean absolute magnitude, this diagram can be regarded heuristically as a plot
of “recession velocity” versus “distance”. At small redshifts, the regression
line is linear with a slope corresponding to Hubble’s Law, $\rm
redshift\propto distance$. At large redshifts, the deviation from linearity
measures the change in the ratio “velocity” / “distance” with distance itself;
since the lookback time (the light-travel time) increases with distance, the
curvature of the Hubble diagram thus gives a measure of the past expansion
rate of the Universe, and in particular of its deceleration. The deceleration
parameter $q_{0}$ can take only positive values in the simplest cosmological
models of General Relativity, the Friedmann models, and $1/2$ is a critical
value: if $q_{0}>1/2$, the deceleration is large enough for the expansion
eventually to be reversed, but if $0<q_{0}\leq 1/2$, the Universe will expand
forever; if in fact $q_{0}$ is negative, indicating that the expansion is
accelerating, more complicated cosmological models are required. Evolution of
galaxies enters the picture because the lookback times sampled must be many
Gyr for the deceleration to be detectable; the galaxies then had significantly
different luminosities, so the “distance” parameter, apparent magnitude,
cannot be estimated on the assumption of a constant absolute magnitude. The
departure of the Hubble diagram from linearity is very sensitive to evolution:
if the luminosities of elliptical galaxies decline by a few percent per
Gyr, for example, the apparent value of $q_{0}$ (inferred from the shape of
the Hubble diagram) exceeds its true value by several tenths. This problem has
been discussed by Humason et al. (1956), Sandage (1961b, c), Gunn & Oke
(1975), and Tinsley (1972b, 1977b). For an approximate estimate of the
evolutionary correction to $q_{0}$, the above analytical equations can be
used.
From Equation (6.10), we have
$\frac{d(\ln\ L)}{d(\ln\ t)}=-\theta(\alpha-x)+\frac{t}{1+G}\frac{dG}{dt},$ (6.14)
and Equation (6.8) can be used to evaluate $dG/dt$. The term $\ell_{\rm
g}\tau_{\rm g}$ in the expression for $G(t)$ depends only slowly on time,
because giant branch evolution depends only weakly on mass in the relevant
range, so only the explicit time-dependence need be considered and that term
gives $(t/G)(dG/dt)=\theta\alpha-1$. Substituting in Equation (6.14), we have
$\frac{d(\ln\ L)}{d(\ln\ t)}=-\theta(\alpha-x)+\frac{G}{1+G}(\theta\alpha-1).$ (6.15)
The second term in Equation (6.15) is not very important, since
$(\theta\alpha-1)$ is a few tenths and $G/(1+G)$ lies between 0 and 1. The
main term is therefore simply $(-\theta\alpha+\theta x)$, which can be written
$\frac{d(\ln\ L)}{d(\ln\ t)}\simeq-1.3+0.3x.$ (6.16)
Essentially the same result is obtained for the evolution of luminosity in
numerical population models. The examples in Figure 14 show the predicted
dependence on the IMF: the rate at which $M_{V}$ gets dimmer is slower in
models with a larger value of $x$. Since giants supply most of the light, this
behavior is mainly because, when $x$ is large, the giant branch is fed by a
more richly populated main sequence as time goes on.
Figure 14: Evolution of colors and magnitudes of single-generation models for
the stellar population in elliptical galaxies (Tinsley & Gunn, 1976a). Curves
are for three values of the slope of the IMF: _solid lines_, $x=2$; _dashes_, $x=1$; _dots_, $x=0$. Note that if $x$ is small, colors evolve slowly but
magnitudes evolve quickly.
In the Hubble diagram, evolution means that departures from linearity are due
not only to $q_{0}$ but also to systematic changes in the absolute magnitudes
of galaxies (Figure 13). If the curvature is interpreted without regard to
evolution, the result is an apparent value of $q_{0}$ that differs from the
true value by
$\Delta q_{0}\equiv{\rm apparent\ value-true\ value}\simeq-1.5\frac{d(\ln\ L)}{d(\ln\ t)}$ (6.17)
(e.g. Tinsley, 1977b). A first-order estimate, from Equation (6.16), is
therefore
$\Delta q_{0}\simeq 2.0-0.4x.$ (6.18)
The slope of the IMF, $x$, emerges as the critical parameter. As discussed in
Section 6.2.3, spectroscopic studies indicate that $x<2$, and possibly $x<1$.
In the first case, $|\Delta q_{0}|\gtrsim 1$, and in the second case, $|\Delta
q_{0}|\gtrsim 1.5$. In either case, the correction for evolution is big enough
to make a qualitative difference to the type of cosmology inferred.
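Chaining Equations (6.16) and (6.17) reproduces these numbers directly; the sketch below agrees with Equation (6.18) to within rounding:

```python
# Delta q0 ~ -1.5 * d(ln L)/d(ln t), with d(ln L)/d(ln t) ~ -1.3 + 0.3x  (Eqs 6.16-6.17)
for x in (0.0, 1.0, 2.0):
    dlnL_dlnt = -1.3 + 0.3 * x
    dq0 = -1.5 * dlnL_dlnt
    print(f"x = {x:.0f}:  d(lnL)/d(lnt) = {dlnL_dlnt:+.2f},  Delta q0 ~ {dq0:.2f}")
# x < 2 implies Delta q0 > ~1.0, and x < 1 implies Delta q0 > ~1.5, consistent
# with Equation (6.18), Delta q0 ~ 2.0 - 0.4x, after rounding.
```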
Current estimates of the apparent value of $q_{0}$ range from ${\sim}0$ (Gunn
& Oke, 1975) to ${\sim}1.5$ (Kristian et al., 1978), the differences being due
to unknown sampling and observational effects. Downward corrections of order
unity are clearly important in determining whether the true value of $q_{0}$
is greater than $1/2$, less than $1/2$, or even negative.
A value of $x\gtrsim 5$ would be needed to make stellar evolution negligible
in the Hubble diagram, but such a steep IMF would make the late dwarfs
dominate the infrared light to an extent that is precluded by the giant-
dominated spectra of elliptical galaxies. A possible loophole is the
following. Equation (6.16) depends on the value of $x$ for stars near turnoff,
while the infrared spectra depend on the ratio of giants to dwarfs of types K
and M; if the IMF were to turn over sharply between ${\sim}0.5$ and $1\>\rm
M_{\odot}$ (i.e., having far fewer less massive stars than the turnoff slope
would predict), one could have both a steep slope at turnoff and a very small
contribution from dwarfs to the infrared light. This idea is, of course,
completely ad hoc, since the IMF in the Solar neighborhood has $x<2$ for all
masses $<10\>\rm M_{\odot}$, and does not cut off above ${\sim}0.2\>\rm
M_{\odot}$. It is therefore most reasonable to conclude that elliptical
galaxies have giant-dominated spectra because the IMF has a fairly shallow
slope at turnoff; if so, their luminosity evolves fast enough to make the
apparent value of $q_{0}$ exceed its true value by 1 or more.
However, this is not all we need to know to unravel the Hubble diagram. The
galaxies used for this test are the central cluster giants, which are believed
to grow secularly by cannibalizing their neighbors (Section 5.3.3). This
process could plausibly lead to a growth rate in the total stellar population
of several percent per Gyr, with a corresponding increase of luminosity in
opposition to the effect just discussed. The dynamical effects cannot yet be
calculated accurately enough for a correction to be applied to the Hubble
diagram, so this test does not yet give a usefully accurate value of $q_{0}$.
The situation is reviewed by Tinsley (1977b).
#### 6.2.7 Evolution of colors
Predictions of color evolution are of interest because they can be tested by
observations of distant elliptical galaxies whose ages are several Gyr younger
than nearby galaxies.
The colors of a single-generation population become redder with age, if the
main course of stellar evolution is peeling off the MS at turnoff and
following the red giant branch. The main contribution to the color change is
the redward evolution of the turnoff, since giant evolution is insensitive to
turnoff mass in the range of interest. Consequently, colors evolve faster if
the light is less giant-dominated, i.e. if $x$ is larger, in contrast to the
integrated luminosities just discussed. This behavior is illustrated in Figure
14.
Qualitatively different behavior is predicted if the stars can lose enough
mass to become blue horizontal-branch stars, instead of the red “clump” giants
that normally represent the core helium burning stage of metal-rich low-mass
stars (Section 2.4.1). It has been suggested that such stars lose mass at a
variety of rates, some becoming late red giants and others becoming blue.
Numerical models for galaxy populations in which mass loss occurs
stochastically on the red giant branch have been studied by Ciardullo &
Demarque (1978). Because evolution to a blue position in the HR diagram occurs
only if the star has a small mass of envelope left, the fraction of giants
becoming blue increases as the turnoff mass decreases. The upshot is that the
integrated colors of the model galaxies evolve blueward after ${\sim}8\>\rm
Gyr$.
Observations of distant elliptical galaxies are ambiguous on this point, as
reviewed by Spinrad (1977). Some of the most distant central cluster galaxies
known, with redshifts ${\sim}0.6$, have intrinsic colors that are bluer than
those of nearby ellipticals, but the distant galaxies were selected on the
basis of strong radio emission so they may be atypical. If they are typical,
the color change is about that expected according to the type of models that
evolve monotonically toward redder colors (e.g. Figure 14); the lookback time
sampled is ${\sim}4-7\>\rm Gyr$, depending on the cosmological model. Another
sample of central cluster galaxies with redshifts up to nearly $0.5$ has no
systematic dependence of color on redshift that can be disentangled from the
intrinsic scatter (Wilkinson & Oke, 1978).
Dramatic color differences between nearby and distant galaxy populations in
clusters have been discovered by Butcher & Oemler (1978a, b). Nearby clusters,
i.e. those with lookback times $<1\>\rm Gyr$, have galaxy populations that are
strongly correlated with the cluster morphology: loose, irregular clusters
have a large fraction of spiral galaxies, and centrally concentrated, regular
clusters have very few spirals and mainly S0 and elliptical galaxies; the
brighter galaxies in regular clusters are correspondingly all red. However, in
two regular clusters with lookback times ${\sim}5\>\rm Gyr$, the bright
galaxies are found to have a wide range of colors, including many that are as
blue as late-type spiral galaxies. On the assumption that the distant regular
clusters represent younger versions of the nearby ones, these very blue
galaxies must evolve (in a few Gyr) into red S0s or ellipticals. The color
change observed is many times greater than any predictions based on the
evolution of single-generation populations, so it is concluded that those
galaxies were actively forming stars just a few Gyr ago. Presumably, they are
mainly the precursors of S0 galaxies seen in nearby clusters, in which star
formation is undetectable.
## 7 Colors and Star Formation Rates
The stellar populations in most galaxies are far more complicated than those
in ellipticals, because young stars are important contributors to the light.
The time-dependence of the SFR is therefore an important parameter in addition
to the three quantities (age, IMF, and metallicity) used to characterize old
populations, and the latter quantities could also be changing in time.
Moreover, the colors of spiral and irregular galaxies are often affected by
internal reddening and gaseous emission lines. Despite the complications
presented by these galaxies, it is especially interesting to try to understand
their photometric properties in terms of histories of star formation.
Applications of such studies include explaining correlations between form and
photometric properties, finding what physical conditions are conducive to star
formation, and searching for young galaxies.
Models for galaxies with ongoing star formation are usually numerical;
analytical approximations are cumbersome except in the simplest case of a
constant SFR (Tinsley, 1973). This Section will consider models that study
only a few simple properties, mainly just UBV colors. Although more can be
learned from spectroscopic details, the UBV system has the advantage of an
extensive and homogeneous compilation of galaxy colors in the Second Reference
Catalogue of Bright Galaxies (de Vaucouleurs et al., 1976; to be referred to
as RC2).
### 7.1 UBV Colors of Normal Galaxies
The UBV colors of a sample of morphologically normal galaxies are shown in
Figure 15; the crosses are all elliptical and S0 galaxies, and the dots are a
variety of morphological types, which we consider first. The colors of these
galaxies form such a narrow distribution in the two-color diagram that it is
tempting to look for one dominant parameter that could vary among galaxies and
lead to a one-dimensional set of colors. Because the appearance and spectra of
galaxies suggest a progression of star-forming activity, ranging from very
active in late-type irregulars to negligible in ellipticals, it is natural to
suggest that the color sequence is due to different proportions of young and
old stars. Population syntheses and evolutionary models have confirmed this
view, and their conclusions can be summarized (with some oversimplification)
in a “standard scenario” for galaxy evolution: normal galaxies have the same
IMF and mean stellar metallicities, and they are of the same age, but they
differ in the time-dependence of their SFRs; in particular, the latest
(bluest) types of galaxies form stars on a long timescale, while the earliest
(reddest) ceased star formation long ago. This hypothesis is obviously
inaccurate in detail, but it provides a useful starting point. It will be used
to construct a series of “standard” galaxy models, whose colors will be
compared with observations, and then the effects of factors other than the SFR
will be considered in turn.
#### 7.1.1 “Standard” models
Figure 15: Two-color diagram for morphologically normal galaxies and globular clusters. _Filled circles_: galaxies from the _Hubble Atlas_ (Sandage, 1961a), excluding peculiars and those with galactic latitudes $|b|<20^{\circ}$, with corrected colors from the RC2; the _error cross_ is for this sample, and the _solid line_ is its mean locus estimated by eye (Larson & Tinsley, 1978). _Crosses_: E and S0 galaxies in the Virgo cluster, with colors from Sandage (1972), corrected for reddening according to the RC2 formulae. _Open circles_: galactic globular clusters (excluding those with $E_{B-V}>0.05$), with colors from Harris & Racine (1979), corrected for reddening according to the RC2 formulae.
Figure 16: Theoretical two-color diagram for galaxies with monotonic SFRs, Solar metallicity, and the local IMF (Section 7.1). _Heavy line_: “standard” models, i.e. those of age $10\>\rm Gyr$, with SFRs ranging from constant at the top to a single initial burst at the bottom. _Light solid lines_: models differing from the standard set only in age, as indicated. _Dashes_: models differing from the standard set only in having an IMF with a constant slope, as marked. _Dash-dot line_: models differing from the preceding set with $x=1$ only in having an upper stellar mass limit $m_{\rm U}=10\>\rm M_{\odot}$, whereas all other models shown have $m_{\rm U}=30\>\rm M_{\odot}$. _Arrows_: approximate estimates of the effect on colors of blue and red galaxies, respectively, of altering the metallicity by the factor indicated.
The methods discussed in Section 6.1.2 have been used by various authors to
construct models corresponding to the standard scenario; the (typical) results
shown here are from Larson & Tinsley (1978). Let us consider models with the
IMF of the Solar neighborhood, Solar (or old-disk) metallicity, and an age of
$10\>\rm Gyr$; the exact choice of these standard parameters is not critical,
as shown below. The models have monotonically decreasing SFRs ($\psi$),
ranging from constant to a single initial burst lasting $10^{7}\>\rm yr$.
Different series of models have different monotonic functions $\psi(t)$
between these extremes, such as exponential functions, negative powers of
time, and combinations of the constant and single-burst models as two
components in different proportions. The colors of these series in the UBV
diagram all lie very near the locus indicated by a single heavy line in Figure
16: the model with a constant SFR is at the top of this line, that with an
initial burst is at the bottom, and the form of the curve in between is
essentially the same for all series with various functional forms for
$\psi(t)$. The theoretical locus for these standard models is very close to
the observed mean locus for galaxies of different types (line in Figure 15),
so the standard scenario is at least superficially consistent. Two conclusions
can be stated.
1.
_The UBV colors of normal galaxies can in general be accounted for by models
with the same age, metallicity, and IMF_ ; most of the observed colors lie in
the range predicted for monotonically decreasing SFRs, within the
observational errors. A further conclusion is that late-type galaxies are not
necessarily young, even though their appearance and blue-light spectra are
dominated by short-lived OB stars; the integrated colors are instead
consistent with an underlying population of stars with ages up to many
billions of years. These conclusions have been stressed in the context of
evolutionary models by Tinsley (1968), Searle et al. (1973), Larson & Tinsley
(1974, 1978), and Huchra (1977). Caveats and deviations from the norm are
discussed below.
2.
_Models with monotonically declining SFRs (and with the same age, metallicity,
and IMF) define a one-parameter sequence in the ( $U-B$, $B-V$) plane_.
Inspection of the models shows that the parameter is the ratio of the present
SFR to its average value over the lifetime of the system; or equivalently the
SFR per unit mass of stars ($\psi_{1}/M_{\rm s}$ or
$\psi_{1}/\overline{\psi}_{1}t_{1}$); or equivalently the inverse of these
quantities, a _timescale for star formation_
$T_{1}\equiv\overline{\psi}_{1}t_{1}/\psi_{1}$, in the notation of Section
2.2. Galaxies of the latest morphological types are the bluest objects in
Figure 15, and these evidently have the longest timescales for star formation,
while the earliest types, which are the reddest, have the shortest timescales;
this point will be discussed further in Section 7.1.6.
The one-parameter sequence shows that UBV colors for a given value of $T_{1}$
are almost independent of the functional form of $\psi(t)$, as long as it is
monotonic. An unfortunate consequence of this result is that the UBV colors of
a galaxy (if near the mean locus in Figure 15) cannot give any more
information about the SFR than the quantity $T_{1}$. In practice, they give
less information, because of ambiguities due to possible variations of
metallicity, etc., as shown below.
The sensitivity of colors to the single parameter $T_{1}$ is due to the
dominance of low-mass giants and/or young OB stars in the light of galaxies,
as discussed in Section 6.1.2. In effect, the contribution of low-mass giants
is proportional to the number of long-lived stars ever formed, and the
contribution of upper-main-sequence stars is proportional to the present SFR,
so their ratio is proportional to $T_{1}$. The integrated colors are
insensitive to details of the past SFR because A–F dwarfs with intermediate
lifetimes contribute relatively little light, and because the nature of the
giant branch changes little over a wide range of MS lifetimes for the
precursor stars. For the same reasons, it is difficult to extract
significantly more information about the history of star formation in a galaxy
from more detailed photometry than from UBV colors.
We next consider some possible problems with the simple one-parameter
scenario.
#### 7.1.2 Possible effects of errors
Three systematic discrepancies between the models and data can be seen on
comparing Figures 16 and 15: the heavy theoretical line lies about $0.05\>\rm
mag$ above the empirical mean locus, some galaxies are bluer than the bluest
model, and some are redder than the reddest model. The systematic offset is no
more than could be due to errors in the stellar evolution tracks, judged from
series of models based on alternative tracks. If this offset is corrected ad
hoc by moving the heavy theoretical line downward, there are still some bluer
and redder galaxies than predicted. The differences seem to be too big to
ascribe to uncertainties in the stellar evolution used, and they cannot be
corrected by redefining the “standard” age or metallicity, since an
improvement at the red end would leave more discrepant galaxies at the blue
end, and vice versa. Nor can the “standard” IMF be changed, since if the IMF
is universal it must be the same as the local function. Therefore, not all of
the discrepancies between the heavy line and the data are due to theoretical
errors within the framework of the standard scenario.
Although the mean error bar shown for the data in Figure 15 is small, some of
the colors may have significantly larger errors due to uncertainties in the
reduction. The colors plotted were corrected in the RC2 on a statistical basis
for Galactic and internal reddening, so excessively red and blue galaxies
could result from inappropriate corrections in a few cases. A reddening vector
(from the RC2) is shown in Figure 15, and it indicates that galaxies away from
the ends of the distribution could not be moved far from the mean locus except
by extremely large over- or underestimates of their reddening, because the
vector happens to lie almost parallel to the mean locus itself.
Emission lines can affect the colors of late-type galaxies, but the estimates
made by Huchra (1977) indicate that morphologically normal galaxies are
unlikely to have strong enough gaseous emission for this to be important.
In summary, it seems likely that some normal galaxies have colors that are too
red or too blue to be accounted for by the standard scenario. Two questions
arise: How can the discrepant galaxies be accounted for? And could normal
galaxies have significant variations in age, IMF, or metallicity that do not
show up on the UBV plot?
#### 7.1.3 Variations in age
Light lines in Figure 16 indicate the effects of allowing ages between 5 and
$20\>\rm Gyr$. The loci for different ages overlap, so most of the galaxies in
Figure 15 could have any ages in this range. The extreme colors, however, do
depend on age, and the bluest and reddest data points could be accounted for
if the ages of galaxies vary by a factor ${\sim}4$. We shall see that this is
not the only possible explanation of those data points, since metallicity
effects are probably important.
#### 7.1.4 Variations in metallicity
Effects of different stellar metallicities ($Z_{\rm s}$) can be estimated as
outlined in Section 6.1.2, and some approximate results are indicated in
Figure 16; the slope of the vector for red galaxies is empirical, but the
slope for blue galaxies and the length of each vector are uncertain by factors
${\sim}2$.
As discussed in Section 5.1, the sequence of colors for E–S0 galaxies (crosses
in Figure 15) is regarded as one of metallicity. This sequence closely
overlaps the locus of galaxies with different SFRs and ages, so UBV colors
alone cannot unambiguously give the SFR parameter ($T_{1}$), age, and $Z_{\rm
s}$ for a population of stars. The reddest points in Figure 15 are giant
elliptical galaxies, which almost certainly have a mean $Z_{\rm s}$ greater
than Solar (in the aperture used for the colors); if the galaxies have some
residual star formation, it is undetected to date, but it could affect the
colors enough to change the estimated $Z_{\rm s}$ and/or age somewhat.
The bluest points in Figure 15 are small late-type galaxies. It is known from
studies of the gaseous emission lines in some such galaxies that they can be
significantly metal-poor (e.g. a factor of 4 in the Small Magellanic Cloud;
Pagel et al., 1978), so a low $Z_{\rm s}$ could help to make these points very
blue. The effects of abundance changes on the colors of blue galaxies are too
uncertain to say whether another effect, such as a somewhat younger age, is
also required to account for their being bluer than the standard sequence.
#### 7.1.5 Variations in the IMF
To show possible effects of variations in the IMF, Figure 16 includes the loci
of UBV colors of models differing from the standard sequence only in their
IMF. Two of the variants have constant slopes, $x=1$ and $x=2$, while the
third has $x=1$ and an upper limit $m_{\rm U}=10\>\rm M_{\odot}$ compared to
$30\>\rm M_{\odot}$ in all other cases. (The models in Figures 16 and 18 use an IMF slightly different from Equation (2.9): the upper limit is $30\>\rm M_{\odot}$, except as stated, and the slope is $x=1.3$ ($\phi\propto m^{-2.3}$) for all $m>2\>\rm M_{\odot}$. The UBV colors would be little affected if Equation (2.9) itself, with $m_{\rm U}$ taking any value $\geq 30\>\rm M_{\odot}$, were used; cf. Huchra, 1977.) For blue galaxies, the
local IMF gives colors between those for $x=1$ and $x=2$. In general, the
colors are redder with a larger value of $x$ or a smaller value of $m_{\rm
U}$, since there are relatively few upper-MS stars. Comparisons with Figure 15
show that the variants illustrated are about the largest deviations from the
local IMF that one could have without predicting a greater color spread than
is observed.
This conclusion applies only to the bluer galaxies, and only to stars $\gtrsim
1\>\rm M_{\odot}$ that contribute significantly to their light. It is clear
from Figure 16 that the UBV colors of redder galaxies ($B-V\gtrsim 0.8$) are
very insensitive to the variations of IMF considered. Additional information
on the IMF, derived from spectroscopic studies and $M/L$, was discussed in
Section 2.2.2. Possible departures from the local IMF discussed there are a
lack of stars above $10\>\rm M_{\odot}$ in some early-type spirals, and
ubiquitous variations in the fraction of very low-mass objects. In elliptical
galaxies, the IMF between ${\sim}0.4$ and $1\>\rm M_{\odot}$ cannot be very
much steeper than the local function (Section 6.2.3).
#### 7.1.6 Relevance to the formation and structure of normal galaxies
To summarize the preceding discussion, every aspect of the standard scenario
has been shown to have its weaknesses: the metallicity, IMF, and age are known
to vary from one galaxy to another and/or could vary significantly without
affecting the locus of normal UBV colors. Nevertheless, it is true that the
main parameter causing the progression of colors of morphologically normal
galaxies is the timescale for star formation. The average UBV colors of
galaxies of different Hubble types lie along the middle of the distribution in
Figure 15, with the latest types at the top and the earliest at the bottom (de
Vaucouleurs, 1977). Thus _there is a strong correlation between the structure
of galaxies and their timescales for star formation_. Star formation seems to
be most efficient in the galaxies with the highest bulge-to-disk ratios, and,
among the spirals, it is most efficient in those with the most tightly wound spiral
arms.
An obvious question is whether the shape of a galaxy is a consequence of its
efficiency of star formation, or whether, conversely, the timescale for star
formation is determined by the structure. Both effects are believed to be
present. On one hand, more efficient early star formation leads to a galaxy
with a greater bulge-to-disk ratio (Section 1.2): the formation of a
spheroidal component requires that the stars in this component formed on a
timescale less than that for gaseous dissipation in the protogalaxy. On the
other hand, the present structure of a galaxy governs its large-scale
dynamics, which in turn has important effects on star formation (Section
2.3.3); for example, a more prominent bulge implies a greater surface density,
which can lead to stronger gas compression and so to more efficient star
formation in the disk. The papers cited in Section 1.2 and Section 2.3.3 give
many more detailed discussions, and further references, on the origin of the
Hubble sequence of galaxy types and the numerous properties of galaxies that
correlate with their forms.
S0 galaxies, which are disk galaxies without spiral structure, are an
inscrutable class. Like elliptical galaxies, they have colors consistent with
no ongoing star formation (or possibly a little, showing at short wavelengths;
Section 6.1.1). There are currently two types of theory on the origin of S0
galaxies: in the first, they are former spirals that have no more star
formation because their ISM was lost, either in a collision with another
galaxy or by ram-pressure sweeping due to motion through an ambient IGM; in
the second type of theory, S0 galaxies had intrinsically very efficient star
formation in their disks at early stages. The second type of theory is
preferred by some authors because S0 galaxies in isolation and in dense
clusters have essentially the same low contents of neutral hydrogen (Faber &
Gallagher, 1976) and the same distributions of colors (Sandage & Visvanathan,
1978), and because there are some structural differences between S0s and
spirals (Burstein, 1978). Nevertheless, the first picture is supported
circumstantially by the high proportion of S0 galaxies in clusters, especially
in their central regions, and especially in clusters with hot IGM (Oemler,
1974; Melnick & Sargent, 1977). Arguments based on colors are not decisive,
because models in which star formation stopped long ago have similar present
colors to those in which it stopped only a few Gyr ago (Biermann & Tinsley,
1975). Whatever mechanism cuts off star formation, the very blue galaxy
content of distant, regular clusters strongly suggests that many S0 galaxies
were actively making stars only ${\sim}4\>\rm Gyr$ ago (Section 6.2.7). It is
especially interesting that some nearby clusters contain “anemic” spirals,
with weak spiral structure and a subnormal neutral hydrogen content, that have
been interpreted as disk systems at a stage of evolution between normal
spirals and stripped S0s (van den Bergh, 1976b).
### 7.2 Colors of Peculiar Galaxies
Galaxies with morphological peculiarities have peculiar colors too, as
illustrated in Figure 17, which is a UBV diagram for systems in the _Atlas of
Peculiar Galaxies_ (Arp, 1966). The width of their color distribution is in
striking contrast to the narrow locus of normal galaxies (Figure 15), and on
closer inspection the width turns out to be due almost entirely to interacting
galaxies, which are shown as crosses in Figure 17 (Larson & Tinsley, 1978).
The implication is that dynamical disturbances have led to an unusual star
formation history, so a study of these galaxies and their colors might shed
some light on the process of star formation in general.
Figure 17: Two-color diagram for galaxies in the _Atlas of Peculiar Galaxies_
(Arp, 1966), excluding those with galactic latitudes $|b|<20^{\circ}$, with
corrected colors from the RC2 and other sources cited by Larson & Tinsley
(1978). _Crosses_ denote interacting systems. _Open circles_ are two Type I
Seyfert galaxies, whose colors may be affected by non-thermal emission.
#### 7.2.1 Bursts of star formation and blue colors
If the SFR in a galaxy does not decrease monotonically, colors very different
from those of the standard models in Section 7.1 can be obtained. The idea of
star formation in “flashes” or “bursts” was introduced by Searle & Sargent
(1972) (see also Searle et al., 1973) to explain the very blue colors of some
dwarf irregular galaxies, and it has appeared in other contexts including
elliptical galaxies with patches of star formation (van den Bergh, 1975) and
the peculiar galaxies discussed here.
The effects of a burst of star formation on a formerly red galaxy are
illustrated in Figure 18, where the heavy curve is the locus of standard
models aged $10\>\rm Gyr$, from Figure 16. The colors of younger galaxies are
shown in two extreme cases: the dotted line is a model with a constant SFR,
evolving through the ages shown (in Gyr), and the heavy dashed line (on the
left) is a model whose star formation stopped at $10^{7}\>\rm yr$. The latter
curve can be regarded as the evolution of a cluster of stars formed in a
period of $10^{7}\>\rm yr$, or equivalently as the colors resulting from stars
formed in a burst lasting $10^{7}\>\rm yr$. The light solid lines are the loci
of models made of two components:
1.
a red galaxy aged $10\>\rm Gyr$, with no ongoing star formation; and
2.
stars formed in a burst of duration $10^{7}\>\rm yr$, seen at ages
$10^{7}\>\rm yr$ (upper line) and $10^{8}\>\rm yr$ (lower light solid line).
Figure 18: Theoretical two-color diagram showing the colors of young galaxies
(_dotted and heavy dashed lines_) and an old red galaxy with bursts of star
formation of various strengths and ages (_light solid and dashed lines_).
Details of these lines and their labels are explained in Section 7.2.1. The
_heavy solid line_ is the locus of standard old models, from Figure 16.
The numbers along the upper curve are a burst strength parameter, defined as
the mass ratio of stars formed in the burst to stars in the old red galaxy.
Finally, the light dashed lines represent the evolution of a composite system,
from age $10^{7}\>\rm yr$ when the star formation in the burst stops; these
are lines of constant burst strength, and they cross the lower light solid
line when the age is $10^{8}\>\rm yr$. (The heavy dashed line is the limiting
case of a burst of infinite strength, i.e., without any underlying old stars).
It can be seen that a burst of star formation in a red galaxy gives colors
initially above the normal UBV locus, since the young stars cause an
“ultraviolet excess”; as the burst ages, the colors evolve across the normal
locus after nearly $10^{8}\>\rm yr$; then they fall below the normal line and
eventually become imperceptibly different from the colors of an undisturbed
old galaxy.
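The two-component construction can be made explicit with a small flux-weighting sketch. The component colors and the burst's share of the V-band light below are hypothetical placeholders, not values taken from the models of Figure 18; the calculation only illustrates why a modest burst pushes a red galaxy above the normal locus:

```python
import math

# Composite UBV colors of an old galaxy plus a young burst, by flux weighting.
# Zero-points cancel because the same convention is applied to every component.

def fluxes(BV, UB, fV):
    """Return (V, B, U) fluxes, in arbitrary units, for given B-V and U-B."""
    fB = fV * 10 ** (-0.4 * BV)   # from B - V = -2.5 log10(fB / fV)
    fU = fB * 10 ** (-0.4 * UB)   # from U - B = -2.5 log10(fU / fB)
    return fV, fB, fU

old = fluxes(BV=0.95, UB=0.50, fV=1.0)      # hypothetical old, red galaxy
burst = fluxes(BV=0.00, UB=-0.80, fV=0.10)  # hypothetical burst, 10% of the V light

fV, fB, fU = (o + b for o, b in zip(old, burst))
print(f"composite B-V = {-2.5 * math.log10(fB / fV):+.2f}")  # bluer than the old galaxy
print(f"composite U-B = {-2.5 * math.log10(fU / fB):+.2f}")  # pronounced ultraviolet excess
```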
Evidently the colors of peculiar galaxies in Figure 17 can be explained by
bursts of star formation of various strength and ages in galaxies with various
initial colors. Using a series of models like those in Figure 18, Larson &
Tinsley (1978) found that most of the Arp (1966) galaxies can be accounted for
with bursts of strength less than $5\%$ and duration $<2\times 10^{7}\>\rm
yr$; a few more deviant colors are probably due to observational scatter,
internal reddening, non-thermal continuum emission (in two Type I Seyfert
galaxies in the sample), and strong gaseous emission lines, whose effects on
colors are discussed by Huchra (1977). The strongest bursts of star formation
are inferred for galaxies in distorted close pairs, often with bridges or
tails, and in apparently single systems with tails and filamentary streamers.
Dynamical models for colliding galaxies predict features like these in cases
of strong tidal deformation and recent mergers (Toomre, 1977), so it appears
that violent dynamical interactions lead to star formation. For this reason,
it has been suggested that the stars in elliptical galaxies could have formed
in bursts due to collisions and mergers among protogalactic subsystems
(Section 5.3.2).
Figure 18 shows that the colors of models with bursts of strength $10\%$ and
infinity are very similar. The differences are less than the observational
uncertainties for many faint peculiar galaxies, and the theoretical
uncertainties in stellar evolution, metallicity effects, etc. In other words,
it is not possible to tell from UBV colors alone whether a galaxy is really
young or has $90\%$ of its mass in old stars! Some of the galaxies near the
upper left in Figure 17 are very chaotic in appearance, and are tempting
candidates for truly young galaxies, but in all cases the colors are
inconclusive and the appearance could be due to a violent collision or
irregularly distributed star formation in an old galaxy.
Colors at longer wavelengths, such as $V-K$, are much more sensitive than
$B-V$ to the presence of some old stars that would distinguish between a truly
young system and one with a burst strength ${\sim}10\%$ (Struck-Marcell &
Tinsley, 1978). Galaxies with very active star formation are often dusty, so
red colors could be due to reddening rather than to age; on the other hand,
very blue values of $V-K$ would indicate a lack of red giants, and much
stronger limits could be put on the mass of any underlying old component. As
yet, there are no known nearby galaxies in which the dominant young stellar
component could not be masking a significant mass of old stars.
#### 7.2.2 Highly reddened galaxies
Some regions of galaxies that are suspected of having intense star formation
are extremely dusty, and they show thermal infrared (IR) emission that is
interpreted as re-radiation of starlight by the dust. Examples of such regions
include the centers of M82 and NGC 253, and the dust band around NGC 5128
(e.g. Kleinmann, 1977; Telesco, 1978). Star formation is indicated by early-
type spectra and blue colors in unobscured patches (e.g. van den Bergh, 1971,
1978), emission from interstellar molecules (e.g. Whiteoak, 1978), and the
lack of more plausible explanations for the IR emission (e.g. Kleinmann,
1977). In NGC 5128, the dust band has an IR luminosity of a few times
$10^{10}\>\rm L_{\odot}$ (Telesco, 1978), which rivals the visual luminosity
of the entire elliptical galaxy.
If the IR luminosity is assumed to represent the bolometric luminosity of
buried stars, an SFR can be estimated (Struck-Marcell & Tinsley, 1978): models
like those of Section 7.1 show that any system with a mass-to-luminosity ratio
$M_{\rm s}/L_{\rm bol}<0.5$ is so dominated by young stars that $L_{\rm bol}$
is almost directly proportional to the SFR; with the local IMF, the relation
is
$\psi\simeq(0.1-0.4)\frac{L_{\rm bol}}{\rm L_{\odot}}\>\rm M_{\odot}\ Gyr^{-1}.$ (7.1)
An upper limit to the time for which star formation could have continued at
this rate is approximately $M_{\rm s}/\psi$, so the limiting timescale
$\tau_{\rm s}$ depends only on $M_{\rm s}/L_{\rm bol}$, according to the
relation
$\tau_{\rm s}\equiv\frac{M_{\rm s}}{\psi}\sim(3-10)\frac{M_{\rm s}/L_{\rm bol}}{\rm M_{\odot}/L_{\odot}}\>\rm Gyr.$ (7.2)
Some galactic nuclei have such strong IR emission that $M_{\rm s}/L_{\rm bol}$
is only a few hundredths, so the timescale is only a few times $10^{8}\>\rm
yr$. The dust-band region of NGC 5128 is also making stars at a prodigious
rate: given a luminosity ${\sim}10^{10}\>\rm L_{\odot}$, Equation (7.1) leads
to an SFR of about $2\>\rm M_{\odot}\ yr^{-1}$, so a respectable disk of stars
could be built in just a few times the dynamical timescale of the system. van
den Bergh (1975) has suggested that NGC 5128 could evolve into an early-type
spiral seen edge-on, like the Sombrero galaxy M104.
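Under the stated assumption that the IR luminosity traces the bolometric luminosity of the buried stars, Equations (7.1) and (7.2) give these order-of-magnitude numbers directly; the sketch below simply re-runs that arithmetic:

```python
# Order-of-magnitude arithmetic for Equations (7.1)-(7.2).
L_bol = 1e10                                   # L_sun, e.g. the dust band of NGC 5128

psi_lo, psi_hi = 0.1 * L_bol, 0.4 * L_bol      # SFR in M_sun/Gyr  (Eq. 7.1)
print(f"SFR ~ {psi_lo/1e9:.0f}-{psi_hi/1e9:.0f} M_sun/yr")     # ~ 1-4 M_sun/yr

M_over_L = 0.03                                # M_s/L_bol of a few hundredths (solar units)
tau_lo, tau_hi = 3 * M_over_L, 10 * M_over_L   # limiting timescale in Gyr (Eq. 7.2)
print(f"tau_s ~ {tau_lo:.2f}-{tau_hi:.1f} Gyr")                # a few times 1e8 yr
```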
Since most of the bolometric light comes from massive stars, the above ratios
of SFR to $L_{\rm bol}$ could be overestimates if low-mass stars are not
forming. For example, in the models discussed, stars above $10\>\rm M_{\odot}$
contribute $90\%$ of $L_{\rm bol}$, but stars below $1\>\rm M_{\odot}$ account
for $80\%$ of the mass formed into stars. There is no reason to suspect a lack
of low-mass stars, however, especially since low-mass protostars (T Tauri
stars) are associated with dark clouds in the Milky Way.
These very dusty galaxies with intense star formation lead again to the
question of what truly young galaxies would look like. In the absence of dust
they would be very blue, like the young models in Figure 18, but it seems that
the regions of galaxies with the most intense bursts are the very reddened
ones just described. This question is important in the context of primeval
galaxies at large redshifts, i.e. the early stages of most normal galaxies
that now have ages ${\sim}10\>\rm Gyr$. Starting with the ideas of Partridge &
Peebles (1967), most models for primeval galaxies have assumed that hot young
stars would be visible and lead to detectable luminosities (despite the great
distances) during a brilliant early burst of star formation (e.g. Meier, 1976;
Kaufman & Thuan, 1977; Sunyaev et al., 1978). A model for a very dusty
primeval galaxy has been studied by Kaufman (1976), and Sunyaev et al. (1978)
considered the possibility of substantial radiation from dust. If the very
dusty nearby galaxies are the best analogs of truly primeval systems, as
suggested by Larson (1976b), their prospects for detection at optical
wavelengths are dim. Observational searches have so far produced null results.
Another factor making primeval galaxies hard to detect could be that most
galaxies have such a long timescale for star formation that there is not a
very bright phase at an early peak SFR (Tinsley, 1978a; Tinsley & Larson,
1979). One argument is that most spiral galaxies have colors that correspond
to rather long timescales for star formation (Section 7.1), precluding a
significant early burst with a corresponding peak in luminosity. Moreover,
even elliptical galaxies could form their stars over rather long time
intervals if star formation occurs during a series of mergers among subsystems
(Section 5.3.2). According to these ideas, late-type galaxies could be fainter
in the past than they are now, and early-type galaxies could experience their
brightest evolutionary stages only a few Gyr ago. Galaxies in interestingly
early evolutionary stages have indeed already been found at moderate
redshifts: distant clusters have excess numbers of blue galaxies (Section
6.2.7; Butcher & Oemler, 1978a), and counts in the field show a large excess
of blue galaxies at faint apparent magnitudes (Kron, 1978). Further studies of
these phenomena will surely shed light on the ways in which stars and galaxies
have formed during cosmological time.
## 8 Conclusion
Returning to the outline of galactic evolution in Figure 1, one can see how
much remains to be learned before the jigsaw puzzle will be complete enough
for a clear picture to emerge. Essentially every aspect of the subject needs
further observational and theoretical study, so galactic evolution will long
be a fertile field for research.
## Acknowledgements
I am grateful to Pierre Demarque, Richard B. Larson, Curtis Struck-Marcell,
and Bruce A. Twarog for their suggestions and help in the preparation of this
review. The work was supported in part by the National Science Foundation
(Grant AST77-23566) and the Alfred P. Sloan Foundation.
## References
* Alcock & Paczynski (1978) Alcock C., Paczynski B., 1978, ApJ, 223, 244
* Arnett (1971) Arnett W. D., 1971, ApJ, 166, 153
* Arnett (1978) Arnett W. D., 1978, ApJ, 219, 1008
* Arp (1966) Arp H. C., 1966, Atlas of Peculiar Galaxies. California Institute of Technology, Pasadena
* Audouze (1977) Audouze J., 1977, Proc. IAU Session, Grenoble, France, 67, 3
* Audouze & Tinsley (1976) Audouze J., Tinsley B. M., 1976, ARA&A, 14, 43
* Baade (1963) Baade W., 1963, Evolution of stars and galaxies. Harvard University Press, Cambridge
* Bahcall & Sarazin (1977) Bahcall J. N., Sarazin C. L., 1977, ApJ, 213, L99
* Biermann & Tinsley (1975) Biermann P., Tinsley B. M., 1975, A&A, 41, 441
* Boksenberg & Sargent (1978) Boksenberg A., Sargent W. L. W., 1978, ApJ, 220, 42
* Bregman (1978) Bregman J. N., 1978, ApJ, 224, 768
* Burstein (1978) Burstein D., 1978, PhD thesis, University of California, Santa Cruz
* Burton (1979) Burton W. B., 1979, IAUSymp, 84
* Butcher (1977) Butcher H., 1977, ApJ, 216, 372
* Butcher & Oemler (1978a) Butcher H., Oemler A. J., 1978a, ApJ, 219, 18
* Butcher & Oemler (1978b) Butcher H., Oemler A. J., 1978b, ApJ, 226, 559
* Butler et al. (1976) Butler D., Carbon D., Kraft R. P., 1976, ApJ, 210, 120
* Chiosi (1979) Chiosi C., 1979, A&A, 80, 252
* Chiosi et al. (1978) Chiosi C., Nasi E., Sreenivasan S. R., 1978, A&A, 63, 103
* Christian & Janes (1979) Christian C. A., Janes K. A., 1979, AJ, 84, 204
* Ciardullo & Demarque (1978) Ciardullo R. B., Demarque P., 1978, IAUSymp, 80, 345
* Cohen (1976) Cohen J. G., 1976, ApJ, 203, 587
* Cohen (1978) Cohen J. G., 1978, ApJ, 223, 487
* Conti (1978) Conti P. S., 1978, ARA&A, 16, 371
* Cowley et al. (1978) Cowley A. P., Hartwick F. D. A., Sargent W. L. W., 1978, ApJ, 220, 453
* Cox & Smith (1976) Cox D. P., Smith B. W., 1976, ApJ, 203, 361
* Da Costa (1977) Da Costa G. S., 1977, PhD thesis, Australian National University
* Dallaporta (1973) Dallaporta N., 1973, A&A, 29, 393
* Demarque & McClure (1977) Demarque P., McClure R. D., 1977, Evolution of Galaxies and Stellar Populations, Proceedings of a Conference at Yale University, p. 199
* Doroshkevich et al. (1978) Doroshkevich A. G., Shandarin S. F., Saar E., 1978, MNRAS, 184, 643
* Edmunds & Pagel (1978) Edmunds M. G., Pagel B. E. J., 1978, MNRAS, 185, 77P
* Eggen et al. (1962) Eggen O. J., Lynden-Bell D., Sandage A. R., 1962, ApJ, 136, 748
* Elmegreen & Lada (1977) Elmegreen B. G., Lada C. J., 1977, ApJ, 214, 725
* Faber (1972) Faber S. M., 1972, A&A, 20, 361
* Faber (1973) Faber S. M., 1973, ApJ, 179, 731
* Faber (1977) Faber S. M., 1977, Evolution of Galaxies and Stellar Populations, Proceedings of a Conference at Yale University, p. 157
* Faber & Gallagher (1976) Faber S. M., Gallagher J. S., 1976, ApJ, 204, 365
* Faber & Gallagher (1979) Faber S. M., Gallagher J. S., 1979, ARA&A, 17, 135
* Fowler (1979) Fowler W. A., 1979, International Conference on Astrophysics, Universite de Liege, Belgium, p. 22
* Freeman (1977) Freeman K. C., 1977, Evolution of Galaxies and Stellar Populations, Proceedings of a Conference at Yale University, p. 133
* Frogel et al. (1978) Frogel J. A., Persson S. E., Matthews K., Aaronson M., 1978, ApJ, 220, 75
* Fusi-Pecci & Renzini (1976) Fusi-Pecci F., Renzini A., 1976, A&A, 46, 447
* Gehrels (1978) Gehrels T., 1978, Proceedings of a Conference held at Tucson, Arizona, 19, 109
* Gerola & Seiden (1978) Gerola H., Seiden P. E., 1978, ApJ, 223, 129
* Gordon & Burton (1976) Gordon M. A., Burton W. B., 1976, ApJ, 208, 346
* Gott (1977) Gott J. R., 1977, ARA&A, 15, 235
* Gunn (1977) Gunn J. E., 1977, Evolution of Galaxies and Stellar Populations, Proceedings of a Conference at Yale University, p. 445
* Gunn & Gott (1972) Gunn J. E., Gott J. R., 1972, ApJ, 176, 1
* Gunn & Oke (1975) Gunn J. E., Oke J. B., 1975, ApJ, 195, 255
* Hainebach & Schramm (1977) Hainebach K. L., Schramm D. N., 1977, ApJ, 212, 347
* Hardy (1977) Hardy E., 1977, ApJ, 211, 718
* Harris (1976) Harris G. L. H., 1976, ApJS, 30, 451
* Harris & Deupree (1976) Harris G. L. H., Deupree R. G., 1976, ApJ, 209, 402
* Harris & Racine (1979) Harris W. E., Racine R., 1979, ARA&A, 17, 241
* Haschick & Burke (1975) Haschick A. D., Burke B. F., 1975, ApJ, 200, L137
* Hausman & Ostriker (1978) Hausman M. A., Ostriker J. P., 1978, ApJ, 224, 320
* Huchra (1977) Huchra J. P., 1977, ApJ, 217, 928
* Humason et al. (1956) Humason M. L., Mayall N. U., Sandage A. R., 1956, AJ, 61, 97
* Iben (1974) Iben I., 1974, ARA&A, 12, 215
* Iben (1976) Iben I. J., 1976, ApJ, 208, 165
* Iben & Truran (1978) Iben I. J., Truran J. W., 1978, ApJ, 220, 980
* Janes (1977) Janes K. A., 1977, Chemical and Dynamical Evolution of Our Galaxy, Proceedings of a Conference at Geneva Observatory, p. 173
* Johnson & Axford (1971) Johnson H. E., Axford W. I., 1971, ApJ, 165, 381
* Jones (1976) Jones B. J., 1976, Rev. Mod. Phys., 48, 107
* Kalnajs (1978) Kalnajs A. J., 1978, IAUSymp, 77, 113
* Kaufman (1976) Kaufman M., 1976, Ap&SS, 40, 369
* Kaufman & Thuan (1977) Kaufman M., Thuan T. X., 1977, ApJ, 215, 11
* King (1971) King I. R., 1971, PASP, 83, 377
* King (1977) King I. R., 1977, Evolution of Galaxies and Stellar Populations, Proceedings of a Conference at Yale University, p. 1
* Kirshner & Oke (1975) Kirshner R. P., Oke J. B., 1975, ApJ, 200, 574
* Kleinmann (1977) Kleinmann D. E., 1977, Proceedings of the Symposium of Infrared and submillimeter astronomy, Philadelphia, 63, 129
* Knapp et al. (1978) Knapp G. R., Kerr F. J., Williams B. A., 1978, ApJ, 222, 800
* Kormendy (1977) Kormendy J., 1977, Evolution of Galaxies and Stellar Populations, Proceedings of a Conference at Yale University, p. 131
* Kraft (1978) Kraft R. P., 1978, IAUSymp, 80, 167
* Kristian et al. (1978) Kristian J., Sandage A., Westphal J. A., 1978, ApJ, 221, 383
* Kron (1978) Kron R. G., 1978, PhD thesis, University of California, Berkeley
* Larson (1972a) Larson R. B., 1972a, Nature Phys. Sci., 236, 7
* Larson (1972b) Larson R. B., 1972b, Nature, 236, 21
* Larson (1974a) Larson R. B., 1974a, MNRAS, 166, 585
* Larson (1974b) Larson R. B., 1974b, MNRAS, 169, 229
# Nonparametric Identification and Estimation of Earnings Dynamics using a
Hidden Markov Model: Evidence from the PSID
Tong Zhou Department of Computer Science
Johns Hopkins University
Baltimore, United States
Email: <EMAIL_ADDRESS>
DOI: 10.1109/ICAIBD57115.2023.10206080
###### Abstract
This paper presents a hidden Markov model designed to investigate the complex
nature of earnings persistence. The proposed model assumes that the residuals
of log-earnings consist of a persistent component and a transitory component,
both following general Markov processes. Nonparametric identification is
achieved through spectral decomposition of linear operators, and a modified
stochastic EM algorithm is introduced for model estimation. Applying the
framework to the Panel Study of Income Dynamics (PSID) dataset, we find that
the earnings process displays nonlinear persistence, conditional skewness, and
conditional kurtosis. Additionally, the transitory component is found to
possess non-Gaussian properties, resulting in a significantly asymmetric
distributional impact when high-earning households face negative shocks or
low-earning households encounter positive shocks. Our empirical findings also
reveal the presence of ARCH effects in earnings at horizons ranging from 2 to
8 years, further highlighting the complex dynamics of earnings persistence.
###### Index Terms:
Hidden Markov Model, Panel Data, Nonparametric Identification, Modified
Stochastic EM, PSID
## I Introduction
Earnings dynamics is a fascinating and important area in economics, with
significant implications for understanding economic agents’ consumption
decisions. Macroeconomists employ life-cycle models and profiles of agents’
earnings dynamics to examine their various responses within the economy,
laying the foundation for the creation of sensible policies to manage business
cycles. In a broader context, the nature of earnings dynamics is crucial in
addressing a wide range of economic issues, including income inequality,
optimal design of fiscal policies and insurance programs, economic mobility,
and human capital development. As such, accurately characterizing earnings
dynamics enables more effective management and a deeper understanding of a
country’s economy.
We utilize a parsimonious specification of the earnings process, where log-
earnings consist of an unobserved persistent shock and an unobserved
transitory shock. The literature on earnings process specifications varies in
its focus on the distinction between these two types of shocks, a concept that
can be traced back to Nobel laureate Milton Friedman’s renowned permanent
income hypothesis (PIH). Although there are numerous models of earnings
dynamics, most tend to concentrate on linear specifications for these two
hidden components, inherently excluding the possibility of nonlinear
transmission of earnings shocks.
In this paper, we introduce a new nonparametric framework to explore earnings
dynamics. Both the persistent and transitory components are modeled as two
generic first-order Markov processes. Apart from the first-order restriction,
no further assumptions are imposed on the model. In essence, our specification
establishes a hidden Markov model (HMM) with two latent state variables. Our
focus is on identifying the two Markov kernels, specifically, the conditional
distributions of the persistent component given its past and the conditional
distribution of the transitory component given its past.
We propose a two-step stochastic EM algorithm for estimating the model. In the
E-step, we use an MCMC procedure to obtain draws for the two hidden components
through a likelihood-based approach. In the M-step, we perform a maximization
procedure on a series of quantile regressions with imputed values for hidden
covariates. The iteration continues until the expected likelihood is
maximized.
## II Materials and Methods
### II-A Model
#### II-A1 Setup
In line with the conventions of the earnings dynamics literature, we use $\log Y$
to represent real log-earnings, which can be decomposed into an explanatory
part, a persistent component $U$, and a transitory component $V$.
The earnings process for each household $i$ at time $t$ is as follows:
$\log(Y_{it})=\mathbf{z}_{it}^{\prime}\bm{\beta}+U_{it}+V_{it},$ (1)
where $\mathbf{z}_{it}$ is a set of observed demographics and known by agents
at $t$. We let $y_{it}=\log(Y_{it})-\mathbf{z}_{it}^{\prime}\bm{\beta}$ denote
the log of real income net of predictable individual components.
We assume both $U_{it}$ and $V_{it}$ follow general unknown functions
$H_{t}(U_{i,t-1},\eta_{it})$ and $Q_{t}(V_{i,t-1},\varepsilon_{it})$, where
$\eta_{it}$ and $\varepsilon_{it}$ are assumed to follow conditional standard
uniform distributions, i.e.,
$\eta_{it}\,|\,(U_{i,t-1},U_{i,t-2},\cdots)\sim\mathsf{Unif}(0,1),\quad t=2,\cdots,T,$ (2)
$\varepsilon_{it}\,|\,(V_{i,t-1},V_{i,t-2},\cdots)\sim\mathsf{Unif}(0,1),\quad t=2,\cdots,T.$ (3)
This general nonparametric setting offers greater flexibility for studying the
persistence of earnings dynamics and encompasses many earnings dynamic models
as special cases including the canonical earnings dynamics models, where the
persistent component follows a unit-root process. Given that both processes
are unobserved, a Bernoulli instrumental variable $\omega(C_{it})$ is required
to differentiate them, where $\omega(\cdot)$ is a known transformation of
consumption data $C_{it}$ for agent $i$ at $t$. Since the purpose of
$\omega(C_{it})$ is merely to distinguish the two Markov kernels, it suffices
for our purposes to use a logistic function, i.e.
$\mathbb{P}(\omega(C_{it})=1|U_{it})=1/(1+\exp(-\beta_{0}-\beta_{1}U_{it}))$.
The rationale behind this setup can be found in the works of [3, 4, 5].
Putting the above discussion together, the complete model setup is
$y_{it}=U_{it}+V_{it},$
$U_{it}=H_{t}(U_{i,t-1},\eta_{it}),$
$V_{it}=Q_{t}(V_{i,t-1},\varepsilon_{it}),$
$\mathbb{P}(\omega(C_{it})=1)=1/(1+\exp(-\beta_{0}-\beta_{1}U_{it})).$
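For concreteness, the following minimal simulation sketch generates data from this setup. The AR(1)-type quantile functions used for $H_{t}$ and $Q_{t}$ and all parameter values are illustrative assumptions only; the model itself leaves both kernels nonparametric.

```python
# Minimal simulation sketch of the earnings HMM; the functional forms are
# illustrative stand-ins for the nonparametric kernels H_t and Q_t.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
N, T = 1000, 8                       # households, time periods
beta0, beta1 = 0.0, 1.0              # logistic instrument parameters (assumed)

def H(u_prev, eta):
    """Hypothetical persistent kernel: tau -> H_t(u, tau), strictly increasing."""
    return 0.9 * u_prev + 0.3 * norm.ppf(eta)

def Q(v_prev, eps):
    """Hypothetical transitory kernel with weak first-order dependence."""
    return 0.2 * v_prev + 0.5 * norm.ppf(eps)

U = np.empty((N, T)); V = np.empty((N, T))
U[:, 0] = rng.normal(0.0, 1.0, N)    # initial conditions U_{i1}, V_{i1}
V[:, 0] = rng.normal(0.0, 0.5, N)
for t in range(1, T):
    U[:, t] = H(U[:, t - 1], rng.uniform(size=N))   # eta_it ~ Unif(0,1)
    V[:, t] = Q(V[:, t - 1], rng.uniform(size=N))   # eps_it ~ Unif(0,1)

y = U + V                                           # log-earnings residuals
p = 1.0 / (1.0 + np.exp(-beta0 - beta1 * U))        # P(omega(C_it)=1 | U_it)
omega = rng.uniform(size=(N, T)) < p                # Bernoulli instrument
```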
#### II-A2 Assumptions
We will outline the assumptions needed to identify the model. Our
identification strategy relies on the powerful spectral decomposition of
linear operators. A thorough overview of this approach can be found in the
work of [11].
###### Assumption 1.
1. 1.
(First-order Markov) Both $U_{it}$ and $V_{it}$ follow a generic first-order
Markov process;
2. 2.
(Conditional uniform distribution) Both $\eta_{it}$ and $\varepsilon_{it}$
follow conditional standard uniform distributions
3. 3.
(Monotonicity) The unknown conditional quantile functions $\tau\mapsto
H_{t}(U_{i,t-1},\tau)$ and $\tau\mapsto Q_{t}(V_{i,t-1},\tau)$ are strictly
increasing for $\tau\in(0,1)$.
4. 4.
(Invertibility) The conditional distribution functions $F(U_{it}|U_{i,t-1})$
and $F(V_{it}|V_{i,t-1})$ are both invertible w.r.t. their respective
arguments $U_{it}$ and $V_{it}$ for each $i$ and $t$.
Assumption 1) states that $U_{it}$ and $V_{it}$ have only one-period memory of
their past. This condition imposes dynamic exclusion restrictions that aid in
obtaining nonparametric identification, and it is commonly made in structural
economic models. Although it can be relaxed to allow for higher-order Markov
processes, we maintain the first-order Markovian assumption in this paper for
simplicity. Assumption 2) normalizes the error terms $\eta_{it}$ and
$\varepsilon_{it}$ to follow standard uniform distributions. This setup
enables us to discuss the consequences of shocks along the ranks of $U_{i,t-1}$
and $V_{i,t-1}$. The representation also nests the canonical model of earnings
dynamics as a special case, in which $U_{it}$ follows a unit-root process, i.e.,
$U_{i,t+1}=U_{it}+\nu_{i,t+1},$
where $\nu_{i,t+1}=F^{-1}(\eta_{i,t+1})$ and $F$ is the CDF of the innovation
$\nu_{i,t+1}$. Assumption 3) guarantees that $U_{it}$ and $V_{it}$ have
absolutely continuous distributions. Assumptions 1)-3) combined imply that,
for all $\tau\in(0,1)$, $H_{t}(U_{i,t-1},\tau)$ is exactly the
$\tau$-th conditional quantile of $U_{it}$ given $U_{i,t-1}$; the same
relationship holds for $Q_{t}(V_{i,t-1},\tau)$. Assumption 4) is furnished to
facilitate identification of the nonlinear functions $H_{t}$ and $Q_{t}$. The
monotonicity restriction on $H_{t}$ and $Q_{t}$ is necessary, but not
sufficient, for the existence of the conditional densities $f(U_{it}|U_{i,t-1})$
and $f(V_{it}|V_{i,t-1})$: the stronger requirement of absolute continuity of
the distribution function $F_{V_{t}|V_{t-1}}$ cannot be weakened. However,
since distribution functions that are continuous but not absolutely continuous
are rare in practice, Assumption 4) is almost equivalent to the existence of
the two conditional densities.
###### Assumption 2 (independence).
Two random vectors $(\eta_{i2},\cdots,\eta_{iT},U_{i1})$ and
$(\varepsilon_{i2},\cdots,\varepsilon_{iT},V_{i1})$ are statistically
independent. The Bernoulli random variable $\omega(C_{t})$ is independent of
$V_{t}$ for all $t$.
This assumption suggests that the persistent process
$\left\\{U_{it}\right\\}_{t=1}^{T}$ and the transitory process
$\left\\{V_{it}\right\\}_{t=1}^{T}$ are statistically independent. This
restriction allows for the common deconvolution technique of separating two
unknown probability densities. For instance, once one of the marginal
densities $f(U_{it})$ or $f(V_{it})$ is identified, the other one will also be
automatically identified through the deconvolution argument.
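A minimal numerical sketch of this deconvolution step follows. It assumes the law of $U$ is already identified (a Laplace distribution is chosen arbitrarily here); the frequency grid and its truncation are illustrative regularization choices, since deconvolution is an ill-posed inverse problem.

```python
# Deconvolution sketch: recover the CF of V as phi_y / phi_U, then invert.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
U = rng.laplace(0.0, 0.5, n)          # component with (assumed) identified law
V = rng.normal(0.0, 0.8, n)           # component to recover (ground truth here)
y = U + V                             # only y is observed

t = np.linspace(-8, 8, 321)           # truncated frequency grid (regularization)
phi_y = np.array([np.exp(1j * ti * y).mean() for ti in t])  # empirical CF of y
phi_U = 1.0 / (1.0 + 0.25 * t**2)     # CF of Laplace(0, 0.5), nonvanishing
phi_V = phi_y / phi_U                 # independence factorizes the CF of y

v = np.linspace(-3, 3, 121)
dt = t[1] - t[0]
# Fourier inversion f_V(v) = (1/2pi) * integral of e^{-itv} phi_V(t) dt,
# approximated by a Riemann sum on the truncated grid:
f_V = (np.exp(-1j * np.outer(v, t)) @ phi_V).real * dt / (2 * np.pi)
```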
Since our identification strategy relies on manipulating linear operators, we
provide the definition of a linear operator here to facilitate later
discussion. Let $\mathcal{L}^{p}(F_{U})$ denote the collection of functions of
the variable $U$ whose $p$-th moment is finite, i.e., $g\in\mathcal{L}^{p}(F_{U})$
implies
$\|g\|_{\mathcal{L}^{p}}=\left(\int_{\mathcal{U}}|g(u)|^{p}\,\mathrm{d}F_{U}(u)\right)^{\frac{1}{p}}<\infty,$
(4)
where $\mathcal{U}$ denotes the support of $U$. The definition for the space
$\mathcal{L}^{q}(F_{V})$ is similar.
Now we define a linear operator
$\mathcal{L}_{V|U}:\mathcal{L}^{p}(F_{U})\to\mathcal{L}^{q}(F_{V}),$ (5)
where $p,q\geq 1$. Specifically, for any $g\in\mathcal{L}^{p}(F_{U})$, we have
$\mathcal{L}_{V|U}g=\int_{\mathcal{U}}f_{V|U}(v|u)g(u)\mathrm{d}u\in\mathcal{L}^{q}(F_{V}),$
(6)
where the function $f_{V|U}$ is called the kernel of the linear operator
$\mathcal{L}_{V|U}$. This expression is particularly useful when multiple
linear operators are present, since we do not need to introduce new notations
for each involved operator.
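As a concrete illustration, discretizing $U$ and $V$ on grids turns $\mathcal{L}_{V|U}$ into a kernel matrix acting on vectors of function values; the Gaussian conditional density below is an arbitrary example. After discretization, the one-to-one (injectivity) requirements in the assumptions that follow correspond to this matrix having strictly positive singular values.

```python
# Discretized linear operator: (L_{V|U} g)(v) = integral of f_{V|U}(v|u) g(u) du.
import numpy as np

u = np.linspace(-4, 4, 201); du = u[1] - u[0]
v = np.linspace(-4, 4, 201)

def f_V_given_U(vv, uu):
    """Illustrative operator kernel: V | U = u ~ N(0.5 u, 1)."""
    return np.exp(-0.5 * (vv - 0.5 * uu) ** 2) / np.sqrt(2 * np.pi)

K = f_V_given_U(v[:, None], u[None, :])   # kernel matrix f(v_i | u_j)
L = K * du                                # quadrature weights -> operator matrix

g = np.exp(-u**2)                         # some g in L^p(F_U)
Lg = L @ g                                # (L_{V|U} g) evaluated on the v-grid

s = np.linalg.svd(L, compute_uv=False)    # injectivity <-> all s > 0;
print(s.min(), s.max())                   # fast decay signals ill-posed inversion
```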
###### Assumption 3.
There exist variables $Y_{it}$ such that
1. 1.
For any $y_{t}$ and $\widetilde{c}_{t-1}$, there exist $y_{t-1}$ and
$\widetilde{c}_{t-2}$ and a neighborhood $\mathcal{N}^{r}$ around
$(y_{t},\widetilde{c}_{t-1},y_{t-1},\widetilde{c}_{t-2})$ such that, for any
$(y_{t}^{\prime},\tilde{c}_{t-1}^{\prime},y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime})\in\mathcal{N}^{r}$,
the linear operator
$\mathcal{L}_{Y_{t-2},y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime},y_{t}^{\prime},\tilde{c}_{t-1}^{\prime},Y_{t+1}}$
is one-to-one.
2. 2.
For any $y_{t}$ and $\tilde{c}_{t-1}$, the linear operator
$\mathcal{L}_{Y_{t+1}|y_{t},\tilde{c}_{t-1},U_{t-1},V_{t}}$ is one-to-one.
3. 3.
For any $y_{t-1}$ and $\tilde{c}_{t-2}$, the linear operator
$\mathcal{L}_{Y_{t-2},y_{t-1},\tilde{c}_{t-2},Y_{t}}$ is one-to-one.
###### Assumption 4.
1. 1.
The characteristic functions of $(U_{i1},\cdots,U_{iT})$ and
$(V_{i1},\cdots,V_{iT})$ do not vanish on the real line.
2. 2.
The characteristic functions of $(U_{i1},\cdots,U_{iT})$ and
$(V_{i1},\cdots,V_{iT})$ are absolutely integrable.
Assumption 4.1) is commonly made to achieve nonparametric identification (see
[6]). For univariate distributions, this assumption rules out certain families
of distributions, e.g., truncated normal, symmetric uniform and many discrete
distributions. Assumption 4.2) is made to facilitate the deconvolution
argument and also implies that the joint distributions of
$(U_{i1},\cdots,U_{iT})$ and $(V_{i1},\cdots,V_{iT})$ exist.
To avoid cluttered notation, we omit the subscript $i$ where no confusion
arises. In the following derivations, we define
$\widetilde{C}_{t-1}:=\omega(C_{t-1})$.
###### Assumption 5 (Uniqueness of spectral decomposition).
For any $(y_{t},\widetilde{c}_{t-1})$ and any
$(u_{t-1},v_{t})\neq(u_{t-1}^{\prime},v_{t}^{\prime})$, there exist
$(y_{t-1},\widetilde{c}_{t-2})$ and a corresponding neighborhood
$\mathcal{N}^{r}$ satisfying Assumption 3.1), such that for some
$(y_{t}^{\prime},\tilde{c}_{t-1}^{\prime},y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime})\in\mathcal{N}^{r}$
with $(y_{t}^{\prime},\tilde{c}_{t-1}^{\prime})\neq(y_{t},\tilde{c}_{t-1})$ and
$(y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime})\neq(y_{t-1},\tilde{c}_{t-2})$:
$0<k(y_{t},\tilde{c}_{t-1},y_{t}^{\prime},\tilde{c}_{t-1}^{\prime},y_{t-1},\tilde{c}_{t-2},y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime};u_{t-1},v_{t})<C<\infty$
and
$k(y_{t},\tilde{c}_{t-1},y_{t}^{\prime},\tilde{c}_{t-1}^{\prime},y_{t-1},\tilde{c}_{t-2},y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime};u_{t-1},v_{t})\neq k(y_{t},\tilde{c}_{t-1},y_{t}^{\prime},\tilde{c}_{t-1}^{\prime},y_{t-1},\tilde{c}_{t-2},y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime};u_{t-1}^{\prime},v_{t}^{\prime}),$
where
$k(y_{t},\tilde{c}_{t-1},y_{t}^{\prime},\tilde{c}_{t-1}^{\prime},y_{t-1},\tilde{c}_{t-2},y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime};u_{t-1},v_{t})=\frac{f(y_{t},\tilde{c}_{t-1}|y_{t-1},\tilde{c}_{t-2},u_{t-1},v_{t})\,f(y_{t}^{\prime},\tilde{c}_{t-1}^{\prime}|y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime},u_{t-1},v_{t})}{f(y_{t}^{\prime},\tilde{c}_{t-1}^{\prime}|y_{t-1},\tilde{c}_{t-2},u_{t-1},v_{t})\,f(y_{t},\tilde{c}_{t-1}|y_{t-1}^{\prime},\tilde{c}_{t-2}^{\prime},u_{t-1},v_{t})}.$
###### Assumption 6 (normalization).
The Markov kernels are normalized by $\mathbb{E}[U_{t+1}|U_{t}]=U_{t}$ and
$\mathbb{E}[V_{t+1}|V_{t}]=0$.
In the eigenfunctions $f(y_{t+1}|y_{t},\tilde{c}_{t-1},u_{t-1},v_{t})$, both
$u_{t-1}$ and $v_{t}$ are unobserved and continuously distributed. Assumption
6 is made to differentiate and identify the two components.
###### Assumption 7 (Stationarity).
For any $2\leq t\leq T$, the Markov kernels are time-invariant, i.e.,
$f(Y_{t},\widetilde{C}_{t-1},U_{t-1},V_{t}|Y_{t-1},\widetilde{C}_{t-2},U_{t-2},V_{t-1})=f(Y_{3},\widetilde{C}_{2},U_{2},V_{3}|Y_{2},\widetilde{C}_{1},U_{1},V_{2}).$
Assumption 7 is not necessary for identification of the Markov density, but it
eases our derivations. As the next section shows, only five periods of data
are sufficient to achieve nonparametric identification.
### II-B Identification
Based on the assumptions made in the previous section, identification can be
accomplished by applying Theorem 9 from [11] in dynamic settings. This
strategy can be better understood in Figure 1, where the dependence structures
and dynamic exclusion restrictions can be easily visualized.
Figure 1: Graphical illustration of earnings dynamics
We now state the main identification theorem.
###### Theorem 1 (Identification).
Under Assumption 1 - Assumption 7, the density
$f(Y_{t+1},\widetilde{C}_{t},Y_{t},\widetilde{C}_{t-1},Y_{t-1},\widetilde{C}_{t-2},Y_{t-2},\widetilde{C}_{t-3})$
for any $t\in\left\\{4,\cdots,T-1\right\\}$ uniquely determines the densities
$f(Y_{t},\widetilde{C}_{t-1},U_{t-1},V_{t}|Y_{t-1},\widetilde{C}_{t-2},U_{t-2},V_{t-1})$.
Theorem 1 implies that the Markov kernels of interest can be identified via
basic probability rules, Bayes' rule, and the deconvolution technique.
###### Corollary 1.
Under Assumption 1 - Assumption 7, the Markov kernels $f_{V_{t}|V_{t-1}}$,
$f_{U_{t}|U_{t-1}}$ and marginal distributions $f_{U_{t}}$ and $f_{V_{t}}$ are
uniquely identified, for $t=4,\dots,T-1$.
### II-C Estimation
We introduce a modified stochastic EM algorithm (MSEM) to estimate this HMM;
the stochastic EM algorithm was originally proposed by [10]. The MSEM yields a
much faster implementation of the estimation by replacing the likelihood with
the objective functions of quantile regression models. It is similar to the
algorithm presented in [1]; the difference is that their paper involves only
one state variable. Specifically, for any $\tau\in(0,1)$ we employ the
following estimating equations:
$U_{it}=\sum_{k=0}^{K_{1}}a_{k}^{H}(\tau)\phi_{k}(U_{i,t-1},\mathrm{age}_{it}),\qquad V_{it}=\sum_{k=0}^{K_{2}}a_{k}^{Q}(\tau)\phi_{k}(V_{i,t-1},\mathrm{age}_{it}),$
$U_{i1}=\sum_{k=0}^{K_{3}}a_{k}^{H_{1}}(\tau)\phi_{k}(\mathrm{age}_{i1}),\qquad V_{i1}=\sum_{k=0}^{K_{4}}a_{k}^{Q_{1}}(\tau)\phi_{k}(\mathrm{age}_{i1}),$
where the $\phi_{k}$ are Hermite polynomials. We select different orders of
polynomials for the four equations to maximize the likelihood. The quantile-
based estimation strategy provides a flexible specification of the Markov
kernels. [2] applies this strategy to estimate the effect of mothers' smoking
during pregnancy on children's birthweights.
Another advantage of using quantile-based estimation is that the original
nonparametric estimation problem is reduced to estimating a finite number of
parameters, i.e., the coefficients of the Hermite polynomials. We discuss
$U_{it}$ as an example: the functions $a_{k}^{H}(\tau)$ are modeled as
piecewise-polynomial interpolating splines on equi-length intervals
$[\tau_{1},\tau_{2}],[\tau_{2},\tau_{3}],\cdots,[\tau_{I-1},\tau_{I}]$ that
partition the unit interval $(0,1)$. In other words, we need to estimate
$a_{k}^{H}(\tau)$ for each interval of $\tau$ and $k$. Additionally, the
objective function of quantile regressions can be used as a surrogate
likelihood. Since it is a convex function, the implementation can be fast.
Once those $a_{k}^{H}(\tau)$'s are obtained, the estimation of $U_{it}$ is
complete.
We again take $U_{it}$ as an example to illustrate the MSEM algorithm. We
start with an initial value $\widehat{\theta}^{(0)}$ for the parameter vector.
Iteration $s$ then repeats the following two steps until the sequence
$\widehat{\theta}^{(s)}$ converges:
* •
_Stochastic E-step:_ Draw $U_{i}^{(m)}=(U_{i1}^{(m)},\cdots,U_{iT}^{(m)})$ for
$m=1,\cdots,M$ from $f_{i}(\cdot;\widehat{\theta}^{(s)})$.
* •
_M-step:_ Compute
$\widehat{\theta}^{(s+1)}=\operatorname*{arg\,min}_{\theta}\sum_{i=1}^{N}\sum_{m=1}^{M}R(y_{i},U_{i}^{(m)};\theta),$
where $R(\cdot)$ is the surrogate likelihood, i.e., the objective function of
the piecewise quantile regressions.

In the E-step, we use a random-walk Metropolis-Hastings algorithm to draw the
$U_{i}^{(m)}$. The M-step consists of a number of quantile regressions: for
each $\ell$, the parameters $a_{k}^{H}(\tau_{\ell})$ are updated by solving
$\min_{(a_{0\ell}^{H},\cdots,a_{K\ell}^{H})}\sum_{i=1}^{N}\sum_{t=2}^{T}\sum_{m=1}^{M}\rho_{\tau_{\ell}}\left(U_{it}^{(m)}-\sum_{k=0}^{K}a_{k\ell}^{H}\phi_{k}\left(U_{i,t-1}^{(m)},\textsf{age}_{it}\right)\right),$
where $\rho_{\tau}(u)=u(\tau-\bm{1}(u\leq 0))$ is the check function of
standard quantile regression, first introduced by [8], and $\bm{1}(\cdot)$
denotes the indicator function.
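A minimal sketch of one such update, using Hermite polynomial regressors and an off-the-shelf quantile regression routine, is given below. The univariate specification (dropping the age argument) and the synthetic stand-ins for the imputed draws $U^{(m)}$ are simplifying assumptions.

```python
# One M-step quantile-regression update for a single quantile knot tau_l.
import numpy as np
import statsmodels.api as sm
from numpy.polynomial.hermite_e import hermevander

def rho(u, tau):
    """Check function rho_tau(u) = u * (tau - 1{u <= 0})."""
    return u * (tau - (u <= 0))

rng = np.random.default_rng(2)
U_prev = rng.normal(size=5000)                 # stand-ins for E-step draws U^{(m)}
U_curr = 0.9 * U_prev + 0.3 * rng.normal(size=5000)

K, tau = 3, 0.5
X = hermevander(U_prev, K)                     # Hermite regressors phi_0..phi_K
fit = sm.QuantReg(U_curr, X).fit(q=tau)        # minimizes the check loss
a_hat = fit.params                             # estimates of a_k^H(tau)
print(rho(U_curr - X @ a_hat, tau).mean())     # attained surrogate objective
```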
[10] examined the statistical properties of the stochastic EM algorithm in the
likelihood setting, provided conditions under which the Markov chain
$\widehat{\theta}^{(s)}$ is ergodic, and outlined the asymptotic distribution
of $\widehat{\theta}$. [1] characterized the asymptotic distribution of
$\widehat{\theta}$ in a manner that aligns with our model, specifically when
the surrogate likelihood is used in the M-step.
The MSEM algorithm can be summarized as in the sketch below.
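The sketch is a structural outline only: the Gaussian AR(1) complete-data density and the least-squares M-step are simple stand-ins for the surrogate quantile-regression likelihood, and a production implementation would add burn-in, convergence checks, and the $V$ component.

```python
# Skeleton of the modified stochastic EM (MSEM) loop for the U component.
import numpy as np

rng = np.random.default_rng(3)
N, T, M = 50, 8, 5
y = rng.normal(size=(N, T))                  # toy log-earnings residuals

def log_density(U, y_i, theta):
    """Placeholder complete-data log-density (Gaussian AR(1) + noise)."""
    rho_, s_u, s_v = theta
    lp = -0.5 * np.sum((U[1:] - rho_ * U[:-1]) ** 2) / s_u ** 2
    return lp - 0.5 * np.sum((y_i - U) ** 2) / s_v ** 2

def e_step(y_i, theta, step=0.3):
    """Stochastic E-step: random-walk Metropolis-Hastings draws of U_i."""
    U, draws = np.zeros(T), []
    for _ in range(M):
        prop = U + step * rng.normal(size=T)
        if np.log(rng.uniform()) < log_density(prop, y_i, theta) - log_density(U, y_i, theta):
            U = prop                         # accept the proposal
        draws.append(U.copy())
    return draws

def m_step(all_draws):
    """Stand-in M-step; the paper instead runs a battery of quantile regressions."""
    num = sum(np.sum(U[1:] * U[:-1]) for U in all_draws)
    den = sum(np.sum(U[:-1] ** 2) for U in all_draws)
    return (num / den, 0.3, 0.5)

theta = (0.5, 0.3, 0.5)                      # initial theta^(0)
for s in range(20):                          # iterate to convergence in practice
    all_draws = [d for y_i in y for d in e_step(y_i, theta)]
    theta = m_step(all_draws)
print(theta)
```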
## III Results and Discussions
### III-A Data
As the longest-running household panel survey in the world, the PSID contains
a large amount of US data spanning more than 50 years. The dataset includes a
wide range of variables on income, employment, education, and economic,
social, and health-related factors, among many other aspects of life, for each
individual and their family members. However, the size, variety, and
complexity of the dataset make it challenging to analyze with traditional
statistical methods.
The second challenge comes from the longitudinal nature of the survey: it
follows the same individuals and families over time. This presents unique
difficulties for analysis, such as handling missing data, attrition, and
changes in the variables of interest over time.
The third challenge is data quality. The quality of the PSID can vary over time, as
changes in survey methodology or sample composition can affect the accuracy
and reliability of the data. This requires careful attention to data cleaning
and quality control procedures.
Our study draws on the PSID as our primary data source. To ensure high quality
data and better compare our empirical findings with other literature, our
sample selection procedure and data preprocessing mainly follow the works of
[3, 4].
### III-B Empirical Findings
#### III-B1 Densities and Moments
Figure 2 illustrates the marginal distributions of the persistent and
transitory earnings components at the mean age. The persistent component
$U_{it}$ displays small deviations from Gaussianity. However, the marginal
distribution of $V_{it}$ provides strong evidence to reject Gaussianity owing
to its high kurtosis and fat tails. It is worth noting that the density of
$V_{it}$ in our model is less spiky than that in [1]. A possible explanation
for this difference could be the mutual dependence structure of $V_{it}$,
which is governed by its first-order Markovian property, whereas the
transitory shocks are assumed to be mutually independent across $t$ in their
paper. In Figure 3, we report the conditional skewness for $\tau=11/12$, for
both the $U$ component and the $V$ component. The two panels show a similar
pattern: $U_{it}$ ($V_{it}$) is positively skewed for low values of
$U_{i,t-1}$ ($V_{i,t-1}$), and negatively skewed for high values of
$U_{i,t-1}$ ($V_{i,t-1}$).
Figure 2: Marginal distributions of persistent and transitory earnings
components
Figure 3: Conditional skewness of $U$ component and $V$ component.
#### III-B2 Nonlinear persistence
This experiment examines the marginal effects of persistent and transitory
shocks. For linear specifications of earnings dynamics, these marginal effects
are constant by design. In the canonical model of earnings dynamics, for
example, where the persistent component follows a random walk, the marginal
effect is $1$ regardless of $U_{i,t-1}$ and $\tau$. In contrast, our model allows the
persistence of $U_{i,t-1}$ to depend on the magnitude and direction of the
shock. As a result, the persistence of a shock to $U_{i,t-1}$ depends on the
size and sign of current and future shocks. In particular, our model enables
specific shocks to erase the memory of past shocks. Furthermore, the
interaction between the shock $\eta_{it}$ and the lagged persistent component
$U_{i,t-1}$ is a central feature of our nonlinear approach. We then estimate
the earnings model, and, given the estimated parameters, we simulate the
model. Figure 5 shows that our nonlinear model reproduces the patterns of
nonlinear persistence well.
Figure 4 indicates the presence of nonlinear persistence, which depends on
both the percentile of past earnings $(\tau_{\textsf{init}})$ and the
percentile of the quantile innovation $(\tau_{\textsf{shock}})$. Figure 6 then
shows the estimated persistence of the earnings component $U_{it}$.
Specifically, the graph shows the marginal effects, evaluated at percentiles
$\tau_{\textsf{init}}$ and $\tau_{\textsf{shock}}$ and at the mean age in the
sample. Persistence in $U$'s is higher than persistence in log-earnings
residuals, consistent with the fact that Figure 6 is net of transitory
shocks. One observation that sets this study apart from [1] is that the
persistence in Figure 6 can exceed 1: for high-earnings households hit by
good shocks and low-earnings households hit by bad shocks, persistence is even
above 1.5, with the persistence of the latter higher than that of the former.
Figures 7 and 8 demonstrate that the persistence in $V$'s is generally lower
in magnitude than that in $U$'s. One notable feature is that when high-earnings
households are hit by bad shocks and low-earnings households are hit by good
shocks, the persistence can be negative. The high degree of nonlinearity
displayed here strongly rejects the hypothesis that $V_{t}$ is independent
across $t$; if it were, its nonlinear persistence measures would be constant
in $t$.
Figure 4: Estimates of the average derivative of the conditional quantile
function of log-earnings residuals $y_{it}$ given $y_{i,t-1}$, with respect to
$y_{i,t-1}$, in the PSID.
Figure 5: Estimates of the average derivative of the conditional quantile
function in the simulated model.
Figure 6: Estimates of the average derivative of the conditional quantile
function of the persistent component $U$.
Figure 7: Estimates of the average derivative of the conditional quantile
function of the transitory component $V$.
Figure 8: Estimates of the average derivative of the conditional quantile
function of the transitory component $V$ in the simulated model.
#### III-B3 ARCH effects
Figure 9 presents estimates of log-earnings residuals growth at various
horizons, from 2 to 8 years. All of them suggest the presence of ARCH effects,
which is consistent with findings in the existing literature, such as [9]. The
data also reveal that log-earnings growth is non-Gaussian, displaying negative
skewness and high kurtosis. [7] finds similar features in U.S. administrative
data. [1] further highlights that the skewness and excess kurtosis of
log-earnings growth at long horizons are primarily due to the non-Gaussianity
of the transitory component.
Figure 9: Densities of log-earnings growth at various horizons.
## IV Conclusion
We develop a nonparametric identification strategy for modeling earnings
dynamics, differentiating the two unobserved components based on their
distinct impact on household consumption. We also propose a modified
stochastic EM algorithm for estimating this model. The identification strategy
relies on the assumption that several linear operators are one-to-one.
In analyzing the PSID, the empirical results reveal notable nonlinearities in
both the persistent and transitory components. Specifically, substantial
nonlinear persistence and conditional skewness are observed in both
components. These findings suggest that the effect of earnings shocks on a
household depends on both the history of past shocks and the household's past
relative wealth. In particular, persistence is higher for high-earnings
households hit by good shocks and for low-earnings households hit by bad
shocks, while it is lower for high-earnings households hit by bad shocks and
for low-earnings households hit by good shocks. These features align with
patterns in the PSID that previous earnings dynamics models cannot capture. We
also find other features, such as ARCH effects, that have been documented
elsewhere in the literature.
## Acknowledgment
The authors would like to thank the University of Michigan for providing the
Panel Study of Income Dynamics (PSID) data, which was essential to the success
of our research. We are also deeply grateful to Professor Roger Koenker for
his valuable suggestions and clarifications on issues about quantile
regression models.
## References
* [1] Arellano, M., Blundell, R., and Bonhomme, S. "Earnings and consumption dynamics: a nonlinear panel data framework." Econometrica 85(3), 2017: 693-734.
* [2] Arellano, M., and Bonhomme, S. "Nonlinear panel data estimation via quantile regressions." Econometrics Journal 19(3), 2016: C61-C94.
* [3] Blundell, R., Pistaferri, L., and Preston, I. "Consumption inequality and partial insurance." American Economic Review 98(5), 2008: 1887-1921.
* [4] Blundell, R., Pistaferri, L., and Saporta-Eksten, I. "Consumption inequality and family labor supply." American Economic Review 106(2), 2016: 387-435.
* [5] Blundell, R., Pistaferri, L., and Saporta-Eksten, I. "Children, time allocation, and consumption insurance." Journal of Political Economy 126(S1), 2018: S73-S115.
* [6] D'Haultfoeuille, X. "On the completeness condition in nonparametric instrumental problems." Econometric Theory 27(3), 2011: 460-471.
* [7] Guvenen, F., Karahan, F., Ozkan, S., and Song, J. "What do data on millions of US workers reveal about life-cycle earnings risk?" NBER Working Paper, 2015.
* [8] Koenker, R., and Bassett, G. "Regression quantiles." Econometrica 46(1), 1978: 33-50.
* [9] Meghir, C., and Pistaferri, L. "Income variance dynamics and heterogeneity." Econometrica 72(1), 2004: 1-32.
* [10] Nielsen, S. F. "The stochastic EM algorithm: estimation and asymptotic results." Bernoulli 6(3), 2000: 457-489.
* [11] Schennach, S. "Measurement systems." Journal of Economic Literature 60(4), 2022: 1223-1263.
# Physically Compatible 3D Object
Modeling from a Single Image
Minghao Guo (MIT CSAIL), Bohan Wang, Pingchuan Ma (MIT CSAIL), Tianyuan Zhang
(MIT CSAIL), Crystal Elaine Owens (MIT CSAIL), Chuang Gan (UMass Amherst,
MIT-IBM Watson AI Lab), Joshua B. Tenenbaum (MIT CSAIL, MIT BCS), Kaiming He
(MIT CSAIL), Wojciech Matusik (MIT CSAIL)
###### Abstract
We present a computational framework that transforms single images into 3D
physical objects. The visual geometry of a physical object in an image is
determined by three orthogonal attributes: mechanical properties, external
forces, and rest-shape geometry. Existing single-view 3D reconstruction
methods often overlook this underlying composition, presuming rigidity or
neglecting external forces. Consequently, the reconstructed objects fail to
withstand real-world physical forces, resulting in instability or undesirable
deformation – diverging from their intended designs as depicted in the image.
Our optimization framework addresses this by embedding physical compatibility
into the reconstruction process. We explicitly decompose the three physical
attributes and link them through static equilibrium, which serves as a hard
constraint, ensuring that the optimized physical shapes exhibit desired
physical behaviors. Evaluations on a dataset collected from Objaverse
demonstrate that our framework consistently enhances the physical realism of
3D models over existing methods. The utility of our framework extends to
practical applications in dynamic simulations and 3D printing, where adherence
to physical compatibility is paramount.
* Corresponding author.
Figure 1: Existing methods for single-view reconstruction often result in
objects that, when subjected to real-world physical forces (such as gravity)
and user-required mechanical materials, exhibit problematic behaviors such as
toppling over (top left) and undesirable deformation (top right), diverging
from their intended depiction in the input images. In contrast, our approach
produces physical objects that maintain stability (bottom left) and mirror the
objects’ static equilibrium state captured in the input images (bottom right).
## 1 Introduction
The field of single-image 3D shape modeling has experienced significant
advancements over the past years, largely propelled by advances in single-view
reconstruction techniques. These methods, ranging from generating multi-view
consistent images for per-scene 3D reconstruction [22, 23, 21, 24, 30, 20], to
employing large reconstruction models (LRMs) for feedforward inference [12,
39, 42, 48, 45, 38], have enhanced the geometric quality and visual fidelity
of the 3D shapes to unprecedented levels.
However, reconstructing a 3D shape from an image often aims to be beyond a
mere visualization. These generated objects find applications in virtual
environments such as filming and gaming, as well as in tangible fields like
industrial design and engineering. Despite their diverse applications, a
common oversight in many current single-view reconstruction methods is the
neglect of physical principles. As shown in the top row of Fig. 1, when
subjected to real-world physics such as gravity, the 3D objects produced by
these techniques exhibit issues such as instability and undesired
deformation, diverging from their depiction in the input images. Such
inconsistency can significantly undermine the practical utility of the models,
as they fail to meet the functional and aesthetic expectations set by the
original image.
Fundamentally, an image is more than a visual representation of an object: It
captures a physical snapshot of the object in a state of static equilibrium,
under the influence of real-world forces. In this context, the geometry seen
in an image is determined by three orthogonal attributes: _mechanical
properties_ , _external forces_ , and _rest-shape geometry_. As shown in the
inset figure, these attributes collectively model the spectrum of potential
static configurations that a physical object might adopt. Reconstructing such
an object from an image is essentially an ill-posed problem, since multiple
combinations of these attributes can result in identical static geometry.
Current methods, however, often overlook this underlying composition; they
typically assume objects are rigid or neglect the impact of external forces.
The reconstructed objects thus merely replicate the visual geometry without
considering the three physical attributes.
To bridge this gap, we explicitly decompose these attributes for
reconstructing a physical object from a single image. Our framework
holistically takes mechanical properties and external forces as predefined
inputs, reflecting typical user specifications in real-world applications like
3D printing and simulations. The output is the rest-shape geometry tailored to
these inputs. These attributes are integrated through the principles of static
equilibrium physics. This explicit decomposition imposes two stringent
physical constraints in object modeling: static equilibrium is enforced as a
_hard constraint_ , and the physical object must conform to user-specified
material properties and external forces. The resulting physical objects are
stable, robust under real-world physics, and high-fidelity replicas of the
objects depicted in the input images, as shown in the bottom row of Fig. 1.
More specifically, we propose _physical compatibility_ optimization, which is
a physically constrained optimization with rest-shape geometry as the
variable. In this setup, the objective is for the modeled physical object to
exhibit desired behaviors, such as matching the geometry depicted in the input
image under external forces and maintaining stability under gravity. The
constraint is the equation of static equilibrium simulation, ensuring that
during optimization, the physical object remains in the equilibrium state,
with internal forces generated by deformation from the rest shape balancing
the external forces. We parameterize the rest-shape geometry using a plastic
deformation field and solve this hard-constrained optimization problem by
using implicit differentiation with gradient descent.
For evaluation, we introduce five metrics designed to comprehensively assess
the physical compatibility of the modeled 3D objects under simulation. These
metrics include image loss between the rendered image of the modeled physical
object and the input image, stability under gravity, as well as measures from
finite element analysis, such as integrity and structural robustness. Our
framework’s versatility is demonstrated by its integration with five distinct
single-view reconstruction methods, each employing unique geometry
representations. Results on a dataset collected from Objaverse [9], consisting
of $100$ shapes, show that our framework consistently produces 3D objects with
enhanced physical compatibility. Furthermore, we demonstrate the practical
utility of our framework through applications in dynamic simulations and 3D
printing fabrication.
#### Related work.
Our method is mainly related to: 1) single-view 3D reconstruction, where our
work emphasizes the integration of physical modeling into the reconstruction
process; 2) physically based 3D modeling, where we incorporate physical
principles as hard constraints within the reconstruction process; and 3)
fabrication-aware shape design, where our work directly constructs physical
objects from a single input image rather than relying on manual creation of
the initial geometry. A comprehensive discussion of related work is provided
in Appendix G.
## 2 Approach
Figure 2: Overall pipeline. Given predefined mechanical properties and
external forces, our pipeline optimizes the rest-shape geometry to ensure that
the shape, when in a state of static equilibrium, aligns with the target image
and meets stability criteria. We visualize the stress distribution of the
static geometry using a colored heat map, illustrating the spatially varying
deformation of the physical object under static equilibrium.
Our objective is to create 3D objects from a single image that are physically
compatible, ensuring that they align with the input image in the static
equilibrium state while also meeting the stability requirements. Governed by
universal physical principles, the physical behavior of an object is
determined by its mechanical properties, external forces, and rest-shape
geometry. Our framework treats the rest-shape geometry as the optimization
variable, assuming that the mechanical properties and external forces are
predefined as inputs. Fig. 2 illustrates the overall pipeline.
### 2.1 Formulation of Physical Compatibility
In our approach, we treat the entity depicted in the input image as a solid
object. We employ the Finite Element Method (FEM) for robust solid simulation. The
object is represented by a volumetric mesh, denoted as
$\mathcal{M}=(\mathbf{X},\mathbf{T})$. Here, $\mathbf{X}\in\mathbb{R}^{3N}$
represents the 3D positions of the vertices, with $N$ denoting the total
number of vertices. $\mathbf{T}\in\mathbb{N}^{Z\times K}$ describes the mesh
connectivity, where $Z$ represents the total number of elements and $K$
indicates the number of vertices per element. The mesh in its _rest-shape
geometry_ , which is the state without any internal or external forces
applied, is represented as
$\mathcal{M}_{\mathrm{rest}}=(\mathbf{X}_{\mathrm{rest}},\mathbf{T})$. The
input image depicts the _static geometry_, which is the deformed geometry of
the object under static equilibrium (although our implementation employs
_quasi-static equilibrium_, we use the term _static equilibrium_ across the
paper for consistency), denoted as
$\mathcal{M}_{\mathrm{static}}=(\mathbf{x}_{\mathrm{static}},\mathbf{T})$. In
accordance with Newton’s laws, $\mathbf{x}_{\mathrm{static}}$ adheres to the
following equation:
$\mathbf{f}_{\mathrm{int}}(\mathbf{x}_{\mathrm{static}},\mathbf{X}_{\mathrm{rest}};\Theta)=\mathbf{f}_{\mathrm{ext}}(\mathbf{x}_{\mathrm{static}}),$
(1)
where
$\mathbf{f}_{\mathrm{int}}(\cdot,\cdot;\Theta):\mathbb{R}^{3N}\times\mathbb{R}^{3N}\rightarrow\mathbb{R}^{3N}$
denotes the internal forces exerted by deformed objects transitioning from
$\mathbf{X}_{\mathrm{rest}}$ to $\mathbf{x}_{\mathrm{static}}$,
$\mathbf{f}_{\mathrm{ext}}(\cdot):\mathbb{R}^{3N}\rightarrow\mathbb{R}^{3N}$
embodies the external interaction forces such as gravity, and $\Theta$
represents the mechanical material properties, such as the stiffness of the
object. Eq. 1 reveals that $\Theta$ (mechanical properties),
$\mathbf{f}_{\mathrm{ext}}$ (external forces), and
$\mathbf{X}_{\mathrm{rest}}$ (the rest-shape geometry) collectively determine
the static geometry $\mathbf{x}_{\mathrm{static}}$.
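To make Eq. 1 concrete, the sketch below solves the static-equilibrium equation with Newton's method on a deliberately simplified system: a vertical chain of linear springs clamped at its base, with gravity as $\mathbf{f}_{\mathrm{ext}}$. The spring model is an illustrative stand-in for the Neo-Hookean FEM internal forces used in the paper.

```python
# Newton solve of f_int(x, X_rest) = f_ext on a clamped chain of springs.
import numpy as np

n, k, m, g = 6, 200.0, 0.1, 9.8
L = np.ones(n - 1)                           # rest lengths (from X_rest)
x = np.cumsum(np.concatenate([[0.0], L]))    # initial guess: the rest shape

def residual(x):
    ext = np.diff(x) - L                     # spring extensions
    f_int = np.zeros(n)                      # f_int = dE/dx for E = k/2 * ext^2
    f_int[:-1] -= k * ext
    f_int[1:] += k * ext
    f_ext = np.full(n, -m * g)               # gravity (x measured upward)
    return (f_int - f_ext)[1:]               # free DOFs only; node 0 is clamped

def jacobian():
    J = np.zeros((n - 1, n - 1))             # d residual / d x_free (constant here)
    for i in range(n - 1):
        J[i, i] = 2 * k if i < n - 2 else k
        if i > 0:
            J[i, i - 1] = J[i - 1, i] = -k
    return J

for _ in range(10):                          # Newton iterations on f_net(x) = 0
    r = residual(x)
    if np.linalg.norm(r) < 1e-10:
        break
    x[1:] -= np.linalg.solve(jacobian(), r)

print(x)                                     # compressed static geometry x_static
```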
Given $\Theta$ and $\mathbf{f}_{\mathrm{ext}}(\cdot)$, the goal of physically
compatible modeling is to ensure that the rest-shape geometry
$\mathcal{M}_{\mathrm{rest}}$ conforms to given objectives under static
equilibrium. This is formulated as the following optimization problem:
$\min_{\mathbf{X}_{\mathrm{rest}},\mathbf{x}_{\mathrm{static}}}\quad\mathcal{J}(\mathbf{X}_{\mathrm{rest}},\mathbf{x}_{\mathrm{static}})=\mathcal{L}(\mathbf{x}_{\mathrm{static}})+\mathcal{L}_{\mathrm{reg}}(\mathbf{X}_{\mathrm{rest}})\qquad\mathrm{s.t.}\quad\mathbf{f}_{\mathrm{int}}(\mathbf{x}_{\mathrm{static}},\mathbf{X}_{\mathrm{rest}};\Theta)=\mathbf{f}_{\mathrm{ext}}(\mathbf{x}_{\mathrm{static}}).$
(2)
Here, $\mathcal{J}(\mathbf{X}_{\mathrm{rest}},\mathbf{x}_{\mathrm{static}})$
is the objective function, consisting of
$\mathcal{L}(\mathbf{x}_{\mathrm{static}})$, which measures the alignment of
the geometry $\mathbf{x}_{\mathrm{static}}$ with the specified target.
$\mathcal{L}_{\mathrm{reg}}(\mathbf{X}_{\mathrm{rest}})$ regularizes the rest-
shape geometry $\mathbf{X}_{\mathrm{rest}}$, with more details discussed in
Section 2.2.
Within the scope of this work, two tasks for
$\mathcal{L}(\mathbf{x}_{\mathrm{static}})$ are considered: 1)
$\mathbf{x}_{\mathrm{static}}$ replicates the geometry depicted in the input
image; and 2) $\mathbf{x}_{\mathrm{static}}$ maintains stability and
inherently remains upright without toppling. In the first scenario, the loss
function is
$\mathcal{L}(\mathbf{x}_{\mathrm{static}})=\|\mathbf{x}_{\mathrm{static}}-\mathbf{X}_{\mathrm{target}}\|_{2}^{2}$
which measures the point-wise Euclidean distance between the static shape and
the target geometry
$\mathcal{M}_{\mathrm{target}}=(\mathbf{X}_{\mathrm{target}},\mathbf{T})$. In
the second scenario, the loss function is
$\mathcal{L}(\mathbf{x}_{\mathrm{static}})=\|\mathrm{proj}_{z}(\mathcal{C}(\mathbf{x}_{\mathrm{static}}))-\hat{\mathcal{C}}\|$,
where $\mathcal{C}(\cdot)$ computes the center of mass of
$\mathcal{M}_{\mathrm{static}}$, $\mathrm{proj}_{z}(\cdot)$ denotes the
projection of the center onto the $z$-plane in world coordinates, and
$\hat{\mathcal{C}}$ represents the target position for the center of mass to
guarantee stability. Minimization of this function ensures the structural
stability of $\mathcal{M}_{\mathrm{static}}$.
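A minimal sketch of the stability objective, assuming uniform mass density and a world frame whose $z$-axis is vertical, could be:

```python
# Stability loss: distance from the ground-plane projection of the center
# of mass to a target point (e.g., the centroid of the support region).
import numpy as np

def center_of_mass(x, tets, rho=1.0):
    """x: (N, 3) vertex positions; tets: (Z, 4) tetrahedral connectivity."""
    v0, v1, v2, v3 = (x[tets[:, i]] for i in range(4))
    vol = np.abs(np.einsum('ij,ij->i', np.cross(v1 - v0, v2 - v0), v3 - v0)) / 6.0
    mass = rho * vol                       # uniform density assumed
    centroids = (v0 + v1 + v2 + v3) / 4.0  # per-tet centroid
    return (mass[:, None] * centroids).sum(0) / mass.sum()

def stability_loss(x, tets, C_target_xy):
    com_xy = center_of_mass(x, tets)[:2]   # proj_z: drop the vertical coordinate
    return np.linalg.norm(com_xy - C_target_xy)

# Single unit tetrahedron: its center of mass projects to (0.25, 0.25).
x = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
tets = np.array([[0, 1, 2, 3]])
print(stability_loss(x, tets, np.array([0.25, 0.25])))   # -> 0.0
```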
It is crucial to highlight that the variables $\mathbf{X}_{\mathrm{rest}}$ and
$\mathbf{x}_{\mathrm{static}}$ are tightly coupled through a hard constraint
in our problem formulation. This constraint, which ensures that the object
remains static equilibrium, is essential to achieving physical compatibility.
Enforcing this configuration guarantees that the 3D physical object conforms
strictly to external forces such as gravity, thereby ensuring the system
adheres to the inherent physical constraints.
### 2.2 Parameterization of Rest-shape Geometry
To solve the optimization problem defined in Eq. 2.1, one might consider a
straightforward approach by directly treating $\mathbf{X}_{\mathrm{rest}}$ as
the optimization variable. However, this brings challenges in maintaining the
physical validity of the rest-shape geometry, i.e., there shall be no
inversions or inside-out elements. This non-inversion requirement is typically
enforced through nonlinear inequality constraints [11, 34], leading to
intractable optimization. Drawing inspiration from natural modeling processes
[40], we propose a parameterization of $\mathbf{X}_{\mathrm{rest}}$ by
treating it as the result of plastic deformation applied to an initial
configuration. A _plastic deformation_ can transform objects without the
volume preservation constraint [1]. Specifically, we denote the initial
configuration of the rest-shape geometry as
$\mathcal{M}_{\mathrm{init}}=(\mathbf{X}_{\mathrm{init}},\mathbf{T})$.
$\mathbf{X}_{\mathrm{rest}}$ is implicitly parameterized by the plastic
deformation field $\mathbf{F}_{\mathbf{p}}$ as
$\mathbf{X}_{\mathrm{rest}}:=\phi(\mathbf{F}_{\mathbf{p}};\mathbf{X}_{\mathrm{init}}),\quad\text{with}\quad\mathbf{f}_{\mathrm{int}}(\mathbf{X}_{\mathrm{rest}},\mathbf{X}_{\mathrm{init}};\Theta)=\mathbf{0}.$
(3)
Intuitively, this equation suggests that $\mathbf{X}_{\mathrm{rest}}$ results
from applying plastic strain field $\mathbf{F}_{\mathbf{p}}$ to
$\mathbf{X}_{\mathrm{init}}$ without any external forces. The plastic strain
field $\mathbf{F}_{\mathbf{p}}$ is the collection of transformations, with
each transformation is an $\mathbb{R}^{3\times 3}$ matrix applied to each
material point. Throughout this paper, we also represent plastic deformation
in its vector form as $\mathbf{F}_{\mathbf{p}}\in\mathbb{R}^{9Z}$, which
corresponds to the flattened vector form of the $\mathbb{R}^{3\times 3}$
transformation collection. For a detailed explanation of the computation of
$\mathbf{X}_{\mathrm{rest}}$ from $\mathbf{F}_{\mathbf{p}}$ and its
integration into the static equilibrium, we refer the reader to Appendix B.
There are several benefits to using $\mathbf{F_{p}}$ for parameterizing the
rest-shape geometry: it exhibits invariance to translation, which ensures that the
spatial positioning of $\mathbf{X}_{\mathrm{init}}$ does not affect the
deformation outcomes. Moreover, the non-inversion requirement can be
efficiently satisfied by constraining the singular values of
$\mathbf{F}_{\mathbf{p}}$, thereby avoiding the need for complicated
inequality constraints. Appendix B provides a comprehensive analysis of these
advantages.
By substituting Eq. 3, we reformulate the optimization problem in Eq. 2.1 as
follows:
$\min_{\mathbf{F_{p}},\mathbf{x}_{\mathrm{static}}}\quad\mathcal{J}(\mathbf{F_{p}},\mathbf{x}_{\mathrm{static}})=\mathcal{L}(\mathbf{x}_{\mathrm{static}})+\mathcal{L}_{\mathrm{reg}}(\mathbf{F_{p}})\qquad\mathrm{s.t.}\quad\mathbf{f}_{\mathrm{int}}(\mathbf{x}_{\mathrm{static}},\phi(\mathbf{F}_{\mathbf{p}};\mathbf{X}_{\mathrm{init}});\Theta)=\mathbf{f}_{\mathrm{ext}}(\mathbf{x}_{\mathrm{static}}).$
(4)
Here, the optimization variable is $\mathbf{F_{p}}$; the initial geometry
configuration $\mathbf{X}_{\mathrm{init}}$ is treated as a constant.
The regularization term $\mathcal{L}_{\mathrm{reg}}(\mathbf{F_{p}})$ is
defined as the smoothness of plastic deformation using bi-harmonic energy [5],
represented as
$\mathcal{L}_{\mathrm{reg}}(\mathbf{F_{p}})=\|\mathbf{L}\mathbf{F_{p}}\|_{2}^{2}$,
where $\mathbf{L}\in\mathbb{R}^{9Z\times 9Z}$ denotes the graph Laplacian
matrix, encapsulating the connectivity of the volumetric mesh elements.
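A small sketch of this regularizer, assuming elements that share a face are treated as adjacent (one plausible convention for the element graph), is:

```python
# Smoothness regularizer ||L F_p||^2 with a graph Laplacian over tet elements.
import numpy as np
from itertools import combinations
from collections import defaultdict

def element_laplacian(tets):
    """Graph Laplacian of the element adjacency graph (shared-face adjacency)."""
    face_map = defaultdict(list)
    for e, tet in enumerate(tets):
        for face in combinations(sorted(tet), 3):
            face_map[face].append(e)
    Z = len(tets)
    Lap = np.zeros((Z, Z))
    for elems in face_map.values():
        if len(elems) == 2:                 # interior face joins two elements
            i, j = elems
            Lap[i, j] -= 1; Lap[j, i] -= 1
            Lap[i, i] += 1; Lap[j, j] += 1
    return Lap

def reg_loss(F_p, tets):
    """F_p: (Z, 3, 3) plastic deformation per element; returns ||L F_p||^2."""
    LF = np.einsum('ij,jkl->ikl', element_laplacian(tets), F_p)
    return float(np.sum(LF ** 2))

tets = np.array([[0, 1, 2, 3], [0, 1, 2, 4]])   # two tets sharing a face
F_p = np.tile(np.eye(3), (2, 1, 1))             # identical strains everywhere
print(reg_loss(F_p, tets))                      # -> 0.0 (perfectly smooth field)
```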
### 2.3 Implicit Differentiation-based Optimization
Solving the optimization problem in Eq. 2.2 is non-trivial due to its
nonlinear objective and the nonlinear hard constraint. A straightforward
approach is incorporating the constraint directly into the objective as an
additional loss term; however, this method may lead to imperfect satisfaction
of the constraint, which undermines the fundamental goal of ensuring physical
compatibility.
We resort to implicit differentiation, a technique used in sensitivity
analysis [6], to compute the gradient of the objective function $\mathcal{J}$
with respect to the variable $\mathbf{F_{p}}$. This approach effectively
reduces the dimensionality of the optimization variables since we only need to
calculate the gradient with respect to $\mathbf{F_{p}}$ and also ensures that
the gradient direction takes into account the hard constraint. Specifically,
the gradient is computed as follows:
$\displaystyle\frac{\partial\mathcal{J}}{\partial\mathbf{F_{p}}}=-\left(\frac{\partial\mathcal{L}}{\partial\mathbf{x}_{\mathrm{static}}}\right)\left[\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{x}_{\mathrm{static}}}\right]^{-1}\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{F_{p}}}+\frac{\partial\mathcal{L}_{\mathrm{reg}}}{\partial\mathbf{F_{p}}},$
(5)
where
$\mathbf{f}_{\mathrm{net}}=\mathbf{f}_{\mathrm{int}}-\mathbf{f}_{\mathrm{ext}}$
represents the net forces. A comprehensive derivation of this gradient formula
is provided in Appendix C. By utilizing this gradient, the optimization can be
solved using standard optimization tools, such as the Adam optimizer [17].
This facilitates the integration of our method into existing single-view
reconstruction pipelines.
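The structure of Eq. 5 can be checked on a one-dimensional toy problem in which a single spring's rest position plays the role of $\mathbf{F_{p}}$; for vector-valued systems, the inverse in Eq. 5 is applied through an adjoint linear solve rather than formed explicitly. The finite-difference comparison at the end validates the implicit gradient.

```python
# Implicit differentiation (Eq. 5) on a toy 1-DOF static equilibrium.
import numpy as np

k, f_ext, x_target = 10.0, -1.0, 0.3

def solve_equilibrium(p):
    """f_net(x, p) = k (x - p) - f_ext = 0  =>  x_static = p + f_ext / k."""
    return p + f_ext / k

def grad_implicit(p):
    x = solve_equilibrium(p)
    dL_dx = 2.0 * (x - x_target)        # L(x) = (x - x_target)^2
    df_dx = k                           # d f_net / d x_static
    df_dp = -k                          # d f_net / d p  (p ~ rest-shape variable)
    return -dL_dx * (1.0 / df_dx) * df_dp   # Eq. 5 without the regularizer term

p0, h = 0.5, 1e-6
L = lambda p: (solve_equilibrium(p) - x_target) ** 2
print(grad_implicit(p0), (L(p0 + h) - L(p0 - h)) / (2 * h))   # should agree
```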
### 2.4 Implementation Details
Given an input image, we initially utilize off-the-shelf single-view
reconstruction models to obtain the 3D object’s target geometry, ensuring
alignment with the input image. The output of these reconstruction models
varies depending on the geometric representation used. For instance, methods
employing tetrahedral representations, such as TetSphere [11], yields
volumetric meshes that can be directly used as
$\mathcal{M}_{\mathrm{target}}$. Conversely, methods that output surface
meshes [42] or point clouds [38], which are often non-volumetric and typically
non-manifold, require additional processing steps to be suitable for our
computational pipeline. We use TetWild [15], a robust tetrahedral meshing
algorithm, to convert these unstructured outputs into high-quality tetrahedral
meshes, resulting in volumetric mesh $\mathcal{M}_{\mathrm{target}}$. For
initiating the optimization process, we set
$\mathcal{M}_{\mathrm{init}}=\mathcal{M}_{\mathrm{target}}$, assuming that
$\mathcal{M}_{\mathrm{target}}$ is a reasonably good initial approximation for
the optimization. Note that $\mathcal{M}_{\mathrm{init}}$ is not strictly
confined to $\mathcal{M}_{\mathrm{target}}$; any volumetric mesh could
potentially serve as the initial approximation, given the flexibility of
$\mathbf{F_{p}}$ to accommodate spatially varying deformations.
For the material constitutive model, we use isotropic Neo-Hookean material as
detailed in [33]. The mechanical properties $\Theta$, including Young’s
modulus $E$, Poisson’s ratio $\nu$, and mass density $\rho$, are set by users.
These values can be specified directly through numerical input or chosen from
a collection of pre-established material options, such as plastic or rubber.
We consider gravity and fixed attachment forces as options for external
forces. Gravity is always included to reflect its omnipresence in the real
world. The use of fixed attachment forces depends on the specific needs of the
application, for instance, anchoring an object at a designated site. Detailed
formulations for both force types are provided in Appendix F.
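For reference, a sketch of one classical compressible Neo-Hookean energy density is given below; the paper's exact formulation follows its Ref. [33], and the multiplicative elastic/plastic split in the last two lines is a standard convention assumed here for illustration.

```python
# Isotropic Neo-Hookean energy density (one classical compressible variant).
import numpy as np

def lame(E, nu):
    """Lame parameters from Young's modulus E and Poisson's ratio nu."""
    return E / (2 * (1 + nu)), E * nu / ((1 + nu) * (1 - 2 * nu))

def neo_hookean_psi(F, E=1e5, nu=0.45):
    """Energy density for a 3x3 deformation gradient F."""
    mu, lam = lame(E, nu)
    J = np.linalg.det(F)
    I_C = np.trace(F.T @ F)              # first invariant of C = F^T F
    return 0.5 * mu * (I_C - 3.0) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2

print(neo_hookean_psi(np.eye(3)))        # rest state -> zero energy

# With a plastic strain F_p, elastic energy is evaluated at F_e = F F_p^{-1}:
F = np.diag([1.2, 1.0, 1.0])
F_p = np.diag([1.2, 1.0, 1.0])
print(neo_hookean_psi(F @ np.linalg.inv(F_p)))   # -> 0: stretch is fully plastic
```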
## 3 Evaluation
In this section, we present evidence that our approach enhances the physical
compatibility of 3D objects produced using state-of-the-art single-view
reconstruction techniques. We conduct a series of quantitative evaluations
using five metrics (Sec. 3.1) to compare the physical compatibility of shapes
optimized by our framework against those produced by existing methods without
our method (Sec. 3.2). We also provide qualitative comparisons to demonstrate
to the effectiveness of our approach (Sec. 3.3). Furthermore, we explore the
practical applications of our method by illustrating how it enables the
reconstruction of diverse 3D shapes with different material properties from
the same single image, and by demonstrating that our optimized shapes are
readily adaptable for dynamic simulations and fabrication (Sec. 3.4).
### 3.1 Baselines and Evaluation Protocol
Existing metrics for evaluating single-view reconstruction methods primarily
focus on the visual appearance of the objects. Measures such as PSNR and SSIM
are used to assess image fidelity, while chamfer distance and volume IoU
evaluate geometric quality. However, these metrics do not consider the
underlying physics principles that govern the behavior of 3D objects.
Consequently, they are insufficient for evaluating the physical compatibility
of reconstructed shapes, a crucial aspect for applications requiring accurate
physical interactions and structural stability.
Table 1: Quantitative results on four metrics evaluating physical
compatibility. We apply our pipeline to five single-image reconstruction
techniques and assess our metrics on both the initial shapes from these
methods (Baseline) and the optimized shapes from the integration of our
framework with each baseline (Ours). Our method demonstrates quantitative
improvements in mean stress, stability rate, and image fidelity across all
benchmarks. Among all methods, TetSphere integrated with our framework
achieves superior performance across all evaluation metrics. This can be
attributed to the explicit volumetric representation used in TetSphere. The
mean and standard deviation are calculated across all examples for each
method. A higher deviation in Mean Stress suggests a larger variance in
structural thickness and curvature, while a higher deviation in Img. Loss
indicates a larger variance in static shape deformation.
Method | Init. Geo. | $\#$CC. $\downarrow$ | Setting | Mean Stress $\downarrow$ (kPa) | Standable. $\uparrow$ (%) | Img. Loss $\downarrow$
---|---|---|---|---|---|---
Wonder3D | NeuS | 2.54 $\pm$ 2.64 | Baseline | 10.68 $\pm$ 17.47 | 6.9 | 0.073 $\pm$ 0.063
 | | | Ours | 0.45 $\pm$ 0.96 | 72.4 | 0.069 $\pm$ 0.048
LGM | Gaussian splatting | 2.67 $\pm$ 2.13 | Baseline | 1.14 $\pm$ 2.03 | 20.3 | 0.121 $\pm$ 0.091
 | | | Ours | 1.01 $\pm$ 1.34 | 85.9 | 0.116 $\pm$ 0.065
MeshLRM | surface mesh | 1.55 $\pm$ 2.13 | Baseline | 0.54 $\pm$ 1.41 | 28.6 | 0.065 $\pm$ 0.042
 | | | Ours | 0.38 $\pm$ 1.05 | 73.8 | 0.064 $\pm$ 0.042
TripoSR | NeRF | 1.43 $\pm$ 1.12 | Baseline | 0.29 $\pm$ 1.28 | 24.2 | 0.066 $\pm$ 0.047
 | | | Ours | 0.22 $\pm$ 0.94 | 80.6 | 0.059 $\pm$ 0.039
TetSphere | tet-sphere | 1.00 $\pm$ 0.00 | Baseline | 0.22 $\pm$ 0.51 | 32.8 | 0.061 $\pm$ 0.045
 | | | Ours | 0.19 $\pm$ 0.78 | 92.4 | 0.057 $\pm$ 0.040
Figure 3: Quantitative results on fracture rate. We plot the relationship
between the fracture rate and the maximum stress threshold across five single-
image reconstruction methods. The shapes optimized with our framework exhibit
a consistently lower fracture rate compared to those shapes obtained without
our pipeline. MeshLRM and TripoSR feature prevalent thin structures in their
reconstructed shapes, whereas our approach significantly reduces the fracture
rate in both cases.
#### Metrics.
To address this oversight, we draw inspiration from the field of finite
element analysis [2] and introduce five novel metrics specifically designed to
assess the physical compatibility of 3D models comprehensively. These metrics
are tailored to ensure a more thorough evaluation of method performance in
real-world scenarios with rich physics:
* •
Number of Connected Components ($\\#$CC.) evaluates the structural integrity
of the object. Physical objects should not have floating or disconnected
structures, ideally consisting of one single connected component.
* •
Mean Stress calculates the average von Mises stress [26] across all tetrahedra
of all objects. It measures the extent of physical deformation. Under the same
external interactions, higher mean stress indicates a greater likelihood of
fracture and the existence of unrealistic thin structures.
* •
Percentage of Standability (Standable.) assesses whether the object can
maintain stability under gravity, remaining upright without toppling. A
standable object is one that effectively supports itself against gravitational
forces.
* •
Matching loss (Img. Loss) calculates the $l_{1}$ difference between the
rendered image of the object after applying gravity and the input target
image, quantifying the deviation of the physical object from the desired shape
due to physical influences.
* •
Fracture Rate measures the fraction of tetrahedral elements whose stress
exceeds a predefined threshold, potentially leading to fractures. The
resilience of a method against physical stresses is quantified using a
degradation curve, with more physically reliable methods exhibiting a smaller
area under the fracture-rate curve (a computational sketch of these
stress-based metrics follows this list).
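As noted in the last item above, a computational sketch of the stress-based metrics, assuming per-element Cauchy stress tensors from the FEM solve, is:

```python
# Von Mises stress per tetrahedron, plus the mean-stress and fracture-rate metrics.
import numpy as np

def von_mises(sigma):
    """sigma: (Z, 3, 3) Cauchy stress per element -> (Z,) von Mises stress."""
    tr = np.trace(sigma, axis1=1, axis2=2)
    dev = sigma - tr[:, None, None] / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.einsum('zij,zij->z', dev, dev))

def fracture_rate(sigma, threshold):
    """Fraction of elements whose von Mises stress exceeds the threshold."""
    return float(np.mean(von_mises(sigma) > threshold))

# Sanity check: uniaxial stress of 1 kPa has von Mises stress exactly 1 kPa.
sigma = np.zeros((1, 3, 3)); sigma[0, 0, 0] = 1.0
print(von_mises(sigma), fracture_rate(sigma, 0.5))   # -> [1.0] 1.0
```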
Figure 4: Qualitative results on physical compatibility optimization. Left:
Rest shapes optimized using our approach result in static shapes that closely
match the input images when subjected to gravity. In contrast, shapes without
the optimization fail to replicate the geometry in the input image. Right: our
optimization process ensures that the optimized shapes are capable of
supporting themselves, whereas the baseline methods fail to achieve this
stability.
#### Baselines.
We consider five single-view reconstruction baselines in our
evaluation, each associated with a distinct geometry representation: Wonder3D
[23] with NeuS, LGM [38] with Gaussian splatting, MeshLRM [42] with surface
mesh, TripoSR [39] with NeRF triplane, and TetSphere [11] with tetrahedral
spheres. For the baseline results, we used the publicly available inference
code to reconstruct the 3D objects. (For MeshLRM, since the pre-trained model is not publicly available yet, we obtained the reconstructed shapes directly from the authors for use in our study.) To demonstrate the versatility of our
method, we integrated our physical compatibility optimization framework with
all five baseline models and reported the results to ensure a fair comparison.
The implementation details of our framework are provided in Appendix D.
#### Evaluation Datasets.
The evaluation dataset was sourced from Objaverse [9]. We
initially randomly selected approximately $200$ shapes from the categories of
plants, animals, and characters – categories that demand greater physical
compatibility. Single-view images were rendered using the publicly released
code by the authors of Objaverse (https://github.com/allenai/objaverse-rendering). Subsequently, these images were used to reconstruct 3D objects
using the baseline methods mentioned earlier. We filtered out shapes of
extremely poor quality, specifically those with more than $8$ connected
components. This process resulted in a final set of $100$ shapes for detailed
evaluation.
Despite these shapes being a part of the training data for most baseline
methods, our evaluation focuses on assessing the physical compatibility – a
factor overlooked by these methods. The results obtained from this dataset
provide valuable insights and observations on the physical compatibility of
each method, demonstrating the practical effectiveness of our approach.
Figure 5: Ablation study on Young’s modulus. By changing the material
properties, our method can produce various rest-shape geometries (top), which
all result in the same static shapes that match the input image (middle).
Although these static shapes appear identical under static equilibrium, they
exhibit different deformation when subjected to the same compression forces
exerted by the yellow block, attributable to the differences in their material
properties (bottom).
### 3.2 Quantitative Results
Table 1 shows the quantitative results for four out of five metrics evaluated
for both baselines and those integrated with our physical compatibility
optimization. Fig. 3 shows the curve of fracture rate.
Our quantitative analysis yields several observations: 1) The underlying
geometry representation significantly impacts the structural integrity of
reconstructed shapes, as evidenced by the number of connected components
($\#$CC.). LGM, using a point cloud representation, exhibits the poorest
structural integrity, often resulting in floating structures due to its
inability to differentiate the interior from the exterior of a 3D object. In
contrast, TetSphere, with its volumetric representation, maintains the most
integral structure. 2) Both MeshLRM and TripoSR generally produce more
physically stable 3D objects, as indicated by Mean Stress and Standability
(Standable.) metrics. However, they tend to diverge under gravity, as shown by
the Matching Loss metric (Img. Loss), compared to TetSphere. 3) Notably, our
method consistently enhances the physical compatibility performance across all
baselines. The improvement is particularly significant for Wonder3D and
MeshLRM. Wonder3D typically generates multi-view images before reconstructing
the 3D shape, which can lead to thin structures due to inconsistencies across
the views. Similarly, MeshLRM’s reliance on surface meshes can often result in
thin structures. Our method strengthens the physical robustness for both
cases. 4) Our method also enhances structural robustness to fracture, as
demonstrated in Fig. 3. It notably improves the performance of both MeshLRM
and TripoSR in reducing fracture rates.
Figure 6: Applications of physically compatible objects. Left: Our optimized
physical objects are simulation-ready and can be seamlessly integrated into a dynamic simulation pipeline to produce complex dynamics and motions. Right:
Real-world validation using 3D printing shows that shapes optimized using our
method closely replicate the input images, demonstrating the practical
effectiveness of our method in manufacturing.
### 3.3 Qualitative Results
Fig. 4 and additional qualitative results in Appendix A illustrate the effectiveness
of our physical compatibility optimization. Without optimization, the static
shapes behave undesirably under general physical principles: they either sag
excessively under gravity, diverging from the geometry depicted in the input
image, or fail to remain upright, toppling over. Our optimization method
incorporates physical principles to ensure that the optimized rest shapes are
self-supporting and stable, and match the input images under static
equilibrium.
### 3.4 Analysis
#### Ablation study on Young’s Modulus.
We investigate the influence of predefined mechanical material properties,
particularly Young’s modulus, on the optimized rest shapes and their physical
behaviors. Using the same input image, we obtained six optimized rest shapes
with varying Young’s modulus values within our framework with TetSphere. As
shown in Fig. 5, although the optimized rest-shape geometries vary, they all
deform to the same static geometry under the influence of gravity, matching
the input image. Moreover, the physical responses to identical external
forces, such as compression by a box, differ due to the variations in material
properties. These results highlight how the explicit decomposition of physical
attributes in our framework expands the controllability of object modeling,
allowing for diverse physical behaviors under uniform external forces.
#### Application to dynamic simulation.
The immediate output of our method is a simulation-ready rest-shape geometry,
which can be seamlessly integrated into a simulation pipeline to produce
complex dynamics and motions. Fig. 6 (left) and the accompanying video in the
Supplementary Material illustrate three plants modeled using our framework,
demonstrating their behavior under gravity and complex interactions.
Implementation details of this simulation are provided in Appendix F. These
examples underscore the practical utility of our method for generating
physically realistic dynamics and simulations.
#### Application to fabrication.
We further evaluate our method in real world by fabricating three shapes using
3D printing, both with and without optimization. The results, shown in Fig. 6
(right), with detailed implementation procedures available in Appendix E,
demonstrate that the 3D printed shapes align with our computational results.
These real-world experiments demonstrate the practical effectiveness and
validate the physical realism of the objects produced by our method.
## 4 Conclusion
In this work, we introduced physical compatibility optimization for
reconstructing a physical object from a single image. Our method decomposes
three orthogonal attributes governing physical behavior: mechanical
properties, external forces, and rest-shape geometry. Unlike existing methods
that often ignore one or more dimensions, our framework holistically considers
all three factors, allowing for diverse rest-shape geometries from the same
input image by varying object stiffness and external forces. We formulate
physical compatibility optimization as a constrained optimization problem by
integrating static equilibrium as a hard constraint. Our approach produces
physical objects that match the geometry depicted in the input image under
external forces and remain stable under gravity. Both quantitative and
qualitative evaluations demonstrated improvements in physical compatibility
over existing baselines. Our method’s versatility is evident through its
integration with various single-view reconstruction methods and its practical
applications in dynamic simulations and 3D printing.
## References
* Aifantis [1987] E. C. Aifantis. The physics of plastic deformation. _International journal of plasticity_ , 3(3):211–247, 1987.
* Allaire [2007] G. Allaire. _Numerical analysis and optimization: an introduction to mathematical modelling and numerical simulation_. OUP Oxford, 2007.
* Bell et al. [2014] S. Bell, K. Bala, and N. Snavely. Intrinsic images in the wild. _ACM Transactions on Graphics (TOG)_ , 33(4):1–12, 2014.
* Bermano et al. [2017] A. H. Bermano, T. Funkhouser, and S. Rusinkiewicz. State of the art in methods and representations for fabrication-aware design. In _Computer Graphics Forum_ , volume 36, pages 509–535. Wiley Online Library, 2017.
* Botsch and Sorkine [2007] M. Botsch and O. Sorkine. On linear variational surface deformation methods. _IEEE transactions on visualization and computer graphics_ , 14(1):213–230, 2007.
* Burczyński et al. [1997] T. Burczyński, J. Kane, and C. Balakrishna. Comparison of shape design sensitivity analysis formulations via material derivative-adjoint variable and implicit differentiation techniques for 3-d and 2-d curved boundary element. _Computer methods in applied mechanics and engineering_ , 142(1-2):89–109, 1997.
* Charatan et al. [2023] D. Charatan, S. Li, A. Tagliasacchi, and V. Sitzmann. pixelsplat: 3d gaussian splats from image pairs for scalable generalizable 3d reconstruction. _arXiv preprint arXiv:2312.12337_ , 2023.
* Chen et al. [2014] X. Chen, C. Zheng, W. Xu, and K. Zhou. An asymptotic numerical method for inverse elastic shape design. _ACM Transactions on Graphics (TOG)_ , 33(4):1–11, 2014.
* Deitke et al. [2023] M. Deitke, D. Schwenk, J. Salvador, L. Weihs, O. Michel, E. VanderBilt, L. Schmidt, K. Ehsani, A. Kembhavi, and A. Farhadi. Objaverse: A universe of annotated 3d objects. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 13142–13153, 2023.
* Feng et al. [2023] Y. Feng, Y. Shang, X. Li, T. Shao, C. Jiang, and Y. Yang. Pie-nerf: Physics-based interactive elastodynamics with nerf. _arXiv preprint arXiv:2311.13099_ , 2023.
* [11] M. Guo, B. Wang, K. He, and W. Matusik. Tetsphere splatting: Representing high-quality geometry with lagrangian volumetric meshes.
* He and Wang [2023] Z. He and T. Wang. Openlrm: Open-source large reconstruction models. https://github.com/3DTopia/OpenLRM, 2023.
* Hong et al. [2023] Y. Hong, K. Zhang, J. Gu, S. Bi, Y. Zhou, D. Liu, F. Liu, K. Sunkavalli, T. Bui, and H. Tan. LRM: Large reconstruction model for single image to 3D. Nov. 2023.
* Hsu et al. [2022] J. Hsu, N. Truong, C. Yuksel, and K. Wu. A general two-stage initialization for sag-free deformable simulations. _ACM Transactions on Graphics (TOG)_ , 41(4):1–13, 2022.
* Hu et al. [2018] Y. Hu, Q. Zhou, X. Gao, A. Jacobson, D. Zorin, and D. Panozzo. Tetrahedral meshing in the wild. _ACM Trans. Graph._ , 37(4):60:1–60:14, July 2018. ISSN 0730-0301. doi: 10.1145/3197517.3201353. URL http://doi.acm.org/10.1145/3197517.3201353.
* Kerbl et al. [2023] B. Kerbl, G. Kopanas, T. Leimkühler, and G. Drettakis. 3d gaussian splatting for real-time radiance field rendering. _ACM Transactions on Graphics_ , 42(4), 2023.
* Kingma and Ba [2015] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. _ICLR_ , 2015.
* Li et al. [2020] M. Li, Z. Ferguson, T. Schneider, T. R. Langlois, D. Zorin, D. Panozzo, C. Jiang, and D. M. Kaufman. Incremental potential contact: intersection-and inversion-free, large-deformation dynamics. _ACM Trans. Graph._ , 39(4):49, 2020.
* Li et al. [2022] X. Li, Y.-L. Qiao, P. Y. Chen, K. M. Jatavallabhula, M. Lin, C. Jiang, and C. Gan. Pac-nerf: Physics augmented continuum neural radiance fields for geometry-agnostic system identification. 2022.
* Liu et al. [2023a] M. Liu, C. Xu, H. Jin, L. Chen, V. T. Mukund, Z. Xu, and H. Su. One-2-3-45: Any single image to 3D mesh in 45 seconds without Per-Shape optimization. June 2023a.
* Liu et al. [2023b] R. Liu, R. Wu, B. V. Hoorick, P. Tokmakov, S. Zakharov, and C. Vondrick. Zero-1-to-3: Zero-shot one image to 3d object, 2023b.
* Liu et al. [2023c] Y. Liu, C. Lin, Z. Zeng, X. Long, L. Liu, T. Komura, and W. Wang. SyncDreamer: Generating multiview-consistent images from a single-view image. Sept. 2023c.
* Long et al. [2023] X. Long, Y.-C. Guo, C. Lin, Y. Liu, Z. Dou, L. Liu, Y. Ma, S.-H. Zhang, M. Habermann, C. Theobalt, and W. Wang. Wonder3D: Single image to 3D using Cross-Domain diffusion. Oct. 2023.
* Melas-Kyriazi et al. [2023] L. Melas-Kyriazi, C. Rupprecht, I. Laina, and A. Vedaldi. RealFusion: 360° reconstruction of any object from a single image. Feb. 2023.
* Mildenhall et al. [2020] B. Mildenhall, P. P. Srinivasan, M. Tancik, J. T. Barron, R. Ramamoorthi, and R. Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. _ECCV_ , 2020.
* Mises [1913] R. v. Mises. Mechanik der festen körper im plastisch-deformablen zustand. _Nachrichten von der Gesellschaft der Wissenschaften zu Göttingen, Mathematisch-Physikalische Klasse_ , 1913:582–592, 1913.
* Nicolet et al. [2021] B. Nicolet, A. Jacobson, and W. Jakob. Large steps in inverse rendering of geometry. _ACM Transactions on Graphics (TOG)_ , 40(6):1–13, 2021.
* Perlin [2002] K. Perlin. Improving noise. In _Proceedings of the 29th annual conference on Computer graphics and interactive techniques_ , pages 681–682, 2002.
* Poole et al. [2022] B. Poole, A. Jain, J. T. Barron, and B. Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. _arXiv preprint arXiv:2209.14988_ , 2022.
* Qian et al. [2023] G. Qian, J. Mai, A. Hamdi, J. Ren, A. Siarohin, B. Li, H.-Y. Lee, I. Skorokhodov, P. Wonka, S. Tulyakov, and B. Ghanem. Magic123: One image to high-quality 3d object generation using both 2d and 3d diffusion priors. _arXiv preprint arXiv:2306.17843_ , 2023.
* Shue et al. [2023] J. R. Shue, E. R. Chan, R. Po, Z. Ankner, J. Wu, and G. Wetzstein. 3d neural field generation using triplane diffusion. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_ , pages 20875–20886, 2023.
* Sifakis and Barbic [2012] E. Sifakis and J. Barbic. Fem simulation of 3d deformable solids: a practitioner’s guide to theory, discretization and model reduction. In _Acm siggraph 2012 courses_ , pages 1–50. 2012.
* Smith et al. [2018] B. Smith, F. D. Goes, and T. Kim. Stable neo-hookean flesh simulation. _ACM Transactions on Graphics (TOG)_ , 37(2):1–15, 2018.
* Smith and Schaefer [2015] J. Smith and S. Schaefer. Bijective parameterization with free boundaries. _ACM Transactions on Graphics (TOG)_ , 34(4):1–9, 2015.
* Standley et al. [2017] T. Standley, O. Sener, D. Chen, and S. Savarese. image2mass: Estimating the mass of an object from its image. In _Conference on Robot Learning_ , pages 324–333. PMLR, 2017.
* Szymanowicz et al. [2024] S. Szymanowicz, C. Rupprecht, and A. Vedaldi. Splatter image: Ultra-fast single-view 3d reconstruction. 2024.
* Tang et al. [2023] J. Tang, J. Ren, H. Zhou, Z. Liu, and G. Zeng. DreamGaussian: Generative gaussian splatting for efficient 3D content creation. Sept. 2023.
* Tang et al. [2024] J. Tang, Z. Chen, X. Chen, T. Wang, G. Zeng, and Z. Liu. LGM: Large Multi-View gaussian model for High-Resolution 3D content creation. Feb. 2024.
* Tochilkin et al. [2024] D. Tochilkin, D. Pankratz, Z. Liu, Z. Huang, A. Letts, Y. Li, D. Liang, C. Laforte, V. Jampani, and Y.-P. Cao. Triposr: Fast 3d object reconstruction from a single image. _arXiv preprint arXiv:2403.02151_ , 2024.
* Wang et al. [2021a] B. Wang, G. Matcuk, and J. Barbič. Modeling of personalized anatomy using plastic strains. _ACM Transactions on Graphics (TOG)_ , 40(2):1–21, 2021a.
* Wang et al. [2021b] P. Wang, L. Liu, Y. Liu, C. Theobalt, T. Komura, and W. Wang. NeuS: Learning neural implicit surfaces by volume rendering for multi-view reconstruction. June 2021b.
* Wei et al. [2024] X. Wei, K. Zhang, S. Bi, H. Tan, F. Luan, V. Deschaintre, K. Sunkavalli, H. Su, and Z. Xu. Meshlrm: Large reconstruction model for high-quality mesh. _arXiv preprint arXiv:2404.12385_ , 2024.
* Wu et al. [2023] R. Wu, B. Mildenhall, P. Henzler, K. Park, R. Gao, D. Watson, P. P. Srinivasan, D. Verbin, J. T. Barron, B. Poole, et al. Reconfusion: 3d reconstruction with diffusion priors. _arXiv preprint arXiv:2312.02981_ , 2023.
* Xie et al. [2023] T. Xie, Z. Zong, Y. Qiu, X. Li, Y. Feng, Y. Yang, and C. Jiang. Physgaussian: Physics-integrated 3d gaussians for generative dynamics. _arXiv preprint arXiv:2311.12198_ , 2023.
* Xu et al. [2024] Y. Xu, Z. Shi, W. Yifan, H. Chen, C. Yang, S. Peng, Y. Shen, and G. Wetzstein. Grm: Large gaussian reconstruction model for efficient 3d reconstruction and generation. _arXiv preprint arXiv:2403.14621_ , 2024.
* Yu et al. [2021] A. Yu, V. Ye, M. Tancik, and A. Kanazawa. pixelNeRF: Neural radiance fields from one or few images. 2021.
* Zhai et al. [2024] A. J. Zhai, Y. Shen, E. Y. Chen, G. X. Wang, X. Wang, S. Wang, K. Guan, and S. Wang. Physical property understanding from language-embedded feature fields. _arXiv preprint arXiv:2404.04242_ , 2024.
* Zhang et al. [2024] K. Zhang, S. Bi, H. Tan, Y. Xiangli, N. Zhao, K. Sunkavalli, and Z. Xu. Gs-lrm: Large reconstruction model for 3d gaussian splatting. _arXiv preprint arXiv:2404.19702_ , 2024.
* Zhong et al. [2024] L. Zhong, H.-X. Yu, J. Wu, and Y. Li. Reconstruction and simulation of elastic objects with spring-mass 3d gaussians. _arXiv preprint arXiv:2403.09434_ , 2024.
## Appendix A Additional Qualitative Results
Figure 7: Additional qualitative results of physical compatibility
optimization (part 1/2).
Figure 8: Additional qualitative results of physical compatibility
optimization (part 2/2).
Figures 7 and 8 show additional results of our physical compatibility
optimization.
## Appendix B Plastic Strain Field $\mathbf{F_{p}}$
To enhance the understanding of our framework without compromising
generalizability, let us consider $\mathcal{M}_{\mathrm{init}}$ to be a
tetrahedral mesh composed of a single element and four vertices. When subject
to static equilibrium influenced by gravity, the object adheres to the
equation:
$\mathbf{f}_{\mathrm{int}}(\mathbf{x},\phi(\mathbf{F_{p}};\mathbf{X}_{\mathrm{init}});\Theta)=\mathbf{M}\mathbf{g},$
(6)
where $\mathbf{f}_{\mathrm{int}}(\cdot,\cdot)$ denotes the elastic force
(internal force), $\mathbf{M}$ is the mass matrix, and $\mathbf{g}$ denotes
the gravity acceleration. To compute this force, we first consider the elastic
energy $\mathcal{E}$. The definition of elastic energy unfolds as follows:
$\mathcal{E}(\mathbf{F_{e}},\mathbf{F_{p}};\Theta)=V(\mathbf{F_{p}})\,\Phi(\mathbf{F_{e}};\Theta),\qquad V(\mathbf{F_{p}})=V_{\mathrm{init}}\,\mathrm{det}(\mathbf{F_{p}}),\qquad\mathbf{F_{e}}=\mathbf{F}\mathbf{F_{p}}^{-1},\qquad\mathbf{F}=\partial\mathbf{x}/\partial\mathbf{X}_{\mathrm{init}},$
where $V(\mathbf{F_{p}})$ represents the volume of the element under plastic
strain, $V_{\mathrm{init}}$ is the initial volume of the element,
$\mathbf{F_{e}}$ denotes the elastic deformation gradient, $\mathbf{F}$
represents the total deformation gradient, and $\Phi(\cdot;\Theta)$ is the elastic
energy density function. This deformation gradient $\mathbf{F}$ is computed
through standard methodology [32].
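As a concrete illustration of that standard methodology (a sketch in our own conventions, following [32], not the authors' code), the per-tetrahedron deformation gradient is $\mathbf{F}=\mathbf{D}_{s}\mathbf{D}_{m}^{-1}$, where the columns of $\mathbf{D}_{s}$ and $\mathbf{D}_{m}$ are edge vectors of the deformed and initial tetrahedron:

```python
import numpy as np

def deformation_gradient(x: np.ndarray, X_init: np.ndarray) -> np.ndarray:
    """x, X_init: (4, 3) arrays of deformed / initial vertex positions."""
    Ds = (x[1:] - x[0]).T            # deformed edge matrix, 3x3
    Dm = (X_init[1:] - X_init[0]).T  # initial (material) edge matrix, 3x3
    return Ds @ np.linalg.inv(Dm)    # F = dx/dX_init for this element
```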
Consequently, the derivation of the elastic force encapsulates the computation
of the first-order partial derivative of the elastic energy with respect to
the vertex positions:
$\mathbf{f}_{\mathrm{int}}(\mathbf{x},\phi(\mathbf{F_{p}};\mathbf{X}_{\mathrm{init}});\Theta):=\frac{\partial\mathcal{E}(\mathbf{F_{e}}(\mathbf{x}),\mathbf{F_{p}};\Theta)}{\partial\mathbf{x}}=V(\mathbf{F_{p}})\frac{\partial\Phi}{\partial\mathbf{F_{e}}}\colon\frac{\partial\mathbf{F}}{\partial\mathbf{x}}\mathbf{F_{p}}^{-1}.$
Notably, given the linear dependence of $\mathbf{F}$ on $\mathbf{x}$,
$\frac{\partial\mathbf{F}}{\partial\mathbf{x}}$ remains constant.
Given $\mathbf{F_{p}}$ and $\mathbf{X}_{\mathrm{init}}$ as inputs, the
solution to Eq. 6 is the static shape,
$\mathbf{x}=\mathbf{x}_{\mathrm{static}}$. Likewise, to calculate
$\mathbf{X}_{\mathrm{rest}}$ from $\mathbf{F_{p}}$ and
$\mathbf{X}_{\mathrm{init}}$ in Eq. 3, we solve a similar equation with zero
external force.
$\mathbf{f}_{\mathrm{int}}(\mathbf{x},\phi(\mathbf{F_{p}};\mathbf{X}_{\mathrm{init}});\Theta)=\mathbf{0},$
where the solution to this equation is $\mathbf{X}_{\mathrm{rest}}$.
Considering the elastic energy, a translation of $\mathbf{X}_{\mathrm{init}}$ does not alter the deformation gradient $\mathbf{F}$. Consequently, $\mathbf{F_{p}}$ remains unaffected and exhibits translation invariance. The elastic force is translation invariant as well, since $\mathbf{F}$ is not affected by any shift in $\mathbf{X}_{\mathrm{init}}$.
Finally, by using isotropic materials, our approach enables a further
reduction in the DOFs of $\mathbf{F_{p}}$. Let us denote $\mathbf{F_{p}}$ as
$\mathbf{F_{p}}=\mathbf{R}\mathbf{S}$. The elastic deformation gradient is
then derived as
$\mathbf{F_{e}}=\mathbf{F}(\mathbf{R}\mathbf{S})^{-1}=\mathbf{F}\mathbf{S}^{-1}\mathbf{R}^{-1}$.
Given the invariance property
$\Phi(\mathbf{F_{e}};\theta)=\Phi(\mathbf{F_{e}}\mathbf{R};\theta)$, which
constantly holds for isotropic materials, the rotation component $\mathbf{R}$
becomes redundant and can be excluded from the formulation. This simplification implies that the only requirement on $\mathbf{F_{p}}$ is that it be a symmetric matrix. During optimization, this property helps prevent inversion: to ensure that $\mathrm{det}(\mathbf{F_{p}})>0$, we simply adjust the eigenvalues of $\mathbf{F_{p}}$ so that they remain positive. This adjustment is crucial for keeping the rest mesh $\mathbf{X}_{\mathrm{rest}}$ in a non-inverted state.
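A minimal sketch of this eigenvalue adjustment, assuming $\mathbf{F_{p}}$ is stored as a $3\times 3$ matrix per element; the floor `eps` is an illustrative tolerance we introduce here, not a value from the paper:

```python
import numpy as np

def project_Fp(Fp: np.ndarray, eps: float = 1e-4) -> np.ndarray:
    S = 0.5 * (Fp + Fp.T)        # symmetrize (the rotation part is redundant)
    w, V = np.linalg.eigh(S)     # eigen-decomposition of the symmetric matrix
    w = np.maximum(w, eps)       # clamp eigenvalues to stay positive
    return V @ np.diag(w) @ V.T  # det(F_p) > 0 by construction
```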
## Appendix C Computation of Gradient
By differentiating the constraint in Eq. 2.2 with respect to $\mathbf{F_{p}}$,
we obtain
$\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{F_{p}}}+\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{x}_{\mathrm{static}}}\frac{\partial\mathbf{x}_{\mathrm{static}}}{\partial\mathbf{F_{p}}}=0.$
(7)
Then, we have
$\frac{\partial\mathbf{x}_{\mathrm{static}}}{\partial\mathbf{F_{p}}}=-[\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{x}_{\mathrm{static}}}]^{-1}\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{F_{p}}}.$
(8)
Substituting the result into the objective in Eq. 2.2, we get
$\frac{\partial\mathcal{J}}{\partial\mathbf{F_{p}}}=\frac{\partial\mathcal{L}}{\partial\mathbf{F_{p}}}+\frac{\partial\mathcal{L}_{\mathrm{reg}}}{\partial\mathbf{F_{p}}}=\frac{\partial\mathcal{L}}{\partial\mathbf{x}_{\mathrm{static}}}\frac{\partial\mathbf{x}_{\mathrm{static}}}{\partial\mathbf{F_{p}}}+\frac{\partial\mathcal{L}_{\mathrm{reg}}}{\partial\mathbf{F_{p}}}=-\frac{\partial\mathcal{L}}{\partial\mathbf{x}_{\mathrm{static}}}\Big[\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{x}_{\mathrm{static}}}\Big]^{-1}\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{F_{p}}}+\frac{\partial\mathcal{L}_{\mathrm{reg}}}{\partial\mathbf{F_{p}}},$
(9)
which is the gradient with respect to $\mathbf{F_{p}}$. In practice,
$\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{x}_{\mathrm{static}}}$
and $\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{F_{p}}}$ are
stored as sparse matrices and computed based on [40]. For performance, we first compute $\frac{\partial\mathcal{L}}{\partial\mathbf{x}_{\mathrm{static}}}\big[\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{x}_{\mathrm{static}}}\big]^{-1}$ using a sparse linear solver. This yields a dense vector of size $3N$, which we then multiply by $\frac{\partial\mathbf{f}_{\mathrm{net}}}{\partial\mathbf{F_{p}}}$.
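A minimal sketch of this adjoint computation (illustrative names, not the authors' code): `dfnet_dx` ($3N\times 3N$) and `dfnet_dFp` ($3N\times P$) are sparse Jacobians from an FEM backend, while `dL_dx` ($3N$,) and `dLreg_dFp` ($P$,) are dense:

```python
from scipy.sparse.linalg import spsolve

def grad_Fp(dfnet_dx, dfnet_dFp, dL_dx, dLreg_dFp):
    # Adjoint solve: [df_net/dx]^T lam = (dL/dx)^T -- one sparse solve for a
    # dense vector of size 3N, instead of inverting the Jacobian.
    lam = spsolve(dfnet_dx.T.tocsc(), dL_dx)
    # Eq. 9: dJ/dF_p = -(dL/dx)[df_net/dx]^{-1} df_net/dF_p + dL_reg/dF_p.
    return -(dfnet_dFp.T @ lam) + dLreg_dFp
```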
## Appendix D Implementation Details of Evaluation
To evaluate the physical compatibility of baseline methods, which often
produce shapes comprising multiple connected components, we first extract the
largest connected component from each mesh. All meshes are then normalized to
the unit cube. Notably, the reconstructed shapes from TripoSR and Wonder3D are
not axis-aligned; thus, we manually rotate these shapes to ensure the head
points towards the $z$-axis in the world coordinate space. For integrating our
physical compatibility framework, we use two sets of Young’s modulus,
$E=5\times 10^{4}\mathrm{Pa}$ and $E=5\times 10^{5}\mathrm{Pa}$, which are
selected based on whether the shape would become overly soft, potentially
leading to static equilibrium failure due to excessive stress causing
numerical bounds to be exceeded. Poisson’s ratio $\nu=0.45$ and mass density
$\rho=1000\mathrm{kg/m^{3}}$ are consistent across all meshes. Evaluation
metrics require solving the static equilibrium equation (Eq. 1). We employ the Newton-
Raphson solver with line search, setting the maximum number of iterations to
be $200$. For optimizing Eq. 2.2, we use gradient descent and allow up to
$1000$ iterations. Our experiments run on a desktop PC with an AMD Ryzen 9
5950X 16-core CPU and 64GB RAM. The average runtime for this optimization
process is approximately $80$ seconds.
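For concreteness, a minimal sketch of such a Newton-Raphson solve with backtracking line search; `f_net` and `jac_f_net` are assumed callbacks (our names, not the authors') returning the net-force residual and its sparse Jacobian:

```python
import numpy as np
from scipy.sparse.linalg import spsolve

def solve_static(x0, f_net, jac_f_net, max_iter=200, tol=1e-8):
    x = x0.copy()
    for _ in range(max_iter):
        r = f_net(x)
        if np.linalg.norm(r) < tol:
            break                                   # converged
        dx = spsolve(jac_f_net(x).tocsc(), -r)      # Newton direction
        alpha = 1.0
        while (np.linalg.norm(f_net(x + alpha * dx)) > np.linalg.norm(r)
               and alpha > 1e-6):
            alpha *= 0.5                            # backtracking line search
        x = x + alpha * dx
    return x
```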
## Appendix E Implementation Details of 3D Printing
The selected model shapes were 3D printed using stereolithography (Form3;
Formlabs, $100$ $\mu$m layer thickness) to create the flexible designs (using
Flexible 80A, tensile modulus ${<}3$ MPa, 100% strain to failure) and rigid
designs (using White Resin V4; tensile modulus $1.6$ GPa), both without post-
curing. The flexible flowers are $55$ and $65$ mm in height and the rigid
goose is $50$ mm in length. Shapes with and without optimization were printed
with similar support structures designed to preserve delicate features.
## Appendix F Dynamic Simulation of Deformable Objects
We model each solid deformable object using FEM with hyperelastic materials
for dynamic simulation. Then, we solve the standard partial differential
equation (PDE) for dynamic FEM simulation:
$M\ddot{x}+D(x)\dot{x}+f_{\mathrm{elastic}}(x)+f_{\mathrm{attachment}}(x)+f_{\mathrm{contact}}(x)=Mg,$
(10)
where $x$ represents the node positions within the finite element meshes – we
use tetrahedral meshes – of the objects, $M$ denotes the mass matrix, $D$ is
the Rayleigh damping matrix, $f_{\mathrm{elastic}}(\cdot)$ is the hyperelastic
forces, $f_{\mathrm{attachment}}(x)$ is the attachment forces that constrain
the objects to a specific location, and $f_{\mathrm{contact}}(\cdot)$ denotes
the contact forces between surfaces. We employ the implicit backward Euler
method for time discretization, transforming the PDE into:
$A^{n}x^{n+1}+b^{n}+f_{\mathrm{elastic}}(x^{n+1})+f_{\mathrm{attachment}}(x^{n+1})+f_{\mathrm{contact}}(x^{n+1})=0,$
(11)
where $x^{n+1}$ is the position vector at timestep $(n+1)$, and $A^{n}$ and $b^{n}$ are a constant matrix and a constant vector, respectively, derived from values at timestep $n$. Finally, we solve this nonlinear equation using Newton’s method
at each timestep.
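A minimal sketch of one such implicit step (our reading of Eq. 11, with the damping matrix $D$ frozen at timestep $n$ so that $A^{n}$ and $b^{n}$ stay constant; all names are illustrative). `M`, `D` are sparse, `forces` sums the elastic, attachment, and contact forces, and `jac_forces` returns their sparse Jacobian:

```python
from scipy.sparse.linalg import spsolve

def backward_euler_step(x_n, v_n, M, D, forces, jac_forces, g, dt, iters=20):
    A = M / dt**2 + D / dt                       # A^n in Eq. 11
    b = -A @ x_n - (M @ v_n) / dt - M @ g        # b^n: terms known at step n
    x = x_n.copy()
    for _ in range(iters):                       # Newton's method on Eq. 11
        r = A @ x + b + forces(x)                # residual of Eq. 11
        J = A + jac_forces(x)
        x = x + spsolve(J.tocsc(), -r)
    v = (x - x_n) / dt                           # backward-Euler velocity
    return x, v
```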
The hyperelastic material selected for the deformable objects is the same as
the one used for the rest shape optimization [33] in Sec. 2. Attachment forces
are modeled as spring forces
$f_{\mathrm{attachment}}(x)=k_{a}(Sx-\bar{x}(t))$, where $k_{a}$ is the
stiffness of the spring, the selection matrix $S$ selects the attached
vertices, and $\bar{x}(t)$ denotes the target attachment locations at time
$t$. Contact forces are generated from penalizing any vertex penetration into
the contact surface, expressed as $f=k_{c}d$, where $k_{c}$ represents the
contact stiffness and $d$ denotes the penetration depth, with $d=0$ in the
absence of contact. This gives the normal contact forces. Friction forces are
computed following the methods outlined in [18]. Then, the total contact force
$f_{\mathrm{contact}}$ is the sum of normal contact forces and friction
forces.
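A minimal sketch of these two force models, with illustrative stiffness values and our own sign/assembly conventions (not the authors'); `S` is a sparse selection matrix over degrees of freedom and `xbar_t` the target attachment positions at time $t$:

```python
def f_attachment(x, S, xbar_t, k_a=1e4):
    # Spring force k_a (S x - xbar(t)), scattered back to all DOFs via S^T.
    return k_a * (S.T @ (S @ x - xbar_t))

def f_contact_normal(depth, normal, k_c=1e5):
    # Penalty force f = k_c d along the contact normal; depth = 0 => no force.
    return k_c * depth * normal
```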
For the dynamic simulation in Fig. 6 (left), the attachment of each plant is
defined as the bottom part of each pot. We keyframe-animate the trajectory of
attachment vertices $\bar{x}(t)$. Gravity is enabled throughout the entire
simulation. At the end of the sequence, we apply wind forces to the plants,
computed using 4D Perlin Noise [28].
## Appendix G Related Work
#### Single-view 3D reconstruction.
Recent strides in single-view 3D reconstruction have mainly been fueled by
data-driven methods, paralleled by advancements in 3D geometry representation,
including NeRF [25], NeuS [41], triplanes [31], Gaussian splatting [16],
surface meshes [27], and tet-spheres [11]. These developments have
significantly enhanced the geometric quality and visual fidelity of the
reconstructed 3D shapes. There are primarily two types of single-view
reconstruction methods: 1) Test-time optimization-based methods [29, 23, 37,
43], use multiview diffusion models [21] and iteratively reconstruct 3D scenes
using these diffusion priors. 2) Feedforward methods [13, 46, 36, 7, 42, 48]
leverage large datasets and learn general 3D priors for shape reconstruction
to enable efficient one-step 3D reconstruction from single or sparse views.
Unlike the aforementioned methods, our work emphasizes the integration of
physical modeling into the reconstruction process. This integration
distinguishes our work by ensuring that the resulting 3D shapes are not only
visually accurate but also physically plausible under real-world conditions.
#### Physics-based 3D modeling.
There has been an increasing interest in incorporating physics into 3D shape
modeling. While many approaches utilize video input, which offers a richer
temporal context for inferring physical properties such as material parameters
[49] and geometry [19], others approach the problem by first reconstructing an
object’s geometry from multi-view images and subsequently applying physical
simulations [10, 44]. Additionally, several studies have explored extracting
physical information from static images [47, 3, 35], using data-driven
techniques to estimate properties like shading, mass, and material. In
contrast, our work incorporates physical principles, specifically static
equilibrium, as hard constraints within the reconstruction process. This
integration allows for the optimization of 3D models that adhere to desired
physical behaviors depicted by the image.
#### Fabrication-aware shape design.
Originating from the computer graphics community, fabrication-aware shape
design systems enable designers to specify higher-level objectives – such as
structural integrity, deformation, and appearance – with the final shape as
the output of the computational system [4]. Related methodologies in this
domain, particularly those addressing static equilibrium, include inverse
elastic shape design [8] and sag-free initialization [14]. However, these
approaches typically require a manually created initial geometry, whereas our
work aims to construct the physical object directly from a single input image.
## Appendix H Limitations and Future Work
One limitation of our framework is its reliance on predefined material
properties and external forces as inputs. Although this provides
controllability of the final optimized rest-shape geometry, automating the
extraction of these parameters from a single image presents a potential avenue
for future work. Moreover, our method relies on the use of a tetrahedral mesh,
which is derived by tetrahedralizing the output geometry produced by baseline
methods. A natural extension of our work is the development of a
differentiable converter that can transform any geometric representation into
a tetrahedral mesh. This would enable future research where our physical
compatibility optimization could be integrated into a pre-trained large
reconstruction model, which could then be fine-tuned to directly produce
physically compatible 3D objects. Lastly, our current methodology focuses
solely on physical objects in a state of static equilibrium. Exploring the
reconstruction of 3D objects undergoing dynamics captured from video is an
intriguing prospect for future research.
## Appendix I Broader Impacts
Our research presents a computational framework for reconstructing physical
objects from single images. This advancement holds significant potential for
various applications, including dynamic simulations, 3D printing, virtual
reality, and industrial design. By ensuring that the reconstructed objects
adhere to real-world physical laws, our method can enhance the realism and
functionality of virtual environments, improve the precision of 3D printed
objects, and contribute to the development of more reliable industrial
designs.
There are mainly two potential negative societal impacts: Improved 3D
reconstruction capabilities could potentially be misused to create highly
realistic fake objects or environments for disinformation purposes. This could
include generating deceptive media content that appears authentic. As the
framework automates the reconstruction process, there is a potential risk of
it being used in automated systems without sufficient oversight, potentially
leading to unintended and harmful outcomes due to errors or misuse. Developing
systems to monitor the use of the technology and ensure accountability for its
applications, as well as providing comprehensive guidelines and training for
users to promote ethical use and awareness of potential misuse, will address
these potential negative impacts.
# Spectrification is incompatible with Szabó spectral sequence
Benjamin Cooper , Pravakar Paul and Nicholas Seguin University of Iowa,
Department of Mathematics, 14 MacLean Hall, Iowa City, IA 52242-1419 USA ben-
<EMAIL_ADDRESS><EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
The Lipshitz-Sarkar Steenrod operations on Khovanov homology are incompatible
with Szabó’s geometric spectral sequence.
## Introduction
Variations on constructions of knot homology theories have determined a rich
mathematical structure, reflecting both their many origins and the surprising
capacity for dissimilar settings to carry the complexity of knot theory. In
the presence of many different, but ultimately equivalent definitions, it is
common to use the extra structure available in one setting to predict new
structure in another. Here we show that extra structure present in one setting
is incompatible with extra structure from another.
R. Lipshitz and S. Sarkar introduced a spectrum-level refinement of Khovanov
homology which determines Steenrod operations
$Sq^{n}:Kh^{t,q}(K;\mathbb{F}_{2})\to Kh^{t+n,q}(K;\mathbb{F}_{2})$ for $n\geq
1$ [LS14a, LS14b]. Z. Szabó defined a spectral sequence which interpolates
from Khovanov homology to a knot homology theory (conjecturally) isomorphic to
the Heegaard-Floer homology $\widehat{HF}(-\Sigma(K)\#(S^{1}\times S^{2}))$
of the double branched cover $\Sigma(K)$ connected sum $S^{1}\times S^{2}$
[OS05, Sza15]. We prove that for each $n\geq 1$, there is a knot $K_{n}$ so
that $Sq^{n}$ does not commute with the differentials of Szabó’s spectral
sequence.
Compatibility between these constructions is a litmus test for the existence
of a spectrum-level refinement of Szabó’s spectral sequence or its terminus,
Heegaard-Floer homology. The incompatibility observed in this paper is
evidence of a richer story.
## Spectral sequences
Suppose that $(CKh(K),d)$ is the Khovanov chain complex associated to a link
$K$. A map $\delta:CKh(K)\to CKh(K)$ of chain complexes is a differential
when $(d+\delta)^{2}=0$. Associated to such a differential $\delta$ is a
spectral sequence $\{E_{n},d_{n}\}_{n=2}^{\infty}$, consisting of pages
$E_{n}=\oplus_{(i,j)\in\mathbb{Z}\times\mathbb{Z}}E^{i,j}_{n}$ and
differentials $d_{n}:E_{n}\to E_{n}$ such that $E_{n+1}:=H(E_{n},d_{n})$, from
the Khovanov homology $E_{2}=Kh(K)$, (where $Kh(K):=H(CKh(K),d)$), to the
homology of the total complex $E_{\infty}=H(CKh(K),d+\delta)$.
###### Definition.
An endomorphism $f_{m}:E_{m}\to E_{m}$ acting on the $E_{m}$-page gives rise
to an operation on $\{E_{n}\}_{n=m}^{\infty}$ when there is a sequence of
linear maps $f_{n}:E_{n}\to E_{n}$ for $n>m$ which satisfy the two properties
below.
1. (1)
The map $f_{n}:E_{n}\to E_{n}$ commutes with the differential on the
$E_{n}$-page: $f_{n}d_{n}=d_{n}f_{n}$.
2. (2)
The map $f_{n+1}:E_{n+1}\to E_{n+1}$ agrees with the homology of $f_{n}$ on
the $E_{n}$-page: $f_{n+1}=H(f_{n},d_{n}):H(E_{n},d_{n})\to H(E_{n},d_{n})$.
The Khovanov chain complex $(CKh(K),d)$ is defined as a direct sum of tensor
products of the Frobenius algebra $\mathbb{F}_{2}[x]/(x^{2})$. Choosing any
point on the knot $K$ allows us to adjoin a handle corresponding to
multiplication by $x$. This gives rise to a chain map $X:CKh(K)\to CKh(K)$ and
an induced map $X_{*}$ on homology [Kho03, §3]. A. Shumakovitch introduced a
decomposition of the form
$Kh(K;\mathbb{F}_{2})\cong\widetilde{Kh}(K;\mathbb{F}_{2})\oplus
X_{*}\widetilde{Kh}(K;\mathbb{F}_{2})$ (1.1)
where $\widetilde{Kh}(K;\mathbb{F}_{2})\subset Kh(K;\mathbb{F}_{2})$,
$\widetilde{Kh}(K;\mathbb{F}_{2})\cong
Kh(K;\mathbb{F}_{2})\otimes_{\mathbb{F}_{2}[x]/(x^{2})}\mathbb{F}_{2}$ is the
reduced Khovanov homology [Shu14, §3].
The lemma below shows that the first part of this story extends to Szabó’s
spectral sequence.
###### Lemma.
The map $X_{*}:Kh(K)\to Kh(K)$ gives rise to an operation on Szabó’s
spectral sequence.
###### Proof.
We construct a map $X_{Sz}:CKh(K)\to CKh(K)$ which commutes with Szabó’s
differential $\delta_{Sz}$ and agrees with $X$ on the associated graded of the
filtration defining the spectral sequence. This is accomplished by refactoring
Szabó’s proof of invariance under the Reidemeister 1 move [Sza15, Thm. 7.2].
Pick a point $p$ on $K$. Adding a kink in the knot at $p$ gives a knot diagram
$K^{\sharp}$. By resolving the crossing at the kink, the chain complex $(\langle K^{\sharp}\rangle,\delta_{Sz})$ associated to the diagram $K^{\sharp}$ can be written as a cone on a map $S$,
$(\langle K^{\sharp}\rangle,\delta_{Sz})=Cone\Big(S\colon(C_{0},\delta_{0})\longrightarrow(C_{1},\delta_{1})\Big),$
where $(C_{0},\delta_{0})$ and $(C_{1},\delta_{1})$ are the complexes associated to the $0$- and $1$-resolutions of the crossing at the kink. So $\delta_{Sz}^{2}=0$ implies $S\delta_{0}=\delta_{1}S$. Now the disjoint
circle in the diagram for $C_{0}$ produces a decomposition $C_{0}\cong
C_{0}^{+}\oplus C_{0}^{-}$ where $C_{0}^{-}$ consists of elements divisible by
$x$ (in the Frobenius algebra associated to the disjoint circle) and
$C_{0}^{+}$ those elements which are not divisible by $x$. Under this
isomorphism $S$ is a sum of two maps $S=X_{Sz}+1$ where $X_{Sz}:C_{0}^{-}\to
C_{1}$ and $1:C_{0}^{+}\to C_{1}$. Szabó observes that the map $1$ is the
identity map by construction. So $S$ and $1$ are chain maps, which implies
that $X_{Sz}$ must also commute with the Szabó differentials.
Since the map $X_{Sz}$ has positive homological degree or $t$-degree, it
preserves the filtration $F_{k}C:=\{y\in C:t(y)\geq k\}$ defining the Szabó
spectral sequence and induces an operation on the spectral sequence. Lastly,
since the first order term of the Szabó differential is the Khovanov
differential, the first order term of the map $X_{Sz}$ is $X$, so $X$ (and
$X_{*}=H(X,d)$) extend to operations on the spectral sequence. ∎
The map $X_{Sz}$ can be written as $X_{Sz}=\sum_{n=0}^{\infty}X_{n}$ where
$X_{0}=X$ and $X_{n}:=\sum_{p+q=n}E_{p,q}$ for $n>0$ where $E_{p,q}$ is the
assignment from [Sza15, Def. 4.5]. This formula depends on an orientation of
the saddle $S$, but any two choices are homotopic [Sza15, §5.1].
## The counterexample
We now combine the materials above with the output of the computer programs
KnotKit by C. Seed and JavaKh by D. Bar-Natan and J. Green [Cotb, Cota, BN07]
to produce an incompatibility between the Bockstein $Sq^{1}:Kh(K)\to Kh(K)$
and the Szabó spectral sequence. This occurs on the $E_{3}$-page of the
spectral sequence associated to the torus knot $T(4,5)$.
The Poincaré polynomial of the unreduced $\mathbb{F}_{2}$-Khovanov homology of
the torus knot $T(4,5)$ is given by
$\displaystyle P_{2}$
$\displaystyle=(q^{11}+q^{13})t^{0}+(q^{15}+q^{17})t^{2}+(q^{17}+q^{19})t^{3}+(q^{17}+q^{19})t^{4}+(q^{21}+q^{23})t^{5}$
$\displaystyle\quad\quad+(q^{19}+2q^{21}+q^{23})t^{6}+(q^{21}+2q^{23}+q^{25})t^{7}+(q^{23}+q^{25})t^{8}$
$\displaystyle\quad\quad+(q^{25}+2q^{27}+q^{29})t^{9}+(q^{27}+q^{29})t^{10}$
This is the Poincaré polynomial of the $E_{2}$-page of the Szabó spectral
sequence. The polynomials associated to the $E_{3}$ and $E_{4}=E_{\infty}$
pages are given below.
$\displaystyle P_{3}$
$\displaystyle=(q^{11}+q^{13})t^{0}+(q^{17}+q^{19})t^{3}+(q^{19}+2q^{21}+q^{23})t^{6}+(q^{21}+q^{23})t^{7}$
$\displaystyle\quad\quad+(q^{23}+q^{25})t^{8}+(q^{25}+2q^{27}+q^{29})t^{9}+(q^{27}+q^{29})t^{10}$
$\displaystyle P_{4}$
$\displaystyle=(q^{11}+q^{13})t^{0}+(q^{19}+q^{21})t^{6}+(q^{21}+q^{23})t^{7}$
$\displaystyle\quad\quad+(q^{23}+q^{25})t^{8}+(q^{25}+2q^{27}+q^{29})t^{9}+(q^{27}+q^{29})t^{10}$
The diagram below also contains this information. In the diagram, the non-zero
$d_{2}$ differentials are denoted by solid arrows and non-zero $d_{3}$
differentials are denoted by dashed arrows. The $(t,q)$-bidegree of the
differential $d_{n}:E_{n}\to E_{n}$ is $(n,2n-2)$.
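For example, this bidegree rule places the differentials that appear below as $d_{2}\colon E_{2}^{2,15}\to E_{2}^{4,17}$ and $d_{3}\colon E_{3}^{3,17}\to E_{3}^{6,21}$.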
[Diagram: the spectral sequence data plotted in the $(t,q)$-plane, with $t=0,\dots,10$ on the horizontal axis and $q=11,13,\dots,29$ on the vertical axis; rank-$2$ entries are labelled $2$.]
The non-zero maps $Sq^{1}$ and $Sq^{2}$ of $(t,q)$-degrees $(1,0)$ and $(2,0)$
are represented by double arrows. A black dot is a generator which is in the
image of the operation $X_{*}$ and a white dot is a generator which is not in
the image of $X_{*}$, as in Eqn. (1.1). The vector spaces of rank 2 are
labelled by the number $2$. In the picture above, each such vector space
consists of one black dot and one white dot.
###### Theorem.
The Bockstein $Sq^{1}:Kh(T(4,5))\to Kh(T(4,5))$ does not give rise to an
operation on the Szabó spectral sequence.
###### Proof.
Assume that the Bockstein gives rise to an operation on the Szabó spectral
sequence. Then there are maps $Sq^{1}_{n}:E^{i,j}_{n}\to E^{i+1,j}_{n}$ for
$n\geq 2$ which satisfy the two properties in the definition above. But this
cannot be true!
First observe that, of the vector spaces: $E_{2}^{4,17}$, $E_{2}^{3,17}$,
$E_{2}^{6,21}$ and $E_{2}^{7,21}$, only $E_{2}^{4,17}$ interacts with a non-
zero $d_{2}$ differential. In this way, $E_{3}^{i,j}=H(E_{2},d_{2})$ implies
the following isomorphisms
$E_{3}^{4,17}\cong 0\quad\textnormal{ and }\quad E_{3}^{3,17}\cong
E_{2}^{3,17},\quad E_{3}^{6,21}\cong E_{2}^{6,21},\quad E_{3}^{7,21}\cong
E_{2}^{7,21}.$
By assumption, the value of $Sq^{1}_{3}$ agrees with $Sq^{1}_{2}$ under the
correspondences established by the last two of these isomorphisms. So the
diagram below appears on the $E_{3}$-page.
$\begin{array}{ccc}(3,17)\ \bullet&\xrightarrow{\ d_{3}\neq 0\ }&(6,21)\ \circ\ \bullet\\ \Big\downarrow{\scriptstyle\,Sq^{1}_{3}=0}&&\Big\downarrow{\scriptstyle\,Sq^{1}_{3}\neq 0}\\ (4,17)\ 0&\xrightarrow{\ d_{3}=0\ }&(7,21)\ \bullet\end{array}$
Again, $E_{3}^{4,17}=0$ implies that $d_{3}Sq^{1}_{3}=0$. On the other hand,
we shall see that $Sq^{1}_{3}d_{3}\neq 0$. To understand this composition,
first observe that the lemma above implies that $d_{3}(\bullet)=d_{3}X_{3}(\circ)=X_{3}d_{3}(\circ)=\bullet$, where
$X_{3}=H(X_{2},d_{2})$ acts non-trivially by Eqn. (1.1). Second,
$Sq^{1}_{3}=[Sq^{1}_{2}]=[Sq^{1}]$ is non-zero on this generator by virtue of
the computer computation. So $d_{3}Sq^{1}_{3}\neq Sq^{1}_{3}d_{3}$, which
contradicts the assumption that $Sq^{1}$ gives rise to an operation. ∎
###### Corollary.
There is no integral lift of the Szabó spectral sequence for which the
$E_{2}$-page agrees with (even) Khovanov homology.
###### Proof.
Suppose there is such a chain complex
$(CKh(T(4,5);\mathbb{Z}),\tilde{\delta}_{Sz})$, so
$(CKh(T(4,5);\mathbb{F}_{2}),\delta_{Sz})\cong(CKh(T(4,5);\mathbb{Z}),\tilde{\delta}_{Sz})\otimes_{\mathbb{Z}}\mathbb{F}_{2}$
and $\tilde{\delta}_{Sz}=d+d_{1}+d_{2}+\cdots$ where $d$ is the usual (even)
Khovanov differential. Then the Bockstein $Sq^{1}$ associated to
$\tilde{\delta}_{Sz}$ must give an operation on the Szabó spectral sequence
which agrees with the Bockstein $Sq^{1}$ associated to the (even) Khovanov $d$
differential on the $E_{2}$-page. The theorem above shows that this is not
possible. ∎
###### Remark.
In contrast to the corollary above, there is a preprint suggesting that
Szabó’s spectral sequence lifts to odd integral Khovanov homology [Bei]. The
second author has shown that the odd Bockstein map extends to an operation on
the Szabó spectral sequence [Pau22].
The Cartan formula allows us to construct examples for which $Sq^{n}$ is not
an operation on the Szabó spectral sequence.
###### Corollary.
For each $n\geq 1$, there is a link $K_{n}$ for which the Steenrod operation
$Sq^{n}$ does not give rise to an operation on Szabó’s geometric spectral
sequence.
###### Proof.
Set $K_{n}:=\sqcup_{i=1}^{n}T(4,5)$ so that $CKh(K_{n})\cong
CKh(T(4,5))^{\otimes n}$. The Szabó differential $\delta_{Sz}$ respects this
isomorphism and the Künneth formula shows $E_{m}(K_{n})\cong
E_{m}(T(4,5))^{\otimes n}$ and $E_{m+1}(T(4,5))^{\otimes n}\cong
H(E_{m}(T(4,5))^{\otimes n},d_{m})$.
Now let $a\in E^{3,17}_{3}$ be the element which satisfies $d_{3}a=b\in
E^{6,21}_{3}$ as in the proof of the previous theorem. In what follows, set
$Sq^{n}:=Sq^{n}_{3}$. We have $Sq^{n}a=0$ for all $n\geq 1$ and $Sq^{n}b=0$
for all $n>1$. Consider that $d_{3}(a\otimes b\otimes\cdots\otimes
b)=b\otimes\cdots\otimes b$ and the Steenrod operation is
$Sq^{n}(b\otimes\cdots\otimes
b)=\sum_{i_{1}+\cdots+i_{n}=n}Sq^{i_{1}}b\otimes\cdots\otimes
Sq^{i_{n}}b=Sq^{1}b\otimes\cdots\otimes Sq^{1}b$
which ensures that the composition $Sq^{n}d_{3}(a\otimes b\otimes\cdots\otimes
b)$ is non-zero. On the other hand, $Sq^{n}(a\otimes b\otimes\cdots\otimes
b)=Sq^{1}a\otimes Sq^{1}b\otimes\cdots\otimes Sq^{1}b=0$ because $Sq^{1}a=0$.
Therefore, $d_{3}Sq^{n}\neq Sq^{n}d_{3}$. ∎
## A pattern
In the first few dozen knot table examples one can observe a relationship
between $Sq^{2}$ and $d_{2}$. It appears that every non-zero $d_{2}$ leads to
a non-zero $Sq^{2}$; they occur together in the butterfly configuration
depicted below.
[Diagram: the butterfly configuration, occupying $t$-degrees $t,t+1,t+2$ and $q$-degrees $q,q+2,q+4$.]
See for example, $(t,q)=(2,15)$ in $T(4,5)$. More formally, this pattern
suggests that
$d_{2}(xm)=xm^{\prime}\quad\textnormal{ implies }\quad
Sq^{2}(m)=xm^{\prime}+\cdots.$
The converse is false as $T(4,5)$ contains a $Sq^{2}$-operation at
$(t,q)=(5,21)$ of a different sort.
## Acknowledgments
The authors thank R. Lipshitz for mentioning this question at the Banff
workshop in June 2021. The first author thanks the organizers M. Aganagić, S.
Krushkal and B. Webster for inviting his participation. This paper was
partially funded by Simons Award #638089.
## References
* [Bei] Simon Beier, _An integral lift, starting in odd Khovanov homology, of Szabó’s spectral sequence_ , arXiv:1205.2256.
* [BN07] Dror Bar-Natan, _Fast Khovanov homology computations_ , J. Knot Theory Ramifications 16 (2007), no. 3, 243–255. MR 2320156
* [Cota] Cotton Seed, _Computations of the Lipshitz-Sarkar Steenrod square on Khovanov homology_ , arXiv:1210.1882.
* [Cotb] by same author, _Computations of Szabó’s geometric spectral sequence in Khovanov homology_ , arXiv:1110.0735v1.
* [Kho03] Mikhail Khovanov, _Patterns in knot cohomology. I_ , Experiment. Math. 12 (2003), no. 3, 365–374. MR 2034399
* [LS14a] Robert Lipshitz and Sucharit Sarkar, _A Khovanov stable homotopy type_ , J. Amer. Math. Soc. 27 (2014), no. 4, 983–1042. MR 3230817
* [LS14b] by same author, _A Steenrod square on Khovanov homology_ , J. Topol. 7 (2014), no. 3, 817–848. MR 3252965
* [OS05] Peter Ozsváth and Zoltán Szabó, _On the Heegaard Floer homology of branched double-covers_ , Adv. Math. 194 (2005), no. 1, 1–33. MR 2141852
* [Pau22] Pravakar Paul, _University of Iowa Thesis_.
* [Shu14] Alexander N. Shumakovitch, _Torsion of Khovanov homology_ , Fund. Math. 225 (2014), no. 1, 343–364. MR 3205577
* [Sza15] Zoltán Szabó, _A geometric spectral sequence in Khovanov homology_ , J. Topol. 8 (2015), no. 4, 1017–1044. MR 3431667
|
# Model selection of chaotic systems from data with hidden variables using
sparse data assimilation
H. Ribera, S. Shirman, A. V. Nguyen, N. M. Mangan
(August 28, 2024)
###### Abstract
Many natural systems exhibit chaotic behaviour, with examples ranging from the
weather and hydrology to neuroscience and population dynamics. Although many
chaotic systems can be
described by relatively simple dynamical equations, characterizing these
systems can be challenging, due to sensitivity to initial conditions and
difficulties in differentiating chaotic behavior from noise. Ideally, one
wishes to find a parsimonious set of equations that describe a dynamical
system. However, model selection is more challenging when only a subset of the
variables are experimentally accessible. Manifold learning methods using time-
delay embeddings can successfully reconstruct the underlying structure of the
system from data with hidden variables, but not the equations. Recent work in
sparse-optimization based model selection has enabled model discovery given a
library of possible terms, but regression-based methods require measurements
of all state variables. We present a method combining variational annealing –
a technique previously used for parameter estimation in chaotic systems with
hidden variables – with sparse optimization methods to perform model
identification for chaotic systems with unmeasured variables. We applied the
method to experimental data from an electrical circuit with Lorenz-system like
behavior to successfully recover the circuit equations with two measured and
one hidden variable. We discuss the robustness of our method to varying noise
and manifold sampling using ground-truth time-series simulated from the
classic Lorenz system.
Significance statement
Chaos represents a challenge for studying the dynamic behavior of many
physical and biological systems. Since the 80s we have known that time-series
measurements from one variable of a chaotic system contain information about
the underlying structure of the full multi-dimensional system. However,
recovery of the full system from data with hidden variables has remained
elusive. This work develops a novel data-assimilation technique to identify
governing equations of chaotic systems from data with hidden variables. This
method identifies fairly simple, low-dimensional, and deterministic models
from seemingly incomplete data. Discovery of such equations can enable rich
mathematical study and physical insight for problems across nearly every
discipline including climate science, hydrology, neuroscience, ecology,
medicine and engineering.
## 1 Introduction
Hypothesis generation through data-driven model identification has the
potential to revolutionise science. Uncovering the interactions, structure,
and mechanisms that determine the behaviour of chaotic systems in particular
could improve scientific understanding in almost every discipline with
dynamical systems [30] including climate science [60], hydrology [59],
population dynamics [32], and neuroscience [53]. Many chaotic systems can be
informatively described by relatively simple dynamical equations. However,
characterization and control of these systems can be challenging [11], due to
sensitivity to initial conditions and difficulties in differentiating chaotic
behavior from noise [64]. Characterization through statistical, geometric, or
model-based means becomes more challenging when only a subset of the variables
are experimentally accessible. Our goal is to identify a parsimonious set of
equations to describe a chaotic system from measurements with hidden
variables.
Much data-analysis for chaotic systems has focused on learning the attracting
manifold structure from time-series. In the early 80s, Takens’s theorem [65]
established the conditions under which one can use the time-delay embedding of
a single variable to construct a manifold that preserves the topological
properties of the full system.
information of the manifold structure, and therefore chaotic dynamics, could
be recovered from the time-history of a single state variable. Manifold
reconstruction methods [50, 28, 41] based on partial information provide
insight into the system structure, dimensionality, and statistics of chaotic
systems. By constructing manifolds from time-delays, Sugihara _et al._
developed methods discriminating chaos from noise [64] and detecting causality
between measured variables [63]. Methods including reservoir computing [67,
27], other deep learning frameworks [72], data assimilation combined with
neural networks [15], support vector machine [48], and nearest neighbours [5]
can accurately predict the dynamics of chaotic systems using a data-trained
model with no specific physical knowledge of the system. For a review of
predictive methods see [6]. Assuming a reasonable model structure is known,
data-assimilation methods [8, 9] including variational annealing [70] can
estimate model parameters for chaotic systems from incomplete, indirect, and
noisy measurements. Although these methods are designed to assimilate
information from data-streams with hidden variables and learn about chaotic
systems, they are not designed for the purpose of hypothesizing parsimonious
models or identifying model structure.
Data-driven discovery of parsimonious dynamical systems models to describe
chaotic systems is by no means new. Early on, least-squares fitting of
combinatorial sets of polynomial basis functions to time-series data followed
by information-theory based selection produced models that reproduced manifold
structure and statistics of the system [21]. Symbolic regression demonstrated
successful recovery of the widely accepted equations for the chaotic double-
pendulum system [58]. More recently, sparse regression [68, 17, 34] has
motivated model selection techniques such as SINDy [16], which recover the
ground-truth equations for chaotic systems from a relatively large library of
functions without a computationally intensive combinatorial search.
sparsity-promoting frameworks have improved upon robustness for chaotic
systems equation recovery through integral formulations [57, 51, 47], data
assimilation methods [12], Bayesian frameworks [13], and entropic regression
[4]. However, all these methods require measurements of all state-variables
that significantly impact the desired dynamic. Notably, Champion _et al._
recently used an autoencoder framework for automatic discovery of system
coordinates and equations, but required input time-series of higher dimension
than the intrinsic dimension of the system [18].
Model selection with hidden variables requires a different methodology. By
‘hidden variables’ we mean that the number of measured variables is smaller
than the intrinsic dimension of the system. Measured variables are not
considered hidden if they are corrupted by noise or indirectly sampled through
a measurement function. A few methods address the problem of model selection
with hidden variables, but they have not been demonstrated for chaotic
systems. For example, Daniels _et al._ [22, 23] combinatorially fit each model
in a predefined model space using data assimilation and subsequently use
Bayesian inference to select the best model. Successful recovery of mass-
action kinetic systems for chemical reactions was demonstrated with hidden
variables using a neural network approach [37]. A recent method uses LASSO to
select predictive models for chaotic systems from a library with higher order
derivatives given a single state variable [61]. This method effectively finds
a higher-order ODE representation of the Lorenz and Rössler systems, but it is
unclear how the recovered structures relate to the ground truth models.
In this paper we present a new method to perform model selection in dynamical
systems with hidden variables. This method combines the data assimilation
technique variational annealing, which has been used to estimate parameters
when the structure of the system is known, with sparse model selection via
hard thresholding. We call this method Data Assimilation for Hidden Sparse
Inference (DAHSI). To demonstrate that our method could identify interpretable
models for chaotic systems, we followed the philosophy of earlier works [58,
16] and demonstrated recovery of accepted parsimonious models from
experimental data and simulated time-series where the ground truth is known.
In the Results section DAHSI successfully selected a set of models for a
circuit that has Lorenz-like behaviour from experimental data of two state
variables (one hidden). One of the identified models has the same structure as
the Lorenz system. The other identified models with high AIC/BIC support
exhibit nearly indistinguishable dynamics and suggest novel terms which may
better represent the experimental circuit system. Moreover, we used ground
truth simulations of the canonical Lorenz system to study how our method
performs with varying data size and noise. In the Materials and Methods
section we describe the DAHSI algorithm for model selection with hidden
variables.
## 2 Results: Model selection for chaotic systems
### 2.1 Identification of models for the Lorenz circuit from experimental
data
The Lorenz system [42] was originally developed to forecast the weather and
has become a canonical example when developing new methods to characterize
chaotic systems. To demonstrate model selection on experimental data with
hidden variables, we considered high-quality data from the electrical circuit
in Blakely _et al._ [10] (Fig. 1(a)). This system exhibits similar structure
and behavior to the highly studied Lorenz system and is well described by
relatively simple circuit equations
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=\hat{\sigma}(y-x),$ (1)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=\hat{\rho}x-\hat{\gamma}y-\hat{\varepsilon}xz,$ (2)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=-\hat{\beta}z+\hat{\eta}xy.$ (3)
The structure of this system is similar to the Lorenz system, but in the
standard Lorenz formulation $\hat{\varepsilon}=\hat{\eta}=\hat{\gamma}$. Here,
$\mathbf{X}=(x,y,z)$ denote the voltages across the capacitors $C_{1}$,
$C_{2}$ and $C_{3}$ in the circuit (Fig. 1(a)). The measured variables are $x$
and $z$, and $y$ is unmeasured or hidden. We denote the noisy measurements of
$x$ and $z$ by $x_{e}$ and $z_{e}$, respectively, and the measurement function
$\mathbf{h}((x,z))=(x_{e},z_{e})=\mathbf{Y}$. The experimental sampling rate
is $\Delta t_{e}=80$ ns resulting in $55,000$ time points. A low-pass filter
was applied to remove high-frequency measurement error [10]. We re-scaled the
experimental time by $\Delta t=\frac{\Delta t_{e}}{1.6\times
10^{-5}\,\textrm{s}}=0.005$ so that the right-hand-side terms of (1)-(3) are
around $\mathcal{O}(1)$. We trained our method with $N=501$ time points (Fig.
1(a)), at a sampling rate $2\Delta t=0.01$ (re-scaled). The attractor is
reasonably well sampled with 501 points (SI Appendix, Fig. 2), and we retain
the remaining data for validation.
We demonstrated model identification with hidden variables of the Lorenz-like
system ((1)-(3)) using DAHSI (Fig. 1). First, we constructed a model library
based on domain-knowledge. In this case we used monomials up to degree two in
three variables, representing $10^{9}$ possible models composed of subsets of
possible terms. From this library we generated a generic governing equation
for each variable via the linear combination of all the candidate functions
(Fig. 1(b1)). Our goal was to find a small subset of candidate functions
within the library which describe the dynamics in the data. We did not assume
that we knew the “correct” model complexity a priori, and searched for the set
of models which balance error and simplicity.
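As a concrete illustration, the sketch below builds such a degree-two monomial library in three variables and counts the resulting model space; it is a minimal Python example written for this exposition (the function names are ours, not part of the DAHSI code base [54]).

```python
import itertools
import numpy as np

def monomial_library(max_degree=2, var_names=("x", "y", "z")):
    """Exponent tuples and names of all monomials up to max_degree."""
    D = len(var_names)
    exponents, names = [], []
    for total in range(max_degree + 1):
        for exps in itertools.product(range(total + 1), repeat=D):
            if sum(exps) == total:
                exponents.append(exps)
                names.append("*".join(f"{v}^{e}" for v, e in zip(var_names, exps) if e) or "1")
    return exponents, names

def evaluate_library(X, exponents):
    """Evaluate each monomial at the rows of X (N x D) -> Theta (N x q)."""
    return np.stack([np.prod(X ** np.array(e), axis=1) for e in exponents], axis=1)

exponents, names = monomial_library()
q, D = len(exponents), 3                        # q = 10 candidate functions per equation
print(f"{2 ** (D * q):.1e} candidate models")   # ~1.1e9 subsets of the 30 terms
```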
To perform model selection, we minimised a cost function composed of the
measurement error, $A_{E}$, model error, $A_{M}$, and sparse penalty,
$\lambda\|\mathbf{p}\|_{1}$, as a function of the parameters $\mathbf{p}$ and
library of functions $\boldsymbol{\mathbf{\Theta}}$ (Fig. 1 (b), and Materials
and Methods section). The model error contains the coupling between variables,
taking advantage of the information about hidden variables in the time-history
of the measured variables. The measurement error only depends on the
measurements and measured variables estimated from the model. Model selection
is enabled through the sparse penalty, which determines which parameters,
$p_{k,j}$, will be active in the model and which will be zero.
To minimize the cost function, we used variational annealing (VA) [70], a
data-assimilation technique for non-convex parameter estimation in nonlinear,
chaotic systems. The problem is highly non-convex, with many local minima, due
to the incoherence between the data and the model [52, 1]. Decreasing the
information or measurements [38] and increasing the number of terms in the
library will both increase the number of local minima (SI Appendix, Fig. 1).
VA works by varying $R_{f}$, which sets the balance between model error and
measurement error (Fig. 1(b2)). When $R_{f}=0$, only the measurement error
contributes, leading to a convex cost function with an easy-to-find global
minimum. As the model is enforced by gradually increasing $R_{f}$, the
landscape increases in complexity and many local minima appear. By
initialising each search near the minimum found for the previous $R_{f}$, the
solution remains near the first minimum found. Varying $\lambda$ leads to
different model structures or candidate models. As the penalty strength
$\lambda$ increases, the global minimum moves to zero in a larger number of
parameters (Fig. 1(b2)). Because there are many local minima, we use
$N_{I}=500$ random initial guesses to fully explore the landscape.
Figure 1: DAHSI model selection for the Lorenz-like system. (a) Electrical
circuit from [10], training data of measured variables $x$ and $z$, and time-
delay embedding of test data ($\tau=0.02$). (b1) Model library and generic
governing equation for each variable. (b2) Cost function as model error weight
and the sparsity constraint vary. (c) Local minima with high cost (light
grey), low cost (dark grey) and Lorenz structure (blue) as function of
$\lambda$. (d) 25 low cost models are down-selected. (e) Model structure
identified near the Pareto front. (f) Time series, error, and relative AIC for
identified models (coloured), and higher error models (grey).
The sparse-variational annealing process generates 169 candidate models, which
must be further down-selected and validated to complete the model-selection
process. We down-selected to the 25 models (SI Appendix) with a cost function
value less than $10^{-3}$ (Fig. 1(c)). In our system there is a clear gap in
cost-function value at this threshold, but the criterion and gap size will be
system dependent. To ensure we have the best parameter fit for each down-
selected model we performed parameter estimation via VA without sparsity
constraint.
To validate the models, we needed to estimate an initial condition for the
hidden variable $y$, for which there is no experimental data. We used an 8th
order finite difference approximation of the time derivative of $x$ for each
model structure and solved the resulting algebraic equation for $y_{0}$ (SI
Appendix). We used the dynamic equation for $x$ since all down-selected models
contain $y$ but not any higher order $y$ terms. Estimation of the initial
condition for hidden variables is only possible after the candidate models are
found and must be done for the initial condition of each segment of validation
data. This procedure takes advantage of Takens’s theorem that the information
in $y$ is available in the time-delay of $x$.
Validation within the Lyapunov time ensures that the time-series do not
diverge due to the inherent sensitivity to differences in initial conditions
introduced by measurement and numerical error. All down-selected models have a
similar Lyapunov time around 0.9 time units. We considered $S=1083$ segments
of the experimental data (excluding the training set), each of length 1/4 of a
Lyapunov time to calculate the sum of the average error for each model (Fig.
1(d)). We discarded the first four points of each time segment as these points
were used to predict the initial condition for $y$. The average error for the
$s$-th time segment of the $m$-th model is defined as
$E^{s}_{av,m}=\frac{1}{2M}\sum_{i=1}^{M}\left[(x_{i,e}^{s}-x_{i}^{s})^{2}+(z_{i,e}^{s}-z_{i}^{s})^{2}\right]$,
where $x_{i,e}$ and $z_{i,e}$ are the $x$ and $z$ components of the
experimental data, respectively, and $i$ is the time index. The sum of all
average errors over the time segments $S$ of the $m$-th model is
$E_{av,m}=\sum_{s=1}^{S}E^{s}_{av,m}$.
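A minimal sketch of this validation loop, assuming hypothetical helpers `simulate` (integrates a candidate model) and `y0_estimator` (the finite-difference estimate of the hidden initial condition described above), might look as follows.

```python
import numpy as np

def segment_error(x_sim, z_sim, x_exp, z_exp):
    """E^s_av,m: average squared error over one validation segment."""
    M = len(x_exp)
    return ((x_exp - x_sim) ** 2 + (z_exp - z_sim) ** 2).sum() / (2 * M)

def total_error(model, segments, y0_estimator, simulate):
    """E_av,m: sum of segment errors over all S validation segments."""
    E = 0.0
    for x_exp, z_exp in segments:                # each segment: 1/4 Lyapunov time
        y0 = y0_estimator(model, x_exp)          # hidden initial condition (SI Appendix)
        x_sim, z_sim = simulate(model, x_exp[0], y0, z_exp[0], len(x_exp))
        # discard the first four points, which were used to estimate y0
        E += segment_error(x_sim[4:], z_sim[4:], x_exp[4:], z_exp[4:])
    return E
```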
The candidate models on the Pareto front (Fig. 1(d), and SI Appendix, Table
S1) best balance model complexity and error (Fig. 1(e)). We successfully
recovered the Lorenz-like structure derived by Blakely _et al._ [10], which
has the lowest average error of recovered models with 7 active terms. For
$\lambda=3.9$ the system presented in [10] is selected for 10.6% of the
$N_{I}=500$ randomly chosen initialisations. However, we have no guarantee that
this model is the "true model" for the circuit system. All models have a
similar manifold structure (Fig. 1(e)) and low error within a Lyapunov time
(Fig. 1(f)). We believe the main limitation of our prediction window is the
uncertainty introduced by the hidden variable into the parameter estimation
during VA. This uncertainty then propagates into the $y_{0}$ estimate required
for each validation data set and magnifies noise (SI Appendix, Figs. S4 and
S5). Given the difficulty in selecting between proposed chaotic models that
exhibit such similar behaviour [2, 3], the primary goal of DAHSI as a model
identification method is to generate possible models. However, we were able to
consistently identify a unique model (salmon with 11 terms, Fig. 1(f)) with
the most support using Akaike information criteria as done in [45] and
Bayesian information criteria (SI Appendix, Fig. 6), as well as identifying a
weakly supported model (gold with 10 terms, Fig. 1(f)). By generating multiple
models that lie near the Pareto front, DAHSI has effectively generated
hypotheses for additional terms, which could be tested with further
experimentation.
While DAHSI identified the same equation terms as Lorenz and the circuit
formulation from [10], the parameters fit through the final step of VA are not
the same. We compare the ability to predict the experimental data with the
classical Lorenz system, the circuit formulation from [10] and the DAHSI-
recovered models, each of which has a different number of free parameters
(Table 1). We perform parameter estimation via VA for each model and use the
validation data-set described above to calculate $E_{av}$, $\Delta$AIC, and
$\Delta$BIC. Although the average error is similar for the VA-estimated
circuit formulation and all DAHSI models, the DAHSI recovered models with 10
and 11 terms have substantially more $\Delta$AIC and $\Delta$BIC support. The
classical Lorenz parameter structure, which only has 4 free parameters, is
unable to capture the dynamics of the system. The parameters estimated via VA
for the circuit model with 6 free parameters perform much better than those
estimated from first principles [10]. Notably, the parameters estimated for
the 7-term DAHSI model are very close to the parameters estimated for the
original circuit model. Further experimentation is needed to determine if the
coefficients in the $\dot{x}$ equation should be equal, $p_{1,2}=p_{1,3}$, and
if the coefficient on the $y$ term in the $\dot{y}$ equation, $p_{2,3}$ should
be positive, negative, or zero (Fig. 1(e)). The additional terms suggested by
the 10 and 11 term DAHSI recovered models are strongly supported by the
AIC/BIC calculations, but would require further experimentation to
conclusively validate. They may represent parasitic resistances or other
physical effects which have a small but real impact on the circuit dynamics
and were neglected during the original derivation by Blakely et al. [10].
Recovery of the Lorenz-like model and identification of other models with
AIC/BIC support demonstrates that DAHSI can successfully identify parsimonious
models for chaotic systems.
Table 1: Parameter estimation for the classical Lorenz formulation (4 free
parameters, $p_{1,2}=p_{1,3}$, $p_{2,3}=p_{2,7}=-p_{3,6}$); the circuit
formulation in [10] (6 free parameters, $p_{1,2}=p_{1,3}$); and the DAHSI-
recovered models.
| | Term | Parameter | classical | circuit as in [10] | circuit estimated | DAHSI 7-terms | DAHSI 10-terms | DAHSI 11-terms |
|---|---|---|---|---|---|---|---|---|
| eq. $\dot{x}$ | $1$ | $p_{1,1}$ | – | – | – | – | – | $-0.2514$ |
| | $x$ | $p_{1,2}$ | $-29.7560$ | $-12.9032$ | $-16.5369$ | $-16.9554$ | $-17.0172$ | $-17.0582$ |
| | $y$ | $p_{1,3}$ | $29.7560$ | $12.9032$ | $16.5369$ | $18.7853$ | $19.9884$ | $19.9840$ |
| | $z$ | $p_{1,4}$ | – | – | – | – | $0.1596$ | $0.1833$ |
| eq. $\dot{y}$ | $x$ | $p_{2,2}$ | $68.5427$ | $54.2903$ | $28.0876$ | $24.3535$ | $22.6028$ | $22.6017$ |
| | $y$ | $p_{2,3}$ | $-12.8815$ | $-1.2903$ | $-0.0763$ | $0.2580$ | $0.3346$ | $0.3567$ |
| | $xy$ | $p_{2,6}$ | – | – | – | – | $-0.0906$ | $-0.0843$ |
| | $xz$ | $p_{2,7}$ | $-12.8815$ | $-14.2857$ | $-7.6252$ | $-6.7054$ | $-6.2507$ | $-6.2561$ |
| eq. $\dot{z}$ | $z$ | $p_{3,4}$ | $-3.4168$ | $-3.8259$ | $-3.6547$ | $-3.6835$ | $-3.6954$ | $-3.6966$ |
| | $xy$ | $p_{3,6}$ | $12.8815$ | $3.4843$ | $4.3315$ | $4.8273$ | $5.1412$ | $5.1292$ |
| | $xz$ | $p_{3,7}$ | – | – | – | – | $0.0903$ | $0.0791$ |
| $E_{av}$ | – | – | 2165 | 319 | 10.37 | 9.7441 | 9.0995 | 9.0345 |
| $\Delta$AIC | – | – | 5920 | 3852 | 139.315 | 73.887 | 5.758 | 0 |
| $\Delta$BIC | – | – | 5885 | 3827 | 114.377 | 53.937 | 0.77 | 0 |
### 2.2 Robustness study on the simulated Lorenz system
To study the robustness of our method to varying noise and manifold sampling
we used ground-truth time series simulated from the classic Lorenz system,
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=\sigma(y-x),$ (4)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=x(\rho-z)-y,$ (5)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=-\beta z+xy,$ (6)
where $\sigma=10$, $\rho=28$, and $\beta=8/3$. We numerically simulated the
system using a fourth-order Runge-Kutta scheme with time step $\Delta t=0.01$
and $N=501$ points, producing time-series similar to the experimental data
set. As in the
experimental data set, we considered $y$ to be the hidden variable. We studied
the recovery rates of DAHSI as a function of the VA tuning parameter,
$\alpha$, and found trends similar to previous work [55], (SI Appendix, Table
S3).
First, we studied the robustness of our method to measurement error modeled as
additive Gaussian noise of mean zero and varying standard deviation, $\omega$.
Therefore, the measurement function is
$\mathbf{h}(\mathbf{X})=\mathbf{X}+\mathcal{N}(0,\omega)$. We expect that
different noise instances, controlled by the random number generator seed,
will change our recovery rate due to random corruption of essential parts of
the data or overall poor manifold sampling.
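The data-generation step can be reproduced with the short sketch below (fourth-order Runge-Kutta, $\Delta t=0.01$, $N=501$, additive Gaussian noise); it is illustrative and not the code used for the study.

```python
import numpy as np

def lorenz(X, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = X
    return np.array([sigma * (y - x), x * (rho - z) - y, -beta * z + x * y])

def rk4(f, X0, dt=0.01, N=501):
    """Fourth-order Runge-Kutta integration of dX/dt = f(X)."""
    X = np.empty((N, len(X0)))
    X[0] = X0
    for i in range(N - 1):
        k1 = f(X[i])
        k2 = f(X[i] + 0.5 * dt * k1)
        k3 = f(X[i] + 0.5 * dt * k2)
        k4 = f(X[i] + dt * k3)
        X[i + 1] = X[i] + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return X

rng = np.random.default_rng(seed=0)        # the "noise seed" varied in the study
X = rk4(lorenz, X0=np.array([1.0, 1.0, 1.0]))
omega = 0.01                               # noise standard deviation
Y = X + rng.normal(0.0, omega, X.shape)    # measurement function h(X) = X + N(0, omega)
x_e, z_e = Y[:, 0], Y[:, 2]                # y (column 1) is treated as hidden
```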
We calculated recovery for three different standard deviations of noise with
20 noise seeds each and calculated the cumulative distribution function of the
recovery rate (Fig. 2(a)). The random noise seeds produced wide variation in
recovery rate, between 10-90% for the lowest noise, indicating that the
minimal data set used here is not very robust. As the noise strength
increased, the cumulative distributions shifted left as more seeds had lower
recovery. Setting $\omega=0.01$ produced a bimodal distribution, with either a
high recovery rate (> 80%) (the majority of simulations) or a low recovery
rate (< 15%). For $\omega=0.05$ there were some seeds with intermediate
recovery rates, more with low recovery rates, and a few seeds with a very high
recovery rate. The noise level dramatically affected the recovery rate for
$\omega=0.1$. The vast majority of simulations led to less than 10% recovery.
More than half had 0% recovery, and only one had higher than 85% recovery.
Next, we investigated how manifold sampling affected the recovery rate of our
system. We chose three different noise seeds and varied the number of time
points $N$ by increasing the length of the time-series (Fig. 2(b)). Varying
the length of the time-series changed the sampling of the manifold,
demonstrating that sampling lobe transitions is crucial for accurate model
recovery. For one seed (light blue line) the recovery was high for $N=501$
through $N=401$. The recovery dropped sharply, to $\approx 60$% and then
$\approx 15$%, when the data set lost a lobe crossing in the attractor, as
happens at $N=351$ and $N=301$, respectively. Sharp drops in another seed
(dark blue line) also occurred when the sampling of the crossing between lobes
is reduced at $N=460$ and $301$. Decreased sampling of each lobe did not
appear to have as dramatic an effect ($N=401$ to $460$). The increase of
recovery rate for the dark-blue noise instance at $N=351$ suggests that
optimal sampling requires some nontrivial balance of different dynamic
regions. The specific corruption of each noise instance had a large impact on
how many crossings were needed for high recovery; indeed, recovery was
consistently high for one seed (cyan). These results suggest that optimal
manifold sampling to counter noise corruption would vastly improve DAHSI
performance on data sets with high noise.
Figure 2: Robustness to noise and manifold sampling. (a) Cumulative
distribution function of the recovery rate for three noise levels and 20 noise
seeds. $\omega=0.01$ (light grey); $\omega=0.05$ (dark grey); $\omega=0.1$
(black). (b) Recovery rate on eight different manifold sizes, for three
different noise seeds (colors).
## 3 Discussion
In this paper we have presented DAHSI, a method to identify non-linear
dynamical systems from data with hidden variables. DAHSI combines variational
annealing, a data assimilation technique, with sparse thresholding. We applied
DAHSI to an experimental data set from a circuit that exhibits Lorenz-like
dynamics [10]. The outcome is a set of candidate models, including a model
with the same Lorenz-like structure derived by Blakely _et al._ from circuit
equations [10]. Two additional parsimonious models with strong support based
on AIC/BIC-based validation were also identified. The unanticipated terms
suggested by these models may represent real physical processes in the
circuit, such as parasitic resistances or other factors not included in the
idealized model derivation. Through this example, we demonstrated that DAHSI
works as an effective tool for generating models and functional hypotheses
from data.
To analyze recovery and the effects of noise and manifold sampling in a system
where we know the ground truth, we studied the performance of DAHSI applied to
simulated time-series from the classical Lorenz system. Notably, we
successfully selected the ground truth model as most likely from those
generated by DAHSI using information-criteria based validation techniques (SI
Appendix. Fig. 3). Our noise studies showed recovery rates of 80% for
$\sim\mathcal{N}(0,0.01)$ and 10% or lower for $\sim\mathcal{N}(0,0.1)$.
Therefore we anticipate that the current formulation of DAHSI will have
reasonable recovery rates for noise levels $<10\%$ of the signal value.
Further robustness to noise could be achieved through integral formulations
similar to those used for sparse regression, rather than the discretized
mapping between time-points used here [57, 51, 47]. Manifold sampling impacts
recovery and we conclude that recovery is especially sensitive to sampling at
the saddle point transition between the lobes. Moreover, the noise seed used
to generate the synthetic data impacts the recovery and we suspect this is due
to random corruption of measurements from different regions of the manifold.
For chaotic systems, increasing the time of experiment will eventually ensure
robust sampling of the manifold. However, the computational time of DAHSI
scales with the length of the input time-series [31]. Therefore, we anticipate
that short bursts of time-series designed to optimally sample the manifold
would provide optimal sampling and computational efficiency. Further metrics
for analyzing the information content of our data and minimal data-
requirements for recovering models [35] would lead to optimal manifold
sampling.
One of the main benefits of a sparse model selection framework is that we
identify likely model structures while avoiding combinatorial testing and
validation of all potential models. For example, the number possible models
described three variables with monomials up to degree two is approximately
$10^{9}$. Doing parameter estimation on each of these models and validating
would be computationally intractable, taking at least $10^{5}$ processor-days
with our setup. For comparison, our entire model selection and validation
process took just over a day of computational time. Running one initialisation
of the problem and sweeping through $\lambda=2.5:0.1:5.5$ with $N=501$ (as
done in Example 2.1) took 4 hours. We parallelized simulations using
Northwestern’s High Performance Computing Cluster Quest, running about 100
simulations at a time, leading to a total computational time of roughly 20
hours. Performing parameter estimation without thresholding on a single model
takes between 15 seconds and 15 minutes, depending on model structure.
Parameter estimation on the 25 down-selected models took 5 hours with our
setup. Time estimates are for an Intel(R) Xeon(R) CPU E5-2680 v4 @ 2.40GHz
processor.
In order to understand the impact of library size on a call to IPOPT, the
optimiser used in DAHSI, we tested model libraries with 7, 10, 13, 16, 19, and
30 terms (SI Appendix, Fig. 7). The computational time does not scale
monotonically with library size. Instead, we find that a library with 10 terms
can take 100 times longer to run than the library of 30 monomials. We suspect
that the variation in optimization time depends on correlations between
library functions [43], model symmetries, and other structural features.
In addition to the chaotic systems presented in the results, we have applied
DAHSI to two non-chaotic systems: time-series data from a Lotka-Volterra-like
system with no hidden variables and simulated time-series for a mass-action
kinetics system with hidden variables. Although DAHSI recovered reasonable
models for both systems, there are several caveats. Recovery of the
Lotka-Volterra system required an iterative formulation (SI Appendix, Fig. 15).
also compared DAHSI to SINDy [16] for the Lotka-Volterra system and found that
SINDy was far superior in speed when all variables are observed. Recall that a
comparison between DAHSI and SINDy is not possible for Lorenz-like circuit
system, as SINDy requires access to the unmeasured $y$ variable. The mass
action kinetic system modeled a semiconductor with two trap levels differing
by one electronic unit of charge (SI Appendix). The recovery rate for the
ground truth model was low, around 3%. Unlike chaotic systems, which are
highly non-convex, the mass-action kinetic system has a very flat cost
function due to structural parameter identifiability issues (SI Appendix,
Figs. S11-S14) [7, 29, 46, 25]. Gradient-based algorithms such as IPOPT are
known to perform poorly on flat cost functions, so switching to an optimiser
designed for such landscapes [40] may improve recovery. Other data-
assimilation methods for parameter estimation with hidden variables such as
3D-Var, 4D-Var, Kalman filtering, and hybrid methods [8] may be more cost-
effective if VA is unnecessary to navigate to the global minimum of a highly
non-convex function.
The formulation of cost function and sparsity constraint also likely impacts
recovery. Different methods for sparse model-selection include stepwise and
all-subsets regression, ridge regression [36], LASSO [68], least angle
regression [24], and SR3 [73]. SR3 accelerates convergence and has been shown
to outperform the other methods, but it introduces an extra tuning parameter.
The parameter path for the first four methods is shown to be
different in [34] and therefore, we expect that different regularisation
methods will lead to different model identification. Comparison between
different sparsity-enforcement mechanisms within DAHSI framework could improve
recovery but may be somewhat system dependent.
We anticipate many future applications and extensions of DAHSI. The framework
for DAHSI does not have any intrinsic restrictions about the functional form
of the equations; in particular, the function library need not be linear in the
unknown parameters. Variational annealing is designed to handle stochasticity
through the model error. In addition, data assimilation is commonly used for
PDE systems, including PDE discovery [19]. Therefore, we anticipate we can
apply or extend our framework to broader applications, without reformulation
as was needed in sparse-regression based frameworks for rational functions
[44], stochastic systems [14], and PDEs [56, 39]. Modifications to the
optimization methodology and further investigation of optimal data-sampling
strategies could improve the computational efficiency of DAHSI, opening up
higher dimensional problems to model selection with hidden variables.
## 4 Methods: Mathematical formulation of cost function and algorithm
The dynamics of many physical systems can be described by models with only a
few terms. Our goal is to retrieve the sparse system representation of these
types of systems given the measurements of some, but not all, of the state
variables. We consider a dynamical system with unknown governing equations
$\frac{\mathop{}\\!\mathrm{d}\mathbf{X}}{\mathop{}\\!\mathrm{d}t}=\mathbf{F}(\mathbf{X}(t),\mathbf{p}),$
(7)
where $\mathbf{X}=(x_{1},x_{2},\dots,x_{D})\in\mathbb{R}^{D}$ are the state
variables, $\mathbf{F}=(F_{1},\,F_{2},\dots,F_{D})$ are the unknown functions
that govern the dynamics of the system and $\mathbf{p}$ is a set of unknown
parameters.
For a system with hidden variables, the measurements
$\mathbf{Y}=(y_{1},y_{2},\dots,y_{L})\in\mathbb{R}^{L}$ are of lower
dimension, $L\leq D$, than the underlying variables. The measurement function
$\mathbf{h}(\mathbf{X})=\mathbf{Y}$ is a known transformation of a subset of
the state variables in (7). In principle, the measurement function could map
some combination of state variables to a lower dimension, as in
$\mathbf{h}(\mathbf{X})=x_{1}+x_{2}$. In this work we assume $\mathbf{h}$
captures Gaussian experimental noise such that
$\mathbf{Y}=\mathbf{X}+\mathcal{N}(0,\omega)$. The measurements are taken at
$N$ equally spaced points in time between $[t_{1},\,t_{N}]$.
The function capturing the nonlinear dynamics of each state variable, $F_{k}$,
is assumed to be sparse in function space as has been done previously [33,
16]. Given a library of possible functions
$\boldsymbol{\mathbf{\Theta}}=(\theta_{1},\theta_{2},\dots,\theta_{q})$, we
can write a candidate function $\hat{F}_{k}$ as
$\hat{F}_{k}\coloneqq\hat{F}_{k}(\mathbf{X},\mathbf{p})=p_{k,1}\theta_{1}(\mathbf{X})+p_{k,2}\theta_{2}(\mathbf{X})+\cdots+p_{k,q}\theta_{q}(\mathbf{X}),$
(8)
for $k=1,2,\dots,D$. There is no inherent restriction that the functions be
linearly additive. The set of $p_{k,j}$ defines the vector
$\mathbf{p}\in\mathbb{R}^{P}$, where $P=Dq$ is the total number of unknown
parameters.
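For concreteness, a right-hand side of the form (8) can be evaluated as in the following minimal sketch, where `P` is the $(D\times q)$ parameter matrix and `exponents` lists one exponent tuple per library monomial; the helper is ours, written for illustration.

```python
import numpy as np

def F_hat(X, P, exponents):
    """Eq. (8): F_hat_k(X, p) = sum_j P[k, j] * theta_j(X), for k = 1, ..., D."""
    theta = np.array([np.prod(X ** np.array(e)) for e in exponents])  # (q,) library values
    return P @ theta                                                  # (D,) time derivatives
```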
We want to estimate the unknown parameters $p_{k,j}$ and all state variables
$\mathbf{X}$ using only the measurements $\mathbf{Y}$ with the constraint that
$\mathbf{p}$ is sparse. This is equivalent to minimising the negative log
likelihood
$A(\mathbf{X},\mathbf{p})=\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{X}(t_{i})-\mathbf{Y}(t_{i})\|^{2}+\frac{R_{f}}{N}\sum_{i=1}^{N-1}\|\mathbf{X}(t_{i+1})-\mathbf{f}(\mathbf{X}(t_{i}),\mathbf{p},\mathbf{\hat{F}})\|^{2}+\lambda\|\mathbf{p}\|_{1}.$
(9)
Here,
$\mathbf{f}(\mathbf{X}(t_{i}),\mathbf{p},\mathbf{\hat{F}})=\mathbf{X}(t_{i+1})$
defines the discrete-time model dynamics and is obtained by discretising (7)
using a Hermite-Simpson collocation. We note that if $\lambda=0$ in (9) we
obtain the cost function used in VA. Following the statistical derivation in
[66, 26, 1], the experimental error,
$A_{E}(\mathbf{X},\mathbf{Y})=\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{X}(t_{i})-\mathbf{Y}(t_{i})\|^{2}$,
assumes Gaussian noise and the model error,
$A_{M}(\mathbf{X},\mathbf{p},\mathbf{\hat{F}})=\frac{1}{N}\sum_{i=1}^{N-1}\|\mathbf{X}(t_{i+1})-\mathbf{f}(\mathbf{X}(t_{i}),\mathbf{p},\mathbf{\hat{F}})\|^{2}$,
assumes a relaxed delta function. We assume that the state at time $t_{i+1}$
depends only on the state at $t_{i}$. We assume that each element in
$\mathbf{p}$ follows a Laplace distribution with _mean_ $0$ (SI Appendix). The
details and necessary background to minimise (9) are presented in the
following sections.
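A simplified transcription of (9) is sketched below. For readability it takes a generic one-step map `step` in place of the Hermite-Simpson collocation we actually use, so it illustrates the structure of the cost rather than our exact discretisation.

```python
import numpy as np

def action(X, p, Y, measured, Rf, lam, step):
    """A = A_E + Rf * A_M + lam * ||p||_1, cf. Eq. (9).

    X (N x D): estimated states; Y (N x L): measurements;
    measured: indices of the measured components; step: one-step model map.
    """
    N = X.shape[0]
    A_E = np.sum((X[:, measured] - Y) ** 2) / N             # measurement error
    pred = np.array([step(X[i], p) for i in range(N - 1)])  # model prediction of X[i+1]
    A_M = np.sum((X[1:] - pred) ** 2) / N                   # model error
    return A_E + Rf * A_M + lam * np.abs(p).sum()           # plus sparse penalty
```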
### 4.1 DAHSI: Data Assimilation for Hidden Sparse Inference
Our algorithm, Data Assimilation for Hidden Sparse Inference (DAHSI), performs
model identification for chaotic systems from data with hidden variables. It
combines the data assimilation technique VA with sparse thresholding (Fig.
3(a)). The code base for DAHSI can be found at [54].
As the desired model complexity is unknown ahead of time, DAHSI sweeps through
different hard-threshold values, $\lambda$. For each $\lambda$, the cost
function (9), is minimized by iterating between VA [70, 71] and hard-
thresholding of the parameters. We chose the iterative framework over direct
incorporation of the $\ell_{1}$ penalty into the minimized cost function,
based on results showing that least squares with thresholding converges
locally, often outperforming convex variants [73, 20], and recent
demonstrations that LASSO makes mistakes early in the recovery pathway [62].
At each VA step, we minimize $A_{E}+R_{f}A_{M}$, which is 4DVar in its "weak"
formulation [66, 26], over $\mathbf{X}$ and $\mathbf{p}$ given $R_{f}$ using
IPOPT, an optimisation package that implements an interior-point method [69]. The
state variables $\mathbf{X}^{\text{ini}}$ are initialized as $\mathbf{Y}$ for
the measured states and random values from a uniform distribution within
specified bounds for the unmeasured states. Since we expect the parameter
vector $\mathbf{p}$ to be sparse, it is initialized as
$\mathbf{p^{\text{ini}}}=0$.
Initially $R_{f}$ takes some small value $R_{f,0}=\epsilon$, as $R_{f}=0$
would lead to an unconstrained solution on the unmeasured states and
$\mathbf{p}$. At each step $\beta=0,1,2,\dots,\beta_{\max}$ of VA, $R_{f}$ is
updated to $R_{f}=R_{f,0}\alpha^{\beta}$, for $\alpha>1$. After each step
$\beta$ of VA, we enforce sparsity by applying a hard threshold, $\lambda$, to
$\mathbf{p}^{(\beta)}$. The solution,
$\\{\mathbf{X}^{(\beta)},\mathbf{p}^{(\beta)}\\}$, at each step of the VA
process is used as the initialization for the next step. We choose
$\beta_{\max}$ so that the cost function plateaus, Fig. 3(b), and our final
solution is $\\{\mathbf{X}^{\text{fin}},\mathbf{p}^{\text{fin}}\\}$. Because
there are many local minima, we run $N_{I}$ different initial guesses to fully
explore the landscape of $A_{E}+R_{f}A_{M}$. It is important to note that the
same $\lambda$ yields multiple models due to the $N_{I}$ different
initializations of the unmeasured states. For example, if we consider
$N_{I}=500$ with a fixed $\lambda=3.9$ in our Example 2.1, we find a total of
20 models (Fig. 3(b)).
To produce candidate models with varying sparsity, the entire $\beta$ sweep
with VA and thresholding is repeated for each $\lambda$. As with other model
identification methods, different $\lambda$ will yield different models (for
the same initialisation of unmeasured states). For one particular
initialisation in Example 2.1, with $\lambda=3.8$ the term $z$ is selected in
the first equation of the system. With larger $\lambda=3.9$, the term $z$ is
no longer selected (Fig. 3(c)). Although the same $\lambda$ yields multiple
models due to the difference of the initial choice of unmeasured states, as we
would expect, higher values of $\lambda$ produce models with fewer active
terms (Fig. 3(d)).
Figure 3: DAHSI Algorithm. (a) Schematic of Algorithm 1. (b) Action paths as
function of $\beta$ for $N_{I}=500$ and $\lambda=3.9$ (left). Final action
values (right) for high (light grey) and low (dark grey) action values; and
the Lorenz-like structure (blue). (c) Parameter $p_{1,4}$ in the last steps of
VA for $\lambda=3.8$ and $3.9$. (d) Model complexity as a function of $\lambda$.
Algorithm 1 DAHSI Algorithm.
1:  procedure DAHSI
2:      Input: measurements $\mathbf{Y}$, generic model library $\mathbf{\Theta}$, $\lambda_{\max}$, $\beta_{\max}$, $\alpha$
3:      Calculate discrete function $\mathbf{\hat{F}}$ from $\mathbf{\Theta}$
4:      for $l=1:L$ do
5:          $x_{l}=y_{l}$ $\triangleright$ Fit measured states to the data
6:      Randomly initialise the unobserved variables $\{x_{L+1},\dots,x_{D}\}$
7:      $\mathbf{X}^{\text{ini}}=\{x_{1},x_{2},\dots,x_{L},x_{L+1},\dots,x_{D}\}$
8:      Initialise $\mathbf{p}^{\text{ini}}=0$ $\triangleright$ Force sparsity
9:      Assemble the pair $\{\mathbf{X}^{\text{ini}},\mathbf{p}^{\text{ini}}\}$
10:     $R_{f,0}=\epsilon$
11:     while $\lambda<\lambda_{\max}$ do
12:         for $\beta=0:\beta_{\max}$ do $\triangleright$ Variational annealing
13:             $R_{f}=R_{f,0}\alpha^{\beta}$
14:             $\{\mathbf{X}^{(\beta)},\mathbf{p}^{(\beta)}\}=\min_{\mathbf{X},\mathbf{p}}A_{E}(\mathbf{X},\mathbf{Y})+R_{f}A_{M}(\mathbf{X},\mathbf{p},\mathbf{\hat{F}})$ $\triangleright$ Minimize via IPOPT
15:             if $|p^{(\beta)}_{k,j}|<\lambda$ then $\triangleright$ Hard-threshold $\mathbf{p}$
16:                 $p^{(\beta)}_{k,j}=0$
17:         model${}^{(\lambda)}\leftarrow\mathbf{p}^{(\beta)}$ $\triangleright$ Store model
18:         $\lambda=2\lambda$ $\triangleright$ Increase $\lambda$
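The Python sketch below mirrors the structure of Algorithm 1. The function `minimize` stands in for the IPOPT call on line 14, and the bounds, dimensions, and $R_{f}$ schedule are illustrative defaults rather than the values used in our runs.

```python
import numpy as np

def dahsi_sweep(Y, measured, minimize, lambdas, N_states=3, N_params=30,
                Rf0=1e-2, alpha=1.5, beta_max=30, seed=0):
    """Sketch of Algorithm 1: anneal Rf and hard-threshold p for each lambda.

    minimize(X, p, Rf) is a placeholder for the IPOPT minimisation of
    A_E + Rf * A_M; it returns the local minimiser found from (X, p).
    """
    rng = np.random.default_rng(seed)
    N = Y.shape[0]
    X0 = rng.uniform(-10.0, 10.0, (N, N_states))  # random guess for hidden states
    X0[:, measured] = Y                           # measured states start at the data
    models = {}
    for lam in lambdas:
        X, p = X0.copy(), np.zeros(N_params)      # sparse initialisation p = 0
        for beta in range(beta_max + 1):
            Rf = Rf0 * alpha ** beta              # anneal the model-error weight
            X, p = minimize(X, p, Rf)             # line 14: IPOPT stand-in
            p[np.abs(p) < lam] = 0.0              # lines 15-16: hard threshold
        models[lam] = p                           # line 17: store the model
    return models
```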
## References
* [1] H. Abarbanel, Predicting the future: completing models of observed complex systems, Springer, 2013.
* [2] L. A. Aguirre and S. Billings, Validating identified nonlinear models with chaotic dynamics, International Journal of Bifurcation and Chaos, 4 (1994), pp. 109–125.
* [3] L. A. Aguirre and C. Letellier, Modeling nonlinear dynamics and chaos: a review, Mathematical Problems in Engineering, 2009 (2009).
* [4] A. A. R. AlMomani, J. Sun, and E. Bollt, How entropic regression beats the outliers problem in nonlinear system identification, Chaos: An Interdisciplinary Journal of Nonlinear Science, 30 (2020), p. 013107.
* [5] N. S. Altman, An introduction to kernel and nearest-neighbor nonparametric regression, The American Statistician, 46 (1992), pp. 175–185.
* [6] P. Amil, M. C. Soriano, and C. Masoller, Machine learning algorithms for predicting the amplitude of chaotic laser pulses, Chaos: An Interdisciplinary Journal of Nonlinear Science, 29 (2019), p. 113111.
* [7] J. F. Apgar, D. K. Witmer, F. M. White, and B. Tidor, Sloppy models, parameter uncertainty, and the role of experimental design, Molecular BioSystems, 6 (2010), pp. 1890–1900.
* [8] R. N. Bannister, A review of operational methods of variational and ensemble-variational data assimilation, Quarterly Journal of the Royal Meteorological Society, 143 (2017), pp. 607–633.
* [9] B. P. Bezruchko, D. A. Smirnov, and I. V. Sysoev, Identification of chaotic systems with hidden variables (modified bock’s algorithm), Chaos, Solitons & Fractals, 29 (2006), pp. 82–90.
* [10] J. N. Blakely, M. B. Eskridge, and N. J. Corron, A simple lorenz circuit and its radio frequency implementation, Chaos: An Interdisciplinary Journal of Nonlinear Science, 17 (2007), p. 023112.
* [11] S. Boccaletti, The control of chaos: theory and applications, Physics Reports, 329 (2000), pp. 103–197.
* [12] M. Bocquet, J. Brajard, A. Carrassi, and L. Bertino, Data assimilation as a learning tool to infer ordinary differential equation representations of dynamical models, Nonlinear Processes in Geophysics, 26 (2019), pp. 143–162.
* [13] M. Bocquet, J. Brajard, A. Carrassi, and L. Bertino, Bayesian inference of chaotic dynamics by merging data assimilation, machine learning and expectation-maximization, Foundations of Data Science, 2 (2020), pp. 55–80.
* [14] L. Boninsegna, F. Nüske, and C. Clementi, Sparse learning of stochastic dynamical equations, The Journal of chemical physics, 148 (2018), p. 241723.
* [15] J. Brajard, A. Carassi, M. Bocquet, and L. Bertino, Combining data assimilation and machine learning to emulate a dynamical model from sparse and noisy observations: a case study with the lorenz 96 model, arXiv preprint arXiv:2001.01520, (2020).
* [16] S. L. Brunton, J. L. Proctor, and J. N. Kutz, Discovering governing equations from data by sparse identification of nonlinear dynamical systems, Proceedings of the National Academy of Sciences, 113 (2016), pp. 3932–3937.
* [17] E. J. Candès and M. B. Wakin, An introduction to compressive sampling, IEEE signal processing magazine, 25 (2008), pp. 21–30.
* [18] K. Champion, B. Lusch, J. N. Kutz, and S. L. Brunton, Data-driven discovery of coordinates and governing equations, Proceedings of the National Academy of Sciences, 116 (2019), pp. 22445–22451.
* [19] H. Chang and D. Zhang, Identification of physical processes via combined data-driven and data-assimilation methods, Journal of Computational Physics, 393 (2019), pp. 337–350.
* [20] R. Chartrand and V. Staneva, Restricted isometry properties and nonconvex compressive sensing, Inverse Problems, 24 (2008), p. 035020.
* [21] J. P. Crutchfield and B. McNamara, Equations of motion from a data series, Complex systems, 1 (1987), p. 121.
* [22] B. C. Daniels and I. Nemenman, Automated adaptive inference of phenomenological dynamical models, Nature communications, 6 (2015), p. 8133.
* [23] B. C. Daniels, W. S. Ryu, and I. Nemenman, Automated, predictive, and interpretable inference of caenorhabditis elegans escape dynamics, Proceedings of the National Academy of Sciences, 116 (2019), pp. 7226–7231.
* [24] B. Efron, T. Hastie, I. Johnstone, R. Tibshirani, et al., Least angle regression, The Annals of statistics, 32 (2004), pp. 407–499.
* [25] M. C. Eisenberg and M. A. Hayashi, Determining identifiable parameter combinations using subset profiling, Mathematical biosciences, 256 (2014), pp. 116–126.
* [26] G. Evensen, Data assimilation: the ensemble Kalman filter, Springer Science & Business Media, 2009.
* [27] H. Fan, J. Jiang, C. Zhang, X. Wang, and Y.-C. Lai, Long-term prediction of chaotic systems with machine learning, Physical Review Research, 2 (2020), p. 012080.
* [28] A. M. Fraser and H. L. Swinney, Independent coordinates for strange attractors from mutual information, Physical Review A, 33 (1986), pp. 1134–1140.
* [29] A. Gábor, A. F. Villaverde, and J. R. Banga, Parameter identifiability analysis and visualization in large-scale kinetic models of biosystems, BMC systems biology, 11 (2017), pp. 1–16.
* [30] L. Gardini, C. Grebogi, and S. Lenci, Chaos theory and applications: a retrospective on lessons learned and missed or new opportunities, Nonlinear Dynamics, 102 (2020), pp. 643–644.
* [31] J. Gondzio, Interior point methods 25 years later, European Journal of Operational Research, 218 (2012), pp. 587–601.
* [32] M. P. Hassell, H. N. Comins, and R. M. Mayt, Spatial structure and chaos in insect population dynamics, Nature, 353 (1991), pp. 255–258.
* [33] T. Hastie, R. Tibshirani, and J. Friedman, The elements of statistical learning: data mining, inference, and prediction, Springer Science & Business Media, 2009.
* [34] T. Hesterberg, N. H. Choi, L. Meier, C. Fraley, et al., Least angle and l1 penalized regression: A review, Statistics Surveys, 2 (2008), pp. 61–93.
* [35] L. S. T. Ho, H. Schaeffer, G. Tran, and R. Ward, Recovery guarantees for polynomial coefficients from weakly dependent data with outliers, Journal of Approximation Theory, 259 (2020), p. 105472.
* [36] A. E. Hoerl and R. W. Kennard, Ridge regression: applications to nonorthogonal problems, Technometrics, 12 (1970), pp. 69–82.
* [37] W. Ji and S. Deng, Autonomous discovery of unknown reaction pathways from data by chemical reaction neural network, arXiv preprint arXiv:2002.09062, (2020).
* [38] N. Kadakia, The Dynamics of Nonlinear Inference, PhD thesis, UC San Diego, 2017.
* [39] S. H. Kang, W. Liao, and Y. Liu, Ident: Identifying differential equations with numerical time evolution, arXiv preprint arXiv:1904.03538, (2019).
* [40] V. Kantabutra and E. Zheleva, Gradient descent with fast gliding over flat regions: a first report, in IEEE 2002 28th Annual Conference of the Industrial Electronics Society. IECON 02, IEEE.
* [41] M. B. Kennel, R. Brown, and H. D. I. Abarbanel, Determining embedding dimension for phase-space reconstruction using a geometrical construction, Physical Review A, 45 (1992), pp. 3403–3411.
* [42] E. N. Lorenz, Deterministic nonperiodic flow, Journal of the atmospheric sciences, 20 (1963), pp. 130–141.
* [43] N. M. Mangan, T. Askham, S. L. Brunton, J. N. Kutz, and J. L. Proctor, Model selection for hybrid dynamical systems via sparse regression, Proceedings of the Royal Society A, 475 (2019), p. 20180534.
* [44] N. M. Mangan, S. L. Brunton, J. L. Proctor, and J. N. Kutz, Inferring biological networks by sparse identification of nonlinear dynamics, IEEE Transactions on Molecular, Biological and Multi-Scale Communications, 2 (2016), pp. 52–63.
* [45] N. M. Mangan, J. N. Kutz, S. L. Brunton, and J. L. Proctor, Model selection for dynamical systems via sparse regression and information criteria, Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences, 473 (2017), p. 20170009.
* [46] N. Meshkat, M. Eisenberg, and J. J. DiStefano III, An algorithm for finding globally identifiable parameter combinations of nonlinear ode models using gröbner bases, Mathematical biosciences, 222 (2009), pp. 61–72.
* [47] D. A. Messenger and D. M. Bortz, Weak sindy for partial differential equations, arXiv preprint arXiv:2007.02848, (2020).
* [48] S. Mukherjee, E. Osuna, and F. Girosi, Nonlinear prediction of chaotic time series using support vector machines, in Neural Networks for Signal Processing VII. Proceedings of the 1997 IEEE Signal Processing Society Workshop, IEEE, 1997, pp. 511–520.
* [49] E. P. Odum and G. W. Barrett, Fundamentals of ecology, vol. 3, Saunders Philadelphia, 1971.
* [50] N. H. Packard, J. P. Crutchfield, J. D. Farmer, and R. S. Shaw, Geometry from a time series, Physical Review Letters, 45 (1980), pp. 712–716.
* [51] Y. Pantazis and I. Tsamardinos, A unified approach for sparse dynamical system inference from temporal measurements, Bioinformatics, 35 (2018), pp. 3387–3396.
* [52] L. M. Pecora and T. L. Carroll, Synchronization in chaotic systems, Physical Review Letters, 64 (1990), pp. 821–824.
* [53] M. Rabinovich and H. Abarbanel, The role of chaos in neural systems, Neuroscience, 87 (1998), pp. 5–14.
* [54] H. Ribera, DAHSI code base. https://github.com/hribera/DAHSI, 2021.
* [55] P. J. Rozdeba, Nonlinear Inference in Partially Observed Physical Systems and Deep Neural Networks, PhD thesis, UC San Diego, 2018.
* [56] S. H. Rudy, S. L. Brunton, J. L. Proctor, and J. N. Kutz, Data-driven discovery of partial differential equations, Science Advances, 3 (2017), p. e1602614.
* [57] H. Schaeffer and S. G. McCalla, Sparse model selection via integral terms, Physical Review E, 96 (2017).
* [58] M. Schmidt and H. Lipson, Distilling free-form natural laws from experimental data, science, 324 (2009), pp. 81–85.
* [59] B. Sivakumar, Chaos theory in hydrology: important issues and interpretations, Journal of hydrology, 227 (2000), pp. 1–20.
* [60] J. Slingo and T. Palmer, Uncertainty in weather and climate prediction, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 369 (2011), pp. 4751–4767.
* [61] A. Somacal, L. Boechi, M. Jonckheere, V. Lefieux, D. Picard, and E. Smucler, Uncovering differential equations from data with hidden variables, arXiv preprint arXiv:2002.02250, (2020).
* [62] W. Su, M. Bogdan, and E. Candès, False discoveries occur early on the lasso path, The Annals of Statistics, 45 (2017).
* [63] G. Sugihara, R. May, H. Ye, C.-h. Hsieh, E. Deyle, M. Fogarty, and S. Munch, Detecting causality in complex ecosystems, science, 338 (2012), pp. 496–500.
* [64] G. Sugihara and R. M. May, Nonlinear forecasting as a way of distinguishing chaos from measurement error in time series, Nature, 344 (1990), pp. 734–741.
* [65] F. Takens, Detecting strange attractors in turbulence, in Dynamical systems and turbulence, Warwick 1980, Springer, 1981, pp. 366–381.
* [66] O. Talagrand and P. Courtier, Variational assimilation of meteorological observations with the adjoint vorticity equation. i: Theory, Quarterly Journal of the Royal Meteorological Society, 113 (1987), pp. 1311–1328.
* [67] Y. Tang, J. Kurths, W. Lin, E. Ott, and L. Kocarev, Introduction to focus issue: When machine learning meets complex systems: Networks, chaos, and nonlinear dynamics, Chaos: An Interdisciplinary Journal of Nonlinear Science, 30 (2020), p. 063151.
* [68] R. Tibshirani, Regression shrinkage and selection via the lasso, Journal of the Royal Statistical Society: Series B (Methodological), 58 (1996), pp. 267–288.
* [69] A. Wächter, An interior point algorithm for large-scale nonlinear optimization with applications in process engineering, PhD thesis, PhD thesis, Carnegie Mellon University, 2002.
* [70] J. Ye, N. Kadakia, P. Rozdeba, H. Abarbanel, and J. Quinn, Improved variational methods in statistical data assimilation., Nonlinear Processes in Geophysics, 22 (2015).
* [71] J. Ye, D. Rey, N. Kadakia, M. Eldridge, U. I. Morone, P. Rozdeba, H. D. Abarbanel, and J. C. Quinn, Systematic variational method for statistical nonlinear state and parameter estimation, Physical Review E, 92 (2015), p. 052901.
* [72] K. Yeo, Model-free prediction of noisy chaotic time series by deep learning, arXiv preprint arXiv:1710.01693, (2017).
* [73] P. Zheng, T. Askham, S. L. Brunton, J. N. Kutz, and A. Y. Aravkin, A unified framework for sparse relaxed regularized regression: Sr3, IEEE Access, 7 (2018), pp. 1404–1423.
Supplementary Information for
Model selection of chaotic systems from data with hidden variables using
sparse data assimilation
H. Ribera, S. Shirman, A. V. Nguyen and N. M. Mangan
###### Contents
1. 1 Introduction
2. 2 Results: Model selection for chaotic systems
1. 2.1 Identification of models for the Lorenz circuit from experimental data
2. 2.2 Robustness study on the simulated Lorenz system
3. 3 Discussion
4. 4 Methods: Mathematical formulation of cost function and algorithm
1. 4.1 DAHSI: Data Assimilation for Hidden Sparse Inference
5. S1 Cost function analysis
6. S2 Time-delay embedding of used training data
7. S3 AIC calculation for synthetic data
1. S3.1 Initial condition choice for unmeasured y
2. S3.2 Prediction window
8. S4 Down-selected models
1. S4.1 Models identified in the Pareto front edge
2. S4.2 AIC and BIC on the 25 down-selected models
9. S5 Action derivation
10. S6 Computational time
11. S7 Semiconductor
1. S7.1 1 hidden variable
2. S7.2 Parameter identifiability
12. S8 Predator-Prey
13. S9 $\alpha$ parameter in VA algorithm
## S1 Cost function analysis
Our aim is now to explore the landscape of the cost function in order to
understand the problem that we are solving and why it is so challenging. For
illustrative purposes, in the following discussion we only consider two
dimensions of the cost function $\hat{A}=A_{E}+R_{f}A_{M}$. We use the
classical Lorenz system, fix all parameters in the structure, and add two
extra parameters, $p_{1,1}$ and $p_{2,1}$:
$\displaystyle\dot{x}$ $\displaystyle=\sigma(y-x)+p_{1,1},$ (1)
$\displaystyle\dot{y}$ $\displaystyle=x(\rho-z)-y+p_{2,1},$ (2)
$\displaystyle\dot{z}$ $\displaystyle=xy-\beta z.$ (3)
We then vary these two parameters and plot what the cost function looks like,
for three different values of $R_{f}$. The cost function that we want to
minimise is the one with a large $R_{f}$ value (Fig. 1, right).
Figure 1: Varying $p_{1,1}$ and $p_{2,1}$ for three different $R_{f}$.
The cost function $\hat{A}$ is highly non-convex, and finding its global
minimum a priori is difficult.
## S2 Time-delay embedding of used training data
Figure 2: Time-delay embedding of training data ($\tau=0.02$).
## S3 AIC calculation for synthetic data
We want to find which model best represents the synthetic data we generated (to which we added noise $\sim\mathcal{N}(0,0.01)$). Since we are working with chaotic systems, we only expect prediction up to the Lyapunov time of the system. We consider 1/4 of the shortest Lyapunov time out of all the down-selected models for the synthetic data, $t_{M}\approx 0.3$. We use $S=300$ time series of length $t_{M}$ as our validation set, but discard the first four points, as they will be used to predict the initial condition for $y_{0}$ (as shown in the following section). To calculate the AIC score,
we define the residual sum of squares of the $m$-th model as
$\text{RSS}_{m}=\sum_{s=1}^{S}E^{s}_{av,m}(\mathbf{Y}_{s},\mathbf{F}_{m},\mathbf{p}_{m}),$
(4)
where $\mathbf{Y}_{s}=[x_{e},z_{e}]_{s}$ is the synthetic data of the time-
series $s$, $\mathbf{F}_{m}$ the governing equations of the $m$-th model, and
$\mathbf{p}_{m}$ denotes the parameters found via parameter estimation for the
$m$-th model. $E^{s}_{av,m}$ is the average squared error over the time series $s$ and is defined as
$E^{s}_{av,m}(\mathbf{Y}_{s},\mathbf{F}_{m},\mathbf{p}_{m})=\frac{1}{2M}\sum_{i=1}^{M}(x_{i,e}^{s}-x_{i}^{s})^{2}+(z_{i,e}^{s}-z_{i}^{s})^{2},$
(5)
where $x^{s}$ and $z^{s}$ denote the $x$ and $z$ component, respectively, of
the solution of the $m$-th model in the $s$ time series, found via RK4 with
$\Delta t=0.01$. $M$ denotes the number of points in 1/4 of a Lyapunov time, excluding the first four points as mentioned before.
Finally, we can define the AIC of the $m$-th model as
$\text{AIC}_{m}=S\log\left(\frac{\sum_{s=1}^{S}E^{s}_{av,m}(\mathbf{Y}_{s},\mathbf{F}_{m},\mathbf{p}_{m})}{S}\right)+2N_{p,m},$
(6)
where $N_{p,m}$ is the number of free parameters in the $m$-th model.
Finally, we re-scale by the minimum AIC value, denoted by AIC${}_{\text{min}}$, and define $\Delta\text{AIC}_{m}=\text{AIC}_{m}-\text{AIC}_{\text{min}}$.
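For concreteness, a minimal Python sketch of Eqs. (4)-(6) follows; the array names are illustrative placeholders, not the released code.

```python
import numpy as np

# Minimal sketch of Eqs. (4)-(6): E_av per time series, then AIC and Delta-AIC.
# x_pred/z_pred are model trajectories (e.g., from RK4); x_data/z_data are the
# synthetic measurements. All names here are illustrative placeholders.
def E_av(x_pred, z_pred, x_data, z_data):
    M = len(x_data)                    # points in 1/4 of a Lyapunov time
    return np.sum((x_data - x_pred) ** 2 + (z_data - z_pred) ** 2) / (2 * M)

def aic(errors_per_series, n_params):
    S = len(errors_per_series)         # number of validation time series
    return S * np.log(np.sum(errors_per_series) / S) + 2 * n_params

# For a list of candidate models:
#   aics = [aic(errs_m, n_params_m) for errs_m, n_params_m in models]
#   delta_aic = np.array(aics) - min(aics)
```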
### S3.1 Initial condition choice for unmeasured y
We need an initial condition for each time series to be able to simulate each
model. We have an initial condition for both $x$ and $z$ given by the
experimental data, but we do not have any information for the $y$ component.
We cannot use the VA to estimate $y_{0}$ and the parameters simultaneously (which would lead to better prediction windows; see the next section), because our validation data would then have been used for training. Let us consider the 8th-order finite difference approximation of the time derivative of $x$,
$\frac{\mathop{}\\!\mathrm{d}x(t)}{\mathop{}\\!\mathrm{d}t}\approx\frac{3x(t-4\Delta t)-32x(t-3\Delta t)+168x(t-2\Delta t)-672x(t-\Delta t)+672x(t+\Delta t)-168x(t+2\Delta t)+32x(t+3\Delta t)-3x(t+4\Delta t)}{840\Delta t}.$ (7)
For each model, we have that
$\frac{\mathop{}\\!\mathrm{d}x(t)}{\mathop{}\\!\mathrm{d}t}=F_{1,m}(x(t),y(t),z(t),\mathbf{p}_{m}).$
(8)
Equating a finite-difference estimate of the derivative (written here with the fourth-order stencil for brevity) with (8), we have
$\frac{-x(t+2\Delta t)+8x(t+\Delta t)-8x(t-\Delta t)+x(t-2\Delta t)}{12\Delta
t}\approx F_{1,m}(x(t),y(t),z(t),\mathbf{p}_{m}).$ (9)
We need to solve for $y(0)$. We note that, for the down-selected models in Example A in our manuscript, the only term containing $y$ in the first equation of every model is the first-order term, so in this case the equation is particularly simple to solve.
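As an illustration, a small Python sketch of this step is given below. It assumes a first equation of the form $F_{1}=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z$ (models without a given term simply have that coefficient equal to zero, and we need $p_{1,3}\neq 0$), and it uses a one-sided fourth-order stencil at $t=0$, since no data exist before the initial time; the coefficients come from standard finite-difference tables and the function names are ours.

```python
import numpy as np

# Illustrative sketch: estimate y(0) by equating a finite-difference estimate
# of dx/dt at t = 0 with the model's first equation
#   F_1 = p11 + p12*x + p13*y + p14*z
# and solving the linear-in-y relation for y(0).
def dxdt_forward(x, dt):
    # One-sided 4th-order forward difference at the first sample:
    # f'(t0) ~ (-25 x0 + 48 x1 - 36 x2 + 16 x3 - 3 x4) / (12 dt)
    return (-25 * x[0] + 48 * x[1] - 36 * x[2] + 16 * x[3] - 3 * x[4]) / (12 * dt)

def estimate_y0(x, z, p, dt):
    # p = (p11, p12, p13, p14); p[2] (= p13) must be nonzero
    dx0 = dxdt_forward(x, dt)
    return (dx0 - p[0] - p[1] * x[0] - p[3] * z[0]) / p[2]
```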
The results on the synthetic data indicate that there are only four candidate models that best represent the data. Even though the $\Delta$AIC of the incorrect models (Fig. 3, red, green and yellow lines) does not increase as we add more time series $S$ to the AIC calculation, we consistently pick the correct model structure (blue line) as the one with the lowest $\Delta$AIC.
Figure 3: $\Delta$AIC from the different models DAHSI found using the
synthetic data.
### S3.2 Prediction window
We compare how the prediction window changes from having two observed variables to having three observed variables. For noise $\omega=0.01$, having one hidden variable (Fig. 4, top row) and using the real value of $y_{0}$ leads to no prediction at all. However, using the estimated $y_{0}$ calculated as in the previous section leads to a prediction window of about 3.5 Lyapunov times. This shows that the parameter estimates and the $y_{0}$ estimate compensate for each other. For the case of all variables observed (Fig. 4, bottom row), using the real value of $y_{0}$ leads to a prediction window of about 6 Lyapunov times; if we estimate $y_{0}$, the prediction window reduces to about 3.5 Lyapunov times. For a higher noise $\omega=0.1$, having one hidden variable (Fig. 5, top row) and using the real value of $y_{0}$ again leads to no prediction at all. Moreover, using the estimated $y_{0}$ calculated as in the previous section leads to a shorter prediction window than for lower noise, about 1 Lyapunov time. This shows that, with the increased noise amplified by hidden variables, the $y_{0}$ estimate cannot compensate as well for the parameter estimates. For the case of all variables observed with $\omega=0.1$ (Fig. 5, bottom row), using the real value of $y_{0}$ leads to a prediction window of about 3.5 Lyapunov times; if we estimate $y_{0}$, the prediction window reduces to about 1 Lyapunov time.
Figure 4: Top: Prediction of the model with parameters estimated using 2
observed variables ($x$ and $z$), when using the real $y_{0}$ and the
estimated $y_{0}$ (as shown in §S3.1). Bottom: Prediction of the model with
parameters estimated using 3 observed variables, when using the real $y_{0}$
and the estimated $y_{0}$. The noise added to the synthetic data is
$\mathcal{N}(0,\omega)$, $\omega=0.01$. Figure 5: Top: Prediction of the model
with parameters estimated using 2 observed variables ($x$ and $z$), when using
the real $y_{0}$ and the estimated $y_{0}$ (as shown in §S3.1). Bottom:
Prediction of the model with parameters estimated using 3 observed variables,
when using the real $y_{0}$ and the estimated $y_{0}$. The noise added to the
synthetic data is $\mathcal{N}(0,\omega)$, $\omega=0.1$.
## S4 Down-selected models
We present the structure of the 25 down-selected models, but we do not provide the parameter estimates (the parameter values can be found in the code package [54]).
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y,$ (10)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (11)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy.$ (12)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y,$ (13)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (14)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy.$ (15)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (16)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (17)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy.$ (18)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y,$ (19)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (20)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy.$ (21)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (22)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (23)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy.$ (24)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (25)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,1}+p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (26)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy.$ (27)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y,$ (28)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (29)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=0.$ (30)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y,$ (31)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (32)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,6}xy.$ (33)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y,$ (34)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (35)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=0.$ (36)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y,$ (37)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (38)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=0.$ (39)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y,$ (40)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (41)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,6}xy.$ (42)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (43)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (44)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=0.$ (45)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y,$ (46)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (47)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,6}xy.$ (48)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (49)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (50)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy.$ (51)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (52)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (53)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,6}xy.$ (54)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (55)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (56)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,6}xy.$ (57)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (58)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (59)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy+p_{3,7}xz.$ (60)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (61)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (62)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,6}xy.$ (63)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (64)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,6}xy+p_{2,7}xz,$ (65)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy+p_{3,7}xz.$ (66)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (67)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,7}xz,$ (68)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy+p_{3,7}xz.$ (69)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (70)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,1}+p_{2,2}x+p_{2,3}y+p_{2,7}xz,$ (71)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,6}xy.$ (72)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (73)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,6}xy+p_{2,7}xz,$ (74)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy+p_{3,7}xz.$ (75)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (76)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,6}xy+p_{2,7}xz,$ (77)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy+p_{3,7}xz.$ (78)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (79)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,2}x+p_{2,3}y+p_{2,6}xy+p_{2,7}xz,$ (80)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy+p_{3,7}xz.$ (81)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z,$ (82)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,1}+p_{2,2}x+p_{2,3}y+p_{2,6}xy+p_{2,7}xz,$ (83)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,4}z+p_{3,6}xy+p_{3,7}xz.$ (84)
### S4.1 Models identified in the Pareto front edge
Table 1: Models identified in the Pareto front in Figure 1(d) in the main text. Each column labelled 6 to 12 lists the coefficients of the model with that number of active terms; the final row gives $E_{av}$ for each model.
Eq. | Term | 6 | 7 | 8 | 9 | 10 | 11 | 12
---|---|---|---|---|---|---|---|---
eq. $\dot{x}$ | $1$ | 0 | 0 | -0.8112 | 0 | 0 | -0.2514 | -2.4053
| $x$ | -16.5556 | -16.9554 | -16.4666 | -16.5603 | -17.0172 | -17.0582 | -17.0627
| $y$ | 19.8000 | 18.7853 | 19.8120 | 16.7514 | 19.9884 | 19.9840 | 19.9862
| $z$ | 0 | 0 | 0.0276 | 0.1486 | 0.1596 | 0.1833 | 1.4595
| $x^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $xy$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $xz$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $y^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $yz$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $z^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
eq. $\dot{y}$ | $1$ | 0 | 0 | 0 | 0 | 0 | 0 | 0.7892
| $x$ | 23.2613 | 24.3535 | 23.0763 | 27.3789 | 22.6028 | 22.6017 | 22.6061
| $y$ | 0 | 0.2580 | 0 | 0 | 0.3346 | 0.3567 | 0.3298
| $z$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $x^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $xy$ | 0 | 0 | 0 | -0.0922 | -0.0906 | -0.0843 | -0.3647
| $xz$ | -6.3345 | -6.7054 | -6.2868 | -7.4621 | -6.2507 | -6.2561 | -6.2691
| $y^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $yz$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $z^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
eq. $\dot{z}$ | $1$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $x$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $y$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $z$ | -3.6646 | -3.6835 | -3.6736 | 4.3951 | -3.6954 | -3.6966 | -3.6941
| $x^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $xy$ | 5.1948 | 4.8273 | 5.2315 | -3.6660 | 5.1412 | 5.1292 | 5.1326
| $xz$ | 0 | 0 | 0 | 0.0883 | 0.0903 | 0.0791 | 0.2900
| $y^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $yz$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $z^{2}$ | 0 | 0 | 0 | 0 | 0 | 0 | 0
| $E_{av}$ | 10.1693 | 9.7441 | 9.7174 | 9.6778 | 9.0995 | 9.0345 | 9.5765
### S4.2 AIC and BIC on the 25 down-selected models
The Bayesian information criterion (BIC) is defined as
$\text{BIC}_{m}=S\log\left(\frac{\sum_{s=1}^{S}E^{s}_{av,m}(\mathbf{Y}_{s},\mathbf{F}_{m},\mathbf{p}_{m})}{S}\right)+N_{p,m}\log(S).$ (85)
In the same way as we defined $\Delta$AIC in a previous section, we re-scale by the minimum BIC value, denoted by BIC${}_{\text{min}}$, and define $\Delta\text{BIC}_{m}=\text{BIC}_{m}-\text{BIC}_{\text{min}}$.
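A short Python companion to the AIC sketch in §S3, assuming the standard BIC penalty $N_{p,m}\log S$ as in Eq. (85):

```python
import numpy as np

# Sketch of Eq. (85): same goodness-of-fit term as the AIC, BIC penalty.
def bic(errors_per_series, n_params):
    S = len(errors_per_series)
    return S * np.log(np.sum(errors_per_series) / S) + n_params * np.log(S)

# delta_bic = np.array(bics) - min(bics), exactly as for Delta-AIC.
```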
We now calculate how AIC (6) and BIC (85) change as we add more time series to the calculation. For each $S$ used to calculate both AIC and BIC ($S\leq 1083$, the total number of available time segments of length 1/4 of a Lyapunov time), we pick $S$ random time segments to ensure that the segments used in the calculation are independent samples.
For both $\Delta\text{AIC}_{m}$ and $\Delta\text{BIC}_{m}$ we are able to consistently identify a unique model (Fig. 6). If we only look at the Pareto front (Fig. 1(d) in the main text), one might ask whether the decrease between 9 and 10 terms is meaningful; both AIC and BIC indicate that it is.
Figure 6: $\Delta\text{AIC}_{m}$ and $\Delta\text{BIC}_{m}$ from the different
models DAHSI found using the experimental data in [10].
## S5 Action derivation
We consider a dynamical system with unknown governing equations
$\frac{\mathop{}\\!\mathrm{d}\mathbf{X}}{\mathop{}\\!\mathrm{d}t}=\mathbf{F}(\mathbf{X}(t),\mathbf{p}),$
(86)
where $\mathbf{X}=(x_{1},x_{2},\dots,x_{D})\in\mathbb{R}^{D}$ are the state
variables, $\mathbf{F}=(F_{1},\,F_{2},\dots,F_{D})$ are the unknown functions
that govern the dynamics of the system and $\mathbf{p}$ is a set of unknown
parameters. The measurements $\mathbf{Y}=(y_{1},y_{2},\dots,y_{L})\in\mathbb{R}^{L}$ are of lower dimension ($L\leq D$) than the underlying variables.
Our goal is to find $\mathbf{X}$ and $\mathbf{p}$ that maximise the
probability $P(\mathbf{X},\mathbf{p}\;|\;\mathbf{Y},\mathbf{\hat{F}})$. We
have that [1]
$P(\mathbf{X},\mathbf{p}\;|\;\mathbf{Y},\mathbf{\hat{F}})=\int\exp\left[-A_{0}(\mathbf{X},\mathbf{Y})\right]\;\mathop{}\\!\mathrm{d}\mathbf{X}.$
(87)
Furthermore,
$A_{0}(\mathbf{X},\mathbf{Y})=-\sum_{i=1}^{N}\text{CMI}\left[\mathbf{X}(t_{i}),\mathbf{Y}(t_{i})\;|\;\mathbf{Y}(t_{0}),\dots,\mathbf{Y}(t_{i-1})\right]-\sum_{i=1}^{N-1}\log\left[P(\mathbf{X}(t_{i+1}),\mathbf{p}\;|\;\mathbf{X}(t_{i}),\mathbf{\hat{F}})\right].$
(88)
We make the following assumptions:
1. 1.
The measurements $\mathbf{Y}$ have uncorrelated Gaussian errors, with no correlation between the errors in measuring different quantities or at different time points [1];
2. 2.
The state at the next time point depends only on the state at the current time point, and our model may have some error, which we account for by widening the $\delta$ function it would otherwise follow into a Gaussian approximation [1];
3. 3.
Each element in $\mathbf{p}$ follows a Laplace distribution with _mean_ $0$
and diversity $b$.
With assumption 1 it can be shown that
$\text{CMI}\left[\mathbf{X}(t_{i}),\mathbf{Y}(t_{i})\;|\;\mathbf{Y}(t_{0}),\dots,\mathbf{Y}(t_{i-1})\right]=\frac{1}{2\sigma_{m}^{2}}\sum_{l=1}^{L}\left(x_{l}(t_{i})-y_{l}(t_{i})\right)^{2}.$
(89)
For the second term in the sum, we need to find an expression for
$P(\mathbf{X}(t_{i+1}),\mathbf{p}\;|\;\mathbf{X}(t_{i}),\mathbf{\hat{F}})$.
Let us now focus on the $k$-th component of $\mathbf{X}(t_{i+1})$, and so our
goal is to find an expression for
$P(x_{k}(t_{i+1}),\mathbf{p}_{k}\;|\;\mathbf{X}(t_{i}),F_{k})$.
We consider the library of $q$ possible functions and the generic expression
for each equation of our model:
$\hat{F}_{k}\coloneqq\hat{F}_{k}(\mathbf{X},\mathbf{p})=p_{k,1}\theta_{1}(\mathbf{X})+p_{k,2}\theta_{2}(\mathbf{X})+\cdots+p_{k,q}\theta_{q}(\mathbf{X}),$
(90)
for $k=1,2,\dots,D$.
We can rewrite the probability we are seeking as
$P(x_{k}(t_{i+1}),\mathbf{p}_{k}\;|\;\mathbf{X}(t_{i}),F_{k})=P(\mathbf{p}_{k}\;|\;x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k})P(x_{k}(t_{i+1})\;|\;\mathbf{X}(t_{i}),F_{k}).$
(91)
Now each term on the right hand side can also be rewritten as
$\displaystyle P(\mathbf{p}_{k}\;|\;x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k})$
$\displaystyle=\frac{P(x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k}\;|\;\mathbf{p}_{k})P(\mathbf{p}_{k})}{P(x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k})},$
(92) $\displaystyle P(x_{k}(t_{i+1})\;|\;\mathbf{X}(t_{i}),F_{k})$
$\displaystyle=\frac{P(x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k})}{P(\mathbf{X}(t_{i}),F_{k})}.$
(93)
Thus, (91) becomes
$P(x_{k}(t_{i+1}),\mathbf{p}_{k}\;|\;\mathbf{X}(t_{i}),F_{k})=\frac{P(x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k}\;|\;\mathbf{p}_{k})P(\mathbf{p}_{k})}{P(\mathbf{X}(t_{i}),F_{k})}.$
(94)
We can rewrite the first term on the right hand side of (94) as a likelihood,
$P(x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k}\;|\;\mathbf{p}_{k})=\mathcal{L}(\mathbf{p}_{k}\;|\;x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k}).$
(95)
Assuming that our next state follows a normal distribution with mean $f_{k}$ and variance $\sigma^{2}$,
$\mathcal{L}(\mathbf{p}_{k}\;|\;x_{k}(t_{i+1}),\mathbf{X}(t_{i}),F_{k})=\frac{1}{\sigma\sqrt{2\pi}}\exp{\left(-\frac{\left[x_{k}(t_{i+1})-f_{k}(\mathbf{X},\mathbf{p},F_{k})\right]^{2}}{2\sigma^{2}}\right)}.$
(96)
With assumption 3, we know that each $p_{k,j}$ follows a Laplace distribution,
$p_{k,j}\sim\text{Laplace}(0,b)=\frac{1}{2b}\exp{\left(-\frac{|p_{k,j}|}{b}\right)},$
(97)
and so
$P(\mathbf{p}_{k})=\prod_{j=1}^{q}\frac{1}{2b}\exp{\left(-\frac{|p_{k,j}|}{b}\right)}.$
(98)
With this we can write (94) as
$\begin{split}P(&x_{k}(t_{i+1}),\mathbf{p}_{k}\;|\;\mathbf{X}(t_{i}),F_{k})\propto\\\
&\propto\frac{1}{\sigma\sqrt{2\pi}}\exp{\left(-\frac{\left[x_{k}(t_{i+1})-f_{k}(\mathbf{X},\mathbf{p},F_{k})\right]^{2}}{2\sigma^{2}}\right)}\prod_{j=1}^{q}\frac{1}{2b}\exp{\left(-\frac{|p_{k,j}|}{b}\right)}.\end{split}$
(99)
Note that, since we are going to minimise the action $A_{0}$ (88), we can drop the constant term $P(\mathbf{X}(t_{i}),F_{k})$ in the denominator, leaving a proportionality instead of an equality.
Note that, because each component of the next state depends only on the previous state,
$P\left(\mathbf{X}(t_{i+1}),\mathbf{p}\;|\;\mathbf{X}(t_{i}),\mathbf{\hat{F}}\right)=\prod_{k=1}^{D}P(x_{k}(t_{i+1}),\mathbf{p}_{k}\;|\;\mathbf{X}(t_{i}),F_{k}),$
(100)
and so, finally, we can write
$\begin{split}P(&\mathbf{X}(t_{i+1}),\mathbf{p}\;|\;\mathbf{X}(t_{i}),\mathbf{\hat{F}})\propto\\\
&\propto\prod_{k=1}^{D}\left\\{\frac{1}{\sigma\sqrt{2\pi}}\exp{\left(-\frac{\left[x_{k}(t_{i+1})-f_{k}(\mathbf{X},\mathbf{p},F_{k})\right]^{2}}{2\sigma^{2}}\right)}\prod_{j=1}^{q}\frac{1}{2b}\exp{\left(-\frac{|p_{k,j}|}{b}\right)}\right\\}.\end{split}$
(101)
Taking the logarithm of the expression above gives, up to an additive constant,
$\log P(\mathbf{X}(t_{i+1}),\mathbf{p}\;|\;\mathbf{X}(t_{i}),\mathbf{\hat{F}})=\sum_{k=1}^{D}\left\\{-\frac{\left[x_{k}(t_{i+1})-f_{k}(\mathbf{X},\mathbf{p},F_{k})\right]^{2}}{2\sigma^{2}}-\lambda\|\mathbf{p}_{k}\|_{1}\right\\}+D\log\left(\frac{1}{\sigma\sqrt{2\pi}}\right)+Dq\log\left(\frac{1}{2b}\right),$ (102)
where $\lambda=1/b$.
Combining the terms above, (88) becomes
$A(\mathbf{X},\mathbf{p})=\frac{1}{N}\sum_{i=1}^{N}\|\mathbf{X}(t_{i})-\mathbf{Y}(t_{i})\|^{2}+\frac{1}{N}\sum_{i=1}^{N-1}R_{f}\left\\{\|\mathbf{X}(t_{i+1})-\mathbf{f}(\mathbf{X}(t_{i}),\mathbf{p},\mathbf{\hat{F}})\|^{2}\right\\}+\lambda\|\mathbf{p}\|_{1},$ (103)
which is what we wanted to show.
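For orientation, Eq. (103) transcribes almost directly into Python; the sketch below is illustrative only (here `f` stands for any discrete-time model map $\mathbf{X}(t_{i})\mapsto\mathbf{f}(\mathbf{X}(t_{i}),\mathbf{p},\mathbf{\hat{F}})$, and in the hidden-variable case the first sum runs only over the observed components):

```python
import numpy as np

# Sketch of the action in Eq. (103): measurement term, R_f-weighted model
# term, and L1 sparsity penalty. For hidden variables, restrict the first
# sum to the observed components of X.
def action(X, Y, p, f, R_f, lam):
    N = len(X)
    meas = np.sum((X - Y) ** 2) / N                   # ||X - Y||^2 term
    model = np.sum((X[1:] - f(X[:-1], p)) ** 2) / N   # model-error term
    return meas + R_f * model + lam * np.sum(np.abs(p))
```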
## S6 Computational time
We use the Lorenz system, with all variables observed, $N=1001$ time points, $\Delta t=0.01$, and no noise. The more terms in our library $\boldsymbol{\mathbf{\Theta}}$, the longer it takes to evaluate the associated cost function, its Jacobian, and its Hessian (Fig. 7, left). However, due to model symmetries and other structural features, the run time of our algorithm does not increase monotonically with the number of terms in the library: a library with 10 terms can take 100 times longer to run than the full library of 30 monomials (Fig. 7, right).
Figure 7: 7 terms: parameter estimation. 10 terms: in blue $x^{2}$ in each
equation; in red $1$ in each equation. 13 terms: $x^{2}$ and $y^{2}$ in each
equation. 16 terms: $x^{2}$, $y^{2}$ and $z^{2}$ in each equation. 19 terms:
$1$, $x^{2}$, $y^{2}$ and $z^{2}$ in each equation. 30 terms: model selection.
## S7 Semiconductor
We consider this semiconductor model ($T$ trap levels with two possible states
differing by one electronic unit of charge),
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=e_{n,01}y-R_{n,10}xz,$ (104)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=-e_{n,01}y+R_{n,10}xz,$ (105)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=e_{n,01}y-R_{n,10}xz.$ (106)
$x$ denotes the number of electrons in the conduction band, $y$ denotes the
number of traps with 2 electrons, and $z$ denotes the number of traps with 1
electron. We chose $e_{n,01}=0.5$ and $R_{n,10}=0.25$.
Figure 8: Dynamics from the original system (104)-(106).
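For reference, the dynamics in Fig. 8 can be reproduced with a short RK4 script; only $e_{n,01}=0.5$ and $R_{n,10}=0.25$ come from the text, while the initial condition and time grid below are illustrative.

```python
import numpy as np

# Sketch: simulate the semiconductor model (104)-(106) with RK4.
# Note dx/dt = dz/dt = -dy/dt, so the three equations share one rate g.
e_n01, R_n10 = 0.5, 0.25

def rhs(u):
    x, y, z = u
    g = e_n01 * y - R_n10 * x * z
    return np.array([g, -g, g])

def rk4(u0, dt=0.01, N=101):
    U = np.empty((N, 3))
    U[0] = u0
    for i in range(N - 1):
        u = U[i]
        k1 = rhs(u); k2 = rhs(u + dt / 2 * k1)
        k3 = rhs(u + dt / 2 * k2); k4 = rhs(u + dt * k3)
        U[i + 1] = u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return U

traj = rk4(np.array([1.0, 0.5, 0.5]))   # illustrative initial condition
```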
Instead of using the library of all monomials in three variables up to degree two, we know that only a few terms make sense physically. Our generic model for this example is
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x+p_{1,3}y+p_{1,4}z+p_{1,5}x^{2}+p_{1,7}xz,$
(107) $\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,1}+p_{2,2}x+p_{2,3}y+p_{2,4}z+p_{2,5}x^{2}+p_{2,7}xz,$
(108) $\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,1}+p_{3,2}x+p_{3,3}y+p_{3,4}z+p_{3,7}xz.$ (109)
We first consider three observed variables, $D=L=3$, and a time series of $N=101$ equally spaced time points with $\Delta t=0.01$. The $\lambda$ sweep results in a different number of active terms for each value; see Fig. 9 (left). Since we know the model from which our data comes, we just want to see whether the model with the right number of terms (highlighted in red) corresponds to the original one, which it does.
### S7.1 1 hidden variable
We consider two observed variables, $L=2$; we pick $x$ and $y$. We run $N_{I}=1,000$ different initialisations. In this particular case there is a question of how the initial guess should be picked (see Algorithm 2). We do a $\lambda$ sweep from $\lambda=0.1$ through $\lambda=0.3$. Out of all 1,000 initialisations, we recover the right sparsity pattern 68 times. The optimal value is $\lambda=0.19$, for which we recover the right sparsity pattern 33 times (see Fig. 10).
observed | hidden | N | $\Delta t$ | $\beta_{\max}$ | $\lambda$ | recovery
---|---|---|---|---|---|---
2 | 1 ($z$) | 101 | 0.01 | 30 | 0.19 | 3.3%
Table 2: Recovery of the semiconductor system with one hidden variable.
Figure 9: Left: all observed variables. Right: one hidden variable. Highlighted in red are the $\lambda$ values that lead to model recovery.
Figure 10: Recovery rate for different $\lambda$ values over 1,000 different initialisations. The initial guess for unmeasured variables is obtained from the derivatives of the measured variables.
Algorithm 2 Algorithm for picking an initial guess for unobserved variables in the semiconductor case
1: for $d=1:(D-L)$ do $\triangleright$ loop over unmeasured variables
2: Pick at random one of the observed variables.
3: $dX_{d}\leftarrow$ gradient vector calculated from its time series.
4: while $Z_{d}$ out of bounds do $\triangleright$ make sure the unmeasured variable stays within bounds
5: $Z_{d}(t_{1})\leftarrow$ random initial condition for the unobserved variable within bounds.
6: for $i=1:N-1$ do
7: $Z_{d}(t_{i+1})=\Delta t\times dX_{d}(t_{i})+Z_{d}(t_{i})$
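An illustrative Python transcription of Algorithm 2 follows; the bounds, array names, and the use of `numpy.gradient` are our own choices.

```python
import numpy as np

# Sketch of Algorithm 2: build an initial guess for each unmeasured variable
# by integrating the gradient of a randomly chosen measured variable,
# redrawing the initial condition until the trajectory stays within bounds.
rng = np.random.default_rng()

def initial_guess(measured, dt, lo, hi, D, L):
    guesses = []
    for _ in range(D - L):                            # loop over unmeasured variables
        obs = measured[rng.integers(len(measured))]   # pick an observed variable
        dX = np.gradient(obs, dt)                     # gradient from its time series
        while True:                                   # keep Z within [lo, hi]
            Z = np.empty_like(obs)
            Z[0] = rng.uniform(lo, hi)                # random in-bounds start
            for i in range(len(obs) - 1):
                Z[i + 1] = Z[i] + dt * dX[i]          # Euler step on the gradient
            if np.all((Z >= lo) & (Z <= hi)):
                break                                 # may redraw many times if bounds are tight
        guesses.append(Z)
    return guesses
```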
### S7.2 Parameter identifiability
There are two main reasons why a parameter might not be identifiable: the parameter does not influence the model output; or there is an interdependence among different parameters, that is, one can compensate for the change of one parameter (which would influence the model output) by changing other parameter(s) so that the output stays the same. In this section, we focus on the latter.
One way to detect pairwise interplay is to plot contours of the cost function versus pairs of parameters. Highly eccentric contours, or _valleys_, show that the cost function is almost unchanged in one direction and that the two parameters are highly correlated. The main drawback in our particular case is that we are limited to finding relationships between pairs of parameters, rather than higher-dimensional interactions.
Consider the generic model, except that the correct terms are fixed (highlighted in red; $e_{n,01}=0.5$, $R_{n,10}=0.25$):
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{1,1}+p_{1,2}x{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\,+\,e_{n,01}}y+p_{1,4}z+p_{1,5}x^{2}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\,-\,R_{n,10}}xz,$
(110) $\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{2,1}+p_{2,2}x{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\,-\,e_{n,01}}y+p_{2,4}z+p_{2,5}x^{2}{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\,+\,R_{n,10}}xz,$
(111) $\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=p_{3,1}+p_{3,2}x{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\,+\,e_{n,01}}y+p_{3,4}z{\color[rgb]{1,0,0}\definecolor[named]{pgfstrokecolor}{rgb}{1,0,0}\,-\,R_{n,10}}xz.$
(112)
We now add only two extra parameters (two of the black terms) at a time. Each of Figs. 11-14 is obtained by picking one term (parameter 1, the first extra term in the system) and then studying the cost function when adding another term (parameter 2, the second extra term in the system). We study this for all possible choices of “parameter 2”.
Take Fig. 11. Parameter 1 here is the term $1$ in the equation $\mathop{}\\!\mathrm{d}x/\mathop{}\\!\mathrm{d}t$, that is, $p_{1,1}$. This extra term is fixed for all subplots. Parameter 2 (the second extra term) then corresponds to (in order of subplots) $p_{1,2},\,p_{1,4},\,p_{1,5},\,p_{2,1},\,p_{2,2},\,p_{2,4},\,p_{2,5},\,p_{3,1},\,p_{3,2},\,p_{3,4}$. Figures 12-14 follow the same logic. These four figures already show identifiability problems.
Figure 11: parameter 1 is $1$ on the
$\mathop{}\\!\mathrm{d}x/\mathop{}\\!\mathrm{d}t$ equation.
Figure 12: parameter 1 is $x$ on the
$\mathop{}\\!\mathrm{d}x/\mathop{}\\!\mathrm{d}t$ equation.
Figure 13: parameter 1 is $1$ on the
$\mathop{}\\!\mathrm{d}y/\mathop{}\\!\mathrm{d}t$ equation.
Figure 14: parameter 1 is $z$ on the
$\mathop{}\\!\mathrm{d}z/\mathop{}\\!\mathrm{d}t$ equation.
## S8 Predator-Prey
Although it is not very common to find _pure_ predator-prey interactions in nature, there is a classical data set from the Hudson's Bay Company recording the numbers of snowshoe hares and Canadian lynxes trapped in Canada, which in turn reflect the relative populations of both species [49]. The data are recorded yearly, so $\Delta t=1$; we use data between 1900 and 1920, thus $N=21$. In this particular case we do not really know the dynamics behind the system, although we know that the snowshoe hare is the primary food of the lynx. We can therefore assume a predator-prey system, for which the classical Lotka-Volterra model describes this type of dynamics. We consider $L=D=2$ (Fig. 15(a)). We build the library of functions with all the monomials up to degree two in two variables, and with it we construct our generic model (Fig. 15(b)). We run our algorithm and, varying $\lambda$, obtain a list of possible models. By looking at the corresponding AIC values for each one, we find that the model with 7 active terms is the best one (Fig. 15(d)). We then take the model with 7 active terms as our generic model. Again, we run the algorithm and find that the best model contains only 5 terms (Fig. 15(f-h)). We iterate this process, running the algorithm with the 5-active-term model as the generic one, and find that the best model contains 4 terms (Fig. 15(i-k)). This identified model corresponds to the Lotka-Volterra one. Once we perform parameter estimation alone on it, we obtain the dynamical system shown in Fig. 15(k). We compare the original data (dashed) with the resulting model (solid), which shows an excellent match.
Figure 15: The recovery of the Lotka-Volterra system required an iterative
formulation which consisted of down-selecting relevant monomials to describe
the dynamics via AIC at the end of the variational annealing and start the
algorithm again with less terms in the generic model description.
## S9 $\alpha$ parameter in VA algorithm
We study how the parameter $\alpha$, used to increase the value of $R_{f}=R_{f,0}\alpha^{\beta}$ during the VA algorithm, affects the recovery. We use the classical Lorenz system,
$\displaystyle\frac{\mathop{}\\!\mathrm{d}x}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=\sigma(y-x),$ (113)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}y}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=x(\rho-z)-y,$ (114)
$\displaystyle\frac{\mathop{}\\!\mathrm{d}z}{\mathop{}\\!\mathrm{d}t}$
$\displaystyle=-\beta z+xy,$ (115)
where $\sigma=10$, $\rho=28$, and $\beta=8/3$. We numerically simulate the
system using Runge-Kutta 4th order and a time step of $\Delta t=0.01$,
producing time-series similar to the experimental data set. We add some error
modeled as additive Gaussian noise of mean zero and standard deviation
$\omega=0.01$. Therefore, the measurement function is
$\mathbf{h}(\mathbf{X})=\mathbf{X}+\mathcal{N}(0,\omega)$. We consider
$N=501$, and $y$ to be the hidden variable.
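For orientation, the annealing loop has the following shape; `minimise_action` below is a placeholder for the inner optimiser (e.g., an IPOPT call), and the default values simply mirror $\alpha=1.1$ and a $\beta_{\max}$ of 30 as used elsewhere in this supplement.

```python
def minimise_action(X, p, R_f):
    # Placeholder for the inner optimiser (e.g., an IPOPT call); it should
    # return the state and parameter estimates that minimise the action
    # at the current value of R_f.
    return X, p

def anneal(X0, p0, R_f0=1e-2, alpha=1.1, beta_max=30):
    X, p = X0, p0
    for beta in range(beta_max + 1):
        R_f = R_f0 * alpha ** beta         # geometric increase of the model-error weight
        X, p = minimise_action(X, p, R_f)  # warm start from the previous solution
    return X, p
```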
As we increase $\alpha$, the recovery rate decreases, and for $\alpha\geq 1.3$ the recovery is 0% (Table 3).
Table 3: Recovery rates for varying $\alpha$.
$\alpha$ | recovery rate (%)
---|---
1.1 | 93
1.2 | 87
1.25 | 20
1.3 | 0
1.4 | 0
1.5 | 0
# Testing Reactive Systems Using Behavioural Programming, a Model Centric
Approach
Yeshayahu Weiss1<EMAIL_ADDRESS>
( 1Ben-Gurion University
)
###### Abstract
Testing is a significant aspect of software development. As systems become
complex and their use becomes critical to the security and the functioning of society, the need for testing methodologies that ensure reliability and detect faults as early as possible grows. In academia and in industry, different methods are being developed for improving the testing process. The most promising is the model-based approach, where a model is developed that defines how the system is expected to behave and react. The tests are derived from the model, and the analysis of the test results is conducted based on it.
In the proposed doctoral research, we will investigate the prospects of using
the Behavioural Programming (BP) modeling approach as an enabler for a model-
based testing approach that we will develop. We will develop a natural
language (textual and/or graphical) for representing the requirements. The models users create with our language will be fed to algorithms that we will develop. These include algorithms for the automatic creation of minimal
sets of test cases that cover all of the system’s requirements, algorithms for
analyzing the results of the tests, and other tools that support the testing
process.
The focus of our methodology will be to find faults caused by the interaction
between different requirements in ways that are difficult for the testers to
detect. Specifically, we will focus our attention on concurrency issues such as deadlocks and logical race conditions. We will use a variety of methods that
are made possible by BP, such as non-deterministic execution of scenarios and
use of in-code model-checking for building test scenarios and for finding
minimal coverage of the test scenarios for the system requirements using
Combinatorial Test Design (CTD) methodologies. We will develop a proof-of-
concept tool kit which will allow us to demonstrate and evaluate the above-mentioned capabilities. We will compare the performance of our tools with the
performance of manual testers and of other model-based tools using comparison
criteria that we will define and develop.
This proposal also includes a description of some preliminary work. As
elaborated in the proposal, we validated that a BP based modelling language
for testing allows for effective generation, execution, and analysis of tests
for two small systems with which we have experimented (a model of a telephony
system and the Moodle education platform), each tested in a different way. In
addition, as part of this research proposal, we examined how to cover all requirements with test scenarios efficiently and minimally using CTD methodologies.
Keywords: Behavioral Programming; Model-Based Testing; Test Optimization; Test
Generation; Combinatorial Test Design
## 1 RESEARCH OBJECTIVES
Our thesis is that a comprehensive system testing methodology based on
behavioural programming (BP), supported by the right algorithms, can increase
the reliability of reactive systems as well as reduce the effort required in
system testing, thereby reducing the total development effort.
We will prove that it is possible to develop a modelling language that allows stakeholders to describe requirements more effectively and enables algorithms that exhaustively test systems. We will prove that our approach allows the discovery of issues and faults that are usually missed by conventional
methods. Our end goal is to shift the testing methodologies left towards
requirements. With the modelling techniques and analysis algorithms that we
will develop, testers will focus on requirements and the tests will
automatically be generated from their models. We will focus on catching bugs
that are triggered by unusual orderings of events that programmers may not consider, e.g., when a student changes her last name in the middle of the semester after getting married.
Motivation Scientists and engineers have developed testing tools and methodologies for many years now, but systems with critical bugs still reach the market. As software systems serve in critical, life-threatening tasks, such as autonomous cars, medical devices, and nuclear facilities, these bugs must be identified as soon as possible. Our initial review of issues and reports in bug-tracking systems like JIRA and GitHub revealed that many bugs missed by QA and reported by end-users are due to unusual sequences of events (e.g., logical race conditions).
Current testing methodologies and tools focus mostly on test automation [41].
Testers plot usage stories, and the tools give them the ability to execute these stories manually or automatically using recording facilities and scripting languages. This is problematic in our view because human coverage is limited and cannot cope with the growing complexity of software systems. The problem can potentially be solved by model-based testing (MBT) [16], in which the tests are generated from a model, but current MBT methods are too complex for engineers and are not focused on requirements [26]. In our work, we will propose languages and tools for developing, maintaining, and executing models that are aligned with the requirements of the system.
Towards improving the quality and efficiency of testing, we propose to improve each of the following ingredients: 1) Modelling, whereby testers design models of all required tests rather than only specifying a set of test scenarios; 2) Automatic generation, whereby an engine generates effective test suites from the model; 3) Analysis, whereby the absence of contradictions and other properties of the model are validated; 4) Quality measurements, whereby algorithms analyse the model and the results of the tests and generate reports that indicate, e.g., how ready the product is for production; and 5) Prioritization, whereby language features and algorithms make decisions about test scheduling, e.g., which tests should run nightly, before a product release, or after a specific change to the software. All of these issues will be studied with reference to the modelling techniques that we will develop, as shown in Figure 1. We are aware that a full study of all of these topics is beyond the scope of a single Ph.D. research study. Our focus is the modelling technique. We listed the other topics because our research will study how our new modelling methodologies and techniques reflect on them, and because we plan to adjust the modelling techniques in order to better support all software development phases and aspects.
Figure 1: An illustration showing how our research will focus on the
development of a new modelling technique and will study how it affects the
testing workflow.
### 1.1 Behavioural Programming methods
We will base our research on the Behavioural Programming (BP) modelling
approach, whose principles and operations are described in [28]. In short, a
BP model consists of components called b-threads, each representing an
individual aspect of behaviour or scenario that corresponds to a unique
paragraph in the requirement document of a system (if such exists). The
b-threads use a special application programming interface (API) that allows
them to synchronize with each other in a way that induces a cohesive system
behaviour. Specifically, whenever a b-thread reaches a synchronization point
(called b-sync), it posts a synchronization statement and waits for all other
b-threads to reach their next synchronization points. At synchronization
points, b-threads specify three sets of events: Requested events that the
thread proposes to be considered for triggering, and asks to be notified when
any of them occurs; Watched or waited-for events that the thread does not
request but asks to be notified when any of them is triggered; and Blocked
events that the thread currently forbids. When all b-threads are at a
synchronization point, a central mechanism uses the specified sets to
determine the next triggered event, as follows. It selects one event from the
set of requested and not blocked events. This selection can be random,
priority based or based on an elaborate selection mechanism using, e.g., AI.
The selected event is triggered by resuming all the b-threads that either
requested it or waited for it. The resumed b-threads proceed with their
execution to their next synchronization point, while the other b-threads
remain at their last synchronization point, oblivious to the triggered event,
until an event they requested or are waiting for is selected. When all
b-threads are again at a synchronization point, the process repeats. The BP
execution cycle is shown in Figure 2.
Figure 2: BP “life cycle” diagram.
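To make the synchronization cycle concrete, the following minimal, generator-based Python sketch implements the b-sync loop described above. It is purely illustrative: it is not the BPjs API, and the hot/cold b-threads are a standard toy example.

```python
import random

# Minimal sketch of the BP execution cycle: each b-thread yields a sync
# statement {request, waitFor, block}; the central mechanism picks one
# requested-and-not-blocked event and resumes the b-threads that requested
# or waited for it.
def run(bthreads):
    stmts = {bt: bt.send(None) for bt in bthreads}   # advance to first b-sync
    while stmts:
        blocked = set().union(*(s.get("block", set()) for s in stmts.values()))
        requested = set().union(*(s.get("request", set()) for s in stmts.values()))
        candidates = requested - blocked
        if not candidates:
            break                                    # nothing enabled: stop
        event = random.choice(sorted(candidates))    # random event selection
        for bt, s in list(stmts.items()):
            if event in s.get("request", set()) | s.get("waitFor", set()):
                try:
                    stmts[bt] = bt.send(event)       # resume to next b-sync
                except StopIteration:
                    del stmts[bt]                    # b-thread finished
        print("triggered:", event)

def add_hot(n):                 # requests 'hot' n times
    for _ in range(n):
        yield {"request": {"hot"}}

def add_cold(n):                # requests 'cold' n times
    for _ in range(n):
        yield {"request": {"cold"}}

def interleave():               # forces alternation purely by blocking
    while True:
        yield {"waitFor": {"hot"}, "block": {"cold"}}
        yield {"waitFor": {"cold"}, "block": {"hot"}}

run([add_hot(3), add_cold(3), interleave()])
```

Running this prints an alternating hot/cold event sequence: the `interleave` b-thread never requests an event, yet it shapes the run entirely through blocking.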
We propose to use BP as an enabler for an approachable model-based testing
methodology. We will show how the inherent features and characteristics of BP
allow for effective modelling, execution, and analysis of tests. To this end,
we will use Context Oriented BP (COBP) [19] which combines BP with context
idioms that explicitly specify when b-threads are activated and what
information they need for their operation. COBP connects the behavioural model
with a data model that represents the context. Specifically, COBP consists of
an intuitive connection between the data and the behavioural models via update
and select queries. For example, a b-thread that represents a requirement
concerning a quiz (think, for example, of a system that tests the Moodle
education platform) has access to the properties of the quiz via a designated
query protocol. In addition to adding a data model, COBP also allows effective
analysis of the testing model by explicitly mapping which b-threads run in a
given context. This allows us to avoid analysis of theoretical paths that may
not be relevant in certain contexts. COBP maintains dynamic context data that
may change at each synchronization point. The b-threads and advanced event
selection mechanisms can use this data to direct their internal logic.
Another BP tool on which we will base our research is the model-checking mechanism built into current BP tools. For example, the BPjs library, which implements BP in JavaScript, can “dry run” BP models and expand all possibilities of the test scenarios using, e.g., depth-first search (DFS) of the execution graph. We will use and expand this capability to implement the model analysis algorithms described above.
## 2 Planned contributions:
All of our contributions will revolve around the development of new
methodologies for behavioural model-based testing of reactive systems. The
following list details some of the aspects on which we will focus:
1. 1.
Accessible executable formal modelling languages for requirement and
specification definition. These languages will facilitate the translation of
requirement specifications, often maintained by different stakeholders, to
test cases. We will develop languages that are: readable – we will make sure
that all stakeholders can read the specification so it can be used for
feedback and discussion; the language will be rich enough that it can express
all tests that will be needed for reactive system testing; and based on BP
with its advantages. The models formed by our language will allow a natural
expression of system requirements or system specifications on one hand and
will cater to automatic generation of tests on the other hand. Our Domain
Specific Languages (DSL) will include diagrammatic and textual dialects to
ease inter communication between all the teams and individuals involved in the
testing process.
2. 2.
Automatic generation of tests: defining test scenarios based on a behavioural
model of the requirements of a system and using algorithms that we will
develop based on BP technologies will automatically generate quality test
cases. The algorithms that we will develop will be based on expanding as a
graph where each b-sync point is a node (state) and each requested and non-
blocked event is an edge that represents a transition from one state to
another. A path in this graph can be translated to a test case, thus the graph
as a whole represents all possible test cases. A subgraph can describe test
cases which test a sub-system or a module, or can describe the test cases
required for integration test cases between two or more modules in the system.
3. 3.
A methodology for focusing and coverage: specification idioms and algorithms
for prioritizing tests in order, e.g., to maximize the probability of catching
a certain type of bug at the focus of a certain testing effort. This can be
done, for example, by applying t-way methods that make use of the fact that
most bugs are caused by the interaction of a small number of parameters [38].
More generally, we will develop mechanisms for managing the choice of which
tests to run. This depends on purpose and resources. For example, people run
different tests in full system testing, in regression testing, in daily or
nightly testing, and in smoke tests. The choice of tests can also depend on
the scope: are we focusing on a specific sub-system or on a certain section of
the requirements document? We will develop algorithms to choose tests in a way
that increases the coverage criteria needed for each type of test. We will
also develop methods for measuring different kinds of coverage criteria and
for reporting them to users in a useful manner.
4. 4.
Modularity: we will develop approaches that allow the addition of new
requirements that are consistent with already existing ones without touching
the parts of the models that already reflect the existing requirements, simply
by adding new b-threads. The new test cases will be automatically woven with
existing ones to generate tests that examine new possible interactions. This
type of modularity contributes to the proposed methodology the ability to
advance the development of the system step-by-step, and at each step add the
system requirements with the corresponding test cases. These test cases will
cover the new requirements and the interaction of older requirements with
them.
5. 5.
Reports and other debugging tools: we will develop algorithms and tools for
analysing test results. For this part of the research we will develop tools
and algorithms in domains such as: logging, visualization, playback,
summarization, state-based coverage measurement, etc. We will develop methods
for analysis of the test cases generated to allow debugging and validation and
for checking if the test cases really test what they intend to test based on
the system requirements or specification. We will also develop tools that
export the outcome of the tests for external processing, e.g., by Business
Intelligence (BI) or Artificial Intelligence (AI) tools.
6. 6.
Algorithms for formal analysis / verification of models based on model
checking, which requires simulating tests offline in an “open-loop” manner. We
will develop tools for analysing test plans by examining the graphs of all
test cases using model-checking tools and algorithms that check the test
cases’ space.
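As referenced in item 2 above, the following Python sketch illustrates the graph-expansion idea: b-threads are written as explicit state machines, each node is a vector of b-thread states, each edge is a requested-and-not-blocked event, and every root-to-leaf path is a candidate test case. The toy b-threads are our own illustration, and a visited-set or depth bound would be needed for models with cycles.

```python
# Illustrative test-case generation by expanding the synchronization graph.
# Each b-thread is an explicit state machine: state -> (sync statement, transitions).
BTHREADS = {
    "add_hot":  {0: ({"request": {"hot"}},  {"hot": 1}),
                 1: ({"request": {"hot"}},  {"hot": 2}),
                 2: ({}, {})},
    "add_cold": {0: ({"request": {"cold"}}, {"cold": 1}),
                 1: ({}, {})},
}

def enabled(states):
    stmts = [BTHREADS[bt][s][0] for bt, s in states]
    blocked = set().union(*(st.get("block", set()) for st in stmts))
    requested = set().union(*(st.get("request", set()) for st in stmts))
    return requested - blocked

def step(states, event):
    # B-threads not reacting to the event stay at their sync point.
    return tuple((bt, BTHREADS[bt][s][1].get(event, s)) for bt, s in states)

def test_cases(states, prefix=()):
    events = enabled(states)
    if not events:
        yield prefix                    # leaf: one complete test case
        return
    for e in sorted(events):
        yield from test_cases(step(states, e), prefix + (e,))

root = tuple((bt, 0) for bt in BTHREADS)
for tc in test_cases(root):
    print(tc)                           # all interleavings of hot, hot, cold
```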
## 3 Novelty prospects:
We will develop an approachable model-based testing (MBT) technique based on
the behavioural programming paradigm. Like existing MBT approaches, we will
give our users tools to focus on modelling system requirements. Unlike
existing MBT, our models will not be convoluted state machines, they will
consist of user stories and scenarios that resemble the test scripts to which
test engineers are accustomed (but greatly improve expressive power by
explicitly specifying what must, may, and may not happen).
## 4 Background and related work:
### 4.1 Testing background
Software has a tendency to fail. As the complexity of software systems grows, it becomes progressively harder to guarantee their quality. Since it is generally impossible to verify the absence of bugs in a real program, the main goal of software testing is to find bugs as early as possible so that they can be fixed at minimal cost [20]. It is therefore important to follow a systematic testing practice with the purpose of increasing product assurance while confirming that the software features work as required. Testing methods can be classified into white-, gray-, and black-box testing according to their accessibility. When test cases are
designed based on information about how the software has been designed or
coded it is called white-box testing [1]. When the design of the test cases
depends only on the input/output, the behaviour or the functional requirements
of the software it is called black-box testing [10]. A mixture of the white-
and black-box testing methods, using the advantages of both methods, is called
gray-box testing. The following list of test levels was defined by the
International Software Testing Qualifications Board (ISTQB) [48]: Unit
testing: testing each hardware or software element; Integration Testing:
finding faults in the interfaces and in the connections between integrated
systems or units; System Testing: confirming that the integrated system meets
the stated features based on system requirements; Acceptance Testing: checking
that the system satisfies the acceptance criteria with respect to user needs,
requirements, and business processes; Regression Testing: testing that the
software works as it did before after modifications are done that are
suspected to add bugs [34]; Smoke Testing: a small group of tests that focuses
on the critical level of functionality of the SUT. It runs whenever a new
build is created or a new build process runs and verifies that the main
functionality is still valid [18]. Classically, in software testing there is a
separation of load testing and stress testing from functional testing [37].
Load testing: putting a load on a software system and measuring the system’s
response. Such tests are accompanied by tools for monitoring the performance
of the system. Stress testing: estimating the limits within which the system
keeps working when it is exposed to heavy loads or when some of its hardware
or software is at risk [37]. Manual and automated testing can be used together at different stages of software quality verification. Automated testing comprises the use of special software tools to execute tests. While there are known weaknesses of automated tests [43], most of the industry is adopting them. Still, some programmers think that test automation costs are high relative to their value and that automated tests should be used carefully [34]. Generally, large systems with extensive complexity need test automation, which offers a large return on investment (ROI). See more details about testing background in Appendix A.
### 4.2 Testing coverage
In software testing, coverage measures, known as code coverage and test coverage, are important metrics and benchmarks by which to measure test quality. Code coverage is a metric that evaluates how many parts of a program have been tested. It is a form of white-box testing which identifies the areas of the program exercised, and those not exercised, by a set of test cases. Different researchers have compiled lists of code coverage criteria [48, 36, 31]. These lists
include, for example, Statement coverage; Branch coverage; Function coverage;
Loop coverage; Condition coverage; and Finite State Machine coverage. Test
coverage is defined as a metric in testing that measures the amount of testing
performed by a set of tests related to the system under test (SUT). Test
coverage is considered to be black-box testing. Test coverage types are:
Features coverage [39]; Requirements coverage [49]; and Input parameters
coverage [10].
Test case generation based on coverage has advantages and disadvantages. The advantages are as follows: first, reliability seems to increase with test coverage [55]; second, code coverage provides the ability to select a set of tests that significantly improves coverage and to prioritize them [35]; and third, based on observations in industry, increasing code coverage becomes a motivation for improving tests [56]. The disadvantages are: first, the number of test cases generated in order to achieve more coverage grows exponentially and may be impractical; second, there is no known underlying theory that predicts how much quality improves with coverage [9]; and third, full code coverage (100%) does not guarantee the absence of defects, especially in concurrent systems (the main concern of our proposal), where the test cases may cover each part of the system but not its concurrent behaviour [45]. See more details about testing coverage in Appendix A.
### 4.3 Test cases coverage using CTD
#### 4.3.1 CTD – background
Running all possible test cases is impractical in large and complex systems
since the total number of possible valid test cases is usually prohibitively
large (exponential in the number of requirements). Therefore approaches are
needed to generate sets of test cases that are substantially smaller than
exhaustive test sets but still “cover” systems’ requirements and are effective
at detecting faults. Combinatorial Test Design (CTD) is an approach for
solving this challenge. The approach is based on modelling a test as a set of
parameters, each with a finite set of values, and then sampling the test space
by combining possible assignments of values to parameters in a systematic
fashion. CTD methods have proven very useful in reducing the number of tests while increasing productivity. The source of this success is as follows. If we assume that all the faults in a system are triggered by combinations of t or fewer parameter values, then a test suite in which each such combination appears in at least one test case is effectively equivalent to an exhaustive test [38].
yield small test suites that cover all such combinations. Empirical studies
about software quality and reliability found that, in reality, most bugs are
triggered by very small combinations of parameter values and that CTD improves
the effectiveness of bug hunting considerably [38]. In our research, we will
study how these methods can be extended to sequence testing and to coverage
criteria that arise in the context of behavioural testing. See more details
about CTD background in Appendix A.
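To illustrate the CTD idea (a minimal greedy sketch of our own, not an
algorithm from [38]; real CTD tools use far more sophisticated
constructions), the following Python code builds a 2-way (pairwise) covering
test suite over a small parameter model:

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    # parameters: dict mapping parameter name -> list of possible values.
    # Returns test cases (dicts) such that every value pair of every two
    # parameters appears in at least one test case.
    names = list(parameters)
    uncovered = {(p1, v1, p2, v2)
                 for p1, p2 in combinations(names, 2)
                 for v1 in parameters[p1] for v2 in parameters[p2]}
    suite = []
    while uncovered:
        # Greedy step: pick the candidate covering the most uncovered pairs.
        # (Enumerating the full product is fine for this toy model only.)
        best, best_cov = None, -1
        for values in product(*(parameters[n] for n in names)):
            test = dict(zip(names, values))
            cov = sum((p1, test[p1], p2, test[p2]) in uncovered
                      for p1, p2 in combinations(names, 2))
            if cov > best_cov:
                best, best_cov = test, cov
        suite.append(best)
        uncovered -= {(p1, best[p1], p2, best[p2])
                      for p1, p2 in combinations(names, 2)}
    return suite

model = {'browser': ['chrome', 'firefox'],
         'os': ['linux', 'windows', 'mac'],
         'lang': ['en', 'he', 'fr']}
print(len(pairwise_suite(model)))  # about 9 tests vs. 18 exhaustive ones
```

For this model the pairwise suite needs roughly half of the 18 exhaustive
tests; the gap widens rapidly as the number of parameters grows.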
#### 4.3.2 CTD – related works
Classical CTD is designed for covering parameter values, not different
orderings of events. This makes it less effective for testing reactive
systems. A variant of CTD called sequence testing addresses this weakness by
focusing on t-length sequences of events from a finite set E and requiring
that every such sequence occur as a subsequence of at least one test case. The
elements of a t-length target sequence do not have to appear contiguously in
the test case.
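The subsequence notion can be pinned down with a small Python check (our
illustration): a target sequence is covered by a test case if its events occur
in the test case in the same order, possibly with other events in between.

```python
def is_subsequence(target, test_case):
    # True if the events of `target` occur in `test_case` in order,
    # not necessarily contiguously.
    events = iter(test_case)
    return all(e in events for e in target)

# The target ('a', 'c') is covered by the test case ('a', 'b', 'c'),
# even though 'b' occurs between the two target events...
assert is_subsequence(('a', 'c'), ('a', 'b', 'c'))
# ...but order matters: ('c', 'a') is not covered.
assert not is_subsequence(('c', 'a'), ('a', 'b', 'c'))
```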
The first variant of sequence testing allowed only one triggering of each
event in a test case [57]. Later versions allowed more than one triggering in
a test case and added support for advanced constraints and other features [47,
17, 8]. Current sequence testing methods follow a two-step approach: the
first step is to generate a list of all relevant sequences of length t (called
’Target Sequences’) and the second step is to generate test cases to cover the
list of all target sequences (called ’Test Sequences’); see [57]. In the test
sequence generation step, [57] first uses a greedy algorithm that handles
constraints between two events; then a labelled transition system is proposed
to represent the SUT's requirements, and graph path methods are used to find
the optimal valid test cases. Based on this work, additional
work has been done to expand the language of constraints by, e.g., adding the
possibility of contiguous values [47] and by allowing more complex
relationships between more than two factors [17]. Two main problems remain
open: how to model the SUT's requirements in a way that allows the creation of
valid test cases based on t-way testing, and how to make the solution
efficient in runtime and size; otherwise, the solution will not be applicable
to complex systems. A new research study proposes to model
the SUT by a finite state machine and to generate the test cases using
automata theory [8]. In each of the papers mentioned above, the researchers
present algorithms for generating test cases, evaluate them, and report the
results as the number of test cases and their total length. All of these
examples demonstrate that the number of test cases needed to cover all cases
is significantly lower than the overall number of options.
In our research, we will continue this line of work by adding new algorithms
and coverage criteria that fit the new testing methodology that we are
proposing. One specific challenge that arises in our setting is how to take
advantage of the modular nature of the model that we are proposing.
Specifically, existing methods rely on an analysis of automata and state
machines whose sizes grow exponentially with the complexity of the model. In
our modelling approach, these state machines are described implicitly as the
product of smaller machines. The challenge left for research is to generate
effective test suites based on an analysis of the components, without an
explicit construction of their product, whose size is exponentially larger
than the sum of the components' sizes.
### 4.4 Model-based testing (MBT)
#### 4.4.1 MBT background
A testing methodology based on a model that defines how the SUT can be
interacted with is called Model-Based Testing (MBT) [26]. MBT is a black-box
testing technique [30]. The general MBT process is as follows: based on the
test requirements and the test plan, a test model is constructed, and the test
model is used to generate test cases and test oracles. Since there is usually
an infinite number of possible tests, test selection criteria are adopted to
select the proper test cases. The test execution results in a report that
contains the outcomes of the executed test cases. Finally, these results are
analysed and, if needed, corrective actions are taken: for each test that
reports a failure, the cause of the failure is
determined. The most widely used state-based models in MBT are: finite state
machines (FSM) [52], extended finite state machines (EFSM) [37], UML state
machine diagram, timed automata, Markov chain usage models [50], and labeled
transition systems (LTSs) [30]. There is a lack of scientific knowledge
regarding these techniques, making it difficult to transfer them to the
software industry [15].
MBT limitations and challenges are [16]: Partial model – the transition from
system specification to a complete model of the system, including all
interfaces, interactions between the various components, and the remaining
relationships, is in many cases incomplete; Outdated model – the basis on
which the model is created (requirements, design, UML, etc.) is in many cases
updated during the life of the project, whether due to overload and pressure
or for other reasons, with the immediate result that the generated test cases
no longer cover the actual system behaviour; Skill level – using an MBT
approach requires knowledge of software modelling notations, test criteria,
test metrics, or languages to generate test scripts [14]; and High diversity –
there is high variance in the characteristics of software projects, while on
the other hand there are many academic solutions for MBT [15]. See more
details about MBT background in Appendix A.
#### 4.4.2 MBT related work
Surveys that were published in recent years [52, 7, 40] presented a
homogeneous picture of the existing situation in both academia and industry in
the MBT world. There are many tools that present themselves as MBT tools. The
surveys identified 70 MBT tools published in 2006-2016, of which 40 are
academic tools, 15 are commercial tools, and 15 are open source [7]. Of the 5
components that define MBT [40], most tools cover the creation of the model
from the requirements or specifications and the creation of the test scenarios
[40]; a smaller portion also supports creating the test data, and very few
tools implement the more complex steps of creating scripts, running the tests,
and the final step of analysing the results. These MBT tools model and
generate test cases covering functional requirements but not non-functional
requirements [52]. In addition, the surveys indicate that about 20% of the MBT
products model the requirements using UML charts (all chart types) or model
the system with requirements expressed in formal or semi-formal notations [7].
In our research, we propose to develop a new MBT method. Our method will be
based on a system specification modelling language. The model that users
create with our language will be fed to algorithms for the automatic creation
of minimal sets of test cases that cover all of the system's requirements,
automatic execution of the generated test cases, algorithms for analysing the
results of the tests, and other tools that support the testing process.
### 4.5 Tools used
#### 4.5.1 Automatic testing tools
Cucumber [5] / Behat [6] and Gherkin [23] – behaviour-driven development (BDD)
testing tools. Cucumber is an automatic testing tool that executes software
tests in two layers. In the first layer, tests are written in a formal
language such as Gherkin; in the second layer, each line of the first layer is
mapped to a function that executes the test. The second layer supports Java,
JavaScript, C++, and other languages. Behat is a semi-official BDD automated
testing tool, similar to Cucumber, for PHP. Cucumber enables automation of
functional validation in a format (such as plain English) that is easily
readable and understandable by business analysts, developers, testers, and
others. Gherkin is a popular language used by Cucumber to define test cases.
Its main objective is to enable users to specify tests in a way that clients
can understand. Gherkin tests are organized into features. Each feature is
made up of a collection of scenarios, each defined by a sequence of steps
following the Given-When-Then (GWT) rule [44]. A simple test case example in
Gherkin is illustrated below:
Feature: Login Action
Scenario: Successful Login with Valid Credentials
Given User is on Home Page
When User Navigate to LogIn Page
And User enters UserName
And Password
Then Message displayed Login Successfully
Selenium – Selenium [46] is an object-oriented library for test automation
based on browser emulation. It is a suite of tools for automating web
application testing across platforms. Selenium runs in several browsers and
operating systems and can be used with a variety of programming languages and
testing frameworks. Using Selenium brings many benefits because it allows the
use of a common API to control different web browsers. It can be used to test
applications from the end user's perspective through Selenium testing scripts,
and it allows easier detection of browser incompatibilities by running tests
in different browsers. It simulates users' interactive operations with web
applications [54]. In our preliminary research we tried to use Cucumber and
Gherkin as our system requirements modelling language (we describe this in the
preliminary results section), but despite the widespread use of these tools in
the industry, the vocabulary of this language was not rich enough for our
purpose.
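For concreteness, a minimal Selenium sketch in Python follows; it mirrors the
Gherkin login scenario above, and the URL and element locators are
hypothetical placeholders:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()          # the same API drives Firefox, Edge, ...
try:
    driver.get("https://example.com/login")          # hypothetical SUT
    driver.find_element(By.ID, "username").send_keys("user")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "submit").click()
    assert "Login Successfully" in driver.page_source
finally:
    driver.quit()
```

In a Cucumber setup, a snippet like this would live in the second (step
definition) layer, with the Gherkin scenario above as the first layer.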
#### 4.5.2 SMT solver
‘Z3’ SMT solver library [24] – An SMT solver is a tool for deciding the
satisfiability (or dually the validity) of formulas; it can handle equality
reasoning, arithmetic, fixed-size bit-vectors, arrays, quantifiers, and other
useful theories. Given a set of constraints, an SMT solver looks for a model
that satisfies the constraints or proves that no such model exists. SMT
solvers enable applications such as extended static analysis, predicate
abstraction, test case generation, and bounded model checking over infinite
domains. Z3 is an SMT solver from Microsoft Research. It is targeted at
solving problems that arise in software verification and software analysis.
Consequently, it supports a variety of theories needed in this domain
including the regular expression and string manipulation theories that we have
used in our preliminary work. Z3 uses advanced algorithms for quantifier
instantiation and theory combination. The first external release of Z3 was in
September 2007. Users interact with Z3 using either a textual format or a
binary API. Three textual input-formats are supported: The SMT-LIB format, the
Simplify format, and a low-level native format in the spirit of the DIMACS
format for propositional SAT formulas [12]. One can also call Z3 procedurally
using an ANSI C API, an API for .NET (the managed common language runtime), or
the Z3 Python API called ‘z3py’ (we use the latter).
At a high level, the Z3 solver takes as input a logical formula and then tries
to decide if the formula is satisfiable. In the process, solvers employ
various heuristics that first transform the input formula into a suitable
representation and then use search procedures to check for satisfiability. In
total, the Z3 SMT solver defines more than 100 such heuristic transformations
(called tactics) that can be combined together to define a custom strategy.
Although the above sequence of transformations (tactics) works well for some
types of input formulas (e.g., in case every variable has a lower and an upper
bound), for other formulas a different set of tactics is more suited. In some
cases, the suitable set of tactics can be obtained by a small modification of
the original tactic while in others a completely different set of tactics
needs to be defined [2]. In the proposed research, we will use Z3 for advanced
analysis of the testing models. For example, in a preliminary work, we have
used Z3 to compute a set of tests that satisfy certain coverage criteria. For
this, we may need to add tactics and theories.
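To illustrate the kind of encoding we use, here is a toy z3py sketch (the
alphabet and length bound are chosen for illustration only): it asks Z3 for a
short word over {a, b, c} that covers the target sequence a…b, written as the
regular expression Σ*aΣ*bΣ*.

```python
from z3 import Concat, InRe, Length, Re, Solver, Star, String, Union, sat

# Sigma* a Sigma* b Sigma*  over the alphabet {a, b, c}
sigma = Union(Re('a'), Re('b'), Re('c'))
pattern = Concat(Star(sigma), Re('a'), Star(sigma), Re('b'), Star(sigma))

w = String('w')
s = Solver()
s.add(InRe(w, pattern), Length(w) <= 2)  # demand a shortest covering word
if s.check() == sat:
    print(s.model()[w])                  # prints "ab"
```

Our preliminary work conjoins many such membership constraints, one per target
sequence, to search for a small covering set of test cases.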
### 4.6 Reactive system testing – the challenge
#### 4.6.1 Reactive systems testing - background and the challenge
A reactive system, such as an automatic transportation system, a satellite, a
drone, or a web application, is characterized by the on-the-fly use of sensors
and actuators. Such systems sample the environment at a high rate and produce
rapid responses to events. By nature, reactive systems generate a plethora of
execution flows that progress simultaneously, with concurrent and parallel
activities responding to complex situations. Traditional testing methods,
especially code coverage and static code analysis, do not cope well with
issues of parallelism and concurrency, which cause non-deterministic behaviour
and an exponential growth in the number of test cases needed to cover all
potential cases. Our research will focus specifically on testing reactive
systems and on managing the large space of possible interactions that such
systems allow. See more details about reactive system testing background and
the challenge in Appendix A.
#### 4.6.2 Reactive systems testing – related work
Researchers have developed techniques for testing reactive systems that
specifically take into account concurrent software features such as
non-determinism, synchronization, and communication. Much of the work in this
domain assumes that requirements are specified using a formal notation, e.g.,
Event-B specifications. In [53], for example, the authors propose a
model-based testing
approach where models are Event-B specifications. This approach provides
system developers with a template that can generate test scenarios which
contain both input values and expected results. A different approach is
required when the system contains COTS components and model-based testing is
more difficult to apply directly. In [42], for example, the authors propose a
methodology that traverses a Büchi automaton that models the requirements. The
traversal
starts from the initial state of the automaton and generates a sequence of
input values with which the black-box system is fed in order to obtain a
corresponding sequence of output values. In [22], the authors present a
dataset of data race faults. The dataset contains 985 data race faults, which
can be used to evaluate and optimize race detection
techniques. The authors also used the dataset to evaluate three race detectors
[22]. Another group of proposed methods deals with safe programming in the
sense of interacting with other processes. The approach presented in [13], for
example, works towards enabling safe programming of reactive systems. The
approach consists of two parts: 1) a programming language for implementing,
specifying, and compositionally (assume-guarantee) testing the high-level
reactive software; and 2) a runtime verification system to ensure that the
assumptions used during design-time hold at runtime. Combining a high-level
programming language and its systematic testing with runtime enforcement
bridges the gap between software testing that makes assumptions about the low-
level controllers and the physical world, and the actual execution of the
software on a real platform in the physical world.
### 4.7 Behavioural Development
The need for describing and specifying system requirements through scenarios
and behaviour-driven descriptions has existed for a long time [28]. Many
techniques, methodologies, and tools have been developed throughout the years
with varying success. In this work we use a modelling approach called
Behavioural Programming (BP). This is an approach that promotes the use of
scenarios and anti-scenarios for describing complex behaviours. The approach
is based on Statecharts [27] and Live Sequence Charts (LSC) [11]. See more
details about Statecharts, LSC and executable specification in Appendix B.
#### 4.7.1 BP + COBP
Describing a system by scenarios and behaviour is a natural way of system
description and specification [51]. BP serves as a link in the transition from
behavioural modelling (e.g., LSC or Statecharts) to behavioural programming in
general-purpose programming languages [28] such as C++, Java, JavaScript [3],
and more. The BP method is described in Section 1.2.1. BP is an extendable
framework. With the basic mechanisms of BP it is possible to define and
develop high-level structures and design patterns, such as break-upon or
interrupt, and to extend the language with different modelling idioms. A
break-upon pattern, for example, can be added to allow the definition of a
structure such as the well-known try-catch idiom of advanced programming
languages, by requesting an event along with waiting for one or more other
events. If the requested event is selected, the process continues its normal
activity (try). If instead one of the waited-for events is selected, the
alternative treatment (catch) is executed. Similarly, one can use an interrupt
pattern that breaks the regular flow of the b-thread and skips to a new flow
when some event is triggered.
The BP semantics is defined via a labelled transition system (LTS) [19, 28],
where each b-sync point is a state and each event selection is a transition.
In general, there may be more than one run of a b-program, depending on the
order in which events are selected from the set of requested and unblocked
events. These runs allow system designers to separate the specification of
possible behaviours from the process of prioritization and event choice.
Moreover, this allows the b-program to be expressed in the form of an
unfolding graph, and the program execution between event occurrences is
treated as atomic [28].
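To make the request/wait-for/block semantics concrete, here is a toy Python
event-selection loop (an illustration of the semantics only, not the BPjs
API), in which b-threads are generators that yield synchronization statements:

```python
def run_bprogram(*bthreads):
    # Advance every b-thread to its first synchronization point.
    threads = [(t, t.send(None)) for t in bthreads]
    while threads:
        blocked = set().union(*(s.get('block', set()) for _, s in threads))
        requested = set().union(*(s.get('request', set()) for _, s in threads))
        enabled = requested - blocked
        if not enabled:
            return                    # no requested-and-unblocked event
        event = sorted(enabled)[0]    # a deterministic arbiter, for the sketch
        print('selected:', event)
        resumed = []
        for t, stmt in threads:
            if event in stmt.get('request', set()) | stmt.get('waitFor', set()):
                try:
                    resumed.append((t, t.send(event)))
                except StopIteration:
                    pass              # this b-thread has finished
            else:
                resumed.append((t, stmt))
        threads = resumed

def add_hot():                        # the classic hot/cold example
    for _ in range(3):
        yield {'request': {'hot'}}

def add_cold():
    for _ in range(3):
        yield {'request': {'cold'}}

def interleave():
    while True:
        yield {'waitFor': {'hot'}, 'block': {'cold'}}
        yield {'waitFor': {'cold'}, 'block': {'hot'}}

run_bprogram(add_hot(), add_cold(), interleave())  # hot, cold, hot, cold, ...
```

Replacing the deterministic arbiter with an enumeration of all enabled events
is exactly what unfolds a b-program into the LTS described above.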
With BP, specifications of reactive systems are modelled with b-threads that
model individual requirements, bundled as a b-program. An obvious limitation
of this approach is that requirements sometimes conflict, or are not detailed
enough, and composing them automatically without global consideration may
yield a composition that produces undesired joint behaviour. The solution can
come from using the BP model-checking tool (BPMC). The BPMC tool can verify
behavioural programs directly, without translating them into a
model-checker-specific language. B-programs can serve both as elements of a
final executable system as well as elements of an abstract system model to be
subjected to verification [29].
This proposal is based on the BPjs framework [29], a platform supporting the
growing body of work in behavioural programming under one roof, focused on
execution and verification of b-programs. BPjs defines a generalized version
of BP with well-defined extension points and external interfaces. Thus, BPjs
can serve as a common platform for researching and disseminating ideas in BP.
BPjs allows b-programs to be embedded in existing software systems by sending
data from a host application to the b-program and sending data from the
b-program to the host. A super-step based mechanism takes care of embedding
the events within the run of the program in a systematic way. BPjs is
implemented as a Java library that runs code written in JavaScript. It uses
the Mozilla Rhino JavaScript engine to execute regular JavaScript code, and
custom code for handling synchronization calls. The BPjs framework includes an
automatic model-checking tool for verifying the developed software against a
formal specification. This tool allows for an exhaustive analysis of the code,
producing formal guarantees of quality.
Context Oriented BP (COBP) combines BP with context idioms that explicitly
specify when scenarios are relevant and what information they need. The core
idea is to connect the behavioural model with a data model that represents the
context, allowing an intuitive connection between the models via update and
select queries. Combining BP with context-oriented programming brings the best
of the two worlds, solving issues that arise when using each of the approaches
separately. COBP is a layer above BP [19], and the COBP semantics extends the
BP semantics. The life cycle of a context-aware b-program is as follows. Each
context-aware b-thread (CBT) is bound to a query on the contextual
data. Whenever a new answer exists for a query, new live copies are spawned
for the relevant CBTs. The live copies repeatedly execute an internal logic
that may depend on the contextual data and then synchronize with each other,
by submitting a synchronization statement to a central event arbiter. Once all
live copies have submitted their statements, the arbiter selects an event that
was requested and was not blocked. The event is also passed to the Effect
Function which may update the contextual data, depending on its specification.
The (updated) contextual dataset is passed back to the CBTs, along with the
selected event. All CBTs that either requested or waited for this event are
resumed, while the rest remain paused until the next cycle [19].
## 5 Methods and work plan
Our work consists of several stages. Each one builds on the results of the
previous one. There are three types of stages.
The first type is pure innovative work - this is the major work in our
proposal. This type includes:
1. A language (DSL) and/or a graphic tool (e.g., Blockly) used by test
developers and system engineers (paragraph 2.1)
2. Using BP concepts (such as request, wait-for, block, break-upon, interrupt)
in the testing of processes and within each action (screen/field) (paragraph
2.2)
3. Controlling the testing process using break-upon (and context) or block to
find out whether an anomaly is a bug or is caused by another process
(paragraph 2.6)
4. System quality measurement and assessment (paragraph 2.5)
The second type is evaluation tools that are required to validate, examine or
analyse the results of the first type. This type includes:
1. How to validate our methodology (paragraph 2.5):
(a) Our methodology vs. a conservative methodology, using different groups of
testers
(b) Finding unknown and known bugs in an open-source project (e.g., Moodle or
OpenEMIS) using the new methodology and framework
(c) A sandbox with planted bugs
2. Tools for assisting test case development (e.g., debugger, screenshots,
reports) and for validating test cases (paragraph 2.6)
3. (Nice to have) Mathematical analysis of the exponential blow-up without
blocking
The third type is an implementation framework and tool kits that allow
examining the applicability and completeness of the results of the previous
steps (paragraph 6.3).
## 6 Preliminary results
### 6.1 Test case coverage – initial results
#### 6.1.1 Test case coverage – our suggestion.
One of the challenges in our research proposal is test case coverage. The
research on this issue and the empirical evidence for test case coverage
methods in Combinatorial Test Design (CTD) [32] are described in paragraph
4.3.1. In most of the work on the subject, it seems that the barrier of
laboratory examples has not yet been breached. The examples used by Kuhn et
al. are good enough, and the overall impression created by the obtained
results is convincing that the direction of the solution is correct. Even
though the examples are minimal, they make it possible to understand the
innovation and the algorithms, but they are not complex enough relative to
composite systems and real-world models. Another consequence of this issue is
that these works include only a small number of examples with very small
models. The algorithms they developed for generating the system model as an
LTS, automaton, or other modelling technique are not applicable when the
examples contain more input parameters. In addition, because their samples
have relatively few elements and they focused on the methods they developed,
they did not pay much attention to the efficiency of the algorithms in terms
of complexity and runtime. Finally, in the works mentioned above that
presented the t-way coverage algorithms, the basic assumption was that the SUT
is already represented as a model (such as an LTS), with no recommendation as
to how to obtain this model.
Our suggestion in this proposal is a comprehensive methodology for generating
the test suite in three stages. The first stage begins with defining the test
process in BP, based on the system requirements. The second stage creates a
model via model checking, in which essentially all of the test cases are
represented as an LTS. The third stage, based on that model, creates a minimal
set of test cases using the t-way method that covers all scenarios. In
addition, we developed new methods for t-way test case generation with minimal
coverage by using solvers such as the ‘z3’ library in Python. This method has
two steps, as in Kuhn's work (Target and Test Sequences), but unlike their
work, our focus is on representing the system model inside the solver, as a
regular expression (RegEx). The first and second stages of our methodology
harness the capabilities of BP, while the third stage builds on Yu et al. [57]
from the Kuhn group and its followers [47, 17, 8]; in our early work we
suggested solving it using the ‘z3’ solver. Our suggested approach offers a
number of advantages over what has been presented so far. By using the BP
infrastructure, we are able to model composite and large systems, e.g.,
on-board satellite software [4], and we use a common Python library as a
solver that was proven for that system. For the minimal coverage problem that
we present, we say that $L^{\prime}$ is a t-way coverage of $L$ if:
$\forall\sigma_{1}\cdots\sigma_{t}\in\Sigma^{t}\colon\quad(\Sigma^{*}\sigma_{1}\Sigma^{*}\cdots\Sigma^{*}\sigma_{t}\Sigma^{*})\cap L\neq\emptyset\;\Rightarrow\;(\Sigma^{*}\sigma_{1}\Sigma^{*}\cdots\Sigma^{*}\sigma_{t}\Sigma^{*})\cap L^{\prime}\neq\emptyset$ (1)
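For finite languages, definition (1) can be checked directly; the following
Python sketch (our illustration) does so by brute force:

```python
from itertools import product

def embeds(target, word):
    # Subsequence check: the letters of `target` occur in `word` in order.
    letters = iter(word)
    return all(c in letters for c in target)

def is_t_way_coverage(L, L_prime, sigma, t):
    # Definition (1): every t-length sequence embeddable in a word of L
    # must also be embeddable in some word of L'.
    for target in product(sigma, repeat=t):
        if any(embeds(target, w) for w in L) and \
           not any(embeds(target, w) for w in L_prime):
            return False
    return True

# 'acb' and 'bca' together embed every 2-letter sequence over {a, b, c},
# so they are a 2-way coverage of the full permutation language.
perms = ['abc', 'acb', 'bac', 'bca', 'cab', 'cba']
print(is_t_way_coverage(perms, ['acb', 'bca'], 'abc', 2))  # True
```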
#### 6.1.2 Test case coverage – our examples
Because of the importance of requirements coverage, we examined a number of
options for coverage. The following two examples are, in our opinion,
representative and serve as a basis for our research.
The first example that we implemented was the ‘IBM Ponder This’ challenge of
January 2013 [33]. We tried to crack a riddle published by Prof. Margalit on
the IBM Israel website; the ponder (riddle) is shown in Figure 3. We found
that this riddle represents our test case coverage problem, and the same
algorithms should solve both. The riddle challenges one to generate a minimal
list of letter combinations such that every three-letter word, in any letter
order, is covered at least once. The testing analogy of the riddle is that
each letter represents a possible parameter or possible state of the SUT, and
one is required to produce a minimal list of test cases (each ’word’ is a test
case) that covers all of the generated 3-way words. In this example we did not
use BP for the first and second stages because the output was given (a long
word with 18 different letters). We divided the problem into two parts: first,
generating a list of all t-way possibilities; and second, finding the minimal
number of words, each a permutation of the initial word, that cover all of the
words listed in the first part. An example of the solution's source code is
shown in Figure 3. In both parts we generated a RegEx that relaxes the
required solution and ran the ‘z3’ solver to find it. We represent the riddle
in general notation:
$\left\{\Pi_{\Sigma^{\prime}}(w)\colon w\in L^{\prime}\right\}=\left\{\Pi_{\Sigma^{\prime}}(w)\colon w\in L\right\}$ (2)

$\Pi_{\Sigma^{\prime}}(w)=\begin{cases}w[1]\,\Pi_{\Sigma^{\prime}}(w[2..]),&\text{if }w[1]\in\Sigma^{\prime}\\ \Pi_{\Sigma^{\prime}}(w[2..]),&\text{if }w[1]\notin\Sigma^{\prime}\end{cases}$ (3)

$L=\Sigma!$ (4)
The RegEx for the first part (in ‘z3’ notation), the definition of the
auxiliary predicate NOT(c, S), and the RegEx for the second part are given
with the source code in Appendix C.
Perhaps modelling the SUT as a RegEx over characters (letters) is not the best
example of how a solver such as ‘z3’ can solve composite problems, but at this
point in our research it is good enough. A source code demonstration is shown
in Appendix C.
Figure 3: IBM Ponder.
In the second example, we mimic the coverage process of one of the latest
works by Kuhn's followers, who represent the SUT with automata [17]. We
demonstrate the full process on the two simple examples that they presented: a
vault and an elevator. As a proof of concept (POC) of our proposed research,
we implemented a three-stage process. In the first stage we implemented the
two examples in BP, generating b-threads with b-sync statements (Request,
WaitFor, and Block) that mimic the automata as in their work. In the second
stage we ran the model checker in a specific way that generates an automaton
(in graph format) from the b-program; the output is a finite state machine
(FSM) representation of the automaton. Using the GraphvizOnline website [25],
we present the automaton in Figure 4. From this graph we convert the automaton
to the required format using the fsm2regex website [21] and generate a RegEx,
as shown in Figure 5. In the third stage we convert the RegEx generated by
fsm2regex to ‘z3’ notation and run the Z3 solver to find the minimal set of
test cases that covers the examples.
Figure 4: Elevator automaton graph. Figure 5: Elevator translation to RegEx.
### 6.2 POC – feasibility studies
#### 6.2.1 Telephony system – online concept, closed-loop
We started by developing a simulation of a telephony system (TS), built as a
first playground for adopting the BP principles in the system-testing arena.
The telephony system contains the following entities and capabilities:
telephony company users – add, update, and delete users; establishing a call
between users with different call types (domestic, international, collect, or
free); sending an SMS between users of different types (SMS or MMS); user
bills – change and check; and an internal telephony database containing the
telephony system's operational data, such as users, calls, SMSs, and tariffs.
Figure 6 shows the telephony system.
Figure 6: Telephony system - system diagram.
The first requirement that we defined and want to test is: ”After each
established phone call, the paying user is billed the correct per-call charge,
based on the given rate.”
Aside from the telephony system, we developed the automated testing tools. The
testing tools use the BPjs framework and simulate the telephony system's users
by applying the TS APIs. Under BPjs we simulated adding users and establishing
calls between randomly selected users (the call parameters, namely call type
and call length, were chosen randomly). The testing system periodically tests
(via a ’testBill’ request) the bill that each user has been charged. Each of
these capabilities was a b-thread, and each activity reacted through the
b-sync mechanism. In the TS we try to generate the test cases as reactions to
TS actions: the user's bill is changed (in the testing system) whenever a call
is established, as defined in the requirement. The testing method was to
periodically check the user's bill, comparing the reference bill calculated in
the testing system to the user's bill in the telephony DB; however, the check
is skipped when there is a known possibility that the bill is wrong. We then
divided the action into two separate threads: the first periodically tests the
bill, and the second establishes calls and updates the user's bill
accordingly.
The first thread is: ”Correct amount is billed?”:

int amt = 0;
wait: createUser(u1?)
forever:
    try:
        request: testBill(u1, amt)
    break upon:
        updateBill(u1, y?);
        amt += y
The second thread is: ”Charge Per call”:

forever:
    wait: call(u1, u2?);
    request: updateBill(u1, f(u1, u2));
The second case that we checked was how adding a requirement affects the
tests, e.g., ”After each SMS sent between two users, the paying user is billed
the correct per-SMS charge, based on the given rate.” As expected, this new
requirement does not affect the existing tests in either thread; we therefore
added a third thread to the testing system.
The third thread is: ”Charge Per SMS”:

forever:
    wait: sms(u1, u2?);
    request: updateBill(u1, f1(u1, u2));
The method that we try to elaborate in the telephony system is that the test
threads react to actions in the SUT. We call this the ’online’ testing method,
or ’closed-loop’ testing.

The advantage of this method is that the basic test threads (i.e., ”charge per
call”, ”charge per sms”, or ”correct amount is billed?”) are generated once
and act upon actions from the SUT. This means that test scenarios are
generated ’on the fly’ and try to cover the scenarios or use-cases of the SUT
the way it happens in the ”real world”.

The major disadvantage is that there is no way to run model checking on the
test scenarios. The graph built from the b-thread tests would let us trace
options over the test scenarios and choose the best scenarios to run
accordingly; in the ’on-the-fly’ method, where the test cases and b-threads
are built while the test runs, this is impossible.
#### 6.2.2 Cucumber / Gherkin
Another path that we researched was using off-the-shelf (OTS) tools as a DSL
for the foreground (or frontend) and Gherkin as an actuator tool. We checked
Cucumber and Gherkin as candidates.
Cucumber is a testing tool that supports the behaviour-driven development
(BDD) framework. It defines application behaviour using simple English text,
in a language called Gherkin. Cucumber works by reading the code written in
plain English text (i.e., in Gherkin) in the ’feature’ file and finding the
exact match for each step in the step definitions (a source code file). The
piece of code to be executed can belong to different software frameworks, such
as JavaScript, Selenium, etc. Each feature file holds one ’Feature’ and may
contain one or more ’Scenarios’. Each scenario is built from ’Steps’; each
scenario is a test scenario that carries the test logic. The vocabulary for
the logic steps consists of the following idioms: Background, Given, When,
Then (’And’ repeats the previous one). Gherkin can use words in the steps as
variables; these variables appear between double quotation marks (”xxx” or
”123”).
The following is an example of a scenario written in Gherkin as part of the
telephony system test:
Scenario: charge Per Call
Then forever
Given ”call” in ”Calls”
Then calculate charge
And update bill according ”call” type
For each scenario and for each step line in a scenario, Cucumber finds the
matching function; for example, for the step ’Given ”call” in ”Calls”’:
@Given("{string} in {string}")
public void givenObjInClass(String obj, String inClass) {
    if (obj.equals("usr") && inClass.equals("User")) {
        // Do something when "usr" in "User"
    } else if (obj.equals("call") && inClass.equals("Calls")) {
        // Do something when "call" in "Calls"
    }
}
Or, for the step ’Then calculate charge’:

@Then("calculate charge")
public void chargeBillCalc() {
    // Do something
}
We tried to generate a mapping between Gherkin idioms and BP idioms, such as:
a Scenario generates a b-thread; ’Given’ generates a b-sync with ’Request’;
’When’ generates a b-sync with ’WaitFor’; and ’Then’ carries the test's logic.
But this mapping covered the BP vocabulary poorly, especially the more
powerful BP idioms such as ’Block’, ’Break-upon’ and ’Interrupt’.

We found that the strength of Cucumber and Gherkin, which we are going to
adopt in our methodology, is the separation of testing into (at least) two
levels of implementation. The first level is the scenario process level (like
the feature-file level) and the second level is the handling level (like the
code-file level).
### 6.3 Proof-of-concept (POC) tools
The POC implementation demonstrates all of the above-mentioned contributions.
Figure 7: POC framework structure.
In order to prove the feasibility of using BP to design system tests and to
produce a graph of all possible states from which the test scenarios are
constructed, we took one use case and applied the method in a limited way. We
were able to realize the method and to base the research on this success. Our
proposal in this work is to build a tool kit with four levels. Each level
serves a different purpose and role, and enables communication between
neighbouring levels. Figure 7 shows the POC structure, its components, the
layers on which it rests, and the supporting tools. This proposal is based on
the proof-of-concept (POC) demo that we developed.
1. Infrastructure level – the lowest level. It serves all other levels and
provides a common infrastructure for all kinds of automated testing using this
tool kit; it is adjusted by the software engineers or automated-test
developers. The infrastructure contains all of the functions that actually
activate the test process (e.g., running a web browser via the Selenium API).
This level is mostly implemented on top of BPjs and COBP and uses the context
mechanism, b-threads, and b-sync, with the full BP vocabulary.
2. Handler level – the second level. On this level we define the events (in
the POC these functions are called “define event”) that run at each test step
in the test case, including the test's internal logic, such as the order of
filling in fields on a web form (the process logic belongs to the third
level). This level is developed by an automated-test developer. Each
event-definition group of functions handles one object (e.g., a form process)
and is very specific to the test case and to the system under test (SUT).
3. Process level – the third level. This level contains the business logic of
the test case process and the selection of which test cases will be executed.
The process level mimics the ’feature’ definition in the Cucumber/Gherkin
infrastructure; it defines the processes that require testing, the test cases,
and the process logic. We can generate end-to-end tests or a single process
test, a sanity test or a negative test. This should be the level that the
system engineers and the test engineers can write or discuss, and it documents
the traceability between the system's requirements and specifications and the
system test cases. At this level the test definitions should include all of
the capabilities derived from BP methods: using Request, WaitFor, Block,
Break-Upon, and Interrupt, and using context to manage the system state and
test data along the test process. At this level the model checker can analyse
the test processes.
4. Abstraction level – the fourth level. This level defines the same tests as
the process level, at a higher level of abstraction. The tests defined here
will be translated automatically into test processes. At the abstraction
level, the tests will be written using a new technique, such as diagrams or a
formal language defined by a DSL, and will not mention a programming language.
The DSL will have logical structures and repetitions, but without explicit
”if…then…else” or ”for/while/do” loops.
Appendix C shows details and source code capturing the POC implementation. We
developed one test case on the Moodle website as a preliminary study for this
proposal.
## References
* [1] Dilara Ateşoğulları and Alok Mishra “White Box Test Tools: A Comparative View” In _International Journal of Information and Computer Security_ 11, 2019, pp. 79–90
* [2] Mislav Balunović, Pavol Bielik and Martin Vechev “Learning to solve SMT formulas” In _Advances in Neural Information Processing Systems_ 2018-Decem.NeurIPS, 2018, pp. 10317–10328
* [3] Michael Bar-Sinai, Gera Weiss and Reut Shmuel “BPjs- A framework for modeling reactive systems using a scripting language and BP” In _arXiv_ , 2018 arXiv:1806.00842
* [4] Michael Bar-Sinai, Achiya Elyasaf, Aviran Sadon and Gera Weiss “A scenario based on-board software and testing environment for satellites” In _59th Israel Annual Conference on Aerospace Sciences, IACAS 2019_ , 2019, pp. 1407–1419
* [5] “BDD Testing & Collaboration Tools for Teams — Cucumber” URL: https://cucumber.io/
* [6] “Behat — a php framework for autotesting your business expectations.” URL: https://docs.behat.org/en/latest/
* [7] Maicon Bernardino, Elder M. Rodrigues, Avelino F. Zorzo and Luciano Marchezan “Systematic mapping study on MBT: Tools and models” In _IET Software_ 11.4, 2017, pp. 141–155 DOI: 10.1049/iet-sen.2015.0154
* [8] Andrea Bombarda and Angelo Gargantini “An Automata-Based Generation Method for Combinatorial Sequence Testing of Finite State Machines” In _Proceedings - 2020 IEEE 13th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2020_ , 2020, pp. 157–166 DOI: 10.1109/ICSTW50294.2020.00036
* [9] Xia Cai and Michael R. Lyu “The effect of code coverage on fault detection under different testing profiles” In _Proceedings of the 1st International Workshop on Advances in Model-Based Testing, A-MOST ’05_ , 2005, pp. 1–7 DOI: 10.1145/1083274.1083288
* [10] C. Cheng, A. Dumitrescu and P. Schroeder “Generating small combinatorial test suites to cover input-output relationships” In _Proceedings - International Conference on Quality Software_ 2003-Janua, 2003, pp. 76–82 DOI: 10.1109/QSIC.2003.1319088
* [11] Werner Damm “LSCs : Breathing Life into Message Sequence Charts” In _Formal Methods in System Design 2001_ , 2001, pp. 45–80
* [12] Leonardo De Moura and Nikolaj Bjørner “Z3: An Efficient SMT Solver” In _LNCS_ 4963, 2008 URL: http://research.microsoft.com/projects/z3.
* [13] Ankush Desai, Shaz Qadeer and Sanjit A. Seshia “Programming safe robotics systems: Challenges and advances” In _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_ 11245 LNCS Springer International Publishing, 2018, pp. 103–119 DOI: 10.1007/978-3-030-03421-4_8
* [14] Arilo C. Dias Neto, Rajesh Subramanyan, Marlon Vieira and Guilherme H. Travassos “A survey on model-based testing approaches: A systematic review” In _Proc. - 1st ACM Int. Workshop on Empirical Assessment of Software Engineering Languages and Technologies, WEASELTech 2007, Held with the 22nd IEEE/ACM Int. Conf. Automated Software Eng., ASE 2007_ , 2007, pp. 31–36 DOI: 10.1145/1353673.1353681
* [15] Arilo C. Dias-Neto and Guilherme H. Travassos “A Picture from the Model-Based Testing Area: Concepts, Techniques, and Challenges” In _Advances in Computers_ 80.C Elsevier Inc., 2010, pp. 45–120 DOI: 10.1016/S0065-2458(10)80002-6
* [16] Arilo Claudio Dias-Neto and Guilherme Horta Travassos “Model-based testing approaches selection for software projects” In _Information and Software Technology_ 51.11 Elsevier B.V., 2009, pp. 1487–1504 DOI: 10.1016/j.infsof.2009.06.010
* [17] Feng Duan, Yu Lei, Raghu N. Kacker and D. Kuhn “An approach to T-Way test sequence generation with constraints” In _Proceedings - 2019 IEEE 12th International Conference on Software Testing, Verification and Validation Workshops, ICSTW 2019_ IEEE, 2019, pp. 241–250 DOI: 10.1109/ICSTW.2019.00059
* [18] Elfriede Dustin, Jeff Rashka and John Paul “Automated Software Testing: Introduction, Management, and Performance” In _Addison-Wesley Professional_ , 1999, pp. 575 URL: https://books.google.co.il/books?hl=iw&lr=&id=-Jxobzm2ONkC&oi=fnd&pg=PR15&dq=Automated+software+testing+introduction&ots=4RA-fg-1zE&sig=iEUXHEXRkUSKFYEg4z9YUrzwamo&redir_esc=y#v=onepage&q=Automated%20software%20testing%20introduction&f=false
* [19] Achiya Elyasaf “Context-Oriented Behavioral Programming” In _Information and Software Technology_ 133, 2021 DOI: 10.1016/j.infsof.2020.106504
* [20] Michael Felderer et al. “Security Testing: A Survey” In _Advances in Computers_ 101, 2016, pp. 1–51 DOI: 10.1016/bs.adcom.2015.11.003
* [21] “FSM2Regex” URL: http://ivanzuzak.info/noam/webapps/fsm2regex/
* [22] Jian Gao et al. “Jbench: A Dataset of Data Races for Concurrency Testing” In _Proceedings of the 15th International Conference on Mining Software Repositories_ Association for Computing Machinery, 2018, pp. 6–9 DOI: 10.1145/3196398.3196451
* [23] “Gherkin Syntax - Cucumber Documentation” URL: https://cucumber.io/docs/gherkin/
* [24] “GitHub - Z3Prover/z3: The Z3 Theorem Prover” URL: https://github.com/Z3Prover/z3
* [25] “Graphviz - Graph Visualization Software” URL: https://graphviz.org/
* [26] Havva Gulay Gurbuz and Bedir Tekinerdogan “Model-based testing for software safety: a systematic mapping study” In _Software Quality Journal_ 26.4 Software Quality Journal, 2018, pp. 1327–1372 DOI: 10.1007/s11219-017-9386-2
* [27] David Harel “STATECHARTS: A VISUAL FORMALISM FOR COMPLEX SYSTEMS” In _Science of computer programming_ 8, 1987, pp. 231–274
* [28] David Harel, Assaf Marron and Gera Weiss “Programming coordinated behavior in java” In _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_ 6183 LNCS, 2010, pp. 250–274 DOI: 10.1007/978-3-642-14107-2_12
* [29] David Harel, Robby Lampert, Assaf Marron and Gera Weiss “Model-checking behavioral programs” In _Embedded Systems Week 2011, ESWEEK 2011 - Proceedings of the 9th ACM International Conference on Embedded Software, EMSOFT’11_ , 2011, pp. 279–288 DOI: 10.1145/2038642.2038686
* [30] Sjoerd Van Der Heijden “Master Thesis Trace Collection and Data Coverage for Model Based Testing”, 2019
* [31] Ferenc Horváth et al. “Code coverage differences of Java bytecode and source code instrumentation tools” In _Software Quality Journal_ 27.1, 2019, pp. 79–123 DOI: 10.1007/s11219-017-9389-z
* [32] Linghuan Hu, W. Wong, D. Kuhn and Raghu N. Kacker “How does combinatorial testing perform in the real world: an empirical study” In _Empirical Software Engineering_ 25.4 Empirical Software Engineering, 2020, pp. 2661–2693 DOI: 10.1007/s10664-019-09799-2
* [33] IBM Research “IBM Research — Ponder this” URL: https://www.research.ibm.com/haifa/ponderthis/challenges/January2013.html%20https://www.research.ibm.com/haifa/ponderthis/index.shtml
* [34] Muhammad Abid Jamil, Muhammad Arif, Normi Sham Awang Abubakar and Akhlaq Ahmad “Software testing techniques: A literature review” In _Proceedings - 6th International Conference on Information and Communication Technology for the Muslim World, ICT4M 2016_ IEEE, 2017, pp. 177–182 DOI: 10.1109/ICT4M.2016.40
* [35] James A. Jones and Mary Jean Harrold “Test-suite reduction and prioritization for modified condition/decision coverage” In _IEEE International Conference on Software Maintenance, ICSM_ 29.3 IEEE, 2001, pp. 92–103 DOI: 10.1109/ICSM.2001.972715
* [36] Youngjoon Kim and Jiwon Yoon “Maxafl: Maximizing code coverage with a gradient-based optimization technique” In _Electronics (Switzerland)_ 10.1, 2021, pp. 1–23 DOI: 10.3390/electronics10010011
* [37] Moez Krichen “Contributions to Model-Based Testing of Dynamic and Distributed Real-Time Systems”, 2020
* [38] D. Kuhn, Dolores R. Wallace and Albert M. Gallo “Software fault interactions and implications for software testing” In _IEEE Transactions on Software Engineering_ 30.6, 2004, pp. 418–421 DOI: 10.1109/TSE.2004.24
* [39] Beatriz Pérez Lamancha and Macario Polo Usaola “Testing Product Generation in Software Product Lines” In _Ifip International Federation For Information Processing_ , 2010, pp. 111–125
* [40] Wenbin Li, Franck Le Gall and Naum Spaseski “A survey on model-based testing tools for test case generation” In _Communications in Computer and Information Science_ 779, 2018, pp. 77–89 DOI: 10.1007/978-3-319-71734-0_7
* [41] Prasad Mahajan, Harshal Shedge and Uday Patkar “Automation Testing In Software Organization” In _International Journal of Computer Applications Technology and Research_ 5.4, 2016, pp. 198–201 DOI: 10.7753/ijcatr0504.1004
* [42] Massimo Narizzano, Luca Pulina, Armando Tacchella and Simone Vuotto “Automated Requirements-Based Testing of Black-Box Reactive Systems” In _Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)_ 12229 LNCS, 2020, pp. 153–169 DOI: 10.1007/978-3-030-55754-6˙9
* [43] Bohdan Oliinyk and Vasyl Oleksiuk “Automation in software testing, can we automate anything we want?” In _CEUR Workshop Proceedings_ 2546, 2019, pp. 224–234
* [44] Alberto Rodrigues Da Silva, Ana C R Paiva, Valter Emanuel and R Da Silva “Towards a Test Specification Language for Information Systems: Focus on Data Entity and State Machine Tests”, 2018 DOI: 10.5220/0006608002130224
* [45] Amanda Schwartz, Daniel Puckett, Ying Meng and Gregory Gay “Investigating faults missed by test suites achieving high code coverage” In _Journal of Systems and Software_ 144.June Elsevier, 2018, pp. 106–120 DOI: 10.1016/j.jss.2018.06.024
* [46] “SeleniumHQ Browser Automation” URL: https://www.selenium.dev/
* [47] Yunlong Sheng, Chao Sun, Shouda Jiang and Chang’an Wei “Extended covering arrays for sequence coverage” In _Symmetry_ 10.5, 2018 DOI: 10.3390/sym10050146
* [48] Karuturi Sneha and Gowda M Malle “Research on Software Testing Techniques and Software Automation Testing Tools” In _2017 International Conference on Energy, Communication, Data Analytics and Soft Computing (ICECDS)_ IEEE, 2017, pp. 77–81
* [49] L H Tahat, B Vaysburg, B Korel and A J Bader “Requirement-based automated black-box test generation” In _25th Annual International Computer Software and Applications Conference. COMPSAC 2001_ , 2001, pp. 489–495 DOI: 10.1109/CMPSAC.2001.960658
* [50] D Vdeedjkl Tldx et al. “State-based models in model-based testing: A systematic review” In _IEEE 4th International Conference on Knowledge-Based Engineering and Innovation (KBEI)_ , 2017, pp. 942–948
* [51] Sebastian Uchitel, Jeff Kramer and Jeff Magee “Synthesis of behavioral models from scenarios” In _IEEE Transactions on Software Engineering_ 29.2 IEEE, 2003, pp. 99–115 DOI: 10.1109/TSE.2003.1178048
* [52] Leonardo Villalobos-Arias, Christian Quesada-López, Alexandra Martinez and Marcelo Jenkins “A tertiary study on model-based testing areas, tools and challenges: Preliminary results” In _Avances en Ingenieria de Software a Nivel Iberoamericano, CIbSE 2018_ , 2018, pp. 15–28
* [53] DIeu Huong Vu, Anh Hoang Truong, Yuki Chiba and Toshiaki Aoki “Automated testing reactive systems from event-B model” In _2017 4th NAFOSTED Conference on Information and Computer Science, NICS 2017 - Proceedings_ 2017-Janua, 2017, pp. 207–212 DOI: 10.1109/NAFOSTED.2017.8108065
* [54] Xinchun Wang and Peijie Xu “Build an auto testing framework based on selenium and FitNesse” In _Proceedings - 2009 International Conference on Information Technology and Computer Science, ITCS 2009_ 2 IEEE, 2009, pp. 436–439 DOI: 10.1109/ITCS.2009.228
* [55] Mark C.K. Yang and Anne Chao “Reliability-Estimation & Stopping-Rules for Software Testing, Based on Repeated Appearances of Bugs” In _IEEE Transactions on Reliability_ 44.2, 1995, pp. 315–321 DOI: 10.1109/24.387388
* [56] Qian Yang, J. Li and David M. Weiss “A survey of coverage-based testing tools” In _Computer Journal_ 52.5, 2009, pp. 589–597 DOI: 10.1093/comjnl/bxm021
* [57] Linbin Yu et al. “Efficient algorithms for T-way test sequence generation” In _Proceedings - 2012 IEEE 17th International Conference on Engineering of Complex Computer Systems, ICECCS 2012_ IEEE, 2012, pp. 220–229 DOI: 10.1109/ICECCS.2012.17
# SafeNet: The Unreasonable Effectiveness of Ensembles in Private
Collaborative Learning
Harsh Chaudhari1, Matthew Jagielski2, Alina Oprea1 1Northeastern University,
2Google Research
###### Abstract
Secure multiparty computation (MPC) has been proposed to allow multiple
mutually distrustful data owners to jointly train machine learning (ML) models
on their combined data. However, by design, MPC protocols faithfully compute
the training functionality, which the adversarial ML community has shown to
leak private information and can be tampered with in poisoning attacks. In
this work, we argue that model ensembles, implemented in our framework called
SafeNet, are a highly MPC-amenable way to avoid many adversarial ML attacks.
The natural partitioning of data amongst owners in MPC training allows this
approach to be highly scalable at training time, provide provable protection
from poisoning attacks, and provably defense against a number of privacy
attacks. We demonstrate SafeNet’s efficiency, accuracy, and resilience to
poisoning on several machine learning datasets and models trained in end-to-
end and transfer learning scenarios. For instance, SafeNet reduces backdoor
attack success significantly, while achieving $39\times$ faster training and
$36\times$ less communication than the four-party MPC framework of Dalskov et
al. [28]. Our experiments show that ensembling retains these benefits even in
many non-iid settings. The simplicity, cheap setup, and robustness properties
of ensembling make it a strong first choice for training ML models privately
in MPC.
## I Introduction
Machine learning (ML) has been successful in a broad range of application
areas such as medicine, finance, and recommendation systems. Consequently,
technology companies such as Amazon, Google, Microsoft, and IBM provide
machine learning as a service (MLaaS) for ML training and prediction. In these
services, data owners outsource their ML computations to a set of more
computationally powerful servers. However, in many instances, the client data
used for ML training or classification is sensitive and may be subject to
privacy requirements. Regulations such as GDPR, HIPAA and PCR, data
sovereignty issues, and user privacy concern are common reasons preventing
organizations from collecting user data and training more accurate ML models.
These privacy requirements have led to the design of privacy-preserving ML
training methods, including the use of secure multiparty computation (MPC).
Recent literature in the area of MPC for ML proposes privacy-preserving
machine learning (PPML) frameworks [71, 69, 87, 29, 67, 88, 28, 1, Cerebro21,
90] for training and inference of various machine learning models such as
logistic regression, neural networks, and random forests. In these models,
data owners outsource shares of their data to a set of servers and the servers
run MPC protocols for ML training and prediction. An implicit assumption for
security is that the underlying datasets provided by data owners during
training have not been influenced by an adversary. However, research in
adversarial machine learning has shown that data poisoning attacks pose a high
risk to the integrity of trained ML models [10, 49, 44, 40]. Data poisoning
becomes a particularly relevant threat in PPML systems, as multiple data
owners contribute secret shares of their datasets for jointly training an ML
model inside the MPC, and poisoned samples cannot be easily detected.
Furthermore, the guarantees of MPC provide privacy against an adversary
observing the communication in the protocol, but do not protect against
sensitive information leaked by the model about its training set. Many privacy
attacks are known to allow inference on machine learning models’ training
sets, and protecting against these attacks is an active area of research.
In this paper, we study the impact of these adversarial machine learning
threats on standard MPC frameworks for private ML training. Our first
observation is that the security definition of MPC for private ML training
does not account for data owners with poisoned data. Therefore, we extend the
security definition by considering an adversary who can poison the datasets of
a subset of owners, while at the same time controlling a subset of the servers
in the MPC protocol. Under our threat model, we empirically demonstrate that
poisoning attacks are a significant threat to the setting of private ML
training. We show the impact of backdoor [44, 23] and targeted [54, 40]
poisoning attacks on four MPC frameworks and five datasets, using logistic
regression and neural networks models. We show that with control of just a
single owner and its dataset (out of a set of 20 owners contributing data for
training), the adversary achieves $100\%$ success rate for a backdoor attack,
and higher than $83\%$ success rate for a targeted attack. These attacks are
stealthy and cannot be detected by simply monitoring standard ML accuracy
metrics.
To mitigate these attacks, we apply an ensembling technique from ML,
implemented in our framework SafeNet. In the collaborative learning setting we
consider, SafeNet is an effective defense against poisoning attacks while
simultaneously preventing various types of privacy attacks. Rather than
attempting to implement an existing poisoning defense in MPC, we observe that
the structure of the MPC threat model permits a more general and efficient
solution. Our main insight is to require individual data owners to train ML
models locally, based on their own datasets, and secret share the resulting
ensemble of models in the MPC. We filter out local models with low accuracy on
a validation dataset, and use the remaining models to make predictions using a
majority voting protocol performed inside the MPC. While this permits stronger
model poisoning attacks, the natural partitioning of the MPC setting prevents
an adversary from poisoning more than a fixed subset of the models, resulting
in a limited number of poisoned models in the ensemble. We perform a detailed
analysis of the robustness properties of SafeNet, and provide lower bounds on
the ensemble’s accuracy based on the error rate on the local models in the
ensemble and the number of poisoned models, as well as a prediction
certification procedure for arbitrary inputs. The bounded contribution of each
local model also gives a provable privacy guarantee for SafeNet. Furthermore,
we show empirically that SafeNet successfully mitigates backdoor and targeted
poisoning attacks, while retaining high accuracy on the ML prediction tasks.
In addition, our approach is efficient, as ML model training is performed
locally by each data owner, and only the ensemble filtering and prediction
protocols are performed in the MPC. This provides large performance
improvements in ML training compared to existing PPML frameworks, while
simultaneously mitigating poisoning attacks. For instance, for one neural
network model, SafeNet performs training $39\times$ faster than the PPML
protocol of [28] and requires $36\times$ less communication. Finally, we investigate
settings with diverse data distributions among owners, and evaluate the
accuracy and robustness of SafeNet under multiple data imbalance conditions.
To summarize, our contributions are as follows:
Adversarial ML-aware Threat Model for Private Machine Learning. We extend the
MPC security definition for private machine learning to encompass the threat
of data poisoning attacks and privacy attacks. In our threat model, the
adversary can poison the datasets of $t$ out of $m$ data owners, and control $T$
out of $N$ servers participating in the MPC. The attacker might also seek to
learn sensitive information about the local datasets through the trained
model.
SafeNet Ensemble Design. We propose SafeNet, which adapts an ensembling
technique from ML to the collaborative MPC setting by having data owners train
models locally and performing the aggregation of predictions securely inside
the MPC. We show that this procedure gives provable privacy and security
guarantees, which improve as models become more accurate. We also propose various novel
extensions to this ensembling strategy which make SafeNet applicable to a
wider range of training settings (including transfer learning and
accommodating computationally restricted owners). SafeNet’s design is agnostic
to the underlying MPC framework and we show it can be instantiated over four
different MPC frameworks, supporting two, three and four servers.
Comprehensive Evaluation. We show the impact of existing backdoor and targeted
poisoning attacks on several existing PPML systems [32, 4, 28] and five
datasets, using logistic regression and neural network models. We also
empirically demonstrate the resilience of SafeNet against these attacks, for
an adversary compromising up to 9 out of 20 data owners. We report the gains
in training time and communication cost for SafeNet compared to existing PPML
frameworks. Finally, we compare SafeNet with state-of-the-art defenses against
poisoning in federated learning [16] and show its enhanced certified
robustness even under non-iid data distributions.
## II Background and Related Work
We provide background on secure multi-party computation and poisoning attacks
in ML, and discuss related work in the area of adversarial ML and MPC.
### II-A Secure Multi-Party Computation
Secure Multi-Party Computation (MPC) [93, 7, 41, 47, 31] allows a set of $n$
mutually distrusting parties to compute a joint function $f$, such that any
collusion of at most $t$ parties can neither modify the output of the
computation (_correctness_) nor learn any information beyond what is revealed by the output
(_privacy_). The area of MPC can be categorized into honest majority [7, 70,
4, 20, 13] and dishonest majority [93, 31, 30, 68, 41]. The settings of two-
party computation (2PC) [93, 62, 61, 74], three parties (3PC) [3, 4, 70], and
four parties (4PC) [48, 43, 21, 28] have been widely studied as they provide
efficient protocols. Additionally, recent works in the area of privacy
preserving ML propose training and prediction frameworks [71, 69, 87, 58, 78,
88, 1, 77] built on top of the above MPC settings. Particularly, most of the
frameworks are deployed in the outsourced computation setting where the data
is secret-shared to a set of servers which perform training and prediction
using MPC.
### II-B Data Poisoning Attacks
In a data poisoning attack, an adversary controls a subset of the training
dataset, and uses this to influence the model trained on that training set. In
a backdoor attack [73, 44, 23], an adversary seeks to add a “trigger” or
backdoor pattern into the model. The trigger is a perturbation in feature
space, which is applied to poisoned samples in training to induce
misclassification on backdoored samples at testing. In a targeted attack [54,
55, 82], the adversary’s goal is to change the classifier prediction for a
small number of specific test samples. Backdoor and targeted attacks can be
difficult to detect, due to the subtle impact they have on the ML model.
### II-C Related Work
While both MPC and adversarial machine learning have been the topic of fervent
research, work connecting them is still nascent. We are aware of only a few
recent research papers that attempt to bridge these areas. First, recent works
[59, 18] show that MPC algorithms applied at test time can be compromised by
malicious users, allowing for efficient _model extraction_ attacks. Second,
Escudero et al. [36] show that running a semi-honest MPC protocol with
malicious parties can result in backdoor attacks on the resulting SVM model.
Both these works, as well as our own, demonstrate the difficulty of aligning
the guarantees of MPC with the additional desiderata of adversarial machine
learning. We demonstrate the effectiveness of data poisoning attacks in MPC
for neural networks and logistic regression models, and propose a novel
ensemble training algorithm in SafeNet to defend against poisoning attacks in
MPC.
Model ensembles have been proposed as a defense for ML poisoning and privacy
attacks in prior work in both the centralized training setting [9, 50] and the
collaborative learning setting. Compared to centralized approaches, which
process a single dataset, we are able to leverage the trust model of MPC,
which limits the number of poisoned models in the ensemble and can provide
stronger robustness and privacy guarantees. Ensembles have also been proposed
in MPC to protect data privacy [24] and in federated learning to provide
poisoning robustness [16]. Our work provides a stronger privacy analysis,
protecting from a broader range of threats than [24], and additionally offers
robustness guarantees. We provide a more detailed comparison with these
approaches in Section III-F.
## III SafeNet: Using Ensembles in MPC
We describe here our threat model and show how to implement ensembles in MPC.
We then show that ensembling gives us provable robustness to poisoning and
privacy adversaries.
### III-A Threat Model
Figure 1: Threat model considered in our setting (SOC paradigm). The adversary
$\mathcal{A}^{\text{p}}_{\text{soc}}$ can poison at most $t$ out of $m$ data
owners and corrupt at most $T$ out of $N$ servers participating in the MPC
computation. $\mathsf{C}_{i}$ and $\mathcal{S}_{j}$ denote the $i^{th}$ data
owner and $j^{th}$ server, respectively.
Setup. We consider a set of $m$ data owners $C=\cup_{k=1}^{m}\mathsf{C}_{k}$
who wish to train a joint machine learning model $\mathcal{M}$ on their
combined dataset $\mathsf{D}=\cup_{k=1}^{m}{D}_{k}$. We adopt the Secure
Outsourced Computation (SOC) paradigm [71, 69, 87, 13, 78, 88, 1, 29, 28] for
training model $\mathcal{M}$ privately, where the owners secret-share their
respective datasets to a set of outsourced servers, who execute the MPC
protocols to train $\mathcal{M}$. The final output is a trained model in
secret-shared format among the servers. A single training/testing sample is
expressed as $({{\mathbf{{{x}}}}}_{i},\text{y}_{i})$, where
${{\mathbf{{{x}}}}}_{i}$ is the input feature vector and $\text{y}_{i}$ is its
corresponding true label or class. We use
${D}_{k}=(\textbf{X}_{\scriptscriptstyle k},{{\mathbf{{y}}}}_{\scriptscriptstyle k})$
to denote the dataset of data owner $\mathsf{C}_{k}$ participating in the
training process. The matrix $\textbf{X}_{\scriptscriptstyle k}$ is a feature
matrix whose rows are the training samples possessed by $\mathsf{C}_{k}$, and
${{\mathbf{{y}}}}_{\scriptscriptstyle k}$ denotes the corresponding vector of
true labels.
Adversary in the SOC. Given a set
$S=\\{\mathcal{S}_{1},\ldots,\mathcal{S}_{N}\\}$ of servers, we define an
adversary $\mathcal{A}_{\text{soc}}$, similar to prior work [71, 69, 78, 88,
1, 28]. $\mathcal{A}_{\text{soc}}$ can statically corrupt a subset
$S_{T}\subset S$ of servers of size at most $T<N$. The exact values of $N$ and
$T$ are dependent on the MPC protocols used for training the ML model
privately. We experiment with two-party, three-party, and four-party protocols
with one corrupt server. MPC considers two main adversary types: i)
_Semi-honest_: the adversary follows the protocol, but tries to derive
additional information from the messages received from other parties during
its execution; ii) _Malicious_: the adversary may arbitrarily deviate during
the execution of the protocol.
Security Definition. MPC security is defined using the real world - ideal
world paradigm [14]. In the real world, parties participating in the MPC
interact during the execution of a protocol $\pi$ in presence of an adversary
$\mathcal{A}$. Let $\mathsf{REAL}[\mathbb{Z},\mathcal{A},\pi,\lambda]$ denote
the output of the environment $\mathbb{Z}$ when interacting with $\mathcal{A}$
and the honest parties, who execute $\pi$ on security parameter $\lambda$.
Effectively, $\mathsf{REAL}$ is a function of the inputs/outputs and messages
sent/received during the protocol. In the ideal world, the parties simply
forward their inputs to a trusted functionality $\mathcal{F}$ and forward the
functionality’s response to the environment. Let
$\mathsf{IDEAL}[\mathbb{Z},\mathcal{S},\mathcal{F},\lambda]$ denote the output
of the environment $\mathbb{Z}$ when interacting with adversary $\mathcal{S}$
and honest parties who run the protocol in presence of $\mathcal{F}$ with
security parameter $\lambda$. The security definition states that the views of
the adversary in the real and ideal world are indistinguishable:
###### Definition 1.
A protocol $\pi$ securely realizes functionality $\mathcal{F}$ if, for every
environment $\mathbb{Z}$ and every adversary of type
$\mathcal{A}_{\text{soc}}$ corrupting a subset $S_{T}$ of servers of size at
most $T<N$ in the real world, there exists a simulator $\mathcal{S}$ attacking
the ideal world such that
$\mathsf{IDEAL}[\mathbb{Z},\mathcal{S},\mathcal{F},\lambda]\approx\mathsf{REAL}[\mathbb{Z},\mathcal{A}_{\text{soc}},\pi,\lambda]$.
Poisoning Adversary. Existing threat models for training ML models privately
assume that the local datasets contributed towards training are not under the
control of the adversary. However, data poisoning attacks have been shown to
be a real threat when ML models are trained on crowdsourced data or data
coming from untrusted sources [10, 72, 49]. Data poisoning becomes a
particularly relevant risk in PPML systems, in which data owners contribute
their own datasets for training a joint ML model. Additionally, the datasets
are secret shared among the servers participating in the MPC, and potential
poisoned samples (such as backdoored data) cannot be easily detected by the
servers running the MPC protocol.
To account for such attacks, we define a poisoning adversary
$\mathcal{A}_{\text{p}}$ that can poison a subset of local datasets of size at
most $t<m$. Data owners with poisoned data are called poisoned owners, and we
assume that the adversary can coordinate with the poisoned owners to achieve a
certain adversarial goal. For example, the adversary can mount a backdoor
attack by selecting a backdoor pattern and poisoning the datasets under its
control with that pattern.
Poisoning Robustness: We consider an ML model to be robust against a poisoning
adversary $\mathcal{A}_{\text{p}}$, who poisons the datasets of $t$ out of $m$
owners, if it generates correct class predictions on new samples with high
probability. We provide bounds on the level of poisoning tolerated by our
designed framework to ensure robustness.
Our Adversary. We now define a new adversary
$\mathcal{A}^{\text{p}}_{\text{soc}}$ for our threat model (Figure 1) that
corrupts servers in the MPC and poisons the owners’ datasets:
* –
$\mathcal{A}^{\text{p}}_{\text{soc}}$ plays the role of
$\mathcal{A}_{\text{p}}$ and poisons $t$ out of $m$ data owners that secret
share their training data to the servers.
* –
$\mathcal{A}^{\text{p}}_{\text{soc}}$ plays the role of
$\mathcal{A}_{\text{soc}}$ and corrupts $T$ out of $N$ servers taking part in the
MPC computation.
Note that the poisoned owners that $\mathcal{A}^{\text{p}}_{\text{soc}}$
controls do not interfere in the execution of the MPC protocols after secret-
sharing their data and also do not influence the honest owners.
Functionality $\mathcal{F}_{\mathsf{pTrain}}$. Based on our newly introduced
threat model, we construct a new functionality $\mathcal{F}_{\mathsf{pTrain}}$
in Figure 2 to accommodate poisoned data.
Input: $\mathcal{F}_{\mathsf{pTrain}}$ receives secret-shares of ${D}_{i}$ and
$a_{i}$ from each owner $\mathsf{C}_{i}$, where ${D}_{i}$ is a dataset and
$a_{i}$ an auxiliary input.
Computation: On receiving inputs from the owners,
$\mathcal{F}_{\mathsf{pTrain}}$ computes
$O=f({D}_{1},...,{D}_{m},a_{1},\ldots,a_{m})$, where $f$ and $O$ denote the
training algorithm and its output, respectively.
Output: $\mathcal{F}_{\mathsf{pTrain}}$ constructs secret-shares of $O$ and
sends the appropriate shares to the servers.
Figure 2: Ideal Functionality for ML training with data poisoning
Security against $\mathcal{A}^{\text{p}}_{\text{soc}}$. A training protocol
$\Pi_{\mathsf{train}}$ is secure against adversary
$\mathcal{A}^{\text{p}}_{\text{soc}}$ if: (1) $\Pi_{\mathsf{train}}$ securely
realizes functionality $\mathcal{F}_{\mathsf{pTrain}}$ based on Definition 1;
and (2) the model trained inside the MPC provides poisoning robustness against
data poisoning attacks.
Intuitively, the security definition ensures that
$\mathcal{A}^{\text{p}}_{\text{soc}}$ learns no information about the honest
owners’ inputs when $T$ out of $N$ servers are controlled by the adversary,
while the trained model provides poisoning robustness against a subset of $t$
out of $m$ poisoned owners.
Extension to Privacy Adversary. While MPC guarantees no privacy leakage during
the execution of the protocol, it makes no promises about privacy leakage that
arises by observing the output of the protocol. This has motivated a
combination of differential privacy guarantees with MPC algorithms, to protect
against privacy leakage for both the intermediate execution as well as the
output of the protocol. For this reason, we also consider adversaries seeking
to learn information about data owners’ local datasets by observing the output
of the model, as done in membership inference [81, 94, 17] and property
inference attacks [39, 97, 83]. Recent works have used data poisoning as a
tool to further increase privacy leakage [85, 65, 19] of the trained models.
Consequently, we can extend our threat model to accommodate a stronger version
of $\mathcal{A}^{\text{p}}_{\text{soc}}$ that is also capable of performing
privacy attacks by observing the output of the trained model.
### III-B SafeNet Overview
Figure 3: Overview of the Training and Inference phases of the SafeNet
Framework.
Given our threat model in Figure 1, existing PPML frameworks provide security
against an $\mathcal{A}_{\text{soc}}$ adversary, but they are not designed to
handle an $\mathcal{A}^{\text{p}}_{\text{soc}}$ adversary. We show
experimentally in Section IV that PPML frameworks for private training are
susceptible to data poisoning attacks. While it would be possible to remedy
this by implementing specific poisoning defenses (see Section V-C for a
discussion of these approaches), we instead show that it is possible to take
advantage of the bounded poisoning capability of
$\mathcal{A}^{\text{p}}_{\text{soc}}$ to design a more general and efficient
defense. Intuitively, existing approaches train a single model on all local
datasets combined, causing the model’s training set to have a large fraction
of poisoned data ($t/m$), which is difficult to defend against. Instead, we
design SafeNet, a new protocol which uses ensemble models to realize our
threat model and provide security against
$\mathcal{A}^{\text{p}}_{\text{soc}}$. In addition to successfully mitigating
data poisoning attacks, SafeNet provides more efficient training than existing
PPML and comparable prediction accuracy.
Figure 3 provides an overview of the training and inference phases of SafeNet.
SafeNet trains an ensemble $E$ of multiple models in protocol
$\Pi_{\mathsf{train}}$, where each model $\mathcal{M}_{k}\in E$ is trained
locally by the data owner $\mathsf{C}_{k}$ on their dataset. This partitioning
prevents poisoned data from contributing to more than $t$ local models. Each
data owner samples a local validation dataset and trains the local model
$\mathcal{M}_{k}$ on the remaining data. The local models and validation
datasets are secret shared to the outsourced servers. We note that this
permits arbitrarily corrupted models, and poisoned validation datasets, but
SafeNet’s structure still allows it to tolerate these corruptions. In the
protocol running inside the MPC, the servers jointly implement a filtering
stage for identifying models with low accuracy on the combined validation data
(below a threshold $\phi$) and excluding them from the ensemble. The output of
training is a secret share of each model in the trained ensemble $E$.
In the inference phase, SafeNet implements protocol $\Pi_{\mathsf{pred}}$, to
compute the prediction $y_{k}$ of each shared model $\mathcal{M}_{k}$ on test
input $x$ inside the MPC. The servers jointly perform majority voting to
determine the most common predicted class $y$ on input $x$, using only the
models which pass the filtering stage. An optional feature of SafeNet is to
add noise to the majority vote to enable user-level differential privacy
protection, in addition to poisoning robustness.
Our SafeNet protocol leverages our threat model, which assumes that only a set
of at most $t$ out of $m$ data owners are poisoned. This ensures that an
adversary only influences a limited set of models in the ensemble, while
existing training protocols would train a single poisoned global model. We
provide bounds for the exact number of poisoned owners $t$ supported by our
ensemble in Theorem 6. Interestingly, the bound depends on the number of data
owners $m$, and the maximum error made by a clean model in the ensemble. The
same theorem also lower bounds the probability that the ensemble predicts
correctly under data poisoning performed by the $t$ poisoned owners, and we
validate experimentally that, indeed, SafeNet provides resilience to stealthy
data poisoning attacks, such as backdoor and targeted attacks. Another
advantage of SafeNet is that the training time to execute the MPC protocols in
the SOC setting is drastically reduced as each $\mathcal{M}_{k}\in E$ can be
trained locally by the respective owner. We detail below the algorithms for
training and inference in SafeNet.
### III-C SafeNet Training and Inference
To train the ensemble in SafeNet, we present our proposed ensemble method in
Algorithm 1. We discuss the realization in MPC later in Appendix B. Each owner
$\mathsf{C}_{k}$ separates out a subset ${{D}_{k}^{\text{v}}}\subset{D}_{k}$
of its training dataset and then trains its model $\mathcal{M}_{k}$ on the
remaining dataset ${D}_{k}\setminus{{D}_{k}^{\text{v}}}$. The trained model
$\mathcal{M}_{k}$ and validation dataset ${{D}_{k}^{\text{v}}}$ are then
secret-shared to the servers. The combined validation dataset is denoted as
${{D}_{\text{val}}}=\bigcup\limits_{i=1}^{m}{{D}_{i}^{\text{v}}}$. We assume
that all users contribute equal-size validation sets to ${{D}_{\text{val}}}$.
During the filtering stage inside the MPC, the validation accuracy AccVal of
each model is jointly computed on ${{D}_{\text{val}}}$. If the resulting
accuracy for a model is below threshold $\mathsf{\phi}$, the model is excluded
from the ensemble.
The filtering step is used to separate the models with low accuracy, either
contributed by a poisoned owner, or by an owner holding non-representative
data for the prediction task. Under the assumption that the majority of owners
are honest, it follows that the majority of validation samples are correct. If
$\mathsf{C}_{k}$ is honest, then the corresponding $\mathcal{M}_{k}$ should
have a high validation accuracy on ${{D}_{\text{val}}}$, as the corresponding
predicted outputs would most likely agree with the samples in
${{D}_{\text{val}}}$. In contrast, the predictions by a poisoned model
$\mathcal{M}_{k}$ will likely not match the samples in ${{D}_{\text{val}}}$.
In Appendix A, we compute a lower bound on the size of the validation dataset
as a function of the number of poisoned owners $t$ and filtering threshold
$\mathsf{\phi}$, such that all clean models pass the filtering stage with high
probability even when a subset of the cross-validation dataset
${{D}_{\text{val}}}$ is poisoned.
Given protocol $\Pi_{\mathsf{train}}$ that securely realizes Algorithm 1
inside the MPC (described in Appendix B), we argue security as follows:
###### Theorem 2.
Protocol $\Pi_{\mathsf{train}}$ is secure against adversary
$\mathcal{A}^{\text{p}}_{\text{soc}}$ who poisons $t$ out of $m$ data owners
and corrupts $T$ out of $N$ servers.
The proof of the theorem will be given in Appendix C after we introduce the
details of MPC instantiation and how protocol $\Pi_{\mathsf{train}}$ securely
realizes $\mathcal{F}_{\mathsf{pTrain}}$ in Appendix B-3.
During inference, the prediction of each model $\mathcal{M}_{k}$ is generated
and the servers aggregate the results to perform majority voting. Optionally,
differentially private noise is added to the sum to offer user-level privacy
guarantees. The secure inference protocol $\Pi_{\mathsf{pred}}$ in MPC and its
proof of security is given in Appendix B and C respectively.
Algorithm 1 SafeNet Training Algorithm
Input: $m$ data owners, each owner $\mathsf{C}_{k}$’s dataset ${D}_{k}$.
// Owners’ local computation in plaintext format
– For $k\in[1,m]$:
* -
Separate out ${{D}_{k}^{\text{v}}}$ from ${D}_{k}$ and train $\mathcal{M}_{k}$ on
${D}_{k}\setminus{{D}_{k}^{\text{v}}}$.
* -
Secret-share ${{D}_{k}^{\text{v}}}$ and $\mathcal{M}_{k}$ to the servers.
// MPC computation in secret-shared format
– Construct the common validation dataset
${{D}_{\text{val}}}=\cup_{i=1}^{m}{{D}_{i}^{\text{v}}}$.
– Construct the ensemble of models $E=\\{\mathcal{M}_{i}\\}_{i=1}^{m}$.
– Initialize a vector ${\mathbf{{{b}}}}^{\mathsf{val}}$ of zeros, of size $m$.
– For $k\in[1,m]$: // Ensemble Filtering
* -
${\text{AccVal}}_{k}=Accuracy(\mathcal{M}_{k},{{D}_{\text{val}}})$
* -
If ${\text{AccVal}}_{k}>\mathsf{\phi}$: set ${\mathbf{{{b}}}}^{\mathsf{val}}_{k}=1$.
– Return $E$ and ${\mathbf{{{b}}}}^{\mathsf{val}}$.
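To make the control flow of Algorithm 1 concrete, the following is a minimal
plaintext Python sketch of the training and filtering logic; in the real
protocol the filtering stage runs on secret-shared data inside the MPC
(Appendix B). The `train_local` callable and the `predict` method on the
returned models are illustrative stand-ins for each owner's local training
pipeline.

```python
import numpy as np

def safenet_train(owner_datasets, train_local, phi, val_frac=0.1, seed=0):
    """owner_datasets: list of (X_k, y_k) pairs, one per data owner."""
    rng = np.random.default_rng(seed)
    models, val_sets = [], []

    # Owners' local computation (plaintext): split off a validation set,
    # train on the remainder; in the real protocol both are secret-shared.
    for X, y in owner_datasets:
        idx = rng.permutation(len(y))
        n_val = max(1, int(val_frac * len(y)))
        val_idx, tr_idx = idx[:n_val], idx[n_val:]
        models.append(train_local(X[tr_idx], y[tr_idx]))
        val_sets.append((X[val_idx], y[val_idx]))

    # MPC computation (shown here in the clear): pool the validation data
    # and keep only models whose accuracy exceeds the threshold phi.
    X_val = np.concatenate([X for X, _ in val_sets])
    y_val = np.concatenate([y for _, y in val_sets])
    b_val = np.zeros(len(models), dtype=bool)
    for k, model in enumerate(models):
        b_val[k] = np.mean(model.predict(X_val) == y_val) > phi
    return models, b_val
```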
### III-D SafeNet Analysis
Here, we demonstrate the accuracy, poisoning robustness and privacy guarantees
that SafeNet provides. We first show how to lower bound SafeNet’s test
accuracy given that each clean model in the ensemble reaches a certain
accuracy level. We also give certified robustness and user-level privacy
guarantees. All of our guarantees improve as the individual models become more
accurate, making the ensemble agree on correct predictions more frequently.
Robust Accuracy Analysis. We provide lower bounds on SafeNet accuracy,
assuming that at most $t$ out of $m$ models in the SafeNet ensemble $E$ are
poisoned, and the clean models have independent errors, with maximum error
rate $p<1-\mathsf{\phi}$, where $\mathsf{\phi}$ is the filtering threshold.
Theorem. (Informal) Let $\mathcal{A}^{\text{p}}_{\text{soc}}$ be an adversary
who poisons at most $t$ out of $m$ data owners and corrupts $T$ out of $N$
servers. Assume that the filtered ensemble $E$ has at least $m-t$ clean
models, each with a maximum error rate of $p<1-\mathsf{\phi}$. If the number
of poisoned owners is at most $\frac{m(1-2p)}{2(1-p)}$, ensemble $E$ correctly
classifies new samples with high probability, which is a function of $m$,
$\phi$, $t$ and $p$.
The formal theorem and the corresponding proof can be found in Appendix A.
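As an illustration of how such a bound behaves (this is not the formal
statement from Appendix A), the majority-vote success probability can be
computed directly in a simplified binary-label case, where the $t$ poisoned
models always vote incorrectly and the $m-t$ clean models err independently
with rate $p$:

```python
from scipy.stats import binom

def ensemble_correct_prob(m, t, p):
    """P[majority vote correct] with t always-wrong poisoned models and
    m - t clean models, each independently correct w.p. 1 - p (binary
    labels; ties count as failures, which is conservative)."""
    need = m // 2 + 1                  # correct votes needed for majority
    return binom.sf(need - 1, m - t, 1 - p)

print(ensemble_correct_prob(m=20, t=6, p=0.1))   # ~0.96
print(20 * (1 - 2 * 0.1) / (2 * (1 - 0.1)))      # bound on t: ~8.9
```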
Poisoning Robustness Analysis. Our previous theorem demonstrated that
SafeNet’s accuracy on in-distribution data is not compromised by poisoning.
Now, we show that we can also certify robustness to poisoning on a per-sample
basis for arbitrary points, inspired by certified robustness techniques for
adversarial example robustness [26]. In particular, Algorithm 2 describes a
method for certified prediction against poisoning, returning the most common
class $y$ predicted by the ensemble on a test point $x$, as well as a bound on
the number of poisoned owners $t$ that would be required to modify the
predicted class.
Input: $m$ data owners; ensemble of models $E=\\{\mathcal{M}_{i}\\}_{i=1}^{m}$;
testing point $x$; differential privacy parameters $\varepsilon,\delta$.
$\textsc{Counts}=\sum_{i=1}^{m}\mathcal{M}_{i}(x)+\textsc{DPNoise}(\varepsilon,\delta)$
$y,c_{y}=\textsc{MostCommon}(\textsc{Counts})$ // most common predicted class
with noisy count
$y^{\prime},c_{y^{\prime}}=\textsc{SecondMostCommon}(\textsc{Counts})$ //
second most common predicted class with noisy count
$t=\lceil(c_{y}-c_{y^{\prime}})/2\rceil-1$
return $y,t$
Algorithm 2 Certified Private Prediction $\textsc{PredGap}~{}(E,x)$
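A plaintext Python sketch of PredGap is given below; in SafeNet the same
computation runs inside the MPC on secret-shared models. The `models` list and
its `predict` method are illustrative assumptions, and passing `eps=None`
disables the noise, recovering the noiseless setting of Theorem 3.

```python
import numpy as np

def pred_gap(models, x, eps=None, n_classes=10, rng=None):
    """Plaintext sketch of Algorithm 2 (PredGap)."""
    rng = rng or np.random.default_rng()
    counts = np.zeros(n_classes)
    for model in models:                    # one vote per local model
        counts[model.predict([x])[0]] += 1
    if eps is not None:                     # optional user-level DP noise
        counts = counts + rng.laplace(scale=2.0 / eps, size=n_classes)
    top, second = np.argsort(counts)[::-1][:2]
    # Poisoned owners that provably cannot flip the prediction (Thm. 3).
    t = int(np.ceil((counts[top] - counts[second]) / 2)) - 1
    return top, t
```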
We first analyze the poisoning robustness when privacy of aggregation is not
enabled in the following theorem.
###### Theorem 3.
Let $E$ be an ensemble of models trained on datasets
$D=\\{D_{1},\dots,D_{m}\\}$. Assume that on an input $x$, the ensemble
generates prediction $y=E(x)$ without DPNoise and Algorithm 2 outputs $(y,t)$.
Moreover, assuming an adversary $\mathcal{A}^{\text{p}}_{\text{soc}}$ who
poisons at most $t$ data owners, the resulting $E^{\prime}$ trained on
poisoned data $D^{\prime}$ generates the same prediction on $x$ as $E$:
$E^{\prime}(x)=y$.
###### Proof.
Suppose the adversary’s goal is to cause $y^{\prime}$ to be predicted on input
$x$. Its most efficient strategy is to flip predictions from $y$ to
$y^{\prime}$, since each flipped model simultaneously decreases the count of
$y$ by one and increases the count of $y^{\prime}$ by one. Starting from
counts $c_{y}>c_{y^{\prime}}$, flipping $k$ predictions yields counts
$c_{y}-k$ and $c_{y^{\prime}}+k$, so $y^{\prime}$ can overtake $y$ only when
$k\geq(c_{y}-c_{y^{\prime}})/2$, i.e., only with at least
$\lceil(c_{y}-c_{y^{\prime}})/2\rceil$ poisoned data owners. Thus, an
adversary poisoning at most $t=\lceil(c_{y}-c_{y^{\prime}})/2\rceil-1$ data
owners cannot change the prediction $y$ on $x$. ∎
Privacy Analysis. Recent work by McMahan et al. [66] introduced the notion of
_user-level_ differential privacy where the presence of a user in the protocol
should have imperceptible impact on the final trained model. We show that,
given our threat model, SafeNet provides the strong privacy guarantee of user-
level differential privacy, which also implies example-level differential
privacy. This privacy guarantee can protect against model extraction and
property inference attacks, in addition to membership inference attacks.
###### Theorem 4.
When the DPNoise function samples from a Laplace distribution
$Lap(2/\varepsilon)$, Algorithm 2 satisfies user-level
$\varepsilon$-differential privacy.
###### Proof.
Observe that replacing a local model obtained from a data owner in our
framework only changes Counts for two classes by 1 on any given query, so it
has an $\ell_{1}$ sensitivity of 2. As a result, $\text{Lap}(2/\varepsilon)$
suffices to ensure that user-level $\varepsilon$-differential privacy holds. ∎
The main crux of Theorem 4 is that no single model can influence Counts too
much, an observation also made by PATE [75] and the CaPC [24] framework.
However, those works only considered example-level differential privacy, which
protects against membership inference attacks but not the stronger attacks
that user-level differential privacy prevents. This limitation is inherent in
PATE, as the central training set is split to train multiple models. In
contrast, our stronger analysis holds for SafeNet in the private collaborative
learning setting, as we start with pre-existing partitions of benign and
poisoned datasets. We prove Theorem 4 using Laplace noise, but various
improvements to PATE using different mechanisms, such as Gaussian noise and
other data-dependent approaches [75, 76], can also be extended to our
framework.
Combining Robustness and Privacy. Adding differentially private noise prevents
Algorithm 2 from returning the exact difference between the top two class-
label counts, making it only possible to offer probabilistic robustness
guarantees. That is, the returned $t$ is actually a noisy version of the
“true” $t^{*}$, where $t^{*}$ is used to certify correctness. However, for
several choices of the DPNoise function, the exact distribution of the noise
is known, making it easy to provide precise probabilistic guarantees similar
to those provided by Theorem 3. For example, if Gaussian noise with scale
parameter $\sigma$ is used to guarantee DP and PredGap returns $t$, then the
true $t^{*}$ is larger than $t-k$ with probability $\Phi(k/\sigma)$, where
$\Phi$ denotes the Gaussian CDF.
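For instance, under the Gaussian-noise assumption above, the confidence that
the true robustness exceeds $t-k$ can be computed directly (an illustrative
snippet; $\sigma$ and $k$ are arbitrary example values):

```python
from scipy.stats import norm

sigma, k = 4.0, 5           # example noise scale and slack
print(norm.cdf(k / sigma))  # Pr[t* > t - k] = Phi(k / sigma) ~ 0.894
```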
### III-E Extensions
In addition to providing various guarantees, we offer a number of extensions
to our original SafeNet design.
Transfer Learning. A major disadvantage of SafeNet is its slower inference
time compared to a traditional PPML framework, since it requires a forward
pass through every local model in the ensemble. However, in the transfer
learning scenario, we propose a variant in which SafeNet runs almost as fast
as the traditional framework. In transfer learning [56, 34], a pre-trained model
$\mathcal{M}_{B}$, which is typically trained on a large public dataset, is
used as a “feature extractor” to improve training on a given target dataset.
In our setting, all data owners start with a common pre-trained model, and
construct their local models by fine tuning $\mathcal{M}_{B}$’s last ‘$l$’
layers using their local data. We can then modify the prediction phase of
SafeNet to reduce its inference time and cost considerably. The crucial
observation is that all local models differ only in the weights associated to
the last $l$ layers. Consequently, given a prediction query, we run
$\mathcal{M}_{B}$ up to its last $l$ layers once, and use its output to
evaluate the last $l$ layers of all the local models to obtain predictions for
majority voting.
The detailed description of the modified SafeNet algorithm is given in
Appendix D-A. Note that this approach achieves the same robustness and privacy
guarantees as described in Section III-D, provided that $\mathcal{M}_{B}$
itself has not been tampered with.
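A minimal sketch of the modified inference flow is shown below, assuming
(hypothetical names) a `backbone` callable for $\mathcal{M}_{B}$ truncated
before its last $l$ layers and a list `heads` of the owners' fine-tuned
last-$l$-layer networks:

```python
import numpy as np

def safenet_transfer_predict(backbone, heads, b_val, x):
    """Run the shared pre-trained layers once, then every owner's
    fine-tuned last-l-layer head, and majority-vote over the heads
    that passed ensemble filtering (b_val)."""
    feats = backbone(x)                       # shared forward pass, once
    votes = [h.predict(feats) for h, keep in zip(heads, b_val) if keep]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]          # majority-voted class
```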
Integration Testing. While SafeNet can handle settings with non-iid data
distributions among data owners, the local models’ accuracies might be
impacted in extreme non-iid settings (we analyze the sensitivity of SafeNet to
data imbalance in Section IV-H). In such cases, SafeNet _fails fast_, allowing the
owners to determine whether or not using SafeNet is the right approach for
their setting. This is possible because SafeNet’s training phase is very
cheap, making it possible to quickly evaluate the ensemble’s accuracy on the
global validation set. If the accuracy is not good enough, the owners can use
a different approach, such as standard MPC training. SafeNet’s strong
robustness guarantees and efficient training phase make it an appealing first
choice for private collaborative learning.
Low Resource Owners. If a data owner does not have sufficient resources to
train a model on their data, they cannot participate in the standard SafeNet
protocol. In such situations, computationally restricted owners can defer
their training to SafeNet, which can use standard MPC training approaches to
train their models. Training these models in MPC increases the computational
overhead of our approach, but facilitates broader participation. We provide
the details of this modification in Appendix D-B and also run an experiment in
Appendix E-A to verify that SafeNet remains efficient, while retaining the
same robustness and privacy properties.
### III-F Comparison to Existing Ensemble Strategies
Model ensembles have been considered to address adversarial machine learning
vulnerabilities in several prior works. Here, we discuss the differences
between our analysis and previous ensembling approaches.
##### Ensembles on a Centralized Training Set
Several ensemble strategies seek to train a model on a single, centralized
training set. This includes using ensembles to prevent poisoning attacks [51,
60], as well as to provide differential privacy guarantees [75] or robustness
to privacy attacks [84]. Due to centralization, none of these techniques can
take advantage of the partitioning of datasets. As a result, protection from
poisoning is only capable of handling a small number of poisoning examples,
whereas our partitioning allows large fractions of the entire dataset to be
corrupted. PATE, due to data centralization, can only guarantee privacy for
individual samples, whereas in our analysis, the _entire dataset_ of a given
owner can be changed, providing us with _user-level_ privacy.
##### CaPC [24]
Choquette-Choo et al. [24] propose CaPC, which extends PATE to the MPC
collaborative learning setting. Their analysis gives differential privacy
guarantees for individual examples. Our approach extends their analysis to a
differential privacy guarantee for the entire local training set and model, to
provide protection against attacks such as property inference and model
extraction. In addition, our approach also provides poisoning robustness
guarantees which they cannot, as they allow information to be shared between
local training sets.
##### Cao et al. [16]
Recent work by Cao et al. [16] gave provable poisoning robustness guarantees
for federated learning aggregation. They proposed an ensembling strategy,
where, given $m$ data owners, $t$ of which are malicious, they construct an
ensemble of $\binom{m}{k}$ global models, where each model is trained on a
dataset collected from a set of $k$ clients. Our poisoning robustness argument
in Theorem 3 coincides with theirs at $k=1$, a setting they do not consider as
their approach relies on combining client datasets for federated learning.
Additionally, $k=1$ makes their approach vulnerable to data reconstruction
attacks [12], an issue SafeNet does not face as the attack directly violates
the underlying security guarantee of the MPC. We experimentally compare both
approaches on a federated learning dataset in Section V-D and show that our
approach outperforms [16].
## IV Evaluation
### IV-A Experimental Setup
We build our code on top of the MP-SPDZ library [53]
(https://github.com/data61/MP-SPDZ) to assess the impact of data poisoning
attacks on the training phase of PPML frameworks. We consider four
different MPC settings, all available in the MP-SPDZ library: i) two-party
with one semi-honest corruption (2PC) based on [32, 27]; ii) three-party with
one semi-honest corruption (3PC) based on Araki et al. [4] with optimizations
by [69, 29]; iii) three-party with one malicious corruption based on Dalskov
et al. [28]; and iv) four-party with one malicious corruption (4PC), also
based on [28]. Note that both semi-honest and malicious adversaries possess
poisoning capability; their roles differ only inside the SOC paradigm.
In all the PPML frameworks, the data owners secret-share their training
datasets to the servers and a single ML model is trained on the combined
dataset. Real number arithmetic is emulated using a fixed-point representation
of fractional numbers: each fractional number $x$ is represented as the ring
element $\lfloor x\cdot 2^{f}\rceil\in\mathbb{Z}_{2^{\ell}}$, where $\ell$ and
$f$ denote the ring size and precision, respectively. We set $\ell=64$ and
$f=16$. Probabilistic truncation proposed by Dalskov et al. [29,
28] is applied after every multiplication. In the MPC library implementation,
the sigmoid function for computing the output probabilities is replaced with a
three-part approximation [71, 20, 28]. In SafeNet, models are trained locally
using the original sigmoid function. We implement softmax function using the
method of Aly et al. [2]. We perform our experiments over a LAN network on a
$32$-core server with $192$GB of memory allowing up to $20$ threads to be run
in parallel.
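For concreteness, the following is a minimal Python sketch of the fixed-point
encoding with $\ell=64$ and $f=16$, together with a piecewise-linear
three-part sigmoid of the kind used in the PPML literature [71]; the exact
sigmoid breakpoints are an assumption and may differ across implementations.

```python
L_BITS, F_BITS = 64, 16
RING = 1 << L_BITS

def encode(x: float) -> int:
    """Represent x as round(x * 2^f), an element of Z_{2^l}."""
    return round(x * (1 << F_BITS)) % RING

def decode(v: int) -> float:
    if v >= RING // 2:          # top half of the ring encodes negatives
        v -= RING
    return v / (1 << F_BITS)

def sigmoid_3part(x: float) -> float:
    """Piecewise-linear stand-in for 1/(1+e^-x); breakpoints as in the
    common PPML approximation (assumed, may differ per framework)."""
    return 0.0 if x < -0.5 else (1.0 if x > 0.5 else x + 0.5)

assert abs(decode(encode(-3.14159)) + 3.14159) < 2 ** -F_BITS
```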
### IV-B Metrics
We use the following metrics to compare SafeNet with existing PPML frameworks:
Training Time: the time taken to privately train a model inside the MPC
(protocol $\Pi_{\mathsf{train}}$). As is standard practice [71, 69, 20, 21,
13, 28], this excludes the time taken by the data owners to secret-share their
datasets and models to the servers, as it is a one-time setup phase.
Communication Complexity: the amount of data exchanged between the servers
during the privacy-preserving execution of the training phase.
Test Accuracy: the percentage of test samples that the ML model correctly
predicts.
Attack Success Rate: the percentage of target samples misclassified as the
label of the attacker’s choice.
Robustness against worst-case adversary: we measure the resilience of SafeNet
at a given corruption level $c$ against a powerful, worst-case adversary. For
each test sample, this adversary can select any subset of $c$ owners and
arbitrarily modify their models to change that sample’s classification. This
is the same adversary considered in Algorithm 2 and Theorem 3; any model that
is robust against this attack has a provably certified prediction. We measure
the error rate on testing samples under this worst-case adversarial model.
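A sketch of how this metric can be computed from the certified bound of
Algorithm 2 is shown below, reusing the illustrative `pred_gap` function from
Section III-D: a test sample counts as a worst-case error if it is already
misclassified, or if its certified radius $t$ is smaller than the corruption
level $c$.

```python
def worst_case_error(models, X_test, y_test, c):
    """Fraction of test samples misclassified, or flippable by c
    adaptively chosen poisoned owners (certified radius t < c)."""
    errors = 0
    for x, y_true in zip(X_test, y_test):
        y, t = pred_gap(models, x)     # exact counts: no DP noise
        if y != y_true or t < c:
            errors += 1
    return errors / len(y_test)
```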
### IV-C Datasets and Models
We give descriptions of the datasets and models used for our experiments
below.
MNIST. The MNIST dataset [35] is a 10-class classification problem for
recognizing the digits $0$ through $9$. We train a logistic regression model
for MNIST.
Adult. The Adult dataset [35] is for a binary classification problem to
predict if a person’s annual income is above $50K. We train a neural network
with one hidden layer of size $10$ nodes using ReLU activations.
Fashion. We train several neural networks on the Fashion-MNIST dataset [91]
with one to three hidden layers. The Fashion dataset is a 10-class
classification problem with $784$ features representing various garments. All
hidden layers have $128$ nodes and ReLU activations, except the output layer
using softmax.
CIFAR-10. The CIFAR-10 dataset [57] is a 10 class image dataset. CIFAR-10 is
harder than other datasets we consider, so we perform transfer learning from a
ResNet-50 model [45] pretrained on the ImageNet dataset [33]. We fine tune
only the last layer, freezing all convolutional layers.
EMNIST. The EMNIST dataset [25] is a benchmark federated learning image
dataset, split in a non-iid fashion by the person who drew a given image. We
select 100 EMNIST clients in our experiments.
### IV-D Dataset Partitioning and Model Accuracy
We conduct our experiments by varying the number of data owners. We split
MNIST and Adult datasets across 20 participating data owners, while we use 10
owners for the Fashion and CIFAR-10 datasets. The EMNIST dataset used for
comparison with prior work on federated learning assumes $100$ participating
owners. Each owner selects at random $10\%$ of its local training data as the
validation dataset ${{D}_{j}^{\text{v}}}$. All models are trained using mini-
batch stochastic gradient descent.
To introduce non-iid behavior in our datasets (except for EMNIST, which is
naturally non-iid), we sample class labels from a Dirichlet distribution [46].
That is, to generate a population of non-identical owners, we sample $q\sim
Dir(\alpha p)$ from a Dirichlet distribution, where $p$ characterizes a prior
class distribution over all distinct classes, and $\alpha>0$ is a
concentration parameter which controls the degree of similarity between
owners. As $\alpha\rightarrow\infty$, all owners have identical distributions,
whereas as $\alpha\rightarrow 0$, each owner holds samples of only one
randomly chosen class. In practice, we observe $\alpha=1000$ leads to almost
iid behavior, while $\alpha=0.1$ results in an extreme imbalance distribution.
The default choice for all our experiments is $\alpha=10$, which provides a
realistic non-iid distribution. We will vary parameter $\alpha$ in Appendix
E-A.
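A common way to implement this partitioning (an illustrative sketch; [46]
equivalently samples a per-owner class distribution $q\sim Dir(\alpha p)$) is
to split each class’s examples across owners according to Dirichlet
proportions:

```python
import numpy as np

def partition_dirichlet(y, n_owners, alpha, rng=None):
    """Assign each class's sample indices to owners with Dirichlet
    proportions; smaller alpha gives more imbalanced owners."""
    rng = rng or np.random.default_rng(0)
    owner_idx = [[] for _ in range(n_owners)]
    for c in np.unique(y):
        idx = rng.permutation(np.where(y == c)[0])
        q = rng.dirichlet(alpha * np.ones(n_owners))   # owner proportions
        cuts = (np.cumsum(q)[:-1] * len(idx)).astype(int)
        for k, part in enumerate(np.split(idx, cuts)):
            owner_idx[k].extend(part.tolist())
    return owner_idx
```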
| Dataset | Partition Type | Local Model | SafeNet Ensemble | Improvement |
| --- | --- | --- | --- | --- |
| MNIST | Dirichlet | 80.05% | 89.48% | 9.03% |
| Adult | Dirichlet | 77.32% | 81.41% | 4.09% |
| FASHION | Dirichlet | 71.68% | 83.26% | 11.53% |
| CIFAR-10 | Dirichlet | 54.03% | 62.76% | 8.73% |
| EMNIST | Natural | 54.05% | 79.19% | 25.14% |

TABLE I: Test accuracy comparison of a single local model and the entire
SafeNet ensemble. The SafeNet ensemble improves upon a single local model
across all datasets.
We measure the accuracy of a local model trained by an individual data owner
and of our SafeNet ensemble. Table I provides a detailed comparison of the
accuracy of the local and ensemble models across all five datasets. We observe that
SafeNet consistently outperforms local models, with improvements ranging from
4.09% to 25.14%. The lowest performance is on CIFAR-10, but in this case
SafeNet’s accuracy is very close to fine-tuning the network using the combined
dataset, which reaches 65% accuracy.
### IV-E Implementation of Poisoning Attacks
Backdoor Attacks. We use the BadNets attack by Gu et al. [44], in which the
poisoned owners inject a backdoor into the model to change the model’s
prediction from source label $y_{s}$ to target label $y_{t}$. For instance, in
an image dataset, a backdoor might set a few pixels in the corner of the image
to white. The BadNets attack strategy simply identifies a set of $k$ target
samples $\\{x^{t}_{i}\\}_{i=1}^{k}$ with true label $y_{s}$, and creates
backdoored samples with target label $y_{t}$. We use $k=100$ samples, which is
sufficient to poison all models.
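A minimal sketch of this poisoning procedure is given below; the trigger
location and feature encoding are illustrative (the experiment in Section IV-F
sets the top-left pixel to white):

```python
import numpy as np

def make_backdoor_set(X, y, y_s, y_t, k=100, trigger_idx=0):
    """Take k samples of source class y_s, set the trigger feature to
    white (1.0), and relabel them with the target class y_t."""
    src = np.where(y == y_s)[0][:k]
    X_bd = X[src].copy()
    X_bd[:, trigger_idx] = 1.0         # backdoor trigger pixel
    y_bd = np.full(len(src), y_t)      # attacker-chosen label
    return X_bd, y_bd

# Poisoned owners append (X_bd, y_bd) to their clean local data before
# training, so the local model learns the main task and the backdoor.
```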
To run backdoor attacks on models trained with standard PPML frameworks, the
poisoned owners create the poisoned dataset ${D}^{*}_{j}$ by adding $k$
poisoned samples and secret-sharing them as part of the training dataset to
the MPC. The framework then trains the ML model on the combined dataset
submitted by both the honest and poisoned owners.
In SafeNet, backdoor attacks are implemented at the poisoned owners, which add
$k$ backdoored samples to their dataset ${D}_{j}$ and train their local models
$\mathcal{M}^{*}_{j}$ on the combined clean and poisoned data. A model trained
only on poisoned data will be easy to filter due to low accuracy, making
training on clean samples necessary. The corrupt owners then secret-share both
the model $\mathcal{M}^{*}_{j}$ and validation set ${{D}_{j}^{\text{v}}}$
selected at random from ${D}_{j}$ to the MPC.
Targeted Attacks. We select $k$ targeted samples, and change their labels in
training to a target label $y_{t}$ different from the original label. The
models are trained to simultaneously minimize both the training and the
adversarial loss. This strategy has also been used to construct poisoned
models by prior work [55], and can be viewed as an unrestricted version of the
state-of-the-art Witches’ Brew targeted attack (which requires clean-label
poisoned samples) [40].
The next question to address is which samples to target as part of the attack.
We use two strategies to generate $k=100$ target samples, based on an ML model
trained by the adversary over the test data. In the first strategy, called
TGT-Top, the adversary chooses examples classified correctly with high
confidence by a different model. Because these examples are easy to classify,
poisoning them should be hard. We also consider an attack called TGT-Foot,
which chooses low confidence examples, which are easier to poison. For both
strategies, the adversary replaces its label with the second highest predicted
label. We compare these two strategies for target selection.
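The selection logic for both strategies can be sketched as follows, assuming
`proba` holds the auxiliary model’s predicted class probabilities on the test
set (all names are illustrative):

```python
import numpy as np

def select_targets(proba, y_true, k=100, mode="top"):
    """Pick k correctly classified test samples with highest (TGT-Top)
    or lowest (TGT-Foot) confidence, and return each sample's
    second-highest predicted class as its adversarial label."""
    correct = np.where(proba.argmax(axis=1) == y_true)[0]
    conf = proba[correct, y_true[correct]]
    order = np.argsort(conf)
    chosen = correct[order[-k:]] if mode == "top" else correct[order[:k]]
    adv_labels = np.argsort(proba[chosen], axis=1)[:, -2]
    return chosen, adv_labels
```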
The difference between targeted and backdoor attacks is that targeted attacks
do not require the addition of a backdoor trigger to training or testing
samples, as needed in a backdoor attack. However, the impact of the backdoor
attack is larger. Targeted attacks change the prediction on a small set of
testing samples (which are selected in advance before training the model),
while the backdoor attack generalizes to any testing samples including the
backdoor pattern.
### IV-F Evaluation on Logistic Regression
We start with the Digit-1/7 dataset, a subset of MNIST containing only the
digits 1 and 7, on which we evaluate the computational costs and poisoning
attack success for both the traditional PPML and our newly proposed SafeNet
framework.
We perform our experiments over four underlying MPC frameworks, with both
semi-honest and malicious adversaries. Table II provides a detailed analysis
of the training time and communication complexity for both existing PPML and
SafeNet frameworks. Note that the training time and communication cost for the
PPML frameworks is reported per epoch times the number of epochs in training.
The number of epochs is a configurable hyper-parameter, but usually at least
10 epochs are required. On the other hand, the training time and communication
reported for SafeNet is for the end-to-end execution inside the MPC,
independent of the number of epochs. We observe large improvements of SafeNet
over the existing PPML frameworks. For instance, in the semi-honest two-party
setting, SafeNet achieves $30\times$ and $17\times$ improvement in running
time and communication complexity, respectively, for $n=10$ epochs. This is
expected because SafeNet performs local model training, which is an expensive
phase in the MPC.
| MPC Setting | Framework | Training (s) | Comm. (GB) |
| --- | --- | --- | --- |
| 2PC Semi-Honest [32] | PPML | n$\times$151.84 | n$\times$65.64 |
| 2PC Semi-Honest [32] | SafeNet | 57.41 | 38.03 |
| 3PC Semi-Honest [4] | PPML | n$\times$2.63 | n$\times$0.35 |
| 3PC Semi-Honest [4] | SafeNet | 0.54 | 0.15 |
| 3PC Malicious [28] | PPML | n$\times$32.54 | n$\times$2.32 |
| 3PC Malicious [28] | SafeNet | 9.44 | 1.47 |
| 4PC Malicious [28] | PPML | n$\times$5.28 | n$\times$0.66 |
| 4PC Malicious [28] | SafeNet | 1.09 | 0.28 |
TABLE II: Training time (in seconds) and communication (in GB) of the existing
PPML and SafeNet frameworks for a logistic regression model, over several MPC
settings on a LAN network. n denotes the number of epochs required for
training the logistic regression model in the PPML framework. The time and
communication reported for SafeNet are for end-to-end execution.
To mount the backdoor attack, the backdoor pattern sets the top left pixel
value to white (a value of 1). We set the original class as $y_{s}=1$ and
target class as $y_{t}=7$. Figure 4 (a) shows the success rate for the 3PC
PPML and SafeNet frameworks by varying the number of poisoned owners between 0
and 10. We tested with all four PPML settings and the results are similar. We
observe that by poisoning the data of a single owner, the adversary is
successfully able to introduce a backdoor in the PPML framework. The model in
the PPML framework predicts all $k=100$ target samples as $y_{t}$, achieving
$100\%$ adversarial success rate. In contrast, SafeNet is successfully able to
defend against the backdoor attack, and provides $0\%$ attack success rate up
to 9 owners with poisoned data. The test accuracy on clean data for both
frameworks is high at around $98.98\%$ even after increasing the poisoned
owners to $10$.
[Figure 4 plots: attack success rate (in %) versus the number of corrupt data
owners (0–10) for the PPML and SafeNet frameworks; panels (a) Backdoor, (b)
TGT-Top, (c) TGT-Foot, (d) Worst-case Adversary.]
Figure 4: Logistic regression attack success rate on the Digit-1/7 dataset for
PPML and SafeNet frameworks in the 3PC setting, for varying poisoned owners
launching Backdoor and Targeted attacks. Plot (a) gives the success rate for
the BadNets attack, while plots (b) and (c) show the success rates for the
TGT-Top and TGT-Foot targeted attacks. Plot (d) provides the worst-case
adversarial success when the set of poisoned owners can change per sample. A
lower attack success rate indicates increased robustness. SafeNet achieves a
much higher level of robustness than the existing PPML framework under both
attacks.
We observe in Figure 4 (b) that, for the TGT-Top targeted attack, poisoning a
single owner suffices to misclassify $98\%$ of the target samples in the PPML
framework. As a consequence, the test accuracy of the
model drops by $\approx 10\%$. In contrast, SafeNet works as intended even at
high levels of poisoning. For the TGT-Foot attack in Figure 4 (c), the test
accuracy of the 3PC PPML framework drops by $\approx 5\%$. The attack success
rate is $94\%$ for the 3PC PPML, which SafeNet decreases to $21\%$, in the
presence of a single poisoned owner. The accuracy drop and success rate vary
across the two strategies because of the choice of the target samples. In TGT-
Foot, the models have low confidence on the target samples, which introduces
errors even without poisoning, making the attack succeed with slightly higher
rate in SafeNet. Still, SafeNet provides resilience against both TGT-Top and
TGT-Foot for up to 9 out of 20 poisoned owners.
Worst-case Robustness. Figure 4 (d) shows the worst-case attack success in
SafeNet, by varying the number of poisoned owners $c\in[1,10]$ and allowing
the attacker to poison a different set of $c$ owners for each testing sample
(i.e., the adversarial model considered in Algorithm 2 for which we can
certify predictions). Interestingly, SafeNet’s accuracy is similar to that
achieved under our backdoor and targeted attacks, even for this worst-case
adversarial scenario. Based on these results we conclude that: (1) the
backdoor and targeted attacks we choose to implement are as strong as the
worst-case adversarial attack, in which the set of poisoned owners is selected
per sample; (2) SafeNet provides certified robustness up to 9 out of 20
poisoned owners even under this powerful threat scenario.
Multiclass Classification. We also test both frameworks in the multiclass
classification setting for both Backdoor and Targeted attacks on MNIST dataset
and observe similar large improvements. For instance, in the semi-honest 3PC
setting, we get $240\times$ and $268\times$ improvements in training time and
communication complexity, respectively, for $n=10$ epochs, while the success
rate in the worst-case adversarial scenario does not exceed $50\%$ with $9$
out of $20$ owners poisoned. This experiment shows that the robust
accuracy property of our framework translates seamlessly even for the case of
a multi-class classification problem. The details of the experiment are
deferred to Appendix E.
### IV-G Evaluation on Deep Learning Models
We evaluate neural network training for PPML and SafeNet frameworks on the
Adult and Fashion datasets. We provide experiments on a three hidden layer
neural network on Fashion in this section and include additional experiments
in Appendix E.
| MPC Setting | Framework | Training Time (s) | Comm. (GB) | Backdoor Test Acc. | Backdoor Success Rate | Targeted Test Acc. | Targeted Success Rate (Top) | Targeted Success Rate (Foot) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 3PC Semi-Honest [4] | PPML | n$\times$565.45 | n$\times$154.79 | 84.07% | 100% | 82.27% | 100% | 100% |
| 3PC Semi-Honest [4] | SafeNet | 156.53 | 41.39 | 84.36% | 0% | 84.48% | 0% | 32% |
| 4PC Malicious [28] | PPML | n$\times$1392.46 | n$\times$280.32 | 84.12% | 100% | 82.34% | 100% | 100% |
| 4PC Malicious [28] | SafeNet | 356.26 | 76.43 | 84.36% | 0% | 84.54% | 0% | 32% |
TABLE III: Time (in seconds) and communication (in GB) over a LAN network for
the PPML and SafeNet frameworks, training a neural network with 3 hidden
layers on the Fashion dataset. n denotes the number of epochs used to train
the NN model in the PPML framework. The time and communication reported for
SafeNet are for end-to-end execution. Test accuracy and success rate are given
for the case where a single owner is corrupt.
Table III provides a detailed analysis of the training time, communication,
test accuracy and success rate for the 4PC PPML framework and SafeNet using
one poisoned owner. We observe that SafeNet has $39\times$ and $36\times$
improvement in training time and communication complexity over the PPML
framework, for $n=10$ epochs. The SafeNet prediction time is on average $26$
milliseconds to perform a single secure prediction, while the existing PPML
framework takes on average $3.5$ milliseconds for the same task. We believe
this is a reasonable cost for many applications, as SafeNet has significant
training time improvements and robustness guarantees.
For the BadNets backdoor attack we set the true label $y_{s}$ as a ‘T-Shirt’
and target label $y_{t}$ as ‘Trouser’. We test the effect of both TGT-Top and
TGT-Foot attacks under multiple poisoned owners, and also evaluate another
variant of targeted attack called TGT-Random, where we randomly sample $k=100$
target samples from the test data. Figure 5 provides the worst-case
adversarial success of SafeNet against these attacks. We observe that SafeNet
provides certified robustness for TGT-Random and TGT-Top up to 4 out of 10
poisoned owners, while the adversary is able to misclassify more target
samples in the TGT-Foot attack. The reason is that the $k$ selected target
samples have the lowest confidence, so models in the ensemble are likely to
disagree on their predictions even without poisoning.
[Figure 5 plot: worst-case success rate (in %) versus the number of corrupt
data owners (0–5) for SafeNet under the TGT-Top, TGT-Random, TGT-Foot, and
Backdoor attacks.]
Figure 5: Worst-case adversarial success against targeted and backdoor attacks
of a three-layer neural network trained on Fashion in SafeNet. The adversary
can change the set of $c$ poisoned owners per sample. SafeNet achieves
robustness on the backdoor, TGT-Top and TGT-Random attacks, up to 4 poisoned
owners out of 10. The TGT-Foot attack targeting low-confidence samples has
higher success.
### IV-H Evaluation of Extensions
Here, we evaluate the SafeNet extensions introduced in Section III-E. First,
we experiment with our transfer learning extension and show that it
dramatically reduces SafeNet’s inference overhead. We test our approach on the
Fashion and CIFAR-10 datasets. For the Fashion dataset, we use
the same setup as earlier with $m=10$ data owners, and three-layered neural
network as the model architecture, where each data owner fine-tunes only the
last layer ($l=1$) of the pre-trained model. We observe that for each secure
inference, SafeNet is now only $1.62\times$ slower and communicates
$1.26\times$ more on average than the PPML framework, while the standard
SafeNet approach is about $8\times$ slower due to the evaluation of multiple
ML models.
We observe even better improvements for the CIFAR-10 dataset. Here, we use a
state-of-the-art 3PC inference protocol from [58], built specifically for
ResNet models. In our setting, each owner fine-tunes the last layer of a ResNet-50
model, which was pre-trained on ImageNet data. SafeNet reaches 62.8% accuracy,
decaying smoothly in the presence of poisoning: 51.9% accuracy when tolerating
a single poisoned owner, and 39.8% when tolerating two poisoned owners. The
cost of inference for a single model is 59.9s on average, and SafeNet's
overhead is negligible (experimental noise has a larger impact than SafeNet);
SafeNet increases communication by only 0.1%, adding around 7 MB to the
6.5 GB required for standard inference.
Next, we analyze the behavior of SafeNet under different non-iid settings by
varying the concentration parameter $\alpha$. We use the same Fashion dataset
setup from Section IV-G. We observe that, as $\alpha$ decreases, i.e., as the
underlying data distribution of the owners becomes more non-iid, SafeNet's
accuracy decreases, as expected, but SafeNet still achieves reasonable
robustness even under high data imbalance (e.g., $\alpha=1$). In extremely
imbalanced settings, such as $\alpha=0.1$, SafeNet can identify the low
accuracy during training, and data owners can take actions accordingly. We defer the
details for this extension to Appendix E-A, which also includes analyzing
attack success rates under extreme non-iid conditions.
## V Discussion and Comparison
We showed that SafeNet successfully mitigates a variety of data poisoning
attacks. We now discuss other aspects of our framework such as scalability and
modularity, parameter selection in practice and comparison against other
mitigation strategies and federated learning approaches.
### V-A SafeNet’s Scalability and Modularity
Scalability. The training and prediction times of SafeNet inside the MPC
depend on the number of models in the ensemble and the size of the validation
dataset. The training time increases linearly with the fraction of training
data used for validation and the number of models in the ensemble. Similarly,
the prediction phase of SafeNet has both runtime and communication scaling
linearly with the number of models in the ensemble. However, we discussed how
transfer learning can reduce the inference time of SafeNet.
Modularity. Another key advantage of SafeNet is that it can use any MPC
protocol as a backend, as long as it implements standard ML operations. We
demonstrated this by performing experiments with both malicious and semi-
honest security for four different MPC settings. As a consequence, advances in
ML inference with MPC will improve SafeNet’s runtime. SafeNet can also use any
model type implementable in MPC; if more accurate models are designed, this
will lead to improved robustness and accuracy.
### V-B Instantiating SafeNet in Practice
In this section we discuss how SafeNet can be instantiated in practice. There
are two aspects the data owners need to agree upon before instantiating
SafeNet: i) the MPC framework used for the secure training and prediction
phases, and ii) the parameters in Theorem 6 to achieve poisoning robustness. The MPC
framework is agreed upon by choosing the total number of outsourced servers
$N$ participating in the MPC, the number of corrupted servers $T$ and the
nature of the adversary (semi-honest or malicious in the SOC paradigm). The
owners then agree upon a filtering threshold $\mathsf{\phi}$ and the number of
poisoned owners $t$ that can be tolerated. Once these parameters are chosen,
the maximum allowed error probability of the local models trained by the
honest owners can be computed, based on Lemma 5 and Theorem 6, as
$p<\min\left(\frac{m(1-\mathsf{\phi})-t}{m-t},\frac{m-2t}{2(m-t)}\right)$, where $m$
denotes the total number of data owners. Given the upper bound on the error
probability $p$, each honest owner trains its local model while satisfying the
above constraint.
We provide a concrete example on parameter selection as follows: We
instantiate our Fashion dataset setup, with $m=10$ data owners participating
in SafeNet. For the MPC framework we choose a three-party setting ($N=3$
servers), tolerating $T=1$ corruption. For poisoning robustness, we set
$\mathsf{\phi}=0.3$ and the number of poisoned owners to $t=2$. This gives us
the upper bound on the maximum error probability as $p<0.375$. Also, the size
of the global validation dataset must satisfy $|{{D}_{\text{val}}}|>92$
samples, i.e., each data owner contributes $10$ cross-validation samples so
that the constraint is satisfied. With this instantiation, we observe that
none of the clean models are filtered during training, and the attack success
rate of the adversary for backdoor attacks remains the same even after
poisoning $3$ owners, while our analysis holds for $t=2$ poisoned owners.
Thus, in practice SafeNet is able to tolerate more poisoning than our analysis
suggests.
### V-C Comparing to Poisoning Defenses
Defending against poisoning attacks is an active area of research, but
defenses tend to be heuristic and specific to attacks or domains. Many
defenses for backdoor poisoning attacks exist [63, 86, 22, 89], but these
strategies work only for Convolutional Neural Networks trained on image
datasets; Severi et al. [80] showed that these approaches fail when tested on
other data modalities and models. Furthermore, recent work by Goldwasser et
al. [42] formulated a way to plant backdoors that are undetectable by any defense.
In contrast, SafeNet is model agnostic and works for a variety of data
modalities. Even if an attack is undetectable, the adversary can poison only a
subset of models, making the ensemble robust against poisoning. In certain
instances SafeNet can tolerate around $30\%$ of the training data being
poisoned, while being attack agnostic. SafeNet is also robust to stronger
model poisoning attacks [5, 8, 37], which are possible when data owners train
their models locally. SafeNet tolerates model poisoning because each model
only contributes to a single vote towards the final ensemble prediction. In
fact, all our empirical and theoretical analysis of SafeNet is computed for
arbitrarily corrupted models.
### V-D Comparison with Federated Learning
Federated Learning (FL) is a distributed machine learning framework that
allows clients to train a global model without sharing their local training
datasets with a central server. However, FL differs from the PPML setting we
consider in the following ways: (1) Clients do not share their local data with
the server in FL, whereas PPML allows sharing of datasets; (2) Clients
participate in multiple rounds of training in FL, whereas they communicate
only once with the servers in PPML; (3) Clients receive the global model at
each round in FL, while in SafeNet they secret-share their models once at the
start of the protocol; and, finally, (4) PPML provides stronger
confidentiality guarantees such as privacy of the global model.
It is possible to combine FL and MPC to guarantee both client and global model
privacy [52, 98, 38], but this involves large communication overhead and is
susceptible to poisoning [64]. For example, recent work [92, 8, 6] showed that
malicious data owners can significantly reduce the learned global model’s
accuracy. Existing defenses against such owners use Byzantine-robust
aggregation rules such as trimmed mean [96], coordinate-wise mean [95], and
Krum [11], which have been shown to be susceptible to backdoor and model
poisoning attacks [37]. Recent work in FL, such as FLTrust [15] and DeepSight
[79] provide mitigation against backdoor attacks. Both strategies are
inherently heuristic, while SafeNet offers provable robustness guarantees.
FLTrust also requires access to a clean dataset, which is not required in our
framework, and DeepSight inspects each model update before aggregation, which
is both difficult in MPC and leads to privacy leakage from the updates, a
drawback not found in SafeNet. An important privacy challenge is that
federated learning approaches permit data reconstruction attacks when the
central server is malicious [12]. SafeNet prevents such attacks, since
mounting one would directly violate the security guarantee of the MPC when
instantiated in the malicious setting.
We experimentally compare SafeNet to the federated learning-based approach of
Cao et al. [16], who also gave provable robustness guarantees in the federated
averaging scenario. We instantiate their strategy for the EMNIST dataset and
compare their Certified Accuracy metric to SafeNet's, with $m=100$ data
owners, $k=\{2,4\}$, and FedAvg as the base algorithm. To ensure both
approaches have similar inference times, we fix the ensemble size to 100
models, each trained using federated learning with 50 global and local
iterations.
Figure 6: Certified Accuracy of our framework compared to Cao et al. [16]. We
fix the size of the Cao et al. ensemble to 100, to match the test runtime of
SafeNet.
Figure 6 shows that SafeNet consistently outperforms [16], in terms of
maintaining a high certified accuracy in the presence of large poisoning
rates. Moreover, their strategy is also particularly expensive at training
time when instantiated in MPC. During training, their approach requires data
owners to interact inside MPC to train models over multiple rounds. By
contrast, SafeNet only requires interaction in MPC at the beginning of the
training phase, making it significantly faster.
## VI Conclusion
In this paper, we extend the security definitions of MPC to account for data
poisoning attacks when training machine learning models privately. We consider
a novel adversarial model that can manipulate the training data of a subset of
owners and control a subset of servers in the MPC. We then propose SafeNet,
which performs ensembling in MPC, and show that our design has provable
robustness and privacy guarantees beyond those offered by existing
approaches. We evaluate SafeNet using logistic regression and neural network
models trained on five datasets, varying the distribution similarity across
data owners. We consider both end-to-end and transfer learning scenarios. We
demonstrate experimentally that SafeNet achieves even higher robustness than
its theoretical analysis guarantees against backdoor and targeted poisoning
attacks, at a significant improvement in training time and communication
complexity compared to existing PPML frameworks.
## VII Acknowledgments
We thank Nicolas Papernot and Peter Rindal for helpful discussions and
feedback. This research was sponsored by the U.S. Army Combat Capabilities
Development Command Army Research Laboratory under Cooperative Agreement
Number W911NF-13-2-0045 (ARL Cyber Security CRA). The views and conclusions
contained in this document are those of the authors and should not be
interpreted as representing the official policies, either expressed or
implied, of the Combat Capabilities Development Command Army Research
Laboratory or the U.S. Government. The U.S. Government is authorized to
reproduce and distribute reprints for Government purposes notwithstanding any
copyright notation here on.
## References
* [1] M. Abspoel, D. Escudero, and N. Volgushev. Secure training of decision trees with continuous attributes. In PoPETS, 2021.
* [2] A. Aly and N.P. Smart. Benchmarking privacy preserving scientific operations. In ACNS, 2019.
* [3] T. Araki, A. Barak, J. Furukawa, T. Lichter, Y. Lindell, A. Nof, K. Ohara, A. Watzman, and O. Weinstein. Optimized honest-majority MPC for malicious adversaries - breaking the 1 billion-gate per second barrier. In IEEE S&P, 2017.
* [4] T. Araki, J. Furukawa, Y. Lindell, A. Nof, and K. Ohara. High-throughput semi-honest secure three-party computation with an honest majority. In ACM CCS, 2016.
* [5] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov. How to backdoor federated learning. 2018.
* [6] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov. How to backdoor federated learning. In AISTATS, 2020.
* [7] M. Ben-Or, S. Goldwasser, and A. Wigderson. Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation (Extended Abstract). In ACM STOC, 1988.
* [8] A. Bhagoji, S. Chakraborty, P. Mittal, and S. Calo. Analyzing federated learning through an adversarial lens. In ICML, 2019.
* [9] B. Biggio, I. Corona, G. Fumera, G. Giacinto, and F. Roli. Bagging classifiers for fighting poisoning attacks in adversarial classification tasks. In International workshop on multiple classifier systems, 2011.
* [10] B. Biggio, B. Nelson, and P. Laskov. Poisoning attacks against support vector machines. In ICML, 2012.
* [11] P. Blanchard, E. Mhamdi, R. Guerraoui, and J. Stainer. Byzantine-tolerant machine learning. In NeurIPS, 2017.
* [12] F. Boenisch, A. Dziedzic, R. Schuster, A. Shamsabadi, I. Shumailov, and N. Papernot. When the curious abandon honesty: Federated learning is not private. In arXiv, 2021.
* [13] M. Byali, H. Chaudhari, A. Patra, and A. Suresh. Flash: Fast and robust framework for privacy-preserving machine learning. PoPETS, 2020.
* [14] R. Canetti. Security and composition of multiparty cryptographic protocols. In J. Cryptology, 2000.
* [15] X. Cao, M. Fang, J. Liu, and N. Gong. Fltrust: Byzantine-robust federated learning via trust bootstrapping. In NDSS, 2021.
* [16] X. Cao, J. Jia, and N. Gong. Provably secure federated learning against malicious clients. In AAAI, 2021.
* [17] N. Carlini, S. Chien, M. Nasr, S. Song, A. Terzis, and F. Tramer. Membership inference attacks from first principles. In IEEE Symposium on Security and Privacy (SP), 2022.
* [18] N. Chandran, D. Gupta, S.L.B. Obbattu, and A. Shah. SIMC: ML inference secure against malicious clients at semi-honest cost. In USENIX, 2022.
* [19] H. Chaudhari, J. Abascal, A. Oprea, M. Jagielski, F. Tramèr, and J. Ullman. Snap: Efficient extraction of private properties with poisoning. arXiv, 2022.
* [20] H. Chaudhari, A. Choudhury, A. Patra, and A. Suresh. ASTRA: High-throughput 3PC over Rings with Application to Secure Prediction. In ACM CCSW, 2019.
* [21] H. Chaudhari, R. Rachuri, and A. Suresh. Trident: Efficient 4pc framework for privacy preserving machine learning. NDSS, 2020.
* [22] B. Chen, W. Carvalho, N. Baracaldo, H. Ludwig, B. Edwards, T. Lee, I. M. Molloy, and B. Srivastava. Detecting backdoor attacks on deep neural networks by activation clustering. In SafeAI@AAAI, 2019.
* [23] X. Chen, C. Liu, B. Li, K. Lu, and D. Song. Targeted backdoor attacks on deep learning systems using data poisoning. 2017.
* [24] C.A. Choquette-Choo, N. Dullerud, A. Dziedzic, Y. Zhang, S. Jha, N. Papernot, and X. Wang. Ca{pc} learning: Confidential and private collaborative learning. In ICLR, 2021.
* [25] G. Cohen, S. Afshar, J. Tapson, and A. van Schaik. EMNIST: Extending MNIST to handwritten letters. In International Joint Conference on Neural Networks (IJCNN), 2017.
* [26] J. Cohen, E. Rosenfeld, and Z. Kolter. Certified adversarial robustness via randomized smoothing. In ICML, 2019.
* [27] R. Cramer, I. Damgård, D. Escudero, P. Scholl, and C. Xing. SPDZ2k: Efficient MPC mod $2^k$ for dishonest majority. In CRYPTO, 2018.
* [28] A. Dalskov, D. Escudero, and M. Keller. Fantastic four: Honest-majority four-party secure computation with malicious security. In USENIX, 2021.
* [29] A.P.K. Dalskov, D. Escudero, and M. Keller. Secure evaluation of quantized neural networks. In PoPETS, 2020.
* [30] I. Damgård, M. Keller, E. Larraia, V. Pastro, P. Scholl, and N. P. Smart. Practical covertly secure MPC for dishonest majority - or: Breaking the SPDZ limits. In ESORICS, 2013.
* [31] I. Damgård, V. Pastro, N. P. Smart, and S. Zakarias. Multiparty computation from somewhat homomorphic encryption. In CRYPTO, 2012.
* [32] D. Demmler, T. Schneider, and M. Zohner. ABY - A Framework for Efficient Mixed-Protocol Secure Two-Party Computation. In NDSS, 2015.
* [33] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In IEEE CVPR, 2009.
* [34] J. Devlin, M.W. Chang, K. Lee, and K. Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. In NAACL, 2019.
* [35] D. Dua and C. Graff. UCI machine learning repository, 2017.
* [36] D. Escudero, M. Jagielski, R. Rachuri, and P. Scholl. Adversarial Attacks and Countermeasures on Private Training in MPC. In PPML@NeurIPS, 2021.
* [37] M. Fang, X. Cao, J. Jia, and N. Gong. Local model poisoning attacks to byzantine-robust federated learning. In USENIX, 2020.
* [38] A. Fu, X. Zhang, N. Xiong, Y. Gao, H. Wang, and J. Zhang. Vfl: A verifiable federated learning with privacy-preserving for big data in industrial iot. In IEEE Transactions on Industrial Informatics, 2020.
* [39] K. Ganju, Q. Wang, W. Yang, C.A. Gunter, and N. Borisov. Property inference attacks on fully connected neural networks using permutation invariant representations. 2018.
* [40] J. Geiping, L.H. Fowl, W.R. Huang, W. Czaja, G. Taylor, M. Moeller, and T. Goldstein. Witches’ brew: Industrial scale data poisoning via gradient matching. In ICLR, 2021.
* [41] O. Goldreich, S. Micali, and A. Wigderson. How to Play any Mental Game or A Completeness Theorem for Protocols with Honest Majority. In STOC, 1987.
* [42] S. Goldwasser, M. Kim, V. Vaikuntanathan, and O. Zamir. Planting undetectable backdoors in machine learning models. In arXiv, 2022.
* [43] S. D. Gordon, S. Ranellucci, and X. Wang. Secure computation with low communication from cross-checking. In ASIACRYPT, 2018.
* [44] T. Gu, K. Liu, B. Dolan-Gavitt, and S. Garg. BadNets: Evaluating backdooring attacks on deep neural networks. IEEE Access, 2019.
* [45] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In IEEE CVPR, 2016.
* [46] T.M.H. Hsu, H. Qi, and M. Brown. Measuring the effects of non-identical data distribution for federated visual classification. In IACR ePrint, 2019.
* [47] Y. Ishai, J. Kilian, K. Nissim, and E. Petrank. Extending Oblivious Transfers Efficiently. In CRYPTO, 2003.
* [48] Y. Ishai, R. Kumaresan, E. Kushilevitz, and A. Paskin-Cherniavsky. Secure computation with minimal interaction, revisited. In CRYPTO, 2015.
* [49] M. Jagielski, A. Oprea, B. Biggio, C. Liu, C.N. Rotaru, and B. Li. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In IEEE S&P, 2018.
* [50] J. Jia, X. Cao, and N. Gong. Intrinsic certified robustness of bagging against data poisoning attacks. In AAAI, 2021.
* [51] J. Jia, X. Cao, and N.Z. Gong. Intrinsic certified robustness of bagging against data poisoning attacks. In AAAI, 2021.
* [52] R. Kanagavelu, Z. Li, J. Samsudin, Y. Yang, F. Yang, R. Goh, M. Cheah, P. Wiwatphonthana, K. Akkarajitsakul, and S. Wang. Two-phase multi-party computation enabled privacy-preserving federated learning. In ACM CCGRID, 2020.
* [53] M. Keller. MP-SPDZ: A versatile framework for multi-party computation. In ACM CCS, 2020.
* [54] P.W. Koh and P. Liang. Understanding black-box predictions via influence functions. In ICML, 2017.
* [55] P.W. Koh, J. Steinhardt, and P. Liang. Stronger data poisoning attacks break data sanitization defenses. In arXiv, 2018.
* [56] S. Kornblith, J. Shlens, and Q.V. Le. Do better imagenet models transfer better? In CVPR, 2019.
* [57] A. Krizhevsky, G. Hinton, et al. Learning multiple layers of features from tiny images. 2009.
* [58] N. Kumar, M. Rathee, N. Chandran, D. Gupta, A. Rastogi, and R. Sharma. Cryptflow: Secure tensorflow inference. In IEEE Security & Privacy, 2020.
* [59] R. Lehmkuhl, P. Mishra, A. Srinivasan, and R.A. Popa. Muse: Secure inference resilient to malicious clients. In USENIX, 2021.
* [60] A. Levine and S. Feizi. Deep partition aggregation: Provable defense against general poisoning attacks. arXiv preprint arXiv:2006.14768, 2020.
* [61] Y. Lindell. Fast cut-and-choose-based protocols for malicious and covert adversaries. In J. Cryptology, 2016.
* [62] Y. Lindell and B. Pinkas. An efficient protocol for secure two-party computation in the presence of malicious adversaries. In EUROCRYPT, 2007.
* [63] K. Liu, B. Dolan, and S. Garg. Fine-pruning: Defending against backdooring attacks on deep neural networks. In RAID, 2018.
* [64] Z. Liu, J. Guo, W. Yang, J. Fan, K.-Y. Lam, and J. Zhao. Privacy-preserving aggregation in federated learning: A survey. In arXiv, 2022.
* [65] S. Mahloujifar, E. Ghosh, and M. Chase. Property inference from poisoning. In IEEE Symposium on Security and Privacy (SP), 2022.
* [66] H.B McMahan, D. Ramage, K. Talwar, and L. Zhang. Learning differentially private recurrent language models. In ICLR, 2018.
* [67] P. Mishra, R. Lehmkuhl, A. Srinivasan, W. Zheng, and R.A. Popa. Delphi: A cryptographic inference service for neural networks. In USENIX, 2020.
* [68] P. Mohassel and M. K. Franklin. Efficiency tradeoffs for malicious two-party computation. In PKC, 2006.
* [69] P. Mohassel and P. Rindal. ABY$^{3}$: A mixed protocol framework for machine learning. In ACM CCS, 2018.
* [70] P. Mohassel, M. Rosulek, and Y. Zhang. Fast and Secure Three-party Computation: Garbled Circuit Approach. In CCS, 2015.
* [71] P. Mohassel and Y. Zhang. Secureml: A system for scalable privacy-preserving machine learning. In IEEE S&P, 2017.
* [72] L. Muñoz-González, B. Biggio, A. Demontis, A. Paudice, V. Wongrassamee, E.C. Lupu, and F. Roli. Towards poisoning of deep learning algorithms with back-gradient optimization. In AISec@CCS, 2017.
* [73] J. Newsome, B. Karp, and D. Song. Paragraph: Thwarting signature learning by training maliciously. In RAID, 2006.
* [74] J. B. Nielsen and C. Orlandi. Cross and clean: Amortized garbled circuits with constant overhead. In TCC, 2016.
* [75] N. Papernot, M. Abadi, Ú. Erlingsson, I. Goodfellow, and K. Talwar. Semi-supervised knowledge transfer for deep learning from private training data. In ICLR, 2017.
* [76] N. Papernot, S. Song, I. Mironov, A. Raghunathan, K. Talwar, and Ú. Erlingsson. Scalable private learning with PATE. 2018.
* [77] A. Patra, T. Schneider, A. Suresh, and H. Yalame. Aby2.0: Improved mixed-protocol secure two-party computation. In USENIX, 2021.
* [78] D. Rathee, M. Rathee, N. Kumar, N. Chandran, D. Gupta, A. Rastogi, and R. Sharma. Cryptflow2: Practical 2-party secure inference. In ACM CCS, 2020.
* [79] P. Rieger, T. Nguyen, M. Miettinen, and A. Sadeghi. Deepsight: Mitigating backdoor attacks in federated learning through deep model inspection. In NDSS, 2022.
* [80] G. Severi, J. Meyer, S. Coull, and A. Oprea. Explanation-guided backdoor poisoning attacks against malware classifiers. In USENIX, 2021.
* [81] R. Shokri, M. Stronati, C. Song, and V. Shmatikov. Membership inference attacks against machine learning models. In IEEE Symposium on Security and Privacy (SP), 2017.
* [82] O. Suciu, R. Marginean, Y. Kaya, H. Daume III, and T. Dumitras. When does machine learning FAIL? generalized transferability for evasion and poisoning attacks. In USENIX, 2018.
* [83] A. Suri and D. Evans. Formalizing and estimating distribution inference risks. Proceedings on Privacy Enhancing Technologies (PETS), 2022.
* [84] X. Tang, S. Mahloujifar, L. Song, V. Shejwalkar, M. Nasr, A. Houmansadr, and P. Mittal. Mitigating membership inference attacks by self-distillation through a novel ensemble architecture. In 31st USENIX Security Symposium, 2022.
* [85] F. Tramèr, R. Shokri, A.S. Joaquin, H. Le, M. Jagielski, S. Hong, and N. Carlini. Truth Serum: Poisoning machine learning models to reveal their secrets. In ACM Computer and Communications Security (CCS), 2022.
* [86] B. Tran, J. Li, and A. Madry. Spectral signatures in backdoor attacks. In NeurIPS, 2018.
* [87] S. Wagh, D. Gupta, and N. Chandran. SecureNN: Efficient and private neural network training. In PoPETS, 2019.
* [88] S. Wagh, S. Tople, F. Benhamouda, E. Kushilevitz, P. Mittal, and T. Rabin. Falcon: Honest-majority maliciously secure framework for private deep learning. In PoPETS, 2021.
* [89] B. Wang, Y. Yao, S. Shan, H. Li, H. Viswanath, B. Zheng, and B.Y. Zhao. Neural cleanse: Identifying and mitigating backdoor attacks in neural networks. In IEEE S&P, 2019.
* [90] J.L. Watson, S. Wagh, and R.A. Popa. Piranha: A gpu platform for secure computation. In USENIX, 2022.
* [91] H. Xiao, K. Rasul, and R. Vollgraf. Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms, 2017.
* [92] C. Xie, S. Koyejo, and I. Gupta. Fall of empires: Breaking byzantine-tolerant SGD by inner product manipulation. In UAI, 2019.
* [93] A. C. Yao. Protocols for Secure Computations. In FOCS, 1982.
* [94] S. Yeom, I. Giacomelli, M. Fredrikson, and S. Jha. Privacy risk in machine learning: Analyzing the connection to overfitting. In 2018 IEEE 31st Computer Security Foundations Symposium (CSF). IEEE, 2018.
* [95] D. Yin, Y. Chen, K. Ramchandran, and P. Bartlett. Byzantine-robust distributed learning: Towards optimal statistical rates. In ICML, 2018.
* [96] D. Yin, Y. Chen, K. Ramchandran, and P. Bartlett. Defending against saddle point attack in byzantine-robust distributed learning. In ICML, 2019.
* [97] W. Zhang, S. Tople, and O. Ohrimenko. Leakage of dataset properties in Multi-Party machine learning. In 30th USENIX Security Symposium, 2021.
* [98] H. Zhu, R. Mong Goh, and W. Ng. Privacy-preserving weighted federated learning within the secret sharing framework. In IEEE Access, 2020.
## Appendix A SafeNet Analysis
In this section we first prove a bound on the size of the validation dataset
${{D}_{\text{val}}}$ that guarantees all clean models clear the filtering
stage of the training phase of our framework. We then prove a lower bound on
the test accuracy of our framework, given that all clean models are part of
the ensemble.
The derivation of the minimum size of ${{D}_{\text{val}}}$ relies on the
observation that the errors made by a clean model on the clean subset of
samples in ${{D}_{\text{val}}}$ follow a Binomial distribution with parameters
$(m-t)n$ and $p$, where $n$ denotes the size of the validation dataset
${{D}_{k}^{\text{v}}}$ contributed by an owner $\mathsf{C}_{k}$. We can then
upper bound the total number of errors made by a clean model by applying a
Chernoff bound and, consequently, compute the required size of
${{D}_{\text{val}}}$.
###### Lemma 5.
Let $\mathcal{A}^{\text{p}}_{\text{soc}}$ be an adversary who poisons $t$ out
of $m$ data owners and corrupts $T$ out of $N$ servers, and thus contributes
$t$ poisoned models to ensemble $E$, given as output by Algorithm 1. Assume
that $\Pi_{\mathsf{train}}$ securely realizes functionality
$\mathcal{F}_{\mathsf{pTrain}}$ and every clean model in $E$ makes an error on
a clean sample with probability at most $p<1-\mathsf{\phi}$, where
$\mathsf{\phi}$ is the filtering threshold.
If the validation dataset has at least $\frac{(2+\delta)m\log
1/\epsilon}{\delta^{2}(m-t)p}$ samples and $0\leq
t<\frac{m(1-\mathsf{\phi}-p)}{(1-p)}$, then all clean models pass the
filtering stage of the training phase with probability at least $1-\epsilon$,
where $\delta=\frac{(1-\mathsf{\phi})m-t}{(m-t)p}-1$ and $\epsilon$ denotes
the failure probability.
###### Proof.
Assume that each owner contributes an equal-size validation dataset
${{D}_{k}^{\text{v}}}$ of $n$ samples; then the combined validation set
${{D}_{\text{val}}}$ collected from the $m$ data owners comprises $mn$
i.i.d. samples. However, given an adversary
$\mathcal{A}^{\text{p}}_{\text{soc}}$ from our threat model, there can be at
most $t$ poisoned owners contributing $tn$ poisoned samples to
${{D}_{\text{val}}}$. We define a Bernoulli random variable as follows:
$X_{i}=\begin{cases}1,&\text{w.p. }p\\ 0,&\text{w.p. }1-p\end{cases}$
where $X_{i}$ indicates whether a clean model makes an error on the $i^{th}$
clean sample in the validation dataset. The number of errors made by the clean
model on the clean subset of samples in ${{D}_{\text{val}}}$ then follows
$\text{Bin}((m-t)n,p)$. Note that a model passes the filtering stage only when
it makes at least $\mathsf{\phi}mn$ correct predictions. We assume the worst
case, in which the clean model makes incorrect predictions on all $tn$
poisoned samples present in ${{D}_{\text{val}}}$. As a result, the clean model
must make at most $(1-\mathsf{\phi})mn-tn$ errors on the clean subset of
${{D}_{\text{val}}}$, with probability $1-\epsilon$. We upper bound the
probability that the model makes more than $(1-\mathsf{\phi})mn-tn$ errors
with a multiplicative Chernoff bound with $\delta>0$:
$\mathsf{Pr}\left[\sum_{i=1}^{(m-t)n}X_{i}>(1-\mathsf{\phi})mn-tn\right]=\mathsf{Pr}\left[\sum_{i=1}^{(m-t)n}X_{i}>(1+\delta)\mu\right]<e^{-\frac{\delta^{2}\mu}{2+\delta}}$
where $\mu=(m-t)np$ (the mean of $\text{Bin}((m-t)n,p)$) and
$\delta=\frac{(1-\mathsf{\phi})m-t}{(m-t)p}-1$. The Chernoff bound gives that
the probability the clean model makes too many errors is at most
$e^{-\frac{\delta^{2}\mu}{2+\delta}}=\epsilon$. Then it suffices to have this
many samples:
$|{{D}_{\text{val}}}|=mn=\frac{(2+\delta)m\log 1/\epsilon}{\delta^{2}(m-t)p}$
where $\epsilon$ denotes the failure probability and
$t<\frac{m(1-\mathsf{\phi}-p)}{(1-p)}$. The inequality on $t$ comes from
requiring $\delta>0$.
∎
As a visual interpretation of Lemma 5, Figure 7 shows the minimum number of
samples required in the global validation dataset for a varying number of
poisoned owners $t$ and error probability $p$. We set the total number of data
owners to $m=20$, the failure probability to $\epsilon=0.01$, and the
filtering threshold to $\mathsf{\phi}=0.3$. The higher the values of $t$ and
$p$, the more samples are required in the validation set. For instance, for
$p=0.20$ and $t=8$ poisoned owners, all clean models pass the filtering stage
with probability at least $0.99$ when the validation set has at least $60$
samples.
Figure 7: Minimum number of samples in the validation dataset as a function
of the maximum error probability $p$ and the number of poisoned owners $t$,
for $m=20$ data owners. We set the filtering threshold $\mathsf{\phi}=0.3$ and
the failure probability $\epsilon=0.01$.
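As a quick check of Lemma 5, the following sketch (ours) recomputes the bound for the setting of Figure 7 ($m=20$, $\mathsf{\phi}=0.3$, $\epsilon=0.01$); the quoted data point ($t=8$, $p=0.20$) comes out to roughly $60$ samples.

```python
import math

def min_val_size(m, t, phi, p, eps=0.01):
    delta = ((1 - phi) * m - t) / ((m - t) * p) - 1  # Lemma 5; needs delta > 0
    return (2 + delta) * m * math.log(1 / eps) / (delta ** 2 * (m - t) * p)

# A few rows of the Figure 7 grid: more poisoned owners or a larger error
# probability require a larger validation set.
for t in (2, 5, 8):
    print(t, [round(min_val_size(20, t, 0.3, p)) for p in (0.05, 0.10, 0.20)])
# t=8, p=0.20 prints 60, matching the example in the text.
```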
We use a similar strategy to compute the lower bound on the test accuracy. At
a high level, the proof follows by viewing the combined errors made by the
clean models as a Binomial distribution $\text{Bin}(m-t,p)$. We can then upper
bound the total number of errors made by all the models in the ensemble by
applying a Chernoff bound and consequently lower bound the ensemble accuracy.
###### Theorem 6.
Assume that the conditions in Lemma 5 hold against adversary
$\mathcal{A}^{\text{p}}_{\text{soc}}$ poisoning at most
$t<\frac{m}{2}\frac{1-2p}{1-p}$ owners and that the errors made by the clean
models are independent. Then $E$ correctly classifies new samples with
probability at least $p_{c}=(1-\epsilon)\left(1-e^{-\frac{\delta^{\prime
2}\mu^{\prime}}{2+\delta^{\prime}}}\right)$, where $\mu^{\prime}=(m-t)p$ and
$\delta^{\prime}=\frac{m-2t}{2\mu^{\prime}}-1$.
###### Proof.
Lemma 5 shows that, with probability $>1-\epsilon$, no clean model is filtered
during ensemble filtering. Given that all clean models pass the filtering
stage, we consider the worst case in which even the $t$ poisoned models bypass
filtering. Now, given a new test sample, the $m-t$ clean models have
uncorrelated errors, each occurring with probability at most $p$; the error
made by each clean model can be viewed as a Bernoulli random variable with
parameter $p$, so the total number of errors made by clean models follows a
binomial $X\sim\text{Bin}(m-t,p)$. We assume that a new sample is
misclassified by all $t$ poisoned models. The ensemble as a whole then makes
an error if $t+X\geq m/2$. We bound the probability of this event by applying
a Chernoff bound as follows:
$\mathsf{Pr}\left[X+t\geq\frac{m}{2}\right]=\mathsf{Pr}\left[X\geq(1+\delta^{\prime})\mu^{\prime}\right]\leq
e^{-\frac{\delta^{\prime 2}\mu^{\prime}}{2+\delta^{\prime}}},$
where $\mu^{\prime}=(m-t)p$ is the mean of $X$ and
$\delta^{\prime}=\frac{m-2t}{2\mu^{\prime}}-1>0$. Then the probability of
making a correct prediction can be lower bounded by:
$\mathsf{Pr}\left[X<\frac{m}{2}-t\right]>1-e^{-\frac{\delta^{\prime
2}\mu^{\prime}}{2+\delta^{\prime}}},$
given the number of poisoned models
$t<\frac{m(1-2p)}{2(1-p)}.$
The inequality on $t$ comes from the constraint $\delta^{\prime}>0$ required
for the Chernoff bound to hold. Note that the above bound holds only when all
the clean models pass the filtering stage, which occurs with probability at
least $1-\epsilon$ by Lemma 5. The bound on the probability of the ensemble
making a correct prediction can then be written as:
$\mathsf{Pr}\left[X<\frac{m}{2}-t\right]>(1-\epsilon)\left(1-e^{-\frac{\delta^{\prime
2}\mu^{\prime}}{2+\delta^{\prime}}}\right)$
∎
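The bound of Theorem 6 is easy to evaluate numerically. The sketch below does so; the parameter values are illustrative choices of ours and are not taken from the experiments.

```python
import math

def ensemble_accuracy_lower_bound(m, t, p, eps):
    """p_c from Theorem 6, assuming Lemma 5 holds with failure probability eps."""
    mu = (m - t) * p                    # mean of X ~ Bin(m - t, p)
    delta = (m - 2 * t) / (2 * mu) - 1  # requires t < m(1 - 2p) / (2(1 - p))
    assert delta > 0
    return (1 - eps) * (1 - math.exp(-delta ** 2 * mu / (2 + delta)))

print(f"{ensemble_accuracy_lower_bound(m=20, t=6, p=0.05, eps=0.01):.2f}")  # ~0.89
```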
## Appendix B Realization in MPC
To instantiate SafeNet in MPC, we first describe the required MPC building
blocks, and then provide the SafeNet training and secure prediction protocols.
#### B-1 MPC Building Blocks
The notation ${\llbracket x\rrbracket}$ denotes a given value $x$ secret-
shared among the servers. The exact structure of the secret sharing depends
on the particular instantiation of the underlying MPC framework [32, 4, 43, 20,
21, 13]. We assume each value and its respective secret shares to be elements
over an arithmetic ring $\mathbb{Z}_{2^{\ell}}$. All multiplication and
addition operations are carried out over $\mathbb{Z}_{2^{\ell}}$.
We express each of our building blocks in the form of an ideal functionality
and its corresponding protocol. An ideal functionality can be viewed as an
oracle, which takes input from the parties, applies a predefined function $f$
on the inputs and returns the output back to the parties. The inputs and
outputs can be in clear or in ${\llbracket\cdot\rrbracket}$-shared format
depending on the definition of the functionality. These ideal functionalities
are realized using secure protocols depending on the specific instantiation of
the MPC framework agreed upon by the parties. Below are the required building
blocks:
Secure Input Sharing. Ideal Functionality $\mathcal{F}_{\mathsf{shr}}$ takes
as input a value $x$ from a party who wants to generate a
${\llbracket\cdot\rrbracket}$-sharing of x, while other parties input $\bot$
to the functionality. $\mathcal{F}_{\mathsf{shr}}$ generates a
${\llbracket\cdot\rrbracket}$-sharing of $x$ and sends the appropriate shares
to the parties. We use $\Pi_{\mathsf{sh}}$ to denote the protocol that
realizes this functionality securely.
Secure Addition. Given ${\llbracket\cdot\rrbracket}$-shares of $x$ and $y$,
secure addition is realized by parties locally adding their shares
${\llbracket z\rrbracket}={\llbracket x\rrbracket}+{\llbracket y\rrbracket}$,
where $z=x+y$.
Secure Multiplication. Functionality $\mathcal{F}_{\mathsf{mult}}$ takes as
input ${\llbracket\cdot\rrbracket}$-shares of values $x$ and $y$, creates
${\llbracket\cdot\rrbracket}$-shares of $z=xy$ and sends the shares of $z$ to
the parties. $\Pi_{\mathsf{mult}}$ denotes the protocol to securely realize
$\mathcal{F}_{\mathsf{mult}}$.
Secure Output Reconstruction. $\mathcal{F}_{\mathsf{op}}$ functionality takes
as input ${\llbracket\cdot\rrbracket}$-shares of a value $x$ from the parties
and a commonly agreed upon party id pid in clear. On receiving the shares and
pid, $\mathcal{F}_{\mathsf{op}}$ reconstructs $x$ and sends it to the party
associated to pid.
Secure Comparison. $\mathcal{F}_{\mathsf{comp}}$ functionality takes as input
a value $a$ in ${\llbracket\cdot\rrbracket}$-shared format.
$\mathcal{F}_{\mathsf{comp}}$ initializes a bit $b=0$, sets $b=1$ if $a>0$ and
outputs it in ${\llbracket\cdot\rrbracket}$-shared format. Protocol
$\Pi_{\mathsf{comp}}$ is used to securely realize
$\mathcal{F}_{\mathsf{comp}}$.
Secure Zero-Vector. $\mathcal{F}_{\mathsf{zvec}}$ functionality takes as input
a value $L$ in clear from the parties. $\mathcal{F}_{\mathsf{zvec}}$
constructs a vector ${\mathbf{{{z}}}}$ of all zeros of size $L$ and outputs
${\llbracket\cdot\rrbracket}$-shares of ${\mathbf{{{z}}}}$.
$\Pi_{\mathsf{zvec}}$ denotes the protocol that securely realizes
$\mathcal{F}_{\mathsf{zvec}}$.
Secure Argmax. $\mathcal{F}_{\mathsf{amax}}$ functionality takes as input a
vector ${\mathbf{{{x}}}}$ in ${\llbracket\cdot\rrbracket}$-shared format and
outputs ${\llbracket\cdot\rrbracket}$-shares of a value op, where op denotes
the index of the max element in vector ${\mathbf{{{x}}}}$.
$\Pi_{\mathsf{amx}}$ denotes the protocol that securely realizes
$\mathcal{F}_{\mathsf{amax}}$.
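As a toy illustration of the ${\llbracket\cdot\rrbracket}$-sharing abstraction (ours, not the protocol of any cited framework), the sketch below implements 2-party additive sharing over $\mathbb{Z}_{2^{\ell}}$ with local addition and Beaver-triple multiplication; a real instantiation would replace the trusted dealer and the explicit reconstructions with an MPC opening step.

```python
import secrets

ELL = 64
MOD = 1 << ELL

def share(x):
    """Split x into two additive shares over Z_{2^ell}."""
    r = secrets.randbelow(MOD)
    return [r, (x - r) % MOD]

def reconstruct(shares):
    return sum(shares) % MOD

def add(xs, ys):
    """Secure addition is local: each server adds its own shares."""
    return [(a + b) % MOD for a, b in zip(xs, ys)]

def mult(xs, ys):
    """Beaver multiplication: a dealer shares (a, b, c = ab); the servers
    open d = x - a and e = y - b (done here by naive reconstruction),
    then combine everything locally."""
    a, b = secrets.randbelow(MOD), secrets.randbelow(MOD)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % MOD)
    d = (reconstruct(xs) - a) % MOD  # opened masked value; hides x
    e = (reconstruct(ys) - b) % MOD  # opened masked value; hides y
    z = [(d * b_sh[i] + e * a_sh[i] + c_sh[i]) % MOD for i in range(2)]
    z[0] = (z[0] + d * e) % MOD      # the public d*e term is added once
    return z

xs, ys = share(7), share(5)
assert reconstruct(add(xs, ys)) == 12
assert reconstruct(mult(xs, ys)) == 35
```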
#### B-2 ML Building Blocks
We introduce several building blocks required for private ML training,
implemented by existing MPC frameworks [71, 69, 13, 88]:
Secure Model Prediction. $\mathcal{F}_{\mathcal{M}\mathsf{pred}}$
functionality takes as input a trained model $\mathcal{M}$ and a feature
vector ${\mathbf{{{x}}}}$ in ${\llbracket\cdot\rrbracket}$-shared format.
$\mathcal{F}_{\mathcal{M}\mathsf{pred}}$ then computes prediction $\textsc{\bf
Preds}=\mathcal{M}({\mathbf{{{x}}}})$ in one-hot vector format and outputs
${\llbracket\cdot\rrbracket}$-shares of the same.
$\Pi_{\mathcal{M}\mathsf{pred}}$ denotes the protocol which securely realizes
functionality $\mathcal{F}_{\mathcal{M}\mathsf{pred}}$.
Secure Accuracy. $\mathcal{F}_{\mathsf{acc}}$ functionality takes as input two
equal length vectors ${\mathbf{{{y}}}}_{pred}$ and ${{\mathbf{{y}}}}$ in
${\llbracket\cdot\rrbracket}$-shared format. $\mathcal{F}_{\mathsf{acc}}$ then
computes the total number matches (element-wise) between the two vectors and
outputs $\frac{\\#~{}\text{matches}}{|{{\mathbf{{y}}}}|}$ in
${\llbracket\cdot\rrbracket}$-shared format. $\Pi_{\mathsf{acc}}$ denotes the
protocol which securely realizes this functionality.
#### B-3 Protocols
We propose two protocols to realize our SafeNet framework in the SOC setting.
The first protocol $\Pi_{\mathsf{train}}$ describes the SafeNet training phase
where given ${\llbracket\cdot\rrbracket}$-shares of dataset
${{D}_{k}^{\text{v}}}$ and model $\mathcal{M}_{k}$, with respect to each owner
$\mathsf{C}_{k}$, $\Pi_{\mathsf{train}}$ outputs
${\llbracket\cdot\rrbracket}$-shares of an ensemble $E$ of $m$ models and
vector ${\mathbf{{{b}}}}^{\mathsf{val}}$. The second protocol
$\Pi_{\mathsf{pred}}$ describes the prediction phase of SafeNet, which given
${\llbracket\cdot\rrbracket}$-shares of a client’s query predicts its output
label. The detailed description for each protocol is as follows:
SafeNet Training. We follow the notation from Algorithm 1. Our goal is for
training protocol $\Pi_{\mathsf{train}}$ given in Figure 8 to securely realize
functionality $\mathcal{F}_{\mathsf{pTrain}}$ (Figure 2), where the inputs to
$\mathcal{F}_{\mathsf{pTrain}}$ are ${\llbracket\cdot\rrbracket}$-shares of
${D}_{k}={{D}_{k}^{\text{v}}}$ and $a_{k}=\mathcal{M}_{k}$, and the
corresponding outputs are ${\llbracket\cdot\rrbracket}$-shares of $O=E$ and
${\mathbf{{{b}}}}^{\mathsf{val}}$. Given the inputs to $\Pi_{\mathsf{train}}$,
the servers first construct a common validation dataset
${\llbracket{{D}_{\text{val}}}\rrbracket}=\cup_{k=1}^{m}{\llbracket{{D}_{k}^{\text{v}}}\rrbracket}$
and an ensemble of models ${\llbracket
E\rrbracket}=\\{{\llbracket\mathcal{M}_{k}\rrbracket}\\}_{k=1}^{m}$. Then for
each model $\mathcal{M}_{k}\in E$, the servers compute the validation accuracy
${\llbracket{\text{AccVal}}_{k}\rrbracket}$. The output
${\llbracket{\text{AccVal}}_{k}\rrbracket}$ is compared with a pre-agreed
threshold $\mathsf{\phi}$ to obtain a ${\llbracket\cdot\rrbracket}$-sharing of
${\mathbf{{{b}}}}^{\mathsf{val}}_{k}$, where
${\mathbf{{{b}}}}^{\mathsf{val}}_{k}=1$ if
${\text{AccVal}}_{k}>\mathsf{\phi}$. After execution of $\Pi_{\mathsf{train}}$
protocol, servers obtain ${\llbracket\cdot\rrbracket}$-shares of ensemble $E$
and vector ${\mathbf{{{b}}}}^{\mathsf{val}}$.
Input: ${\llbracket\cdot\rrbracket}$-shares of each owner $\mathsf{C}_{k}$’s validation dataset ${{D}_{k}^{\text{v}}}$ and local model $\mathcal{M}_{k}$.

Protocol Steps: The servers perform the following:

– Construct ${\llbracket\cdot\rrbracket}$-shares of ensemble $E=\{\mathcal{M}_{k}\}_{k=1}^{m}$ and validation dataset ${{D}_{\text{val}}}=\cup_{k=1}^{m}{{D}_{k}^{\text{v}}}$.

– Execute $\Pi_{\mathsf{zvec}}$ with $m$ as the input and obtain ${\llbracket\cdot\rrbracket}$-shares of a vector ${\mathbf{{{b}}}}^{\mathsf{val}}$.

– For $k\in[1,m]$:

  – Execute $\Pi_{\mathcal{M}\mathsf{pred}}$ with inputs ${\llbracket\mathcal{M}_{k}\rrbracket}$ and ${\llbracket{{D}_{\text{val}}}\rrbracket}$ and obtain ${\llbracket\textsc{Preds}_{k}\rrbracket}$, where $\textsc{Preds}_{k}=\mathcal{M}_{k}({{D}_{\text{val}}})$.

  – Execute $\Pi_{\mathsf{acc}}$ with inputs ${\llbracket\textsc{Preds}_{k}\rrbracket}$ and ${\llbracket{{\mathbf{{y}}}}_{\scriptscriptstyle{{D}_{\text{val}}}}\rrbracket}$ and obtain ${\llbracket{\text{AccVal}}_{k}\rrbracket}$ as the output.

  – Locally subtract $\mathsf{\phi}$ from the ${\llbracket\cdot\rrbracket}$-shares of ${\text{AccVal}}_{k}$ to obtain ${\llbracket{\text{AccVal}}_{k}-\mathsf{\phi}\rrbracket}$.

  – Execute $\Pi_{\mathsf{comp}}$ with input ${\llbracket{\text{AccVal}}_{k}-\mathsf{\phi}\rrbracket}$ and obtain ${\llbracket b^{\prime}\rrbracket}$, where $b^{\prime}=1$ iff ${\text{AccVal}}_{k}>\mathsf{\phi}$. Set the $k^{\text{th}}$ position in ${\llbracket{\mathbf{{{b}}}}^{\mathsf{val}}\rrbracket}$ as ${\llbracket{\mathbf{{{b}}}}^{\mathsf{val}}_{k}\rrbracket}={\llbracket b^{\prime}\rrbracket}$.

Output: ${\llbracket\cdot\rrbracket}$-shares of ${\mathbf{{{b}}}}^{\mathsf{val}}$ and ensemble $E$.
Figure 8: SafeNet Training Protocol
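To make the data flow of Figure 8 concrete, here is a plaintext mock of the filtering stage (secret sharing and the $\Pi$ sub-protocols are elided; the function names are ours):

```python
def train_filter(ensemble, D_val, y_val, phi):
    """Return b_val, where b_val[k] = 1 iff model k clears threshold phi."""
    b_val = []
    for model in ensemble:
        preds = [model(x) for x in D_val]                             # Pi_Mpred
        acc = sum(p == y for p, y in zip(preds, y_val)) / len(y_val)  # Pi_acc
        b_val.append(1 if acc > phi else 0)                           # Pi_comp
    return b_val
```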
The security proof of $\Pi_{\mathsf{train}}$ protocol as stated in Theorem 2
in Section III-C is given in Appendix C.
Input: ${\llbracket\cdot\rrbracket}$-shares of vector ${\mathbf{{{b}}}}^{\mathsf{val}}$ and ensemble $E$ among the servers. The client ${\llbracket\cdot\rrbracket}$-shares its query ${\mathbf{{{x}}}}$ to the servers.

Protocol Steps: The servers perform the following:

– Execute $\Pi_{\mathsf{zvec}}$ with $L$ as the input, where $L$ denotes the number of distinct class labels, and obtain ${\llbracket\cdot\rrbracket}$-shares of ${\mathbf{{{z}}}}$.

– For each $\mathcal{M}_{k}\in E$:

  – Execute $\Pi_{\mathcal{M}\mathsf{pred}}$ with inputs ${\llbracket\mathcal{M}_{k}\rrbracket}$ and ${\llbracket{\mathbf{{{x}}}}\rrbracket}$; obtain ${\llbracket\textsc{Preds}\rrbracket}$, where $\textsc{Preds}=\mathcal{M}_{k}({\mathbf{{{x}}}})$.

  – Execute $\Pi_{\mathsf{mult}}$ to multiply ${\mathbf{{{b}}}}^{\mathsf{val}}_{k}$ with each element of vector Preds.

  – Locally add ${\llbracket{\mathbf{{{z}}}}\rrbracket}={\llbracket{\mathbf{{{z}}}}\rrbracket}+{\llbracket\textsc{Preds}\rrbracket}$ to update ${\mathbf{{{z}}}}$.

– Execute $\Pi_{\mathsf{amx}}$ with input ${\llbracket{\mathbf{{{z}}}}\rrbracket}$ and obtain ${\llbracket\textsc{op}\rrbracket}$ as the output.

Output: ${\llbracket\cdot\rrbracket}$-shares of op.
Figure 9: SafeNet Prediction Protocol
SafeNet Prediction. Functionality $\mathcal{F}_{\mathsf{pred}}$ takes as input
party id cid, ${\llbracket\cdot\rrbracket}$-shares of client query
${\mathbf{{{x}}}}$, vector ${\mathbf{{{b}}}}^{\mathsf{val}}$ and ensemble
$E=\\{{\llbracket\mathcal{M}_{k}\rrbracket}\\}_{k=1}^{m}$ and outputs a value
op, the predicted class label by ensemble $E$ on query ${\mathbf{{{x}}}}$.
Protocol $\Pi_{\mathsf{pred}}$ realizes $\mathcal{F}_{\mathsf{pred}}$ as
follows: Given ${\llbracket\cdot\rrbracket}$-shares of ${\mathbf{{{x}}}}$,
${\mathbf{{{b}}}}^{\mathsf{val}}$ and ensemble $E$, the servers initialize a
vector ${\mathbf{{{z}}}}$ of all zeros of size $L$. For each model
$\mathcal{M}_{k}$ in the ensemble $E$, the servers compute
${\llbracket\cdot\rrbracket}$-shares of the prediction ${\textsc{\bf
Preds}}=\mathcal{M}_{k}({\mathbf{{{x}}}})$ in one-hot format. The element
${\mathbf{{{b}}}}^{\mathsf{val}}_{k}$ in vector
${\mathbf{{{b}}}}^{\mathsf{val}}$ is multiplied to each element in vector
Preds. The ${\llbracket{\textsc{\bf Preds}}\rrbracket}$ vector is added to
${\llbracket{\mathbf{{{z}}}}\rrbracket}$ to update the model’s vote towards
the final prediction. If ${\mathbf{{{b}}}}^{\mathsf{val}}_{k}=0$, then after
multiplication vector Preds is a vector of zeros and does not contribute in
the voting process towards the final prediction. The servers then compute the
argmax of vector ${\llbracket{\mathbf{{{z}}}}\rrbracket}$ and receive output
${\llbracket\textsc{op}\rrbracket}$ from $\Pi_{\mathsf{amx}}$, where op
denotes the class label predicted by the ensemble. The appropriate
${\llbracket\cdot\rrbracket}$-shares of op are forwarded to the client for
reconstruction.
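In the clear, the voting logic of $\Pi_{\mathsf{pred}}$ amounts to the following sketch (again with secret sharing elided and names of our choosing):

```python
def predict(ensemble, b_val, x, L):
    z = [0] * L                                   # Pi_zvec
    for k, model in enumerate(ensemble):
        preds = [0] * L
        preds[model(x)] = 1                       # Pi_Mpred: one-hot prediction
        z = [zi + b_val[k] * pi for zi, pi in zip(z, preds)]  # Pi_mult + local add
    return max(range(L), key=z.__getitem__)       # Pi_amx

# Filtered-out models (b_val[k] = 0) contribute an all-zero vote.
```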
###### Theorem 7.
Protocol $\Pi_{\mathsf{pred}}$ is secure against adversary
$\mathcal{A}^{\text{p}}_{\text{soc}}$ who poisons $t$ out of $m$ data owners
and corrupts $T$ out of $N$ servers.
###### Proof.
The proof is given below in Appendix C. ∎
## Appendix C Security Proofs
For concise security proofs, we assume the adversary
$\mathcal{A}^{\text{p}}_{\text{soc}}$ performs a semi-honest corruption in the
SOC paradigm, but our proofs can also be extended to malicious adversaries in
the MPC. We prove that protocol $\Pi_{\mathsf{train}}$ is secure against an
adversary of type $\mathcal{A}^{\text{p}}_{\text{soc}}$. Towards this, we
first argue that protocol $\Pi_{\mathsf{train}}$ securely realizes the
standard ideal-world functionality $\mathcal{F}_{\mathsf{pTrain}}$. We use
simulation based security to prove our claim. Next, we argue that the ensemble
$E$ trained using $\Pi_{\mathsf{train}}$ protocol provides poisoning
robustness against $\mathcal{A}^{\text{p}}_{\text{soc}}$.
Theorem 2 (restated).
###### Proof.
Let $\mathcal{A}^{\text{p}}_{\text{soc}}$ be a real-world adversary that semi-
honestly corrupts $T$ out of $N$ servers at the beginning of the protocol
$\Pi_{\mathsf{train}}$. We now present the steps of the ideal-world adversary
(simulator) $\mathcal{S}_{f}$ for $\mathcal{A}^{\text{p}}_{\text{soc}}$. Note
that, in the semi-honest setting, $\mathcal{S}_{f}$ already possesses the
input of $\mathcal{A}^{\text{p}}_{\text{soc}}$ and the final output shares of
${\mathbf{{{b}}}}^{\mathsf{val}}$. $\mathcal{S}_{f}$ acts on behalf of the $N-T$
honest servers, sets their shares as random values in $\mathbb{Z}_{2^{\ell}}$
and simulates each step of $\Pi_{\mathsf{train}}$ protocol to the corrupt
servers as follows:
* –
No simulation is required to construct ${\llbracket\cdot\rrbracket}$-shares of
ensemble $E$ and validation dataset ${{D}_{\text{val}}}$ as it happens
locally.
* –
$\mathcal{S}_{f}$ simulates messages on behalf of honest servers as a part of
the protocol steps of $\Pi_{\mathsf{zvec}}$ with public value $m$ as the input
and eventually sends and receives appropriate
${\llbracket\cdot\rrbracket}$-shares of ${\mathbf{{{b}}}}^{\mathsf{val}}$ to
and from $\mathcal{A}^{\text{p}}_{\text{soc}}$.
* –
For $k\in[1,m]$:
* –
$\mathcal{S}_{f}$ simulates messages on behalf of honest servers, as a part of
the protocol steps of $\Pi_{\mathcal{M}\mathsf{pred}}$, with inputs to the
protocol as ${\llbracket\cdot\rrbracket}$-shares of $\mathcal{M}_{k}$ and
${{D}_{\text{val}}}$ and eventually sends and receives appropriate
${\llbracket\cdot\rrbracket}$-shares of $\textsc{PREDS}_{k}$ to and from
$\mathcal{A}^{\text{p}}_{\text{soc}}$.
* –
$\mathcal{S}_{f}$ simulates messages on behalf of honest servers, as a part of
the protocol steps of $\Pi_{\mathsf{acc}}$, with inputs to the protocol as
${\llbracket\cdot\rrbracket}$-shares of $\textsc{PREDS}_{k}$ and
${{\mathbf{{y}}}}_{{{D}_{\text{val}}}}$ and eventually sends and receives
appropriate ${\llbracket\cdot\rrbracket}$-shares of ${\text{AccVal}}_{k}$ to
and from $\mathcal{A}^{\text{p}}_{\text{soc}}$.
* –
No simulation is required for subtraction with threshold $\mathsf{\phi}$ as it
happens locally.
* –
$\mathcal{S}_{f}$ simulates messages on behalf of honest servers, as a part of
the protocol steps of $\Pi_{\mathsf{comp}}$, with inputs to the protocols as
${\llbracket\cdot\rrbracket}$-shares of ${\text{AccVal}}-\mathsf{\phi}$ and at
the end $\mathcal{S}_{f}$ instead sends the original shares of
${\mathbf{{{b}}}}^{\mathsf{val}}_{k}$ as shares of $b^{\prime}$ associated to
$\mathcal{A}^{\text{p}}_{\text{soc}}$.
* –
No simulation is required to assign
${\llbracket{\mathbf{{{b}}}}^{\mathsf{val}}_{k}\rrbracket}={\llbracket
b^{\prime}\rrbracket}$.
The proof now follows from the fact that the simulated view and the real-world
view of the adversary are computationally indistinguishable, which concludes
that $\Pi_{\mathsf{train}}$ securely realizes functionality
$\mathcal{F}_{\mathsf{pTrain}}$.
Now, given that the output of the $\Pi_{\mathsf{train}}$ protocol is an
ensemble $E$, we showed in the proof of Theorem 6 that $E$ correctly
classifies a sample with probability at least $p_{c}$. As a result, the
trained ensemble also provides poisoning robustness against
$\mathcal{A}^{\text{p}}_{\text{soc}}$.
∎
We use a similar argument to show that protocol $\Pi_{\mathsf{pred}}$ is
secure against adversary $\mathcal{A}^{\text{p}}_{\text{soc}}$.
Theorem 7 (restated).
###### Proof.
Let $\mathcal{A}^{\text{p}}_{\text{soc}}$ be a real-world adversary that
poisons $t$ out of $m$ owners and semi-honestly corrupts $T$ out of $N$
servers at the beginning of the $\Pi_{\mathsf{pred}}$ protocol. We present the
steps of the ideal-world adversary (simulator) $\mathcal{S}_{f}$ for
$\mathcal{A}^{\text{p}}_{\text{soc}}$. $\mathcal{S}_{f}$, on behalf of the
honest servers, sets their shares to random values in $\mathbb{Z}_{2^{\ell}}$
and simulates each step of $\Pi_{\mathsf{pred}}$ protocol to the corrupt
servers as follows:
* –
$\mathcal{S}_{f}$ simulates messages on behalf of honest servers as a part of
the protocol steps of $\Pi_{\mathsf{zvec}}$ with public value $L$ as the input
and eventually sends and receives appropriate
${\llbracket\cdot\rrbracket}$-shares of ${\mathbf{{{z}}}}$ to and from
$\mathcal{A}^{\text{p}}_{\text{soc}}$.
* –
For $k\in[1,m^{\prime}]$:
* –
$\mathcal{S}_{f}$ simulates messages on behalf of honest servers, as a part of
the protocol steps of $\Pi_{\mathcal{M}\mathsf{pred}}$, which takes input as
${\llbracket\cdot\rrbracket}$-shares of $\mathcal{M}_{k}$ and
${\mathbf{{{x}}}}$. $\mathcal{S}_{f}$ eventually sends and receives
appropriate ${\llbracket\cdot\rrbracket}$-shares of ${\bf Preds}$ to and from
$\mathcal{A}^{\text{p}}_{\text{soc}}$.
* –
For every multiplication of
${\llbracket{\mathbf{{{b}}}}^{\mathsf{val}}_{k}\rrbracket}$ with respect to
each element in ${\bf Preds}$, $\mathcal{S}_{f}$ simulates messages on behalf
of honest servers, as a part of the protocol steps of $\Pi_{\mathsf{mult}}$,
which takes input as ${\llbracket\cdot\rrbracket}$-shares of ${\bf Preds}_{j}$
and ${\mathbf{{{b}}}}^{\mathsf{val}}_{k}$. $\mathcal{S}_{f}$ eventually sends
and receives appropriate ${\llbracket\cdot\rrbracket}$-shares of
${\mathbf{{{b}}}}^{\mathsf{val}}_{k}\times{\bf Preds}_{j}$ to and from
$\mathcal{A}^{\text{p}}_{\text{soc}}$.
* –
No simulation is required to update ${\llbracket{\mathbf{{{z}}}}\rrbracket}$
as addition happens locally.
* –
$\mathcal{S}_{f}$ simulates messages on behalf of honest servers, as a part of
the protocol steps of $\Pi_{\mathsf{amx}}$, which takes input as
${\llbracket\cdot\rrbracket}$-shares of ${\mathbf{{{z}}}}$. At the end
$\mathcal{S}_{f}$ instead forwards the original
${\llbracket\cdot\rrbracket}$-shares of op associated to
$\mathcal{A}^{\text{p}}_{\text{soc}}$.
The proof now follows from the fact that the simulated view and the real-world
view of the adversary are computationally indistinguishable. The
poisoning-robustness argument follows from the fact that the ensemble $E$ used
for prediction was trained using protocol $\Pi_{\mathsf{train}}$, which was
shown to be secure against $\mathcal{A}^{\text{p}}_{\text{soc}}$ in Theorem 2. ∎
This concludes the security proofs of our training and prediction protocols.
## Appendix D SafeNet Extensions
### D-A Inference phase in Transfer Learning Setting
We provide a modified version of SafeNet’s inference algorithm in the transfer
learning setting to improve SafeNet’s running time and communication
complexity. Algorithm 3 details SafeNet’s prediction phase in this setting.
Algorithm 3 SafeNet Inference for Transfer Learning Setting
Input: Secret-shares of backbone model $\mathcal{M}_{B}$, ensemble of $m$
fine-tuned models $E=\\{\mathcal{M}_{1},\ldots,\mathcal{M}_{m}\\}$, vector
${\mathbf{{{b}}}}^{\mathsf{val}}$ and client query ${\mathbf{{{x}}}}$.
// MPC computation in secret-shared format
Construct vector ${\mathbf{{{z}}}}$ of all zeros of size $L$, where $L$
denotes the number of distinct class labels.
– Run a forward pass on $\mathcal{M}_{B}$ with input ${\mathbf{{{x}}}}$ up to
its last $l$ layers, and let ${\mathbf{{{p}}}}$ denote the output vector from
that layer.
– For $k\in[1,m]:$
* -
Run forward pass on the last $l$ layers of $\mathcal{M}_{k}$ with input as
${\mathbf{{{p}}}}$. Let the output of the computation be Preds, which is one-
hot encoding of the predicted label.
* -
Multiply ${\mathbf{{{b}}}}^{\mathsf{val}}_{k}$ to each element of Preds.
* -
Add ${\mathbf{{{z}}}}={\mathbf{{{z}}}}+\textsc{\bf Preds}$.
– Run argmax with input as ${\mathbf{{{z}}}}$ and obtain op as the output.
return op
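A plaintext sketch of Algorithm 3 (names ours) shows why this extension is cheap: the expensive backbone forward pass runs once, and only the $m$ small fine-tuned heads are evaluated per model:

```python
def transfer_predict(backbone, heads, b_val, x, L):
    p = backbone(x)             # shared backbone forward pass, executed once
    z = [0] * L
    for k, head in enumerate(heads):
        preds = [0] * L
        preds[head(p)] = 1      # cheap forward pass over the last l layers
        z = [zi + b_val[k] * pi for zi, pi in zip(z, preds)]
    return max(range(L), key=z.__getitem__)
```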
### D-B Training with Computationally Restricted Owners
In this section we provide a modified version of SafeNet’s training algorithm
to accommodate the case where a subset of data owners is computationally
restricted, i.e., they cannot train their models locally. Algorithm 4 provides
the details of SafeNet’s training steps below.
Algorithm 4 SafeNet Training with Computationally Restricted Owners
Input: $m$ total data owners of which $m_{r}$ subset of owners are
computationally restricted, each owner $\mathsf{C}_{k}$’s dataset ${D}_{k}$.
// Computationally Restricted Owner’s local computation in plaintext
– For $k\in[1,m_{r}]:$
* -
Separate out ${{D}_{k}^{\text{v}}}$ from ${D}_{k}$.
* -
Secret-share cross-validation dataset ${{D}_{k}^{\text{v}}}$ and training
dataset ${D}_{k}\setminus{{D}_{k}^{\text{v}}}$ to servers.
// Computationally Unrestricted Owner’s local computation in plaintext
– For $k\in[m_{r}+1,m]:$
* -
Separate out ${{D}_{k}^{\text{v}}}$ from ${D}_{k}$. Train $\mathcal{M}_{k}$ on
${D}_{k}\setminus{{D}_{k}^{\text{v}}}$.
* -
Secret-share ${{D}_{k}^{\text{v}}}$ and $\mathcal{M}_{k}$ to servers.
// MPC computation in secret-shared format
1. For $k\in[1,m_{r}]:$
* -
Train $\mathcal{M}_{k}$ on ${D}_{k}\setminus{{D}_{k}^{\text{v}}}$.
2. Construct a common validation dataset ${{D}_{\text{val}}}=\cup_{i=1}^{m}{{D}_{i}^{\text{v}}}$ and collect the ensemble of models $E=\{\mathcal{M}_{i}\}_{i=1}^{m}$.
3. Initialize a vector ${\mathbf{{{b}}}}^{\mathsf{val}}$ of zeros and of size $m$.
4. For $k\in[1,m]:$
* -
${\text{AccVal}}_{k}=Accuracy(\mathcal{M}_{k},{{D}_{\text{val}}})$
* -
If ${\text{AccVal}}_{k}>\mathsf{\phi}$:
* –
Set ${\mathbf{{{b}}}}^{\mathsf{val}}_{k}=1$
return $E$ and ${\mathbf{{{b}}}}^{\mathsf{val}}$
## Appendix E Additional Experiments
### E-A Evaluation of SafeNet Extensions
##### Varying Data Distributions
Here, we evaluate the performance of SafeNet by varying the concentration
parameter $\alpha$ to manipulate the degree of data similarity among the
owners. The experiments are performed with the same neural network
architecture from Section IV-G on the Fashion dataset. Figure 10 gives a
comprehensive view of the variation in test accuracy and attack success rate
for backdoor and targeted attacks over several values of $\alpha$.
[Plots: (a) SafeNet test accuracy (in %) versus $\alpha$; (b)–(d) worst-case success rate (in %) versus the number of poisoned data owners for the TGT-Top, TGT-Foot, and Backdoor attacks, each for $\alpha\in\{0.1,1,10,100,1000\}$.]
Figure 10: Test Accuracy and Worst-case Adversarial Success in a three layer
neural network model trained on Fashion dataset using SafeNet for varying data
distributions. Parameter $\alpha$ dictates the similarity of distributions
between the owners. Higher values of $\alpha$ denote greater similarity in
data distributions among the owners and result in increased SafeNet
robustness.
We observe that as $\alpha$ decreases, i.e., the underlying data distribution
of the owners becomes more non-iid, the test accuracy of SafeNet starts to
drop. This is expected as there will be less agreement between the different
models, and the majority vote will have a larger chance of errors. In such
cases it is easier for the adversary to launch an attack, as there is rarely
any agreement among the models in the ensemble, and the final output is swayed
towards the target label of the attacker’s choice. Figure 10 shows that for
both targeted and backdoor attacks, SafeNet holds up well until $\alpha$
reaches extremely small values ($\alpha=0.1$), at which point we observe its
robustness break down. However, the design of SafeNet allows us to detect
differences in owners’ distributions at an early stage of our framework. For
instance, we experiment with $\alpha=0.1$ and observe that the average AccVal
accuracy of the models is $17\%$. Such low accuracies for most of the models
in the ensemble indicate non-identical distributions, and we recommend not
using SafeNet in such cases.
##### Low Resource Users
We instantiate our Fashion dataset setup in the 3PC setting and assume $2$ out
of $10$ data owners are computationally restricted. We observe SafeNet still
runs $1.82\times$ faster and requires $3.53\times$ less communication compared
to the existing PPML framework, while retaining its robustness against
poisoning and privacy attacks.
| MPC Setting | Framework | Training Time (s) | Communication (GB) | Backdoor Test Accuracy | Backdoor Success Rate | Targeted Test Accuracy | Targeted Success Rate-Top | Targeted Success Rate-Foot |
|---|---|---|---|---|---|---|---|---|
| 3PC [4] Semi-Honest | PPML | n$\times$243.55 | n$\times$55.68 | $89.14\%$ | $100\%$ | $87.34\%$ | $83\%$ | $90\%$ |
| 3PC [4] Semi-Honest | SafeNet | $10.03$ | $2.05$ | $88.68\%$ | $4\%$ | $88.65\%$ | $1\%$ | $10\%$ |
| 4PC [28] Malicious | PPML | n$\times$588.42 | n$\times$105.85 | $89.14\%$ | $100\%$ | $87.22\%$ | $83\%$ | $90\%$ |
| 4PC [28] Malicious | SafeNet | $23.39$ | $3.78$ | $88.65\%$ | $4\%$ | $88.65\%$ | $1\%$ | $10\%$ |
TABLE IV: Training time (in seconds) and Communication (in GB) over a LAN
network for traditional PPML and SafeNet framework training a multiclass
logistic regression on MNIST. n denotes the number of epochs in the PPML
framework. The time and communication reported for SafeNet are for end-to-end
execution. Test Accuracy and Success Rate are given for a single poisoned
owner.
### E-B Logistic Regression, Multiclass Classification
We use the same strategies for the Backdoor and Targeted attacks on the MNIST
dataset. For BadNets, we select the initial class $y_{s}=4$ and the target
label $y_{t}=9$, and use the same $y_{t}=9$ for the targeted attack. Table IV
provides a detailed analysis of the training time, communication, test
accuracy, and success rate for both frameworks, in the presence of a single
poisoned owner. The worst-case adversarial success for SafeNet is shown in
Figure 11. The slow rise in the success rate of the adversary across multiple
attacks shows that the robust accuracy property of our framework translates
smoothly to the multi-class classification setting.
[Figure 11 panel: ideal success rate vs. number of poisoned data owners for
SafeNet-TGT-Top, SafeNet-TGT-Foot, and SafeNet-Backdoor.]
Figure 11: Worst-case adversarial success of multi-class logistic regression
on MNIST in the SafeNet framework for backdoor and targeted attacks. The
adversary can change the set of $c$ poisoned owners per sample. SafeNet
achieves certified robustness up to 9 poisoned owners out of 20 against
backdoor and TGT-TOP attacks. The TGT-Foot attack targeting low-confidence
samples has slightly higher success, as expected.
### E-C Evaluation on Deep Learning Models
Experiments on Fashion Dataset. We present results on one and two layer deep
neural networks trained on the Fashion dataset. We perform the same set of
backdoor and targeted attacks as described in Section IV. Tables V and VI
provide detailed analysis of the training time, communication, test accuracy,
and success rate for traditional PPML and SafeNet frameworks. We observe
similar improvements, where for instance in the 4PC setting, SafeNet has
$42\times$ and $43\times$ improvement in training time and communication
complexity over the PPML framework, for $n=10$ epochs for a two hidden layer
neural network. Figure 12 shows the worst-case attack success in SafeNet
(where the attacker can choose the subset of corrupted owners per sample) and
the results are similar to Figure 5.
[Figure 12 panels (1-Layer NN, 2-Layer NN): ideal success rate vs. number of
poisoned data owners for SafeNet-TGT-Top, SafeNet-TGT-Random, SafeNet-TGT-Foot,
and SafeNet-Backdoor.]
Figure 12: Worst-case adversarial success of one and two layer neural networks
on the Fashion dataset in the SafeNet framework for a varying number of
poisoned owners.
| MPC Setting | No. Hidden Layers | Framework | Training Time (s) | Communication (GB) |
|---|---|---|---|---|
| 3PC [4] Semi-Honest | 1 | PPML | n$\times$382.34 | n$\times$96.37 |
| 3PC [4] Semi-Honest | 1 | SafeNet | $65.71$ | $14.58$ |
| 3PC [4] Semi-Honest | 2 | PPML | n$\times$474.66 | n$\times$125.58 |
| 3PC [4] Semi-Honest | 2 | SafeNet | $108.12$ | $27.98$ |
| 4PC [28] Malicious | 1 | PPML | n$\times$869.12 | n$\times$174.12 |
| 4PC [28] Malicious | 1 | SafeNet | $152.68$ | $26.89$ |
| 4PC [28] Malicious | 2 | PPML | n$\times$1099.06 | n$\times$227.23 |
| 4PC [28] Malicious | 2 | SafeNet | $258.72$ | $51.66$ |
TABLE V: Training Time (in seconds) and Communication (in GB) of PPML and
SafeNet frameworks for one and two layer neural networks on the Fashion
dataset, where n denotes the number of epochs. The time and communication
reported for the SafeNet framework are for end-to-end execution.
| MPC Setting | No. Hidden Layers | Framework | Test Accuracy | Backdoor Success Rate | Targeted Success Rate-Top | Targeted Success Rate-Foot |
|---|---|---|---|---|---|---|
| 3PC [4] Semi-Honest | 1 | PPML | $82.40\%$ | $100\%$ | $100\%$ | $100\%$ |
| 3PC [4] Semi-Honest | 1 | SafeNet | $84.45\%$ | $0\%$ | $0\%$ | $38\%$ |
| 3PC [4] Semi-Honest | 2 | PPML | $83.92\%$ | $100\%$ | $100\%$ | $100\%$ |
| 3PC [4] Semi-Honest | 2 | SafeNet | $84.93\%$ | $0\%$ | $0\%$ | $46\%$ |
| 4PC [28] Malicious | 1 | PPML | $82.82\%$ | $100\%$ | $100\%$ | $100\%$ |
| 4PC [28] Malicious | 1 | SafeNet | $84.44\%$ | $0\%$ | $0\%$ | $38\%$ |
| 4PC [28] Malicious | 2 | PPML | $83.80\%$ | $100\%$ | $100\%$ | $100\%$ |
| 4PC [28] Malicious | 2 | SafeNet | $84.86\%$ | $0\%$ | $0\%$ | $46\%$ |
TABLE VI: Test Accuracy and Success Rate of PPML and SafeNet frameworks for
one and two layer neural networks on the Fashion dataset, in the presence of
a single poisoned owner.
| MPC Setting | Framework | Training Time (s) | Communication (GB) |
|---|---|---|---|
| 3PC Semi-Honest [4] | PPML | n$\times$8.72 | n$\times$0.87 |
| 3PC Semi-Honest [4] | SafeNet | $5.79$ | $1.32$ |
| 3PC Malicious [28] | PPML | n$\times$223.15 | n$\times$16.49 |
| 3PC Malicious [28] | SafeNet | $179.58$ | $19.29$ |
| 4PC Malicious [28] | PPML | n$\times$18.54 | n$\times$1.69 |
| 4PC Malicious [28] | SafeNet | $14.67$ | $2.53$ |
TABLE VII: Training Time (in seconds) and Communication (in GB) for training a
single layer neural network model on the Adult dataset. n denotes the number
of epochs required for training the neural network in the PPML framework.
The values reported for SafeNet are for its total execution.
[Figure 13 panels: (a) Backdoor and (b) Targeted success rate vs. number of
poisoned data owners for the PPML and SafeNet frameworks; (c) ideal success
rate of SafeNet-Targeted and SafeNet-Backdoor under the worst-case adversary.]
Figure 13: Attack Success Rate of a Neural Network in the PPML and SafeNet
frameworks, trained over Adult dataset, for varying corrupt owners launching
Backdoor (a) and Targeted (b) attacks. Plot (c) gives the worst-case
adversarial success of SafeNet when a different set of poisoned owners is
allowed per sample.
Experiments on Adult Dataset. We use a similar attack strategy as for the
logistic regression model in Section IV-E. We observe that no instance is
present with true label $y=1$ for feature capital-loss $=1$. Consequently, we
choose a set of $k=100$ target samples $\{x^{t}_{i}\}_{i=1}^{k}$ with true
label $y_{s}=0$, and create backdoored samples
$\{Pert(x^{t}_{i}),y_{t}=1\}_{i=1}^{k}$, where the $Pert(\cdot)$ function sets
Genons, Double Covers and Fault-tolerant Clifford Gates
Simon Burton, Elijah Durso-Sabina, Natalie C. Brown
A great deal of work has been done developing quantum codes with varying overhead and connectivity constraints.
However, given such an abundance of codes,
there is a surprising shortage of fault-tolerant logical gates supported therein.
We define a construction that,
given an input $[[n,k,d]]$ code,
yields a $[[2n,2k,\ge d]]$ symplectic double code
with naturally occurring fault-tolerant logical Clifford gates.
As applied to 2-dimensional
codes with genons (twists)
and domain walls, we find the symplectic double is genon free,
and of possibly higher genus. Braiding of genons on the original
code becomes Dehn twists on the symplectic double.
Such topological operations are particularly suited for
architectures with all-to-all connectivity, and
we demonstrate this experimentally on Quantinuum's H1-1
trapped-ion quantum computer.
§ INTRODUCTION
Given that you have protected your fragile quantum
information from the world, how do you then gain
access to and thereby manipulate this quantum information?
This is the fundamental trade-off in the theory
of quantum error correction.
While the primary goal is to shield quantum data from
errors, the challenge lies in devising methods to interact
with and utilize this protected information for computational
tasks. Achieving this balance between protection and
accessibility is essential for realizing the full potential
of quantum error correction in practical applications.
One of the best techniques for circumventing this problem
is to apply quantum gates
transversally between copies of the same
quantum code, see [18] 5.3.
We view this copying process as analogous to
the concept of covering spaces.
The base code $C$ is covered by the total code $C\oplus C$.
(Here we are using additive $\oplus$ notation.)
Every qubit $j$ in $C$ is covered by two qubits in $C\oplus C$.
These two qubits are called the fiber over $j$.
Transversal gates
operate separately on each fiber in the cover.
We call these gates fiber transversal. See fig:double-cover.
Making copies like this, we only get trivial
covering spaces: the cartesian
product of a particular fiber $\mathbf{2}$ with the code $C$,
giving $\mathbf{2}\times C = C \oplus C$.
In this work we construct a
double cover called the symplectic double of $C$
and denoted $\Double(C)$.
This cover is trivial when $C$ is a CSS code but becomes
non-trivial otherwise.
A key idea is functoriality:
logical Clifford operations on the base code
can be lifted to logical operations on the total code.
In the symplectic double we often get an interesting set of fault-tolerant
Clifford gates.
When the base Clifford gate is a product of single qubit Cliffords,
the lifted gate is fiber transversal, and
these are fault-tolerant for the same
reason as in the trivial case: locally the cover is trivial.
Of particular interest to us is a family of topological codes defined
on 3- and 4-valent graphs inscribed on compact surfaces
of arbitrary genus. We call these genon codes.
The graph determines the code up to
local Clifford equivalence, with twists or genons associated
to the 3-valent vertices.
This family of topological codes
includes surface codes, toric codes, hyperbolic surface codes,
$XZZX$ codes, and the $[[5,1,3]]$ code.
See fig:genon-codes.
Applied to genon codes, the symplectic double corresponds to
a topological double cover, which is a discrete (combinatorial)
version of branched double covers of Riemann surfaces.
Counting the genus of the base space and the double cover space
fixes the number of genons in the base, and this is the origin of
the term genon. These are also called
twists, dislocations or defects in the literature
[5, 4, 22, 10, 37, 6].
\begin{tikzcd}
F \arrow[r, rightarrowtail] & E \arrow[d, twoheadrightarrow] \\
& B
\end{tikzcd}
The fiber $F$ is included into the total space $E$
which projects down onto the base space $B$.
A trivial double cover is just two copies of the base space, or
the cartesian product of the base with the fiber.
A twisted double cover: locally this looks like a trivial
double cover, globally there is a twist.
A schematic depiction of the symplectic double as
a covering space.
Given any CSS code $C$,
the two copies $C\oplus C$
give a trivial double cover of $C$, and we have a logical
transversal CX gate applied to the fibers of the cover.
When $C$ is not a CSS code we find there is a twisted
double cover $\Double(C)$
that also supports Clifford gates applied to the fibers.
A graph $\Gamma$ on a torus with
16 vertices in red, 32 edges in white and 16 faces in grey.
The faces of $\Gamma$ are bicolourable:
we paint each face with one of two colours blue or green
such that neighbouring faces have different colours.
We associate a qubit with each vertex and a stabilizer with
each face. Using the bicolouring to determine $X$ or $Z$ type stabilizers
we recover the usual (rotated) toric code.
The graph $\Gamma$ has an odd number of faces
in the vertical direction which prohibits bicolouring the faces.
We can still define a quantum code as above
but some stabilizers are both $X$ and $Z$ type and
this is a non-CSS code.
We insert a domain wall
separating or resolving the $X$ and $Z$ sectors appropriately.
This graph $\Gamma$ has two trivalent vertices.
We place $XYZ$ around trivalent vertices.
Here the domain wall connects the two trivalent vertices.
Constructing a rotated toric code on a
graph with bicolourable faces (i)-(iii).
The frustrated bicolourability of $\Gamma$ is
related to domain walls (iv)-(vi).
Trivalent vertices, or genons,
are another reason bicolourability is frustrated (vii)-(ix).
§.§ Outline
We discuss the underlying theory of two-dimensional
topological order
with $D(\Integer_2)$ anyon excitations in <ref>.
This calculus is divorced from the microscopic details of
the system, so we don't need to take into account any qubit or
stabilizer structure, and can instead focus on the large-scale topological
properties of the system.
Crucially this phase has one non-trivial symmetry which we denote
using one-dimensional domain walls.
The endpoints of domain walls are the genons, and
by braiding these we can perform logical Clifford gates.
The symmetry exhibited by domain walls has another
incarnation as precisely
the data needed to construct a double cover
of our two-dimensional manifold, or Riemann surface.
Topological operations in the base space, such as braiding
genons, will then
lift functorially to topological operations in the total space,
such as Dehn twists.
We review background on quantum stabilizer codes and notation in <ref>
and then introduce the symplectic double construction in <ref>,
and show how logical Clifford gates on the base code
lift functorially
to logical Clifford gates on the total code <ref>,
as well as their fault-tolerance.
These lifted Clifford gates lie in the phase-free $ZX$-calculus.
We introduce our formalism for topological codes with genons in
<ref>. These are called genon codes,
and come with a theory of logical string operators which is invariant under
local Clifford operations <ref>, as well as
rules for decoration by domain walls <ref>.
When a genon code has no unnecessary $Y$ operators
the symplectic double will also be a genon code <ref>.
In <ref> we go through a series of example
genon codes and symplectic doubles.
These have interesting Clifford gates,
which we call genon protocols.
The smallest interesting example is a $[[4,1,2]]$ genon code
with 4 genons, whose symplectic double is a $[[8,2,2]]$ toric code.
We show how braiding these four genons gives a protocol for
implementing the single qubit Clifford group, as well as
Dehn twists on the $[[8,2,2]]$ toric code.
We experimentally demonstrate three different protocols
on Quantinuum's H1-1 trapped-ion quantum computer, including
genon braiding on the $[[4,2,2]]$ code, Dehn
twists on the $[[8,2,2]]$ code and a lifted Clifford gate on the
$[[10,2,3]]$ code <ref>.
Such experiments are a “proof-of-principle,”
demonstrating that
the gates arising from the procedures can be realized on modern quantum computers.
We conclude in <ref>.
We aim to use a consistent colour scheme as an
aid to recognizing
the same concept as it appears in seemingly different guises.
Qubits are red dots, $X$-type string operators are blue,
and $Z$-type string operators are green.
This blue/green
colour scheme also applies to $ZX$-calculus.
The symmetry that swaps blue and green is coloured yellow
and occurs both as domain walls and the Hadamard gate.
Finally, there are several Claims in this work which can be taken to
be either unproven theorems, or conjectures.
§.§ Related work
This work necessarily treads a well-worn path, and
we cite some of the work to be found on this path here.
* This work is a continuation of
the first author's work on $ZX$-dualities and folding;
the “fold-transversal” logical gates there are
examples of what is here called fiber transversal.
* The symplectic double construction also appears in [29],
where it is applied to subsystem codes.
* Particular examples of genon codes without any genons are
the $XZZX$ code [7] and the $[[5,1,3]]$ code.
* Surface codes are an
example of genon codes on a sphere:
in [19] they consider modifying such face-bicolourable
4-valent graphs to support 3-valent dislocations,
as well as qudit generalizations.
* Another treatment of surface codes and twists
is found in [6], where the focus is on
2+1 dimension circuit construction.
* The family of genon codes defined in this work
overlaps with, and is inspired by, the graph-based formalism of
Sarkar-Yoder [33].
They were the first to identify the well-known
$[[5,1,3]]$ code as a member of a family of topological codes defined on a torus.
In the Sarkar-Yoder formalism they take genons (branch points)
to occur precisely at the trivalent (or odd-valent)
vertices, and so
obtain a deficiency in the vertex count of the double cover (see proof of Thm. 4.9).
In this work we take genons (branch points) to occur on faces
near to the trivalent vertices, giving exactly twice the number
of vertices in the double cover (and a deficiency in the doubled faces).
This works better for our purposes as we then find a connection with the
symplectic double, from which our other code constructions follow.
The Sarkar-Yoder formalism takes the branch cuts between genons
to consist of paths along edges of the graph with endpoints at the
trivalent vertices. These paths are called defect lines
and they show how these relate to a bicolouring (or checkerboard colouring)
of the faces.
In this work we take branch cuts to be paths perpendicular to the edges,
which then give the domain walls for the underlying topological order.
The Sarkar-Yoder formalism also contains a doubling construction in 4.3,
that reproduces our symplectic double when there are no genons present.
For example, the Sarkar-Yoder double of the $[[5,1,3]]$ code is the $[[10,2,3]]$
toric code.
* A description of Dehn twists on topological
codes appears in [9], 4.2.
* In [28] Appendix D, they
describe a protocol for instantaneous Dehn twists involving
permutations of qubits followed by
Pachner moves.
These Pachner moves are implemented
using constant depth CX circuits and are
designed to re-triangulate the code lattice after performing
the qubit permutation.
Their work is developed further in [38] where
more protocols for performing braids of defects (genons) and
Dehn twists are given.
* Further discussion on Clifford gates on topological codes
is in [10, 27].
§ BACKGROUND
§.§ $D(\Integer_2)$ topological order
[Figure (fig:strings): (i) anyons at the endpoints of logical operators;
(ii) contractible loops have no effect on the encoded state;
(iii) every green-blue crossing introduces a factor of $-1$ in the
commutation relation;
(iv) on a torus we have two pairs of anti-commuting logical operators,
encoding two qubits;
(v) the $H$ domain wall exchanges $e$ and $m$;
(vi) a contractible loop around a genon;
(vii) a non-contractible domain wall on a torus, encoding a single qubit;
(viii) logical $\bar{X}$; (ix) logical $\bar{Z}$; (x) logical $\bar{Y}$.
String operators are coloured blue and green,
which are $X$- and $Z$-type respectively.
The $H$ domain wall is a yellow string, whose endpoints are called genons.
(viii)-(x): a sphere supporting four genons encodes one logical qubit.]
In this section, we briefly discuss the theory of $D(\Integer_2)$
topological order, describing anyon statistics through
the use of domain walls, defects and genons.
For a more in depth and accessible introduction we recommend
[10], as well as
[3] VI and [12].
To start, consider an abelian anyon model with four anyon types:
vacuum, electric, magnetic and electromagnetic anyons.
We denote these $\iota, e, m, \varepsilon$ respectively (see fig:strings).
Fusion rules describe how pairs of anyons combine to
form other anyons.
These rules are as follows:
\begin{align*}
\iota \times a &= a \times \iota = a \quad \text{for all anyon labels } a \\
e \times e &=
m \times m = \varepsilon \times \varepsilon = \iota\\
e \times m &= m\times e = \varepsilon \\
e \times \varepsilon &= \varepsilon\times e = m \\
m \times \varepsilon &= \varepsilon\times m = e \\
\end{align*}
We can describe the path of an anyon on a topological
surface by a string operator whose end points are the anyons.
When these string operators are closed, they are the
logical operators of the topological code.
Here we adopt the convention that blue strings connecting
two $e$ anyons are $X$-type operators and green strings
connecting two $m$ anyons are $Z$-type operators.
These strings can cross, and annihilate endpoints
following the fusion rules (see Fig. <ref>).
This topological order supports two automorphisms: the
trivial automorphism and the automorphism that swaps
$e$ and $m$. These automorphisms occur when anyons cross
a boundary, or domain wall, that separates regions.
The trivial automorphism
is realized by anyons crossing a trivial domain wall,
while the $e-m$ swapping automorphism occurs when anyons
cross a non-trivial domain wall.
The endpoints of these non-trivial domain walls we call genons.
Because $m\times m=\iota$ and $e\times e=\iota$
the associated string operators are self-inverse.
In other words, two copies of the same colour string
can be erased:
\igcs{topo_XX_0}
\cong
\igcs{topo_XX_1}
\cong
\igcs{topo_XX_2}
and similarly for $Z$-type string operators.
The $H$ domain wall exchanges $e \leftrightarrow m$
and so is also self-inverse but for a different reason:
\igcs{topo_HH_0}
\cong
\igcs{topo_HH_1}
\cong
\igcs{topo_HH_2}
Contractible loops around genons
\igcs{topo_hh}
\cong
\igcs{topo_hh_s1}
\cong
\igcs{topo_hh_1s}
imply the following equations
\begin{align*}
\cong
\igcs{topo_hh_1}
\cong
\igcs{topo_hh_2} \\
\cong
\cong
\igcs{topo_hh_4}
\cong
\igcs{topo_hh_5}
\end{align*}
We see from this that performing a braid between two genons
has the same result clockwise versus anti-clockwise:
\begin{align*}
\igcs{topo_hh_12_cw}\mapsto
\\
& \ \ \Hsp\shortparallel
\\
\igcs{topo_hh_12_ccw}\mapsto
\end{align*}
Therefore we can refer to a braid of genons by the
underlying permutation of the genon labels.
Connecting these concepts to the language of the
stabilizer formalism (see <ref>),
we record the following dictionary:
$D(\Integer_2)$ topological order | Quantum stabilizer codes
$X/Z$-type string operator | $X/Z$-type logical operator
contractible string operator | stabilizer
$e/m$ anyon | frustrated $X/Z$-type stabilizer
§.§.§ Example in genus zero
So far we have only discussed local rules for
anyon calculations, as depicted by the dashed line
surrounding grey regions. In this section we consider
the effect that the global topology has.
A sphere with four genons (two domain walls) encodes
one logical qubit, see fig:strings.
By topological deformation, we braid these genons.
Here we number the genons $1,2,3,4$ and see the action
on the logical operators as we perform these braids.
The first gate we try is given by the
permutation operator $\sigma = (1,4,3,2)$.
This swaps genons 2 and 4, and we show a clockwise braid of these two:
\begin{align*}
\bar{X} = \igcs{topo_1234_X}
&\mapsto
\igcs{topo_1432_X}
\cong
\igcs{topo_1432_Z} = \bar{Z},
\\
\bar{Z} = \igcs{topo_1234_Z}
&\mapsto
\igcs{topo_1432_Z1}
\cong
\igcs{topo_1432_Z2}
\\
&\cong
\igcs{topo_1432_Z3}
\cong
\igcs{topo_1432_Z4}
= \bar{X}.
\end{align*}
And therefore this braid implements a logical $H$ gate.
The next permutation we examine is $\sigma = (1,3,2,4)$
which swaps genons 2 and 3.
\begin{align*}
\bar{X} = \igcs{topo_1234_X}
&\mapsto
\igcs{topo_1324_X}
\cong
\igcs{topo_1324_2}
\cong \bar{Y},
\\
\bar{Z} = \igcs{topo_1234_Z}
&\mapsto
\igcs{topo_1324_Z1}
\cong
\igcs{topo_1324_Z} = \bar{Z}.
\end{align*}
And this gives a logical $S$ gate.
Finally we note that the two permutations
$(2,1,4,3)$ and $(3,4,1,2)$ leave the logical
operators invariant.
This is because we are working on a sphere and can
drag operators around the back of the sphere:
\begin{align*}
\bar{X} = \igcs{topo_m4_X1} &\cong
\igcs{topo_m4_X2}, \\
\bar{Z} = \igcs{topo_m4_Z1} &\cong
\igcs{topo_m4_Z2} \cong
\igcs{topo_m4_Z3}.
\end{align*}
We will show in <ref> below a minimal implementation of
these gates in a four qubit quantum code.
§.§.§ Dehn twists
[Figure (fig:dehn): a Dehn twist on a torus. The torus carries an
anti-commuting pair of logical operators; the Dehn twist introduces a full
rotation in the torus, and the blue string operator now winds around the
back of the torus.]
On a genus zero surface (sphere) the only non-trivial
homeomorphisms up to isotopy are the genon braids.
On a higher genus surface we have many more non-trivial homeomorphisms.
A Dehn twist on a torus $T$ is a homeomorphism
of the torus that introduces a global twist in the torus,
see fig:dehn.
A horizontal Dehn twist is implemented as a linear
shear operation, up to
periodic boundary conditions:
\begin{align*}
\igc{topo_genus_1} &\hsp\mapsto\hsp \igc{topo_genus_1_xy} \\
\end{align*}
Now we compute the action of this horizontal Dehn twist on
the logical operators.
A complete set of logical operators is given in anti-commuting
pairs $(\bar{X}_0,\bar{Z}_0)$ and $(\bar{X}_1,\bar{Z}_1)$:
The action of a horizontal Dehn twist is then found to be
\begin{align*}
\igc{topo_genus_1_logops_10} \ \ &\mapsto\ \ \igc{topo_genus_1_dehn_0} \\
(\bar{X}_0,\bar{Z}_0)\hsp\hsp\hsp &\mapsto \hsp\hsp(\bar{X}_0, \bar{Z}_0\bar{Z}_1) \\
\end{align*}
\begin{align*}
\igc{topo_genus_1_logops_01} \ \ &\mapsto\ \ \igc{topo_genus_1_dehn_1} \\
(\bar{X}_1,\bar{Z}_1)\hsp\hsp\hsp &\mapsto \hsp\hsp(\bar{X}_0\bar{X}_1, \bar{Z}_1)
\end{align*}
and therefore this Dehn twist implements logical $CX_{1,0}$
with control qubit $1$ and target qubit $0$.
A similar calculation shows that the vertical
Dehn twist
\begin{align*}
\igc{topo_genus_1} &\hsp\mapsto \hsp
\igc{topo_genus_1_yx}
\end{align*}
implements a logical $CX_{0,1}$.
The combination of these two logical gates generates
the group $\GL(2,\Integer_2)$ which is isomorphic
to the permutation group $S_3$ of order 6.
These two Dehn twists are known to implement the
mapping class group of the torus [16].
The mapping class group of a surface
is the group of isotopy classes of homeomorphisms of the surface.
§.§.§ Riemann surfaces and double covers
In this section we re-interpret the above
theory of $D(\Integer_2)$ topological order
using the language of Riemann surfaces.
Loosely speaking,
if domain-walls correspond to passing between
the normal world and a “bizarro” world, then
why don't we interpret this literally?
In other words, take two copies of the topological
phase and cut/glue them together appropriately, along the domain walls.
This motivates the following consideration of
branched double covers.
Topologically, there are only two ways to double cover a circle,
which is the only compact connected one-dimensional manifold, see fig:double-cover.
When we do this with compact surfaces things get much more interesting.
See the textbook [17] 1.2.5.
Compact oriented connected topological surfaces are characterized
by their genus, which counts the number of “holes”:
[Three surfaces: genus zero, genus one, genus two.]
The genus zero surface is a sphere, and we have inscribed on it
$v=6$ vertices, $e=12$ edges joining vertices, and $f=8$ faces, thus
giving the Euler character
\chi = v - e + f = 6 - 12 + 8 = 2.
In general, for a genus $g$ surface we have that
$\chi = 2 - 2g.$
The Euler character of a surface constrains what
graphs we can inscribe on that surface.
Given a compact surface $E$ that double covers a surface $B$,
\begin{tikzcd}
E \arrow[d, twoheadrightarrow, "p"] \\
B
\end{tikzcd}
the Euler characteristic $\chi(.)$ satisfies the formula
\chi(E) = 2\chi(B)
If the cardinality of the fiber $p^{-1}(b)$ is $1$ at
a point $b\in B$ we say that the cover has a
branch point at $b$.
Such branch points introduce a correction into the
formula for the Euler characteristic:
\chi(E) = 2\chi(B) - m
where $m$ is the number of branch points.
This is a special case of the more general
Riemann-Hurwitz formula, see [17] Thm 1.76.
In terms of the genus $g(.)$ of the surfaces $E$ and $B$,
we have
\begin{align*}
2g(E) - 2 &= 2(2g(B) - 2) + m, \\
g(E) - 1 &= 2g(B) - 2 + \frac{1}{2}m.
\end{align*}
When the genus of both $E$ and $B$ is zero
we find that $m=2$. This cover is given
by the double cover of the Riemann sphere,
or extended complex plane, under the function $f(z)=z^2.$
Most points, such as $1=(\pm 1)^2$ and $-1=(\pm i)^2$ are
double covered by $f$, except for the $m=2$ points at $0$ and $\infty$:
The yellow line is an arbitrary line connecting the two yellow branch points
at $z=0$ and $z=\infty$.
We can construct these branched covers with some cutting and gluing.
First we take two copies of the base space:
This is the trivial double cover of the base.
Now focusing on the lower part of this figure:
This shows explicitly how domain walls and
genons correspond to branches of double covers,
and will motivate the development of the qubit
theory below, see fig:cover-genon.
Here we tabulate values of $m$ for various double covers:
\begin{array}{c|cccccccc}
m=\ ? & g(E)\!=\!0 & g(E)\!=\!1 & g(E)\!=\!2 & g(E)\!=\!3 & g(E)\!=\!4 & g(E)\!=\!5 & & \\
\hline
g(B) = 0 & 2 & 4 & 6 & 8 & 10 & 12 \\
g(B) = 1 & & 0 & 2 & 4 & 6 & 8 \\
g(B) = 2 & & & & 0 & 2 & 4 \\
g(B) = 3 & & & & & & 0
\end{array}
The interplay between the $m$ branch points and
the resulting genus of a double cover
is the origin of the term genon.
In summary, we have the following dictionary:
$D(\Integer_2)$ topological order | Riemann surfaces
domain wall | branch cut
genon | branch point
§.§ $ZX$-calculus survival guide
The $ZX$-calculus
is a notation for drawing quantum circuit diagrams [13, 35].
We draw circuits from right to left, in agreement with algebraic (Dirac) notation.
The building blocks for this
notation are wires, blue and green spiders,
and the yellow Hadamard box.
Such diagrams are examples of tensor networks.
In this section we give a brief and incomplete introduction to this notation
and how we compute with it.
The definition of a blue or green spider
with $m$ outputs, $n$ inputs and labelled with phase $a\in \Integer/4$ is
given by:
\begin{align*}
\igc{green_mn} \ &:= \
\ket{0}^{\tensor m}\bra{0}^{\tensor n} + e^{2\pi ia/4} \ket{1}^{\tensor m}\bra{1}^{\tensor n}
\\
\igc{blue_mn} \ &:= \
\ket{+}^{\tensor m}\bra{+}^{\tensor n} + e^{2\pi ia/4} \ket{-}^{\tensor m}\bra{-}^{\tensor n}
\end{align*}
When the phase is zero we usually omit the phase label.
ZX-diagrams without phase labels are called phase-free.
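To make this definition concrete, the following short numpy sketch (our own illustration, not part of any $ZX$ software package) builds spider matrices directly from the formula above and checks that the phase-$2$ spiders are exactly the Paulis $Z$ and $X$:

import numpy as np

def green_spider(m, n, a=0):
    # Green spider with m outputs, n inputs, phase a in Z/4:
    # the |0><0| pattern plus i^a times the |1><1| pattern, as a 2^m x 2^n matrix.
    ket0 = np.zeros(2**m); ket0[0] = 1
    ket1 = np.zeros(2**m); ket1[-1] = 1
    bra0 = np.zeros(2**n); bra0[0] = 1
    bra1 = np.zeros(2**n); bra1[-1] = 1
    return np.outer(ket0, bra0) + (1j)**a * np.outer(ket1, bra1)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

def blue_spider(m, n, a=0):
    # Blue spider: conjugate the green spider by a Hadamard on every leg.
    Hm, Hn = np.eye(1), np.eye(1)
    for _ in range(m): Hm = np.kron(Hm, H)
    for _ in range(n): Hn = np.kron(Hn, H)
    return Hm @ green_spider(m, n, a) @ Hn

# Phase-2 spiders are the Pauli operators Z and X respectively.
assert np.allclose(green_spider(1, 1, 2), np.diag([1, -1]))
assert np.allclose(blue_spider(1, 1, 2), np.array([[0, 1], [1, 0]]))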
The most important equations
for our purposes
allow us to commute the Pauli $X$ and $Z$ operators through a circuit.
When a phase $2$ operator meets a spider of the same colour,
it passes through freely:
\begin{equation*}
\begin{aligned}[c]
\igc{zx_bb_b2} &= \igd{-0.6}{zx_bb2_b} = \igc{zx_b2b_b} \\
\igc{zx_gg_g2} &= \igd{-0.6}{zx_gg2_g} = \igc{zx_g2g_g} \\
\end{aligned}
\hsp
\begin{aligned}[c]
\igc{zx_b_bb2} &= \igd{-0.6}{zx_b_b2b} = \igc{zx_b2_bb} \\
\igc{zx_g_gg2} &= \igd{-0.6}{zx_g_g2g} = \igc{zx_g2_gg} \\
\end{aligned}
\end{equation*}
When a phase $2$ operator meets a spider of the opposite colour,
it is copied:
\begin{equation*}
\begin{aligned}[c]
\igc{zx_gg_gb2} &= \igd{-0.55}{zx_b2b2gg_g} \\
\igc{zx_bb_bg2} &= \igd{-0.55}{zx_g2g2bb_b} \\
\end{aligned}
\hsp
\begin{aligned}[c]
\igd{-0.55}{zx_g_ggb2b2} &= \igc{zx_b2g_gg} \\
\igd{-0.55}{zx_b_bbg2g2} &= \igc{zx_g2b_bb} \\
\end{aligned}
\end{equation*}
States and effects get copied by spiders of the opposite colour:
\begin{equation*}
\begin{aligned}[c]
\igc{zx_bb_bg} &= \igc{zx_gg_} \\
\igc{zx_gg_gb} &= \igc{zx_bb_} \\
\end{aligned}
\hsp
\begin{aligned}[c]
\igc{zx__gg} &= \igc{zx_gb_bb} \\
\igc{zx__bb} &= \igc{zx_bg_gg} \\
\end{aligned}
\end{equation*}
Spider legs are flexible, and this is how we
justify the use of vertical wires in our $ZX$-diagrams. For example:
\igc{zx_gb_gb} := \igc{zx_b_bbgg_g} = \igc{zx_g_ggbb_b}
Using these rules we commute a Pauli $Z$ operator on the control
qubit of a $CX$:
\igc{zx_CX_g2} = \igc{zx_g2_CX}
and a Pauli $X$ operator,
\igc{zx_CX_b2_1} =
\igc{zx_CX_b2_2} =
\igd{-0.5}{zx_CX_b2_3}
This diagrammatic calculus is blue-green symmetric, and indeed,
the $ZX$-calculus has a Fourier duality which implements
this colour reversal:
\igc{zx_Hg2} = \igc{zx_b2H}\hsp
\igc{zx_Hb2} = \igc{zx_g2H}
Now we can commute the Pauli $X$ operator through a $CZ$:
\igc{zx_CZ_b2_1} =
\igc{zx_CZ_b2_2} =
\igc{zx_CZ_b2_3} =
\igd{-0.5}{zx_CZ_b2_4}
The Fourier duality plays a fundamental role in this work,
and we make use of this blue-green-yellow colour scheme consistently to
refer to this connection.
§.§ Quantum codes and symplectic geometry
In this section we recall basic facts about
qubit stabilizer codes, their relation to
symplectic geometry,
and the Clifford group [11, 21].
Given a field $\Field$ and an integer $n\ge 0$,
we define the standard
$2n$-dimensional symplectic space to be the vector space $\Field^{n}\oplus\Field^{n}$
with symplectic form
\Omega_n =
\begin{pmatrix}
0 & I_n \\
-I_n & 0
\end{pmatrix}
where $I_n$ is the $n\times n$ identity matrix.
A vector $v\in \Field^{n}\oplus\Field^n$
is written as a block column matrix:
v = \begin{pmatrix} v_X \\ v_Z \end{pmatrix}
and we call $v_X$ the $X$ part of $v$ and $v_Z$ the $Z$ part of $v$.
Similarly for covectors and row matrices.
Given a vector $v$ in the vector space $\Field^n$ we define the weight of $v$,
denoted $w(v)$,
to be the number of non-zero components of $v$.
For a vector $v$ in the standard symplectic space
$\Field^{n}\oplus\Field^n$, we define the weight as
w(v) = w(v_X) + w(v_Z) - w(v_X \cdot v_Z),
where $v_X\cdot v_Z \in \Field^n$ is the componentwise product of $v_X$ and $v_Z$.
Let $\Field_2$ be the finite field of order $2$.
Much of the theory below can be developed for other finite fields,
but here we focus on the case $\Field_2$.
We define a quantum code $C$ to be
an isotropic subspace $C\subset\Field_2^n\oplus\Field_2^n$
where $n\ge 0$ is an integer number of qubits.
We also call $C$ the codespace.
Given such a code, the logical space is
the coisotropic subspace given by:
C^{\perp} = \{v \in \Field_2^n\oplus\Field_2^n\ |\ v^{\top}\Omega_n C = 0\}
\supset C.
The parameters $[[n,k,d]]$ of a quantum code $C$ are:
$n$ the number of qubits, $k$ the dimension of $C^{\perp}/C$,
and $d$ the minimum weight of $v \in C^{\perp}$ with $v\notin C.$
Quantum codes $C$ can also be specified by
an $m\times 2n$ (parity) check matrix $\Parity$.
This is a full-rank matrix with
$ \Parity\Omega_n \Parity^{\top} = 0. $
The codespace $C$ is the rowspan of $\Parity$,
and the logical space $C^\perp$ is the kernel (nullspace)
of the matrix $\Parity\Omega_n$.
We write such a check matrix in block form
$\Parity = \bigl( \Parity_X\ \Parity_Z \bigr)$ where $\Parity_X$ and $\Parity_Z$ are $m\times n$ matrices.
Expanding the isotropic condition
we find the equivalent statement
\begin{align*}
\Parity_X \Parity_Z^\top - \Parity_Z\Parity_X^\top = 0.
\end{align*}
Given a quantum code $C\subset \Field_2^n\oplus\Field_2^n$
we say that $C$ is CSS when we have
the direct sum decomposition $C=C_X\oplus C_Z$ with
$C_X\subset \Field_2^n\oplus 0$ and $C_Z\subset 0\oplus\Field_2^n$.
Equivalently, $C$ is CSS when it has a check matrix
of the form
\Parity = \bigl( \Parity_X\ \Parity_Z \bigr) =
\begin{pmatrix}
\Parity'_X & 0 \\
0 & \Parity'_Z \\
\end{pmatrix}.
In other words, each row of $\Parity$ can be taken to be either purely $X$-type or purely $Z$-type.
We will make use of Pauli operator notation:
for a vector $v\in\Symp$,
with components $(v_1,...,v_n, v_{n+1},...,v_{2n})$
we write this as a length $n$ string ($n$-tuple)
of symbols $I,X,Y,Z$ with $i$-th entry given by
\left\{
\begin{array}{lll}
I & \text{if} & v_i=0, v_{n+i}=0, \\
X & \text{if} & v_i=1, v_{n+i}=0, \\
Z & \text{if} & v_i=0, v_{n+i}=1, \\
Y & \text{if} & v_i=1, v_{n+i}=1.
\end{array}
\right.
We also use the dot $.$ in place of $I$.
For example, the vector $(1 0 1 1)\in \Field_2^2\oplus \Field_2^2$
has Pauli operator $YZ$.
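These conventions are easily mechanized; here is a minimal Python sketch (ours) that converts a symplectic vector to its Pauli string and computes the weight defined earlier:

import numpy as np

def pauli_string(v):
    # Map a vector in F_2^n (+) F_2^n to its Pauli string,
    # using the dot for identity, e.g. (1,0,1,1) -> 'YZ'.
    v = np.asarray(v) % 2
    n = len(v) // 2
    return "".join(".XZY"[int(x) + 2 * int(z)] for x, z in zip(v[:n], v[n:]))

def weight(v):
    # Symplectic weight w(v_X) + w(v_Z) - w(v_X . v_Z): the number of
    # qubits on which the Pauli operator acts non-trivially.
    v = np.asarray(v) % 2
    n = len(v) // 2
    vx, vz = v[:n], v[n:]
    return int(vx.sum() + vz.sum() - (vx * vz).sum())

assert pauli_string([1, 0, 1, 1]) == "YZ"
assert weight([1, 0, 1, 1]) == 2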
The subspace of $\Symp$
spanned by $v_i, v_{n+i}$ is the $i$-th qubit.
We declare two codes $C$ and $C'$ to be isomorphic $C\cong C'$
when they are the same up to permutation of qubits.
For an example of a $[[4,1,2]]$ quantum code $C\subset \Field_2^4\oplus \Field_2^4$
we have the
parity check matrix and corresponding Pauli operator notation:
\Parity =
\left(
\begin{array}{cccc;{1pt/0pt}cccc}
1&1&0&0&0&1&1&0\\
0&1&1&0&0&0&1&1\\
0&0&1&1&1&0&0&1\\
\end{array}
\right)
\leftrightarrow
\begin{pmatrix}
X&Y&Z&.\\
.&X&Y&Z\\
Z&.&X&Y\\
\end{pmatrix}.
We have a vertical line separating the $\Parity_X$
and $\Parity_Z$ blocks,
and the dot notation is for zero or $I$ entries.
An example of a CSS code
$C\subset \Field_2^{8}\oplus \Field_2^{8}$
with parameters $[[8,2,2]]$
is given by
\Parity =
\left(
\begin{array}{cccccccc;{1pt/0pt}cccccccc}
1&1&0&0&0&1&1&0&0&0&0&0&0&0&0&0\\
0&1&1&0&0&0&1&1&0&0&0&0&0&0&0&0\\
0&0&1&1&1&0&0&1&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&1&1&0&1&1&0&0\\
0&0&0&0&0&0&0&0&0&0&1&1&0&1&1&0\\
0&0&0&0&0&0&0&0&1&0&0&1&0&0&1&1\\
\end{array}
\right)
\leftrightarrow
\begin{pmatrix}
X&X&.&.&.&X&X&.\\
.&X&X&.&.&.&X&X\\
.&.&X&X&X&.&.&X\\
.&Z&Z&.&Z&Z&.&.\\
.&.&Z&Z&.&Z&Z&.\\
Z&.&.&Z&.&.&Z&Z\\
\end{pmatrix}.
These two examples are chosen for a reason:
the parity check matrix of the $[[8,2,2]]$
code contains two copies of the parity check matrix of
the $[[4,1,2]]$ code.
This symplectic double procedure is the subject of <ref> below.
§.§.§ The qubit Clifford and Pauli groups
The $n$-qubit Pauli group,
also called the Heisenberg-Weyl group,
is a subgroup of the unitary group $\Unitary(2^n)$
generated by $n$-fold tensor products of
the matrices
iI = \begin{pmatrix}i & 0\\0 & i\end{pmatrix},\ \
X = \begin{pmatrix}0 & 1\\1 & 0\end{pmatrix},\ \
Z = \begin{pmatrix}1 & 0\\0 & -1\end{pmatrix}.
This group has order given by
|\Pauli_2(n)| = 4\cdot 4^{n}
and the center $\Center(\Pauli_2(n))\cong \ZZ/4$
is generated by $i$.
The quotient $\Pauli_2(n)/\Center(\Pauli_2(n))$ is isomorphic to (the additive group of)
the $\Field_2$-vector space $\Field_2^{2n}.$
We write this as the short exact sequence:
\ZZ/4 \rightarrowtail \Pauli_2(n) \twoheadrightarrow \Field_2^{2n}.
The 2-cocycle for this central extension is a function
$\beta:\Field_2^{2n}\times\Field_2^{2n}\to \ZZ/4$ satisfying
$$\beta(v,w) \mod 2 = \langle v, w \rangle,$$
for all $v,w\in\Field_2^{2n}$.
Here we write $\langle v, w\rangle$
for the symplectic inner product on $\Field_2^{2n}$.
See [23] 3.3.1.
The $n$-qubit Clifford group
can be defined to be the normalizer of $\Pauli_2(n)$ in the unitary group $\Unitary(2^n)$.
This is an infinite group, however for our purposes we
will be using the following finite subgroup
as our definition of
the $n$-qubit Clifford group.
This is generated from the scalar $\omega$ (an eighth root of unity) and the matrices
\omega,\ \
H = \frac{1}{\sqrt{2}}\begin{pmatrix}1 & 1\\1 & -1\end{pmatrix},\ \
S = \begin{pmatrix}1 & 0\\0 & i\end{pmatrix},\ \
CZ = \begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&-1\end{pmatrix}
using multiplication and tensor products.
For $n\ge 0$, this group is denoted $\Cliff_2(n)$
and has order given by
|\Cliff_2(n)| = 8\prod_{j=1}^n 2(4^j - 1)4^j.
This sequence begins as $8, 192, 92160, 743178240,...$ and is sequence A003956 in the OEIS.
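This order formula is easy to verify numerically; a quick Python check (ours):

from math import prod

def clifford_order(n):
    # |Cliff_2(n)| = 8 * prod_{j=1..n} 2 (4^j - 1) 4^j
    return 8 * prod(2 * (4**j - 1) * 4**j for j in range(1, n + 1))

# First terms of OEIS A003956.
assert [clifford_order(n) for n in range(4)] == [8, 192, 92160, 743178240]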
These matrices have elements in the ring $\QQ[1^{1/8}].$
See [34], Figure 8, for an abstract presentation of
the Clifford group(s)
in terms of generators and relations.
The reference [23] uses a slightly different
definition of the Clifford group which is an index two subgroup of $\Cliff_2(n)$, 4.1.2.
[there's a typo in eq. (4.12)]
This is done by altering the definition of the Hadamard.
The generators are:
i=\omega^2,\ \ \omega H=\frac{i+1}{2}\begin{pmatrix}1&1\\1&-1\end{pmatrix},
\ \ S,\ \ CZ.
The generated matrices have elements in the ring $\QQ[i].$
We call this group the $n$-qubit semi-Clifford group, denoted $\SCliff_2(n)$.
The OEIS notes that the order of these groups A027638
is also the order of a unitary group acting on Siegel modular forms.
The center $\Center(\Cliff_2(n))$ is isomorphic to $\ZZ/8$,
and we define the quotient group to be the affine symplectic group over $\Field_2$:
\ASp(2n,\Field_2) := \Cliff_2(n) / \mathcal{Z}(\Cliff_2(n)).
For $n>1$, this group is not (!) isomorphic to the
expected definition of the affine symplectic group which is the
semidirect product $\Sp(2n,\Field_2)\ltimes\Field_2^{2n}.$
This is a peculiar consequence of the dimension of
our qubit space which is even. The story is much
simpler for odd prime-dimensional qudits.
Combining the above,
we have the following commutative diagram of group homomorphisms, where
every row and column is short exact:
\begin{tikzcd}
\Field_2^{2n} \arrow[r, tail] & \ASp(2n, \Field_2) \arrow[r, two heads] & \Sp(2n, \Field_2) \\
\Pauli_2(n) \arrow[r, tail] \arrow[u, two heads] & \Cliff_2(n) \arrow[r, two heads] \arrow[u, two heads] & \Mp(2n, \Field_2) \arrow[u, two heads] \\
\ZZ/4 \arrow[r, tail]\arrow[u, tail] & \ZZ/8 \arrow[r, two heads]\arrow[u, tail] & \ZZ/2\arrow[u, tail]
\end{tikzcd}
Here $\Mp(2n,\Field_2)$ is the metaplectic group over $\Field_2$.
This suggests that the Clifford group is, or should be called,
the affine metaplectic group.
See also [20].
In summary, the action of the Clifford group
on the Pauli group by conjugation is,
up to phases, represented by the symplectic group.
Here we record the action of single qubit Clifford operators
$S$ and $H$ on anti-commuting pairs of Pauli operators.
We notate the action via the barred operators $\bS$ and $\bH$
and we see that $\bS\bH\bS = \bH\bS\bH.$
If we omit the entangling gate $CZ$ from the list of
generators of the Clifford group, we get the
local Clifford group.
As symplectic matrices, this is generated by
$n$-fold direct sums of
the $\Field_2$ matrices
\bH=\begin{pmatrix}0&1\\1&0\end{pmatrix},\ \
\bS=\begin{pmatrix}1&0\\1&1\end{pmatrix}.
On a single qubit this group is $\Sp(2,\Field_2)$
which is isomorphic to the permutation group on
three elements $S_3$. See fig:lc.
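Both the relation $\bS\bH\bS = \bH\bS\bH$ and the order of this group are quickly confirmed over $\Field_2$; a small Python sketch (ours), using the matrices above:

import numpy as np

Sbar = np.array([[1, 0], [1, 1]])  # symplectic image of S
Hbar = np.array([[0, 1], [1, 0]])  # symplectic image of H

def mul(a, b):
    return (a @ b) % 2

assert np.array_equal(mul(mul(Sbar, Hbar), Sbar), mul(mul(Hbar, Sbar), Hbar))

# Enumerate the group generated by Sbar and Hbar: 6 elements, matching S_3.
group = {np.eye(2, dtype=int).tobytes()}
frontier = [np.eye(2, dtype=int)]
while frontier:
    g = frontier.pop()
    for h in (Sbar, Hbar):
        gh = mul(g, h)
        if gh.tobytes() not in group:
            group.add(gh.tobytes())
            frontier.append(gh)
assert len(group) == 6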
The $n$-qubit local Clifford group
preserves the parameters $[[n,k,d]]$ of a quantum code.
The Clifford group preserves the parameters $n$ and $k$
of any quantum code $C$.
The local Clifford group preserves the weight of
any vector $v\in\Field_2^n\oplus\Field_2^n$ and in
particular will preserve the weight of any codeword in $C$,
thereby preserving the parameter $d$.
§ THE SYMPLECTIC DOUBLE
Given a vector space $V$ over a field $\Field$,
we construct a
symplectic space $\Double(V) := V\oplus V^{\star}$ with symplectic form
\begin{align*}
\Omega: \Double(V)\tensor \Double(V) &\to \Field \\
(v\oplus c, u\oplus d) &\mapsto d(v) - c(u).
\end{align*}
Moreover, the assignment
\Double ( V ) = V\oplus V^{\star}
is functorial,
which means that given invertible $f:V\to V$,
we have that
\begin{align}\label{eq:double-f}
\Double ( f ) := f \oplus (f^{-1})^{\star}
\end{align}
is a symplectic map on $\Double(V)$:
\begin{align*}
& \Omega( (f\oplus(f^{-1})^{\star})(v\oplus c), (f\oplus(f^{-1})^{\star})(u\oplus d) ) \\
=\ & \Omega( f(v)\oplus cf^{-1}, f(u)\oplus df^{-1} ) \\
=\ & df^{-1}(f(v)) - cf^{-1}(f(u)) \\
=\ & d(v) - c(u)\\
=\ & \Omega( v\oplus c, u\oplus d ).
\end{align*}
and also that $\Double(.)$ preserves composition.
In other words, we have a group homomorphism:
\begin{align}\label{eq:functorial}
\Double(.):\GL(V,\Field)\to \Sp(\Double(V),\Field)
\end{align}
and this homomorphism is injective.
When $V$ itself is symplectic, with symplectic form $\Omega_0$,
we have an isomorphism
\begin{align*}
V &\xrightarrow{\cong} V^\star \\
v &\mapsto \Omega_0(v, \_).
\end{align*}
which we use in the definition of $\Double(.)$, as the following lemma shows.
This lemma is key to
the construction of fault-tolerant Clifford gates
on doubled codes.
Given symplectic space $(V,\Omega_0)$
with $n$-dimensional isotropic subspace $U\subset V$
then $\Double(U) := U\oplus \Omega_0(U,\_)$
is a $2n$-dimensional isotropic subspace of $\Double(V)$.
Moreover, given a symplectic map $f:V\to V$ that
preserves $U$ as a subspace $f(U)=U$, then
$\Double(f)$ is a symplectic map that
preserves the subspace $\Double(U)$.
Clearly $\Double(U)$ is a subspace of $\Double(V)$,
what remains to be shown is that $\Double(U)$ is isotropic.
With $u,v,u',v'\in U$ we have
generic elements of $\Double(U)$ given by
$u\oplus\Omega_0(v,\_)$ and $u'\oplus\Omega_0(v',\_)$.
The symplectic pairing
evaluated on these two elements is
\begin{align*}
& \Omega(u\oplus\Omega_0(v,\_),
u'\oplus\Omega_0(v',\_)) \\
=\ & \Omega_0(v',u) - \Omega_0(v,u')\\
=\ & 0-0 = 0
\end{align*}
and so $\Double(U)$ is isotropic.
Next, the action $\Double(f):
u\oplus\Omega_0(v,\_)
\mapsto
f(u)\oplus\Omega_0(v,f^{-1}(\_)) = f(u)\oplus\Omega_0(f(v),\_)
\in\Double(U)$
and so $\Double(f)$ preserves the subspace $\Double(U)$
when $f$ preserves the subspace $U$.
Given an $m\times 2n$ check matrix
$\Parity = \bigl( \Parity_X\ \Parity_Z \bigr)$
the doubled check matrix $\Double(\Parity)$ is
a $2m\times 4n$ check matrix
\begin{align}\label{eq:dh}
\Double(\Parity) :=
\begin{pmatrix}
\Parity_X & \Parity_Z & 0 & 0 \\
0 & 0 & \Parity_Z & \Parity_X
\end{pmatrix}.
\end{align}
By direct calculation we verify this is the check
matrix for a quantum code (isotropic subspace), as promised by
the functoriality lemma:
\begin{align}\label{eq:dhsymp}
\Double(\Parity)\Omega_{2n} \Double(\Parity)^\top =
\begin{pmatrix}
0 & \Parity\Omega_n \Parity^\top \\
\Parity\Omega_n \Parity^\top & 0
\end{pmatrix} = 0.
\end{align}
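This is also easy to check numerically; the following Python sketch (ours) doubles the $[[4,1,2]]$ genon code from <ref> and verifies that both check matrices span isotropic subspaces:

import numpy as np

def omega(n):
    # The symplectic form over F_2; the -I block equals +I mod 2.
    z, i = np.zeros((n, n), dtype=int), np.eye(n, dtype=int)
    return np.block([[z, i], [i, z]])

def double_check_matrix(Hx, Hz):
    # The doubled check matrix of eq:dh, in CSS form.
    m, n = Hx.shape
    z = np.zeros((m, n), dtype=int)
    return np.block([[Hx, Hz, z, z], [z, z, Hz, Hx]])

# The [[4,1,2]] genon code with stabilizers XYZ., .XYZ, Z.XY.
Hx = np.array([[1,1,0,0], [0,1,1,0], [0,0,1,1]])
Hz = np.array([[0,1,1,0], [0,0,1,1], [1,0,0,1]])
H = np.hstack([Hx, Hz])

assert not (H @ omega(4) @ H.T % 2).any()    # H is a valid check matrix
DH = double_check_matrix(Hx, Hz)
assert not (DH @ omega(8) @ DH.T % 2).any()  # so is D(H), on 8 qubits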
Given a quantum code $C$ with parameters $[[n,k,d]]$,
we have $\Double(C)$ a CSS quantum code with
parameters $[[2n,2k,\ge d]]$.
By the functoriality lemma <ref>,
we see that $\Double(C)$ is a $2n$ qubit quantum code.
A check matrix for the codespace $\Double(C)=C\oplus \Omega_n(C,\_)$
is given by eq:dh, which has CSS form.
Next, we examine logical operators $v\in\Double(C)^\perp$.
Both $v_X$ and $\Omega_n v_Z$ are in $C^\perp$,
and $w(v)\ge w(v_X) + w(v_Z)$, which is lower bounded by $d$
because one or both of $v_X,\Omega_n v_Z$ are not in $C$.
A closely related result is the following fault-tolerant property of
fiber transversal Clifford gates.
Given a quantum code $C$ with parameters $[[n,k,d]]$
a logical Clifford on $\Double(C)$ that acts on each fiber separately
is fault-tolerant up to distance $d$.
This is very similar to the proof of Theorem <ref>.
See Appendix <ref>.
A quantum code $C$ is a CSS code iff
$\Double(C) = C\oplus H(C).$
We now give two operational characterizations of
the CSS codes that are the double of some other code.
The first characterization relies on the following definition.
Given an $n$ qubit CSS code $C=C_X\oplus C_Z$,
a $ZX$-duality on $C$
is a permutation $\tau:n\to n$ such that
$\tau(C_X)=C_Z, \tau(C_Z)=C_X$ where the action of
$\tau$ on subspaces of $\Field_2^n$ is by permuting coordinates.
This is slightly more general than the definition in [8].
The second characterization is in terms of concatenation
with a CSS code with stabilizers $XXXX,ZZZZ$, logicals $XXII,ZIZI,XIXI,ZZII$.
We call this the $[[4,2,2]]$ code, even though there are other CSS codes
with these parameters.
[Figure (fig:422-ZX): an encoding unitary $E$ for the $[[4,2,2]]$ code as a
circuit diagram, flowing in the algebraic direction, from right to left.
$E$ has two qubit inputs for the stabilizer/destabilizers, and two logical
qubits. $E$ exchanges a 2-qubit logical gate with a physical transversal
Hadamard, and a logical $CZ$ gate with a physical transversal
$S^{(\dag)}$ gate.]
Given a CSS code $C$ on $2n$ qubits,
the following are equivalent:
(1) $C$ has a fixed-point-free involutory $ZX$-duality,
(2) $C=\Double(C')$ for some $n$ qubit
quantum code $C'$, and
(3) there is a concatenation
of $C$ with $n$ copies of the $[[4,2,2]]$
code that is self-dual.
(1)=>(2) Let $\tau:2n\to 2n$ be a fixed-point-free involutory $ZX$-duality on $C$.
This means that the orbits of $\tau$ have size two.
Without loss of generality
we can assume these orbits are of the form $\{i,i+n\}_{i=1,...,n}$.
Let the check matrix for $C$ be given by
the $2m\times 4n$ matrix
\Parity =
\begin{pmatrix}
\Parity_X & 0 \\
0 & \Parity_Z \\
\end{pmatrix}
We see that $\tau \Parity_Z^\top = \Parity_X^\top A$
where $A$ is an invertible $m\times m$ matrix.
Therefore, we have
\begin{pmatrix}
A^\top\Parity_X & 0 \\
0 & \Parity_Z \\
\end{pmatrix}
is in the form of a doubled check matrix eq:dh.
(2)=>(1) The converse direction follows because
a doubled check matrix always has the above $\tau$ a
(fixed-point-free involutory) $ZX$-duality.
Concatenation corresponds to composing encoding circuits.
The two qubit orbits of $\tau$ correspond to the pairs
along which we concatenate with the $[[4,2,2]]$ code.
The $[[4,2,2]]$ encoder satisfies the identity implementing a $ZX$-duality.
See fig:422-ZX.
A stronger statement can be made: there is a bijection
between CSS codes $C$ with fixed-point-free involutory $ZX$-duality $\tau$,
and codes $C'$ such that $C\cong \Double(C')$.
In other words, there can be distinct codes $C'$ and $C''$ that
double to isomorphic codes $\Double(C')\cong\Double(C'')$.
We will see an example of this
in <ref> and fig:ten-two-three.
Given any of the conditions in Theorem <ref>
together with a condition on the $X$-type stabilizers,
we have from [8] Theorem 7,
that $C$ will have a transversal $CZ$ gate.
Condition (3) is a generalization of the well-known construction
of the {4,8,8} colour code by concatenating
the $[[4,2,2]]$ code with two copies of a toric
code paired along a $ZX$-duality [15].
We write this concatenation along a $ZX$-duality
$\tau$ as $[[4,2,2]]\otimes_{\tau}C$.
Given a quantum code $C$ on $n$ qubits
the following are equivalent:
(1) $\Double(C)$ has a fiber transversal $CZ$ gate,
(2) $C$ has a basis $\{v_i\}$ such that each $v_i$ contains an even number of $Y$'s, and
(3) $[[4,2,2]]\otimes_\tau\Double(C)$ has a transversal $S^{(\dag)}$ gate.
§.§ Lifting Cliffords
Recall that
Cliffords in the phase-free $ZX$-calculus are generated by
CX gates [25].
The next theorem is a consequence of the functoriality of $\Double(\ )$.
The injective group homomorphism
$\Double:\Sp(2n,\Field_2) \to \Sp(4n,\Field_2)$
lifts to a homomorphism
$\Double':\Sp(2n,\Field_2) \to \Cliff(2n)$
whose image is given by Clifford unitary gates in
the phase-free $ZX$-calculus with fixed-point-free involutory $ZX$-duality:
\begin{tikzcd}
& \Cliff(2n) \arrow[d, twoheadrightarrow] \\
\Sp(2n,\Field_2) \arrow[r, rightarrowtail, "\Double"]
\arrow[ur, rightarrowtail, "\Double'"]
& \Sp(4n,\Field_2)
\end{tikzcd}
We define $\Double'$ on the generators (redundantly) as
\begin{align*}
\igc{green_111} & \mapsto \igc{gate_CX01} \\
\igc{blue_111} & \mapsto \igc{gate_CX10} \\
\igc{gate_H} & \mapsto \igc{gate_SWAP} \\
\igc{gate_CX01} & \mapsto \igc{gate_CX0231} \\
\igc{gate_SWAP} & \mapsto \igc{gate_DSWAP} \\
\end{align*}
Note we are using the “little-endian” symplectic
convention for the string diagrams on the right-hand side.
This gives a (unitary) permutation representation of
$\Sp(2n,\Field_2)$ in the computational basis.
It is straightforward to check that this agrees with
the application of eq:double-f to symplectic matrices $M$ on $\Symp$:
\Double(M) = M \oplus (M^{-1})^{\top} .
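Functoriality of this matrix form is again easy to confirm over $\Field_2$; a Python sketch (ours), with a small Gauss-Jordan inverse:

import numpy as np

def gf2_inv(M):
    # Invert a matrix over F_2 by Gauss-Jordan elimination.
    n = len(M)
    A = np.hstack([M % 2, np.eye(n, dtype=int)])
    for c in range(n):
        piv = c + next(r for r in range(n - c) if A[c + r, c])
        A[[c, piv]] = A[[piv, c]]
        for r in range(n):
            if r != c and A[r, c]:
                A[r] ^= A[c]
    return A[:, n:]

def double_map(M):
    # D(M) = M (+) (M^{-1})^T, acting on the doubled symplectic space.
    n = len(M)
    z = np.zeros((n, n), dtype=int)
    return np.block([[M % 2, z], [z, gf2_inv(M).T]])

Sbar = np.array([[1, 0], [1, 1]])
Hbar = np.array([[0, 1], [1, 0]])
assert np.array_equal(double_map(Sbar @ Hbar % 2),
                      double_map(Sbar) @ double_map(Hbar) % 2)  # D(MN) = D(M)D(N)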
For example,
given a code $C$ satisfying any
of the conditions of Theorem <ref>,
so that $C=\Double(C')$
a Hadamard on qubit $i$ in the base code $C'$
is lifted under $\Double'$ to
swapping the qubits in $C$ in the fiber over $i$.
We will explore further examples in <ref> below.
This map $\Double'$ also appears in the proof of Theorem 3.8 in
[1], there denoted as $[\![ \ ]\!]^{\natural}$.
The Tanner graph of any symplectic
matrix $M\in\Sp(2n,\Field_2)$ gives a $ZX$-calculus diagram
for $\Double'(M)$.
See fig:lc-spzx for the single qubit symplectic matrices $\Sp(2,\Field_2)$
and the corresponding $ZX$-calculus diagrams.
We remark that one can also apply $\Double$ to encoders;
that $\Double$ respects logical operators, by Lemma <ref>;
and that the equation for $\Double(M)$ gives a converse:
phase-free $ZX$-calculus diagrams with a symmetry property
correspond to diagrams that descend to the base code.
[Figure (fig:lc-spzx): the symplectic matrices for the local Clifford group
on one qubit, and their corresponding Tanner graphs. The Tanner graph for a
symplectic matrix in $\Sp(2,\Field_2)$ gives the $ZX$-calculus diagram for
the lifted Clifford gate under $\Double'$.]
§ GENON CODES
In this section we develop the theory of genon codes.
Examples are discussed below in <ref>.
§.§ Genon graphs and genon codes
We are given a compact oriented surface $S$
with a finite graph $\Gamma$ embedded therein.
This subdivides the surface into
vertices, edges, and faces.
Vertices are required to have valence three or four.
Faces must have at least two edges, so that
bigon faces are allowed, but monogon faces are not.
We call such a $\Gamma$ a genon graph.
A genon code on a genon graph $\Gamma$ is a quantum code $C$
where qubits are placed at vertices,
stabilizers are placed on faces
with the following allowed configurations at each vertex:
We will write $(C,\Gamma)$ for a genon code $C$ on $\Gamma$.
Given a genon graph $\Gamma$ with $n$ vertices,
there are $6^{n}$ genon codes on $\Gamma$ and they are all
equivalent under local Clifford operations.
The local Clifford group acts transitively
on the set of genon codes on $\Gamma$ because any vertex
configuration of valence $r=3,4$ is local Clifford equivalent
to any other vertex configuration of valence $r$.
Conversely, given a genon code,
the stabilizer subgroup of the local Clifford group
is trivial and so we have the result that
there are $6^{n}$ such distinct genon codes on a given graph $\Gamma$
with $n$ vertices.
It's worthwhile staring at an illustration of this proof to
see how the local Clifford group acts on the 3-valent
and 4-valent vertices.
You really do get 6 different configurations
at each vertex, and the local Clifford group moves between all of these:
Let $C$ be a genon code on $\Gamma$ encoding $k$ logical qubits.
If $\Gamma$ is bicolourable
then $k=V-F+2$,
otherwise $k=V-F+1$.
For any stabilizer code with $n$ qubits and check matrix $\Parity$
we have that $k=n-\Rank(\Parity) = V -\Rank(\Parity).$
For a genon code $C$ on $\Gamma$, $\Parity$ is given by stabilizers living
on the faces.
Let $\Gamma$ be bicolourable, with faces either black or white.
Then $\Gamma$ has only four-valent vertices, and we get
a linearly dependent combination of the white stabilizers, and also of the
black stabilizers. Conversely, any linearly dependent combination of
stabilizers is a sum of either the black or the white faces.
Therefore, $k=V-F+2$, and moreover, this argument runs backwards
so that $k=V-F+2$ implies bicolourable faces.
A similar argument shows that a lack of bicolourable faces is
equivalent to $k=V-F+1$. In this case the one linear dependency
comes from the combination of all the face stabilizers.
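For example, the bicolourable graph on the torus in fig:genon-codes has
$V=16$ vertices and $F=16$ faces, giving $k=16-16+2=2$: the two logical
qubits of the toric code.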
Let $C$ be a genon code on $\Gamma$ encoding $k$ logical qubits,
with $m$ the number of 3-valent vertices, and $g$ the genus of $\Gamma$.
If $\Gamma$ is bicolourable then $k=2g$ and $m=0$,
otherwise $k=2g + \frac{m}{2} - 1$.
Let $V_3$ be the number of 3-valent vertices, and $V_4$ be the number
of 4-valent vertices.
Then we see the number of edges is
$E = \frac{3}{2} V_3 + 2V_4$.
Writing the Euler characteristic:
\begin{align*}
\chi = 2-2g &= V - E + F \\
&= V_3 + V_4 - \frac{3}{2}V_3 - 2V_4 + F \\
&= F - \frac{1}{2}V_3 - V_4.
\end{align*}
If $\Gamma$ is bicolourable, then $V_3=0$ and
by the previous lemma we find $k=V_4-F+2$. Substituting
$F=2+V_4-k$ into the above equation for $\chi$ we get
$2-2g = 2+V_4-k -V_4 $ and so $k=2g$.
If $\Gamma$ is not bicolourable the previous lemma
gives $F=V_3+V_4+1-k$ and so
$2-2g = V_3+V_4+1-k - \frac{1}{2}V_3 - V_4$
which gives $k=2g + \frac{m}{2} - 1$ as required.
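For example, the $[[4,1,2]]$ genon code of <ref> lives on a sphere ($g=0$)
with $m=4$ genons, giving $k = 2\cdot 0 + \tfrac{4}{2} - 1 = 1$, in agreement
with its parameters.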
§.§ String operators
See SY [33] 4 and GS [19] III D.
We'd like to refer to logical operators of a genon code
up to local Clifford equivalence.
This is a theory of string operators
based only on the graph $\Gamma$.
This will be a classical binary code $S$,
which is then linearly mapped onto the logical
space $C^\perp$ of a given genon code $C$ on $\Gamma$.
Given a genon graph $\Gamma$
we associate a vector space $\Field_2^{2E}$ whose
basis is the edge-face pairs of $\Gamma$.
In other words, every edge carries a vector space $\Field_2^2$.
We notate the four
vectors in this space with black lines
\begin{align*}
(0,0): \raisebox{-0.4\height}{\igr{edge_00}}\hsp
(1,0): \raisebox{-0.4\height}{\igr{edge_10}} \\
(0,1): \raisebox{-0.4\height}{\igr{edge_01}}\hsp
(1,1): \raisebox{-0.4\height}{\igr{edge_11}}
\end{align*}
and similarly for vectors in $\Field_2^{2E}$.
We now define the subspace $S\subset \Field_2^{2E}$
of string operators,
by considering the allowed string
operators around the 3-valent vertices and 4-valent vertices.
Around 3-valent vertices,
we have a vector space $\Field_2^6$, whose intersection with $S$ is
a 5-dimensional space spanned by the pictured configurations
and their rotations/reflections.
These give all even parity vectors of $\Field_2^6$.
Around the 4-valent vertices we have a vector space
$\Field_2^8$, whose intersection with $S$ is
a 6-dimensional space spanned by the pictured configurations
and their rotations/reflections.
Note these diagrams are not all linearly independent,
for example (i)+(ii)=(iii) and (iv)+(v)=(vi).
Given a genon code $C$ on $\Gamma$
we define a linear map of string operators to logical operators
$$\phi:S\to C^\perp$$
on basis elements of $S$ as follows.
At a 4-valent vertex:
\phi:\igc{logical_v4} \mapsto a \Hsp
\phi:\igc{logical_v4a} \mapsto a
and rotations/reflections.
At a 3-valent vertex:
\phi:\igc{logical_v3a} \mapsto a \Hsp
\phi:\igc{logical_v3bc} \mapsto c
and rotations/reflections.
For example, linearity of $\phi$ implies the following:
\phi:\igc{logical_v4I} \mapsto 0 \Hsp
\phi:\igc{logical_v3b} \mapsto b \Hsp
\phi:\igc{logical_v3I} \mapsto 0 \Hsp
Notice that the kernel of $\phi$ is non-trivial,
in other words
there is some redundancy in these string operators.
Using $\phi$ we can pick out a stabilizer generator $v\in C^\perp$
with a string operator external to the corresponding face;
however, the string operator internal to a face
is sent to zero, for example:
\phi:\igc{string_istab}\mapsto v\in C^\perp \Hsp
\phi:\igc{string_ostab}\mapsto 0\in C^\perp
We summarize this in the following theorem.
Given a genon code $C$ with parameters $[[n,k,d]]$ on a graph $\Gamma$,
we have that
\dim(S)
= \left\{\begin{array}{ll}
2k + 2F - 2 &\text{if $\Gamma$ is bicolourable}\\
2k + 2F -1 &\text{otherwise}
\end{array}\right.
= \left\{\begin{array}{ll}
2n + 2 &\text{if $\Gamma$ is bicolourable}\\
2n + 1 &\text{otherwise}
\end{array}\right.
$\phi:S\to C^\perp$ is surjective with kernel spanned by
the internal face string operators.
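The second displayed form follows from the first by substituting the face
count computed in the proof below:
\begin{align*}
\dim(S) = 2k + 2F - 2 = 2k + 2(n-k+2) - 2 = 2n + 2
\end{align*}
in the bicolourable case, and similarly $2k + 2(n-k+1) - 1 = 2n+1$ otherwise.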
Given a logical operator $v\in C^\perp$ we can construct
a string operator $u\in S$ locally such that $\phi(u)=v$.
This is done by cases.
To find the kernel of $\phi$ we see that all the internal
face string operators are linearly independent; there are
$F$ of these, where $F$ is the number of faces of $\Gamma$:
F = \left\{\begin{array}{ll}
n-k+2 &\text{if $\Gamma$ is bicolourable}\\
n-k+1 &\text{otherwise}
\end{array}\right.
This theorem makes intuitive sense from the homological point of view:
stabilizer generators are given by operators that bound
a region, so they have an inside.
Loosely speaking, the extra information found
in the string operators $S$ includes inside-out stabilizers,
which $\phi$ must send to zero.
Because the internal face string operators are sent to zero by $\phi$
we define the diagrammatic shorthand, or syntactic sugar:
\igc{string_center} :=
\igc{string_left} =_{\phi} \igc{string_right}
where the $\phi$ subscript refers to equality in the image of $\phi$.
In words, string operators can pass directly across faces
and don't need to wind around the inside.
Examples of the use of this string diagram calculus are
given in fig:xzzx.
§.§ Domain walls
Given a genon code $C$ on a graph $\Gamma$,
we define a unique decoration on $\Gamma$,
called the domain walls as follows:
(1) Every edge lies between two faces and we
place domain walls between the center of the two faces
according to the rules:
where the star $\star$ denotes any Pauli consistent with the genon code rules.
For example, a $YY$ configuration along one side of an edge is covered by these
rules because the other side of the edge will carry $XX$, $ZZ$ or $XZ$.
(2) Each face with a $Y$ at a trivalent vertex has a domain
wall from the center of the face to the $Y$ operator (the face-vertex flag):
At the center of each face there is
an even number of domain walls coming together.
Given a genon code $C$ we define the parity of a face
to be the number of domain walls incident at its center, mod 2.
Step 1:
we see that local Clifford operators preserve the domain wall parity
at each face. This is done by cases.
Step 2: for each face, use local Clifford operators to
convert this face stabilizer into a pure $X$ type stabilizer.
This has zero domain walls, which is even parity.
From this theorem we see that if a domain wall has an endpoint
it will be at the $Y$ operator of a 3-valent vertex
(and never at the center of a face).
We call these termination points genons.
Given a genon code $C$ we see that there is exactly one way to decorate
the surface $\Gamma$ with domain walls; the converse, however, is
not true.
§.§ Double covers of genon codes
Given a genon code $C$ on a graph $\Gamma$,
we define the double cover of
$\Gamma$ relative to $C$ written $\Double(\Gamma,C)$, as follows:
(Dimension 0)
Every vertex $v\in\Gamma$ is covered by two vertices in $\Double(\Gamma,C)$,
called the fiber over $v$.
This fiber is ordered, and we write the two vertices over $v$ as $v^1$ and $v^2$.
See fig:cover-genon-1.
(Dimension 1)
Every edge $e\in\Gamma$, with vertices $v_1,v_2$,
is covered by two edges in $\Double(\Gamma,C)$,
called the fiber over $e$, written $e^1$ and $e^2$.
If $e$ does not cross a domain wall
then $e^1$ has vertices $v_1^1,v_2^1$, and
$e^2$ has vertices $v_1^2,v_2^2$.
If $e$ does cross a domain wall
then $e^1$ has vertices $v_1^1,v_2^2$, and
$e^2$ has vertices $v_1^2,v_2^1$.
See fig:cover-genon-2.
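The edge-lifting rule can be phrased as a tiny program. The following
Python sketch (the data structures are our own illustrative choice)
builds the fiber over each edge from a flag recording whether that
edge crosses a domain wall:
\begin{verbatim}
def lift_edges(edges, crosses_wall):
    """edges: list of (v1, v2) pairs; crosses_wall: parallel list of bools.
    Vertices in the cover are written (v, 1) and (v, 2)."""
    lifted = []
    for (v1, v2), crossed in zip(edges, crosses_wall):
        if not crossed:
            # e^1 joins v1^1--v2^1 and e^2 joins v1^2--v2^2.
            lifted.append(((v1, 1), (v2, 1)))
            lifted.append(((v1, 2), (v2, 2)))
        else:
            # Sheets are swapped across a domain wall:
            # e^1 joins v1^1--v2^2 and e^2 joins v1^2--v2^1.
            lifted.append(((v1, 1), (v2, 2)))
            lifted.append(((v1, 2), (v2, 1)))
    return lifted

# One ordinary edge and one edge crossing a wall:
print(lift_edges([("a", "b"), ("b", "c")], [False, True]))
\end{verbatim}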
Every 3-valent vertex $v\in\Gamma$, with
incident face $f\in\Gamma$ whose stabilizer in $C$ supports a $Y$ operator,
is covered by a single edge in $\Double(\Gamma,C)$
with vertices $v^1,v^2$.
(Dimension 2)
Each face in $\Double(\Gamma,C)$ is constructed by lifting closed paths
$\gamma$ around the edges of a face $f\in\Gamma$.
The lifted edges
in these lifted paths then form the edges of a face.
When the path $\gamma$ encounters a genon at $v$, the edge between
$v^1,v^2$ is included in the lifted face, see fig:cover-genon-3.
If the parity of the domain walls around $f$ is even there will
be two lifted faces $f^1,f^2$,
coming from two possible lifts of $\gamma$,
otherwise there is only one lifted face $f^1$ which comes from
the unique lift of $\gamma$.
Every vertex is covered by a pair of vertices.
Every edge is covered by a pair of edges,
with endpoint vertices swapped across domain walls.
A trivalent vertex has an associated branch point
at the other end of a “half-edge”;
this half-edge becomes a proper edge in the double cover.
We show the lift of the domain wall in purple.
The lifted faces are bicoloured green and blue.
The double cover of a genon graph $\Gamma$
relative to a code $C$ is constructed
dimension-wise, starting with vertices, then edges, then faces.
Given a genon code $C$ on $\Gamma$ we say that $C$ is
clean when the stabilizers around 4-valent vertices support
only $X$-type and $Z$-type operators.
In this sense, there are no unnecessary $Y$ operators.
Given a clean genon code $C$ on $\Gamma$,
then $\Double(C)$ is a CSS genon code on $\Double(\Gamma,C)$.
Given a clean genon code $C$ on $\Gamma$,
then $\Double(\Gamma,C)$ is bicolourable and supports two CSS
genon codes, one of which is $\Double(C)$ and the other its dual.
Given a genon code $C$ on $\Gamma$,
with parameters $[[n,k,d]]$ then
$[[4,2,2]]\otimes_{\tau} \Double(C)$ is a self-dual $[[4n,2k,\ge d]]$ code.
Given a genon code $C$ on $\Gamma$,
then $\Gamma$ is bicolourable iff
$[[4,2,2]]\otimes_{\tau} \Double(C)$ is a colour code.
§ EXAMPLE GENON CODES AND PROTOCOLS
§.§ Genus zero
For this example,
the genon graph $\Gamma$ is a tetrahedron inscribed on a sphere:
there are four qubits and four face stabilizers,
see fig:cover-412.
A nice choice for $C$ is given by
the redundant stabilizer group generators:
\langle XYZI, IXYZ, ZIXY, YZIX \rangle.
Evidently, this code has a $\Integer/4$ cyclic symmetry
$1\mapsto 2\mapsto 3 \mapsto 4\mapsto 1$,
whose action on the logical operators
L_X = ZXII, \ L_Z = IZXI
is a logical Hadamard gate:
\begin{align*}
L_X & \mapsto IZXI = L_Z \\
L_Z & \mapsto IIZX = L_X\cdot(ZIXY)\cdot(IXYZ).
\end{align*}
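This calculation can be checked mechanically by tracking Pauli strings
as symplectic vectors over $\Field_2$, ignoring global phases. A minimal
Python sketch (our own bookkeeping, not code from this paper):
\begin{verbatim}
import numpy as np

# Pauli string -> (x|z) symplectic vector over F_2, phases ignored.
SYMP = {"I": (0, 0), "X": (1, 0), "Z": (0, 1), "Y": (1, 1)}

def vec(p):
    xs, zs = zip(*(SYMP[c] for c in p))
    return np.array(xs + zs, dtype=np.uint8)

def permute(v, perm, n=4):
    # perm[i] is where qubit i is sent; apply to X and Z halves.
    out = np.zeros_like(v)
    for i, j in enumerate(perm):
        out[j], out[j + n] = v[i], v[i + n]
    return out

cycle = [1, 2, 3, 0]          # 1 -> 2 -> 3 -> 4 -> 1, zero-indexed
LX, LZ = vec("ZXII"), vec("IZXI")
S1, S2 = vec("ZIXY"), vec("IXYZ")

assert (permute(LX, cycle) == LZ).all()                  # L_X -> L_Z
assert (permute(LZ, cycle) == (LX + S1 + S2) % 2).all()  # L_Z -> L_X mod stabilizers
print("cyclic shift acts as logical Hadamard (up to phase)")
\end{verbatim}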
Moreover, it turns out we can perform any
of the $4!=24$ permutations on the physical qubits
followed by local Clifford gates, and these
yield all the single qubit logical Clifford gates for this code.
We tabulate the complete protocol:
\begin{array}{c|c|c}
\text{permutation} & \text{local Clifford gate} & \text{logical gate} \\
\hline
(1, 2, 3, 4) & IIII & I \\
(1, 2, 4, 3) & \bS(\bH\cdot \bS\cdot \bH)(\bS\cdot \bH)(\bH\cdot \bS) & (\bH\cdot \bS\cdot \bH) \\
(1, 3, 2, 4) & (\bH\cdot \bS\cdot \bH)(\bS\cdot \bH)(\bH\cdot \bS)\bS & \bS \\
(1, 3, 4, 2) & (\bH\cdot \bS)\bS(\bH\cdot \bS\cdot \bH)(\bS\cdot \bH) & (\bH\cdot \bS) \\
(1, 4, 2, 3) & (\bS\cdot \bH)(\bH\cdot \bS)\bS(\bH\cdot \bS\cdot \bH) & (\bS\cdot \bH) \\
(1, 4, 3, 2) & \bH\bH\bH\bH & \bH \\
(2, 1, 3, 4) & (\bS\cdot \bH)(\bH\cdot \bS)\bS(\bH\cdot \bS\cdot \bH) & (\bH\cdot \bS\cdot \bH) \\
(2, 1, 4, 3) & \bH\bH\bH\bH & I \\
(2, 3, 1, 4) & \bS(\bH\cdot \bS\cdot \bH)(\bS\cdot \bH)(\bH\cdot \bS) & (\bS\cdot \bH) \\
(2, 3, 4, 1) & IIII & \bH \\
(2, 4, 1, 3) & (\bH\cdot \bS)\bS(\bH\cdot \bS\cdot \bH)(\bS\cdot \bH) & \bS \\
(2, 4, 3, 1) & (\bH\cdot \bS\cdot \bH)(\bS\cdot \bH)(\bH\cdot \bS)\bS & (\bH\cdot \bS) \\
(3, 1, 2, 4) & (\bH\cdot \bS)\bS(\bH\cdot \bS\cdot \bH)(\bS\cdot \bH) & (\bH\cdot \bS) \\
(3, 1, 4, 2) & (\bH\cdot \bS\cdot \bH)(\bS\cdot \bH)(\bH\cdot \bS)\bS & \bS \\
(3, 2, 1, 4) & \bH\bH\bH\bH & \bH \\
(3, 2, 4, 1) & (\bS\cdot \bH)(\bH\cdot \bS)\bS(\bH\cdot \bS\cdot \bH) & (\bS\cdot \bH) \\
(3, 4, 1, 2) & IIII & I \\
(3, 4, 2, 1) & \bS(\bH\cdot \bS\cdot \bH)(\bS\cdot \bH)(\bH\cdot \bS) & (\bH\cdot \bS\cdot \bH) \\
(4, 1, 2, 3) & IIII & \bH \\
(4, 1, 3, 2) & \bS(\bH\cdot \bS\cdot \bH)(\bS\cdot \bH)(\bH\cdot \bS) & (\bS\cdot \bH) \\
(4, 2, 1, 3) & (\bH\cdot \bS\cdot \bH)(\bS\cdot \bH)(\bH\cdot \bS)\bS & (\bH\cdot \bS) \\
(4, 2, 3, 1) & (\bH\cdot \bS)\bS(\bH\cdot \bS\cdot \bH)(\bS\cdot \bH) & \bS \\
(4, 3, 1, 2) & (\bS\cdot \bH)(\bH\cdot \bS)\bS(\bH\cdot \bS\cdot \bH) & (\bH\cdot \bS\cdot \bH) \\
(4, 3, 2, 1) & \bH\bH\bH\bH & I \\
\end{array}
Using this protocol we can lift all of these
gates to logical Clifford gates on the $[[8,2,2]]$ code
by Theorem <ref>.
We see the $(1,4,3,2)$ permutation for logical $\bH$ and the
$(1,3,2,4)$ for logical $\bS$,
as well as $(2,1,4,3)$ and $(3,4,1,2)$ for logical $I$,
agree with the anyon calculations in <ref>.
The two logical gates $\bH,\bS$ generate the whole single qubit Clifford group
and will be used in the experiments <ref> below.
We also note this $[[4,1,2]]$ code is local Clifford equivalent to the
$[[4,1,2]]$ triangular colour code presented in [24], Section 5.2.
Surface codes, such as the $[[5,1,2]]$ code,
are genon codes on a sphere, fig:genus-zero (i).
The missing external stabilizer forms
the back of the sphere, and contains the domain walls.
In general, any surface code with a single boundary component
forms a genon code in this way.
As a next example we take Landahl's jaunty code [26].
This is a $[[14,3,3]]$ code on a rhombic dodecahedron inscribed on a sphere.
Below we tabulate some of these examples and their symplectic doubles.
\begin{array}{c|c|c|c|c}
\text{base code} & \text{genons} & \text{see Fig.} & \text{symplectic double} & \text{genus} \\
\hline
[[4,1,2]] & m=4 & \text{<ref>} & [[8,2,2]] & g=1 \\
[[5,1,2]] & m=4 & \text{<ref> (i)\&(ii)} & [[10,2,3]] & g=1 \\
[[6,2,2]] & m=6 & \text{<ref> (iii)} & [[12,4,2]] & g=2 \\
[[14,3,3]] & m=8 & \text{<ref> (iv)\&(v)} & [[28,6,3]] & g=3 \\
\end{array}
We take a genon graph to be a tetrahedron inscribed on a sphere.
The same graph with one face splayed out to infinity. Qubits
are numbered in red 1-4.
The double cover, with qubits numbered as to which qubit
is covered in the base $[[4,1,2]]$ code.
The covering branch points (4 of them) and domain walls are shown in purple.
A genus zero $[[4,1,2]]$ genon code is double covered by
an $[[8,2,2]]$ toric code.
This is the well-known $[[5,1,2]]$ surface code.
Another $[[5,1,2]]$ code on the same graph.
This is a non-CSS code, local Clifford equivalent to (i).
This is a $[[6,2,2]]$ code.
Qubits are placed on the vertices of a triangular
prism. There are two triangular faces and three square faces.
Landahl's jaunty $[[14,3,3]]$ code
We show this in splayed view with one qubit stretched
out to infinity.
A local Clifford equivalent $[[14,3,3]]$ code
is in a clean configuration, which means the
valence-4 vertices see only $X$ and $Z$ operators,
and ensures the doubled code is a genon code, Thm. <ref>.
Various genon codes that have genus zero.
§.§ Genus one
A $[[5,1,3]]$ code.
A $[[10,2,3]]$ code.
A $[[13,1,5]]$ code.
(i)-(iii): genon graphs built from quotients
$\Integer[i]/\langle a+bi\rangle$ and logical string operators.
We marked the origin as well as the Gaussian integer $a+bi$.
(iv)-(vi): the $XZZX$ code and corresponding domain walls.
(vii)-(ix): local Clifford equivalence can remove (some) domain walls.
We parameterize these by two integers $(a,b)$.
We view these periodic lattices
as quotients of the Gaussian integers: $\Integer[i]/\langle a + bi\rangle.$
See fig:xzzx.
The resulting code
has parameters $[[n,k,d]]$ with
n = a^2 + b^2, \ \
k = \left\{\begin{array}{ll}1 & \text{if $n$ odd} \\ 2 & \text{if $n$ even}\end{array}\right.
,\ \
d = \left\{\begin{array}{ll}a+b & \text{if $n$ odd} \\ \max(a,b) & \text{if $n$ even}\end{array}\right.
See [33], Theorem 3.9,
where a family of codes called the cyclic toric codes
is defined when $\gcd(a,b)=1$.
We make a table of some of these:
\begin{array}{r|rrrrrrrr}
& a=0 & a=1 & a=2 & a=3 & a=4 & a=5 & a=6 & a=7 \\
\hline
b=2 &[[4, 2, 2]] & [[5, 1, 3]] &[[8, 2, 2]] & & & & & \\
b=3 &[[9, 1, 3]] & [[10, 2, 3]] &[[13, 1, 5]] &[[18, 2, 3]] & & & & \\
b=4 &[[16, 2, 4]] & [[17, 1, 5]] &[[20, 2, 4]] &[[25, 1, 7]] &[[32, 2, 4]] & & & \\
b=5 &[[25, 1, 5]] & [[26, 2, 5]] &[[29, 1, 7]] &[[34, 2, 5]] &[[41, 1, 9]] &[[50, 2, 5]] & & \\
b=6 &[[36, 2, 6]] & [[37, 1, 7]] &[[40, 2, 6]] &[[45, 1, 9]] &[[52, 2, 6]] &[[61, 1, 11]] &[[72, 2, 6]]& \\
b=7 &[[49, 1, 7]] & [[50, 2, 7]] &[[53, 1, 9]] &[[58, 2, 7]] &[[65, 1, 11]]& [[74, 2, 7]] &[[85,1,13]] & [[98,2,7]] \\
\end{array}
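The table above can be regenerated directly from these parameter
formulas; a minimal Python sketch (our own illustration):
\begin{verbatim}
def cyclic_toric_params(a, b):
    # [[n, k, d]] from the formulas above.
    n = a * a + b * b
    k = 1 if n % 2 else 2
    d = a + b if n % 2 else max(a, b)
    return n, k, d

for b in range(2, 8):
    row = [f"[[{n},{k},{d}]]" for a in range(0, b + 1)
           for n, k, d in [cyclic_toric_params(a, b)]]
    print(f"b={b}:", " ".join(row))
\end{verbatim}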
The genus zero base code with parameters $[[5,1,2]]$.
The $[[10,2,3]]$ double cover.
The genus one base code is the $[[5,1,3]]$ code.
The $[[10,2,3]]$ double cover.
On the $[[10,2,3]]$ code
there is a total of six $ZX$-dualities satisfying the requirements of
Theorem <ref>.
Five of these correspond to the genus zero base code, and the other to the
genus one base code.
Here we show an example of a genus zero base code (i) as covered by (ii),
as well as a genus one code (iii) as covered by (iv).
The qubits are enumerated in each base, and these numbers are lifted into the
respective cover.
So far all these genus one codes have no genons.
In fig:sixgenon we show a $[[12,4,3]]$ genus one code with six genons.
This genus one code has six genons and parameters $[[12,4,3]]$.
§ EXPERIMENTAL RESULTS
The permutations required for implementing genon protocols
and the fault-tolerant gates resulting from
the lifting theorem can be efficiently realized on hardware with high connectivity.
In architectures with fixed qubit structures and thus restricted connectivity, qubit permutations are realized through circuit-level swapping, moving information between two separate qubits by performing a series of local swaps of adjacent qubits between them, thus increasing the overall circuit depth.
In systems with arbitrary connectivity, qubit permutations can be realized through a simple "relabelling" of the physical qubits.
Quantinuum's trapped-ion H1-1 device, based on the quantum CCD (QCCD) architecture [36, 32], realizes all-to-all connectivity by using ion-transport operations.
Ions are able to physically move around the linear trap, physically swapping locations as needed.
As such, the H1-1 realizes fault-tolerant genon protocols with little overhead, incurring only noise from transport as no circuit-level swapping is required.
The H1-1 device uses 20 $^{171}$Yb$^+$ ions for physical qubits and 20 $^{138}$Ba$^+$ ions for sympathetic cooling, totalling 40 physical ions in the trap. Gates are executed via stimulated Raman transitions implemented with off-resonant lasers directed in five distinct regions.
During the course of these experiments, the average physical infidelities for the single-qubit and two-qubit gates, as measured by randomized benchmarking [30] experiments averaged over all five gate zones, were $3.2(5)\times 10^{-5}$ and $9.2(5) \times 10^{-4}$ respectively.
State preparation and measurement errors were also measured to be $2.7(1) \times 10^{-3}$.
Idling and transport errors are more difficult to characterize in the QCCD architecture, as different circuits require different transport sequences.
Indeed, depending on the circuit, certain permutations may not require additional transport.
This gives the opportunity, but not an a priori guarantee, for the compiler to reduce transport costs to next to nothing, potentially allowing for very low error-rate overheads. More work needs to be done to study transport overheads for these sorts of logical gates in practice.
We leave it to future work to characterize how such transport impacts the logical gate fidelity of the protocol realized. For further description of the H1-1 hardware benchmarks and specifications, see [32, 2].
In this work, we realized three different experimental implementations of the theory work outlined above on the H1-1 QCCD device.
First, we realized genon braiding in the $[[4,1,2]]$ code, done by performing local Cliffords and qubit permutations. Such permutations are physically realized through ion transport.
We demonstrate the ability to produce the full Clifford group through the demonstration of logical randomized benchmarking, thus showcasing the power of the genon braiding technique.
Next, by lifting the logical $S$ gate from the $[[4,1,2]]$ code, we benchmark a logical $CX$ gate on the $[[8,2,2]]$ code, the double cover of the $[[4,1,2]]$. This logical gate, involving qubit permutations, is again efficiently realized through the qubit relabelling enabled by ion-transport primitives.
Finally, we realize another two-qubit logical gate, obtained by lifting the transversal $SH$ gate on the $[[5,1,3]]$ code. We benchmark this gate in a similar manner to the $CX$ on the $[[8,2,2]]$, but this time with proper decoding (rather than post-selection).
§.§ The [[4,1,2]] protocol
Unitary encoding circuit.
Input qubits are numbered $1-4$.
(De-)stabilizer inputs are qubits $1-3$,
and the logical input qubit is qubit $4$.
Preparing a logical $\ket{0}$ state.
Logical $H$ gate from swapping genons on qubits 2 and 4.
Logical $S$ gate from swapping genons on qubits 2 and 3.
The [[4,1,2]] protocol implementing single qubit Clifford gates by braiding (swapping) genons.
The survival probability from the randomized benchmarking protocol on the $[[4,1,2]]$ code, as realized on the H1-1 machine. Circuit depths 16 and 256 were run with 1600 shots each, and circuit depth 512 was run with 800 shots. We note this graph appears to show a quadratic decay, as opposed to the linear decay more commonly seen in randomized benchmarking.
As a demonstration of genon braiding,
we ran logical randomized benchmarking [30, 14, 31],
on the $[[4,1,2]]$ genon code
using the circuits shown in Fig. <ref>.
See Appendix <ref> for example QASM.
The protocol proceeds in 3 steps:
(Step 1)
We begin by preparing the logical $\ket{0}$ state using
the circuit in fig:412-protocol(ii).
(Step 2)
We apply a random sequence of $N$ logical Clifford gates.
There are 192 Clifford gates, or 24 up to phase, to choose from.
This group is generated by the $H$ and $S$ gates,
which at the physical level is
realized through concatenation of
the circuits in Fig. <ref>(iii) and (iv).
We also use the redundant Clifford generators $X$ and $Z$,
coming from the logical Pauli operators of the $[[4,1,2]]$ code.
Each resulting concatenation is compiled for the H1-1 device
into at most four single qubit Clifford gates;
qubit permutations are implemented via qubit relabelling, realized by ion transport.
(Step 3)
We apply the inverse of the encoding circuit
in fig:412-protocol(i). This gives
syndrome qubits 1, 2 and 3 and the (unencoded) logical qubit 4.
We apply the inverse of the Clifford operation in (Step 2)
on qubit 4, and then measure all
qubits in the $\ket{0,1}$ basis.
We treat this as an error detecting code,
so the result of the measurement is discarded if
any of the syndrome bits are non-zero.
Otherwise we record survival if the logical bit is zero.
We ran three different random sequences of lengths $N = 16,\ 256$ and 512.
Using Steps 1-3 above we sample 16 circuits
for each of $N=16$ and $N=256$, and for $N=512$ we sampled 8 circuits.
Each circuit is then executed for 100 shots.
The discard counts were 28/1600, 90/1600, and 56/800 respectively.
The survival probability
is plotted in Fig. <ref>.
In general, it can be hard to extract the logical fidelity from the randomized benchmarking protocol without interleaving QEC cycles [31].
From the randomized benchmarking protocol, we would expect to see a linear decay in the survival probability resulting from accumulation of errors [30].
However, we observe that the curve seen in Fig. <ref>, matches a quadratic fit instead of a linear one.
Further investigation is needed to conclude whether such a logical randomized benchmarking experiment, without interleaving QEC cycles, is sufficient to extract a reliable logical error rate and is outside the scope of this work.
Here we use the randomized benchmarking protocol as a proof of principle that genon braiding on the $[[4,1,2]]$ code can be used to realize the full Clifford group and is easily implemented on the H1-1 device.
§.§ The [[8,2,2]] protocol
In this experiment we benchmark a logical $CX$ gate
on the $[[8,2,2]]$ toric code.
This code is the double cover of the $[[4,1,2]]$ base code,
and the logical $CX$ is the lift of the
logical $S$ gate on the base code
given in fig:412-protocol (iv),
using Theorem <ref>.
The fibers over the base qubits $1,2,3,4$ are
$\{1,5\},\{2,6\},\{3,7\},\{4,8\}$ respectively,
see fig:822-protocol.
To benchmark this $CX$ gate we run $4+8=12$ circuits, comprising
the following:
* State preparation and measurement (SPAM) circuits, for each of logical
$\ket{00}, \ket{11}, \ket{++},$ and $\ket{--}.$
* The action of the logical $CX$ on the two qubit computational logical states
$\ket{00}, \ket{01}, \ket{10},$ and $\ket{11}$ as well as
the two qubit phase logical states
$\ket{++}, \ket{+-}, \ket{-+},$ and $\ket{--}$.
At the end of the circuit we measure the
qubits in the
$\ket{0,1}^{\tensor 8}$ basis for the computational logical states,
or the
$\ket{+,-}^{\tensor 8}$ basis for the phase logical states.
As this is an error detecting code,
if the measurement fails to satisfy the $Z$-checks, or $X$-checks
of the $[[8,2,2]]$ code respectively, the result is discarded.
Otherwise, an error is recorded when the measurement fails to lie within
the $X$ or $Z$-type stabilizer group of the $[[8,2,2]]$ code respectively.
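The accept/error bookkeeping just described is a simple parity
computation. Here is a minimal Python sketch of the post-selection rule
(the function and the toy two-qubit example are our own illustration,
not the experiment's software):
\begin{verbatim}
import numpy as np

def classify_shot(bits, checks, logicals, expected):
    """bits: measured outcome; checks, logicals: F_2 parity-check rows.
    Discard if any check fires; otherwise compare logical parities."""
    if any(int(row @ bits) % 2 for row in checks):
        return "discard"
    measured = [int(row @ bits) % 2 for row in logicals]
    return "ok" if measured == expected else "error"

# Toy two-qubit example: one check Z1Z2, one logical Z1, expecting 0.
checks = [np.array([1, 1])]
logicals = [np.array([1, 0])]
print(classify_shot(np.array([0, 0]), checks, logicals, [0]))  # ok
print(classify_shot(np.array([1, 0]), checks, logicals, [0]))  # discard
\end{verbatim}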
Each of these circuits is run for 5000 shots.
We also simulate these circuits for 50000 shots, with the
error count then divided by 10.
These results are tabulated in Table <ref>
with the logical fidelities calculated in Table <ref>.
The (simulated) overall accept probability was $96\pm 1\%$.
From the experiments, it appears that the $X$ basis is more robust to noise than the $Z$ basis, having no logical errors in SPAM or the $CX$ experiments. We attribute the difference between the two bases to the differing circuit depths of the encoding circuits seen in Fig. <ref>. We note that these encoding circuits were not tested for fault-tolerant properties, meaning it was not confirmed that higher weight errors do not propagate through. Further analysis is needed to construct shallower, fault-tolerant encoding circuits and a general encoding protocol, but we leave this to future work.
\begin{array}{c|c|c|c}
\text{logical operation} & \text{logical state} & \text{experimental errors} & \text{simulated errors} \\
\hline
I & \ket{00} & 3 & 4.8 \\
I & \ket{11} & 4 & 6.9 \\
CX_{1,0} & \ket{00} & 0 & 4.3 \\
CX_{1,0} & \ket{01} & 2 & 4.7 \\
CX_{1,0} & \ket{10} & 2 & 3.6 \\
CX_{1,0} & \ket{11} & 3 & 4.6 \\
\end{array}
\begin{array}{c|c|c|c}
\text{logical operation} & \text{logical state} & \text{experimental errors} & \text{simulated errors} \\
\hline
I & \ket{++} & 0 & 0.5 \\
I & \ket{--} & 0 & 0.6 \\
CX_{1,0} & \ket{++} & 0 & 0.7 \\
CX_{1,0} & \ket{+-} & 0 & 0.5 \\
CX_{1,0} & \ket{-+} & 0 & 0.3 \\
CX_{1,0} & \ket{--} & 0 & 1.0 \\
\end{array}
The number of logical errors found for the $[[8,2,2]]$ code simulations and experiments. Here the logical operation $I$ is meant to imply the state preparation and measurement errors seen while preparing the individual two-qubit logical states. A series of experiments were also performed implementing the logical $CX$ gate between the two logical qubits contained in the $[[8,2,2]]$ code block.
\begin{array}{l|c|c|c}
 & \text{$X$ basis} & \text{$Z$ basis} & \text{Average} \\
\hline
SPAM_{exp} & 1.0000_{-2}^{+0} & 0.9993_{-3}^{+3} & 0.9996_{-2}^{+2} \\
SPAM_{sim} & 0.9999_{-3}^{+3} & 0.9988_{-1}^{+1} & 0.9993_{-1}^{+1} \\
CX_{exp} & 1.0000_{-2}^{+0} & 0.9996_{-2}^{+2} & 0.9998_{-1}^{+1} \\
CX_{sim} & 0.99987_{-4}^{+4} & 0.9991_{-1}^{+1} & 0.9995_{-3}^{+3} \\
\end{array}
The logical fidelities for state preparation and measurement (SPAM) as well as the $CX$ implementation for the $[[8,2,2]]$ code. We note that the $X$ basis appears to perform better than the $Z$ basis, which we attribute to the depth of the encoding circuits used.
Preparing logical $\ket{00}$.
Preparing logical $\ket{++}$.
Logical $CX_{1,0}$ gate, with input qubits numbered $1-8$.
The [[8,2,2]] Dehn twist protocol for benchmarking a logical $CX$ gate. We note these encoding circuits have not been tested for fault-tolerant properties, and may allow higher-weight physical errors to propagate to the logical state.
§.§ The [[10,2,3]] protocol
Here we benchmark the two qubit logical gate:
g=CX_{0,1}\cdot SWAP = SWAP\cdot CX_{1,0}.
This is found as the lift of the order three (up to phase)
transversal Clifford gate $SH$ on the $[[5,1,3]]$ base code, using
Theorem <ref>.
See fig:1023-protocol.
We follow a similar benchmarking procedure as for the $[[8,2,2]]$
protocol above with 12 different circuits,
except that instead of discarding measurements that
present a non-zero syndrome we now infer a logical correction operator
from the syndrome data
using a decoder algorithm.
This decoder accepts one of $2^4=16$ possible syndromes and outputs
the most likely of the $2^2=4$ logical correction operators.
We pre-calculate all of these using simulated data on $10^5$ shots,
and generate a lookup table.
Note that we build a separate lookup table for each of the 16 circuits;
in this way the decoder is aware of the specific noise model of that circuit.
This improves the performance of the benchmark substantially.
The shots where the decoder fails to give the correct logical
operation are then recorded as errors.
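Such a lookup-table decoder is straightforward to build from simulated
shots. A minimal Python sketch (the helper names and toy data are our
own illustration, not the experiment's software):
\begin{verbatim}
from collections import Counter, defaultdict

def build_lookup_table(simulated_shots):
    """simulated_shots: iterable of (syndrome, logical_error) pairs,
    where syndrome is a 4-bit tuple and logical_error one of 4 labels."""
    counts = defaultdict(Counter)
    for syndrome, logical in simulated_shots:
        counts[syndrome][logical] += 1
    # For each syndrome, decode to the most likely logical correction.
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Toy usage with made-up simulated data:
shots = [((0, 0, 0, 0), "II")] * 90 + [((0, 0, 0, 0), "XI")] * 10 \
      + [((1, 0, 1, 0), "XI")] * 7 + [((1, 0, 1, 0), "II")] * 3
table = build_lookup_table(shots)
print(table[(0, 0, 0, 0)], table[(1, 0, 1, 0)])  # II XI
\end{verbatim}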
We tabulate experimental results in Table <ref> and fidelities of these operations in Table <ref>.
Each circuit is run for 2000 shots.
We also simulate each circuit for
$2\times 10^5$ shots and then normalize to $2000$ shots.
Similar to the $[[8,2,2]]$ results, we see a difference between the $X$ basis and $Z$ basis, with the $X$ performing a bit better. We again attribute this to the circuit depth of the encoding circuits as seen in Fig. <ref>.
\begin{array}{c|c|c|c}
\text{logical operation} & \text{logical state} & \text{experimental errors} & \text{simulated errors} \\
\hline
I & \ket{00} & 4 & 6.68 \\
I & \ket{11} & 2 & 7.14 \\
g & \ket{00} & 6 & 14.18 \\
g & \ket{01} & 7 & 13.84 \\
g & \ket{10} & 4 & 14.29 \\
g & \ket{11} & 2 & 14.97 \\
\end{array}
\begin{array}{c|c|c|c}
\text{logical operation} & \text{logical state} & \text{experimental errors} & \text{simulated errors} \\
\hline
I & \ket{++} & 1 & 3.05 \\
I & \ket{--} & 3 & 3.16 \\
g & \ket{++} & 8 & 12.30 \\
g & \ket{+-} & 2 & 12.65 \\
g & \ket{-+} & 3 & 12.58 \\
g & \ket{--} & 4 & 13.22 \\
\end{array}
The number of logical errors found for the $[[10,2,3]]$ code simulations and experiments. Here the logical operation $I$ is meant to imply the state preparation and measurement errors seen while preparing the individual two-qubit logical states. A series of experiments were also performed implementing the logical $g$ gate between the two logical qubits contained in the $[[10,2,3]]$ code block.
\begin{array}{l|c|c|c}
 & \text{$X$ basis} & \text{$Z$ basis} & \text{Average} \\
\hline
SPAM_{exp} & 0.9990_{-7}^{+7} & 0.9985_{-8}^{+8} & 0.9987_{-7}^{+7} \\
SPAM_{sim} & 0.99844_{-8}^{+8} & 0.9965_{-1}^{+1} & 0.9974_{-1}^{+1} \\
g_{exp} & 0.997_{-1}^{+1} & 0.997_{-1}^{+1} & 0.997_{-1}^{+1} \\
g_{sim} & 0.9936_{-1}^{+1} & 0.9928_{-1}^{+1} & 0.9932_{-1}^{+1} \\
\end{array}
The logical fidelities for state preparation and measurement (SPAM) as well as the $g$ implementation. We note that the $X$ basis appears to perform better than the $Z$ basis, which we attribute to the depth of the encoding circuits used.
The qubit indices.
Preparing logical $\ket{00}$.
Preparing logical $\ket{++}$.
A logical $CZ$ gate.
A logical $CX_{0,1}\cdot SWAP$ gate.
The [[10,2,3]] protocol is derived from the $[[5,1,3]]$ base code.
§ CONCLUSION
As the field of quantum error correction advances, significant
progress has been made in the design of
quantum low-density parity check codes
with favorable properties. However,
many of these codes are limited by their scarcity of
fault-tolerant logical operations. In this work we seek
to co-design quantum codes that have both reasonable
scaling properties and fault-tolerant logical gates
beyond the Pauli group.
Our study explores the use of covering spaces, a concept
that underlies mathematical fields including
Galois theory, number theory, and algebraic topology.
Specifically, we focus on double covers within the realms
of symplectic geometry, quantum codes, Riemann surfaces,
and topological quantum codes. This multidisciplinary
approach underscores a broader theoretical idea:
two-dimensional topologically ordered systems should
exhibit a correspondence between domain walls and covering
spaces, particularly in the context of abelian domain walls.
A significant contribution of our work is the explicit
protocol we develop for braiding genons and performing
Dehn twists. This protocol leverages qubit permutations
and constant depth Clifford circuits, which are efficiently
realizable on quantum computers with high connectivity,
such as Quantinuum’s H1-1. The practical implementation
of these gates results in robust logical fidelity, showcasing
the experimental viability of our approach.
Non-Clifford gates are essential for achieving universal fault-tolerant
quantum computation. While Clifford gates alone form
a useful set for many quantum operations, they are insufficient
for universal quantum computing. Our construction lays
the groundwork for integrating non-Clifford gates into
the topological code framework, a critical step for universal
fault-tolerant computation. Further research is required
to fully develop and implement non-Clifford gates within
these codes. Nonetheless, the methods and constructions
found in this work appear promising and compatible with
existing approaches, suggesting a viable pathway toward
their realization.
Looking ahead, our findings suggest promising directions for
further exploration. The correspondence between domain
walls and covering spaces observed in two-dimensional
topological systems could extend to three-dimensional
systems. Such systems might exhibit defects whose statistics
enable the generation of non-Clifford gates, pushing
the boundaries of fault-tolerant quantum computation.
Drawing inspiration from number theory, algebraic geometry,
and related fields, we envision the development of more
sophisticated quantum codes and fault-tolerant operations
that could revolutionize quantum computing.
In conclusion, our work lays the groundwork for a unified
theory that bridges diverse mathematical disciplines
and advances the design of quantum error-correcting codes.
By integrating twists, defects, and higher-dimensional
topological structures, we open new pathways toward achieving
versatile and scalable quantum computation with enhanced
fault tolerance.
The authors thank Thomas Scruby, Michael Vasmer, Karl Mayer,
Shival Dasu, Dan Gresh, and Pablo Andres-Martinez
for useful conversations and feedback on this work.
[1]
M. Backens, S. Perdrix, and Q. Wang.
A Simplified Stabilizer ZX-calculus.
Electronic Proceedings in Theoretical Computer Science,
236:1–20, 2016.
[2]
C. Baldwin.
Quantinuum hardware specifications.
[3]
M. Barkeshli, P. Bonderson, M. Cheng, and Z. Wang.
Symmetry fractionalization, defects, and gauging of topological phases.
Physical Review B, 100(11):115147, 2019.
[4]
M. Barkeshli, C.-M. Jian, and X.-L. Qi.
Twist defects and projective non-abelian braiding statistics.
Physical Review B, 87(4):045130, 2013.
[5]
H. Bombín.
Topological order with a twist: Ising anyons from an abelian model.
Physical review letters, 105(3):030403, 2010.
[6]
H. Bombin, C. Dawson, R. V. Mishmash, N. Nickerson, F. Pastawski, and
S. Roberts.
Logical blocks for fault-tolerant topological quantum computation.
PRX Quantum, 4(2):020303, 2023.
[7]
J. P. Bonilla Ataides, D. K. Tuckett, S. D. Bartlett, S. T. Flammia, and B. J. Brown.
The XZZX surface code.
Nature Communications, 12(1):2172, 2021.
[8]
N. P. Breuckmann and S. Burton.
Fold-Transversal Clifford Gates for Quantum Codes.
arXiv preprint arXiv:2202.06647, 2022.
[9]
N. P. Breuckmann, C. Vuillot, E. Campbell, A. Krishna, and B. M. Terhal.
Hyperbolic and semi-hyperbolic surface codes for quantum storage.
Quantum Science and Technology, 2(3):035007, 2017.
[10]
B. J. Brown, K. Laubscher, M. S. Kesselring, and J. R. Wootton.
Poking holes and cutting corners to achieve clifford gates with the
surface code.
Physical Review X, 7(2):021029, 2017.
[11]
A. R. Calderbank, E. M. Rains, P. W. Shor, and N. J. A. Sloane.
Quantum error correction via codes over GF(4).
IEEE Transactions on Information Theory, 44(4):1369–1387, 1998.
[12]
N. Carqueville, M. Del Zotto, and I. Runkel.
Topological defects.
arXiv preprint arXiv:2311.02449, 2023.
[13]
B. Coecke and R. Duncan.
Interacting quantum observables: categorical algebra and diagrammatics.
New Journal of Physics, 13(4):043016, 2011.
[14]
J. Combes, C. Granade, C. Ferrie, and S. T. Flammia.
Logical randomized benchmarking.
arXiv preprint arXiv:1702.03688, 2017.
[15]
B. Criger and B. Terhal.
Noise thresholds for the [[4, 2, 2]]-concatenated toric code.
Quantum Information and Computation, 16(15-16):1261–1281, 2016.
[16]
B. Farb and D. Margalit.
A primer on mapping class groups (pms-49), volume 41.
Princeton university press, 2011.
[17]
E. Girondo and G. González-Diez.
Introduction to compact Riemann surfaces and dessins d'enfants,
volume 79.
Cambridge University Press, 2012.
[18]
D. Gottesman.
Stabilizer codes and quantum error correction.
PhD thesis, California Institute of Technology, 1997.
[19]
M. G. Gowda and P. K. Sarvepalli.
Quantum computation with generalized dislocation codes.
Physical Review A, 102(4):042616, 2020.
[20]
S. Gurevich and R. Hadani.
The Weil representation in characteristic two.
Advances in Mathematics, 230(3):894–926, 2012.
[21]
J. Haah.
Algebraic methods for quantum codes on lattices.
Revista Colombiana de Matemáticas, 50(2):295–345, 2016.
[22]
M. B. Hastings and A. Geller.
Reduced space-time and time costs using dislocation codes and
arbitrary ancillas.
Quantum Information and Computation, 15(11-12):0962–0986, 2015.
[23]
M. Heinrich.
On stabiliser techniques and their application to simulation and
certification of quantum devices.
PhD thesis, Universität zu Köln, 2021.
[24]
M. S. Kesselring, F. Pastawski, J. Eisert, and B. J. Brown.
The boundaries and twist defects of the color code and their
applications to topological quantum computation.
Quantum, 2:101, 2018.
[25]
A. Kissinger.
Phase-free ZX diagrams are CSS codes (... or how to graphically grok
the surface code).
arXiv preprint arXiv:2204.14038, 2022.
[26]
A. J. Landahl.
The surface code on the rhombic dodecahedron.
arXiv preprint arXiv:2010.06628, 2020.
[27]
A. Lavasani and M. Barkeshli.
Low overhead clifford gates from joint measurements in surface,
color, and hyperbolic codes.
Physical Review A, 98(5):052319, 2018.
[28]
A. Lavasani, G. Zhu, and M. Barkeshli.
Universal logical gates with constant overhead: instantaneous Dehn
twists for hyperbolic quantum codes.
Quantum, 3:180, Aug. 2019.
[29]
M. L. Liu, N. Tantivasadakarn, and V. V. Albert.
Subsystem CSS codes, a tighter stabilizer-to-CSS mapping, and
Goursat's Lemma.
arXiv preprint arXiv:2311.18003, 2023.
[30]
E. Magesan, J. M. Gambetta, and J. Emerson.
Scalable and robust randomized benchmarking of quantum processes.
Phys. Rev. Lett., 106:180504, May 2011.
[31]
K. Mayer, C. Ryan-Anderson, N. Brown, E. Durso-Sabina, C. H. Baldwin, D. Hayes,
J. M. Dreiling, C. Foltz, J. P. Gaebler, T. M. Gatterman, et al.
Benchmarking logical three-qubit quantum fourier transform encoded in
the steane code on a trapped-ion quantum computer.
arXiv preprint arXiv:2404.08616, 2024.
[32]
J. M. Pino, J. M. Dreiling, C. Figgatt, et al.
Demonstration of the trapped-ion quantum CCD computer architecture.
Nature, 592:209–213, 2021. https://doi.org/10.1038/s41586-021-03318-4
[33]
R. Sarkar and T. J. Yoder.
A graph-based formalism for surface codes and twists.
arXiv preprint arXiv:2101.09349, 2021.
[34]
P. Selinger.
Generators and relations for n-qubit clifford operators.
Logical Methods in Computer Science, 11, 2015.
[35]
J. van de Wetering.
Zx-calculus for the working quantum computer scientist.
arXiv preprint arXiv:2012.13966, 2020.
[36]
D. J. Wineland, C. Monroe, W. M. Itano, D. Leibfried, B. E. King, and D. M. Meekhof.
Experimental issues in coherent quantum-state manipulation of trapped
atomic ions.
Journal of Research of the National Institute of Standards and
Technology, 103:259–328, May-June 1998.
[37]
T. J. Yoder and I. H. Kim.
The surface code with a twist.
Quantum, 1:2, 2017.
[38]
G. Zhu, A. Lavasani, and M. Barkeshli.
Instantaneous braids and Dehn twists in topologically ordered states.
Physical Review B, 102(7):075105, 2020.
§ FAULT-TOLERANCE OF FIBER TRANSVERSAL GATES
Given a base code with $m\times 2n$ check matrix $\Parity = \bigl( \Parity_X\ \Parity_Z \bigr)$, the doubled code $\Double(\Parity)$ has a $2m\times 4n$ parity check matrix of the form <ref>, repeated here for convenient reference:
\begin{align}
\Double(\Parity) :=
\begin{pmatrix}
\Parity_X & \Parity_Z & 0 & 0 \\
0 & 0 & \Parity_Z & \Parity_X
\end{pmatrix}.
\end{align}
Given a fault-tolerant gate on quantum code $C$ with parameters $[[n,k,d]]$, the lifted gate on the doubled code $\Double(C)$ is also fault-tolerant to at least distance $d$.
This proof will show a correspondence between the syndromes of faults on the base and lifted codes. If the gate on the base code tolerates the original fault up to weight $t=\lfloor \frac{d-1}{2} \rfloor$, the lifted gate on the doubled code tolerates the transformed fault.
If a gate on the base code is supported on qubit $i$, then in the doubled code the lifted gate has a two-qubit gate supported on qubits $i$ and $i+n$.
Given a fault $f\in \Field_2^{n}\oplus\Field_2^n$ written in block column form:
\begin{align}
f = \begin{pmatrix} f_X \\ f_Z \end{pmatrix},
\end{align}
we define the syndrome of the fault, $S_f \in \Field_2^m$:
\begin{align}\label{syn:base}
S_f := \Parity \Omega_{n} f = \bigl( \Parity_X\ \Parity_Z \bigr) \Omega_{n} \begin{pmatrix} f_X \\ f_Z \end{pmatrix} = \Parity_X f_Z + \Parity_Z f_X.
\end{align}
For the remainder of the proof, we will speak about some single, constant fault $f$.
In the doubled code, the syndrome of a fault $f^\prime\in \Field_2^{2n}\oplus\Field_2^{2n}$ is $\mathbf{S}_{f^\prime} \in \Field_2^{2m}$.
\begin{align}
\mathbf{S}_{f^\prime} = \begin{pmatrix}
\Parity_X & \Parity_Z & 0 & 0 \\
0 & 0 & \Parity_Z & \Parity_X
\end{pmatrix} \Omega_{2n} \begin{pmatrix} f^\prime_X \\ f^\prime_Z \end{pmatrix} = (\Parity_X \Parity_Z)f^\prime_Z \oplus (\Parity_Z \Parity_X)f^\prime_X.
\end{align}
Since the doubled code is a CSS code, we may break the syndrome into $X$ and $Z$ components. The doubled parity check matrix also has equal size $X$ and $Z$ components, so the syndrome may be represented:
\begin{align}
\mathbf{S}_{f^\prime}= \mathbf{S}_{f^\prime}^X \oplus \mathbf{S}_{f^\prime}^Z && \mathbf{S}_{f^\prime}^X , \mathbf{S}_{f^\prime}^Z \in \Field_2^m.
\end{align}
These parts of the syndromes are calculated:
\begin{align}\label{syn:double}
\mathbf{S}_{f^\prime}^X = (\Parity_X \Parity_Z)f^\prime_Z && \mathbf{S}_{f^\prime}^Z = (\Parity_Z \Parity_X)f^\prime_X .
\end{align}
We observe that, using <ref> and <ref>, if $f^\prime_Z = \begin{pmatrix} f_Z \\ f_X \end{pmatrix}$,
then $\mathbf{S}_{f^\prime}^X = S_f$. Note that $w\begin{pmatrix} f_Z \\ f_X \end{pmatrix} = w\begin{pmatrix} f_X \\ f_Z \end{pmatrix}$.
Therefore, if there is a decoder on the base such that it corrects all $\{f|w(f)\leq t\}$, there is also a decoder on the lifted code which corrects all $f^\prime_Z$ of the form $\begin{pmatrix} f_Z \\ f_X \end{pmatrix}$.
In particular, a $t$-fault-tolerant base code corrects $Y$-type Pauli faults of weight less than $t$. $Y$-type errors of this form satisfy $f_Z = f_X$ and $w(f)=w(f_Z)=w(f_X)\leq t$. For clarity, we will represent these symmetric faults as $\begin{pmatrix} f_Y \\ f_Y \end{pmatrix}$. This implies that the doubled code can correct faults of the form $\begin{pmatrix} f_Y \\ f_Y \end{pmatrix}$ where $w(f_Y)\leq t$, or in Pauli notation $Z_i Z_{i+n}$. These are exactly the weight two $Z$-type faults resulting from lifted gates on the doubled code. Also, a $t$-fault-tolerant base code can correct $X$ and $Z$ type faults with block forms $\begin{pmatrix} f_X \\ 0 \end{pmatrix}$ and $\begin{pmatrix} 0 \\ f_Z \end{pmatrix}$ respectively. Thus, the doubled code can correct all faults of the form $\begin{pmatrix} f_Y + f_X \\f_Y + f_Z \end{pmatrix}$ satisfying $w(f_X) + w(f_Y) + w(f_Z) \leq t$, which spans the space of up to $t$ faults on lifted gates.
Since the doubled code is a CSS code, the proof for $X$-type faults is the same, but with the roles of $\Parity_X$ and $\Parity_Z$ reversed.
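As a numeric sanity check of this syndrome correspondence, here is a
minimal Python sketch on a toy one-check, two-qubit base code (the
matrices and fault are arbitrary illustrative choices):
\begin{verbatim}
import numpy as np

def omega(n):
    # Symplectic form: swaps the X and Z halves of a length-2n vector.
    Z = np.zeros((n, n), dtype=np.uint8)
    I = np.eye(n, dtype=np.uint8)
    return np.block([[Z, I], [I, Z]])

# Toy base code: m=1 check, n=2 qubits, H = (H_X | H_Z).
HX = np.array([[1, 0]], dtype=np.uint8)
HZ = np.array([[0, 1]], dtype=np.uint8)
H = np.hstack([HX, HZ])

# A fault f = (f_X; f_Z) on the base code, and its syndrome S_f.
fX = np.array([1, 1], dtype=np.uint8)
fZ = np.array([0, 1], dtype=np.uint8)
S_f = (H @ omega(2) @ np.concatenate([fX, fZ])) % 2   # = HX fZ + HZ fX

# Doubled code: the X-type syndrome of f'_Z = (f_Z; f_X) matches S_f.
HXZ = np.hstack([HX, HZ])            # X-type block (H_X H_Z) of D(H)
S_double_X = (HXZ @ np.concatenate([fZ, fX])) % 2
assert (S_f == S_double_X).all()
print(S_f)  # identical syndromes, as the proof requires
\end{verbatim}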
§ EXAMPLE QASM
This is QASM source for an example of the $[[4,1,2]]$ randomized benchmarking protocol
of length $N=2$.
We show the qubit permutations as P(...) operations
as well as the resulting labels in comments.
For the $[[8,2,2]]$ protocol,
we show example QASM preparing the $\ket{0,1}$ state and
applying the fault-tolerant Clifford gate $CX_{0,1}\cdot SWAP$.
Here is a circuit for the $[[10,2,3]]$ protocol, acting on logical $\ket{+-}$:
The Super-Kamiokande Collaboration
# Search for Cosmic-ray Boosted Sub-GeV Dark Matter using Recoil Protons at
Super-Kamiokande
K. Abe Kamioka Observatory, Institute for Cosmic Ray Research, University of
Tokyo, Kamioka, Gifu 506-1205, Japan Kavli Institute for the Physics and
Mathematics of the Universe (WPI), The University of Tokyo Institutes for
Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583, Japan Y. Hayato
Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo,
Kamioka, Gifu 506-1205, Japan Kavli Institute for the Physics and Mathematics
of the Universe (WPI), The University of Tokyo Institutes for Advanced Study,
University of Tokyo, Kashiwa, Chiba 277-8583, Japan K. Hiraide Kamioka
Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka,
Gifu 506-1205, Japan Kavli Institute for the Physics and Mathematics of the
Universe (WPI), The University of Tokyo Institutes for Advanced Study,
University of Tokyo, Kashiwa, Chiba 277-8583, Japan K. Ieki M. Ikeda
Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo,
Kamioka, Gifu 506-1205, Japan Kavli Institute for the Physics and Mathematics
of the Universe (WPI), The University of Tokyo Institutes for Advanced Study,
University of Tokyo, Kashiwa, Chiba 277-8583, Japan J. Kameda Kamioka
Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka,
Gifu 506-1205, Japan Kavli Institute for the Physics and Mathematics of the
Universe (WPI), The University of Tokyo Institutes for Advanced Study,
University of Tokyo, Kashiwa, Chiba 277-8583, Japan Y. Kanemura R. Kaneshima
Y. Kashiwagi Kamioka Observatory, Institute for Cosmic Ray Research,
University of Tokyo, Kamioka, Gifu 506-1205, Japan Y. Kataoka Kamioka
Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka,
Gifu 506-1205, Japan Kavli Institute for the Physics and Mathematics of the
Universe (WPI), The University of Tokyo Institutes for Advanced Study,
University of Tokyo, Kashiwa, Chiba 277-8583, Japan S. Miki Kamioka
Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka,
Gifu 506-1205, Japan S. Mine Kamioka Observatory, Institute for Cosmic Ray
Research, University of Tokyo, Kamioka, Gifu 506-1205, Japan Department of
Physics and Astronomy, University of California, Irvine, Irvine, CA
92697-4575, USA M. Miura S. Moriyama Kamioka Observatory, Institute for
Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205, Japan Kavli
Institute for the Physics and Mathematics of the Universe (WPI), The
University of Tokyo Institutes for Advanced Study, University of Tokyo,
Kashiwa, Chiba 277-8583, Japan Y. Nakano Kamioka Observatory, Institute for
Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205, Japan M.
Nakahata Kamioka Observatory, Institute for Cosmic Ray Research, University
of Tokyo, Kamioka, Gifu 506-1205, Japan Kavli Institute for the Physics and
Mathematics of the Universe (WPI), The University of Tokyo Institutes for
Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583, Japan S.
Nakayama Kamioka Observatory, Institute for Cosmic Ray Research, University
of Tokyo, Kamioka, Gifu 506-1205, Japan Kavli Institute for the Physics and
Mathematics of the Universe (WPI), The University of Tokyo Institutes for
Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583, Japan Y.
Noguchi K. Okamoto K. Sato Kamioka Observatory, Institute for Cosmic Ray
Research, University of Tokyo, Kamioka, Gifu 506-1205, Japan H. Sekiya
Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo,
Kamioka, Gifu 506-1205, Japan Kavli Institute for the Physics and Mathematics
of the Universe (WPI), The University of Tokyo Institutes for Advanced Study,
University of Tokyo, Kashiwa, Chiba 277-8583, Japan H. Shiba K. Shimizu
Kamioka Observatory, Institute for Cosmic Ray Research, University of Tokyo,
Kamioka, Gifu 506-1205, Japan M. Shiozawa Kamioka Observatory, Institute for
Cosmic Ray Research, University of Tokyo, Kamioka, Gifu 506-1205, Japan Kavli
Institute for the Physics and Mathematics of the Universe (WPI), The
University of Tokyo Institutes for Advanced Study, University of Tokyo,
Kashiwa, Chiba 277-8583, Japan Y. Sonoda Y. Suzuki Kamioka Observatory,
Institute for Cosmic Ray Research, University of Tokyo, Kamioka, Gifu
506-1205, Japan A. Takeda Kamioka Observatory, Institute for Cosmic Ray
Research, University of Tokyo, Kamioka, Gifu 506-1205, Japan Kavli Institute
for the Physics and Mathematics of the Universe (WPI), The University of Tokyo
Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583,
Japan Y. Takemoto Kamioka Observatory, Institute for Cosmic Ray Research,
University of Tokyo, Kamioka, Gifu 506-1205, Japan Kavli Institute for the
Physics and Mathematics of the Universe (WPI), The University of Tokyo
Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583,
Japan A. Takenaka Kamioka Observatory, Institute for Cosmic Ray Research,
University of Tokyo, Kamioka, Gifu 506-1205, Japan H. Tanaka Kamioka
Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka,
Gifu 506-1205, Japan Kavli Institute for the Physics and Mathematics of the
Universe (WPI), The University of Tokyo Institutes for Advanced Study,
University of Tokyo, Kashiwa, Chiba 277-8583, Japan S. Watanabe Kamioka
Observatory, Institute for Cosmic Ray Research, University of Tokyo, Kamioka,
Gifu 506-1205, Japan T. Yano Kamioka Observatory, Institute for Cosmic Ray
Research, University of Tokyo, Kamioka, Gifu 506-1205, Japan S. Han Research
Center for Cosmic Neutrinos, Institute for Cosmic Ray Research, University of
Tokyo, Kashiwa, Chiba 277-8582, Japan T. Kajita Research Center for Cosmic
Neutrinos, Institute for Cosmic Ray Research, University of Tokyo, Kashiwa,
Chiba 277-8582, Japan Kavli Institute for the Physics and Mathematics of the
Universe (WPI), The University of Tokyo Institutes for Advanced Study,
University of Tokyo, Kashiwa, Chiba 277-8583, Japan ILANCE, CNRS - University
of Tokyo International Research Laboratory, Kashiwa, Chiba 277-8582, Japan K.
Okumura Research Center for Cosmic Neutrinos, Institute for Cosmic Ray
Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan Kavli Institute
for the Physics and Mathematics of the Universe (WPI), The University of Tokyo
Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583,
Japan T. Tashiro T. Tomiya X. Wang J. Xia S. Yoshida Research Center for
Cosmic Neutrinos, Institute for Cosmic Ray Research, University of Tokyo,
Kashiwa, Chiba 277-8582, Japan G. D. Megias Institute for Cosmic Ray
Research, University of Tokyo, Kashiwa, Chiba 277-8582, Japan P. Fernandez
L. Labarga N. Ospina B. Zaldivar Department of Theoretical Physics,
University Autonoma Madrid, 28049 Madrid, Spain B. W. Pointon Department of
Physics, British Columbia Institute of Technology, Burnaby, BC, V5G 3H2,
Canada TRIUMF, 4004 Wesbrook Mall, Vancouver, BC, V6T2A3, Canada E. Kearns
Department of Physics, Boston University, Boston, MA 02215, USA Kavli
Institute for the Physics and Mathematics of the Universe (WPI), The
University of Tokyo Institutes for Advanced Study, University of Tokyo,
Kashiwa, Chiba 277-8583, Japan J. L. Raaf Department of Physics, Boston
University, Boston, MA 02215, USA L. Wan Corresponding author
Email address: <EMAIL_ADDRESS> (L. Wan) Department of Physics, Boston University,
Boston, MA 02215, USA T. Wester Department of Physics, Boston University,
Boston, MA 02215, USA J. Bian N. J. Griskevich Department of Physics and
Astronomy, University of California, Irvine, Irvine, CA 92697-4575, USA W. R.
Kropp Deceased. Department of Physics and Astronomy, University of
California, Irvine, Irvine, CA 92697-4575, USA S. Locke Department of
Physics and Astronomy, University of California, Irvine, Irvine, CA
92697-4575, USA M. B. Smy H. W. Sobel Department of Physics and Astronomy,
University of California, Irvine, Irvine, CA 92697-4575, USA Kavli Institute
for the Physics and Mathematics of the Universe (WPI), The University of Tokyo
Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583,
Japan V. Takhistov Department of Physics and Astronomy, University of
California, Irvine, Irvine, CA 92697-4575, USA High Energy Accelerator
Research Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan Kavli Institute
for the Physics and Mathematics of the Universe (WPI), The University of Tokyo
Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583,
Japan A. Yankelevich Department of Physics and Astronomy, University of
California, Irvine, Irvine, CA 92697-4575, USA J. Hill Department of
Physics, California State University, Dominguez Hills, Carson, CA 90747, USA
R. G. Park Institute for Universe and Elementary Particles, Chonnam National
University, Gwangju 61186, Korea B. Bodur Department of Physics, Duke
University, Durham NC 27708, USA K. Scholberg C. W. Walter Department of
Physics, Duke University, Durham NC 27708, USA Kavli Institute for the
Physics and Mathematics of the Universe (WPI), The University of Tokyo
Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583,
Japan L. Bernard A. Coffani O. Drapier S. El Hedri A. Giampaolo Th. A.
Mueller A. D. Santos P. Paganini B. Quilain Ecole Polytechnique,
IN2P3-CNRS, Laboratoire Leprince-Ringuet, F-91120 Palaiseau, France T.
Ishizuka Junior College, Fukuoka Institute of Technology, Fukuoka, Fukuoka
811-0295, Japan T. Nakamura Department of Physics, Gifu University, Gifu,
Gifu 501-1193, Japan J. S. Jang GIST College, Gwangju Institute of Science
and Technology, Gwangju 500-712, Korea J. G. Learned Department of Physics
and Astronomy, University of Hawaii, Honolulu, HI 96822, USA K. Choi
Institute for Basic Science (IBS), Daejeon, 34126, Korea S. Cao Institute
For Interdisciplinary Research in Science and Education, ICISE, Quy Nhon,
55121, Vietnam L. H. V. Anthony D. Martin M. Scott A. A. Sztuc Y. Uchida
Department of Physics, Imperial College London , London, SW7 2AZ, United
Kingdom V. Berardi M. G. Catanesi E. Radicioni Dipartimento
Interuniversitario di Fisica, INFN Sezione di Bari and Università e
Politecnico di Bari, I-70125, Bari, Italy N. F. Calabria L. N. Machado G.
De Rosa Dipartimento di Fisica, INFN Sezione di Napoli and Università di
Napoli, I-80126, Napoli, Italy G. Collazuol F. Iacob M. Lamoureux M.
Mattiazzi Dipartimento di Fisica, INFN Sezione di Padova and Università di
Padova, I-35131, Padova, Italy L. Ludovici INFN Sezione di Roma and
Università di Roma “La Sapienza”, I-00185, Roma, Italy M. Gonin G. Pronost
ILANCE, CNRS - University of Tokyo International Research Laboratory, Kashiwa,
Chiba 277-8582, Japan C. Fujisawa Y. Maekawa Y. Nishimura Department of
Physics, Keio University, Yokohama, Kanagawa, 223-8522, Japan M. Friend T.
Hasegawa T. Ishida T. Kobayashi M. Jakkapu T. Matsubara T. Nakadaira
High Energy Accelerator Research Organization (KEK), Tsukuba, Ibaraki
305-0801, Japan K. Nakamura High Energy Accelerator Research Organization
(KEK), Tsukuba, Ibaraki 305-0801, Japan Kavli Institute for the Physics and
Mathematics of the Universe (WPI), The University of Tokyo Institutes for
Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583, Japan Y. Oyama
K. Sakashita T. Sekiguchi T. Tsukamoto High Energy Accelerator Research
Organization (KEK), Tsukuba, Ibaraki 305-0801, Japan T. Boschi F. Di
Lodovico J. Gao A. Goldsack T. Katori J. Migenda M. Taani Department of
Physics, King’s College London, London, WC2R 2LS, UK S. Zsoldos Department
of Physics, King’s College London, London, WC2R 2LS, UK Kavli Institute for
the Physics and Mathematics of the Universe (WPI), The University of Tokyo
Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583,
Japan Y. Kotsar H. Ozaki A. T. Suzuki Department of Physics, Kobe
University, Kobe, Hyogo 657-8501, Japan Y. Takeuchi Department of Physics,
Kobe University, Kobe, Hyogo 657-8501, Japan Kavli Institute for the Physics
and Mathematics of the Universe (WPI), The University of Tokyo Institutes for
Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583, Japan C.
Bronner J. Feng T. Kikawa M. Mori Department of Physics, Kyoto University,
Kyoto, Kyoto 606-8502, Japan T. Nakaya Department of Physics, Kyoto
University, Kyoto, Kyoto 606-8502, Japan Kavli Institute for the Physics and
Mathematics of the Universe (WPI), The University of Tokyo Institutes for
Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583, Japan R. A.
Wendell Department of Physics, Kyoto University, Kyoto, Kyoto 606-8502, Japan
Kavli Institute for the Physics and Mathematics of the Universe (WPI), The
University of Tokyo Institutes for Advanced Study, University of Tokyo,
Kashiwa, Chiba 277-8583, Japan K. Yasutome Department of Physics, Kyoto
University, Kyoto, Kyoto 606-8502, Japan S. J. Jenkins N. McCauley P. Mehta
K. M. Tsui Department of Physics, University of Liverpool, Liverpool, L69
7ZE, United Kingdom Y. Fukuda Department of Physics, Miyagi University of
Education, Sendai, Miyagi 980-0845, Japan Y. Itow Institute for Space-Earth
Environmental Research, Nagoya University, Nagoya, Aichi 464-8602, Japan
Kobayashi-Maskawa Institute for the Origin of Particles and the Universe,
Nagoya University, Nagoya, Aichi 464-8602, Japan H. Menjo K. Ninomiya
Institute for Space-Earth Environmental Research, Nagoya University, Nagoya,
Aichi 464-8602, Japan J. Lagoda S. M. Lakshmi M. Mandal P. Mijakowski Y.
S. Prabhu J. Zalipska National Centre For Nuclear Research, 02-093 Warsaw,
Poland M. Jia J. Jiang C. K. Jung M. J. Wilking C. Yanagisawa also at
BMCC/CUNY, Science Department, New York, New York, 1007, USA. Department of
Physics and Astronomy, State University of New York at Stony Brook, NY
11794-3800, USA M. Harada H. Ishino S. Ito H. Kitagawa Department of
Physics, Okayama University, Okayama, Okayama 700-8530, Japan Y. Koshio
Department of Physics, Okayama University, Okayama, Okayama 700-8530, Japan
Kavli Institute for the Physics and Mathematics of the Universe (WPI), The
University of Tokyo Institutes for Advanced Study, University of Tokyo,
Kashiwa, Chiba 277-8583, Japan F. Nakanishi S. Sakai Department of Physics,
Okayama University, Okayama, Okayama 700-8530, Japan G. Barr D. Barrow
Department of Physics, Oxford University, Oxford, OX1 3PU, United Kingdom L.
Cook Department of Physics, Oxford University, Oxford, OX1 3PU, United
Kingdom Kavli Institute for the Physics and Mathematics of the Universe
(WPI), The University of Tokyo Institutes for Advanced Study, University of
Tokyo, Kashiwa, Chiba 277-8583, Japan S. Samani Department of Physics,
Oxford University, Oxford, OX1 3PU, United Kingdom D. Wark Department of
Physics, Oxford University, Oxford, OX1 3PU, United Kingdom STFC, Rutherford
Appleton Laboratory, Harwell Oxford, and Daresbury Laboratory, Warrington,
OX11 0QX, United Kingdom F. Nova Rutherford Appleton Laboratory, Harwell,
Oxford, OX11 0QX, UK J. Y. Yang Department of Physics, Seoul National
University, Seoul 151-742, Korea M. Malek J. M. McElwee O. Stone M. D.
Thiesse L. F. Thompson Department of Physics and Astronomy, University of
Sheffield, S3 7RH, Sheffield, United Kingdom H. Okazawa Department of
Informatics in Social Welfare, Shizuoka University of Welfare, Yaizu,
Shizuoka, 425-8611, Japan S. B. Kim J. W. Seo I. Yu Department of Physics,
Sungkyunkwan University, Suwon 440-746, Korea A. K. Ichikawa K. D. Nakamura
S. Tairafune Department of Physics, Faculty of Science, Tohoku University,
Sendai, Miyagi, 980-8578, Japan K. Nishijima Department of Physics, Tokai
University, Hiratsuka, Kanagawa 259-1292, Japan K. Iwamoto K. Nakagiri
Department of Physics, University of Tokyo, Bunkyo, Tokyo 113-0033, Japan Y.
Nakajima Department of Physics, University of Tokyo, Bunkyo, Tokyo 113-0033,
Japan Kavli Institute for the Physics and Mathematics of the Universe (WPI),
The University of Tokyo Institutes for Advanced Study, University of Tokyo,
Kashiwa, Chiba 277-8583, Japan N. Taniuchi Department of Physics, University
of Tokyo, Bunkyo, Tokyo 113-0033, Japan M. Yokoyama Department of Physics,
University of Tokyo, Bunkyo, Tokyo 113-0033, Japan Kavli Institute for the
Physics and Mathematics of the Universe (WPI), The University of Tokyo
Institutes for Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583,
Japan K. Martens P. de Perio Kavli Institute for the Physics and
Mathematics of the Universe (WPI), The University of Tokyo Institutes for
Advanced Study, University of Tokyo, Kashiwa, Chiba 277-8583, Japan M. R.
Vagins Kavli Institute for the Physics and Mathematics of the Universe (WPI),
The University of Tokyo Institutes for Advanced Study, University of Tokyo,
Kashiwa, Chiba 277-8583, Japan Department of Physics and Astronomy,
University of California, Irvine, Irvine, CA 92697-4575, USA M. Kuze S.
Izumiyama Department of Physics,Tokyo Institute of Technology, Meguro, Tokyo
152-8551, Japan M. Inomoto M. Ishitsuka H. Ito T. Kinoshita R. Matsumoto
Y. Ommura N. Shigeta M. Shinoki T. Suganuma K. Yamauchi Department of
Physics, Faculty of Science and Technology, Tokyo University of Science, Noda,
Chiba 278-8510, Japan J. F. Martin H. A. Tanaka T. Towstego Department of
Physics, University of Toronto, ON, M5S 1A7, Canada R. Akutsu TRIUMF, 4004
Wesbrook Mall, Vancouver, BC, V6T2A3, Canada V. Gousy-Leblanc also at
University of Victoria, Department of Physics and Astronomy, PO Box 1700 STN
CSC, Victoria, BC V8W 2Y2, Canada. TRIUMF, 4004 Wesbrook Mall, Vancouver, BC,
V6T2A3, Canada M. Hartz A. Konaka N. W. Prouse TRIUMF, 4004 Wesbrook Mall,
Vancouver, BC, V6T2A3, Canada S. Chen B. D. Xu B. Zhang Department of
Engineering Physics, Tsinghua University, Beijing, 100084, China M.
Posiadala-Zezula Faculty of Physics, University of Warsaw, Warsaw, 02-093,
Poland D. Hadley M. Nicholson M. O’Flaherty B. Richards Department of
Physics, University of Warwick, Coventry, CV4 7AL, UK A. Ali Department of
Physics, University of Winnipeg, MB R3J 3L8, Canada TRIUMF, 4004 Wesbrook
Mall, Vancouver, BC, V6T2A3, Canada B. Jamieson Department of Physics,
University of Winnipeg, MB R3J 3L8, Canada Ll. Marti A. Minamino G.
Pintaudi S. Sano S. Suzuki K. Wada Department of Physics, Yokohama
National University, Yokohama, Kanagawa, 240-8501, Japan
###### Abstract
We report a search for cosmic-ray boosted dark matter with protons using 0.37
megaton$\times$years of data collected by the Super-Kamiokande experiment during
the 1996-2018 period (SKI-IV phases). We searched for an excess of proton
recoils above the atmospheric neutrino background from the vicinity of the
Galactic Center. No such excess is observed, and limits are calculated for two
reference models of dark matter with either a constant interaction cross-
section or through a scalar mediator. This is the first experimental search
for boosted dark matter with hadrons using directional information. The
results present the most stringent limits on cosmic-ray boosted dark matter
and exclude the dark matter-nucleon elastic scattering cross-section between
$10^{-33}\text{ cm}^{2}$ and $10^{-27}\text{ cm}^{2}$ for dark matter mass
from 1 MeV/$c^{2}$ to 300 MeV/$c^{2}$.
There is overwhelming evidence for the existence of dark matter Zwicky (1933);
Blumenthal _et al._ (1984); Sofue and Rubin (2001); Schumann (2019); Bertone
and Hooper (2018). The properties of dark matter beyond its gravitational
interaction remain unknown, and there are a variety of theoretical models
predicting a wide range of masses for dark matter candidates (e.g. Essig _et
al._ (2012); Knapen _et al._ (2017); Smirnov and Beacom (2019)). Despite
significant efforts of highly sensitive direct dark matter detection
experiments to probe interactions of dark matter at the mass range of
GeV/$c^{2}$ to TeV/$c^{2}$, dark matter has remained elusive thus far Akerib _et
al._ (2017); Aprile _et al._ (2018). Meanwhile, dark matter at the sub-GeV
mass range is poorly explored Feng and Kumar (2008); Boehm and Fayet (2004);
Lin _et al._ (2012); Hochberg _et al._ (2015). The conventional direct dark
matter detection experiments focusing on nuclear recoils are not sensitive to
cold sub-GeV dark matter due to insufficient recoil energy, and experimental
searches for cold sub-GeV dark matter have instead focused on the Migdal
effect Ibe _et al._ (2018); Aprile _et al._ (2019a); Liu _et al._ (2019a);
Armengaud _et al._ (2019) and the interaction with electrons Essig _et al._
(2012); Barak _et al._ (2020); Arnaud _et al._ (2020). Moreover, if a
fraction of the cold dark matter is boosted to relativistic energies, it can
be efficiently detected in direct detection experiments as well as higher
threshold neutrino detectors Agashe _et al._ (2014); Necib _et al._ (2017);
Emken _et al._ (2018); Hu _et al._ (2017); Giudice _et al._ (2018);
Cappiello and Beacom (2019); Argüelles _et al._ (2022).
A general possibility for dark matter to obtain relativistic energies is via
the upscattering by cosmic-rays, constituting cosmic-ray boosted dark matter
(CRDM) Bringmann and Pospelov (2019); Ema _et al._ (2019); Cappiello _et
al._ (2019); Cappiello and Beacom (2019). The upscattering process originates
from the same dark matter-nucleus interactions as direct detection experiments
search for, without requiring additional assumptions or model dependence. Due
to the dark matter density distribution concentrated toward the Galactic
Center (GC) Navarro _et al._ (1996), the CRDM arriving at the Earth has a
directional preference from the GC. For terrestrial detectors, the CRDM-
nucleon interaction in the Earth can be sizable, and the dark matter can be
scattered multiple times and become attenuated when traveling through the
Earth Ge _et al._ (2021).
The boosted relativistic component can be observed by the interactions in the
detector with electrons Agashe _et al._ (2014); Ema _et al._ (2019) or
hadrons Ema _et al._ (2021); Cappiello and Beacom (2019). In 2018, the Super-
Kamiokande experiment published the first experimental search for boosted dark
matter in a terrestrial detector with electron recoils Kachulis _et al._
(2018). Later on, PROSPECT Andriamirado _et al._ (2021), PandaX-II Cui _et
al._ (2022), and CDEX-10 Xu _et al._ (2022) reported their results on CRDM
using nuclear recoils, setting cross-section limits at $10^{-31}-10^{-26}$ cm$^{2}$
in a dark matter mass region from MeV/$c^{2}$ to GeV/$c^{2}$.
In this analysis, we search for CRDM from MeV/$c^{2}$ to GeV/$c^{2}$ with
recoil protons at the Super-Kamiokande (SK) experiment Fukuda _et al._
(2003). We use the data collected at SK during the 1996-2018 period (SKI-IV
phases). The large fiducial volume and the directional reconstruction ability
of SK, a water Cherenkov detector, enable a sensitive search for CRDM. The
parameter space we explore extends by more than one order of magnitude beyond
the existing limits Andriamirado _et al._ (2021); Cui _et al._ (2022).
Super-Kamiokande is a cylindrical 50 kiloton water Cherenkov detector located
in Kamioka, Japan, under a 2,700 meter water-equivalent rock overburden Fukuda
_et al._ (2003). The detector consists of an inner detector (ID) and an outer
detector (OD) optically separated at 2 m from the detector’s outer wall. There
are 11,129 inward-facing 20-inch PMTs viewing the 32 kton target volume of the
ID, and the OD is viewed by 1,885 outward-facing 8-inch PMTs. The ID is used
to reconstruct the energies, vertices, and to perform the particle
identifications (PID) of the physics events, while the OD is primarily used as
a veto for charged particles entering from outside the detector or identifying
particles that exit the ID.
This analysis uses the fully contained fiducial volume (FCFV) dataset composed
of events that have activity only in the ID (FC) and are reconstructed with
vertices more than 2 m from the ID wall, corresponding to the 22.5 kton
fiducial volume (FV). The total livetime of the dataset is 6050.3 days,
corresponding to an exposure of 0.37 megaton$\times$years. The visible energy of
an event, defined as the energy of an electron that would produce the same
amount of light in the detector, is required to be above 30 MeV to remove
spallation backgrounds induced by cosmic-ray muons. To select recoil protons
without extra activity, we require the candidate events to have a single
reconstructed Cherenkov ring.
In this FCFV sample, the majority of events are electrons and muons. Electrons
create electromagnetic showers which produce fuzzy rings and can be easily
removed, while muons and protons have a sharp ring edge. To select proton
events from the muon background, we employed a proton fitter that utilizes the
light pattern and ring topology to calculate the proton likelihood, proton
momentum, and track length Fechner _et al._ (2009). A distinctive feature of
the protons is that they are likely to have hadronic interactions in water and
lose energy by producing secondary particles. If both the secondary particles
and the scattered proton are below Cherenkov threshold, the Cherenkov light
emission is truncated and leaves a narrow proton Cherenkov ring. If the
secondary particles, typically pions, are energetic enough to emit bright
Cherenkov light, the identification of the proton becomes significantly more
difficult, and therefore the reconstruction is less efficient for higher
momentum protons due to the higher hadronic interaction probability.
Since the identification performance depends on proton momentum, we
established a series of kinematic precuts. To remove the majority of high
energy muons, we require the reconstructed proton momentum to be less than 3
GeV/$c$, the visible energy to be less than 400 MeV, and the corrected charge
within $70^{\circ}$ of the direction Abe _et al._ (2022) to be less than
2,000 photo-electrons. Due to the large mass, protons have a smaller Cherenkov
angle compared to muons at the same momentum, and thus we require the
reconstructed Cherenkov angle of candidate events to be less than 40∘.
Finally, we place a cut on the proton-muon identification likelihood.
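For concreteness, these precuts amount to boolean masks over the reconstructed
event quantities; a minimal sketch is given below, where the field names
(`p_mom`, `e_vis`, `q70`, `cherenkov_angle`) are illustrative placeholders and
not taken from the SK software.

```python
import numpy as np

# Illustrative structured array of reconstructed single-ring events
events = np.zeros(1000, dtype=[('p_mom', 'f8'),             # proton momentum [MeV/c]
                               ('e_vis', 'f8'),             # visible energy [MeV]
                               ('q70', 'f8'),               # corrected charge within 70 deg [p.e.]
                               ('cherenkov_angle', 'f8')])  # reconstructed Cherenkov angle [deg]
# ... fill `events` from the reconstruction output ...

precut = ((events['p_mom'] < 3000.0)             # momentum < 3 GeV/c
          & (events['e_vis'] < 400.0)            # visible energy < 400 MeV
          & (events['q70'] < 2000.0)             # charge within 70 deg < 2,000 p.e.
          & (events['cherenkov_angle'] < 40.0))  # Cherenkov angle < 40 deg
selected = events[precut]
```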
To further enhance the proton-muon separation, a multi-variate analysis (MVA)
is employed after the precuts. The input variables include the fitted track
length, the fitted momentum, and the PID likelihood from the proton fitter
Fechner _et al._ (2009), the charge distribution within and outside of the
Cherenkov ring, the reconstructed Cherenkov angle, the vertex reconstruction
quality, and the number of decay-electrons. More details on the variable
definitions and distributions can be found in the supplementary material Abe
_et al._ (2022).
The structure of the MVA is selected as a multilayer perceptron Hocker _et
al._ (2007), which is trained with simulated protons and non-proton events
from the atmospheric neutrino MC sample after the precuts. The MVA takes the
eight input variables and outputs an estimator describing how signal- or
background-like an event is. The cut on the MVA estimator is optimized towards
best sensitivity assuming a 0.37 megaton$\times$years exposure and realistic
systematic errors, and the corresponding efficiency is shown in Fig. 1. The
proton reconstruction is only feasible within a momentum window between 1.2
GeV/$c$ and 2.3 GeV/$c$. Below 1.2 GeV/$c$, the Cherenkov light yield is too
low to reconstruct the proton ring. Above 2.3 GeV/$c$, the protons tend to
have hadronic interactions and the secondary particles make extra rings, which
complicates the proton reconstruction. After the precuts and the MVA cut, we
expected 86.0 proton events and 25.7 non-proton events in the final sample
from atmospheric neutrinos.
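The analysis uses TMVA's multilayer perceptron; as a rough stand-in, the sketch
below trains an analogous scikit-learn classifier on eight input variables. The
features and labels are randomly generated placeholders, not the experiment's
simulation.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 5000
X = rng.normal(size=(n, 8))      # 8 inputs: track length, momentum, PID likelihood, ...
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)  # toy labels

mva = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                                  random_state=0))
mva.fit(X, y)
estimator = mva.predict_proba(X)[:, 1]   # signal(proton)-like score in [0, 1]
# A cut on `estimator` would then be optimized for the best expected sensitivity.
```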
Figure 1: The selection efficiencies for the proton sample. The red dotted
line indicates the reduction efficiency of the FCFV sample above 30 MeV. The
blue dashed line represents the efficiency after precuts. The green solid line
is the efficiency after the MVA cut.
The systematic uncertainties in this proton sample include uncertainties in
atmospheric neutrino cross-section and flux (26%), proton hadronic interaction
systematics (4%), and detector related systematics (8% for proton events, and
13% for non-proton background events). The major source of the atmospheric
neutrino related uncertainty is the neutral current / charged current ratio
(20%). In summary, we estimated a 27% uncertainty for proton events from
atmospheric neutrinos, a 29% uncertainty for non-proton background events from
atmospheric neutrinos, and a 9% uncertainty in the proton signal efficiency. As
such, we expected $111.7\pm 10.6\text{(stat.)}\pm 30.7\text{(sys.)}$ events in
the final sample from atmospheric neutrinos for the 0.37 megaton$\times$years
exposure. The observed 126 events are consistent with this expectation within
the estimated statistical and systematic uncertainties.
The CRDM flux is determined by the dark matter distribution model, the cosmic-
ray model, and the dark matter interaction model. In this analysis, we use the
NFW profile for Galactic dark matter density distribution Navarro _et al._
(1996). For simplicity, the cosmic-ray flux is assumed to be homogeneous
within a leaky box model cylinder Strong _et al._ (2007), and the radius and
height of the cylinder are taken as $R=10$ kpc and $h=1$ kpc following Ref.
Bringmann and Pospelov (2019); Ema _et al._ (2021). The energy spectrum of
cosmic-rays is modeled from 10 MeV to above 50 GeV with Voyager’s observation
Cummings _et al._ (2016) and different theoretical calculations Boschini _et
al._ (2017); Tomassetti _et al._ (2019), as specified in Ref. Ema _et al._
(2021). For the dark matter nucleon interaction cross-section, we consider two
reference scenarios, one with fermionic dark matter and a scalar mediator, and
one with a constant dark matter-nucleon interaction cross-section. In the
scalar mediator scenario, we employed the flux and cross-section as calculated
in Ref. Ema _et al._ (2021) with a mediator mass of $m=1$ GeV/$c^{2}$. For
the constant cross-section dark matter model, we make use of a reproduced flux
from Ref. Bringmann and Pospelov (2019), and the cross-section is assumed to
be $10^{-30}$ cm$^{2}$ at the dark matter-nucleon coupling constant $g=1$.
As SK is a Cherenkov detector, it can reconstruct directions of the recoil
protons, which facilitates the separation of the relatively isotropic
atmospheric neutrino backgrounds from signals that are more peaked in the
direction of the GC. The directional distribution of recoil protons with
regard to the GC is a convolution of the angular resolution of proton rings,
the kinematic correlation between recoil proton direction and the incoming
CRDM, and the model-dependent directional distribution of the CRDM flux. The
reconstructed angular resolution of proton rings is 2.6∘, a subdominant factor
compared to the kinematic angular correlation and the CRDM distribution.
Considering the two reference cross-section models and the different cylinder
sizes for cosmic-ray modeling, we found that the optimal directional cut from
the GC varies by about $10\%$. For a more general interpretation, we fix the
GC direction cut at $\cos\theta_{GC}>0.6$.
At the large dark matter coupling scale we are probing, the CRDM attenuation
within the Earth is non-negligible, which means that the CRDM flux arriving
at the detector comes primarily from above the horizon. To reject the upward-
going atmospheric neutrino backgrounds and to avoid the uncertainty near the
horizon, we apply a zenith angle cut at $-\cos\theta_{z}>0.2$. The efficiency of
this cut can be obtained by calculating the fraction of livetime during which
the GC is above the horizon, given the latitude of the observatory site; this
fraction is $0.29$ for SK. After the GC direction cut and the zenith angle cut,
the expected number of atmospheric neutrino background events in the proton
sample is 7.4 (6.5) with (without) normalization to data.
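This livetime fraction can be sketched with astropy: for a particle arriving
from the GC at altitude alt, $-\cos\theta_{z}=\sin({\rm alt})$, so the cut
corresponds to $\sin({\rm alt})>0.2$. The site coordinates below are
approximate assumptions for illustration, not values quoted in the text.

```python
import numpy as np
import astropy.units as u
from astropy.time import Time
from astropy.coordinates import AltAz, EarthLocation, SkyCoord

# Approximate SK site coordinates (assumed for illustration)
sk = EarthLocation(lat=36.43 * u.deg, lon=137.31 * u.deg, height=358 * u.m)
gc = SkyCoord(l=0 * u.deg, b=0 * u.deg, frame='galactic')

# Sample one year of livetime; the diurnal average converges quickly
times = Time('2020-01-01') + np.linspace(0, 365.25, 20000) * u.day
alt = gc.transform_to(AltAz(obstime=times, location=sk)).alt

# A particle arriving from altitude `alt` has -cos(theta_z) = sin(alt)
efficiency = np.mean(np.sin(alt.radian) > 0.2)
print(f"fraction of livetime passing the zenith cut ~ {efficiency:.2f}")
```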
The GC angular distribution of the MC expectation and data with and without
the zenith cut are shown in Fig. 2. To avoid the systematic bias from the
atmospheric neutrino azimuthal spectra, we employed an on-off source search,
with the on-source at the GC, and the off-source shifted from the on-source by
180∘ in right ascension, as shown in the supplementary material Abe _et al._
(2022). Applying the cut $-\cos\theta_{z}>0.2$ and $\cos\theta_{GC}>0.6$, the
remaining number of events in the proton data sample is 9 for the on-source
(GC), and 7 for the off-source. Considering the systematic uncertainty, the
upper limit on the number of CRDM recoil proton events can be calculated using
the Rolke method Rolke _et al._ (2005) as 5.7 events at the 90% confidence
level.
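As a deliberately simplified illustration of such a counting limit (ignoring
the background and efficiency uncertainties that the Rolke et al. (2005)
profile-likelihood construction incorporates, e.g. via ROOT's TRolke), a
classical Poisson upper limit can be scanned as follows; it is not expected to
reproduce the 5.7-event limit quoted above.

```python
import numpy as np
from scipy.stats import poisson

n_on, b = 9, 7.4            # observed on-source counts, expected background
cl = 0.90

# Smallest signal s such that P(n <= n_on | s + b) <= 1 - CL
s_grid = np.linspace(0.0, 30.0, 30001)
coverage = poisson.cdf(n_on, s_grid + b)
s_up = s_grid[coverage <= 1 - cl][0]
print(f"simplified 90% CL upper limit: s < {s_up:.1f} events")
```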
Figure 2: The angle between proton ring and the GC for events in the proton
sample, without (upper) and with (lower) the zenith angle cut. The black
points indicate data with statistical uncertainty. The blue bands indicate MC
expectation with systematic uncertainty.
In the absence of an excess in the proton sample, we calculated the upper
limit of the dark matter-nucleon coupling and the interaction cross-section.
Note that the CRDM is produced by the same dark matter-nucleon scattering
mechanism, and therefore the CRDM flux is also proportional to the cross-
section. Our result covers sub-GeV dark matter masses from MeV/$c^{2}$ to
GeV/$c^{2}$ down to $10^{-33}$ cm$^{2}$, as shown in Fig. 3.
The recent CRDM search result from PandaX-II Cui _et al._ (2022) assuming
constant cross-section is also shown for comparison. Due to the large exposure
of SK and the directional information from the Cherenkov ring, the constraint
from SK is better than the existing limits by a factor of 2.
If the dark matter-nucleon coupling is large enough, the CRDM flux will lose
energy when traveling through the rock overburden above the detector, imposing
an upper bound on the exclusion region. This energy loss can be calculated with
an analytical approximation considering the nuclear form factor effect Xia _et
al._ (2022). In the case of SK, due to the higher detection threshold from
proton Cherenkov radiation, the experiment is only sensitive to sub-GeV dark
matter above 0.5 GeV kinetic energy, and the attenuation by the rock
overburden for this energy range is calculated to be below 10% at
$\sigma<10^{-27}$ cm$^{2}$. Above $10^{-27}$ cm$^{2}$, the parameter space has been
excluded by an analysis using cosmic microwave background data Xu _et al._
(2018). The lower end of the search range in dark matter mass, at 1 MeV/$c^{2}$,
is constrained by Big Bang nucleosynthesis Reno and Seckel (1988); Krnjaic
and McDermott (2020). At higher dark matter mass, the constraints mainly come
from the direct detection experiment CRESST-III Abdelhameed _et al._ (2019)
and the Migdal effect searches at CDEX-1B Liu _et al._ (2019b) and XENON1T
Aprile _et al._ (2019b).
Figure 3: Constraints on dark matter-nucleon cross-section. Solid lines show
the upper limit while dashed lines indicate the sensitivity. The green lines
are calculated with a constant cross-section model. The blue lines are the
cross-sections at the non-relativistic limit ($\sigma_{NR}$) for scalar
mediator model. The shaded sage green region indicates the PandaX-II CRDM
exclusion region Cui _et al._ (2022). The shaded maroon region shows the
CRESST-III exclusion region Abdelhameed _et al._ (2019) and the shaded grey
region shows the constraints via Migdal effect from CDEX-1B Liu _et al._
(2019b) and XENON1T Aprile _et al._ (2019b).
In summary, we report a directional search for the CRDM using a newly
constructed proton sample selected from the data collected at Super-Kamiokande
during the period of 1996-2018 (SKI-IV phases). In the absence of an excess
from dark matter signals above the expected background, we derived new limits
on the dark matter-nucleon interaction cross-section, which are the most
stringent constraint on hadronic coupling of sub-GeV dark matter so far. This
result benefits from the large fiducial volume and directional reconstruction
ability of SK, which motivates further exploration of CRDM and boosted dark
matter in general from the next generation large neutrino detectors with
directional capabilities, such as Hyper-Kamiokande Abe _et al._ (2011) and
DUNE Abi _et al._ (2020). The reported proton sample efficiency and direction
distribution can also be interpreted by any theory that predicts an excess of
proton recoils from the direction of the GC.
We thank Dr. Yohei Ema for providing the CRDM flux and insightful discussions.
We gratefully acknowledge the cooperation of the Kamioka Mining and Smelting
Company. The Super-Kamiokande experiment has been built and operated from
funding by the Japanese Ministry of Education, Culture, Sports, Science and
Technology, the U.S. Department of Energy, and the U.S. National Science
Foundation. Some of us have been supported by funds from the National Research
Foundation of Korea NRF-2009-0083526 (KNRC) funded by the Ministry of Science,
ICT, and Future Planning and the Ministry of Education (2018R1D1A3B07050696,
2018R1D1A1B07049158), the Japan Society for the Promotion of Science, the
National Natural Science Foundation of China under Grants No. 11620101004, the
Spanish Ministry of Science, Universities and Innovation (grant
PGC2018-099388-B-I00), the Natural Sciences and Engineering Research Council
(NSERC) of Canada, the Scinet and Westgrid consortia of Compute Canada, the
National Science Centre, Poland (2015/18/E/ST2/00758), the Science and
Technology Facilities Council (STFC) and GridPP, UK, the European Union’s
Horizon 2020 Research and Innovation Programme under the Marie Sklodowska-
Curie grant agreement no. 754496, H2020-MSCA-RISE-2018 JENNIFER2 grant
agreement no. 822070, and H2020-MSCA-RISE-2019 SK2HK grant agreement no.
872549.
## References
* Zwicky (1933) F. Zwicky, Helv. Phys. Acta 6, 110 (1933).
* Blumenthal _et al._ (1984) G. R. Blumenthal, S. M. Faber, J. R. Primack, and M. J. Rees, Nature 311, 517 (1984).
* Sofue and Rubin (2001) Y. Sofue and V. Rubin, Ann. Rev. Astron. Astrophys. 39, 137 (2001), arXiv:astro-ph/0010594 .
* Schumann (2019) M. Schumann, J. Phys. G 46, 103003 (2019), arXiv:1903.03026 [astro-ph.CO] .
* Bertone and Hooper (2018) G. Bertone and D. Hooper, Rev. Mod. Phys. 90, 045002 (2018), arXiv:1605.04909 [astro-ph.CO] .
* Essig _et al._ (2012) R. Essig, J. Mardon, and T. Volansky, Phys. Rev. D 85, 076007 (2012), arXiv:1108.5383 [hep-ph] .
* Knapen _et al._ (2017) S. Knapen, T. Lin, and K. M. Zurek, Phys. Rev. D 96, 115021 (2017), arXiv:1709.07882 [hep-ph] .
* Smirnov and Beacom (2019) J. Smirnov and J. F. Beacom, Phys. Rev. D 100, 043029 (2019), arXiv:1904.11503 [hep-ph] .
* Akerib _et al._ (2017) D. S. Akerib _et al._ (LUX), Phys. Rev. Lett. 118, 021303 (2017), arXiv:1608.07648 [astro-ph.CO] .
* Aprile _et al._ (2018) E. Aprile _et al._ (XENON), Phys. Rev. Lett. 121, 111302 (2018), arXiv:1805.12562 [astro-ph.CO] .
* Feng and Kumar (2008) J. L. Feng and J. Kumar, Phys. Rev. Lett. 101, 231301 (2008), arXiv:0803.4196 [hep-ph] .
* Boehm and Fayet (2004) C. Boehm and P. Fayet, Nucl. Phys. B 683, 219 (2004), arXiv:hep-ph/0305261 .
* Lin _et al._ (2012) T. Lin, H.-B. Yu, and K. M. Zurek, Phys. Rev. D 85, 063503 (2012), arXiv:1111.0293 [hep-ph] .
* Hochberg _et al._ (2015) Y. Hochberg, E. Kuflik, H. Murayama, T. Volansky, and J. G. Wacker, Phys. Rev. Lett. 115, 021301 (2015), arXiv:1411.3727 [hep-ph] .
* Ibe _et al._ (2018) M. Ibe, W. Nakano, Y. Shoji, and K. Suzuki, JHEP 03, 194 (2018), arXiv:1707.07258 [hep-ph] .
* Aprile _et al._ (2019a) E. Aprile _et al._ (XENON Collaboration), Phys. Rev. Lett. 123, 241803 (2019a).
* Liu _et al._ (2019a) Z. Z. Liu _et al._ (CDEX Collaboration), Phys. Rev. Lett. 123, 161301 (2019a).
* Armengaud _et al._ (2019) E. Armengaud _et al._ (EDELWEISS Collaboration), Phys. Rev. D 99, 082003 (2019).
* Barak _et al._ (2020) L. Barak _et al._ (SENSEI), Phys. Rev. Lett. 125, 171802 (2020), arXiv:2004.11378 [astro-ph.CO] .
* Arnaud _et al._ (2020) Q. Arnaud _et al._ (EDELWEISS), Phys. Rev. Lett. 125, 141301 (2020), arXiv:2003.01046 [astro-ph.GA] .
* Agashe _et al._ (2014) K. Agashe, Y. Cui, L. Necib, and J. Thaler, JCAP 10, 062 (2014), arXiv:1405.7370 [hep-ph] .
* Necib _et al._ (2017) L. Necib, J. Moon, T. Wongjirad, and J. M. Conrad, Phys. Rev. D 95, 075018 (2017).
* Emken _et al._ (2018) T. Emken, C. Kouvaris, and N. G. Nielsen, Phys. Rev. D 97, 063007 (2018).
* Hu _et al._ (2017) P.-K. Hu, A. Kusenko, and V. Takhistov, Phys. Lett. B 768, 18 (2017), arXiv:1611.04599 [hep-ph] .
* Giudice _et al._ (2018) G. F. Giudice, D. Kim, J.-C. Park, and S. Shin, Phys. Lett. B 780, 543 (2018), arXiv:1712.07126 [hep-ph] .
* Cappiello and Beacom (2019) C. V. Cappiello and J. F. Beacom, Phys. Rev. D 100, 103011 (2019), arXiv:1906.11283 [hep-ph] .
* Argüelles _et al._ (2022) C. A. Argüelles, V. Muñoz, I. M. Shoemaker, and V. Takhistov, (2022), arXiv:2203.12630 [hep-ph] .
* Bringmann and Pospelov (2019) T. Bringmann and M. Pospelov, Phys. Rev. Lett. 122, 171801 (2019), arXiv:1810.10543 [hep-ph] .
* Ema _et al._ (2019) Y. Ema, F. Sala, and R. Sato, Phys. Rev. Lett. 122, 181802 (2019), arXiv:1811.00520 [hep-ph] .
* Cappiello _et al._ (2019) C. V. Cappiello, K. C. Y. Ng, and J. F. Beacom, Phys. Rev. D 99, 063004 (2019), arXiv:1810.07705 [hep-ph] .
* Navarro _et al._ (1996) J. F. Navarro, C. S. Frenk, and S. D. M. White, Astrophys. J. 462, 563 (1996), arXiv:astro-ph/9508025 .
* Ge _et al._ (2021) S.-F. Ge, J. Liu, Q. Yuan, and N. Zhou, Phys. Rev. Lett. 126, 091804 (2021).
* Ema _et al._ (2021) Y. Ema, F. Sala, and R. Sato, SciPost Phys. 10, 072 (2021), arXiv:2011.01939 [hep-ph] .
* Kachulis _et al._ (2018) C. Kachulis _et al._ (Super-Kamiokande), Phys. Rev. Lett. 120, 221301 (2018), arXiv:1711.05278 [hep-ex] .
* Andriamirado _et al._ (2021) M. Andriamirado _et al._ (PROSPECT Collaboration), Phys. Rev. D 104, 012009 (2021).
* Cui _et al._ (2022) X. Cui _et al._ (PandaX-II), Phys. Rev. Lett. 128, 171801 (2022), arXiv:2112.08957 [hep-ex] .
* Xu _et al._ (2022) R. Xu _et al._ (CDEX), Phys. Rev. D 106, 052008 (2022), arXiv:2201.01704 [hep-ex] .
* Fukuda _et al._ (2003) Y. Fukuda _et al._ (Super-Kamiokande), Nucl. Instrum. Meth. A 501, 418 (2003).
* Fechner _et al._ (2009) M. Fechner _et al._ (Super-Kamiokande), Phys. Rev. D 79, 112010 (2009), arXiv:0901.1645 [hep-ex] .
* Abe _et al._ (2022) K. Abe _et al._ , (2022), Supplementary Material .
* Hocker _et al._ (2007) A. Hocker _et al._ , (2007), arXiv:physics/0703039 .
* Strong _et al._ (2007) A. W. Strong, I. V. Moskalenko, and V. S. Ptuskin, Ann. Rev. Nucl. Part. Sci. 57, 285 (2007), arXiv:astro-ph/0701517 .
* Cummings _et al._ (2016) A. C. Cummings, E. C. Stone, B. C. Heikkila, N. Lal, W. R. Webber, G. Jóhannesson, I. V. Moskalenko, E. Orlando, and T. A. Porter, Astrophys. J. 831, 18 (2016).
* Boschini _et al._ (2017) M. J. Boschini _et al._ , Astrophys. J. 840, 115 (2017), arXiv:1704.06337 [astro-ph.HE] .
* Tomassetti _et al._ (2019) N. Tomassetti, F. Barão, B. Bertucci, E. Fiandrini, and M. Orcinha, Adv. Space Res. 64, 2477 (2019), arXiv:1906.11477 [astro-ph.HE] .
* Rolke _et al._ (2005) W. A. Rolke, A. M. Lopez, and J. Conrad, Nucl. Instrum. Meth. A551, 493 (2005), arXiv:physics/0403059 [physics] .
* Xia _et al._ (2022) C. Xia, Y.-H. Xu, and Y.-F. Zhou, JCAP 02, 028 (2022), arXiv:2111.05559 [hep-ph] .
* Xu _et al._ (2018) W. L. Xu, C. Dvorkin, and A. Chael, Phys. Rev. D 97, 103530 (2018), arXiv:1802.06788 [astro-ph.CO] .
* Reno and Seckel (1988) M. H. Reno and D. Seckel, Phys. Rev. D 37, 3441 (1988).
* Krnjaic and McDermott (2020) G. Krnjaic and S. D. McDermott, Phys. Rev. D 101, 123022 (2020).
* Abdelhameed _et al._ (2019) A. H. Abdelhameed _et al._ (CRESST), Phys. Rev. D 100, 102002 (2019), arXiv:1904.00498 [astro-ph.CO] .
* Liu _et al._ (2019b) Z. Z. Liu _et al._ (CDEX), Phys. Rev. Lett. 123, 161301 (2019b), arXiv:1905.00354 [hep-ex] .
* Aprile _et al._ (2019b) E. Aprile _et al._ (XENON), Phys. Rev. Lett. 123, 241803 (2019b), arXiv:1907.12771 [hep-ex] .
* Abe _et al._ (2011) K. Abe _et al._ , (2011), arXiv:1109.3262 [hep-ex] .
* Abi _et al._ (2020) B. Abi _et al._ (DUNE), (2020), arXiv:2002.03005 [hep-ex] .
1INAF – Osservatorio Astronomico di Brera, Via E. Bianchi 46, 23807 Merate
(LC), Italy, e-mail: <EMAIL_ADDRESS>
2Max-Planck-Institut für extraterrestrische Physik, Gießenbachstraße 1, 85748,
Garching, Germany
3School of Astronomy and Space Science, Nanjing University, Nanjing 210046,
China
4Columbia Astrophysics Laboratory, Columbia University, Columbia, NY, 10027,
USA
5INAF – Istituto di Astrofisica Spaziale e Fisica Cosmica, via A. Corti 12,
20133 Milano, Italy
6Department of Physics and Astronomy, University of California, Los Angeles,
CA, 90095-1547, USA
7Institute of Space Sciences (ICE, CSIC), Campus UAB, Carrer de Can Magrans
s/n, E-08193 Barcelona, Spain
8Institut d’Estudis Espacials de Catalunya (IEEC), Carrer Gran Capità 2–4,
08034 Barcelona, Spain
# Periodicity from X-ray sources within the inner Galactic disk
Samaresh Mondal 11 Gabriele Ponti 1122 Tong Bao 33 Frank Haberl 22 Sergio
Campana 11 Charles J. Hailey 44 Shifra Mandel 44 Sandro Mereghetti 55 Kaya
Mori 44 Mark R. Morris 66 Nanda Rea 7788 and Lara Sidoli 55
(Received XXX; accepted YYY)
###### Abstract
Aims. For many years it had been claimed that the Galactic ridge X-ray
emission at the Galactic Center (GC) is truly diffuse in nature. However, with
the advancement of modern X-ray satellites, it has been found that most of the
diffuse emission actually comprises thousands of previously unresolved X-ray
point sources. Furthermore, many studies suggest that a vast majority of these
X-ray point sources are magnetic cataclysmic variables (CVs) and active
binaries. One unambiguous way to identify these magnetic CVs and other sources
is by detecting their X-ray periodicity. Therefore, we systematically searched
for periodic X-ray sources in the inner Galactic disk, including the GC
region.
Methods. We used data from our ongoing XMM-Newton Heritage Survey of the inner
Galactic disk ($350\degr\lesssim l\lesssim+7\degr$ and $-1\degr\lesssim
b\lesssim+1\degr$) plus archival XMM-Newton observations of the GC. We
computed the Lomb-Scargle periodogram for the soft (0.2–2 keV), hard (2–10
keV), and total (0.2–10 keV) band light curves to search for periodicities.
Furthermore, we modeled the power spectrum using a power-law model to simulate
1000 artificial light curves and estimate the detection significance of the
periodicity. We fitted the energy spectra of the sources using a simple power-
law model plus three Gaussians, at 6.4, 6.7, and 6.9 keV, for the iron $K$
emission complex.
Results. We detected periodicity in 26 sources. For 14 of them, this is the
first discovery of periodicity. For the other 12 sources, we found periods
similar to those already known, indicating no significant period evolution.
The intermediate polar (IP) type sources display relatively hard spectra
compared to polars. We also searched for the Gaia counterparts of the periodic
sources to estimate their distances using the Gaia parallax. We found a likely
Gaia counterpart for seven sources.
Conclusions. Based on the periodicity, hardness ratio, and the equivalent
width of Fe $K$ line emission, we have classified the sources into four
categories: IPs, polars, neutron star X-ray binaries, and unknown. Of the 14
sources for which we detect the periodicity for the first time, four are
likely IPs, five are likely polars, two are neutron star X-ray binaries, and
three are of an unknown nature.
###### Key Words.:
X-rays:binaries – Galaxy:center – Galaxy:disk – white dwarfs – pulsars –
novae, cataclysmic variables
## 1 Introduction
In order to understand the star formation history of our Galaxy, it is
important to know the number of stars that ended their main-sequence life long
ago. Compact remnants of dead stars, such as black holes, neutron stars (NSs),
and white dwarfs (WDs), are commonly found in binary systems and are visible
in X-rays. Accreting WD binaries are the most common type of remnant in our
Galaxy as WDs are the end product of intermediate- and low-mass stars. Many of
these low-mass stars are born in binary systems with small separations that go
through one or more mass transfer phase, leading to the formation of
cataclysmic variables (CVs). More than a thousand CVs have been found in the
solar neighborhood (Downes et al., 2001; Ritter & Kolb, 2003). CVs are
categorized as magnetic or nonmagnetic (see Cropper, 1990; Patterson, 1994;
Mukai, 2017, for a review). Magnetic CVs are primarily categorized into two
subtypes: polar and intermediate polar (IP). Polars have a very strong
magnetic field ($>10$ MG), which synchronizes the spin and orbital motion
(i.e., $P_{\rm spin}=P_{\rm orb}$). The high magnetic field in polars is
confirmed by the observation of strong optical polarization and the
measurement of cyclotron humps (Warner, 2003). In polars, the accretion
directly follows the magnetic field lines from the L1 point, and no accretion
disk is formed. IPs have a relatively weak surface magnetic field of $1-10$
MG; therefore, they have less synchronization, and an accretion disk is
created. In these systems, the material leaving the L1 point forms an
accretion disk until the magnetic pressure becomes equal to the ram pressure
of the accreting gas. The X-ray emission from CVs originates from close to the
magnetic pole. The accreting material follows the magnetic field lines, and as
it approaches the WD surface, the radial in-fall velocity reaches supersonic
speeds of 3000-10000 km s-1. A shock front appears above the WD surface, and
the in-falling gas releases its energy in the shock, resulting in hard X-ray
photons (Aizu, 1973; Saxton et al., 2005).
Early observations of the Galactic Center (GC) revealed a diffuse X-ray
emission (Worrall et al., 1982; Warwick et al., 1985; Yamauchi et al., 1996)
called Galactic ridge emission. For many years a central point of debate has
been whether the Galactic ridge emission is truly diffuse or composed of
emission from many unresolved X-ray point sources. The advent of modern X-ray
satellites such as XMM-Newton and Chandra opened up the possibility of
detecting very faint X-ray sources in crowded regions such as the inner GC.
This is not possible in the optical waveband due to the high extinction toward
the GC. A deep Chandra observation of the inner Galactic bulge has
demonstrated that more than 80% of the Galactic ridge emission is produced by
CVs and coronally active stars (Wang et al., 2002; Revnivtsev et al., 2009;
Muno et al., 2003a, 2009; Zhu et al., 2018). Although this strongly indicates
that a large fraction of the X-ray sources observed toward the GC are magnetic
CVs, the physical nature of CVs in the GC remains unclear. Moreover, it was
suggested that, based on the hard power-law-type spectral shape and the
emission of Fe $K$ complex lines, the majority are IPs (Muno et al., 2004).
The Galactic ridge X-ray emission displays a copious amount of lines from
ionized iron at 6.7 and 6.9 keV. Some studies that compared the stellar mass
distribution with the Fe XXV (6.7 keV) line intensity map suggest the presence
of truly diffuse hard X-ray emission (Uchiyama et al., 2011b; Nishiyama et
al., 2013; Yasui et al., 2015). However, a recent study by our group found
that this diffuse hard emission in the GC can be explained if one assumes that
the GC stellar population has iron abundances $\sim 1.9$ times higher than
those in the Galactic bar/bulge (Anastasopoulou et al., 2023). Furthermore,
the 20–40 keV emission from the GC observed by NuSTAR is best explained by
two-temperature plasma models with $kT_{1}\sim 1$ keV and $kT_{2}\sim 7.5$
keV. The $\sim 1$ keV temperature component is attributed to emission from
supernovae heating the interstellar medium, coronally active stars, and
nonmagnetic WDs (Revnivtsev et al., 2009). The $\sim 7.5$ keV temperature
component is thought to be produced by emission from resolved and unresolved
accreting IPs (Perez et al., 2015). An additional component with a higher
plasma temperature, $kT\sim 35$ keV (Hailey et al., 2016), was recently
measured. In addition, Muno et al. (2003b) reported the discovery of eight
periodic sources in a $17^{\prime}\times 17^{\prime}$ field of the GC. Their
periods range from 300 s to 4.5 hr. All these sources exhibit hard power-law-
type spectral shapes (with photon index $\Gamma\sim 0$) and 6.7 keV iron-line
emission. These properties are consistent with magnetically accreting magnetic
CVs.
We are in the process of performing an X-ray scan of the inner Galactic disk
using XMM-Newton (Jansen et al. 2001; PI: G. Ponti). The main aim of this
survey is to constrain the flow of hot baryons that feed large-scale energetic
events such as the Galactic chimneys (Ponti et al., 2019, 2021), the _Fermi_
bubbles (Su et al., 2010), and the eROSITA bubbles (Predehl et al., 2020). In
early 2021, while performing this survey, we detected an X-ray source with
periodic modulation at 432 s (Mondal et al., 2022). The source was previously
observed by Suzaku in 2008 (Uchiyama et al., 2011a) and classified as an IP.
Furthermore, while examining XMM-Newton archival observations, we discovered
periodicity in two other sources within $1.5\degr$ of the GC
(Mondal et al., 2023). These two sources are also classified as IPs based on
the detected spin period and detection of an iron emission complex in the
spectra. Therefore, we took a systematic approach to hunt for such periodic
X-ray sources that might help us classify them. In this paper we report the
discoveries obtained from a periodicity search using XMM-Newton observations
of the Galactic disk and the GC.
## 2 Observations and data reduction
We have almost completed one side of the Galactic disk extending from $l\geq
350\degr$ to $l\leq+1.5\degr$ (see Fig. 1). The survey has an exposure of 20
ks per tile and is expected to cover the Galactic disk region in the range
$350\degr<l<+7\degr$ and $-1\degr<b<+1\degr$. During this campaign, we
detected thousands of X-ray point sources of various types. A forthcoming
paper will present a sample study of the X-ray point sources. Here we are
focusing on X-ray sources that show periodic modulations. While doing this
analysis, we considered including the GC region for positional comparison of
the sources located in the disk and GC (Ponti et al., 2013, 2015). For the GC,
we used the XMM-Newton archival observations. In total, we analyzed 444 XMM-
Newton observations, including our Galactic disk scanning observations plus
the archival observations of the GC. The observation data files were processed
using the XMM-Newton Science Analysis System (SAS,
v19.0.0)111https://www.cosmos.esa.int/web/xmm-newton/sas. We used the task
evselect to construct a high energy background light curve (energy between 10
and 12 keV for EPIC-pn and above 10 keV for EPIC-MOS1 and MOS2) by selecting
only PATTERN==0. The background light curve was used to filter high background
flaring activity and create good time intervals. We used the SAS task
emldetect for point source detection and source list creation. For each
individual detector, EPIC-pn, MOS1, or MOS2 (Strüder et al., 2001; Turner et
al., 2001), the source detection was performed in five energy bands: 0.2–0.5
keV, 0.5–1 keV, 1–2 keV, 2–4 keV, and 4–12 keV for a given single observation.
The source detection algorithm separately provides net counts and maximum
likelihood values for the five energy bands and three detectors: EPIC-pn,
MOS1, and MOS2. The source detection tool also provides the keyword EXT value
that indicates whether the emission is from a point-like or extended source.
We chose EXT $=0$ to select the point sources only. The total number of point
sources detected in our survey is $\sim 50000$. Then, we applied a filter in
which only sources with a total number of counts (EPIC-pn+MOS1+MOS2) higher
than 200 were chosen. This resulted in 2500 point sources for which we
extracted the light curves using the SAS task evselect after applying the
Solar System barycenter correction to the event files using the SAS task
barycen. We only selected events with PATTERN$\leq$4 and PATTERN$\leq$12 for
EPIC-pn and the MOS1 and MOS2 detectors, respectively. We chose a circular
region of 20″ radius for the source products extraction. The background
products were extracted from an annular region centered on the source position
with inner and outer radii of 25″ and 30″, respectively. The spectra were
binned to have a minimum of 20 counts in each energy bin. Many fields of the
GC have been observed more than once. If a source has been observed multiple
times, we searched for pulsations in each observation individually.
## 3 Results
Figure 1: Mosaic of the exposure maps created using the ongoing XMM-Newton
observations of the Galactic disk plus archival observations of the GC. The
small red, blue, and green circles show the positions of confirmed or likely
NSs, IPs, and polars, respectively. The black circles indicate the
unclassified sources.
### 3.1 Period search
The XMM-Newton light curves contain gaps due to the filtering of high
background flaring activity. We therefore used the Lomb-Scargle periodogram
(Lomb, 1976; Scargle, 1982), which is well suited to detecting periodic signals
in unevenly sampled time series data. We computed the false alarm probability
to estimate the statistical
significance of the periodogram peaks. The false alarm probability obtained is
based on the analytical approximation proposed by Baluev (2008), which employs
extreme value statistics to compute an upper bound of the false alarm
probability (or a lower limit of the significance of the detected
periodicity). For our timing analysis, we used the PYTHON-based astropy
(Astropy Collaboration et al., 2013, 2018, 2022) package’s time-series
module222https://docs.astropy.org/en/stable/timeseries/index.html.
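As a minimal sketch of this procedure (with a synthetic light curve standing in
for the real data), astropy's LombScargle class provides both the periodogram
and the Baluev (2008) upper bound on the false alarm probability:

```python
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 20000, 2000))   # ~20 ks of unevenly sampled bins [s]
y = 5 + np.sin(2 * np.pi * t / 607.0) + rng.normal(0, 1, t.size)  # fake pulsator

ls = LombScargle(t, y)
freq, power = ls.autopower(maximum_frequency=0.16)  # MOS frame time limits f_max

# Upper bound on the false alarm probability of the highest peak (Baluev 2008)
fap = ls.false_alarm_probability(power.max(), method='baluev')
best_period = 1.0 / freq[np.argmax(power)]
print(f"best period = {best_period:.1f} s, FAP <= {fap:.2e}")
```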
We extracted the light curves in three different bands (0.2–2, 2–10, and
0.2–10 keV) for the three EPIC detectors (pn, MOS1, and MOS2). The light
curves were extracted with a time bin of 74 ms for the EPIC-pn and 3 s for the
MOS1 and MOS2 detectors in full frame mode of observation. The Lomb-Scargle
periodogram was computed for all nine light curves of each source, and a
periodicity search was conducted. The EPIC-pn detector has a frame time of 74
ms, which allowed us to probe a maximum frequency of $\sim 6$ Hz, whereas in
the case of the MOS1 and MOS2 detectors, we were able to probe a maximum
frequency of $\sim 0.16$ Hz. We imposed the criterion that the periodicity
detected at a frequency below 0.16 Hz should be present in all three
detectors. To search for periodicity at a higher frequency within the range
0.16–6 Hz, we used only the data from EPIC-pn. We have detected periodicity at
a significance above $3\sigma$ in 23 sources. Possible periodicities with
significance between 2 and 3$\sigma$ were found in another three sources333We
also detected a few sources with detection significance of the pulsation just
above the 1$\sigma$ confidence level. We did not list these sources in Table
2; one such example is the transient Galactic bulge IP XMMU J175035.2–293557
(Hofmann et al., 2018).. Figures 8 and 9 show the Lomb-Scargle periodograms of
the 26 sources, with the horizontal lines indicating the detection
significance levels.
Figure 1 shows the mosaic of the exposure maps created from our ongoing XMM-
Newton observations of the Galactic disk plus the XMM-Newton archival
observations of the GC. The small circles indicate the positions of the
periodic sources. We have completed the survey of one side of the Galactic
disk extending to $\sim 350^{\circ}$; however, most pulsators are concentrated
near the GC.
Table 2 shows the details of the X-ray properties of the periodic sources. The
period column shows the pulsation period obtained in our analysis and compares
it with the previously reported period. The pulse fraction is computed in the
2–10 keV band. The names of the sources are taken from the 4XMM catalog except for
the sources XMMU J173029.8–330920, XMMU J175441.9–265919, and XMMU
J180140.3–23422, which were not listed in the 4XMM (Webb et al., 2020) archive
as these sources were first detected in our campaign. The X-ray position and
$1\sigma$ positional error of the sources are taken from the 4XMM catalog. We
also list the source type based on previous studies, and for sources that were
not classified before, we give a tentative classification based on the X-ray
period, hardness ratio (HR) values, and spectral properties. Figure 2 shows
the distribution of the log of pulse period for various source types. Figure 3
shows the distribution of pulse fraction for the different categories. The
pulse fraction was computed as $\rm
PF=\frac{F_{max}-F_{min}}{F_{max}+F_{min}}$, where $\rm F_{max}$ and $\rm
F_{min}$ are the maximum and minimum counts in the folded light curves,
respectively. For sources with more than one XMM-Newton observation, we report
the periodicity from the multiple observations in Tables 3 and 4.
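A minimal sketch of the pulse-fraction computation, folding a (synthetic) 2–10
keV light curve on the detected period and applying the definition above;
variable names are illustrative:

```python
import numpy as np

def pulse_fraction(t, counts, period, nbins=16):
    """Fold the light curve on `period` and return (Fmax-Fmin)/(Fmax+Fmin)."""
    phase_bin = ((t % period) / period * nbins).astype(int)
    folded = np.bincount(phase_bin, weights=counts, minlength=nbins)
    return (folded.max() - folded.min()) / (folded.max() + folded.min())

rng = np.random.default_rng(2)
t = np.arange(0, 20000, 3.0)   # 3 s time bins
counts = rng.poisson(2.0 * (1 + 0.5 * np.sin(2 * np.pi * t / 607.0)))
print(f"PF = {pulse_fraction(t, counts, 607.0):.0%}")
```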
Figure 2: Distribution of log(period) for various source types. Figure 3:
Distribution of the 2-10 keV pulse fraction in percent for different source
types.
### 3.2 Caveats for the false alarm probability
Accretion-powered systems such as CVs and X-ray binaries are known to exhibit
aperiodic variability over a wide range of timescales. This irregularity,
often referred to as ”red noise,” constitutes a significant aspect of
aperiodic variability and has the potential to introduce spurious periodic
signals, especially at lower frequencies (Warner, 1989). Consequently, it is
essential to assess the likelihood of false detections among these periodic
signals found with the Lomb-Scargle periodogram method by using a large
simulated dataset (Bao & Li, 2022).
Specifically, we employed a power-law model to characterize the source power
spectrum, which has the form of
$P(\nu)=N\nu^{-1}+C_{\rm p}.$ (1)
In this equation, $N$ represents the normalization factor, and $C_{\rm p}$
accounts for the Poisson noise, which is influenced by the mean photon flux of
the source.
To begin, we estimated the power spectral density (PSD) using the standard
periodogram approach with an $\rm[rms/mean]^{2}$ PSD normalization (Vaughan et
al., 2003). However, as mentioned in Section 3.1, some of the light curves
suffer from gaps due to background flares. For these cases, we filled the gaps
with artificial light curves of Poisson noise, assuming a mean flux consistent
with that of the source. Although such processing introduces only small
differences in the derived PSDs, for most of the periodic sources here these
gaps are negligible in duration (i.e., they take less than
0.5% of the total exposure time). Only one case exhibits a significant data
gap, which takes $\sim 1.4\%$ of the single observation, with
ObsID=0783160101. This source (4XMM J174816.9-280750) consistently exhibits
the same periodic signal across multiple observations (see Table 4). Thus, any
uncertainty introduced into its confidence estimate by the gap filling does not
affect the verification of its periodicity.
We fitted the power spectrum of each source with Eq. 1, using the maximum
likelihood function discussed in Vaughan (2010) and the Markov chain Monte
Carlo approach, employing the Python package _emcee_
444https://emcee.readthedocs.io/en/stable/ (Foreman-Mackey et al., 2013) to
derive the best-fit parameters and their associated uncertainties.
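A compact sketch of this fit, assuming `freq` and `psd` hold the
$\rm[rms/mean]^{2}$-normalized periodogram, using the Whittle likelihood for
exponentially distributed periodogram ordinates (Vaughan, 2010); the toy data
below merely stand in for a real periodogram:

```python
import numpy as np
import emcee

def log_prob(theta, freq, psd):
    log_N, log_C = theta
    model = 10**log_N / freq + 10**log_C          # Eq. (1): P(nu) = N/nu + C_p
    # Whittle log-likelihood: -sum[ ln(model) + data/model ]
    return -np.sum(np.log(model) + psd / model)

rng = np.random.default_rng(1)
freq = np.fft.rfftfreq(4096, d=3.0)[1:]           # drop the zero-frequency bin
true = 1e-3 / freq + 2.0
psd = true * rng.exponential(1.0, freq.size)      # chi^2_2-distributed ordinates

nwalkers, ndim = 32, 2
p0 = np.array([-3.0, 0.3]) + 1e-2 * rng.normal(size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(freq, psd))
sampler.run_mcmc(p0, 2000, progress=False)
log_N, log_C = np.median(sampler.get_chain(discard=500, flat=True), axis=0)
print(f"N ~ {10**log_N:.2e}, C_p ~ {10**log_C:.2f}")
```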
It turns out that only three of the periodic sources could be adequately
described by the power-law model with constrained normalization, implying a
potential influence of red noise. For the remaining sources, Poisson noise
actually dominates the power spectrum. Thus, for the sources with potential red
noise, simulated light curves for the best-fit power-law model were constructed
using the method of Timmer & Koenig (1995); these were resampled and binned to
have the same duration, mean count rate, and variance as the observed light
curve. For sources where Poisson noise prevailed, we
followed a similar procedure to simulate their light curves, assuming pure
Poisson noise. A group of 1000 simulated time series was produced for each
source. To evaluate the false alarm level, we computed the maximum Lomb-
Scargle power for each simulated light curve. Specifically, we considered the
top 0.3% of the maximum Lomb-Scargle power from the 1000 Lomb-Scargle
periodograms, corresponding to the 3$\sigma$ confidence level estimation, and
the top 5% as the threshold corresponding to 2$\sigma$ (approximately 95%).
These simulated thresholds were then overlaid on the Lomb-Scargle periodogram
(Figs. 8 and 9), and the confidence levels, calculated using Baluev’s analysis
method (Baluev, 2008), were compared. It turns out that 23 sources exceed the
simulation-based 3$\sigma$ threshold, and 17 of them exceed the 3$\sigma$
threshold of Baluev’s method. The deviation between the two arises mainly
because the Baluev method, by design, provides an upper limit to the false
alarm probability, slightly overestimating it (Baluev, 2008).
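A sketch of this simulation step under illustrative parameters: light curves
are drawn from the best-fit power-law PSD with the Timmer & Koenig (1995)
algorithm (the resampling to match the observed mean and variance is replaced
here by a crude rescaling, which the scale-invariant standard Lomb-Scargle
normalization tolerates), and the top 0.3% of maximum powers defines the
3$\sigma$ threshold:

```python
import numpy as np
from astropy.timeseries import LombScargle

def timmer_koenig(freq, psd_model, n):
    """Draw one Gaussian time series whose PSD follows `psd_model` in shape."""
    re = np.random.normal(size=freq.size) * np.sqrt(psd_model / 2)
    im = np.random.normal(size=freq.size) * np.sqrt(psd_model / 2)
    if n % 2 == 0:
        im[-1] = 0.0                       # Nyquist bin must be real for even n
    spec = np.concatenate(([0.0], re + 1j * im))
    return np.fft.irfft(spec, n=n)

dt, n = 3.0, 8192                          # 3 s bins, ~24.6 ks (illustrative)
freq = np.fft.rfftfreq(n, d=dt)[1:]
psd_model = 1e-2 / freq + 2.0              # best-fit P(nu) = N/nu + C_p
t = np.arange(n) * dt

max_powers = []
for _ in range(1000):
    lc = timmer_koenig(freq, psd_model, n)
    lc = lc - lc.min() + 1.0               # crude shift to positive "counts"
    max_powers.append(LombScargle(t, lc).autopower(maximum_frequency=0.16)[1].max())

threshold_3sigma = np.quantile(max_powers, 1 - 0.003)
print(f"simulated 3-sigma max-power threshold ~ {threshold_3sigma:.3f}")
```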
### 3.3 Period and pulse fraction distribution
The top panel of Fig. 2 shows the period distribution of sources in our
sample. The distribution has two peaks, at $\sim 800$ s and $\sim 4800$
s. The first peak is associated with the population of IPs, and the second
peak corresponds to the population of polars.
The spin period of NS and likely NS systems in our sample ranges from 1.36 s
to 411.3 s. The spin period of Galactic NS high-mass X-ray binaries (HMXRBs)
ranges from a few to thousands of seconds, and the distribution has peaks
around $\sim 250$ s (Neumann et al., 2023). The red histogram in the top panel
of Fig. 2 shows the distribution of the period for NS X-ray binaries in our
sample.
The blue and cyan histograms in the middle panel of Fig. 2 show the period
distribution of the known IPs plus the tentative IP identifications in our
sample. The distribution has a peak around 607 s. Typically, the spin
period of IPs ranges between 30 s and 3000 s, with a peak near 1000 s
(Scaringi et al., 2010). The middle panel of Fig. 3 (blue and cyan histogram)
shows the distribution of pulse fraction for IPs. The pulse fraction ranges
from 10% to 80% with a peak near 45%. One prominent feature in IPs is that the
pulse fraction or the modulation depth typically increases with decreasing
X-ray energy. This has been thought to be the effect of photoelectric
absorption (Norton & Watson, 1989). The pulse fraction of IPs covers a wide
range and can vary from a few percent up to $\sim$100%, with an average of
around 24% (Norton & Watson, 1989; Haberl & Motch, 1995).
In polars, the spin and orbital periods are synchronized, ranging from 3000 s
to 30000 s (Scaringi et al., 2010). In our sample, the periods of polars vary
from 4093 s to 6784 s, with the peak at $\sim 4800$ s. The light curves of
polars show a modulation depth that is roughly constant with X-ray energy, and
the depths are generally larger than in IPs (Norton & Watson, 1989). The middle
panel of Fig. 3 shows that the pulse fraction of polars starts at higher
values, around 30%, than that of IPs, and that more polar-type sources are
found between 50% and 60%.
### 3.4 Spectral modeling
We performed time-averaged spectral modeling using the X-ray spectral fitting
software xspec555https://heasarc.gsfc.nasa.gov/xanadu/xspec/ (Arnaud, 1996).
We employed $\chi^{2}$ statistics in our model fitting. The spectra were fitted
using a simple model composed of a power law and three Gaussian lines
(tbabs(power-law+g1+g2+g3)). The Galactic absorption component is represented
by tbabs (Wilms et al., 2000). For the continuum, we used a simple power-law
model, and g1, g2, and g3 represent the three Gaussian lines at 6.4, 6.7, and
6.9 keV, respectively, for the iron emission complex. In the fit, we froze the
line energies at the expected values and fixed the line widths to zero eV. We
jointly fit the spectra of the EPIC-pn, MOS1, and MOS2
detectors. While fitting the spectra, we included a constant factor for cross-
calibration uncertainty, which is fixed to unity for EPIC-pn and allowed for
variation for MOS1 and MOS2. The spectral fitting results are summarized in
Table 5, and Figs. 10 and 11 show the fitted spectra of the sources.
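A hedged PyXspec sketch of this model, assuming HEASOFT/PyXspec is available;
the file name is an illustrative placeholder, the joint EPIC-pn/MOS fit with a
cross-calibration constant is omitted for brevity, and duplicate components
follow PyXspec's name-index convention (gaussian, gaussian_4, gaussian_5):

```python
from xspec import AllData, Fit, Model

AllData("pn_grp.pha")                 # grouped EPIC-pn spectrum (illustrative name)
m = Model("tbabs(powerlaw + gaussian + gaussian + gaussian)")

# Freeze the line energies at 6.4, 6.7, 6.9 keV and fix the widths to 0 eV
for comp, energy in zip((m.gaussian, m.gaussian_4, m.gaussian_5),
                        (6.4, 6.7, 6.9)):
    comp.LineE.values = energy
    comp.LineE.frozen = True
    comp.Sigma.values = 0.0
    comp.Sigma.frozen = True

Fit.statMethod = "chi"                # chi-squared statistics, as in the text
Fit.query = "yes"
Fit.perform()
```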
Figure 4 shows the distribution of absorption column density $N_{\rm H}$
obtained from the X-ray spectral fitting. Overall, the $N_{\rm H}$
distribution has a peak near $10^{22}$ cm-2, and more than 50% of the sources
have $N_{\rm H}$ between $10^{21}-3.16\times 10^{22}$ cm-2. There are three
sources with high $N_{\rm H}$ ($>10^{23}$ cm-2). The source 4XMM J175327.8–295716
has $N_{\rm H}=7\times 10^{20}$ cm-2, the lowest in our sample, which might
indicate that this source is the closest to us among our sample or has a soft
component that mimics the low $N_{\rm H}$.
Figure 5 shows the distribution of photon index $\Gamma$. The distribution has
a peak at $\Gamma\sim 0.6$. More than 50% of the sources have a flat spectral
shape with $\Gamma<1$. A significant number of sources in our sample have a
softer spectrum with $\Gamma>1.2$. We noticed that the majority of sources
with high $\Gamma$ values do not show any iron emission complex lines; only
two of the seven sources with $\Gamma\geq 1.3$, show strong emission lines in
the 6–7 keV band.
Figure 4: Distribution of the absorption column density, $N_{\rm H}$, for
different source types. Figure 5: Distribution of the photon index, $\Gamma$,
for different source types.
### 3.5 Gaia counterparts
A correct estimate of the distance to a source is required to derive its
luminosity. For this, we searched for counterparts in the Gaia DR3 catalog.
For each X-ray source, we computed the Gaia source density by counting the
number of Gaia sources within a circle of 1′ radius at the source position.
The Gaia source density is low, varying from 0.01 to 0.1 arcsec-2. We computed
the probability of a spurious association by multiplying the Gaia source
density by the area associated with the XMM-Newton positional error. Table 1
lists the sources for which we found a Gaia
counterpart within the 3$\sigma$ positional uncertainty of XMM-Newton. We
found a likely Gaia counterpart for seven XMM-Newton sources. If a counterpart
is found, then we use the Gaia source ID to find the distance to the source
from Bailer-Jones et al. (2021). The distances to the sources with a Gaia
counterpart vary from $\sim 1.5$ to $\sim 5$ kpc, and their X-ray luminosities
lie in the range $5\times 10^{32}-6\times 10^{33}$ erg s-1.
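The chance-coincidence estimate amounts to multiplying the local Gaia density
by the area of the X-ray error circle; a tiny sketch with illustrative numbers:

```python
import numpy as np

def spurious_probability(n_gaia_within_1arcmin, pos_err_arcsec, n_sigma=3):
    """Expected number of chance Gaia matches inside the n-sigma error circle."""
    density = n_gaia_within_1arcmin / (np.pi * 60.0**2)   # sources per arcsec^2
    area = np.pi * (n_sigma * pos_err_arcsec)**2          # error-circle area
    return density * area

# e.g. 120 Gaia sources within 1' (density ~0.01 arcsec^-2), 0.5" 1-sigma error
print(f"P(spurious) ~ {spurious_probability(120, 0.5):.3f}")
```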
## 4 Discussion
### 4.1 Typical properties of different classes of sources
We analyzed 444 XMM-Newton observations of the GC and the Galactic disk. We
extracted X-ray light curves from nearly 2500 sources and systematically
searched for X-ray pulsation. We detected periodicity in 26 sources, 14 of
which are reported here for the first time. Many of the GC sources have a
luminosity of a few times $10^{32}$ erg s-1 (Muno et al., 2003a), which is
comparable to the luminosity typically observed in bright magnetic CVs
(Verbunt et al., 1997; Ezuka & Ishida, 1999). NS HMXRBs have much higher
luminosity and are detected during their outburst period, reaching
luminosities up to $10^{38}$ erg s-1. For the majority of the sources we did
not find a Gaia counterpart due to the high absorption column density toward
the GC and disk. Hence, the X-ray luminosity cannot be derived for a large
number of sources and we cannot use this information to classify them. The NS
HMXRBs are far less common in our Galaxy than magnetic CVs. The nature of
these Galactic X-ray sources is a long-standing question. Identifying the magnetic CVs
or NS HMXRBs from X-ray periodicity alone can be difficult, as both types of
sources usually display periods in a similar range. The short-period
modulation in the X-ray light curve is thought to have originated from the
spin period of the magnetically accreting WD or NS. In our sample, the
smallest detected period is 1.36 s, and the maximum period detected is around
6784 s. The sample has a median period of 672 s. The pulse fraction of the
modulation ranges from 10% to 80%. The detected periods are consistent with
those of magnetic CVs and NSs in HMXRBs. A sample study of magnetic CVs
indicates the median spin period is 6000 s (Ritter & Kolb, 2003). There are
also a few magnetic CVs with very short spin periods; for example, CTCV
J2056–3014 has a spin period of 29.6 s (Lopes de Oliveira et al., 2020), and
V1460 Her has a spin period of 38.9 s (Ashley et al., 2020). The spin periods
of polars are mostly beyond 1 hr, while almost all IPs have WD spin periods
lower than 1 hr. In contrast, 85 Galactic HMXRB pulsars (both with Be and OB
supergiant companions) have a median (mean) spin period of 187 s ($\sim$970
s), with only four sources showing a period longer than 1 hr (Neumann et al.,
2023).
It is evident that the different classes of sources (NS HMXRBs and magnetic
CVs) exhibit a wide range of spin periods. Therefore, from the periodicity
alone, it is difficult to understand the nature of the unclassified sources.
Below we summarize a scheme to characterize the different classes of periodic
X-ray sources utilizing their X-ray spectral and timing properties and their
luminosity.
#### 4.1.1 NS HMXRBs
The NS HMXRBs have properties very similar to those of IPs, and they typically
have very hard spectra. Figure 6 shows the period versus HR plot for
classified and unclassified sources in our sample. The HR is calculated using
the net counts in the 2–5 keV and 5–10 keV bands. We did not choose an energy
band below 2 keV simply because it would be affected by Galactic absorption.
The known NS HMXRBs appear very hard, similar to IPs; however, it is clear
from Fig. 7 that they show much weaker 6.7 keV iron line emission than IPs.
In almost all NS HMXRBs, the dominant component of the Fe K emission complex
is the neutral 6.4 keV line, with little to no ionized 6.7 and 6.9 keV
line emission. This is because HMXRBs are mainly wind-fed systems, so the
fluorescent iron line emission from the wind of the companion star is the main
spectral feature in their spectra, while the ionized iron emission lines
usually come from an accretion disk. The known NS HMXRBs in our sample – 4XMM
J172511.3–361657 and 4XMM J174906.8–273233 – show no 6.7 keV emission, with
upper limits on their equivalent widths (EWs) of 8 eV and 15 eV, respectively.
We define the following criteria for the characterization of NS HMXRBs: (i)
$P_{\rm spin}\lesssim 1000$ s, (ii) HR$>-0.2$, (iii) $\rm EW_{6.7}<50$ eV, and
(iv) a typical X-ray luminosity of $10^{33}-10^{37}$ erg s-1.
#### 4.1.2 IPs
One of the prominent features of IPs is the presence of strong ionized 6.7 keV
line emission. In our sample, all the confirmed IPs have a clear detection of
a 6.7 keV line, with the lowest EW in the sample being $78^{+34}_{-19}$ eV for
4XMM J174517.0–321358. Xu et al. (2016) studied a sample of 17 bright IPs
using Suzaku data. They found that the minimum and mean EW of the 6.7 keV line
of the sample are $58^{+10}_{-13}$ and $107\pm 17$ eV, respectively. The
following criteria can be used to characterize IPs. They typically have (i) a
spin period $P_{\rm spin}<2500$ s, (ii) an HR$>-0.2$, (iii) strong 6.7 keV
line emission with $\rm EW_{6.7}>50$ eV, and (iv) an X-ray luminosity in the
range $10^{31}-10^{35}$ erg s-1 (Suleimanov et al., 2022).
#### 4.1.3 Polars
The X-ray emission from polars is much softer than that of IPs. The spectra of
many polars are dominated by very soft blackbody-like emission from the WD
surface (Osborne et al., 1986; Ramsay et al., 1993; Clayton & Osborne, 1994).
However, toward the GC this component is difficult to observe due to the high
absorption. In general, polars also show a strong 6.7 keV line, with an EW
anywhere from $50$ eV to $\sim 450$ eV (Ezuka & Ishida, 1999; Xu et al.,
2016). That said, detecting the 6.7 keV line in faint polars can be difficult,
as they are much softer than IPs. Polars can be tentatively
classified by having softer spectra and periods above 2500 s; however, a
secure classification would require the detection of the 6.7 keV line with
good quality X-ray spectra and strong circular polarization in the optical
band. Polars can be characterized by the following criteria: (i)
$P_{\rm spin}=P_{\rm orb}>2500$ s, (ii) HR$<-0.2$, (iii) strong 6.7 keV line
emission with $\rm EW_{6.7}>50$ eV, and (iv) an X-ray luminosity below
$10^{33}$ erg s-1 (Suleimanov et al., 2022).
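Collecting the criteria of Sects. 4.1.1–4.1.3, the classification logic can be summarized in a schematic Python function. This is a sketch, not the code used in the analysis: we assume the usual hardness-ratio convention HR = (H − S)/(H + S), with H and S the 5–10 and 2–5 keV net counts, and the luminosity test is skipped when no distance is available, as done for the sources without a Gaia counterpart.

```python
def classify(period_s, hr, ew67_eV, lx=None):
    """Tentative source classification following the criteria of Sect. 4.1.

    period_s : detected period [s]
    hr       : hardness ratio (H - S)/(H + S); H = 5-10 keV, S = 2-5 keV counts
    ew67_eV  : EW of the 6.7 keV line [eV] (measured value or upper limit)
    lx       : X-ray luminosity [erg/s], or None when no distance is known
    """
    def lx_in(lo, hi):
        # Without a Gaia distance, only the first three criteria are applied
        return lx is None or lo <= lx <= hi

    if period_s <= 1000 and hr > -0.2 and ew67_eV < 50 and lx_in(1e33, 1e37):
        return "NS HMXRB"
    if period_s < 2500 and hr > -0.2 and ew67_eV > 50 and lx_in(1e31, 1e35):
        return "IP"
    if period_s > 2500 and hr < -0.2 and ew67_eV > 50 and lx_in(0, 1e33):
        return "polar"
    return "unknown"

# Example: 4XMM J175525.0 (P = 392.5 s, HR = 0.05, EW_6.7 < ~34 eV, no distance)
print(classify(392.5, 0.05, 34))    # -> "NS HMXRB", cf. Sect. 4.5.1
```

Sources with long periods and soft spectra but only an upper limit on the 6.7 keV line, like most of the likely polars of Sect. 4.5.3, fall through to "unknown" in this strict form; as noted above, their polar classification remains tentative until the line is detected.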
Figure 6: HR vs. period diagram. The HR is calculated using the net counts of
two bands: 2–5 keV and 5–10 keV. Figure 7: EW of the 6.7 keV line vs. period
diagram.
### 4.2 Known NS HMXRBs
The source 4XMM J172511.3–361657 was discovered on 9 February 2004 by INTEGRAL
and named IGR J17252–3616 (Walter et al., 2004). XMM-Newton observed the
source on 21 March 2004. A period search was performed by Zurita Heras et al.
(2006), and a pulsation of $414.8\pm 0.5$ s was discovered. An orbital period
of $9.737\pm 0.004$ days was also reported using Rossi X-ray Timing
Explorer (RXTE) proportional counter array data (Markwardt & Swank, 2003;
Corbet et al., 2005). The source has a flat spectrum with
$\Gamma=0.82^{+0.04}_{-0.04}$, which can also be fitted by a flat power law
with an energy cutoff or a Comptonized model with $kT\sim 5.5$ keV (Zurita
Heras et al., 2006). The spectrum shows a 6.4 keV iron line with an EW of
$70^{+6}_{-7}$ eV. Previous studies indicate the source is a wind-fed
accreting pulsar with a supergiant companion star. The source has been
observed multiple times by XMM-Newton, and we searched for pulsation in all
the observations. The periods found in the different observations are
consistent with each other within their 1$\sigma$ uncertainties. The source is
highly variable, and the flux of the source can vary from $2.19\times
10^{-13}$ to $7.42\times 10^{-11}$ erg s-1 cm-2. We noticed that whenever the
source flux drops below $\sim 5\times 10^{-13}$ erg s-1 cm-2 the pulsation was
undetectable.
The source 4XMM J174906.8–273233 was discovered in 1996 by ASCA. The source is
also known as AXJ1749.1–2733 (Sakano et al., 2002). In a 1995 ASCA
observation, the source was not detected, and in 2003 INTEGRAL caught a short
outburst, which indicates its transient nature (Grebenev et al., 2004). XMM-
Newton first observed 4XMM J174906.8–273233 on 31 March 2007, and Karasev et
al. (2008) analyzed EPIC-pn data and detected a spin period of 132 s. The
source was classified as a transient X-ray pulsar in a high-mass binary
system. The source has been observed twice by XMM-Newton in 2007 and 2008;
however, the pulsation was only detected in the 2007 observation (ObsID:
0510010401). The non-detection of pulsations in 2008 could be due to the
combination of two factors: (1) the source flux was almost an order of
magnitude fainter than in the 2007 observation, and (2) the 2008 observation
had a shorter exposure than the 2007 observation, which led to $\sim 22$ times
fewer net counts in the 2008 observation than in the 2007 observation. The
source spectrum is heavily absorbed and can be fitted by a steep power-law
model with $\Gamma=1.3^{+0.1}_{-0.1}$; adding an iron line at 6.4 keV only
marginally improves the fit.
### 4.3 Known IPs
The source 4XMM J174517.0–321358 (Gong, 2022; Vermette et al., 2023) was
discovered by Chandra and serendipitously observed by XMM-Newton in 2010. An
iron-line emission complex and a pulsation of 614 s were detected using XMM-
Newton data. The source is classified as an IP with a WD of $0.8M_{\odot}$
(Vermette et al., 2023). The source has been observed twice by XMM-Newton, and
in both observations, we detected a pulsation of 613 s. The X-ray spectrum
looks like that of a typical IP with a flat spectral shape and iron emission
complex.
The source 4XMM J174033.8–301501 was discovered by Suzaku in 2008 (Uchiyama et
al., 2011a). Later, the source was observed by XMM-Newton on 18 March 2021
during a Galactic disk survey (Mondal et al., 2022). The source spectrum is
well described by emission from collisionally ionized diffuse gas with a
plasma temperature of $\sim 15.7$ keV plus an iron line emission complex. A
period of 432.4 s was detected in both Suzaku and XMM-Newton data. The source
has been observed twice by XMM-Newton in 2018 and 2021. In both XMM-Newton
observations, the detected pulsations are consistent. The source has a flat
spectrum with $\Gamma=0.5^{+0.1}_{-0.1}$ and an Fe emission complex in the 6–7
keV band.
4XMM J174954.6–294336 was first discovered by Chandra (Jonker et al., 2014).
The source is classified as an IP based on the spin period of 1002 s and hard
power-law spectral shape with complex iron line emission (Johnson et al.,
2017; Mondal et al., 2023). This is only the second known IP that shows
eclipses in X-rays. The source has been observed twice by XMM-Newton, and the
pulsation is not visible in ObsID 0801681401. Mondal et al. (2023) discuss the
possibility that the pulsation is suppressed due to a complex absorption
behavior and the eclipse seen in the X-ray light curve.
4XMM J174917.7–283329 is classified as an IP (Mondal et al., 2023). A period of
1212 s was detected in a 2017 XMM-Newton observation. The continuum is best
fitted by a partially absorbed apec model with a plasma temperature of $13$
keV. The source has been observed three times by XMM-Newton, but the pulsation
was detected only once, when the source flux was one order of magnitude higher
than in the other two observations.
The source 4XMM J174816.9–280750 was observed by BeppoSAX during the GC survey
in 1997–1998 (Sidoli et al., 2006). The source has a spectrum with
$\Gamma=1.3^{+0.6}_{-0.6}$ and strong emission lines at 6–7 keV, and a
coherent pulsation with a period of 593 s was found in Suzaku and XMM-Newton
data. These facts favor an IP classification (Nobukawa et al., 2009). The source has
been observed ten times by XMM-Newton, displaying significant variation in the
pulsation period between different observations. A detailed, in-depth study of
the source is required to determine whether the pulsation period variation is
due to accretion or some other effects, such as the propeller phenomenon.
4XMM J174016.0–290337 was observed by XMM-Newton on 29 September 2005 (Farrell
et al., 2010). The source displays Fe $K_{\alpha}$ emission and a periodic
modulation with a period of 626 s. The source has been observed three times by
XMM-Newton, and in all cases, a pulsation period of 622 s is detected.
The source 4XMM J174009.0–284725 was first discovered by ASCA (Sakano et al.,
2000) and a period of 729 s was found. The source was classified as an NS
pulsar based on its flat power-law spectral shape (Sakano et al., 2000).
However, later near-infrared/optical studies suggested it is an IP (Kaur et
al., 2010). The source has been observed four times by XMM-Newton, and we
detected a similar pulsation period value in all observations. The source has
a very flat spectrum ($\Gamma=0.1^{+0.1}_{-0.1}$) with strong emission lines.
4XMM J174622.7–285218 is classified as an IP (Nucita et al., 2022). The source
was first observed in a Chandra observation of the GC, and a periodic signal
of 1745 s was found (Muno et al., 2009). The spectrum is characterized by
$\Gamma=0.7^{+0.2}_{-0.2}$, and the 6.9 keV line is the strongest, with an EW
of $242^{+81}_{-74}$ eV.
### 4.4 Known polars
The source 4XMM J174728.9–321441 was first observed by Chandra during the
Galactic bulge survey (Jonker et al., 2014). The source is classified as a
polar based on its long period of 4860 s detected in X-rays and in He ii
$\lambda$5412 line emission (Wevers et al., 2017). This source has the
steepest spectrum in our sample with $\Gamma=1.8^{+0.4}_{-0.4}$, and no iron
emission complex was detected in the XMM-Newton spectrum. The non-detection of
iron lines could be due to low signal-to-noise in the data.
### 4.5 Unclassified sources
Below, we try to classify the unknown sources using the scheme defined in
Sect. 4.1. This is a tentative classification; further follow-up of the
individual sources is required to constrain their true nature. For many
sources, we do not have any Gaia counterpart; therefore, a parallax-based
distance estimate was not possible and the luminosity could not be calculated.
In such cases, we used only the first three criteria for classification.
#### 4.5.1 Likely NS HMXRBs
The only two sources matching the NS HMXRB criteria are XMMU J175441.9–265919
and 4XMM J175525.0–260402. Both have relatively high HR values: $0.02\pm 0.06$
and $0.05\pm 0.02$, respectively. The upper limits on the EWs of the 6.7 keV
line for J175441.9 and J175525.0 are $\sim 28$ eV and $\sim 34$ eV,
respectively. The spin periods of these two systems are 1.36 s and 392.5 s.
The source J175525.0 was detected three times by XMM-Newton; however, the
pulsation was detected only in the longest observation. A luminosity
estimation was not possible as we did not find any counterparts in Gaia
catalogs.
#### 4.5.2 Likely IPs
The sources we categorize as IP are 4XMM J173058.9–350812, 4XMM
J175301.3–291324, 4XMM J175740.5–285105, and 4XMM J175511.6–260315. The
periods found from these systems are below 2500 s and the HRs are above -0.2.
The 6.7 keV line EWs for the sources J173058.9, J175301.3, J175740.5, and
J175511.6 are $236^{+105}_{-83}$, $105^{+189}_{-105}$, $112^{+61}_{-41}$, and
$194^{+162}_{-188}$ eV, respectively. The source J175301.3 was observed three
times by XMM-Newton, and a $\sim 672$ s period was consistently found in two
of those observations. The source J175511.6 was detected three times by XMM-
Newton; however, the period of $\sim 1135$ s was detected only in the longest
observation. We detected a likely Gaia counterpart for the sources J173058.9
and J175511.6 and the distances estimated from their parallaxes are
$3.2^{+2.2}_{-1.3}$ and $4.9^{+3.2}_{-2.4}$ kpc, respectively. The luminosity
of these two sources is in the range $(0.8-1.9)\times 10^{33}$ erg s-1, which
is typical for accreting magnetic CVs (Suleimanov et al., 2022).
#### 4.5.3 Likely polars
The sources that are likely to be polars are 4XMM J173837.0–304818, 4XMM
J175327.8–295716, 4XMM J175244.4–285851, 4XMM J175328.4–244627, and XMMU
J180140.3–234221. These sources have very low HR values and long periods (see
Fig. 6), suggesting that they are most likely polars. The
long periods are most likely associated with the synchronized spin-orbital
period of the WDs. All these sources have relatively soft spectra, with photon
indices $\Gamma=0.8-2.1$. For most of the polar-type sources, we have an upper
limit on the EW of the 6.7 keV emission line. This is primarily because these
sources have very low net counts that give $\leq 50$ bins in the 0.2–10 keV
spectrum. The source J175328.4 is the brightest in the polar sample and has
the best signal-to-noise spectrum compared to the other four sources. In this
case, the EW of the 6.7 keV emission line is $28^{+26}_{-28}$ eV, which is
much smaller than the typical values found in IPs. The source J175327.8 was
observed six times by XMM-Newton; however, the periodicity was significantly
detected only in the two observations that have an exposure above 25 ks. We
found a likely Gaia counterpart for the source J175328.4 and its estimated
distance is $1.8^{+0.4}_{-0.2}$ kpc, giving a luminosity of $2\times 10^{33}$
erg s-1.
#### 4.5.4 Unknowns
We classify the sources XMMU J173029.8–330920, 4XMM J174809.8–300616, and 4XMM
J175452.0–295758 as unknowns. These sources have high HR and periods similar
to IPs. However, the 6.7 keV line was not detected clearly and we could only
set an upper limit on its EW. The EW upper limits of the 6.7 keV line for the
sources J173029.8, J174809.8, and J175452.0 are $<160$, $<172$, and $<206$ eV,
respectively.
The source J174809.8 has been observed twice by XMM-Newton. However, it is
relatively faint, with a flux of a few times $10^{-13}$ erg cm-2 s-1 and
therefore the period was detected only in the longer observation. The source
J175452.0 was detected three times by XMM-Newton; however, the pulsation was
detected only in the longest observation.
#### 4.5.5 NS or WD?
The compact object in 4XMM J174445.6–271344 is not clearly identified. Also
known as HD 161103, it was observed by XMM-Newton on 26 January 2004. Lopes de
Oliveira et al. (2006) did a detailed multiwavelength spectroscopic study of
this source and suggested that the system hosts an NS; however, a WD scenario
was not excluded. From optical spectroscopy, the companion star of this system
is recognized as a Be star. We detected a periodicity of 3196 s from the X-ray
light curve. The X-ray spectra show strong 6.4, 6.7, and 6.9 keV emission
lines with EWs of $80^{+39}_{-40}$, $371^{+88}_{-76}$, and $109^{+43}_{-47}$
eV, respectively. Such strong 6.7 and 6.9 keV emission lines are not typically
seen in accreting NS HMXRBs. Also, the source has a much softer spectrum ($\rm
HR=-0.42\pm 0.03$ in Fig. 6) than the two confirmed NS HMXRBs (4XMM
J172511.3–361657 and 4XMM J174906.8–273233) in our sample.
## 5 Conclusion
We systematically searched for periodic X-ray sources in the inner Galactic
disk, which extends from $l\sim 350\degr$ to $l\sim+7\degr$ and includes the
GC, using XMM-Newton Heritage observations and archival data. We find 26
sources that show periodicity in their X-ray light curves, of which 12 have
previously reported periods; our measured periods are consistent with the
reported values. The periodicity in the other 14 sources is detected here for
the first time. We classified the sources based on
the values of the HR, period, and iron emission complex in the 6–7 keV band.
Of these 14 sources, we classify two as NS X-ray binaries, four as likely IPs,
five as likely polars, and three as unknowns. The IP-type sources display a
flat X-ray spectrum with $\Gamma\leq 1.1$ and an iron emission complex in the
6–7 keV band. The spectra of polars are much softer than those of IPs.
Table 1: Sources with possible Gaia optical counterparts.
XMM Name | Density | $G_{\rm mag}$ | Plx | Distance | $L_{\rm x}$
---|---|---|---|---|---
 | arcsec-2 | | mas | kpc | erg s-1
4XMM J173058.9 | 0.009 | 20.05 | $0.59$ | $3.2^{+2.2}_{-1.3}$ | $1.9\times 10^{33}$
4XMM J174033.8 | 0.009 | 19.23 | $0.23$ | $4.3^{+1.5}_{-1.3}$ | $6.4\times 10^{33}$
4XMM J174009.0 | 0.03 | 18.79 | $0.71$ | $1.8^{+1.2}_{-0.5}$ | $2.3\times 10^{33}$
4XMM J174954.6 | 0.1 | 18.97 | $0.61$ | $1.6^{+2.8}_{-2.5}$ | $5.3\times 10^{32}$
4XMM J174816.9 | 0.01 | 21.13 | $0.63$ | $2.9^{+2.3}_{-1.4}$ | $8.6\times 10^{32}$
4XMM J175511.6 | 0.02 | 18.64 | $0.40$ | $4.9^{+3.2}_{-2.4}$ | $8.8\times 10^{32}$
4XMM J175328.4 | 0.02 | 8.40 | $0.55$ | $1.8^{+0.4}_{-0.2}$ | $2.2\times 10^{33}$
Notes: Sources with Gaia counterparts found within the 3$\sigma$ positional error
of XMM-Newton. The density was calculated by drawing a circle with a radius of
1 arcmin at the source position.
###### Acknowledgements.
SM and GP acknowledge financial support from the European Research Council
(ERC) under the European Union’s Horizon 2020 research and innovation program
HotMilk (grant agreement No. 865637). SM and GP also acknowledge support from
Bando per il Finanziamento della Ricerca Fondamentale 2022 dell’Istituto
Nazionale di Astrofisica (INAF): GO Large program and from the Framework per
l’Attrazione e il Rafforzamento delle Eccellenze (FARE) per la ricerca in
Italia (R20L5S39T9). KM is partially supported by the NASA ADAP program
(NNH22ZDA001N-ADAP). We thank the referee for the comments, corrections, and
suggestions that significantly improved the manuscript.
## References
* Aizu (1973) Aizu, K. 1973, Progress of Theoretical Physics, 49, 1184
* Anastasopoulou et al. (2023) Anastasopoulou, K., Ponti, G., Sormani, M. C., et al. 2023, A&A, 671, A55
* Arnaud (1996) Arnaud, K. A. 1996, in Astronomical Society of the Pacific Conference Series, Vol. 101, Astronomical Data Analysis Software and Systems V, ed. G. H. Jacoby & J. Barnes, 17
* Ashley et al. (2020) Ashley, R. P., Marsh, T. R., Breedt, E., et al. 2020, MNRAS, 499, 149
* Astropy Collaboration et al. (2022) Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167
* Astropy Collaboration et al. (2018) Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123
* Astropy Collaboration et al. (2013) Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33
* Bahramian et al. (2021) Bahramian, A., Heinke, C. O., Kennea, J. A., et al. 2021, MNRAS, 501, 2790
* Bailer-Jones et al. (2021) Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Demleitner, M., & Andrae, R. 2021, AJ, 161, 147
* Baluev (2008) Baluev, R. V. 2008, MNRAS, 385, 1279
* Bao & Li (2022) Bao, T. & Li, Z. 2022, MNRAS, 509, 3504
* Clayton & Osborne (1994) Clayton, K. L. & Osborne, J. P. 1994, MNRAS, 268, 229
* Corbet et al. (2005) Corbet, R. H. D., Markwardt, C. B., & Swank, J. H. 2005, ApJ, 633, 377
* Cropper (1990) Cropper, M. 1990, Space Sci. Rev., 54, 195
* Downes et al. (2001) Downes, R. A., Webbink, R. F., Shara, M. M., et al. 2001, PASP, 113, 764
* Ezuka & Ishida (1999) Ezuka, H. & Ishida, M. 1999, ApJS, 120, 277
* Farrell et al. (2010) Farrell, S. A., Gosling, A. J., Webb, N. A., et al. 2010, A&A, 523, A50
* Foreman-Mackey et al. (2013) Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306
* Gong (2022) Gong, H. 2022, ApJ, 933, 240
* Grebenev et al. (2004) Grebenev, S. A., Rodriguez, J., Westergaard, N. J., Sunyaev, R. A., & Oosterbroek, T. 2004, The Astronomer’s Telegram, 252, 1
* Haberl & Motch (1995) Haberl, F. & Motch, C. 1995, A&A, 297, L37
* Hailey et al. (2016) Hailey, C. J., Mori, K., Perez, K., et al. 2016, ApJ, 826, 160
* Hofmann et al. (2018) Hofmann, F., Ponti, G., Haberl, F., & Clavel, M. 2018, A&A, 615, L7
* Jansen et al. (2001) Jansen, F., Lumb, D., Altieri, B., et al. 2001, A&A, 365, L1
* Johnson et al. (2017) Johnson, C. B., Torres, M. A. P., Hynes, R. I., et al. 2017, MNRAS, 466, 129
* Jonker et al. (2014) Jonker, P. G., Torres, M. A. P., Hynes, R. I., et al. 2014, ApJS, 210, 18
* Karasev et al. (2008) Karasev, D. I., Tsygankov, S. S., & Lutovinov, A. A. 2008, MNRAS, 386, L10
* Kaur et al. (2010) Kaur, R., Wijnands, R., Paul, B., Patruno, A., & Degenaar, N. 2010, MNRAS, 402, 2388
* Lomb (1976) Lomb, N. R. 1976, Ap&SS, 39, 447
* Lopes de Oliveira et al. (2020) Lopes de Oliveira, R., Bruch, A., Rodrigues, C. V., Oliveira, A. S., & Mukai, K. 2020, ApJ, 898, L40
* Lopes de Oliveira et al. (2006) Lopes de Oliveira, R., Motch, C., Haberl, F., Negueruela, I., & Janot-Pacheco, E. 2006, A&A, 454, 265
* Markwardt & Swank (2003) Markwardt, C. B. & Swank, J. H. 2003, The Astronomer’s Telegram, 179, 1
* Mondal et al. (2022) Mondal, S., Ponti, G., Haberl, F., et al. 2022, A&A, 666, A150
* Mondal et al. (2023) Mondal, S., Ponti, G., Haberl, F., et al. 2023, A&A, 671, A120
* Mukai (2017) Mukai, K. 2017, PASP, 129, 062001
* Muno et al. (2004) Muno, M. P., Arabadjis, J. S., Baganoff, F. K., et al. 2004, ApJ, 613, 1179
* Muno et al. (2003a) Muno, M. P., Baganoff, F. K., Bautz, M. W., et al. 2003a, ApJ, 589, 225
* Muno et al. (2003b) Muno, M. P., Baganoff, F. K., Bautz, M. W., et al. 2003b, ApJ, 599, 465
* Muno et al. (2009) Muno, M. P., Bauer, F. E., Baganoff, F. K., et al. 2009, ApJS, 181, 110
* Neumann et al. (2023) Neumann, M., Avakyan, A., Doroshenko, V., & Santangelo, A. 2023, A&A, 677, A134
* Nishiyama et al. (2013) Nishiyama, S., Yasui, K., Nagata, T., et al. 2013, ApJ, 769, L28
* Nobukawa et al. (2009) Nobukawa, M., Koyama, K., Matsumoto, H., & Tsuru, T. G. 2009, PASJ, 61, S93
* Norton & Watson (1989) Norton, A. J. & Watson, M. G. 1989, MNRAS, 237, 853
* Nucita et al. (2022) Nucita, A. A., Lezzi, S. M., De Paolis, F., et al. 2022, MNRAS, 517, 118
* Osborne et al. (1986) Osborne, J. P., Bonnet-Bidaud, J. M., Bowyer, S., et al. 1986, MNRAS, 221, 823
* Patterson (1994) Patterson, J. 1994, PASP, 106, 209
* Perez et al. (2015) Perez, K., Hailey, C. J., Bauer, F. E., et al. 2015, Nature, 520, 646
* Ponti et al. (2019) Ponti, G., Hofmann, F., Churazov, E., et al. 2019, Nature, 567, 347
* Ponti et al. (2021) Ponti, G., Morris, M. R., Churazov, E., Heywood, I., & Fender, R. P. 2021, A&A, 646, A66
* Ponti et al. (2013) Ponti, G., Morris, M. R., Terrier, R., & Goldwurm, A. 2013, in Astrophysics and Space Science Proceedings, Vol. 34, Cosmic Rays in Star-Forming Environments, ed. D. F. Torres & O. Reimer, 331
* Ponti et al. (2015) Ponti, G., Morris, M. R., Terrier, R., et al. 2015, MNRAS, 453, 172
* Predehl et al. (2020) Predehl, P., Sunyaev, R. A., Becker, W., et al. 2020, Nature, 588, 227
* Ramsay et al. (1993) Ramsay, G., Rosen, S. R., Mason, K. O., Cropper, M. S., & Watson, M. G. 1993, MNRAS, 262, 993
* Revnivtsev et al. (2009) Revnivtsev, M., Sazonov, S., Churazov, E., et al. 2009, Nature, 458, 1142
* Ritter & Kolb (2003) Ritter, H. & Kolb, U. 2003, A&A, 404, 301
* Sakano et al. (2002) Sakano, M., Koyama, K., Murakami, H., Maeda, Y., & Yamauchi, S. 2002, ApJS, 138, 19
* Sakano et al. (2000) Sakano, M., Torii, K., Koyama, K., Maeda, Y., & Yamauchi, S. 2000, PASJ, 52, 1141
* Saxton et al. (2005) Saxton, C. J., Wu, K., Cropper, M., & Ramsay, G. 2005, MNRAS, 360, 1091
* Scargle (1982) Scargle, J. D. 1982, ApJ, 263, 835
* Scaringi et al. (2010) Scaringi, S., Bird, A. J., Norton, A. J., et al. 2010, MNRAS, 401, 2207
* Sidoli et al. (2006) Sidoli, L., Mereghetti, S., Favata, F., Oosterbroek, T., & Parmar, A. N. 2006, A&A, 456, 287
* Strüder et al. (2001) Strüder, L., Briel, U., Dennerl, K., et al. 2001, A&A, 365, L18
* Su et al. (2010) Su, M., Slatyer, T. R., & Finkbeiner, D. P. 2010, ApJ, 724, 1044
* Suleimanov et al. (2022) Suleimanov, V. F., Doroshenko, V., & Werner, K. 2022, MNRAS, 511, 4937
* Timmer & Koenig (1995) Timmer, J. & Koenig, M. 1995, A&A, 300, 707
* Turner et al. (2001) Turner, M. J. L., Abbey, A., Arnaud, M., et al. 2001, A&A, 365, L27
* Uchiyama et al. (2011a) Uchiyama, H., Koyama, K., Matsumoto, H., et al. 2011a, PASJ, 63, S865
* Uchiyama et al. (2011b) Uchiyama, H., Nobukawa, M., Tsuru, T., Koyama, K., & Matsumoto, H. 2011b, PASJ, 63, S903
* Vaughan (2010) Vaughan, S. 2010, MNRAS, 402, 307
* Vaughan et al. (2003) Vaughan, S., Edelson, R., Warwick, R. S., & Uttley, P. 2003, MNRAS, 345, 1271
* Verbunt et al. (1997) Verbunt, F., Bunk, W. H., Ritter, H., & Pfeffermann, E. 1997, A&A, 327, 602
* Vermette et al. (2023) Vermette, B., Salcedo, C., Mori, K., et al. 2023, ApJ, 954, 138
* Walter et al. (2004) Walter, R., Bodaghee, A., Barlow, E. J., et al. 2004, The Astronomer’s Telegram, 229, 1
* Wang et al. (2002) Wang, Q. D., Gotthelf, E. V., & Lang, C. C. 2002, Nature, 415, 148
* Warner (1989) Warner, B. 1989, Information Bulletin on Variable Stars, 3383, 1
* Warner (2003) Warner, B. 2003, Cataclysmic Variable Stars
* Warwick et al. (1985) Warwick, R. S., Turner, M. J. L., Watson, M. G., & Willingale, R. 1985, Nature, 317, 218
* Webb et al. (2020) Webb, N. A., Coriat, M., Traulsen, I., et al. 2020, A&A, 641, A136
* Wevers et al. (2017) Wevers, T., Torres, M. A. P., Jonker, P. G., et al. 2017, MNRAS, 470, 4512
* Wilms et al. (2000) Wilms, J., Allen, A., & McCray, R. 2000, ApJ, 542, 914
* Worrall et al. (1982) Worrall, D. M., Marshall, F. E., Boldt, E. A., & Swank, J. H. 1982, ApJ, 255, 111
* Xu et al. (2016) Xu, X.-j., Wang, Q. D., & Li, X.-D. 2016, ApJ, 818, 136
* Yamauchi et al. (1996) Yamauchi, S., Kaneda, H., Koyama, K., et al. 1996, PASJ, 48, L15
* Yasui et al. (2015) Yasui, K., Nishiyama, S., Yoshikawa, T., et al. 2015, PASJ, 67, 123
* Zhu et al. (2018) Zhu, Z., Li, Z., & Morris, M. R. 2018, ApJS, 235, 26
* Zurita Heras et al. (2006) Zurita Heras, J. A., De Cesare, G., Walter, R., et al. 2006, A&A, 448, 261
## Appendix A Additional figures and tables
Table 2: Various details of the X-ray pulsators in the GC plus Galactic disk.
lat | lon | ObsID | $\rm Pos_{err}$ | Period (s) | PF | XMM Name | PGaia | Sig | Type | References
---|---|---|---|---|---|---|---|---|---|---
 | | | arcsec | Our | Previous | 2–10 keV | | | $>\sigma$ | |
351.4972 | -0.3537 | 0886070601 | 0.14 | $414.5\pm 1.4$ | 414.8 | $68.4\pm 1.6$ | 4XMM J172511.3–361657 | $4\times 10^{-4}$ | 3 | NS HMXRB | Zurita Heras et al. (2006)
353.1032 | -0.6956 | 0861171201 | 0.73 | $607.5\pm 3.7$ | | $45.6\pm 10.8$ | 4XMM J173058.9–350812 | $1.5\times 10^{-2}$ | 3 | Likely IP |
354.7018 | 0.4782 | 0916800201 | 0.35 | $517.6\pm 3.6$ | | $41.3\pm 12.5$ | XMMU J173029.8–330920 | $1\times 10^{-3}$ | 3 | Unknown | Mondal et al. in prep
357.1486 | -1.6563 | 0865510101 | 0.48 | $614.2\pm 1.4$ | 614 | $28.2\pm 4.5$ | 4XMM J174517.0–321358 | $4\times 10^{-3}$ | 3 | IP | Vermette et al. (2023)
357.3792 | -2.0600 | 0743980401 | 0.57 | $4768.3\pm 151.5$ | 4860 | $74.2\pm 22.4$ | 4XMM J174728.9–321441 | $2.1\times 10^{-2}$ | 3 | Polar | Wevers et al. (2017)
357.6116 | 0.3037 | 0886020101 | 0.79 | $5067.3\pm 260.5$ | | $57.9\pm 16.2$ | 4XMM J173837.0–304818 | $1.2\times 10^{-2}$ | 3 | Likely Polar |
358.3043 | 0.2442 | 0886010601 | 0.48 | $432.4\pm 1.9$ | 432.1 | $54.8\pm 7.7$ | 4XMM J174033.8–301501 | $6.8\times 10^{-3}$ | 3 | IP | Mondal et al. (2022)
359.2786 | 0.9300 | 0764191201 | 0.17 | $623.2\pm 2.6$ | 626 | $37.2\pm 3.8$ | 4XMM J174016.0–290337 | $1.4\times 10^{-3}$ | 3 | IP | Farrell et al. (2010)
359.2882 | -1.0793 | 0152920101 | 0.65 | $2179.6\pm 18.2$ | | $64.1\pm 19$ | 4XMM J174809.8–300616 | $2.5\times 10^{-2}$ | 3 | Unknown |
359.4941 | 1.0946 | 0764191101 | 0.6 | $725.9\pm 3.5$ | 729 | $45.1\pm 6.8$ | 4XMM J174009.0–284725 | $3.2\times 10^{-2}$ | 3 | IP | Kaur et al. (2010)
359.8061 | -1.2096 | 0801683401 | 0.56 | $997.7\pm 7.6$ | 1001.5 | $31\pm 8.9$ | 4XMM J174954.6–294336 | $7.7\times 10^{-2}$ | 3 | IP | Mondal et al. (2023)
0.0036 | -1.9883 | 0801682901 | 0.76 | $2917.0\pm 68.5$ | | $54.9\pm 12.7$ | 4XMM J175327.8–295716 | $1.8\times 10^{-1}$ | 2 | Likely Polar |
0.1413 | -0.1089 | 0762250301 | 0.32 | $1737.7\pm 5.4$ | 1745 | $15.8\pm 3.7$ | 4XMM J174622.7–285218 | $2.5\times 10^{-3}$ | 2 | IP | Muno et al. (2009)
0.1476 | -2.2564 | 0402280101 | 0.59 | $552.1\pm 1.4$ | | $45.1\pm 9.8$ | 4XMM J175452.0–295758 | $9.9\times 10^{-2}$ | 3 | Unknown |
0.5849 | -1.5353 | 0801682801 | 0.49 | $672.5\pm 3.0$ | | $77.8\pm 18.9$ | 4XMM J175301.3–291324 | $7.7\times 10^{-2}$ | 3 | Likely IP |
0.7407 | -0.4932 | 0801681301 | 0.44 | $1209.7\pm 11.7$ | 1212.4 | $44.3\pm 8.3$ | 4XMM J174917.7–283329 | $1.3\times 10^{-2}$ | 3 | IP | Mondal et al. (2023)
0.7632 | -1.3587 | 0801682601 | 0.97 | $6784.4\pm 439.5$ | | $73\pm 20.4$ | 4XMM J175244.4–285851 | $2.8\times 10^{-1}$ | 3 | Likely Polar | Bahramian et al. (2021)
0.9918 | -0.0821 | 0783160101 | 0.47 | $593.6\pm 0.7$ | 593 | $41.3\pm 7.3$ | 4XMM J174816.9–280750 | $6.8\times 10^{-2}$ | 3 | IP | Sidoli et al. (2006)
1.3573 | 1.0522 | 0201200101 | 1.9 | $3195.8\pm 116.3$ | 3200 | $28.9\pm 6.5$ | 4XMM J174445.6–271344 | $6.2\times 10^{-1}$ | 3 | NS/WD ? | Lopes de Oliveira et al. (2006)
1.4196 | -2.2266 | 0782770201 | 0.51 | $354.68\pm 0.7$ | | $47.6\pm 10.9$ | 4XMM J175740.5–285105 | $9.2\times 10^{-1}$ | 3 | Likely IP |
1.5909 | 0.0633 | 0510010401 | 1.9 | $132.1\pm 0.3$ | 132 | $28.9\pm 3.4$ | 4XMM J174906.8–273233 | $2.0\times 10^{-1}$ | 3 | NS HMXRB | Karasev et al. (2008)
2.6999 | -0.7212 | 0886081101 | 0.59 | $1.36652\pm 2\times 10^{-5}$ | | $48.1\pm 12.6$ | XMMU J175441.9–265919 | $7.7\times 10^{-3}$ | 3 | Likely NS XRB |
3.5626 | -0.3453 | 0886081301 | 0.53 | $1134.5\pm 12.4$ | | $79.1\pm 14.4$ | 4XMM J175511.6–260315 | $1.7\times 10^{-2}$ | 3 | Likely IP |
3.5766 | -0.3953 | 0886081301 | 0.51 | $392.5\pm 1.5$ | | $28.2\pm 5$ | 4XMM J175525.0–260402 | $1.5\times 10^{-2}$ | 3 | Likely NS XRB |
4.4701 | 0.6376 | 0840910501 | 0.36 | $4093.3\pm 182.3$ | | $34.8\pm 6.1$ | 4XMM J175328.4–244627 | $8.8\times 10^{-2}$ | 3 | Likely Polar |
6.3308 | -0.4448 | 0886110501 | 0.64 | $5215.9\pm 7.5$ | | $61.8\pm 19.3$ | XMMU J180140.3–234221 | $1.7\times 10^{-2}$ | 2 | Likely Polar |
Notes: The details of the X-ray pulsators, including the positional information,
the XMM-Newton detection ID, and the source association in the 4XMM catalog. The
sources XMMU J173029.8–330920, XMMU J175441.9–265919, and XMMU J180140.3–234221
are detected for the first time by XMM-Newton in our ongoing _Heritage_ survey of
the Galactic disk. We also searched for Swift-XRT and Chandra counterparts of
these three sources, but none were found. PGaia represents the probability of a
spurious association with a Gaia source.
Table 3: Details of the pulsators for which more than one observation is
available.
XMM Name | ObsID | Date | Flux | Period | Exposure
---|---|---|---|---|---
 | | | erg s-1 cm-2 | s | ks
4XMM J172511.3–361657 | 0206380401 | 2004-03-21 | $(7.42\pm 0.04)\times 10^{-11}$ | $413.8\pm 3.7$ | 10.9
0405640201 | 2006-08-29 | $(3.94\pm 0.17)\times 10^{-13}$ | | 22.9
0405640301 | 2006-08-31 | $(6.46\pm 0.04)\times 10^{-11}$ | $414.5\pm 3.8$ | 11.3
0405640401 | 2006-09-04 | $(2.55\pm 0.02)\times 10^{-11}$ | $414.4\pm 3.3$ | 12.5
0405640501 | 2006-09-06 | $(3.21\pm 0.08)\times 10^{-12}$ | $409.8\pm 3.5$ | 11.9
0405640601 | 2006-09-08 | $(5.53\pm 0.27)\times 10^{-13}$ | | 13.9
0405640701 | 2006-09-15 | $(1.81\pm 0.02)\times 10^{-11}$ | $414.4\pm 1.7$ | 22.9
0405641001 | 2006-09-27 | $(2.19\pm 0.21)\times 10^{-13}$ | | 12.4
0405640901 | 2006-09-28 | $(2.77\pm 0.02)\times 10^{-11}$ | $413.9\pm 2.6$ | 15.2
0405640801 | 2006-10-01 | $(4.12\pm 0.02)\times 10^{-11}$ | $413.2\pm 2.5$ | 15.7
| 0886070601 | 2006-10-01 | $(3.56\pm 0.03)\times 10^{-11}$ | $414.5\pm 1.4$ | 26.6
4XMM J174517.0–321358 | 0553950201 | 2010-10-09 | $(2.10\pm 0.06)\times 10^{-12}$ | $613.8\pm 0.9$ | 86.4
0870990201 | 2021-02-28 | $(1.24\pm 0.02)\times 10^{-12}$ | $614.2\pm 2.4$ | 31.6
0865510101 | 2021-03-02 | $(1.26\pm 0.02)\times 10^{-12}$ | $614.2\pm 1.4$ | 62.9
4XMM J174033.8–301501 | 0823030101 | 2018-09-29 | $(1.96\pm 0.09)\times 10^{-12}$ | $433.0\pm 5.6$ | 8.0
0886010601 | 2021-03-18 | $(2.88\pm 0.06)\times 10^{-12}$ | $432.4\pm 1.9$ | 23.0
4XMM J174016.0–290337 | 0304220101 | 2005-09-29 | $(3.61\pm 0.13)\times 10^{-12}$ | $624.8\pm 9.4$ | 8.5
0764191201 | 2016-03-05 | $(5.14\pm 0.06)\times 10^{-12}$ | $623.2\pm 2.6$ | 33
0764191101 | 2016-03-05 | $(8.78\pm 0.30)\times 10^{-12}$ | $622.6\pm 2.6$ | 33
4XMM J174809.8–300616 | 0152920101 | 2003-04-02 | $(2.96\pm 0.17)\times 10^{-13}$ | $2179.6\pm 18.2$ | 52.2
0801683301 | 2018-04-06 | $(2.08\pm 0.39)\times 10^{-13}$ | | 29.8
4XMM J174009.0–284725 | 0511010701 | 2008-02-27 | $(3.75\pm 0.09)\times 10^{-12}$ | $733.1\pm 14.6$ | 9.3
0764191501 | 2016-02-25 | $(5.89\pm 0.22)\times 10^{-12}$ | $725.5\pm 3.7$ | 30.5
0764191101 | 2016-03-05 | $(5.90\pm 0.14)\times 10^{-12}$ | $725.9\pm 3.5$ | 33
0764191601 | 2016-03-10 | $(5.63\pm 0.21)\times 10^{-12}$ | $725.5\pm 6.0$ | 19
4XMM J174954.6–294336 | 0801681401 | 2017-10-07 | $(1.28\pm 0.05)\times 10^{-12}$ | | 28
0801683401 | 2018-04-06 | $(1.73\pm 0.06)\times 10^{-12}$ | $997.7\pm 7.6$ | 29.2
4XMM J175327.8–295716 | 0085580501 | 2000-10-11 | $(3.35\pm 0.48)\times 10^{-13}$ | | 8.0
0085581501 | 2001-03-24 | $(1.93\pm 1.22)\times 10^{-13}$ | | 7.5
0085581601 | 2001-09-07 | $(3.77\pm 1.07)\times 10^{-13}$ | | 8.2
0085581801 | 2002-03-13 | $(1.21\pm 0.53)\times 10^{-13}$ | | 8.2
0801682901 | 2018-09-07 | $(4.29\pm 0.38)\times 10^{-13}$ | $2917.0\pm 68.5$ | 27.9
0801683601 | 2018-09-25 | $(3.05\pm 0.31)\times 10^{-13}$ | $2870.2\pm 58.1$ | 31.7
4XMM J175452.0–295758 | 0085580501 | 2000-10-11 | $(1.88\pm 0.45)\times 10^{-13}$ | | 8.0
0206590201 | 2004-09-05 | $(2.46\pm 0.27)\times 10^{-13}$ | | 20.9
0402280101 | 2006-09-10 | $(2.68\pm 0.19)\times 10^{-13}$ | $552.1\pm 1.4$ | 44.1
4XMM J175301.3–291324 | 0801682501 | 2018-09-03 | $(3.29\pm 1.29)\times 10^{-13}$ | | 29.0
0801682801 | 2018-09-09 | $(1.86\pm 0.11)\times 10^{-13}$ | $673.1\pm 3.1$ | 32.9
0801683501 | 2018-09-25 | $(4.12\pm 0.57)\times 10^{-13}$ | $672.5\pm 3.0$ | 31.5
Notes: The XMM-Newton ObsID details for sources with more than one
observation available. The flux is taken from the 4XMM catalog. In many
cases, the pulsation was not detected if the exposure was short or the source
flux was below a certain limit.
Table 4: Table 3 Continued.
XMM Name | ObsID | Date | Flux | Period | Exposure
---|---|---|---|---|---
4XMM J174917.7–283329 | 0410580401 | 2006-09-22 | $(1.85\pm 0.50)\times 10^{-13}$ | | 32.9
0410580501 | 2006-09-26 | $(3.07\pm 0.63)\times 10^{-13}$ | | 32.4
0801681301 | 2017-10-07 | $(1.32\pm 0.03)\times 10^{-12}$ | $1209.7\pm 11.7$ | 28.0
4XMM J174816.9–280750 | 0112970101 | 2000-09-23 | $(7.89\pm 0.43)\times 10^{-13}$ | $595.1\pm 5.5$ | 16.3
0112970201 | 2000-09-23 | $(9.34\pm 0.79)\times 10^{-13}$ | $587.8\pm 4.5$ | 18.1
0144220101 | 2003-03-12 | $(9.42\pm 0.61)\times 10^{-13}$ | $592.9\pm 1.4$ | 52.4
0205240101 | 2005-02-26 | $(6.37\pm 0.19)\times 10^{-13}$ | $592.7\pm 1.4$ | 51
0694640801 | 2012-10-06 | $(8.04\pm 0.37)\times 10^{-13}$ | $590.9\pm 1.7$ | 41.9
0694641501 | 2012-10-06 | $(6.67\pm 0.17)\times 10^{-13}$ | $592.9\pm 1.4$ | 51.8
0694640701 | 2012-10-02 | $(6.47\pm 0.19)\times 10^{-13}$ | $592.5\pm 1.7$ | 44.4
0694641401 | 2012-09-30 | $(7.43\pm 0.41)\times 10^{-13}$ | | 51.8
0783160101 | 2016-10-02 | $(8.51\pm 0.23)\times 10^{-13}$ | $593.6\pm 0.7$ | 106
0862471201 | 2020-10-04 | $(1.01\pm 0.05)\times 10^{-12}$ | | 46.9
4XMM J174445.6–271344 | 0201200101 | 2004-02-26 | $(2.03\pm 0.04)\times 10^{-12}$ | $3195.8\pm 116.3$ | 17.8
0691760101 | 2012-09-08 | $(1.33\pm 0.02)\times 10^{-12}$ | | 22.9
4XMM J174906.8–273233 | 0510010401 | 2007-03-31 | $(1.13\pm 0.01)\times 10^{-11}$ | $132.1\pm 0.3$ | 12.2
0511010301 | 2008-03-04 | $(2.44\pm 0.12)\times 10^{-12}$ | | 8.9
4XMM J175511.6–260315 | 0148090101 | 2003-03-17 | $(1.07\pm 0.14)\times 10^{-12}$ | | 12.1
0148090501 | 2003-09-11 | $(1.53\pm 0.17)\times 10^{-12}$ | | 11.2
0886081301 | 2023-04-06 | $(3.06\pm 0.21)\times 10^{-12}$ | $1134.5\pm 12.4$ | 24
4XMM J175525.0–260402 | 0148090101 | 2003-03-17 | $(3.03\pm 0.30)\times 10^{-12}$ | | 12.1
0148090501 | 2003-09-11 | $(4.47\pm 0.43)\times 10^{-12}$ | | 11.2
0886081301 | 2023-04-06 | $(1.84\pm 0.10)\times 10^{-12}$ | $392.5\pm 1.5$ | 24
Notes: Same columns as Table 3.
Table 5: Details of the spectral fit.
XMM Name | $N_{\rm H}$ | $\Gamma$ | $N_{\rm po}$ | $N_{\rm 6.4}$ | $\rm EW_{6.4}$ | $N_{\rm 6.7}$ | $\rm EW_{6.7}$ | $N_{\rm 6.9}$ | $\rm EW_{6.9}$ | $\chi^{2}/\rm d.o.f$ | Flux
---|---|---|---|---|---|---|---|---|---|---|---
 | $\times 10^{22}\rm\ cm^{-2}$ | | | | eV | | eV | | eV | | 0.2–10 keV
4XMM J172511.3 | $10.2^{+0.3}_{-0.3}$ | $0.82^{+0.04}_{-0.04}$ | $3.1^{+0.3}_{-0.3}\times 10^{-3}$ | $3.9^{+0.6}_{-0.6}\times 10^{-5}$ | $70^{+6}_{-7}$ | $<5\times 10^{-6}$ | $<8$ | $<3\times 10^{-6}$ | $<7$ | 2364/2299 | $3.56\times 10^{-11}$
4XMM J173058.9 | $1.4^{+0.6}_{-0.4}$ | $0.6^{+0.3}_{-0.2}$ | $5^{+3}_{-2}\times 10^{-5}$ | $3^{+3}_{-2}\times 10^{-6}$ | $129^{+70}_{-58}$ | $5^{+3}_{-3}\times 10^{-6}$ | $236^{+105}_{-83}$ | $<1\times 10^{-6}$ | $<50$ | 69/82 | $1.52\times 10^{-12}$
XMMU J173029.8 | $5^{+5}_{-4}$ | $0.0^{+0.7}_{-0.6}$ | $9^{+28}_{-6}\times 10^{-6}$ | $2^{+1}_{-1}\times 10^{-6}$ | $270^{+135}_{-135}$ | $<2\times 10^{-6}$ | $<160$ | $2^{+3}_{-1}\times 10^{-6}$ | $204^{+306}_{-102}$ | 11/17 | $6.70\times 10^{-13}$
4XMM J174517.0 | $2.8^{+0.3}_{-0.3}$ | $1.1^{+0.1}_{-0.1}$ | $9^{+2}_{-2}\times 10^{-5}$ | $2.8^{+0.8}_{-0.8}\times 10^{-6}$ | $192_{-31}^{+32}$ | $1.4^{+0.8}_{-0.8}\times 10^{-6}$ | $78^{+34}_{-19}$ | $1.7^{+0.8}_{-0.8}\times 10^{-6}$ | $129^{+51}_{-35}$ | 339/326 | $9.59\times 10^{-13}$
4XMM J174728.9 | $0.2^{+0.1}_{-0.1}$ | $1.8^{+0.4}_{-0.4}$ | $7^{+4}_{-2}\times 10^{-6}$ | $<4\times 10^{-7}$ | $<1500$ | $<5\times 10^{-7}$ | $<2000$ | $<5\times 10^{-7}$ | $<2200$ | 16/28 | $4.00\times 10^{-14}$
4XMM J173837.0 | $0.9^{+0.7}_{-0.5}$ | $0.8^{+0.4}_{-0.4}$ | $1.1^{+1.1}_{-0.5}\times 10^{-5}$ | $<8\times 10^{-7}$ | $<351$ | $7^{+91}_{-7}\times 10^{-8}$ | $29^{+329}_{-29}$ | $1^{+9}_{-1}\times 10^{-7}$ | $59^{+320}_{-59}$ | 23/42 | $2.06\times 10^{-13}$
4XMM J174033.8 | $0.9^{+0.2}_{-0.2}$ | $0.5^{+0.1}_{-0.1}$ | $6^{+1}_{-1}\times 10^{-5}$ | $7^{+2}_{-2}\times 10^{-6}$ | $226^{+50}_{-46}$ | $4^{+2}_{-2}\times 10^{-6}$ | $103^{+42}_{-40}$ | $3^{+2}_{-2}\times 10^{-6}$ | $95^{+50}_{-49}$ | 185/189 | $2.88\times 10^{-12}$
4XMM J174016.0 | $0.44^{+0.03}_{-0.03}$ | $0.63^{+0.04}_{-0.02}$ | $1.36^{+0.09}_{-0.07}\times 10^{-4}$ | $9^{+2}_{-2}\times 10^{-6}$ | $182^{+34}_{-20}$ | $6^{+2}_{-2}\times 10^{-6}$ | $91^{+23}_{-16}$ | $7^{+2}_{-2}\times 10^{-6}$ | $135^{+27}_{-29}$ | 736/607 | $5.14\times 10^{-12}$
4XMM J174809.8 | $2^{+2}_{-1}$ | $0.3^{+0.5}_{-0.5}$ | $6^{+9}_{-3}\times 10^{-6}$ | $5^{+7}\times 10^{-7}$ | $175^{+71}$ | $<7\times 10^{-7}$ | $<172$ | $3^{+14}_{-3}\times 10^{-7}$ | $99^{+264}_{-99}$ | 50/48 | $2.96\times 10^{-13}$
4XMM J174009.0 | $0.8^{+0.3}_{-0.2}$ | $0.1^{+0.1}_{-0.1}$ | $6^{+2}_{-1}\times 10^{-5}$ | $1.8^{+0.5}_{-0.5}\times 10^{-5}$ | $359^{+65}_{-120}$ | $1.6^{+0.6}_{-0.6}\times 10^{-5}$ | $265^{+78}_{-115}$ | $7^{+6}_{-6}\times 10^{-6}$ | $109^{+66}_{-56}$ | 170/166 | $5.90\times 10^{-12}$
4XMM J174954.6 | $2.4^{+0.9}_{-0.7}$ | $0.4^{+0.3}_{-0.3}$ | $3^{+2}_{-1}\times 10^{-5}$ | $2^{+1}_{-1}\times 10^{-6}$ | $107^{+69}_{-59}$ | $3^{+2}_{-2}\times 10^{-6}$ | $172^{+103}_{-85}$ | $<3\times 10^{-6}$ | $<150$ | 90/91 | $1.73\times 10^{-12}$
4XMM J175327.8 | $7^{+7}_{-5}\times 10^{-2}$ | $1.3^{+0.3}_{-0.3}$ | $2.8^{+0.9}_{-0.7}\times 10^{-5}$ | $<2\times 10^{-6}$ | $<774$ | $<2\times 10^{-6}$ | $<660$ | $<2\times 10^{-6}$ | $<751$ | 64/41 | $4.29\times 10^{-13}$
4XMM J174622.7 | $4.3^{+0.9}_{-0.7}$ | $0.7^{+0.2}_{-0.2}$ | $2.0^{+1.0}_{-0.6}\times 10^{-5}$ | $1^{+5}_{-1}\times 10^{-7}$ | $20^{+46}_{-20}$ | $1.1^{+0.6}_{-0.6}\times 10^{-7}$ | $207^{+76}_{-71}$ | $2^{+6}_{-2}\times 10^{-7}$ | $30^{+67}_{-30}$ | 192/187 | $5.13\times 10^{-13}$
4XMM J175452.0 | $2^{+3}_{-1}$ | $0.9^{+0.9}_{-0.7}$ | $1.2^{+4.4}_{-0.9}\times 10^{-5}$ | $1.4^{+0.7}_{-0.7}\times 10^{-6}$ | $690^{+445}_{-445}$ | $<9\times 10^{-7}$ | $<206$ | $<1\times 10^{-6}$ | $<529$ | 33/41 | $2.68\times 10^{-13}$
4XMM J175301.3 | $0.3^{+0.5}_{-0.2}$ | $0.5^{+0.3}_{-0.3}$ | $5^{+3}_{-2}\times 10^{-6}$ | $10^{+6}_{-6}\times 10^{-7}$ | $521^{+288}_{-225}$ | $4^{+8}_{-4}\times 10^{-7}$ | $105^{+189}_{-105}$ | $4^{+10}_{-4}\times 10^{-7}$ | $179^{+387}_{-179}$ | 63/54 | $1.86\times 10^{-13}$
4XMM J174917.7 | $3.2^{+0.7}_{-0.6}$ | $1.0^{+0.2}_{-0.2}$ | $9^{+5}_{-3}\times 10^{-5}$ | $2^{+1}_{-1}\times 10^{-6}$ | $94^{+50}_{-47}$ | $4^{+2}_{-2}\times 10^{-6}$ | $246^{+88}_{-67}$ | $9^{+16}_{-9}\times 10^{-7}$ | $52^{+112}_{-52}$ | 107/108 | $1.31\times 10^{-12}$
4XMM J175244.4 | $0.1^{+0.2}_{-0.1}$ | $0.7^{+0.3}_{-0.3}$ | $7^{+3}_{-2}\times 10^{-6}$ | $<8\times 10^{-7}$ | $<388$ | $6^{+95}_{-6}\times 10^{-8}$ | $17^{+231}_{-17}$ | $1.4^{+0.9}_{-0.9}\times 10^{-6}$ | $796^{+443}_{-405}$ | 29/51 | $2.11\times 10^{-13}$
4XMM J174816.9 | $27^{+7}_{-6}$ | $1.3^{+0.6}_{-0.6}$ | $2.0^{+4}_{-1}\times 10^{-4}$ | $1.8^{+0.7}_{-0.7}\times 10^{-6}$ | $221^{+86}_{-86}$ | $1.0^{+0.7}_{-0.7}\times 10^{-6}$ | $109^{+76}_{-76}$ | $6^{+9}_{-6}\times 10^{-7}$ | $71^{+107}_{-71}$ | 159/164 | $8.50\times 10^{-13}$
4XMM J174445.6 | $0.37^{+0.02}_{-0.02}$ | $1.75^{+0.06}_{-0.06}$ | $3.3^{+0.2}_{-0.2}\times 10^{-4}$ | $2^{+1}_{-1}\times 10^{-6}$ | $80^{+39}_{-40}$ | $7^{+2}_{-2}\times 10^{-6}$ | $371^{+88}_{-76}$ | $2^{+1}_{-1}\times 10^{-6}$ | $109^{+43}_{-47}$ | 399/412 | $2.03\times 10^{-12}$
4XMM J175740.5 | $3^{+2}_{-1}$ | $0.2^{+0.4}_{-0.3}$ | $1.1^{+1.1}_{-0.5}\times 10^{-5}$ | $1.6^{+0.8}_{-0.8}\times 10^{-6}$ | $176^{+84}_{-49}$ | $1.3^{+0.9}_{-0.9}\times 10^{-6}$ | $112^{+61}_{-41}$ | $1.3^{+1.0}_{-1.0}\times 10^{-6}$ | $151^{+19}_{-64}$ | 113/92 | $7.63\times 10^{-13}$
4XMM J174906.8 | $21^{+1}_{-1}$ | $1.3^{+0.1}_{-0.1}$ | $2.6^{+0.8}_{-0.6}\times 10^{-3}$ | $6^{+4}_{-4}\times 10^{-6}$ | $40^{+15}_{-15}$ | $3^{+40}_{-3}\times 10^{-7}$ | $2^{+13}_{-2}$ | $3^{+44}_{-3}\times 10^{-7}$ | $2^{+16}_{-2}$ | 400/442 | $1.12\times 10^{-11}$
XMMU J175441.9 | $3^{+2}_{-1}$ | $0.3^{+0.4}_{-0.4}$ | $1.1^{+1.3}_{-0.6}\times 10^{-5}$ | $2^{+1}_{-1}\times 10^{-6}$ | $274^{+137}_{-137}$ | $3^{+14}_{-3}\times 10^{-7}$ | $28^{+131}_{-28}$ | $1^{+1}_{-1}\times 10^{-6}$ | $172^{+172}_{-172}$ | 33/26 | $4.31\times 10^{-13}$
4XMM J175511.6 | $0.35^{+0.11}_{-0.09}$ | $1.1^{+0.1}_{-0.1}$ | $2.8^{+0.6}_{-0.5}\times 10^{-5}$ | $1^{+6}_{-1}\times 10^{-7}$ | $31^{+109}_{-31}$ | $6^{+7}_{-6}\times 10^{-7}$ | $194^{+162}_{-188}$ | $<1\times 10^{-6}$ | $<263$ | 69/76 | $3.06\times 10^{-13}$
4XMM J175525.0 | $10^{+1}_{-1}$ | $1.4^{+0.2}_{-0.2}$ | $5^{+2}_{-2}\times 10^{-4}$ | $<1\times 10^{-6}$ | $<33$ | $9^{+16}_{-9}\times 10^{-7}$ | $34^{+36}_{-34}$ | $<1\times 10^{-6}$ | $<55$ | 171/170 | $1.84\times 10^{-12}$
4XMM J175328.4 | $0.43^{+0.03}_{-0.02}$ | $1.40^{+0.04}_{-0.04}$ | $6.3^{+0.4}_{-0.3}\times 10^{-4}$ | $3^{+2}_{-2}\times 10^{-6}$ | $71^{+28}_{-29}$ | $2^{+2}_{-2}\times 10^{-6}$ | $28^{+26}_{-28}$ | $4^{+3}_{-3}\times 10^{-6}$ | $90^{+38}_{-36}$ | 883/914 | $5.56\times 10^{-12}$
XMMU J180140.3 | $3^{+2}_{-1}$ | $2.1^{+1.3}_{-0.9}$ | $1.1^{+5.0}_{-0.8}\times 10^{-4}$ | $2^{+3}_{-1}\times 10^{-6}$ | $670^{+1005}_{-335}$ | $<4\times 10^{-8}$ | $<8$ | $<6\times 10^{-6}$ | $<1500$ | 25/27 | $2.31\times 10^{-13}$
Notes: The model used for the spectral fit is a power law for the continuum plus
three Gaussians at 6.4, 6.7, and 6.9 keV for the iron emission complex. The
spectral model is convolved with the Galactic absorption component tbabs. The
normalization of the power-law model $N_{\rm po}$ is given in units of photons
keV$^{-1}$ cm$^{-2}$ s$^{-1}$ at 1 keV, and the normalizations of the lines are
given in units of photons cm$^{-2}$ s$^{-1}$ at the line energy.
Figure 8: Lomb-Scargle periodogram of the sources listed in Table 2. The
periodograms are constructed using 2–10 keV EPIC-pn light curves. The
horizontal green and red lines indicate the $2\sigma$ and $3\sigma$ confidence
levels, respectively, computed from simulations, and the blue line indicates
the false alarm probability ($3\sigma$ confidence level) estimated from the
analytical approximation from Baluev (2008). The small inset shows the folded
light curve.
Figure 9: Fig. 8 Continued.
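The periodogram analysis shown in Figs. 8 and 9 can be sketched with astropy's LombScargle implementation, including the analytic Baluev false-alarm level. The snippet below is schematic: the light-curve file is a placeholder, the simulation-based 2$\sigma$/3$\sigma$ thresholds are omitted, and the pulse fraction is computed with the common (max − min)/(max + min) convention, which the text does not spell out.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Binned 2-10 keV EPIC-pn light curve: time [s], rate [cts/s], error
t, rate, err = np.loadtxt("lightcurve.txt", unpack=True)   # placeholder file

ls = LombScargle(t, rate, err)
freq, power = ls.autopower(minimum_frequency=1e-4, maximum_frequency=0.5)

best = np.argmax(power)
period = 1.0 / freq[best]
fap = ls.false_alarm_probability(power[best], method="baluev")
print(f"best period = {period:.1f} s, Baluev FAP = {fap:.2e}")

# Pulse fraction from the folded profile: PF = (max - min) / (max + min)
phase = (t / period) % 1.0
edges = np.linspace(0.0, 1.0, 11)
profile = np.array([rate[(phase >= lo) & (phase < hi)].mean()
                    for lo, hi in zip(edges[:-1], edges[1:])])
pf = (profile.max() - profile.min()) / (profile.max() + profile.min())
print(f"pulse fraction = {100 * pf:.0f}%")
```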
Figure 10: Spectral modeling of the sources in our sample using a model
composed of tbabs*(power-law+g1+g2+g3). The g1,g2, and g3 represent three
Gaussian lines, at 6.4, 6.7, and 6.9 keV, respectively. The black, red, and
green colors represent data from the EPIC-pn, MOS1, and MOS2 detectors,
respectively.
Figure 11: Fig. 10 Continued.
# Quantum Coding Transitions in the Presence of Boundary Dissipation
Izabella Lovas Utkarsh Agrawal Kavli Institute for Theoretical Physics,
University of California, Santa Barbara, CA, 93106 Sagar Vijay Department of
Physics, University of California, Santa Barbara, CA 93106
###### Abstract
We investigate phase transitions in the encoding of quantum information in a
quantum many-body system due to the competing effects of unitary scrambling
and boundary dissipation. Specifically, we study the fate of quantum
information in a one-dimensional qudit chain, subject to local unitary quantum
circuit evolution in the presence of depolarizing noise at the boundary. If
the qudit chain initially contains a finite amount of locally-accessible
quantum information, unitary evolution in the presence of boundary dissipation
allows this information to remain partially protected when the dissipation is
sufficiently weak, and up to time-scales growing linearly in system size $L$.
In contrast, for strong enough dissipation, this information is completely
lost to the dissipative environment. We analytically investigate this “quantum
coding transition” by considering dynamics involving Haar-random, local
unitary gates, and confirm our predictions in numerical simulations of
Clifford quantum circuits. We demonstrate that scrambling the quantum
information in the qudit chain with a unitary circuit of depth
$\mathcal{O}(\log L)$ before the onset of dissipation can perfectly protect
the information until late times. The nature of the coding transition changes
when the dynamics extend for times much longer than $L$. We further show that
at weak dissipation, it is possible to code at a finite rate, i.e. a fraction
of the many-body Hilbert space of the qudit chain can be used to encode
quantum information.
## I Introduction
The chaotic unitary evolution of an isolated quantum system spreads
initially localized quantum information over non-local degrees of freedom, a
process known as quantum information scrambling [1, 2, 3, 4]. This
delocalization of information aids in protecting quantum information against
external interference from local noise, which is present in any real physical
system. Studying the robustness of quantum information in the presence of both
unitary scrambling and dissipation is important both to understand new
dynamical regimes of quantum many-body dynamics and, from a practical
standpoint, to design quantum codes and to appropriately interpret studies of
quantum many-body evolution in near-term quantum simulators. While dissipative
dynamical phases of matter have been the subject of intense research for
decades [5, 6, 7, 8, 9], addressing the dynamics of quantum information in
this context opens a new perspective. Similarly to how understanding the
spreading of information has led to a deeper understanding of quantum chaos
and thermalization [2, 10, 11, 12, 13, 14, 15, 16], studying quantum
information in dissipative systems can shed light on the structure of
(possibly new) dynamical regimes of quantum matter.
Besides its fundamental relevance for the dissipative dynamics of generic
quantum systems, the fate of quantum information in the presence of unitary
scrambling and destructive local noise or measurements has been explored in
the context of quantum information theory, leading to the development of the
theory of quantum error correcting codes [17, 18, 19, 20]. A key result in the
theory of quantum error correction (QEC) is the threshold theorem, stating
that for error rates below some threshold, one can reverse the effects of the
errors by applying additional quantum gates [21, 22, 23]. In other words, it
is possible to correct errors faster than they are created.
The threshold theorem is essential in designing fault-tolerant quantum
computers. Applying additional gates to preserve the code space against the
noise allows one to perform logical operations for long times
with high precision. Such active error correction is feasible in artificial
quantum systems with a “digital” architecture, in which real-time measurements
and unitary evolution can be executed over targeted degrees of freedom.
However, in analog quantum simulators realized, e.g., with ultracold atoms,
the options for active error correction are more restricted and costly due to
the limited control over the dynamics. This provides a strong motivation for
exploring whether the system’s intrinsic dynamics alone can protect
information, by hiding it from destructive local noise. Despite this
fundamental relevance, the conditions for obtaining such a robust, self-
generated coding dynamics in a generic quantum system without any degree of
external control, are still not fully explored.
Recently, the robustness of a self-generated code space against a special
class of local perturbations, taking the form of local projective
measurements, has been investigated. These studies revealed a phase transition driven by
the measurement rate, such that the code space can store an extensive amount
of information, as long as the rate of measurements remains below a finite
threshold [24, 25, 26, 27, 28]. However, this result cannot be generalized to
more generic noise channels. For example, a quantum many-body system evolving
in the presence of random erasures occurring in the bulk with finite rate
destroys all quantum information in constant time [29, 30], and active error-
correction during the dynamics is required to protect the information beyond
this time scale. Understanding the conditions (if any) that unitary evolution
and local errors have to satisfy to guarantee the emergence of a robust, self-
generated code space, without the need for active error correction during
the dynamics, is an open question of utmost relevance.
Figure 1: (a) Quantum information is encoded in a qudit chain which
subsequently evolves with a “brickwork” array of Haar-random, two-site unitary
gates and dissipation at the boundary. One timestep of these dynamics
corresponds to two layers of unitary gates along with depolarizing noise at
the boundary, as shown schematically in (a). A phase diagram for the coding
transition is shown in (b). The blue critical line is the coding transition
when the total number of timesteps $T\lesssim L/p$, see Section III. This
transition also corresponds to the depinning transition of an Ising domain
wall in a statistical mechanical description of quantum information in these
dynamics, as derived in the main text (Section II). This transition occurs
when $R$ is localized near the boundary and is not scrambled across the
system. The red critical line is the coding transition as the system approaches
thermalization (see Section IV), across which the system becomes maximally
entangled with the environment, resulting in information loss.
### I.1 Summary of Results
With these motivations, we take a step towards understanding the dynamics of
quantum information under generic scrambling and local noise, by exploring the
fate of quantum information, subjected to the competing effects of boundary
dissipation and unitary spreading in a one-dimensional chaotic quantum system.
For concreteness and simplicity, we focus on the setup sketched in Fig. 1a,
which shows a single timestep of a random quantum circuit with a
depolarization channel acting at the left boundary. We note that it is known
both in classical coding theory [31, 32, 33, 34, 35] and in the quantum case
[36, 37, 38] that random unitary dynamics provides an optimal encoding of
information. We entangle one external reference qubit $R$ near the boundary
into a Bell pair, thereby encoding one qubit of quantum information initially
localized near the dissipative boundary. We then ask what happens to this
information as the system is subject to noisy dynamics, up to time scales $T$
scaling linearly with the system size $L$, such that $T/L$ is fixed.
Importantly, by taking the thermodynamic limit $L\to\infty$ and the long time
limit $T\to\infty$ simultaneously, with $T/L$ constant, we probe the system on
time scales where it is expected to thermalize.
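As an illustration of this setup, the following brute-force density-matrix sketch evolves a short qubit chain (the paper considers qudits and much larger Clifford simulations; the sizes and parameters below are illustrative only) and evaluates the mutual information $I(R:A)$ between the reference and the chain, which equals 2 bits when the encoded qubit is perfectly protected and 0 when it is lost to the environment.

```python
import numpy as np

rng = np.random.default_rng(7)
L, p, T = 6, 0.3, 12   # chain length, dissipation strength, number of timesteps
n = L + 1              # qubits 0..L-1 form the chain; qubit L is the reference R

def haar_unitary(d):
    """Haar-random d x d unitary from the QR decomposition, with phases fixed."""
    z = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def embed(u4, i):
    """Lift a two-qubit gate on neighbors (i, i+1) to the full n-qubit register."""
    return np.kron(np.kron(np.eye(2**i), u4), np.eye(2**(n - i - 2)))

def ptrace(rho, keep):
    """Partial trace of an n-qubit density matrix, keeping the qubits in `keep`."""
    t = rho.reshape((2,) * (2 * n))
    done = 0
    for q in range(n):
        if q not in keep:
            a = q - done
            t = np.trace(t, axis1=a, axis2=a + n - done)
            done += 1
    return t.reshape(2 ** len(keep), 2 ** len(keep))

def entropy(rho):
    """Von Neumann entropy in bits."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return float(-(w * np.log2(w)).sum())

# Initial state: Bell pair between R and chain site 0, rest of the chain in |0>
psi = np.zeros(2**n, dtype=complex)
psi[0] = psi[2**(n - 1) + 1] = 1 / np.sqrt(2)   # qubit 0 is the leftmost bit
rho = np.outer(psi, psi.conj())

for _ in range(T):
    for layer in (0, 1):                    # two brickwork layers per timestep
        for i in range(layer, L - 1, 2):    # gates act on the chain only
            u = embed(haar_unitary(4), i)
            rho = u @ rho @ u.conj().T
    # Depolarizing channel of strength p on the boundary qubit 0
    rest = ptrace(rho, list(range(1, n)))
    rho = (1 - p) * rho + p * np.kron(np.eye(2) / 2, rest)

I_RA = entropy(ptrace(rho, [n - 1])) + entropy(ptrace(rho, list(range(L)))) \
       - entropy(rho)
print(f"I(R:A) = {I_RA:.3f} bits")   # 2 = fully protected, 0 = lost to the bath
```

Sweeping $p$ at fixed $T/L$ in such a toy already exhibits the qualitative competition between scrambling and boundary dissipation, although the sharp transition only emerges in the thermodynamic limit.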
Interestingly, we find that this quantum information can remain robust even at
these long times, giving rise to a rich dynamical phase diagram as a function
of dissipation strength $p$ and the ratio $T/L$, as displayed in Fig. 1b. The
left panel shows the case where the noisy dynamics starts immediately after
the encoding of the quantum information locally, near the leftmost boundary.
We find a dissipation-induced quantum coding phase transition, separating a
region where the coherent information remains partially protected and gets
delocalized within the system, and a phase where all of this information has
leaked to the environment. The nature of the coding transition, however,
depends on the ratio $T/L$. For $T/L\lesssim 1$ the right boundary is
effectively decoupled from the dynamics of information and we observe a
continuous second-order phase transition (blue line). For even larger ratios
$T/L$, the right boundary plays a crucial role and gives rise to a first order
phase transition (red). We also demonstrate that adding a unitary “pre-
scrambling” step after the local encoding, before the onset of the dissipative
dynamics, can efficiently increase the robustness of the encoded information.
In particular, as shown in the right panel of Fig. 1b, a pre-scrambling time
$t_{scr}$ scaling logarithmically with system size, $t_{scr}\sim\log L$,
ensures that quantum information remains perfectly protected for small enough
dissipation strengths $p$, up to time scales $T\sim L/p$.
We gain a detailed understanding of these different types of coding
transitions, by mapping the dynamics of quantum information in a circuit with
Haar-random unitary gates and boundary dissipation to the statistical
mechanics of a two-dimensional lattice magnet. This mapping, which has been
extensively employed to understand unitary circuit quantum dynamics as well as
dynamics with projective measurements (see Ref. [39, 40] for a review), allows
us to obtain analytical predictions, as well as instructive numerical results.
While the entanglement measures of interest which diagnose the quantum coding
transition require taking a formal replica limit of this lattice magnet (akin
to a limit arising when considering “quenched” disorder), we focus our
attention on understanding this lattice magnet away from the replica limit
(akin to studying an “annealed” disorder-average). Specifically, we focus on
the “annealed” disorder average of the second Rényi mutual information between
the output of the circuit $A$, and the reference qubit $R$. In this limit, the
circuit with the boundary depolarization can be mapped to the statistical
mechanics of an Ising magnet, in which a single Ising domain wall experiences
an attractive/repulsive potential at one boundary of the two-dimensional
system, whose strength is tuned by the dissipation strength. In this language,
the coding transition at times $T/L\lesssim 1$ can be understood as a second
order pinning/depinning transition of the Ising domain wall at the noisy
boundary; we provide conjectures as to the true nature of this transition in
the replica limit. At later times $T/L>1/p$, the right boundary gives rise to
a different, first order transition by “absorbing” the Ising domain wall.
Insights gained from this classical statistical picture are confirmed by large
scale numerical simulations performed on Clifford quantum random circuits.
Finally, we show that the coding transition for $T/L>1/p$ can also be
understood as a transition arising from the monogamy of entanglement. In this
case, as the system of $L$ qubits becomes entangled with a growing number of
environmental degrees of freedom, scaling as $pT$, eventually it can no longer
stay simultaneously entangled with the reference qubit, and all information
leaks to the environment. We conclude with the interesting scenario of
encoding an extensive amount of information in the system. Specifically, we
show that a similar coding transition persists when we entangle an extensive
number of reference qubits into Bell pairs with the qubits of the system. In
particular, we identify two threshold values for the dissipation strength $p$,
$p_{th,1}$ and $p_{th,2}$, separating three regions according to the behavior
of the information density. The information density is perfectly protected in
the system for $p<p_{th,1}$, while it starts to leak into the environment
above this threshold. A finite density of information still survives in the
region $p_{th,1}<p<p_{th,2}$, until eventually reaching zero at the upper
threshold $p_{th,2}$.
The rest of the paper is organized as follows. In Sec. II, we introduce the
mapping between the coherent quantum information in random circuits and the
properties of an Ising domain wall experiencing a repulsive/attractive
boundary on the left and an absorbing boundary on the right, by considering
the “annealed” second Rényi mutual information between the circuit output and
the encoded information. We derive the random walk model in Sec. II.1. We then
show in Sec. II.2 that different phases on either side of the coding
transition can be understood by inspecting the weighted trajectories of the
Ising domain wall in this statistical mechanical model.
We turn to the detailed discussion of the second order coding transition in
the regime $T\lesssim L/p$, induced by the dissipative boundary alone without
the interference of the clean boundary, in Sec. III. We first rely on the
random walk model to gain a qualitative understanding of the phase transition,
and discuss the classical pinning/depinning transition of the Ising domain
wall in Sec. III.1. Building on these insights, we verify the presence of the
quantum coding transition and study its properties numerically in Sec III.2,
by performing large scale numerical simulations on Clifford quantum circuits,
before discussing the nature of this transition in more detail in Sec. III.3.
To end the section, in Sec. III.4 we comment on increasing the robustness of
the encoded information by applying a unitary pre-scrambling step before the onset
of dissipative dynamics. We show that a pre-scrambling time $t_{\mathrm{scr}}$
scaling logarithmically with system size provides perfect protection for the
coherent information for weak enough dissipation $p$, up to time scales
$T/L\sim O(1)$.
We turn to the first order coding transition, induced by the interplay of the
dissipative left boundary and the clean right boundary at times $T\gtrsim
L/p$, in Sec. IV. First, we discuss that this phase transition can be
understood in the statistical mechanical framework as the absorption of the
entanglement domain wall by the right boundary and is driven by the monogamy
of entanglement as the system becomes entangled with a growing number of
environmental qubits. We present and analyze the numerical results obtained
from Clifford circuit simulations in Sec. IV.1, and find good agreement with
the predictions of the statistical mechanics of the Ising lattice magnet. We
argue that this coding transition is of first order, and discuss its scaling
properties in Sec. IV.2. Finally, Sec. V serves as an outlook to the case of
encoding an extensive amount of information into the system. Here we consider
entangling a finite density of reference qubits with the system, and find a
monogamy induced coding transition at late times $T\gtrsim L/p$, similar to
the one observed for a single bit of quantum information. Here we find three
phases, with the information perfectly protected for $p<p_{th,1}$, a finite
density of information surviving for $p_{th,1}<p<p_{th,2}$, and the density
reaching zero above $p_{th,2}$. We conclude by summarizing our results, and
discussing open questions in Sec. VI.
###### Contents
1. I Introduction
1. I.1 Summary of Results
2. II Dissipation in Quantum Circuit Evolution
1. II.1 Statistical Mechanics of Random Unitary Evolution and Dissipation
2. II.2 Boundary Dissipation and the Encoding of Quantum Information
3. III Quantum Coding Transition
1. III.1 Annealed Mutual Information, and the Pinning of an Ising Domain Wall
2. III.2 Numerical Study
3. III.3 The Replica Limit and the Nature of the Phase Transition
4. III.4 Perfect information protection using scrambling
4. IV Coding transition on the approach to thermalization
1. IV.1 Numerical Study
2. IV.2 Nature of the Phase Transition
5. V Encoding at a Finite Rate
6. VI Summary and Discussion
7. A Lattice Partition Function and the Annealed Phase Transition
8. B Alternative random circuit protocols
## II Dissipation in Quantum Circuit Evolution
### II.1 Statistical Mechanics of Random Unitary Evolution and Dissipation
Past studies of random local unitary evolution [39, 40], evolution with
projective measurements [24, 25, 26] and with dissipation [30, 29, 41, 42, 43,
44] have uncovered a wealth of universal structures governing the dynamics of
information-theoretic quantities such as the Rényi entanglement entropy.
Averaging over an ensemble of unitary gates in this setting gives rise to an
emergent classical statistical mechanics of quantum entanglement, which must
be understood in an appropriate “replica limit” in order to recover the
behavior of the information-theoretic quantities of interest. A qualitatively-
accurate understanding of the behavior of quantum entanglement in chaotic
unitary dynamics, and in dynamics with projective measurements can still be
obtained even without taking the replica limit [45, 46, 13, 47], though these
approaches often fail to capture quantitative, universal properties
characterizing distinct regimes of quantum many-body evolution (e.g. of the
volume-law-entangled phase of infrequently monitored quantum many-body
evolution [48]) or of critical points (e.g. separating different phases of
monitored quantum dynamics).
Here, we consider the evolution of qudits under random, local unitary gates
and boundary dissipation. Averaging over the ensemble of unitary gates in the
calculation of the evolving _purity_ of a subsystem leads to an emergent
statistical mechanics of an Ising magnet. We present the various ingredients
that the unitary evolution and dissipation correspond to in this setting,
before using these ingredients extensively in subsequent sections to
understand the stability of encoded quantum information under this evolution.
Figure 2: (a) Performing a Haar-average over the unitary gates in the
calculation of the purity of the evolving state gives rise to an Ising magnet,
whose partition function may be written as the product of transfer matrices,
given in Eq. (5), (6) and (7). (b) A coarse-grained description of this
Ising magnet involves a single Ising domain wall (green) in the presence of a
boundary magnetic field (shaded red). The boundary conditions at the bottom of
the Ising magnet, which are fixed by the initial state of the quantum system,
are not shown.
We focus our attention on a one-dimensional chain of qudits, with Hilbert
space dimension $q$ at each lattice site. The dissipation acts on the boundary
qudit, and is described by the depolarizing channel $\Phi$ acting on the
density matrix $\rho$ of this qudit as
$\displaystyle\Phi(\rho)=(1-p)\,\rho+p\cdot\frac{\mathds{1}_{q\times q}}{q}$
(1)
with $p\in[0,1]$ parametrizing the “strength” of the dissipation. For future
convenience, we choose to rewrite the depolarizing channel as an _operator_
$\hat{\Phi}$ which acts within a Hilbert space of dimension $q^{2}$. The
operator $\hat{\Phi}$ takes the form
$\displaystyle\hat{\Phi}=\sum_{i,j=1}^{q}\left[(1-p)\ket{i,j}\bra{i,j}+\frac{p}{q}\ket{i,i}\bra{j,j}\right]$
(2)
where $\ket{i}$ for $i\in\\{1,\ldots,q\\}$ denotes an orthonormal basis of
states of a single qudit. (The qudit density matrix
$\rho\equiv\sum_{i,j}\rho_{ij}\ket{i}\bra{j}$ is a _state_
$\ket{\rho}\equiv\sum_{i,j}\rho_{ij}\ket{i,j}$ in the doubled Hilbert space, on
which the operator $\hat{\Phi}$ acts as
$\hat{\Phi}\ket{\rho}=(1-p)\ket{\rho}+(p/q)\sum_{i}\ket{i,i}$.)
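As a concrete check of these conventions, a minimal numpy sketch (ours, with
illustrative values of $q$ and $p$) builds the operator of Eq. (2) and verifies
that it reproduces the channel of Eq. (1) on a vectorized density matrix:

```python
import numpy as np

q, p = 3, 0.3                                   # illustrative qudit dimension and noise strength

# Operator form of the channel, Eq. (2), acting on the doubled (q^2) space
Phi = np.zeros((q * q, q * q))
for i in range(q):
    for j in range(q):
        Phi[i * q + j, i * q + j] += 1 - p      # (1-p) |i,j><i,j|
        Phi[i * q + i, j * q + j] += p / q      # (p/q) |i,i><j,j|

# Compare with the channel form, Eq. (1), on a random density matrix
rho = np.random.rand(q, q) + 1j * np.random.rand(q, q)
rho = rho @ rho.conj().T
rho /= np.trace(rho)
lhs = (Phi @ rho.reshape(-1)).reshape(q, q)     # action on the vectorized state |rho>
rhs = (1 - p) * rho + p * np.eye(q) / q
assert np.allclose(lhs, rhs)
print("Eq. (2) reproduces Eq. (1)")
```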
Apart from the dissipation, the remaining qudits will be chosen to evolve
according to two-site unitary gates, chosen from the uniform (Haar) measure
for the unitary group U$(q^{2})$. Given such a two-qudit unitary gate $U$, we
note that the average over the Haar measure of $U\otimes U^{*}\otimes U\otimes
U^{*}$ – a quantity which will naturally appear in subsequent sections – is
given by
$\displaystyle V\equiv$ $\displaystyle\langle U\otimes U^{*}\otimes U\otimes
U^{*}\rangle$
$\displaystyle=\sum_{\sigma,\tau\in\\{\uparrow,\downarrow\\}}\mathrm{wg}_{2}(\sigma\tau)\ket{\tau,\tau}\bra{\sigma,\sigma}$
(3)
where $\langle\cdots\rangle$ denotes the Haar average, the Weingarten function
is given as $\mathrm{wg}_{2}(+)=\frac{q^{2}}{q^{4}-1}$ and
$\mathrm{wg}_{2}(-)=\frac{-1}{q^{4}-1}$, and the states $\ket{\uparrow}$ and
$\ket{\downarrow}$ are defined as
$\ket{\uparrow}\equiv\sum_{i,j=1}^{q}\ket{i,i,j,j}$ and
$\ket{\downarrow}\equiv\sum_{i,j=1}^{q}\ket{i,j,j,i}$ so that
$\displaystyle\braket{\sigma}{\tau}=(q^{2}-q)\delta_{\sigma,\tau}+q.$ (4)
From these expressions, it is clear that
$\displaystyle V\ket{\uparrow\uparrow}=\ket{\uparrow\uparrow}\hskip
50.58878ptV\ket{\downarrow\downarrow}=\ket{\downarrow\downarrow}$ (5)
$\displaystyle
V\ket{\uparrow\downarrow}=V\ket{\downarrow\uparrow}=\frac{q}{q^{2}+1}\left[\ket{\downarrow\downarrow}+\ket{\uparrow\uparrow}\right]$
(6)
From Eq. (2), the operator $D\equiv\hat{\Phi}\otimes\hat{\Phi}$ acts on these
states as
$\displaystyle D\ket{\uparrow}=\ket{\uparrow}\hskip
21.68121ptD\ket{\downarrow}=(1-p)^{2}\ket{\downarrow}+\frac{p(2-p)}{q}\ket{\uparrow}$
(7)
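The single-site algebra in Eqs. (4) and (7) can be verified the same way. In
the sketch below (ours; the grouping of the four copies into pairs $(1,2)(3,4)$
follows the footnote convention above), $\ket{\uparrow}$ is the doubled
identity state and $\ket{\downarrow}$ the vectorized SWAP:

```python
import numpy as np

q, p = 3, 0.3                                  # illustrative values
vec_id = np.eye(q).reshape(-1)                 # |vec(Id)> = sum_i |i,i>
# Depolarizing superoperator of Eq. (2): (1-p) Id + (p/q) |vec(Id)><vec(Id)|
Phi = (1 - p) * np.eye(q * q) + (p / q) * np.outer(vec_id, vec_id)

# The states below Eq. (3), with the four copies grouped as pairs (1,2)(3,4):
# |up> = vec(Id) (x) vec(Id),  |down> = the vectorized SWAP operator.
up = np.kron(vec_id, vec_id)
down = np.zeros(q ** 4)
for i in range(q):
    for j in range(q):
        down[(i * q + j) * q * q + (j * q + i)] = 1.0

# Overlaps of Eq. (4): <sigma|tau> = (q^2 - q) delta_{sigma,tau} + q
assert np.isclose(up @ up, q ** 2) and np.isclose(up @ down, q)

# Action of the dissipation D = Phi (x) Phi, Eq. (7)
D = np.kron(Phi, Phi)
assert np.allclose(D @ up, up)
assert np.allclose(D @ down, (1 - p) ** 2 * down + (p * (2 - p) / q) * up)
print("Eqs. (4) and (7) verified numerically")
```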
### II.2 Boundary Dissipation and the Encoding of Quantum Information
We now consider a qudit chain consisting of $L$ qudits, into which quantum
information has been encoded. We may imagine that this quantum information is
represented by physical reference qudits which are maximally-entangled with
the one-dimensional system. This system subsequently evolves according to a
unitary circuit composed of Haar-random unitary gates in a “brickwork” array,
together with dissipation which acts near the boundary. We first focus on the
case where only a single qudit is encoded in the one-dimensional system, and
with dissipation acting periodically in time on the boundary qudit, as shown
schematically in Fig. 2a. A single timestep of this evolution corresponds to
the application of two layers of two-site unitary gates, followed by the
depolarizing channel (1) on the boundary qudit.
To diagnose whether this qudit of encoded information can remain in the
system, even as the boundary dissipation continues to act, we study the
behavior of the bipartite mutual information between the reference qudit
($R$), and the system ($A$) at a time $t$; this mutual information is defined
as
$\displaystyle I_{A,R}(t)=S_{A}(t)+S_{R}(t)-S_{A\cup R}(t)$ (8)
where $S_{A}\equiv-\Tr\left[\rho_{A}(t)\log_{q}\,\rho_{A}(t)\right]$ is the
von Neumann entanglement entropy of subsystem $A$ at a time $t$. We note that
$I_{A,R}(t)$ is related to the coherent information present in the system. If
$I_{A,R}=2$ the entangled qudit can be perfectly recovered by applying a
recovery operation to the system alone whereas for $I_{A,R}=0$ the information
has leaked to the environment, that is, $I_{E,R}=2$ [49, 50].
The mutual information (8) averaged over realizations of the random unitary
evolution thus diagnoses whether quantum information remains in the system,
even in the presence of boundary dissipation. Instead of considering the Haar-
average of the mutual information, we turn our attention to the “annealed”
average of the second Rényi mutual information between $A$ and $R$, defined as
$\displaystyle I_{A,R}^{(\mathrm{ann})}(t)\equiv\log_{q}\langle
q^{\,I_{A,R}^{(2)}(t)}\rangle$ (9)
where $I_{A,R}^{(2)}(t)=S_{A}^{(2)}(t)+S_{R}^{(2)}(t)-S_{A\cup R}^{(2)}(t)$,
with the second Rényi entropy defined as
$S_{A}^{(2)}\equiv-\log_{q}\mathrm{Tr}\rho_{A}(t)^{2}$, and
$\langle\cdots\rangle$ denotes the Haar average over the unitary gates in the
circuit. The behavior of the annealed mutual information (9) can provide a
qualitative understanding of the quantity of interest (8), as discussed at the
beginning of this section, though quantitative details may differ, as we will
later clarify.
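As a quick consistency check of these definitions, a minimal numpy example
(ours; the helper `renyi2` is our naming) computes the Rényi-2 analogue of Eq.
(8) for a single Bell pair shared between $A$ and $R$, recovering
$I^{(2)}_{A,R}=2$:

```python
import numpy as np

def renyi2(rho, q):
    """Second Renyi entropy S^(2) = -log_q Tr(rho^2)."""
    return -np.log(np.trace(rho @ rho).real) / np.log(q)

q = 2
bell = np.zeros(q * q)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho = np.outer(bell, bell)                               # |Bell><Bell| on A u R
rho_A = np.einsum('ikjk->ij', rho.reshape(q, q, q, q))   # trace out R
rho_R = np.einsum('kikj->ij', rho.reshape(q, q, q, q))   # trace out A
I2 = renyi2(rho_A, q) + renyi2(rho_R, q) - renyi2(rho, q)
print(I2)   # = 2: one perfectly encoded qubit
```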
We proceed to calculate the annealed mutual information (9). We initialize the
qudits in a product state, except for the qudit at a site $x_{0}$ away from
the boundary which is maximally entangled with the reference qudit. As the
system evolves in the presence of unitary gates and dissipation, it is evident
that the purity of the reference qudit remains unchanged,
$\Tr\rho_{R}(t)^{2}=q^{-1}$ for all times $t$. Furthermore, calculation of
$\langle\Tr\rho_{A}(t)^{2}\rangle$ and $\langle\Tr\rho_{A\cup
R}(t)^{2}\rangle$ involves performing a Haar average of four copies of the
quantum circuit. Following the discussion in the previous section, it is thus
clear that these Haar-averaged purities may be written as partition functions
for an Ising magnet of finite extent in the vertical direction – corresponding
to the time direction in the quantum circuit – and with horizontal extent
fixed by the number of qudits in the system. The Ising spins live on the links
of a square lattice, and are acted upon by the transfer matrices $V$
and $D$, as given in Eq. (5), (6) and (7), depending on whether a Haar-random
unitary gate or dissipation is applied at a particular point in spacetime in
the quantum circuit, respectively. The full transfer matrix is shown
schematically in Fig. 2b.
The boundary conditions for the Ising partition sum, at the ($i$) bottom and
($ii$) top boundaries are determined by ($i$) the initial state of the qudit
chain along with the location of the reference qudit, and ($ii$) the subsystem
over which the purity is being calculated, respectively. First, fixing Ising
spins at the top boundary to be in the $\downarrow$ state corresponds to
keeping the corresponding qudit within the region for which the purity is
being calculated. As a result, the spins at the top boundary are all fixed in
the $\downarrow$ state for both the calculation of
$\langle\Tr\,\rho_{A}(t)^{2}\rangle$ and $\langle\Tr\,\rho_{A\cup
R}(t)^{2}\rangle$, as shown in Fig. 2b. These two purities thus only differ in
their bottom boundary conditions. Here, the boundary spins are allowed to
freely fluctuate, with the exception of the spin corresponding to the qudit at
a distance $x_{0}$ away from the boundary; the state of this Ising spin determines
whether the reference qudit is included in the subsystem whose purity is being
computed. More precisely, this spin is fixed in the $\uparrow$ or $\downarrow$
state in the calculation of the quantities
$\langle\Tr\,\rho_{A}(t)^{2}\rangle$ and $\langle\Tr\,\rho_{A\cup
R}(t)^{2}\rangle$, respectively.
It is convenient to evaluate these partition functions by contracting the
transfer matrix from the top boundary condition, i.e. “backwards” in time with
respect to the arrow of time in the quantum circuit. Let $Z(t)$ denote the
partition sum obtained by evolving the all-down state of the Ising spins for
$t$ timesteps by repeatedly applying the row transfer matrix corresponding to
a single timestep of the dynamics. The partition sum $Z(t)$ describes a
single, directed Ising domain wall, which can only be created/annihilated at
the boundary of the system. This can be seen as follows. First, starting with
the all-down state, the dissipation (7) can flip the boundary Ising spin from
$\ket{\downarrow}$ to $\ket{\uparrow}$, thus creating an Ising domain wall
near the boundary. The effect of the Haar-random unitary gates (5), (6) in the
bulk of the quantum circuit is to simply move the domain wall. Notably, Eq.
(5) implies that the Haar-random gates cannot create or annihilate Ising
domain walls in the bulk of the system, though gates acting near the boundary
can annihilate the Ising domain wall. Once the state of the boundary spin is
$\ket{\uparrow}$, the dissipation cannot alter this state since
$D\ket{\uparrow}=\ket{\uparrow}$; this is simply a consequence of the fact
that the depolarizing channel (1) leaves the maximally-mixed density matrix
$\rho=\mathds{1}_{q\times q}/q$ unchanged.
The partition sum $Z(t)$ is thus performed over histories of the entanglement
domain wall trajectories, which can propagate in the bulk of the system, or be
created/annihilated at the boundary. Formally, we write
$\displaystyle Z(t)=\sum_{x\geq 0}z(x,t)$ (10)
where $z(x,t)$ is a restricted sum over trajectories of the entanglement
domain wall where the domain wall ends up between sites $x-1$ and $x$ at time
$t$. In this convention, $z(0,t)$ corresponds to trajectories where the
entanglement domain wall no longer exists at time $t$, as it has been
annihilated at the left interface.
We may now write the Haar-averaged purities as
$\displaystyle\langle\Tr\,\rho_{A}(t)^{2}\rangle=q^{2}\sum_{y>x_{0}}z(y,t)+q\sum_{y\leq
x_{0}}z(y,t)$ (11) $\displaystyle\langle\Tr\,\rho_{A\cup
R}(t)^{2}\rangle=q^{2}\sum_{y\leq x_{0}}z(y,t)+q\sum_{y>x_{0}}z(y,t)$ (12)
This is due to the fact that $\langle\Tr\,\rho_{A}(t)^{2}\rangle$
involves a sum over trajectories of the entanglement domain wall, with an
additional weight $q^{2}$ given to trajectories which end at a position
$y>x_{0}$ and a weight $q$ given to trajectories ending at $y\leq x_{0}$,
where $x_{0}$ is the location of the entangled reference qudit. The opposite
weighting scheme applies to $\langle\Tr\,\rho_{A\cup R}(t)^{2}\rangle$. These
additional weights arise due to the fact that depending on the final position
of the entanglement domain wall, the boundary spin at $x_{0}$ is contracted with
the state $\ket{\uparrow}$ or $\ket{\downarrow}$. These overlaps are given in
Eq. (4). With these expressions, it is straightforward to see that
$\displaystyle
I_{A,R}^{(\mathrm{ann})}(t)=\log_{q}\left[\frac{q^{2}-q(q-1)P(x_{0},t)}{1+(q-1)P(x_{0},t)}\right]$
(13)
where
$\displaystyle P(x_{0},t)\equiv\frac{1}{Z(t)}\sum_{y\geq x_{0}}z(y,t)$ (14)
is the probability that the domain wall ends at a position $y\geq x_{0}$ at
time $t$.
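The domain-wall sums of Eqs. (10)-(14) are straightforward to iterate
numerically. The sketch below (ours; the function name `annealed_mi` and the
specific update rule are our choices) uses schematic weights read off from Eqs.
(6) and (7), namely a bulk hop weight $q/(q^{2}+1)$, a boundary creation weight
$p(2-p)/q$, and a no-wall persistence weight $(1-p)^{2}$, as a stand-in for the
exact lattice weights derived in Appendix A:

```python
import numpy as np

def annealed_mi(q, p, L, T, x0):
    """Annealed mutual information, Eq. (13), from the domain-wall
    partition sum Z(t) = sum_x z(x,t) of Eq. (10). The weights below are
    a schematic reading of Eqs. (6)-(7); the exact lattice partition
    function is derived in Appendix A of the paper."""
    w = q / (q ** 2 + 1)            # bulk hop weight per step, Eq. (6)
    z = np.zeros(L + 1)
    z[0] = 1.0                      # contract from the all-down top boundary: no wall
    for _ in range(T):
        znew = np.zeros(L + 1)
        znew[0] += (1 - p) ** 2 * z[0]       # no wall persists, Eq. (7)
        znew[1] += p * (2 - p) / q * z[0]    # dissipation creates a wall, Eq. (7)
        znew[0] += w * z[1]                  # wall annihilated at the left boundary
        for x in range(1, L):
            znew[x + 1] += w * z[x]          # wall hops right
            if x > 1:
                znew[x - 1] += w * z[x]      # wall hops left
        z = znew / znew.sum()                # normalize for numerical stability
    P = z[x0:].sum()                         # Eq. (14)
    return np.log((q**2 - q*(q - 1)*P) / (1 + (q - 1)*P)) / np.log(q)  # Eq. (13)

print(annealed_mi(q=2, p=0.02, L=200, T=100, x0=1))  # close to 2: info retained
print(annealed_mi(q=2, p=0.5, L=200, T=100, x0=1))   # close to 0: info lost
```

For these schematic weights the wall depins where the boundary weight
$(1-p)^{2}+p(2-p)/q$ drops below the bulk entropy $2q/(q^{2}+1)$, i.e. near
$p\approx 0.23$ for $q=2$; this is a property of the sketch only, not the
annealed $p_{c}$ of the exact lattice model.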
## III Quantum Coding Transition
In this section, we study the behavior of the encoding of quantum information
in the system, after evolving the system by the quantum circuit for $T$
timesteps, for a fixed dissipation strength $p$. The number of timesteps of
the evolution $T$ can be large so that $T/L\sim O(1)$ but is taken to be small
enough throughout the entirety of this section, so that the left and right
ends of the one-dimensional qudit chain are causally disconnected. As $p$ is
increased from zero, we will find a “quantum coding” transition, where
information initially encoded in the system is lost to the environment above a
threshold $p=p_{c}$.
### III.1 Annealed Mutual Information, and the Pinning of an Ising Domain
Wall
First, we investigate the behavior of $I_{A,R}^{(\mathrm{ann})}$ as the
dissipation strength $p$ is tuned, by studying the Ising lattice magnet that
emerges after performing a Haar-average over the unitary gates in the quantum
circuit.
As discussed in Sec. II.2, the partition sum $Z(T)$ describes a single Ising
domain wall which can propagate through the bulk of the two-dimensional
system, and be created/annihilated at the left boundary of the system. Tuning
the dissipation strength, which alters the Ising symmetry-breaking field
applied at the boundary, modulates an effective “pinning potential” for the
Ising domain wall. This can be clearly seen in the limiting cases when $p=0$
or $1$. In the former case, the dissipation is completely absent, and Eq. (5)
implies that the all-down state is left invariant by the transfer matrix for
the Haar-averaged circuit. Thus, in this limit, there is no Ising domain wall.
In contrast, when $p=1$, the boundary spin is fixed in the $\ket{\uparrow}$
state, and the domain wall is effectively repelled from the left boundary.
Increasing the dissipation strength can then drive a pinning/de-pinning phase
transition for the entanglement domain wall. Similar phase transitions due to
the presence of a boundary magnetic field in an Ising magnet have been studied
in the literature (see, e.g. Ref. [51, 52, 53]). Equivalently, the temporally-
directed nature of the Ising domain wall also suggests these paths may be
thought of as the imaginary-time trajectories of a single quantum-mechanical
particle on the half-line, which experiences a boundary potential whose
strength is tuned by the dissipation. $Z(T)$ is thus an amplitude for
this particle to propagate under imaginary time-evolution by this Hamiltonian.
In this setting, the particle can undergo a localization transition when the
potential is _sufficiently_ attractive [52]. This result is to be contrasted
with the well-studied problem of a particle on the full line, with a delta-
function potential near the origin, which always forms a bound-state in the
potential well as long as the potential is attractive.
The annealed mutual information precisely measures the localization of the
Ising domain wall, as is evident from Eq. (13). Deep within a localized phase,
where the transverse wandering of the domain wall is governed by a length-
scale $\ell_{\perp}$, the probability $P(x_{0},T)\sim e^{-x_{0}/\ell_{\perp}}$
($\ell_{\perp}\ll x_{0})$, so that $I^{(\mathrm{ann})}_{A,R}$ is a constant,
deviating from its maximal value of $2$ by a correction which varies
within the localized phase. In contrast, in the delocalized phase, the
probability $P(x_{0},T)\overset{T\rightarrow\infty}{=}1$, where the limit is
taken keeping the ratio $T/L$ fixed.
Properties of this coding transition, as seen by annealed-averaged
observables, such as the annealed mutual information, may be obtained by
studying the lattice partition function for the Ising domain wall, which we
present in Appendix A, due to the technical nature of the calculations
involved. From this study, we find that
1. 1.
The phase transition occurs at a dissipation strength $p_{c}$ which varies as a
function of the on-site Hilbert space dimension $q$. The behavior of $p_{c}$
as $q$ is tuned may be determined by studying the lattice partition function.
In the limit $q\rightarrow\infty$, the coding transition is absent.
Specifically, we find that
$\displaystyle p_{c}=1-O(q^{-2})$ (15)
so that information is always preserved in the system in the limit that the
on-site Hilbert space dimension is strictly infinite.
2. 2.
Near the phase transition, the annealed mutual information takes the universal
scaling form
$\displaystyle
I^{(\mathrm{ann})}_{A,R}(T)=T^{-\beta/\nu}F(T^{1/\nu}(p-p_{c}))$ (16)
where $\beta=1/2$ and $\nu=2$. The function $F(x)\sim|x|^{\beta}$ as
$x\rightarrow-\infty$. This relation is obtained by determining that in the
thermodynamic limit, the annealed mutual information should vanish on
approaching the transition as $I^{(\mathrm{ann})}_{A,R}\sim\ell_{\perp}^{-1}$,
where $\ell_{\perp}$ is the distance of a transverse excursion of the Ising
domain wall in the pinned phase. This length scale is shown to diverge as
$\ell_{\perp}\overset{p\rightarrow p_{c}^{-}}{\sim}(p_{c}-p)^{-\beta}$ upon
approaching the phase transition.
The above scaling form for the annealed mutual information is in good
quantitative agreement with numerical studies, which we perform by directly
studying the transfer matrix for the Ising magnet. A numerically-obtained
scaling collapse for the annealed mutual information is shown in Fig. 3, which
is consistent with Eq. (16).
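The collapse itself is a two-line transformation of the raw curves. Reusing the
`annealed_mi` sketch from the end of Sec. II.2 (so `pc_guess` below refers to
the depinning point of those schematic weights, roughly $0.23$ for $q=2$, and
is a placeholder rather than the $p_{c}$ of the exact lattice model), one may
plot:

```python
import numpy as np
import matplotlib.pyplot as plt

beta, nu = 0.5, 2.0          # exponents of Eq. (16)
pc_guess = 0.225             # depinning point of the schematic weights (q=2)
for T in (100, 200, 400):
    ps = np.linspace(pc_guess - 0.1, pc_guess + 0.05, 31)
    I = np.array([annealed_mi(q=2, p=p, L=2 * T, T=T, x0=1) for p in ps])
    plt.plot(T ** (1 / nu) * (ps - pc_guess), T ** (beta / nu) * I, label=f"T={T}")
plt.xlabel(r"$T^{1/\nu}(p-p_c)$")
plt.ylabel(r"$T^{\beta/\nu}\,I^{(\mathrm{ann})}_{A,R}$")
plt.legend()
plt.show()
```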
Figure 3: Scaling collapse of the annealed mutual information, consistent with
the scaling form in Eq. (16). The inset shows the behavior of the annealed
mutual information as a function of dissipation strength $p$, indicating the
presence of a coding transition. The exponents $\beta=1/2$, $\nu=2$ are
determined from properties of the pinning transition of the Ising domain wall.
The system size is taken to be large enough that the left and right ends of
the qudit chain are causally disconnected.
We expect that the qualitative behaviors presented here hold for the
“quenched-averaged” quantities of interest, such as the averaged von Neumann
mutual information $\langle I_{A,R}(t)\rangle$, which truly diagnose the loss
of quantum information from the system, as the dynamics proceed. The true
nature of the phase transition, however, will be different, as we discuss in
Sec. III.3.
### III.2 Numerical Study
Having obtained a qualitative understanding of the coding transition by
considering the “annealed” Haar average of the Rényi mutual information, we
now demonstrate the presence of this transition in numerical studies of
quantum circuit evolution in a qubit chain ($q=2$ on-site Hilbert space
dimension). Here, the unitary time evolution of the bulk is governed by
Clifford random unitary gates, arranged in a brickwork structure. This setup
allows us to simulate the dynamics of large systems for sufficiently long
times to study the phase transition introduced above, by relying on the
stabilizer formalism. The boundary dissipation is realized as a random erasure
channel, acting on the leftmost qubit with probability $p$ in each time step,
by deleting the information stored in the qubit. In the stabilizer formalism,
this boundary erasure channel is implemented by deleting all stabilizers
acting non-trivially (as a non-identity operator) on the leftmost qubit.
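Schematically, this stabilizer-deletion step reads as follows (a sketch, ours;
the function name is ours, and a production simulation would track the full
tableau and re-canonicalize the generators after each erasure):

```python
def erase_boundary_qubit(stabilizers):
    """Erasure channel on qubit 0 in the stabilizer formalism, as described
    in the text: drop every generator acting non-trivially (with X, Y or Z)
    on the leftmost qubit. Stabilizers are Pauli strings such as "XIZ"."""
    return [s for s in stabilizers if s[0] == "I"]

# Example: of these three generators, only "IZZ" survives the erasure.
print(erase_boundary_qubit(["XXI", "IZZ", "ZIZ"]))
```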
We note that besides the protocol described above, we also considered other
forms of boundary dissipation and Clifford scrambling, all giving rise to
similar results for the behavior of the mutual information. Specifically, we
implemented an alternative dissipation channel, by applying a CNOT gate
entangling the boundary qubit with an environmental ancilla qubit that was
subsequently traced out from the density matrix. Moreover, we considered
protocols with sparse bulk scrambling, where each unitary gate in the
brickwork structure is a random Clifford unitary with probability $p_{U}<1$,
but the trivial identity operator with probability $1-p_{U}$. This scenario
allowed us to tune the efficiency of the scrambling through the parameter
$p_{U}$, while keeping the boundary noise fixed, leading to a phase transition
similar to the one discussed in the main text. We discuss these alternative
protocols in more detail, and present supplementary numerical results in
Appendix B.
The Bell pair is encoded in the initial state at the leftmost site, by
entangling the boundary qubit with a reference qubit, while the remaining
qubits are initialized in a random product state. We run the dissipative
dynamics for time $T$, with system size $L$ chosen to keep $T/L<1$ fixed, such
that the right boundary of the system is not causally connected to the Bell
pair. This setting allows us to detect the coding transition induced by a
single boundary, by increasing the evolution time $T$. Importantly, due to the
fixed ratio $T/L$, the long time limit $T\to\infty$ and the thermodynamic
limit $L\to\infty$ are taken simultaneously; we are therefore probing the
mutual information on time scales where the system is expected to become
thermalized.
Figure 4: Coding transition induced by a single boundary. The mutual
information between the reference qubit and the output of the circuit shown as
a function of dissipation strength $p$, for $T/L<1$ fixed, with boundary
dissipation realized as a random erasure channel. The scaling with circuit
depths $T$ points to a phase transition between a phase with partially
protected information, and a phase with all information lost.
The mutual information $I_{A,R}$ between the output of the dissipative quantum
circuit $A$ and the reference qubit $R$ is shown in Fig. 4, for different
dissipation strengths $p$ and circuit depths $T$. These results are consistent
with a coding transition tuned by the dissipation strength $p$, between a
phase where the system retains part of the encoded information, and a strongly
dissipative phase with all information lost. We note that determining the
critical exponents and critical point of this transition from finite time data
is numerically challenging. Nevertheless, we attempt to estimate these
parameters by noting that the mutual information obeys the finite size scaling
$I_{A,R}\sim T^{-\beta/\nu}$ at the critical dissipation strength $p_{c}$,
while it saturates to a finite value as $T\rightarrow\infty$ for $p<p_{c}$.
Relying on this observation, we identify $p_{c}$ with the smallest $p$ where
the numerical data are consistent with $I_{A,R}$ approaching zero
algebraically as $T\rightarrow\infty$, yielding the estimate $p_{c}\approx
0.5$. We then use the critical scaling $\left.I_{A,R}\right|_{p=p_{c}}\sim
T^{-\beta/\nu}$ to fit the ratio $\beta/\nu$, see Fig. 5a. Finally, we
estimate $\nu$ by requiring a good scaling collapse for the full set of data
from Fig. 4. We obtain the critical parameters $p_{c}=0.5$, $\beta/\nu=0.34$
and $\nu=2$, yielding the scaling collapse shown in Fig. 5b. We note, however,
that due to the large number of fitting parameters, the critical exponents
extracted this way carry a considerable uncertainty. We leave the more
thorough investigation of critical properties for future work.
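The extraction of $\beta/\nu$ amounts to a linear fit in log-log coordinates; a
minimal sketch (ours, with placeholder numbers standing in for the Clifford
data of Fig. 5a) is:

```python
import numpy as np

# Fit beta/nu from I_{A,R}(T) ~ T^{-beta/nu} at p = p_c.
# Ts and I_at_pc are placeholder values standing in for simulation data.
Ts = np.array([64, 128, 256, 512])
I_at_pc = np.array([0.62, 0.49, 0.39, 0.31])
slope, intercept = np.polyfit(np.log(Ts), np.log(I_at_pc), 1)
print(f"beta/nu = {-slope:.2f}")    # ~0.33 for these placeholder values
```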
Figure 5: Critical properties of the coding transition for a single boundary.
(a) Critical power law scaling of the mutual information with respect to
circuit depth $T$ at the estimated transition point, $p_{c}=0.5$. The scaling
relation $I_{A,R}\sim T^{-\beta/\nu}$ is used to extract $\beta/\nu=0.34$
(dashed line). (b) Full scaling collapse of rescaled mutual information
$T^{\beta/\nu}I_{A,R}$ as a function of $T^{1/\nu}\left(p-p_{c}\right)$, using
$\nu=2$.
### III.3 The Replica Limit and the Nature of the Phase Transition
The behavior of quenched-averaged quantities, e.g. the Haar-averaged Rényi
mutual information $\langle I_{A,R}^{(2)}(t)\rangle$, close to the coding
phase transition is quantitatively distinct from the annealed-averaged mutual
information studied in Sec. III.1. This is suggested by the numerical studies
in the previous section, which present strong evidence that the coding phase
transition is in a different universality class from a de-pinning phase
transition for a single Ising domain wall. Here, we will provide some
conjectures on the nature of this phase transition, based on analytic
arguments.
We will focus our attention on the averaged second Rényi mutual information
$\langle I_{A,R}^{(2)}(t)\rangle$ whose behavior may be obtained via a
“replica trick”; the second Rényi entropy may be obtained in the limit
$S_{A}^{(2)}(t)=\displaystyle\lim_{k\rightarrow
0}\left(1-\left[\Tr\rho_{A}(t)^{2}\right]^{k}\right)/(k\ln q)$, so that the
calculation of the Haar-averaged mutual information reduces to evaluating
quantities such as $\langle\left[\Tr\rho_{A}(t)^{2}\right]^{k}\rangle$ in a
replica limit $k\rightarrow 0$. After the Haar average, these quantities may
be regarded as partition functions for lattice magnets with “spins” taking
values in the permutation group on $2k$ elements $S_{2k}$ [39]. A drastic
simplification in the limit of large, but finite, on-site Hilbert space
dimension $q$ occurs [54], whereby
$\langle\left[\Tr\rho_{A}(t)^{2}\right]^{k}\rangle$ may be regarded as $k$
copies of an Ising magnet, with weak inter-replica interactions at each
spacetime point where a Haar-random unitary gate has been applied. The intra-
replica interactions for each Ising magnet are described by the statistical
mechanical rules presented in Sec. II.1. The inter-replica interactions are
known to be attractive, and vanish in the limit that $q$ is strictly infinite
[54]. As already derived in Sec. II.1, the boundary dissipation acts as an Ising
symmetry-breaking field, giving rise to a boundary potential for the Ising
domain wall within each replica.
Figure 6: The Haar-averaged Rényi mutual information between the reference
qudit(s) and the system, $\langle I^{(2)}_{A,R}(t)\rangle$ is described in the
large-$q$ limit by $k$ Ising domain walls in the presence of attractive,
inter-replica interactions, and an attractive interface within each replica,
in the limit $k\rightarrow 0$. This is described by the path integral in Eq.
(18).
The replica limit of the resulting theory may thus be regarded as the
description of a directed path in a random environment [55, 56], restricted to
the half-line $x\geq 0$, and in the presence of a potential near this
boundary, due to the dissipation. The path integral for this problem for a
given realization of the disorder is formally given by
$\displaystyle Z[V]=\int\,Dx(\tau)\,e^{-S[x,V]}$ (17)
where
$\displaystyle S[x,V]\equiv\int
d\tau\left[\frac{1}{2}\left(\frac{dx}{d\tau}\right)^{2}+V[x,\tau]-u\,\delta[x]\right].$
(18)
Here $x(\tau)$ is the coordinate of the path at time $\tau$. The random
potential in the bulk $V[x,\tau]$ is taken to have zero mean, and is short-
range-correlated in spacetime, e.g. we may take the potential to be delta-
function-correlated as
$\overline{V[x,\tau]V[x^{\prime},\tau^{\prime}]}=\sigma^{2}\delta(x-x^{\prime})\delta(\tau-\tau^{\prime})$,
where $\overline{\cdots}$ denotes an average over the probability distribution
for the disorder. The statistical mechanics of the replicated theory
$\overline{Z^{k}}$ thus describes $k$ interacting paths in the presence of a
boundary potential, and thus resembles that of the Haar-averaged quantities
$\langle\left[\Tr\rho_{A}(t)^{2}\right]^{k}\rangle$,
$\langle\left[\Tr\rho_{A\cup R}(t)^{2}\right]^{k}\rangle$ in the limit of
large, but finite, $q$. A schematic depiction of this replicated theory is
shown in Fig. 6.
The weak inter-replica interactions are known to be a relevant perturbation at
the critical point describing the pinning of a single Ising domain wall [57].
Remarkably, the new critical point describing the pinning/de-pinning of a
directed polymer to an interface, has been understood exactly [57] by Bethe
ansatz techniques. The characteristic wandering length of the polymer
transverse to the interface diverges with an exponent $\nu_{\perp}=2$ on
approaching the phase transition from the localized phase, while the
divergence of the specific heat is characterized by the exponent $\alpha=0$.
For time-independent dissipation (e.g. the depolarizing channel is applied
identically at the boundary at each time of the quantum circuit evolution), we
thus expect the coding transition to be in the universality class of this de-
pinning phase transition for a directed polymer.
In contrast, if the boundary dissipation varies randomly in time - as was
studied in Sec. III.2 - then the nature of the phase transition is not
completely understood. This problem corresponds to having an imaginary-time-
dependent boundary potential $u(\tau)=u_{0}+v(\tau)$ in (18), where $v(\tau)$
has zero mean and is short-range-correlated in spacetime; for simplicity, we
take
$\overline{\overline{v(\tau_{1})v(\tau_{2})}}=\mu^{2}\delta(\tau_{1}-\tau_{2})$,
with $\overline{\overline{\cdots}}$ denoting the average over the distribution
for $v(\tau)$.
We may study the relevance of randomness in this boundary potential at the de-
pinning transition. Here, the action is invariant under coarse-graining and
re-scaling $\tau^{\prime}=\tau/b^{z}$, and $x^{\prime}\equiv x/b$ where $z$ is
the dynamical critical exponent at the phase transition. Under this
transformation, the random boundary potential becomes $\int
d\tau\,v(\tau)\delta[x]\longrightarrow
b^{z-1}\int\,d\tau^{\prime}\,v(b^{z}\tau^{\prime})\delta[x^{\prime}]$, so that
we identify $v^{\prime}(\tau^{\prime})\equiv b^{z-1}v(b^{z}\tau^{\prime})$ as
the renormalized potential in the coarse-grained theory. The correlations of
the renormalized potential are thus
$\displaystyle\overline{\overline{v^{\prime}(\tau^{\prime}_{1})v^{\prime}(\tau^{\prime}_{2})}}=\mu^{2}b^{z-2}\delta(\tau^{\prime}_{1}-\tau^{\prime}_{2})$
(19)
Therefore, the strength of the disorder decreases under renormalization when
$z<2$. It has been conjectured [58] that $z=3/2$ at the pinning transition for
the directed polymer, in which case the randomness in the boundary potential
would be irrelevant by Eq. (19), and the same fixed point describing the de-
pinning of a directed polymer studied in Ref. [57] would describe the
resulting transition in the presence of randomness.
We caution, however, that the correctness of this result of Ref. [58] for the
dynamical exponent is not established. The numerical studies presented in Sec. III.2 further
suggest that $\nu_{\parallel}=2$ (as opposed to
$\nu_{\parallel}=z\nu_{\perp}=3$, which is what would be predicted on the
basis of $z=3/2$ and $\nu_{\perp}=2$), though more extensive numerical studies
are required to pin down the nature of this transition. We note, for
completeness, that Eq. (19) suggests that the random boundary potential is a
marginal perturbation exactly at the de-pinning phase transition for the Ising
domain wall (which has $z=2$ [53]). A Wilsonian renormalization-group
calculation to higher order further suggests that the disorder is marginally
_relevant_ [59]. The nature of the resulting critical point is not understood,
and deserves further investigation.
### III.4 Perfect information protection using scrambling
In the low-dissipation phase of the coding transition, quantum information is
only partially protected. One would expect that the information protection can
be improved by first scrambling the information with unitary gates, which can
effectively act like a random encoding, before the dissipation is turned on;
we refer to this as a “pre-scrambling” step. Here we argue that for fixed
system size $L$ and dissipation strength $p$, scrambling the initially local
quantum information via a random unitary circuit of logarithmic depth
$t_{\mathrm{scr}}=k\log L$ for some sufficiently large $k$, can lead to
perfect protection of quantum information within the system, up to times of
order $T\sim L/p$. For a pre-scrambling step with a fixed depth
$t_{\mathrm{scr}}=k\log L$ and for low $k$, we can observe the coding
transition by tuning the dissipation strength $p$. The coding transition will
now be manifest in a step-function-like behavior of the mutual information
$I_{A,R}$ across the transition due to the perfect preservation of information
for sufficiently low dissipation.
Figure 7: The behavior of the Ising domain wall in the presence of a pre-
scrambling step, whereby the initially local quantum information is evolved by
a quantum circuit of depth $t_{\mathrm{scr}}$. We consider propagation of the
domain wall backwards in time, with respect to the arrow of time in the
quantum circuit. In this picture, trajectories of the domain wall which
survive in the bulk into the pre-scrambling step (right) are exponentially
suppressed relative to trajectories which are annihilated at the boundary
beforehand (left).
To gain some intuition for this result, we again consider the statistical
mechanics of the Ising domain wall. As before, the domain wall is naturally
thought of as propagating in a direction which is opposite to the arrow of
time in the quantum circuit evolution. The domain wall thus propagates through
$T$ timesteps of the circuit involving boundary dissipation, and then
encounters the pre-scrambling step where the dissipation is absent. This
corresponds to free evolution of the domain wall without the symmetry-breaking
field at the boundary. When this field at the boundary is turned off,
trajectories of the domain wall which have already been annihilated at the
boundary – such as the one shown in the left panel of Fig. 7 – do not cost
additional weights in the partition sum. On the other hand, “surviving” domain
wall trajectories in the bulk – such as the one shown in the right panel of
Fig. 7 – incur a weight of $q/(q^{2}+1)$ at each time step. Thus the weights
of the bulk trajectories of the domain wall are exponentially suppressed in
time relative to trajectories terminating at the boundary.
Let $Z_{a}(t,T)$ be the partition function for the Ising domain wall, after
the $T$ timesteps of the dynamics with dissipation have taken place, followed
by an additional $t$ timesteps of pre-scrambling, such that the domain wall
has been annihilated at the boundary of the system. In contrast, let
$Z_{b}(t,T)$ be the partition function for the Ising domain wall to “survive”
in the bulk of the system after the same evolution. To determine the behavior
of the annealed mutual information, we wish to determine the probability that
the domain wall ends at position $x\geq x_{0}$ after another $t$ steps of the
dissipation-free evolution, as per Eq. (13), where $x_{0}$ is the location of
the entangled reference qubit of quantum information. For simplicity of
presentation, we take $x_{0}$ to be at the boundary of the qubit chain, so
that this probability $P(t,T)$ is
$\displaystyle P(t,T)$
$\displaystyle=\frac{Z_{b}(t,T)}{Z_{a}(t,T)+Z_{b}(t,T)}$ (20)
To make progress, we note that since the “surviving” trajectories contributing
to $Z_{b}(t,T)$ are exponentially suppressed in time, we may write that
$Z_{b}(t,T)=Z_{b}(0,T)e^{-\gamma t}$, where $\gamma$ is a phenomenological
decay rate which will be a function of the local Hilbert space dimension, and
the dissipation strength. We further approximate the partition sum
$Z_{a}(t,T)$ by its value before the pre-scrambling step, so that
$Z_{a}(t,T)=Z_{a}(0,T)$. With these approximations, we may write
$\displaystyle P(t,T)$ $\displaystyle=\frac{P(0,T)}{P(0,T)+[1-P(0,T)]e^{\gamma
t}}$ (21)
The annealed mutual information is now obtained from Eq. (13). At sufficiently
long times, so that $P(t,T)\ll 1$, we thus find that the mutual information
deviates from its maximal value by
$\displaystyle
2-I^{(\mathrm{ann})}_{A,R}(t)=\frac{q^{2}-1}{q}\cdot\frac{P(0,T)}{P(0,T)+[1-P(0,T)]e^{\gamma
t}}$ (22)
In the pinned phase of the domain wall, we expect $P(0,T)$ to be exponentially
small in the number of timesteps $T$. In contrast, in the de-pinned phase, the
probability that the domain wall has been annihilated at the interface decays
as a power-law in time due to the diffusive nature of the Ising domain wall,
so that $P(0,T)=1-O(T^{-a})$, with $a$ a constant. For fixed $T$, we thus find
that for a sufficiently long pre-scrambling time $t$, the mutual information
deviates from its maximal value as
$\displaystyle 2-I^{(\mathrm{ann})}_{A,R}(t)\sim\begin{cases}e^{-\gamma
t}&p<p_{c}\\\ T^{a}e^{-\gamma t}&p>p_{c}\end{cases}.$ (23)
Evaluating this expression at the scrambling time $t_{\mathrm{scr}}=k\log L$
yields
$\displaystyle 2-I_{A,R}^{\mathrm{(ann)}}(t)\sim\begin{cases}L^{-\gamma
k}&p<p_{c}\\\ L^{a-\gamma k}&p>p_{c}\end{cases}.$ (24)
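Eqs. (21), (22) and (24) are simple enough to evaluate directly. The sketch
below (ours, with illustrative values for $P(0,T)$ and the phenomenological
rate $\gamma$) exhibits the $L^{-\gamma k}$ suppression of the deficit
$2-I^{(\mathrm{ann})}_{A,R}$ at $t_{\mathrm{scr}}=k\log L$:

```python
import numpy as np

def mi_deficit(P0, gamma, t, q):
    """Deviation 2 - I^(ann)_{A,R} of Eq. (22), given the probability
    P(0,T) that the wall survives before the pre-scrambling step and a
    phenomenological decay rate gamma (both illustrative here)."""
    Pt = P0 / (P0 + (1 - P0) * np.exp(gamma * t))   # Eq. (21)
    return (q ** 2 - 1) / q * Pt

# With t_scr = k log L the deficit scales as L^{-gamma k}, Eq. (24); k = 2 here:
for L in (64, 256, 1024):
    print(L, mi_deficit(P0=0.9, gamma=1.0, t=2 * np.log(L), q=2))
```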
Figure 8: Coding transition with logarithmic-depth pre-scrambling. In (a),
$I_{A,R}^{\mathrm{(ann)}}$ vs $p$ is plotted with a pre-scrambling circuit of
depth $t_{\mathrm{scr}}\sim\log L$. The subsequent evolution with dissipation
proceeds for a total number of timesteps $T=L$. The main plot is for
$t_{\mathrm{scr}}=4\log_{2}(L)$. The annealed mutual information approaches
the maximum value as $L$ is increased indicating that logarithmic-depth
encoding is enough to protect the information against boundary dissipation.
Inset shows the plot for $t_{\mathrm{scr}}=\log_{2}(L)$ with
$I_{A,R}^{\mathrm{(ann)}}$ going through a transition with respect to $p$. The
results agree with Eq. (24) derived in the main text. In (b), the mutual
information, as calculated from Clifford circuit dynamics with pre-
scrambling of depth $t_{\mathrm{scr}}=\log_{2}(L)$, plotted as a function of
dissipation strength $p$. Boundary dissipation is realized as a random erasure
channel, and $T/L=1/2$ is kept fixed for different system sizes. The mutual
information reveals a phase transition, with the critical point appearing as a
crossing point of the data for different system sizes.
The above calculation implies that for $t_{\mathrm{scr}}=k\log L$, with $k$
large enough, quantum information is perfectly preserved. Logarithmic
scrambling is enough to protect the information against noise. For low values
of $k$, the mutual information can exhibit different behavior depending on
whether $a-\gamma k$ is positive or negative. We show the results obtained
from studying the annealed mutual information numerically in Fig. 8a, and find
good agreement with the considerations above.
We now turn to the simulation of Clifford quantum circuit dynamics. To explore
how logarithmic pre-scrambling affects the coding transition induced by a
single boundary, we modify the circuit protocol to include a unitary, non-
dissipative pre-scrambling step, with pre-scrambling time scaling
logarithmically with system size, $t_{\mathrm{scr}}=k\log L$, before applying
the dissipative dynamics for time $T$. We then approach the thermodynamic
limit by increasing $T$ and $L$, while keeping the aspect ratio $T/L<1$ fixed.
In accordance with the insights gained above from the annealed Haar average,
we find a phase transition for $k=1$ as a function of $p$ between a phase
retaining information between the input and output of the circuit, and a phase
with all information destroyed by dissipation, as shown in Fig. 8b. The
critical properties are different from the case without pre-scrambling
discussed in the previous subsection, and, as predicted by the annealed model,
the critical point is signaled by a crossing point in the mutual information
obtained for different system sizes. We find a similar coding transition for
$k\leq k_{\rm max}$, with $k_{\rm max}\sim O(1)$. For even larger values of
$k$, the mutual information remains maximal for all values of $p$.
## IV Coding transition on the approach to thermalization
In the previous section, we studied systems of size $L$ with dissipation
acting near the left boundary in the regime $T\lesssim L$ so that the right
boundary did not play a role in the dynamics. More precisely, as long as $L/T$
remains larger than the velocity of the entanglement domain wall, which is
less than the lightcone velocity in the quantum circuit, the coding transition
can be understood as a depinning transition of the domain wall, such that for
noise rate $p$ below the critical value $p_{c}$ some amount of information
survives.
In this section, we study what happens when the dynamics in the coding phase
extend for even longer periods of time, and show that the surviving
information will eventually be lost to the environment as the system
completely thermalizes. We may understand this result by considering the
dynamics of the Ising domain wall, which describes the behavior of the
annealed mutual information. For sufficiently large $T/L$ the domain wall will
escape and get annihilated at the right boundary. Thus, using Eq. (13),
$I_{A,R}^{\mathrm{(ann)}}$ becomes zero and the information leaks to the
environment. Intuitively speaking, the system becomes entangled with $pT$
environment qubits, and when $pT\gtrsim L$ the system becomes maximally
entangled with the environment and thermalizes. By the monogamy of
entanglement, the reference qudits can no longer be entangled with the system
but are lost to the environment. Therefore for large $T/L$ there is a
transition with respect to the dissipation strength $p$, and the location of
the critical point scales as $p_{d}\sim L/T$; for $p>p_{d}$ the information
gets completely entangled with the environment. This transition is also
visible as a function of $T$ at fixed dissipation strength $p$.
We study this coding transition by performing $t_{\mathrm{scr}}=L$ steps of
pre-scrambling before turning on the noise. As explained in the previous
section, linear pre-scrambling perfectly protects the information for all
strengths of dissipation, provided $T/L$ is sufficiently small. This pre-
scrambling step has the effect of making the transition appear as a “step
function” in the mutual information $I_{A,R}$ as a function of dissipation
strength. Indeed, $I_{A,R}^{(\mathrm{ann})}(T)$ vs $p$ for $T/L=4$ in a Haar-
random circuit in Fig. 9 shows such a behavior, and appears to be a scaling
function of $(p-p_{d})L$ (see inset).
Figure 9: Plot of $I_{A,R}^{(\mathrm{ann})}$ in Haar random circuits. $T/L=4$
and $t_{\mathrm{scr}}=L$. Inset. The data collapse to a single curve as a
function of $(p-p_{d})L$.
### IV.1 Numerical Study
We also verify the above transition in the Clifford circuit setting introduced
in the previous section. Here, after initializing the Bell pair at the left
boundary of the chain, we run a pre-scrambling step linear in system size,
$t_{\mathrm{scr}}=L$, followed by the dissipative dynamics applied for time
$T$. As before, we examine the finite size scaling by increasing $T$ and $L$,
while keeping $T/L>1$ fixed. As already discussed in the annealed framework,
we find a phase transition for large enough aspect ratio $T/L>1$. In Fig. 10a,
we plot the mutual information between the reference qubit and the output of
the circuit as a function of $p$ for different system sizes $L$, using a pre-
scrambling time $t_{\mathrm{scr}}=L$ and aspect ratio $T/L=4$. In perfect
agreement with the annealed picture, the mutual information curve approaches a
step function in the thermodynamic limit, confirming a phase transition
between a phase with all the information protected, and a phase with all
information destroyed.
We find a good scaling collapse with the scaling function depending on
$(p-p_{d})L^{1/2}$, see Fig. 10b. The form of the scaling function differs
from the annealed result. This deviation can be understood by noting that for
the annealed case we applied a deterministic boundary depolarization channel,
Eq. (1), whereas the dissipation in the Clifford circuit is applied at random
time steps, and this disorder may change the properties of the transition.
Indeed, the effect of randomness in the dissipation channel can be studied by
introducing disorder into the annealed model and applying channel (1) at
random times, which leads to a scaling function depending on $(p-p_{d})L^{1/2}$
(data not shown), in perfect agreement with the Clifford circuit results. The
discrepancy between the factor of $L$ and $L^{1/2}$ can be understood as
follows. With randomness, the number of environment qubits entangled with the
system increases linearly with $T$ but has fluctuations of order $\sqrt{T}$.
This results in the critical point fluctuating as $\delta p\sim 1/\sqrt{T}$,
leading to the $(p-p_{d})L^{1/2}$ dependence of the mutual information.
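This $\sqrt{T}$ effect is elementary to check: the number of erasures in $T$
steps is binomially distributed, so the effective dissipation strength
fluctuates as $\sqrt{p(1-p)/T}$ (sketch ours, with the $p_{d}$ of Fig. 10 as an
illustrative value):

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 4000, 0.136
# Number of boundary erasures over T steps fluctuates as sqrt(T p (1-p)),
# so the effective dissipation strength fluctuates as ~ 1/sqrt(T):
n_erasures = rng.binomial(T, p, size=100_000)
print(n_erasures.std() / T, np.sqrt(p * (1 - p) / T))   # the two agree
```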
Figure 10: Coding transition upon approaching thermalization. (a) Mutual
information between the input and the output of the circuit shown as a
function of dissipation strength $p$, converging towards a step function in
the thermodynamic limit. Pre-scrambling time is set to $t_{\mathrm{scr}}=L$,
followed by dissipative dynamics for time $T$, with $T/L=4$ fixed. (b) Data
collapse as a function of $(p-p_{d})L^{1/2}$, with the critical point
$p_{d}=0.136$ corresponding to the crossing point of finite size data.
### IV.2 Nature of the Phase Transition
We end this section by discussing the nature of the transition explored above.
We argue below that the coding transition in this regime is a first-order phase
transition.
To begin with, let us consider the large qudit limit such that
$1/q\ll(1-p)^{2}$. The partition function in the annealed picture contains
contributions coming from all possible trajectories of the domain wall. The
contribution at time $t$ from trajectories having the domain wall at $n_{DW}$
number of time steps is of order $(1/q)^{n_{DW}}((1-p)^{2})^{t-n_{DW}}$. The
entropic factor, due to there being more configurations with the domain wall
as opposed to without it, can only renormalize the $1/q$ factor. Thus the
partition function is dominated by the term having no domain wall at any point
of time, $(1-p)^{2t}$. However, for $(1-p)^{2t}<(1/q)^{L}$, it is preferable
for the domain wall to go all the way to the right boundary and get
annihilated there. Thus at $t_{c}\sim\frac{\log 1/q}{\log(1-p)}L$ the nature
of the domain wall changes discontinuously from being stuck at the noisy
boundary to getting annihilated at the noiseless boundary, indicating a first-
order transition. The finite $q$ corrections to the above picture only act as
thermal fluctuations which cause the domain wall to have some excursions
inside the bulk. The contributions from these excursions will be sub-leading
and we expect the transition to remain first-order. Note that similar time
scales were also identified in [30] for the system to become perfectly
thermalized in the presence of noise.
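For orientation, this crossover time can be evaluated directly (sketch ours;
the paper quotes $t_{c}$ only up to $O(1)$ factors, and we include the factor
of $2$ from $(1-p)^{2t}$):

```python
import numpy as np

def t_crossover(q, p, L):
    """Timescale at which the no-wall weight (1-p)**(2t) drops below the
    weight ~ (1/q)**L of a wall crossing to the right boundary."""
    return L * np.log(1 / q) / (2 * np.log(1 - p))

# Order-of-magnitude comparison with the Clifford estimate p_d = 0.136 at T/L = 4:
print(t_crossover(q=2, p=0.136, L=100) / 100)   # t_c / L, approximately 2.4
```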
As in the standard theory of first-order phase transitions, the two boundaries
correspond to the two local minima for the domain wall and the system
discontinuously jumps from one to another. The mutual information then is a
function of the probability that the system is in one of the two minima (see
Eq. (13)). Since the free energy is extensive, the probability of being in a
particular minimum scales as a function of $\delta gV$ where $\delta g$ is the
tuning parameter for the transition and $V$ is the total volume of the system.
In our case, the volume is equal to $T$. This explains the observed finite-
size collapse as a function of $(p-p_{d})T$, equivalently $(p-p_{d})L$ at the
fixed aspect ratio, in Fig. 9.
## V Encoding at a Finite Rate
So far we have looked into the dynamics of a single Bell pair localized near the
noisy boundary. But it is equally interesting to understand the effects of the
noise when we have an extensive number of Bell pairs in the initial state. We
denote the code rate, defined as the fraction of the system’s qubits entangled
in Bell pairs, by $C=N_{R}/L$ where $N_{R}$ is the total number of Bell pairs.
For the purpose of this section, we will consider code density $C=1/2$ but we
believe that the qualitative results should not change for different values of
$C$ as long as $C$ is not close to $1$. To make the final results independent
of the distribution of the Bell pairs at the initial time we will perform
random encoding by performing unitary scrambling for time
$t_{\mathrm{scr}}=L$.
We plot the annealed mutual information between the input and output,
$I_{A,R}^{(\mathrm{ann})}$, in Fig. 11 as a function of the dissipation
strength for $T=7L$. We find two threshold values of the noise rate,
$p_{th,1}$ and $p_{th,2}$. For $p<p_{th,1}$, the information is perfectly protected
and $I_{A,R}^{\mathrm{(ann)}}$ is equal to the maximal value $2CL$. For
$p_{th,1}<p<p_{th,2}$, the information starts leaking to the environment, but
a finite density of it still remains in the system. Finally, when $p>p_{th,2}$,
the information is completely leaked to the environment. Note that the values
of $p_{th}$ change with the ratio $T/L$.
Similarly to the strategy followed in the previous sections, we verify these
predictions by performing numerical simulations in Clifford quantum random
circuits. We show the density of the mutual information between the output of
the circuit $A$ and the reference qubits, $I_{A,R}/N_{R}$, with $N_{R}=L/2$
denoting the number of input Bell pairs, as a function of dissipation strength
$p$ in Fig. 12, for different system sizes $L$ with $T/L=4$ fixed. As noted
above, here we applied a linear unitary pre-scrambling step for time
$t_{\mathrm{scr}}=L$, before the onset of the noisy dynamics, such that the
results do not depend on the spatial distribution of the Bell pairs in the
initial state. We find a phase with perfectly protected information for small
enough dissipation strength $p$, followed by a crossover region with a finite
density of preserved coherent information decreasing continuously with $p$,
eventually decaying to zero for large $p$.
Figure 11: Top. Schematic representation of the statistical mechanics of the
Ising domain wall in the calculation of the annealed mutual information when
coding at a finite rate. Typical domain wall trajectories for
$p_{th,1}<p<p_{th,2}$ are shown. In $Z_{\Downarrow}$ the domain wall remains
localized, whereas it is delocalized for $Z_{\Uparrow}$, as explained in the
text. Bottom. Plot of the annealed mutual information between the system and
the Bell pairs entangled with the system’s qubits at alternate sites ($C=1/2$).
The Bell pairs are scrambled by a unitary circuit for time
$t_{\mathrm{scr}}=L$. The system is evolved in the presence of the boundary
dissipation for time $T=7L$. We find that for $p<p_{th,1}\approx 0.06$, full
information is preserved, while for $p_{th,1}<p<p_{th,2}\approx 0.2$, a
finite density of information is protected. The threshold values decrease as
$T$ is increased. Inset. For low $p<p_{th,1}$ there is no information loss
even for $T=7L$; that is, the difference between $I_{A,R}^{\mathrm{(ann)}}$
and the maximum value $L$ goes to zero with system size. Thus all Bell pairs
can be perfectly recovered by a recovery operation acting on the system.
Figure 12: Coding transition for finite code rate. Density of mutual
information between the output of the circuit and the reference qubits shown
as a function of dissipation strength $p$, for fixed evolution time $T/L=4$
and number of initial Bell pairs $N_{R}=L/2$. Pre-scrambling time is
$t_{\mathrm{scr}}=L$, followed by noisy dynamics with a random boundary
erasure channel. The information density is perfectly protected for weak
enough dissipation $p$, then decays continuously towards zero with $p$ in a
crossover region, with all information leaked to the environment for $p$ large
enough.
To understand this behavior we again resort to the statistical mechanics of
the Ising domain wall. The model at finite code rate differs in an important
way from the model in which an $O(1)$ amount of quantum information is
encoded. At finite coding rate there is an extensive number of
Ising spins at the top boundary whose state is fixed by the boundary
conditions, though the bulk dynamics of the domain wall remain the same. This
leads to an exponential amplification of the trajectories that minimize the
number of domain walls at the top boundary (note that these domain walls at
the boundary are different from the Ising domain wall performing a random walk
in the bulk). As shown at the top of Fig. 11, the annealed mutual information is given by
$\displaystyle
I_{A,R}^{\mathrm{(ann)}}=CL+\log\left(\frac{Z_{\Downarrow}}{Z_{\Uparrow}}\right)$
(25)
where $Z_{\Downarrow}$ and $Z_{\Uparrow}$ are the partition functions of the
statistical mechanics model with down and up spins, respectively, at the
locations of the encoded Bell pairs; the logarithm is taken in base $q$. As
discussed in Sec. IV, the domain wall changes discontinuously from being at the
left boundary to being at the right boundary. To a good approximation, we can
thus keep only these two trajectories in the partition function. For clarity
of the expressions we also introduce $\tilde{p}\equiv 1-p$. The partition
functions $Z_{\Downarrow}$ and $Z_{\Uparrow}$ can thus be written as
$\displaystyle
Z_{\Downarrow}\approx\tilde{p}^{2T}q^{2CL}+\left(\frac{1}{q}\right)^{L}q^{CL}$
(26) $\displaystyle
Z_{\Uparrow}\approx\tilde{p}^{2T}q^{CL}+\left(\frac{1}{q}\right)^{L}q^{2CL}$
(27)
Substituting the above expressions into Eq. (25) and identifying the threshold
values as $1-p_{th,1}\sim q^{-(1-C)L/(2T)}$ and $1-p_{th,2}\sim q^{-(1+C)L/(2T)}$, we
get
$\displaystyle I_{A,R}^{\mathrm{(ann)}}\approx\begin{cases}2CL&p<p_{th,1}\\\
2CL-2T\log\left(\frac{1-p_{th,1}}{1-p}\right)&p_{th,1}<p<p_{th,2}\\\
0&p>p_{th,2}.\end{cases}$ (28)
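A minimal numerical sketch of this piecewise behavior, evaluating Eq. (25) within the two-trajectory approximation of Eqs. (26)–(27); all parameter values below are illustrative:

```python
import numpy as np

q, L, C = 2.0, 64, 0.5
T = 4 * L
lnq = np.log(q)

def I_ann(p):
    # base-q exponents of the two retained domain-wall trajectories, Eqs. (26)-(27)
    a = 2 * T * np.log(1 - p) / lnq                        # log_q of (1-p)^(2T)
    z_down = np.logaddexp((a + 2*C*L) * lnq, (-L + C*L) * lnq) / lnq
    z_up   = np.logaddexp((a + C*L) * lnq, (-L + 2*C*L) * lnq) / lnq
    return C * L + (z_down - z_up)                         # Eq. (25), log in base q

p_th1 = 1 - q ** (-(1 - C) * L / (2 * T))                  # ~0.042 for these parameters
p_th2 = 1 - q ** (-(1 + C) * L / (2 * T))                  # ~0.122 for these parameters
for p in [0.02, 0.08, 0.20]:                               # one point in each regime
    print(f"p={p:.2f}  I_ann={I_ann(p):6.2f}  (max 2CL={2*C*L:.0f})")
```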
Intuitively, for low $p$ the domain wall remains localized near the noisy
boundary and the mutual information is maximal. As $p$ is increased, the domain
wall in $Z_{\Uparrow}$ delocalizes more easily than in $Z_{\Downarrow}$, since
in the former case delocalization results in an exponential reduction of the
cost associated with having domain walls at the boundary. The critical point
at which the domain wall delocalizes therefore differs between the two boundary
conditions, resulting in the two thresholds discussed above.
## VI Summary and Discussion
In this work, we studied one-dimensional quantum many-body systems with a
noisy boundary. We focused on the dynamics of the information of a Bell pair
initially localized near the (noisy) boundary by studying the mutual
information $I_{A,R}(t)$ between the inert spin of the Bell pair and the
system at later times, where $A$ is the system and $R$ is the inert spin. This
is also related to the coherent information about the Bell pair remaining in
the system [49, 50]. We find that the chaotic scrambling due to the unitary
dynamics is sufficient to protect a part of this information from leaking to
the environment for noise rate $p<p_{c}$ and long times $T\lesssim
L/p$, by allowing the information to escape away from the boundary. We further
show that a random encoding of the Bell pair via noiseless scrambling
dynamics of depth $\mathcal{O}(\log L)$ is sufficient to perfectly protect
the information for all strengths of the noise up to time $T\lesssim L/p$. See
Fig. 1.b for a schematic representation of the phase diagram.
In the regime where the total time of evolution $T\gtrsim L/p$, any remaining
information in the system is revealed to the environment and the system goes
through a first-order coding transition. This transition can also be seen as a
result of the system approaching thermalization to infinite temperature. We
expect this form of coding transition to be present for all noisy channels,
though in the case of the boundary noise considered here, the timescales
associated with the transition increase parametrically with the system size
[30].
We also looked at the coding dynamics for finite code rate, that is, when an
extensive number $N_{R}=CL$, with $C<1$, of the system’s qubits are entangled
in Bell pairs. We find that the code space can be perfectly preserved for
noise strength below some threshold $p_{th,1}$, and for strength above
$p_{th,2}$ the code space is completely destroyed; see Figs. 11 and 12. We can
also look at the time for which the information stays in the system for a
fixed noise rate $p$ and equivalently define two threshold times
$T_{th,1}<T_{th,2}$, both of which scale linearly with system size.
This work provides new insights into the competition between scrambling and
decoherence. Normally, active feedback in the form of error correction is
needed to counter the decohering effects of the noise. However, we present
the case of boundary noise, where it is possible to have stable quantum
error-correcting codes (QEC) in the presence of generic noise, with the code
space dynamically protected by scrambling. Previously, such dynamical
protection of information was also observed for the special case of dephasing
noise, which can be unraveled into quantum trajectories corresponding to
projective measurements, but there an extensive number of ancilla qubits that
act as registers for the measurement outcomes are made part of the system
[27]. It would be of interest to generalize our results and techniques in the
presence of ancilla qubits for cases different from the boundary noise. We
leave this for future work.
Another interesting direction to explore is the presence of similar coding
transitions in purely unitary evolution. It seems possible for quantum
information to remain confined in part of a system evolving under chaotic
unitary dynamics for a long time before the system thermalizes. We leave
a detailed discussion of this direction to future work [60].
The competition between chaos and decoherence has also been studied in the
context of open quantum systems. Previous studies have mostly focused on
level statistics and quantities like the spectral form factor, purity, and
Loschmidt echo to study the effect of decoherence on chaotic dynamics [61, 62,
63, 64, 65, 66, 67, 68]. It is an open question to study such probes in our
context and to ask whether the coding transitions can also be seen in these
quantities. There is also a close relationship between the input-output mutual
information and operator spreading (measured via out-of-time-order correlators
(OTOCs)) in noise-free unitary dynamics [4]. It is interesting to understand
how OTOCs in noisy systems are related to the emergent QEC property of the
noisy dynamics [69, 70, 71]. More generally, how is the dynamics of
information related to the above-mentioned quantities for open quantum
systems?
The coding transitions imply protection of the code space against noise and
the potential existence of a decoding protocol that brings the code space back
to its initial state. Such a protocol is notoriously hard to construct for
random dynamics with little structure, except in a few special cases like the
Hayden-Preskill black hole protocol [1, 72] or for special types of noise like
the erasure channel. For the Clifford circuits with boundary dissipation
considered here, an efficient decoder can probably be constructed for the
erasure channel. Another interesting direction for further understanding the
error-correcting properties of the coding transitions is to look into the code
distance of the resulting code. We leave a detailed study of the decoding
protocols and code distance for future studies.
We also find similar coding transitions for bulk defects, where noise acts on
the same site in the bulk. Protection of quantum information against bulk
defects is important for the design of modular quantum computers, in which
smaller modules of quantum memory/computation are connected together to form a
bigger block. In this case, one expects the noise in the gates connecting two
modules to be far greater than the noise in the bulk of the individual
modules. Thus the existence of an error threshold against a bulk defect and
the availability of the decoding protocol discussed above give a fault-
tolerant way of building a modular quantum computer.
A possible extension of our work is to study information dynamics in noisy
symmetric systems. The behavior of information in symmetric systems with a
local charge density in the presence of measurements has been shown to be
qualitatively different from that without symmetry [73, 74, 75, 76]. It is
also known that systems with local charge conservation can exhibit charge
transport and long-time operator entanglement growth even in the presence of
strong dephasing noise [77, 78]. This may lead to a more robust encoding of
the information when the code space is spread across different charge sectors,
as opposed to being confined to one sector. We leave this for future studies.
###### Acknowledgements.
The authors thank the Kavli Institute for Theoretical Physics (KITP), where
this research was initiated and partly performed. The KITP is supported, in
part, by the National Science Foundation under Grant No. NSF PHY-1748958. S.V.
thanks Matthew Fisher for helpful discussions. U.A. thanks Ali Lavasani for
helpful discussions. I.L. acknowledges support from the Gordon and Betty Moore
Foundation through Grant GBMF8690 to UCSB. This work was supported by the
Simons Collaboration on Ultra-Quantum Matter, which is a grant from the Simons
Foundation (651440, U.A.).
## References
* Hayden and Preskill [2007] P. Hayden and J. Preskill, Black holes as mirrors: Quantum information in random subsystems, JHEP 09, 120.
* Sekino and Susskind [2008] Y. Sekino and L. Susskind, Fast scramblers, JHEP 10, 065.
* Lashkari _et al._ [2013] N. Lashkari, D. Stanford, M. Hastings, T. Osborne, and P. Hayden, Towards the fast scrambling conjecture, Journal of High Energy Physics 2013, 22 (2013), arXiv:1111.6580 [hep-th] .
* Hosur _et al._ [2016] P. Hosur, X.-L. Qi, D. A. Roberts, and B. Yoshida, Chaos in quantum channels, Journal of High Energy Physics 2016, 4 (2016).
* Carr [2010] L. Carr, _Understanding Quantum Phase Transitions_ (CRC Press, 2010).
* Breuer and Petruccione [2002] H. P. Breuer and F. Petruccione, _The theory of open quantum systems_ (Oxford University Press, Great Clarendon Street, 2002).
* Müller _et al._ [2012] M. Müller, S. Diehl, G. Pupillo, and P. Zoller, Engineered open systems and quantum simulations with atoms and ions, in _Advances in Atomic, Molecular, and Optical Physics_, Advances In Atomic, Molecular, and Optical Physics, Vol. 61, edited by P. Berman, E. Arimondo, and C. Lin (Academic Press, 2012) pp. 1–80.
* Carusotto and Ciuti [2013] I. Carusotto and C. Ciuti, Quantum fluids of light, Rev. Mod. Phys. 85, 299 (2013).
* Daley [2014] A. J. Daley, Quantum trajectories and open many-body quantum systems, Advances in Physics 63, 77 (2014), https://doi.org/10.1080/00018732.2014.933502 .
* Maldacena _et al._ [2016] J. Maldacena, S. H. Shenker, and D. Stanford, A bound on chaos, Journal of High Energy Physics 2016, 106 (2016), arXiv:1503.01409 [hep-th] .
* Xu _et al._ [2020] T. Xu, T. Scaffidi, and X. Cao, Does scrambling equal chaos?, Phys. Rev. Lett. 124, 140602 (2020).
* Nahum _et al._ [2017] A. Nahum, J. Ruhman, S. Vijay, and J. Haah, Quantum Entanglement Growth under Random Unitary Dynamics, Physical Review X 7, 031016 (2017), arXiv:1608.06950 [cond-mat.stat-mech] .
* Nahum _et al._ [2018] A. Nahum, S. Vijay, and J. Haah, Operator Spreading in Random Unitary Circuits, Physical Review X 8, 021014 (2018), arXiv:1705.08975 [cond-mat.str-el] .
* von Keyserlingk _et al._ [2018] C. W. von Keyserlingk, T. Rakovszky, F. Pollmann, and S. L. Sondhi, Operator Hydrodynamics, OTOCs, and Entanglement Growth in Systems without Conservation Laws, Physical Review X 8, 021013 (2018), arXiv:1705.08910 [cond-mat.str-el] .
* Mezei and Stanford [2017] M. Mezei and D. Stanford, On entanglement spreading in chaotic systems, Journal of High Energy Physics 2017, 65 (2017), arXiv:1608.05101 [hep-th] .
* Luitz and Bar Lev [2017] D. J. Luitz and Y. Bar Lev, Information propagation in isolated quantum systems, Phys. Rev. B 96, 020406 (2017).
* Shor [1995] P. W. Shor, Scheme for reducing decoherence in quantum computer memory, Phys. Rev. A 52, R2493 (1995).
* Steane [1996] A. M. Steane, Error correcting codes in quantum theory, Phys. Rev. Lett. 77, 793 (1996).
* Bennett _et al._ [1996] C. H. Bennett, D. P. DiVincenzo, J. A. Smolin, and W. K. Wootters, Mixed-state entanglement and quantum error correction, Phys. Rev. A 54, 3824 (1996).
* Knill and Laflamme [1997] E. Knill and R. Laflamme, Theory of quantum error-correcting codes, Phys. Rev. A 55, 900 (1997).
* Knill _et al._ [1998] E. Knill, R. Laflamme, and W. H. Zurek, Resilient quantum computation: error models and thresholds, Proceedings of the Royal Society of London Series A 454, 365 (1998), arXiv:quant-ph/9702058 [quant-ph] .
* Aharonov and Ben-Or [1999] D. Aharonov and M. Ben-Or, Fault-Tolerant Quantum Computation With Constant Error Rate, arXiv e-prints , quant-ph/9906129 (1999), arXiv:quant-ph/9906129 [quant-ph] .
* Shor [1996] P. Shor, Fault-tolerant quantum computation, in _Proceedings of 37th Conference on Foundations of Computer Science_ (1996) pp. 56–65.
* Li _et al._ [2018] Y. Li, X. Chen, and M. P. A. Fisher, Quantum Zeno effect and the many-body entanglement transition, Phys. Rev. B 98, 205136 (2018), arXiv:1808.06134 [quant-ph] .
* Skinner _et al._ [2019] B. Skinner, J. Ruhman, and A. Nahum, Measurement-Induced Phase Transitions in the Dynamics of Entanglement, Physical Review X 9, 031009 (2019), arXiv:1808.05953 [cond-mat.stat-mech] .
* Chan _et al._ [2019] A. Chan, R. M. Nandkishore, M. Pretko, and G. Smith, Unitary-projective entanglement dynamics, Phys. Rev. B 99, 224307 (2019), arXiv:1808.05949 [cond-mat.stat-mech] .
* Gullans and Huse [2020] M. J. Gullans and D. A. Huse, Dynamical Purification Phase Transition Induced by Quantum Measurements, Physical Review X 10, 041020 (2020), arXiv:1905.05195 [quant-ph] .
* Choi _et al._ [2020] S. Choi, Y. Bao, X.-L. Qi, and E. Altman, Quantum Error Correction in Scrambling Dynamics and Measurement-Induced Phase Transition, Phys. Rev. Lett. 125, 030505 (2020), arXiv:1903.05124 [quant-ph] .
* Noh _et al._ [2020] K. Noh, L. Jiang, and B. Fefferman, Efficient classical simulation of noisy random quantum circuits in one dimension, Quantum 4, 318 (2020).
* Li _et al._ [2023a] Z. Li, S. Sang, and T. H. Hsieh, Entanglement dynamics of noisy random circuits, Phys. Rev. B 107, 014307 (2023a).
* Shannon [1948] C. E. Shannon, A mathematical theory of communication, The Bell System Technical Journal 27, 379 (1948).
* Gallager [1962] R. Gallager, Low-density parity-check codes, IRE Transactions on Information Theory 8, 21 (1962).
* Gallager [1973] R. Gallager, The random coding bound is tight for the average code (corresp.), IEEE Transactions on Information Theory 19, 244 (1973).
* MacKay and Neal [1996] D. J. C. MacKay and R. M. Neal, Near Shannon limit performance of low density parity check codes, Electronics Letters 32, 1645 (1996).
* Richardson and Urbanke [2001] T. Richardson and R. Urbanke, Efficient encoding of low-density parity-check codes, IEEE Transactions on Information Theory 47, 638 (2001).
* Brown and Fawzi [2013] W. Brown and O. Fawzi, Short random circuits define good quantum error correcting codes, in _2013 IEEE International Symposium on Information Theory_ (IEEE, 2013).
* Brown and Fawzi [2015] W. Brown and O. Fawzi, Decoupling with random quantum circuits, Communications in Mathematical Physics 340, 867 (2015).
* Gullans _et al._ [2021] M. J. Gullans, S. Krastanov, D. A. Huse, L. Jiang, and S. T. Flammia, Quantum coding with low-depth random circuits, Phys. Rev. X 11, 031066 (2021).
* Fisher _et al._ [2022] M. P. A. Fisher, V. Khemani, A. Nahum, and S. Vijay, Random quantum circuits, Annual Review of Condensed Matter Physics 14 (2022).
* Potter and Vasseur [2022] A. C. Potter and R. Vasseur, Entanglement dynamics in hybrid quantum circuits, in _Entanglement in Spin Chains: From Theory to Quantum Technology Applications_ (Springer, 2022) pp. 211–249.
* Jian _et al._ [2021] S.-K. Jian, C. Liu, X. Chen, B. Swingle, and P. Zhang, Quantum error as an emergent magnetic field, arXiv e-prints , arXiv:2106.09635 (2021), arXiv:2106.09635 [quant-ph] .
* Li and Fisher [2021] Y. Li and M. P. A. Fisher, Robust decoding in monitored dynamics of open quantum systems with $Z_2$ symmetry, arXiv e-prints , arXiv:2108.04274 (2021), arXiv:2108.04274 [quant-ph] .
* Sá _et al._ [2020] L. Sá, P. Ribeiro, T. Can, and T. c. v. Prosen, Spectral transitions and universal steady states in random kraus maps and circuits, Phys. Rev. B 102, 134310 (2020).
* Weinstein _et al._ [2022] Z. Weinstein, Y. Bao, and E. Altman, Measurement-induced power-law negativity in an open monitored quantum circuit, Physical Review Letters 129, 10.1103/physrevlett.129.080501 (2022).
* Fan _et al._ [2021] R. Fan, S. Vijay, A. Vishwanath, and Y.-Z. You, Self-organized error correction in random unitary circuits with measurement, Physical Review B 103, 174309 (2021).
* Li and Fisher [2021] Y. Li and M. P. A. Fisher, Statistical mechanics of quantum error correcting codes, Physical Review B 103, 104306 (2021).
* Bao _et al._ [2020] Y. Bao, S. Choi, and E. Altman, Theory of the phase transition in random unitary circuits with measurements, Phys. Rev. B 101, 104301 (2020), arXiv:1908.04305 [cond-mat.stat-mech] .
* Li _et al._ [2023b] Y. Li, S. Vijay, and M. P. A. Fisher, Entanglement domain walls in monitored quantum circuits and the directed polymer in a random environment, PRX Quantum 4, 010331 (2023b).
* Schumacher and Nielsen [1996] B. Schumacher and M. A. Nielsen, Quantum data processing and error correction, Phys. Rev. A 54, 2629 (1996).
* Schumacher and Westmoreland [2001] B. Schumacher and M. D. Westmoreland, Approximate quantum error correction, arXiv e-prints , quant-ph/0112106 (2001), arXiv:quant-ph/0112106 [quant-ph] .
* Abraham [1980] D. Abraham, Solvable model with a roughening transition for a planar ising ferromagnet, Physical Review Letters 44, 1165 (1980).
* Abraham [1981] D. Abraham, Binding of a domain wall in the planar ising ferromagnet, Journal of Physics A: Mathematical and General 14, L369 (1981).
* Chalker [1981] J. Chalker, The pinning of a domain wall by weakened bonds in two dimensions, Journal of Physics A: Mathematical and General 14, 2431 (1981).
* Zhou and Nahum [2019] T. Zhou and A. Nahum, Emergent statistical mechanics of entanglement in random unitary circuits, Physical Review B 99, 174205 (2019).
* Huse and Henley [1985] D. A. Huse and C. L. Henley, Pinning and roughening of domain walls in ising systems due to random impurities, Phys. Rev. Lett. 54, 2708 (1985).
* Kardar and Zhang [1987] M. Kardar and Y.-C. Zhang, Scaling of directed polymers in random media, Phys. Rev. Lett. 58, 2087 (1987).
* Kardar [1985] M. Kardar, Depinning by quenched randomness, Phys. Rev. Lett. 55, 2235 (1985).
* Lipowsky and Fisher [1986] R. Lipowsky and M. E. Fisher, Wetting in random systems, Physical review letters 56, 472 (1986).
* Vijay and Fisher [2022] S. Vijay and M. P. A. Fisher, Unpublished (2022).
* Lovas _et al._ [2023] I. Lovas, U. Agrawal, and S. Vijay, Unpublished (2023).
* Kawabata _et al._ [2022] K. Kawabata, A. Kulkarni, J. Li, T. Numasawa, and S. Ryu, Dynamical quantum phase transitions in syk lindbladians (2022), arXiv:2210.04093 [cond-mat.stat-mech] .
* Yan _et al._ [2020] B. Yan, L. Cincio, and W. H. Zurek, Information scrambling and loschmidt echo, Phys. Rev. Lett. 124, 160603 (2020).
* Xu _et al._ [2019] Z. Xu, L. P. García-Pintos, A. Chenu, and A. del Campo, Extreme decoherence and quantum chaos, Phys. Rev. Lett. 122, 014103 (2019).
* Can [2019] T. Can, Random Lindblad dynamics, Journal of Physics A Mathematical General 52, 485302 (2019), arXiv:1902.01442 [quant-ph] .
* Jalabert and Pastawski [2001] R. A. Jalabert and H. M. Pastawski, Environment-independent decoherence rate in classically chaotic systems, Phys. Rev. Lett. 86, 2490 (2001).
* Karkuszewski _et al._ [2002] Z. P. Karkuszewski, C. Jarzynski, and W. H. Zurek, Quantum chaotic environments, the butterfly effect, and decoherence, Phys. Rev. Lett. 89, 170405 (2002).
* Cucchietti _et al._ [2003] F. M. Cucchietti, D. A. R. Dalvit, J. P. Paz, and W. H. Zurek, Decoherence and the loschmidt echo, Phys. Rev. Lett. 91, 210403 (2003).
* Peres [1984] A. Peres, Stability of quantum motion in chaotic and regular systems, Phys. Rev. A 30, 1610 (1984).
* Schuster and Yao [2022] T. Schuster and N. Y. Yao, Operator growth in open quantum systems (2022), arXiv:2208.12272 [quant-ph] .
* Zanardi and Anand [2021] P. Zanardi and N. Anand, Information scrambling and chaos in open quantum systems, Phys. Rev. A 103, 062214 (2021), arXiv:2012.13172 [quant-ph] .
* Yoshida and Yao [2019] B. Yoshida and N. Y. Yao, Disentangling scrambling and decoherence via quantum teleportation, Phys. Rev. X 9, 011006 (2019).
* Yoshida and Kitaev [2017] B. Yoshida and A. Kitaev, Efficient decoding for the hayden-preskill protocol (2017), arXiv:1710.03363 [hep-th] .
* Agrawal _et al._ [2022] U. Agrawal, A. Zabalo, K. Chen, J. H. Wilson, A. C. Potter, J. Pixley, S. Gopalakrishnan, and R. Vasseur, Entanglement and charge-sharpening transitions in u(1) symmetric monitored quantum circuits, Physical Review X 12, 10.1103/physrevx.12.041002 (2022).
* Barratt _et al._ [2022a] F. Barratt, U. Agrawal, A. C. Potter, S. Gopalakrishnan, and R. Vasseur, Transitions in the learnability of global charges from local measurements, Physical Review Letters 129, 10.1103/physrevlett.129.200602 (2022a).
* Barratt _et al._ [2022b] F. Barratt, U. Agrawal, S. Gopalakrishnan, D. A. Huse, R. Vasseur, and A. C. Potter, Field theory of charge sharpening in symmetric monitored quantum circuits, Physical Review Letters 129, 10.1103/physrevlett.129.120604 (2022b).
* Oshima and Fuji [2023] H. Oshima and Y. Fuji, Charge fluctuation and charge-resolved entanglement in a monitored quantum circuit with U (1 ) symmetry, Phys. Rev. B 107, 014308 (2023), arXiv:2210.16009 [cond-mat.dis-nn] .
* Wellnitz _et al._ [2022] D. Wellnitz, G. Preisser, V. Alba, J. Dubail, and J. Schachenmayer, Rise and fall, and slow rise again, of operator entanglement under dephasing, Phys. Rev. Lett. 129, 170401 (2022).
* Cai and Barthel [2013] Z. Cai and T. Barthel, Algebraic versus exponential decoherence in dissipative many-particle systems, Phys. Rev. Lett. 111, 150403 (2013).
## Appendix A Lattice Partition Function and the Annealed Phase Transition
The annihilation of the Ising domain wall at the boundary and its free
propagation through the bulk describe two distinct phases, which may be
accessed by tuning the dissipation strength, as described in detail in Sec.
II. Here, we make this connection precise by studying the lattice partition
function for the domain wall using the weights derived in Sec. II.1. We
consider the quantum circuit evolution shown schematically in Fig. 13a, where
each site (in blue) denotes the action of a two-site unitary gate on a qudit
chain, while dissipation (in orange) acts periodically on the boundary qudit.
Let $Z(T)$ denote the partition function for the domain wall propagating for a
time $T$, defined so that at the initial and final times the domain wall is
absent (i.e. it has been annihilated at the $x=0$ interface). This partition
sum may be calculated as follows. First, we define $Z_{a}(t)$ to be the
partition function when there is no domain wall for a time interval $t$ (it
has been annihilated), while $Z_{f}(t)$ is the partition function when the
domain wall is created at the $x=0$ interface, wanders, and first returns to
the interface after a time $t$, after which it is annihilated (the domain wall
is free). With these definitions, we observe that $Z({T})$ is given by summing
over all possible domain wall histories as
$\displaystyle Z(T)=Z_{a}(T)+Z_{f}(T)+\sum_{t<T}Z_{a}(t)Z_{f}(T-t)+\cdots$
(29)
where the ellipsis denotes all possible domain wall configurations in which,
at intermediate timesteps, the domain wall wanders away or is annihilated at
the interface.
It is convenient to consider the discrete Laplace transform of the partition
function
$\displaystyle z(w)\equiv\sum_{T\geq 0}w^{T}Z(T).$ (30)
The inverse of this transformation is given by
$\displaystyle Z(T)=\frac{1}{2\pi i}\oint_{\Gamma}dw\frac{z(w)}{w^{T+1}}$ (31)
where the contour $\Gamma$ encloses the origin in the complex $w$ plane. This
relation is easily verified by substituting Eq. (30). As a result, the
smallest real singularity of $z(w)$ – denoted $w_{*}$ – determines the
behavior of the partition function at long times. Equivalently, the free
energy density $f=-T^{-1}\log Z$ is given by
$\displaystyle f\overset{T\rightarrow\infty}{\sim}\log w_{*}$ (32)
The Laplace transform of $Z(T)$ is straightforward to evaluate, since each
term in the expansion (29) is a discrete convolution of products of $Z_{a}$
and $Z_{f}$. As a result, the Laplace transform of each term in this sum is
simply the product of the Laplace transforms of the appropriate factors of
$Z_{a}$ and $Z_{f}$. We thus find that
$\displaystyle z(w)$
$\displaystyle=\frac{z_{a}(w)+z_{f}(w)+2\,z_{a}(w)z_{f}(w)}{1-z_{a}(w)z_{f}(w)}$
(33)
with $z_{a}(w)$ and $z_{f}(w)$ defined as the Laplace transforms of $Z_{a}(t)$
and $Z_{f}(t)$, respectively.
Observe that $Z_{a}(t)=(1-p)^{2t}$ so that
$\displaystyle z_{a}(w)=\frac{w(1-p)^{2}}{1-w(1-p)^{2}}$ (34)
Similarly, we note that when $t\geq 2$
$\displaystyle
Z_{f}(t)=\frac{p(2-p)}{q}\left(\frac{q}{q^{2}+1}\right)^{2t-3}N_{2t-4}$ (35)
Here, $p(2-p)/q$ is the weight to create the Ising domain wall, as indicated
in Eq. (7). The domain wall is acted upon by $2t-3$ two-site unitary gates,
incurring a weight $q/(q^{2}+1)$ for the action of each gate. Finally,
$N_{2k}$ is the number of walks on the rotated square lattice – such as the
one shown in Fig. 13b – which start at the site closest to the boundary and
return to the same point after $2k$ steps, without touching the boundary.
This count of paths is easily determined to be
$\displaystyle N_{2k}=\binom{2k}{k}-\binom{2k}{k+1}.$ (40)
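As a brute-force check of this counting (a standard ballot-number/Catalan identity), one can enumerate the walks directly for small $k$:

```python
from math import comb
from itertools import product

def count_walks(k):
    # walks of 2k steps (+1/-1) starting one site from the boundary,
    # returning there, and never touching the boundary
    n = 0
    for steps in product((1, -1), repeat=2 * k):
        h, ok = 1, True
        for s in steps:
            h += s
            if h < 1:          # touched the boundary
                ok = False
                break
        if ok and h == 1:
            n += 1
    return n

for k in range(1, 8):
    assert count_walks(k) == comb(2 * k, k) - comb(2 * k, k + 1)
print("Eq. (40) verified for k = 1..7")
```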
Performing the Laplace transform thus yields
$\displaystyle
z_{f}(w)=\frac{p(2-p)}{2q}\frac{w(q^{2}+1)}{q}\left[1-\sqrt{1-\frac{w}{w_{1}(q)}}\right]$
(41)
which has a singularity when the argument of the square root vanishes at
$\displaystyle w_{1}(q)\equiv(q^{2}+1)^{2}/4q^{2}.$ (42)
We note that $z(w)$ is also singular at $w=w_{2}$ such that
$z_{a}(w_{2})z_{f}(w_{2})=1$. Finally, we note that while $z_{a}(w)$ contains
a pole at $w=1/(1-p)^{2}$, it is clear from (33) that this does not give rise
to a singularity in $z(w)$.
Figure 13: A depiction of the quantum circuit applied to the qudit chain is
shown in (a). Here, each blue vertex indicates the application of a two-site
unitary gate, while the orange sites indicate the periodic application of a
single-qudit depolarizing channel. The calculation of the corresponding Ising
partition sum can be performed with spin configurations living on bonds of the
square lattice, as in (b), which are naturally thought of as propagating in
the indicated “time” direction by the transfer matrix for the Ising magnet.
Shown is a contribution to $Z_{f}(t=5)$, where the domain wall is created by
the dissipation at the initial time and is annihilated four timesteps later.
The trajectory of the Ising domain wall can be thought of as a path on the
lattice, which starts from the first unitary gate acting on a pair of
anti-aligned spins and ends when the domain wall is annihilated.
When $p>p_{c}$, the smallest real singularity of $z(w)$ occurs at
$w=w_{1}(q)$, so that the free energy
$\displaystyle f=2\log\left(\frac{q^{2}+1}{2q}\right)\hskip
14.45377pt(p>p_{c})$ (43)
A phase transition occurs at $p=p_{c}$ when the two singularities merge
$w_{1}=w_{2}$, and for $p<p_{c}$ the singularity at $w_{*}=w_{2}$ determines
the free energy density. The phase transition therefore occurs when
$\displaystyle z_{f}(w_{1})z_{a}(w_{1})=1$ (44)
This equation may be solved numerically to obtain $p_{c}$ for any finite $q$.
The critical probability increases with increasing Hilbert space dimension
$q$. In the limit $q\rightarrow\infty$, we may analytically solve this
equation to find that $p_{c}$ approaches one as
$\displaystyle p_{c}=1-O(q^{-2})$ (45)
so that the phase transition is absent when the on-site Hilbert space
dimension is strictly infinite.
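A minimal numerical sketch of this solution, assuming the conventions of Eqs. (34), (41) and (42); note that at $w=w_{1}$ the square root in Eq. (41) vanishes, so $z_{f}(w_{1})$ takes a closed form:

```python
import numpy as np
from scipy.optimize import brentq

def p_c(q):
    w1 = (q**2 + 1)**2 / (4 * q**2)                               # Eq. (42)
    z_a = lambda p: w1 * (1 - p)**2 / (1 - w1 * (1 - p)**2)       # Eq. (34) at w = w1
    z_f = lambda p: p * (2 - p) / (2 * q) * w1 * (q**2 + 1) / q   # Eq. (41) at w = w1
    lo = (q - 1)**2 / (q**2 + 1) + 1e-12                          # z_a(w1) diverges at this p
    return brentq(lambda p: z_f(p) * z_a(p) - 1.0, lo, 1 - 1e-12) # Eq. (44)

for q in [2, 4, 8, 16, 32]:
    pc = p_c(q)
    # q^2 (1 - p_c) approaches a constant, consistent with 1 - p_c = O(q^-2)
    print(f"q={q:3d}  p_c={pc:.6f}  q^2(1-p_c)={(q**2) * (1 - pc):.3f}")
```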
Finally, we may study the singular part of the free energy near the transition
at $p=p_{c}$. Expanding the equation $z_{f}(w_{2})z_{a}(w_{2})=1$ for
$p=p_{c}-\delta p$ with $\delta p\ll p_{c}$ yields the result that the
singularity $w_{*}=w_{2}=w_{1}-\delta w$ where $\delta w\sim(\delta p)^{2}$.
As a result, the free energy difference vanishes when approaching the critical
point as
$\displaystyle\Delta f(p)\equiv f(p_{c})-f(p)\overset{p\rightarrow
p_{c}^{-}}{\sim}(p-p_{c})^{2}$ (46)
On general grounds, the singular part of the free energy density should vanish
as $\Delta f\sim 1/\xi_{\parallel}$ where $\xi_{\parallel}$ is the correlation
length along the time direction. This correlation length thus diverges as
$\xi_{\parallel}\sim(p-p_{c})^{-\nu_{\parallel}}$ with $\nu_{\parallel}=2$.
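This exponent can be checked from the same partition functions; the sketch below (for an illustrative $q=2$) tracks how the gap $w_{1}-w_{2}$ closes as $\delta p\rightarrow 0$:

```python
import numpy as np
from scipy.optimize import brentq

q = 2.0
w1 = (q**2 + 1)**2 / (4 * q**2)
z_a = lambda w, p: w * (1 - p)**2 / (1 - w * (1 - p)**2)
z_f = lambda w, p: (p * (2 - p) / (2 * q) * w * (q**2 + 1) / q
                    * (1 - np.sqrt(1 - w / w1)))
g = lambda w, p: z_a(w, p) * z_f(w, p) - 1.0

p_c = brentq(lambda p: g(w1, p), (q - 1)**2 / (q**2 + 1) + 1e-12, 1 - 1e-12)
for dp in [1e-2, 1e-3, 1e-4]:
    w2 = brentq(lambda w: g(w, p_c - dp), 1e-9, w1)   # singularity w2 < w1 below p_c
    # the ratio levels off to a constant, i.e. dw ~ (dp)^2 and nu_parallel = 2
    print(f"dp={dp:.0e}  (w1-w2)/dp^2 = {(w1 - w2) / dp**2:.3f}")
```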
Finally, we may determine the typical length of an excursion $\ell_{\perp}$
that the domain wall will make into the bulk of the quantum circuit, and how
this distance diverges as we approach the phase transition from the pinned
phase $p\leq p_{c}$. First, observe that the weight for the Ising domain wall
to make an excursion for a time $t$ is $Z_{f}(t)/Z(t)$. Then the typical
duration of an excursion is
$\displaystyle\tau=\frac{\displaystyle\sum_{t}t\,Z_{f}(t)/Z(t)}{\displaystyle\sum_{t}Z_{f}(t)/Z(t)}\sim\frac{\displaystyle\sum_{t}t\,w_{*}^{t}Z_{f}(t)}{\displaystyle\sum_{t}w_{*}^{t}Z_{f}(t)}=\frac{\partial\ln
z_{f}(w)}{\partial\ln w}\Big{|}_{w=w_{*}}$
where in the second expression, we have used the fact that
$Z(t)\overset{t\rightarrow\infty}{\sim}w_{*}^{-t}$. On approaching the
transition from the localized phase $p=p_{c}-\delta p$, the singularity
$w_{*}=w_{2}=w_{1}-\delta w$ with $\delta w\sim\delta p^{2}$, as derived
previously, which yields the result that $\tau\sim(p_{c}-p)^{-1}$ as
$p\rightarrow p_{c}^{-}$. Assuming a diffusive wandering of the domain wall,
the transverse distance covered by the domain wall diverges on approaching the
depinned phase as
$\displaystyle\ell_{\perp}\overset{p\rightarrow
p_{c}^{-}}{\sim}(p_{c}-p)^{-1/2}$ (47)
Approaching the phase transition, when $\ell_{\perp}\gg x_{0}$, the
probability that the domain wall has reached a point $y\geq x_{0}$ is
approximately $P(x_{0},t)=1-O(x_{0}/\ell_{\perp})$. Substituting this into Eq.
(13) yields the result that the annealed mutual information vanishes as
$I^{(\mathrm{ann})}_{A,R}\sim\ell_{\perp}^{-1}\sim(p_{c}-p)^{\beta}$ (with
$\beta\equiv 1/2$) when approaching the phase transition. This behavior, along
with the knowledge of $\nu_{\parallel}=2$, motivates the finite-size scaling
form for the annealed mutual information which we use in the main text,
$I^{(\mathrm{ann})}_{A,R}(T)=T^{-\beta/\nu}F(T^{1/\nu}(p-p_{c}))$.
## Appendix B Alternative random circuit protocols
To show that the phase transition in the mutual information persists
irrespective of the precise form of the boundary dissipation and scrambling
dynamics, here we introduce and examine four different protocols for the
random circuit. We consider the following two types of time evolution, each of
them with two different realizations of the boundary dissipation.
$\bullet$ Random boundary dissipation + maximal Clifford scrambling. In each
time step, the dissipation acts on the leftmost qubit with probability $p$.
Scrambling is provided by random Clifford gates arranged in a brickwork
structure. Therefore, the relative strength of the dissipation compared to the
efficiency of scrambling is tuned through the parameter $p$.
$\bullet$ Periodic boundary dissipation + sparse Clifford scrambling. The
dissipation acts on the leftmost qubit periodically, with periodicity $T_{\rm
period}$. The unitary gates providing the scrambling of information are
applied in a sparse brickwork structure, where each gate in the brickwork is a
random Clifford unitary with probability $p_{U}$, and the identity with
probability $1-p_{U}$. In this scenario, the relative strength of the
dissipation compared to the efficiency of scrambling is determined by two
parameters, $T_{\rm period}$ and $p_{U}$.
Figure 14: Coding transition induced by a single boundary for different
circuit protocols. Mutual information between the input and the output of the
circuit (a) as a function of dissipation strength $p$ for boundary dissipation
realized as a CNOT coupling to an ancilla qubit with maximal bulk Clifford
scrambling, (b)-(c) varying the strength of sparse bulk Clifford scrambling
$p_{U}$ with periodic boundary erasure channel (b), or periodic boundary CNOT
gate to an ancilla (c). All data are consistent with a coding transition
between a phase with partially protected information for weak dissipation /
strong enough scrambling, and a dissipative phase with all encoded information
lost. No pre-scrambling step was used for these plots.
As described in the main text, the Bell pair is encoded in the initial state
at the left boundary, optionally followed by a pre-scrambling step logarithmic
or linear in system size, depending on the type of phase transition that we
consider. We note that the pre-scrambling is realized by a full or sparse
brickwork of Clifford unitary gates, in the first and second types of
dynamics, respectively.
As mentioned above, we consider two different realizations of the boundary
dissipation.
$\bullet$ Boundary erasure channel. The dissipation acts by deleting the
information stored in the leftmost qubit.
$\bullet$ Coupling to an ancilla qubit. Here, we first couple the leftmost
qubit of the system to an ancilla qubit through a CNOT gate, and then trace
out the ancilla. In the stabilizer formalism, this operation results in
deleting all stabilizers containing a $Y$ or $Z$ Pauli operator at the left
end of the chain. To restore rotational invariance and obtain a smooth limit
$p_{U}\rightarrow 0$, for sparse Clifford scrambling we also act with a random
single site Clifford gate on the leftmost qubit before applying the CNOT gate.
In the main text we mainly focused on the case of random boundary dissipation
and maximal Clifford scrambling, with the dissipation realized as a boundary
erasure channel. We also briefly commented on the effect of a periodic
boundary noise, modifying the critical properties for linear pre-scrambling
compared to the random case. Below we provide supplementary numerical results
for the other protocols, showing a similar phase transition in the mutual
information.
We show the coding transition in the mutual information without pre-
scrambling, induced by a single boundary with aspect ratio $T/L<1$, in Fig. 14
for three different protocols. We cross the phase transition by tuning the
strength of dissipation $p$ in Fig. 14a, realized with a random CNOT coupling
between the boundary spin and an ancilla qubit. In contrast, in Fig. 14b and c
the tuning parameter is the strength of sparse bulk scrambling $p_{U}$, while
we apply a fixed strength periodic boundary dissipation, realized as an
erasure channel in Fig. 14b, and as a CNOT gate with an ancilla qubit in Fig.
14c. We recover the coding transition between a phase with partially protected
coherent information and a phase where all information is destroyed for all
protocols. Due to the difficulties in fitting critical exponents from finite
size data mentioned in the main text, we leave the detailed study of critical
properties for future work. In the cases with periodic boundary dissipation we
used $T_{\rm period}=5$ (b), and $T_{\rm period}=3$ (c).
|
# Enhanced temperature sensing by multi-mode coupling in an on-chip
microcavity system
###### Abstract
The microcavity is a promising sensing platform: any perturbation can disturb
its linewidth or cause resonance shift or splitting. However, the sensing
resolution is limited by the cavity’s optical quality factor and mode volume.
Here we propose and demonstrate, in an on-chip integrated microcavity system,
that the resolution of a self-referenced sensor can be enhanced by multi-mode
coupling. In experiments, the inter-mode coupling strength is carefully
optimized with a pulley waveguide, and a resolution improvement of nearly $3$
times is observed in the frequency domain. When experiencing a small
refractive index change tuned by temperature, the mode-coupled system shows a
$7.2$-times sensitivity enhancement over the uncoupled system on the same
chip, together with a very pronounced lineshape contrast ratio change that
serves as a sensitive reference for minor frequency shifts. This approach will
help design microcavity sensors with improved detection sensitivity and
resolution under limited fabrication precision.
###### keywords:
Micro-optical device, Optical sensing and sensors, Mode coupling, Integrated
optics, Resonant modes
Xueyi Wang‡ Tingge Yuan‡ Jiangwei Wu Yuping Chen* Xianfeng Chen
‡These authors contributed equally to this work.
X. Wang, T. Yuan, J. Wu, Prof. Y. Chen, Prof. X. Chen
State Key Laboratory of Advanced Optical Communication Systems and Networks
School of Physics and Astronomy
Shanghai Jiao Tong University
Shanghai 200240, China
Email<EMAIL_ADDRESS>
Prof. X. Chen
Shanghai Research Center for Quantum Sciences
Shanghai 201315, China
Prof. X. Chen
Collaborative Innovation Center of Light Manipulations and Applications
Shandong Normal University
Jinan 250358, China
## 1 Introduction
The optical microcavity, as one of the building blocks of photonic integrated
circuits, has enabled a variety of applications including nonlinear
optics[1-2], low-threshold lasers[3-4] and single-molecule detection[5-12],
owing to its small mode volume and high quality factor ($Q$ factor). It is
especially useful in label-free sensing[13-19] and environmental
monitoring[20-25], as a great supplement for medical and environmental
research. On the other hand, a decrease in the cavity’s mode volume increases
radiation losses, which are no longer negligible[26], causing a drop in its
$Q$ factor. Overcoming this limitation by introducing new principles into
microcavity systems has thus become urgent. There have been works implementing
microcavity lasers[27-28] to enhance light-matter interaction, or utilizing
opto-mechanical coupling[29], that boost the sensing resolution by orders of
magnitude.
Among these new solutions, one of the most accessible is introducing mode
coupling into the system[30], since it adds little extra fabrication or
experimental difficulty. When the two coupled modes are in the weak-coupling
regime[31], their coherent interaction can optimize the spectrum’s lineshape
for efficient sensing[32]. Within a single cavity, the coupling condition can
be satisfied by utilizing modes of different polarization in a micro-toroidal
cavity[33], or by applying UV-curable adhesive onto a micro-bottle resonator
to create a lossy mode[34], which achieved a 4.3-times amplification of the
refractive-index-change sensitivity through its coupling with another discrete
mode. Meanwhile, the micro-ring resonator is an ideal platform for on-chip
integration: with something as simple as a built-in Fabry–Pérot (F-P) cavity
on its coupled waveguide[35], multi-mode coupling between cavities can be
achieved. Such a structure was first manufactured on a polymer platform[36],
which increased the sensitivity to solution refractive index through its sharp
resonance slopes, and later on a silicon-on-insulator chip[37], which realized
a tunable lineshape fitting a variety of applications. Recently, mode coupling
has also been controlled by scatterers to operate at an exceptional point,
showing the possibility of unprecedented sensitivity[38]. Thus mode coupling
can be a handy improvement to the already widely studied microcavity sensors.
In this work, we propose a design method to improve the resolution of
microcavity sensors through multi-mode coupling in a compact, on-chip
integrated microcavity system. Based on a waveguide-to-micro-racetrack
structure supporting three resonance modes simultaneously, and a pulley
coupler with careful geometrical optimization, our design allows efficient and
distinct inter-mode coupling in the $1520$ nm to $1555$ nm band for both
racetrack quasi-TE and quasi-TM modes, leading to frequency shifts and sharp
lineshapes. This helps to distinguish the two modes during self-referenced
sensing and breaks the sensitivity’s dependence on the $Q$ factor that
microcavity sensors always suffer from. In the frequency domain we achieved a
3-times enhancement in resolution and a sensitivity of 44 $\rm
pm\,^{\circ}C^{-1}$, 7.2 times higher than that of the uncoupled structure on
the same chip, together with a 24.1-times enhancement of the lineshape
contrast ratio (LCR), which serves as a sensitive reference for minor
perturbations. Our proposed approach will benefit applications in optical
sensors that require integration and high sensitivity for probing weak signals
under limited fabrication precision.
## 2 Theory
Conventionally, when two modes are weakly coupled, for instance one discrete
mode and one continuous mode, the discrete mode experiences a frequency shift
and linewidth sharpening determined by their wavelength detuning and coupling
strength[31]. When the coupling involves two discrete modes simultaneously,
they will generally experience different shifts, because they possess distinct
coupling strengths and eigenfrequencies. Thus, by controlling the composition
of the three modes, we can manipulate their relative frequency difference
after coupling, which in certain scenarios helps to distinguish two discrete
modes with higher resolution. Here we first introduce the theory and how it
applies to our on-chip system.
In the waveguide micro-ring resonator (WGMRR) system, three modes co-exist, as
in Figure 1a: one waveguide (WG) mode reflected by built-in gratings, and two
micro-ring resonance (MRR) modes with quasi-TE and quasi-TM polarization. They
possess different coupling efficiencies $\kappa_{j}$, internal losses
$\gamma_{j}$ and resonant frequencies $\omega_{j}$ ($j=0$ for the WG mode and
$j=1,2$ for the MRR quasi-TE and quasi-TM modes, respectively, as shown in
Figure 1a). The system Hamiltonian is
$H_{SYS}=\sum_{j=0,1,2}\hbar\omega_{j}a_{j}^{{\dagger}}a_{j}+i\hbar\kappa_{1}(a_{0}^{{\dagger}}a_{1}-a_{1}^{{\dagger}}a_{0})+i\hbar\kappa_{2}(a_{0}^{{\dagger}}a_{2}-a_{2}^{{\dagger}}a_{0})-ig\hbar(a_{1}^{{\dagger}}a_{2}+a_{2}^{{\dagger}}a_{1}).$
(1)
Figure 1: a) Scheme of the WGMRR system with the WG mode in blue and the MRR
quasi-TE and quasi-TM modes in red and black, respectively. The insets are
calculated with a conformal transformation[39], indicating the polarization of
$\rm TE_{00}$ and $\rm TM_{00}$ in straight and curved waveguides (see also
Supporting Information section 1.1). Frequency shift b) and linewidth
sharpening c) for different MRR modes caused by the coupling; insets are the
transmission spectra of the quasi-TE and quasi-TM modes after coupling.
Here $a_{j}^{{\dagger}}$ $(a_{j})$ are photon creation (annihilation)
operators. Owing to the oblique side walls of the ridge waveguide and the
birefringence of X-cut LN (lithium niobate), the quasi-TE (TM) modes are not
perfectly parallel (perpendicular) to the substrate plane (see the inset of
Figure 1a and Supporting Information section 1.1), which causes them to couple
with a coefficient $g$[40]. This also enables a WG $\rm TE_{00}$ photon
$a_{0}$ to generate the MRR TM mode $a_{2}$ of different polarization.
Similarly, when $a_{2}$ couples back into the waveguide it is projected onto
the TE polarization, leading to its coherent interaction with the $a_{0}$
light.
Under the first Markov approximation
$\kappa_{0}^{2}(\omega)=\kappa_{0}/2\pi$[41], we arrive at the Langevin
equations of motion,
$\left(\begin{array}[]{c}\dot{a_{0}}\\\ \dot{a_{1}}\\\
\dot{a_{2}}\end{array}\right)=\left(\begin{array}[]{ccc}-i\omega_{0}-\frac{\kappa_{0}+\gamma_{0}}{2}&\kappa_{1}&\kappa_{2}\\\
-\kappa_{1}&-i\omega_{1}-\frac{\kappa_{1}+\gamma_{1}}{2}&-g\\\
-\kappa_{2}&-g&-i\omega_{2}-\frac{\kappa_{2}+\gamma_{2}}{2}\end{array}\right)\left(\begin{array}[]{c}a_{0}\\\
a_{1}\\\
a_{2}\end{array}\right)+\left(\begin{array}[]{c}\sqrt{\kappa_{0}}a_{IN}\\\
0\\\ 0\end{array}\right),$ (2)
$a_{IN}=\sqrt{2P_{IN}\kappa_{0}/\hbar\omega_{0}}$ is the input amplitude in
the TE polarization, and so is $a_{0}$. We then solve Equation (2) in the
frequency domain,
$(\omega-\widetilde{\omega}_{0})a_{0}=-i\kappa_{1}a_{1}-i\kappa_{2}a_{2}-i\sqrt{\kappa_{0}}a_{IN}$
(3) $(\omega-\widetilde{\omega}_{1})a_{1}=iga_{2}+i\kappa_{1}a_{0},$ (4)
$(\omega-\widetilde{\omega}_{2})a_{2}=iga_{1}+i\kappa_{2}a_{0},$ (5)
in which
$\widetilde{\omega}_{j}=\omega_{j}-i\frac{\kappa_{j}+\gamma_{j}}{2}$ $(j=0,1,2)$
are the complex eigenfrequencies. Equation (3) implies that the output field is
the coherent superposition of $a_{1}$ and $a_{2}$ onto the $a_{0}$ mode, with
coupling constants $\kappa_{1}$ and $\kappa_{2}$ respectively. Since the inter-
modal coupling coefficient $g$ within the MRR is relatively small, the output
amplitude $a_{OUT}=\sqrt{\kappa_{0}}a_{0}-a_{IN}$ is approximately,
$a_{OUT}=\kappa_{0}\xi a_{IN}-a_{IN},$ (6)
$\xi=\frac{i(\omega-\widetilde{\omega}_{1})(\omega-\widetilde{\omega}_{2})}{(\omega-\widetilde{\omega}_{0})(\omega-\widetilde{\omega}_{1})(\omega-\widetilde{\omega}_{2})-\kappa_{1}^{2}(\omega-\widetilde{\omega}_{2})-\kappa_{2}^{2}(\omega-\widetilde{\omega}_{1})},$
(7)
the transmission spectrum is then,
$T=\left|\frac{a_{OUT}}{a_{IN}}\right|^{2}\approx 1-2\kappa_{0}\,\mathrm{Re}(\xi),$ (8)
where the $|\kappa_{0}\xi|^{2}$ term is neglected. Setting the
denominator of $\xi$ to zero yields (details in Supporting Information section 2),
$\widetilde{\omega}_{j\pm}=(\widetilde{\omega}_{0}+\widetilde{\omega}_{j}\pm\delta_{j})/2,\delta_{j}^{2}=(\widetilde{\omega}_{0}-\widetilde{\omega}_{j})^{2}+4\kappa_{j}^{2},$
(9)
the eigenfrequencies of the MRR are shifted by the complex amounts
$\widetilde{\Delta}_{\pm
j}=\widetilde{\omega}_{j\pm}-\widetilde{\omega}_{j}=(\widetilde{\omega}_{0}-\widetilde{\omega}_{j}\pm\delta_{j})/2$,
$(+:\omega_{0}-\omega_{j}>0,\ -:\omega_{0}-\omega_{j}<0)$, in which
$\Delta_{\omega}=Re(\widetilde{\Delta}_{\pm j})$ stands for the shift in
frequency and $\Delta_{\kappa+\gamma}=-Im(\widetilde{\Delta}_{\pm j})$ for the
change in linewidth. Consequently, MRR modes experience a red (blue) shift if
they are blue (red) detuned from $\omega_{0}$, and a linewidth reduction either
way, as shown in Figure 1b and c. The quasi-TE mode has the greater $\kappa$,
which leads to larger $|\Delta_{\omega}|$ and $|\Delta_{\kappa+\gamma}|$ for
$a_{1}$.
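As an illustration of Eqs. (6)–(9), the following minimal sketch computes the three-mode transmission spectrum and the dressed eigenfrequencies; all rates and detunings are illustrative placeholders (in units of the WG linewidth), not fitted device parameters:

```python
import numpy as np

w0, w1m, w2m = 0.0, 0.6, -0.8      # bare WG, quasi-TE, quasi-TM frequencies (illustrative)
k0, k1, k2 = 1.0, 0.30, 0.08       # external couplings; kappa_TE >> kappa_TM
g0, g1, g2 = 0.05, 0.02, 0.02      # intrinsic losses

wt0 = w0 - 0.5j * (k0 + g0)        # complex eigenfrequencies
wt1 = w1m - 0.5j * (k1 + g1)
wt2 = w2m - 0.5j * (k2 + g2)

w = np.linspace(-3, 3, 2001)
xi = 1j * (w - wt1) * (w - wt2) / (
    (w - wt0) * (w - wt1) * (w - wt2)
    - k1**2 * (w - wt2) - k2**2 * (w - wt1))          # Eq. (7)
T = np.abs(k0 * xi - 1) ** 2       # exact form of Eq. (6); Eq. (8) linearizes this
print(f"min transmission = {T.min():.3f} at detuning {w[T.argmin()]:+.3f}")

# dressed eigenfrequencies, Eq. (9): pick the root continuous with each MRR mode
for name, wtj, kj in [("quasi-TE", wt1, k1), ("quasi-TM", wt2, k2)]:
    d = np.sqrt((wt0 - wtj) ** 2 + 4 * kj ** 2)
    root = min([(wt0 + wtj + d) / 2, (wt0 + wtj - d) / 2], key=lambda r: abs(r - wtj))
    print(f"{name}: frequency shift {(root - wtj).real:+.4f}, "
          f"linewidth change {-2 * (root - wtj).imag:+.4f}")
```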
Consequently, if two MRR modes are located across a continuum-mode
eigenfrequency, they are ”pulled closer” as their frequencies shift towards
each other,
$|\Delta^{\prime}_{12}|=|\Delta_{12}|-\left[|Re(\delta_{1})|+|Re(\delta_{2})|\right]/2$,
as in case I of Figure 2a. Or they can be ”pushed apart” if coupled to two
different continuum modes, as in case II of Figure 2a, where
$|\Delta^{\prime}_{12}|=|\Delta_{12}|+\left[|Re(\delta_{1})|+|Re(\delta_{2})|\right]/2$.
When their frequencies differ even less, lying within one side of the
background’s FSR (free spectral range), they are ”pushed apart” only when the
mode with the larger $\kappa$ is located closer to $\omega_{0}$; only then can
$|\Delta^{\prime}_{12}|=|\Delta_{12}|+\left[|Re(\delta_{1})|-|Re(\delta_{2})|\right]/2$
be enlarged (case III in Figure 2a, where the width of the modes’ stripes
indicates their relative coupling strength), and they are ”pulled closer” in
the opposite case. In the circumstances where multi-mode coupling leads to
”pushing apart”, it enhances the observation resolution of two adjacent MRR
modes, which is ideal for sensing applications or mode measurements. In our
experiments below, the transmission spectra of the WGMRR and WG (in black and
blue) in Figure 2b, c and d clearly show strongly asymmetric dips and peaks,
indicating multi-mode coupling in the above scenarios (I to III).
Figure 2: a) Schematic of the proposed mode coupling involving three modes
arranged in 3 fashions, leading to different mode resolution: MRR modes
located I across a background eigenmode are ”pulled closer”; II across
different background eigenmodes, or III within one side of a background mode
with the larger-$\kappa$ mode closer to $\omega_{0}$, are ”pushed apart”; a
wider stripe indicates a stronger coupling strength of the eigenmode to the
background mode. Transmission spectra of the WGMRR and WG, in black and blue,
satisfying the above schemes I b), II c), and III d), respectively.
## 3 Results
### 3.1 Experimental setup
Figure 3: a) Experimental setup. TL: tunable laser, PC: polarization
controller, PD: photodetector, OSC: oscilloscope, TEC: thermoelectric cooler;
electrical wires and optical fibers in black and yellow, respectively.
Microscopic pictures of the single waveguide b), gratings c), coupling region
d), racetrack e) and ring f) resonators.
The experimental setup is shown in Figure 3a. It involves a tunable infrared
laser source (New Focus TLB-6728), which is adjusted by a PC (polarization
controller) and then coupled into the chip through built-in gratings on the
waveguide. After the output port, a PD (photodetector) collects the
transmission signal, which is then displayed on the OSC (oscilloscope).
Meanwhile, the chip is loaded on a TEC (thermoelectric cooler) stage with a
precision of up to $0.01\,^{\circ}$C. With this setup, any multi-mode coupling
effect is read off from the lineshape of the transmission spectrum.
To achieve multi-mode coupling and compare its effects, we integrate a
waveguide, a waveguide-to-micro-ring structure and a waveguide-to-micro-
racetrack structure, all manufactured on a single X-cut LNOI (lithium niobate
on insulator) chip with standard electron beam lithography and plasma reactive
etching (see fabrication details in Supporting Information section 3). As
marked in the insets of Figure 1a, the film thickness is $h=0.6$ $\mu m$,
while the top width of the waveguide is $w=1$ $\mu m$, the waveguide thickness
$t=0.38$ $\mu m$ and the side wall angle $\theta=60^{\circ}$, which are ideal
to support only the fundamental modes. The radius of both the micro-racetrack
and the micro-ring is the same, $R=129.03$ $\mu m$, while the racetrack
contains an extra straight waveguide of $82.54$ $\mu m$. At the coupling
region (Figure 3d), the waveguide width shrinks to $0.8$ $\mu m$ with a gap of
$G=0.6$ $\rm\mu m$ to achieve sufficient evanescent field coupling. Next, we
calculate this coupling strength and analyse its effect in detail.
### 3.2 Multi-mode coupling in frequency domain
From the system Hamiltonian we can tell that the inter-modal coupling
strength is determined by the mode coupling coefficient $\kappa$ at the pulley
waveguide, which can be calculated with temporal perturbation theory as[42]
(details in Supporting Information section 1.2),
$\kappa=\int_{-\psi_{0}}^{\psi_{0}}\left[\frac{i\omega}{4}\int_{0}^{R+G}\int_{0}^{t}\left(\epsilon-\epsilon_{0}\right)\mathbf{E}_{WG}\cdot\mathbf{E}_{MRR}rdrdz\right]e^{i\varphi}d\psi,$
(10)
in which $\mathbf{E}_{WG}\cdot\mathbf{E}_{MRR}$ corresponds to the mode overlap of the normalized WG and MRR fields; the permittivity $\epsilon$ can be obtained from the electric field and is reflected in the mode's effective refractive index ($n_{eff}$) in Figure 4a and b. Here $\varphi=k_{0}n_{WG}R_{WG}\psi-m\psi$, where $m$ is the radial mode number of the resonance mode inside the cavity; $\varphi$ reflects the phase mismatch between the waveguide and the cavity mode and has a major impact on $\kappa$. Meanwhile, X-cut LN is an anisotropic material: $n_{eff}$ shifts with the azimuth angle $\psi$, affecting the phase-matching condition along the way, which introduces a change in $\kappa$ proportional to $\mathrm{sinc}(\varphi)$; see Figure 4c, where $\kappa_{TE}$ increases and $\kappa_{TM}$ decreases as $\psi$ grows. This offers another degree of freedom to manipulate the coupling strength between different polarizations through the angle and length of the pulley coupling scheme. Also, in the frequency domain (Figure 4d), $\kappa$ degrades significantly at longer input wavelengths due to non-ideal phase matching and mode overlapping; in the following experiments we indeed observed weaker mode coupling at longer wavelengths, as shown in Figure 4g. This is a reminder that the pulley waveguide needs to be carefully designed to achieve sufficient coupling at the target working waveband. In our design, $\kappa$ is set to generate sufficient coupling for both polarizations across the C-band while differing by nearly one order of magnitude between the two, so that they appear with distinct mode shifts during coupling and can be distinguished.
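To make the evaluation of Equation (10) concrete, the following is a minimal numerical sketch (not our simulation code): it assumes the bracketed cross-section integral has been precomputed into a function `overlap(psi)`, and carries out the remaining azimuthal integral against the phase factor $e^{i\varphi}$. All parameter values are illustrative placeholders.

```python
import numpy as np
from scipy.integrate import trapezoid

k0 = 2 * np.pi / 1.55e-6      # vacuum wavenumber at 1550 nm
n_wg = 1.9                    # assumed effective index of the WG mode
R_wg = 129.63e-6              # assumed pulley radius R + G
m = 1155                      # assumed radial mode number of the cavity mode
psi0 = np.deg2rad(15.0)       # half-angle of the pulley coupling region

def overlap(psi):
    """Placeholder for the bracketed cross-section integral in Eq. (10),
    (i*omega/4) * int (eps - eps0) E_WG . E_MRR r dr dz; in practice it is
    tabulated from mode-solver fields at each azimuth angle psi."""
    return np.full_like(psi, 1.0e3 + 0.0j, dtype=complex)

psi = np.linspace(-psi0, psi0, 4001)
varphi = (k0 * n_wg * R_wg - m) * psi          # phase mismatch of Eq. (10)
kappa = trapezoid(overlap(psi) * np.exp(1j * varphi), psi)
print(abs(kappa))
```

The oscillatory factor $e^{i\varphi}$ is exactly what produces the $\mathrm{sinc}(\varphi)$-like suppression discussed above when the mismatch grows.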
Figure 4: The simulated effective refractive index ($n_{eff}$) at different angles a) on the X-cut LN chip and across resonance wavelength b). Coupling coefficient $\lg(\kappa)$ and $\mathrm{sinc}(\varphi)$ at different angles c) on the X-cut LN chip and across resonance wavelength d). e) Transmission spectrum of the WG-micro-racetrack and the sole waveguide (in blue) when there is multi-mode coupling. g) Transmission spectrum of the WG-micro-ring and the sole waveguide (in blue) when there is no multi-mode coupling. Measured wavelength difference $|\Delta_{12}|$, average FWHM $(w_{1}+w_{2})/2$, and their ratio $2|\Delta_{12}|/(w_{1}+w_{2})$, quantifying the degree of separation between the TM mode and its closest TE mode in the frequency domain for the coupled f) and uncoupled h) system; the red circle marks where the inter-modal separation is significantly enlarged by coupling.
With the above system we first obtain the transmission spectrum of the micro-racetrack (Figure 4e); the cavity modes exhibit a strongly asymmetric lineshape due to the coupling with the WG modes (the transmission spectrum of the waveguide is in blue). The effective $Q$ factors, calculated by applying Lorentzian fitting to the dips, reach $Q_{E}=11.07k$ and $Q_{M}=14.09k$ for TE and TM respectively, with calculated coupling $Q$ factors $Q_{CE}=17.10k$ and $Q_{CM}=62.31k$. These yield coupling coefficients $\kappa_{E}=2.67\times 10^{5}$ and $\kappa_{M}=1.40\times 10^{5}$, slightly larger than the simulation in Figure 4d, presumably because the coupling region at $\psi>15^{\circ}$ has a larger gap but still allows evanescent field coupling. Compared to their intrinsic $Q$ factors $Q_{0E}=27.52k$ and $Q_{0M}=18.21k$, the TE modes are over-coupled, leading to strong interference by the WG modes; theoretically this causes a mode shift of up to $72.52$ kHz, as shown in Figure 1b. Experimentally, due to the different FSRs of the TE and TM modes, the wavelength difference between the two closest modes $|\Delta_{12}|$ first increases and then decreases to zero over a certain wavelength range, as in the insets of Figure 4f and h. In the coupled system, $|\Delta_{12}|$ and the modes' FWHM (full width at half maximum, $w_{1}$ and $w_{2}$ for TE and TM respectively) are tuned by the coupling strength and the relative background phase, as in Figure 4f, determined by $\widetilde{\omega}_{0}-\widetilde{\omega}_{1}$ according to Equation (9). The measured degree of mode separation $\frac{|\Delta_{12}|}{(w_{1}+w_{2})/2}$ is enhanced for TM modes with radial mode numbers $1153$ to $1158$ (marked in red in Figure 4f), satisfying either scenario II or III from Figure 2a, and is nearly 3 times larger than that of their siblings (mode numbers $1153$ to $1163$). Meanwhile, the uncoupled system with the micro-ring resonator (Figure 3f) has loaded $Q$ factors $Q_{E}^{\prime}=22.61k$ and $Q_{M}^{\prime}=10.76k$, with coupling $Q$ factors $Q_{CE}^{\prime}=137.37k$ and $Q_{CM}^{\prime}=203.82k$, i.e., it is significantly under-coupled. Calculated from its spectrum (Figure 4g), the mode separation $\frac{|\Delta_{12}|}{(w_{1}+w_{2})/2}$ is proportional to $|\Delta_{12}|$ despite the fluctuation in FWHM caused by the coupling depth, as in Figure 4h, showing no sign of resolution enhancement or inter-mode coupling. Next we test how well both systems detect minor temperature shifts.
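As an illustration of how $|\Delta_{12}|$ and the FWHMs enter the separation metric, the following is a minimal fitting sketch: each resonance dip is fitted with a Lorentzian model to extract its center and FWHM, from which $2|\Delta_{12}|/(w_{1}+w_{2})$ is formed. The wavelength array, transmission trace, and initial guesses are assumed inputs, not data from this work.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_dip(lam, lam0, fwhm, depth, base):
    # Transmission dip centered at lam0 with full width fwhm at half depth.
    return base - depth / (1.0 + ((lam - lam0) / (fwhm / 2.0)) ** 2)

def fit_dip(lam, trans, guess):
    popt, _ = curve_fit(lorentzian_dip, lam, trans, p0=guess)
    return popt[0], abs(popt[1])      # center wavelength, FWHM

def separation_metric(lam, trans, guess_te, guess_tm):
    lam1, w1 = fit_dip(lam, trans, guess_te)   # closest TE mode
    lam2, w2 = fit_dip(lam, trans, guess_tm)   # TM mode
    return 2.0 * abs(lam1 - lam2) / (w1 + w2)
```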
### 3.3 Temperature sensing enhanced by multi-mode coupling
Figure 5: a) Transmission spectrum of the WGMRR with multi-mode coupling leading to "pushing apart". b) Transmission spectrum of the WGMRR without mode coupling, obtained by adjusting the input polarization and wavelength. c) Measured wavelength separation $|\Delta_{12}|$ of a) and b) (dashed) at different temperatures. d) Calculated lineshape contrast ratio $\frac{(I_{1}-I_{0})+(I_{2}-I_{0})}{I_{1}+I_{2}}$ of a) and b) (dashed); under multi-mode coupling the lineshapes experience a contrast shift $24.1$ times greater.
In previous works utilizing double-mode coupling, such as Fano resonance, for sensing purposes, apart from the sharp asymmetric lineshape that reduces the effective mode linewidth, the two modes are self-referenced, so the system accuracy is no longer limited by the experimental equipment [42]. According to the above analysis, the MRR quasi-TE and TM modes naturally form a self-referenced sensing system, while coupling with the WG mode further improves its capacity. Here, to compare the sensitivity to weak perturbations of a mode-coupled and an uncoupled system, we keep the laser source sweeping in wavelength and slightly adjust the stage's temperature control, which consequently tunes the $n_{eff}$ of the MRR. At around $1550$ nm the micro-racetrack forms an EIT-like spectrum (Figure 5a); the three groups of modes experience different shifts due to their coupled background phase, and the gap between the two closest modes $|\Delta_{12}|$ shifts by as much as $44$ $\rm pm\,^{\circ}C^{-1}$, which is $7.2$ times larger than in the uncoupled system of Figure 5b, which has very similar mode gaps in the first place, as shown in Figure 5c.
On the other hand, a mode-coupled system possesses a lineshape sensitive to its background phase [43]. In our setup the background modes have a linewidth of $0.23$ nm and an FSR of $0.45$ nm (and the background is also insensitive to thermal changes, see Supporting Information section 4), so even a minor shift at the $\sim\rm pm$ level leads to observable changes in the spectrum lineshape. The spectrum experiences a change in its lineshape contrast ratio (LCR), $\left(\frac{(I_{1}-I_{0})+(I_{2}-I_{0})}{I_{1}+I_{2}}\right)$, as large as $6.46\times 10^{-3}$ $\rm pm^{-1}$, while the corresponding uncoupled modes change by only $0.27\times 10^{-3}$ $\rm pm^{-1}$, as in Figure 5d. There is thus a maximum $24.1$-fold enhancement compared with the uncoupled spectrum at a similar temperature, serving as another crucial criterion for detecting minor perturbations of the system's $n_{eff}$.
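For completeness, a one-line sketch of the LCR defined above, with $I_{1}$, $I_{2}$ the two dip levels and $I_{0}$ the inter-dip background (all hypothetical inputs):

```python
def lineshape_contrast_ratio(i1, i2, i0):
    # LCR of Figure 5d: dip levels i1, i2 referenced to the background i0.
    return ((i1 - i0) + (i2 - i0)) / (i1 + i2)
```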
It is still worth noting that our design does not reach the best performance among microcavity thermal sensors. Apart from the fact that our on-chip microcavities were built large in the first place to compensate for the WG mode's FSR, the pulley waveguide was designed to reach the over-coupling condition, which leads to a decrease in the MRR modes' $Q$ factor and coupling depth. Taking this information into consideration, a further improvement should utilize the coupling of multiple high-$Q$ modes, either by connecting several micro-resonators [30] or by optimizing cavities that support a number of resonance modes.
## 4 Conclusion
In this paper, we have analysed in depth the dimensions of multi-mode coupling, revealing the intrinsic connection between frequency, $Q$-factor enhancement, and mode composition. Based on theoretical discussions and experiments demonstrated with an on-chip integrated WGMRR, we have achieved higher resolution in the frequency domain and improved sensitivity as a self-referenced sensor, with the micro-racetrack's quasi-TE and TM modes coupled to the waveguide resonance mode at the same time. Here we have confined our discussion of mode-coupled sensors to the fundamental case, practiced between discrete and continuum modes; it is expected that a natural extension to more sophisticated configurations, with several high-$Q$ modes or even multiple exceptional points, would reach much higher magnitudes of enhancement for microcavity sensors without over-investing in manufacturing techniques. Though our design was practiced with an LNOI micro-racetrack, we believe that this method can be applied to any cavity-based sensor or other material platform with great integration capability.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the
author.
Acknowledgements
The authors would like to acknowledge support from National Natural Science
Foundation of China (Grant Nos. 12134009) and SJTU (No. 21X010200828).
Conflict of Interest
The authors declare no conflict of interest.
Data Availability Statement
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Keywords
Micro-optical device, Optical sensing and sensors, Mode coupling, Integrated
optics, Resonant modes
References
[1] T. J. Kippenberg, S. M. Spillane, K. J. Vahala, Phys. Rev. Lett. 2004, 93, 083904.
[2] X. Ye, S. Liu, Y. Chen, Y. Zheng, X. Chen, Opt. Lett. 2020, 45, 2 523.
[3] Y. Liu, X. Yan, J. Wu, B. Zhu, Y. Chen, X. Chen, Sci. China Phys. Mech. Astron. 2021, 64, 234262.
[4] X. Liu, X. Yan, Y. Liu, H. Li, Y. Chen, X. Chen, Opt. Lett. 2021, 46, 21
5505.
[5] F. Vollmer, L. Yang, Nanophotonics 2012, 1, 3-4 267.
[6] Y. Zhi, X.-C. Yu, Q. Gong, L. Yang, Y.-F. Xiao, Advanced Materials 2017, 29, 12 1604920.
[7] X. Jiang, A. Qavi, S. Huang, L. Yang, Matter 2020, 3, 2 371.
[8] M. R. Foreman, J. D. Swaim, F. Vollmer, Adv. Opt. Photon. 2015, 7, 2 168.
[9] S.-J. Tang, M. Zhang, J. Sun, J.-W. Meng, X. Xiong, Q. Gong, D. Jin, Q.-F.
Yang, Y.-F. Xiao, Nature Photonics 2023, 1–6.
[10] F. Vollmer, S. Arnold, D. Keng, Proceedings of the National Academy of Sciences 2008, 105, 52 20701.
[11] J. Zhu, S. K. Ozdemir, Y.-F. Xiao, L. Li, L. He, D.-R. Chen, L. Yang, Nature Photonics 2010, 4, 1 46.
[12] T. Lu, H. Lee, T. Chen, S. Herchak, J.-H. Kim, S. E. Fraser, R. C.
Flagan, K. Vahala, Proceedings of the National Academy of Sciences 2011, 108,
15 5976.
[13] S. Frustaci, F. Vollmer, Current Opinion in Chemical Biology 2019, 51 66,
chemical Genetics and Epigenetics • Molecular Imaging.
[14] F. Vollmer, S. Arnold, D. Keng, Proceedings of the National Academy of Sciences 2008, 105, 52 20701.
[15] F. Vollmer, S. Arnold, Nature Methods 2008, 5, 7 591.
[16] R. W. Boyd, J. E. Heebner, Appl. Opt. 2001, 40, 31 5742.
[17] F. Vollmer, D. Braun, A. Libchaber, M. Khoshsima, I. Teraoka, S. Arnold,
Applied Physics Letters 2002, 80, 21 4057.
[18] W. Kim, S. K. Ozdemir, J. Zhu, F. Monifi, C. Coban, L. Yang, Opt. Express
2012, 20, 28 29426.
[19] O. Gaathon, J. Culic-Viskota, M. Mihnev, I. Teraoka, S. Arnold, Applied
Physics Letters 2006, 89, 22 223901.
[20] A. M. Armani, K. J. Vahala, Opt. Lett. 2006, 31, 12 1896.
[21] W. Kim, S. K. Ozdemir, J. Zhu, L. He, L. Yang, Applied Physics
Letters 2010, 97, 7 071111.
[22] Q. Lu, X. Chen, L. Fu, S. Xie, X. Wu, Nanomaterials 2019, 9, 3.
[23] X. Xu, W. Chen, G. Zhao, Y. Li, C. Lu, L. Yang, Light: Science and
Applications 2018, 7, 162.
[24] C.-H. Dong, L. He, Y.-F. Xiao, V. R. Gaddam, S. K. Ozdemir, Z.-F. Han,
G.-C. Guo, L. Yang, Applied Physics Letters 2009, 94, 23 231119.
[25] J. Liao, L. Yang, Light: Science and Applications 2021, 10, 1 32.
[26] C. Ciminelli, F. Dell’Olio, D. Conteduca, C. Campanella, M. Armenise,
Optics & Laser Technology 2014, 59, 60.
[27] L. He, S. K. Ozdemir, J. Zhu, W. Kim, L. Yang, Nature
Nanotechnology 2011, 6, 7 428.
[28] S. K. Ozdemir, J. Zhu, X. Yang, B. Peng, H. Yilmaz, L. He, F.
Monifi, S. H. Huang, G. L. Long, L. Yang, Proceedings of the National Academy
of Sciences 2014, 111, 37 E3836.
[29] W. Yu, W. C. Jiang, Q. Lin, T. Lu, Nature Communications 2016, 7, 1
12311.
[30] Y.-F. Xiao, V. Gaddam, L. Yang, Opt. Express 2008, 16, 17 12538.
[31] B. Peng, S. K. Ozdemir, W. Chen, F. Nori, L. Yang, Nature
Communications 2014, 5, 1 5082.
[32] C.-M. Chang, O. Solgaard, Opt. Express 2013, 21, 22 27209.
[33] B.-B. Li, Y.-F. Xiao, C.-L. Zou, Y.-C. Liu, X.-F. Jiang, Y.-L. Chen, Y.
Li, Q. Gong, Applied Physics Letters 2011, 98, 2, 021116.
[34] J. Liao, X. Wu, L. Liu, L. Xu, Opt. Express 2016, 24, 8 8574.
[35] S. Fan, Applied Physics Letters 2002, 80, 6 908.
[36] C.-Y. Chao, L. J. Guo, Applied Physics Letters 2003, 83, 8 1527.
[37] L. Gu, H. Fang, J. Li, L. Fang, S. J. Chua, J. Zhao, X. Gan,
Nanophotonics 2019, 8, 5 841.
[38] W. Chen, S. K. Ozdemir, G. Zhao, J. Wiersig, L. Yang, Nature
2017, 548, 7666 192.
[39] M. Heiblum, J. Harris, IEEE Journal of Quantum Electronics 1975, 11, 2
75.
[40] L. Cortes-Herrera, X. He, J. Cardenas, G. P. Agrawal, Phys. Rev. A 2021,
103 063517.
[41] C. Gardiner, P. Zoller, Quantum noise: a handbook of Markovian and non-
Markovian quantum stochastic methods with applications to quantum optics,
Springer Science and Business Media, 2004.
[42] S.-L. Chuang, Journal of Lightwave Technology 1987, 5, 1 5.
[43] U. Fano, Phys. Rev. 1961, 124 1866.
# A discrete formulation for three-dimensional winding number
Ken Shiozaki Center for Gravitational Physics and Quantum Information, Yukawa
Institute for Theoretical Physics, Kyoto University, Kyoto 606-8502, Japan
###### Abstract
For a smooth map $g\colon X\to U(N)$, where $X$ is a three-dimensional,
oriented, and closed manifold, the winding number or the map’s degree is
defined by
$W_{3}=\frac{1}{24\pi^{2}}\int_{X}\mathrm{Tr}\left[(g^{-1}dg)^{3}\right]$. We
introduce a method to compute $W_{3}$ using a discrete approximation of $X$ so
that the result is manifestly quantized.
Preprint: YITP-24-14
Introduction— Consider a three-dimensional closed and oriented manifold $X$.
For a smooth map $g:X\to U(N)$, with $U(N)$ representing the group of $N\times
N$ unitary matrices, the winding number, an integer value, is defined by the
following expression:
$\displaystyle W_{3}[g]$ $\displaystyle=\frac{1}{2\pi}\int_{X}H\in\mathbb{Z},$
(1) $\displaystyle H$
$\displaystyle=\frac{1}{12\pi}\mathrm{Tr}\left[(g^{-1}\mathrm{d}g)^{3}\right].$
(2)
The winding number $W_{3}[g]$ is pivotal in various branches of physics,
including topological band theory, where it acts as the topological invariant
for three-dimensional superconductors with time-reversal symmetry [1, 2], and
in non-Abelian (lattice) gauge theory, where it appears in instanton number calculations [3, 4]. Often in these applications, the function $g$ is defined only on a finite set of lattice points for numerical analysis. Therefore, an efficient numerical formulation based on a lattice approximation of the manifold is an important issue.
For the first Chern number $ch_{1}$ of a line bundle with connection, a sort
of two-dimensional counterpart of the winding number, a well-established
discrete formulation with evident quantization exists [5, 6]. Furthermore,
discrete line bundles over finite simplicial complexes have been explored,
especially concerning applications in computer graphics [7]. This paper
develops a method for evaluating $W_{3}[g]$ via a discretized approximation of
$X$, ensuring the result remains manifestly quantized, provided the
approximation of $X$ is sufficiently refined relative to the length scale of the spatial variation of $g$.
Figure 1: (a) Illustration of a $\theta$-gap. (b) Projection onto the
eigenspace between two $\theta$-gaps. (c) Smearing eigenvalues over a finite
set of vertices. In each panel, the unit circle in the complex plane is
depicted, with blue arcs representing the spectrum of the $U(N)$-valued matrix
$g(x)$ within a local region. Red lines mark the locations of $\theta$-gaps.
Formulation— Given that $H$ is a closed three-form, it is locally exact,
meaning that for a local patch, there exists a two-form $B$ such that $H=dB$.
To construct $B$ explicitly, we introduce a gap condition for elements of
$U(N)$. A matrix $g\in U(N)$ exhibits a $\theta$-gap if none of its
eigenvalues are $e^{i\theta}$ for a given real number $\theta\in[0,2\pi)$ [8],
as illustrated in Fig. 1 (a). Furthermore, we define $\log_{\theta}z$ for
a nonzero complex number $z\in\mathbb{C}^{\times}$ as
$\displaystyle\log_{\theta}z=\log|z|+i\arg z,\quad\theta\leq\arg
z<\theta+2\pi.$ (3)
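As a small illustration, here is a sketch of $\log_{\theta}$ in code, assuming NumPy; the only nontrivial step is shifting the argument into $[\theta,\theta+2\pi)$:

```python
import numpy as np

def log_theta(z, theta):
    """log_theta(z) of Eq. (3): branch cut placed at angle theta."""
    arg = np.angle(z) % (2 * np.pi)   # arg z in [0, 2*pi)
    if arg < theta:                   # shift into [theta, theta + 2*pi)
        arg += 2 * np.pi
    return np.log(abs(z)) + 1j * arg
```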
For two distinct $\theta$-gaps, $\theta_{1}$ and $\theta_{2}$, the following
relation holds:
$\displaystyle\log_{\theta_{1}}z-\log_{\theta_{2}}z$
$\displaystyle=\begin{cases}2\pi
i\times\mathrm{sgn}(\theta_{1}-\theta_{2})&(\min(\theta_{1},\theta_{2})<\arg
z<\max(\theta_{1},\theta_{2})),\\\ 0&(\mathrm{otherwise}).\end{cases}$ (4)
Consider $U\subset X$ as a three-dimensional subspace where $g(x)$ maintains a
$\theta$-gap for $x\in U$. Let $\gamma(x)=(u_{1}(x),\dots,u_{N}(x))\in U(N)$
be a unitary matrix diagonalizing $g(x)$, i.e.,
$g(x)=\gamma(x)\Lambda(x)\gamma(x)^{\dagger}$ with
$\Lambda(x)=\mathrm{diag}(\lambda_{1},\dots,\lambda_{N})$, where
$\lambda_{n}\in U(1)$ for $n=1,\dots,N$. The exact form $B_{\theta}$ is given
by [9]
$\displaystyle B_{\theta}$ $\displaystyle=Q+R_{\theta},$ (5) $\displaystyle Q$
$\displaystyle=\frac{1}{4\pi}\mathrm{Tr}[\gamma^{-1}d\gamma\Lambda\gamma^{-1}d\gamma\Lambda^{-1}],$
(6) $\displaystyle R_{\theta}$
$\displaystyle=\frac{1}{2\pi}\mathrm{Tr}[\log_{\theta}\Lambda(\gamma^{-1}d\gamma)^{2}].$
(7)
Note that $Q$ is independent of $\theta$, while $R_{\theta}$ is not. It is evident that $X$ can be covered by patches $\\{U_{i}\\}_{i}$ such that in each patch $U_{i}$, $g(x)$ exhibits a specific $\theta$-gap $\theta_{i}$.
The unitary matrix $\gamma$ is not unique due to the transformation
$\gamma\mapsto\gamma W$, where $W\in U(N)$ commutes with $\Lambda$, satisfying
$W\Lambda W^{-1}=\Lambda$. This ambiguity, however, does not affect $Q$ and
$R_{\theta}$. To illustrate, consider the $N$ eigenvalues divided into groups
of $|I|$ degenerate ones, each with eigenvalue $\lambda_{I}$ so that
$\Lambda=\bigoplus_{I}\lambda_{I}{\bf 1}_{|I|}$. Introduce a block matrix
notation $A_{IJ}=(u_{i}^{\dagger}du_{j})_{i\in I,j\in J}$. The transformation
matrix $W$ is expressed as $W=\bigoplus_{I}W_{I}$ as well, with $W_{I}\in
U(|I|)$, modifying $A_{IJ}$ to
$W_{I}^{\dagger}A_{IJ}W_{J}+\delta_{IJ}W_{I}^{-1}dW_{I}$. Consequently,
$Q$ and $R_{\theta}$ can be represented as:
$\displaystyle Q$ $\displaystyle=\frac{1}{4\pi}\sum_{I,J;I\neq
J}\mathrm{Tr}_{I}[A_{IJ}A_{JI}]\lambda_{J}\lambda_{I}^{-1},$ (8)
$\displaystyle R_{\theta}$ $\displaystyle=\frac{1}{2\pi}\sum_{I,J;I\neq
J}\mathrm{Tr}_{I}[A_{IJ}A_{JI}]\log_{\theta}\lambda_{I},$ (9)
where $\mathrm{Tr}_{I}$ denotes the trace over indices $i\in I$. In the
summation $\sum_{I,J}$, terms with $I=J$ can be excluded due to
$\mathrm{Tr}_{I}[(A_{II})^{2}]=0$. This demonstrates the invariance of $Q$ and
$R_{\theta}$ under the transformation $\gamma\mapsto\gamma W$.
Another noteworthy aspect is that the difference in $B_{\theta}$ between two
$\theta$-gaps is a total derivative. For $0\leq\theta_{1},\theta_{2}<2\pi$,
and using (4), it follows that:
$\displaystyle B_{\theta_{1}}-B_{\theta_{2}}=d\alpha_{\theta_{1},\theta_{2}},$
(10)
where
$\displaystyle\alpha_{\theta_{1},\theta_{2}}$ $\displaystyle=-i\
\mathrm{sgn}(\theta_{1}-\theta_{2})\mathrm{Tr}[P_{\theta_{1},\theta_{2}}\gamma^{-1}d\gamma]$
$\displaystyle=-i\ \mathrm{sgn}(\theta_{1}-\theta_{2})$
$\displaystyle\quad\times\sum_{n;\min(\theta_{1},\theta_{2})<\arg\lambda_{n}<\max(\theta_{1},\theta_{2})}u_{n}^{\dagger}du_{n},$
(11)
with
$\displaystyle
P_{\theta_{1},\theta_{2}}=\sum_{n;\min(\theta_{1},\theta_{2})<\arg\lambda_{n}<\max(\theta_{1},\theta_{2})}u_{n}u_{n}^{\dagger},$
(12)
the orthogonal projection onto the eigenspace for eigenvalues that fulfill
$\min(\theta_{1},\theta_{2})<\arg\lambda_{n}<\max(\theta_{1},\theta_{2})$. In
cases where $\theta_{1}=\theta_{2}$ or no eigenvectors meet the condition
$\min(\theta_{1},\theta_{2})<\arg\lambda_{n}<\max(\theta_{1},\theta_{2})$,
$\alpha_{\theta_{1},\theta_{2}}$ simply vanishes.
Now, we express the winding number $W_{3}[g]$ as a sum of line integrals,
utilizing a cubic decomposition $L$ of the manifold $X$. (Any simplicial
decomposition is equally valid.) Within each cube $c$ of the lattice $L$, we
select $\theta_{c}\in[0,2\pi)$ such that $g_{x}$ for $x\in c$ exhibits a
$\theta$-gap of $\theta_{c}$. Thus, $W_{3}[g]$ can be reformulated as a sum of
integrals over all plaquettes:
$\displaystyle W_{3}[g]$
$\displaystyle=\frac{1}{2\pi}\sum_{c}\int_{c}dB_{\theta_{c}}$
$\displaystyle=\frac{1}{2\pi}\sum_{c}\int_{\partial c}B_{\theta_{c}}$
$\displaystyle=\frac{1}{2\pi}\sum_{p}\int_{p}(B_{\theta_{p}^{-}}-B_{\theta_{p}^{+}}).$
(13)
Here, $\sum_{p}$ runs over all plaquettes $p$ in the lattice $L$, with each
$p$ being oriented. The gap parameters $\theta_{p}^{+}$ and $\theta_{p}^{-}$
correspond to the cubes adjacent to plaquette $p$, in directions parallel and
antiparallel to $p$’s normal vector, respectively, as depicted in Fig. 2 (a).
This formulation further simplifies to a sum of line integrals:
$\displaystyle W_{3}[g]$
$\displaystyle=\frac{1}{2\pi}\sum_{p}\int_{p}d\alpha_{\theta_{p}^{-},\theta_{p}^{+}}$
$\displaystyle=\frac{1}{2\pi}\sum_{p}\oint_{\partial
p}\alpha_{\theta_{p}^{-},\theta_{p}^{+}}.$ (14)
The transition to the last expression is contingent upon
$\alpha_{\theta_{p}^{-},\theta_{p}^{+}}$ being smoothly defined across
plaquette $p$. When $\alpha_{\theta_{p}^{-},\theta_{p}^{+}}$ is solely
determined along the loop $\partial p$ bordering plaquette $p$, a $2\pi$
ambiguity may arise from large gauge transformations $u_{n}\to
u_{n}e^{i\chi_{n}}$, where $\oint_{\partial p}d\chi_{n}=2\pi$, potentially
altering $W_{3}[g]$ by an integer. However, if the cubic lattice $L$ is
sufficiently fine relative to $g$’s spatial variations, the integral
$\oint_{\partial p}\alpha_{\theta_{p}^{-},\theta_{p}^{+}}$ approximates 0 mod
$2\pi$, permitting its interpretation as an $\mathbb{R}$-valued quantity
devoid of the $2\pi$ ambiguity.
Figure 2: (a) A plaquette $p$ within the cubic lattice, showing
$\theta_{p}^{+}$ and $\theta_{p}^{-}$ as the $\theta$-gaps of cubes adjacent
to $p$, aligned parallel and anti-parallel to $p$’s normal vector,
respectively. The vertices $v_{0},v_{1},v_{2},$ and $v_{3}$ are sequentially
labeled around the perimeter of plaquette $p$. (b) An edge $v_{a}v_{b}$ of the
cubic lattice, illustrating the $\theta$-gaps of cubes adjacent to the edge
$v_{a}v_{b}$.
We claim that the winding number as expressed in (14) can be calculated solely
using the diagonalizing matrices $\gamma(v)$ at the vertices $v$ of lattice
$L$. Diagonalizing $g(v)$ at vertices $v\in L$ yields the eigenvector and
eigenvalue pairs $\\{u_{n}(v),\lambda_{n}(v)\\}_{n=1,\dots,N}$ for each vertex
$v$. The ordering of eigenvectors $u_{n}(v)$ is such that the angles of
eigenvalues ascend, satisfying
$0\leq\lambda_{1}(v)\leq\cdots\leq\lambda_{N}(v)<2\pi$. (Note that eigenvalues
$\lambda$ near $0$ may reorder significantly under minor perturbations, yet
this does not contribute to the discrete formula below.) The gap parameter
$\theta_{c}$ for each cube $c$ is determined as follows: From the eight
vertices of cube $c$, denoted as $v\in c$, we derive $8N$ eigenvalues
$\\{\lambda_{n}(v)\\}_{v\in c,n=1,\dots,N}$. By smearing all eigenvalues
$\lambda_{n}(v)$, we have a set of intervals:
$\displaystyle I_{c}$ $\displaystyle=\bigcup_{v\in
c,n=1,\dots,N}\Bigg{\\{}\arg(\lambda_{n}(v)e^{i\delta\phi})\in[0,2\pi)\Bigg{|}$
$\displaystyle\hskip
30.0pt-\frac{\beta}{2N}<\delta\phi<\frac{\beta}{2N}\Bigg{\\}}.$ (15)
Here, $0<\beta<1$ is a constant smearing parameter ensuring that adjacent
eigenvalues fall within the same smeared interval; for example, we can set $\beta=1/2$. We select a $\theta$ from the set $[0,2\pi)\backslash I_{c}$ to
serve as $\theta_{c}$ for cube $c$. (Refer to Fig. 1 (c) for visualization.)
With $\theta$-gaps for all cubes in lattice $L$ thus defined, the gap
parameters $\theta_{p}^{+}$ and $\theta_{p}^{-}$ for each plaquette $p$ are
specified. For each corner vertex $v_{0},v_{1},v_{2},v_{3}$ of plaquette $p$,
we define $\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{a})$ as an $N\times N_{q}$ matrix:
$\displaystyle\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{a})=\left(u_{n_{1}}(v_{a}),\dots,u_{n_{N_{q}}}(v_{a})\right),$
(16)
comprising $N_{q}$ eigenvectors, with eigenvalue angles satisfying
$\min(\theta_{p}^{+},\theta_{p}^{-})<\arg\lambda_{n_{1}}(v_{a})\leq\cdots\leq\arg\lambda_{n_{N_{q}}}(v_{a})<\max(\theta_{p}^{+},\theta_{p}^{-})$.
The nonnegative integer $N_{q}$, indicating the count of eigenvalues between
$e^{i\theta_{p}^{+}}$ and $e^{i\theta_{p}^{-}}$, should be common to the four vertices $v_{0},v_{1},v_{2},v_{3}$ of plaquette $p$, provided the lattice
$L$ is fine enough. Then, the line integral $\oint_{\partial
p}\alpha_{\theta_{p}^{-},\theta_{p}^{+}}$ can be approximated as
$\displaystyle\oint_{\partial p}\alpha_{\theta_{p}^{-},\theta_{p}^{+}}$
$\displaystyle\cong\mathrm{sgn}(\theta_{p}^{+}-\theta_{p}^{-})\times\mathrm{Arg}\,\det\left[\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{0})^{\dagger}\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{3})\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{3})^{\dagger}\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{2})\right.$
$\displaystyle\hskip
50.0pt\left.\times\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{2})^{\dagger}\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{1})\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{1})^{\dagger}\gamma_{\\{\theta_{p}^{+},\theta_{p}^{-}\\}}(v_{0})\right]=:\Phi_{p},$
(17)
where $\mathrm{Arg}$ denotes the principal value ${-\pi<\mathrm{Arg}\ z<\pi}$.
Consequently, we obtain a discrete formula for the winding number:
$\displaystyle W^{\rm
dis}_{3}[g]=\frac{1}{2\pi}\sum_{p}\Phi_{p}\in\mathbb{Z},$ (18)
which relies solely on the diagonalization of matrix $g(v)$ at vertices $v\in
L$ of the cubic lattice $L$ approximating the manifold $X$.
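To make Eq. (17) concrete, the following is a minimal sketch of the plaquette phase $\Phi_{p}$, assuming the eigendecomposition of $g(v)$ at the four corner vertices is available and that the lattice is fine enough for the four blocks to share the same $N_{q}$; the variable names are our own.

```python
import numpy as np

def gamma_block(lams, vecs, theta_lo, theta_hi):
    # Columns of the diagonalizer whose eigenvalue angle lies strictly
    # between the two theta-gaps (eigenvalues already sorted by angle).
    args = np.angle(lams) % (2 * np.pi)
    return vecs[:, (args > theta_lo) & (args < theta_hi)]

def plaquette_phase(lams4, vecs4, theta_plus, theta_minus):
    """Phi_p of Eq. (17) from eigendata at the corners v0, v1, v2, v3."""
    lo, hi = min(theta_plus, theta_minus), max(theta_plus, theta_minus)
    g0, g1, g2, g3 = (gamma_block(l, v, lo, hi) for l, v in zip(lams4, vecs4))
    if g0.shape[1] == 0:          # no eigenvalues between the two gaps
        return 0.0
    loop = (g0.conj().T @ g3) @ (g3.conj().T @ g2) \
         @ (g2.conj().T @ g1) @ (g1.conj().T @ g0)
    return np.sign(theta_plus - theta_minus) * np.angle(np.linalg.det(loop))
```

Summing `plaquette_phase` over all oriented plaquettes and dividing by $2\pi$ then gives $W^{\rm dis}_{3}[g]$ of Eq. (18).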
The discrete formula (18) is inherently quantized. To demonstrate this, we
express $e^{2\pi iW^{\mathrm{dis}}_{3}[g]}$ as the product of edge
contributions:
$\displaystyle e^{2\pi iW^{\mathrm{dis}}_{3}[g]}=\prod_{p}e^{i\Phi_{p}}$
$\displaystyle=\prod_{v_{a}v_{b}\in\\{\mathrm{edges}\\}}\exp\Bigg{[}i\arg\Big{\\{}\det\left[\gamma_{\\{\theta_{1},\theta_{0}\\}}(v_{a})^{\dagger}\gamma_{\\{\theta_{1},\theta_{0}\\}}(v_{b})\right]^{\mathrm{sgn}(\theta_{1}-\theta_{0})}\det\left[\gamma_{\\{\theta_{2},\theta_{1}\\}}(v_{a})^{\dagger}\gamma_{\\{\theta_{2},\theta_{1}\\}}(v_{b})\right]^{\mathrm{sgn}(\theta_{2}-\theta_{1})}$
$\displaystyle\quad\times\det\left[\gamma_{\\{\theta_{3},\theta_{2}\\}}(v_{a})^{\dagger}\gamma_{\\{\theta_{3},\theta_{2}\\}}(v_{b})\right]^{\mathrm{sgn}(\theta_{3}-\theta_{2})}\det\left[\gamma_{\\{\theta_{0},\theta_{3}\\}}(v_{a})^{\dagger}\gamma_{\\{\theta_{0},\theta_{3}\\}}(v_{b})\right]^{\mathrm{sgn}(\theta_{0}-\theta_{3})}\Big{\\}}\Bigg{]}.$
(19)
Here, $v_{a}v_{b}$ denotes an individual edge, and
$\theta_{0},\dots,\theta_{3}$ are the gap parameters for cubes adjacent to the
edge $v_{a}v_{b}$, ordered counterclockwise from the vector
$\overrightarrow{v_{b}v_{a}}$ pointing out of the page. See Fig. 2 (b).
Regardless of the relative magnitudes of $\theta_{0},\theta_{1},\theta_{2}$,
and $\theta_{3}$, each edge’s contribution in (19) cancels out exactly due to
the property that for $\theta<\theta^{\prime}<\theta^{\prime\prime}$,
$\displaystyle\gamma_{\\{\theta,\theta^{\prime\prime}\\}}(v_{a})=\left(\gamma_{\\{\theta,\theta^{\prime}\\}}(v_{a}),\gamma_{\\{\theta^{\prime},\theta^{\prime\prime}\\}}(v_{a})\right).$
(20)
Thus, $e^{2\pi iW^{\mathrm{dis}}_{3}[g]}=1$ is valid for any sufficiently fine
discrete approximation of $X$.
Model calculation— Our formulation extends to computing the winding number for
maps $g:X\to GL_{N}(\mathbb{C})$, where the target space consists of
invertible matrices. Invertible matrices that cannot be diagonalized, known as
”exceptional points,” constitute a ring in the three-dimensional parameter
space and are stable under minor perturbations. To circumvent these
exceptional points, one can employ the singular value decomposition $g=U\Sigma
V^{\dagger}$ to derive the unitary matrix $UV^{\dagger}$ at each vertex within
the discretized parameter space. We verified our formula (18) with the model
$g(k_{x},k_{y},k_{z})=t(\sin k_{x}\sigma_{x}+\sin k_{y}\sigma_{y}+\sin
k_{z}\sigma_{z})-i(m+\cos k_{x}+\cos k_{y}+\cos k_{z})\mathbf{1}_{2}$ on the
three-torus $(k_{x},k_{y},k_{z})\in[-\pi,\pi]^{\times 3}$, employing a cubic
lattice of $20\times 20\times 20$ mesh. Here,
$\sigma_{\mu}\in\\{\sigma_{x},\sigma_{y},\sigma_{z}\\}$ denotes the Pauli
matrices. We have checked that the winding number $W^{\rm dis}_{3}[g]$ equals
$-2\mathrm{sgn}(t)$ for $|m|<1$, $\mathrm{sgn}(t)$ for $1<|m|<3$, and $0$ for
$|m|>3$, which is consistent with the direct calculation of the analytic form
$W_{3}[g]$.
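A sketch of the model map and the diagonalization step, assuming NumPy/SciPy; the complex Schur decomposition is used because, for a normal (here unitary) matrix, it yields orthonormal eigenvector columns even in nearly degenerate subspaces:

```python
import numpy as np
from scipy.linalg import schur

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def g_model(k, t=1.0, m=2.0):
    kx, ky, kz = k
    return t * (np.sin(kx) * sx + np.sin(ky) * sy + np.sin(kz) * sz) \
        - 1j * (m + np.cos(kx) + np.cos(ky) + np.cos(kz)) * np.eye(2)

def unitary_eigendata(k):
    u, _, vh = np.linalg.svd(g_model(k))       # retract GL_2(C) to U(2)
    # Complex Schur of a normal matrix: t_mat is (numerically) diagonal,
    # and the columns of q are orthonormal eigenvectors.
    t_mat, q = schur(u @ vh, output='complex')
    lams = np.diag(t_mat)
    order = np.argsort(np.angle(lams) % (2 * np.pi))
    return lams[order], q[:, order]
```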
Summary— In this work, we presented a formulation for calculating the three-
dimensional winding number $W_{3}[g]$ for smooth maps $g:X\to U(N)$, utilizing
a discrete approximation of the manifold $X$. Our approach allows for the
computation of $W_{3}[g]$ exclusively through the diagonalization of matrices
$g(v)$ at a finite number of vertices, ensuring the result is explicitly
quantized to integer values. Discrete formulations that explicitly quantify
topological invariants are currently limited in scope. Examples such as
instanton numbers represented by the second Chern numbers, higher-dimensional
winding numbers, and degrees of maps to more general symmetric spaces have yet
to be explored. We look forward to future studies shedding more light on these
topics.
###### Acknowledgements.
We were supported by JST CREST Grant No. JPMJCR19T2, and JSPS KAKENHI Grant
No. 22H05118 and 23H01097.
## References
* Volovik [2003] G. E. Volovik, _The universe in a helium droplet_ , Vol. 117 (OUP Oxford, 2003).
* Schnyder _et al._ [2008] A. P. Schnyder, S. Ryu, A. Furusaki, and A. W. W. Ludwig, Classification of topological insulators and superconductors in three spatial dimensions, Phys. Rev. B 78, 195125 (2008).
* Nakahara [2018] M. Nakahara, _Geometry, topology and physics_ (CRC press, 2018).
* Lüscher [1982] M. Lüscher, Topology of lattice gauge fields, Communications in Mathematical Physics 85, 39 (1982).
* Fujiwara _et al._ [2001] T. Fujiwara, H. Suzuki, and K. Wu, Topological Charge of Lattice Abelian Gauge Theory, Progress of Theoretical Physics 105, 789 (2001), https://academic.oup.com/ptp/article-pdf/105/5/789/5186102/105-5-789.pdf .
* Fukui _et al._ [2005] T. Fukui, Y. Hatsugai, and H. Suzuki, Chern numbers in discretized brillouin zone: Efficient method of computing (spin) hall conductances, Journal of the Physical Society of Japan 74, 1674 (2005), https://doi.org/10.1143/JPSJ.74.1674 .
* Knöppel and Pinkall [2016] F. Knöppel and U. Pinkall, Complex line bundles over simplicial complexes and their applications, in _Advances in Discrete Differential Geometry_, edited by A. I. Bobenko (Springer Berlin Heidelberg, Berlin, Heidelberg, 2016) pp. 197–239.
* Carpentier _et al._ [2015] D. Carpentier, P. Delplace, M. Fruchart, K. Gawedzki, and C. Tauber, Construction and properties of a topological index for periodically driven time-reversal invariant 2d crystals, Nuclear Physics B 896, 779 (2015).
* Gawedzki and Reis [2002] K. Gawedzki and N. Reis, WZW branes and gerbes, Reviews in Mathematical Physics 14, 1281 (2002), https://doi.org/10.1142/S0129055X02001557 .
# SafeRL-Kit: Evaluating Efficient Reinforcement Learning Methods
for Safe Autonomous Driving
Linrui Zhang Qin Zhang Li Shen Bo Yuan Xueqian Wang
###### Abstract
Safe reinforcement learning (RL) has achieved significant success on risk-
sensitive tasks and shown promise in autonomous driving (AD) as well.
Considering the distinctiveness of this community, efficient and reproducible
baselines are still lacking for safe AD. In this paper, we release SafeRL-Kit
to benchmark safe RL methods for AD-oriented tasks. Concretely, SafeRL-Kit
contains several latest algorithms specific to zero-constraint-violation
tasks, including Safety Layer, Recovery RL, off-policy Lagrangian method, and
Feasible Actor-Critic. In addition to existing approaches, we propose a novel
first-order method named Exact Penalty Optimization (EPO) and sufficiently
demonstrate its capability in safe AD. All algorithms in SafeRL-Kit are
implemented (i) under the off-policy setting, which improves sample efficiency
and can better leverage past logs; (ii) with a unified learning framework,
providing off-the-shelf interfaces for researchers to incorporate their
domain-specific knowledge into fundamental safe RL methods. Finally, we
conduct a comparative evaluation of the above algorithms in SafeRL-Kit and
shed light on their efficacy for safe autonomous driving. The source code is
available at this https URL.
## 1 Introduction
Figure 1: The overall framework of SafeRL-Kit. The trajectory collector
interacts with specified AD environments (e.g., MetaDrive (Li et al., 2021))
and stores transitions in the memory. SafeRL-Kit contains several safe RL
agents that efficiently learn from past experiences, including Safety Layer,
Recovery RL, Off-policy Lagrangian, Feasible Actor Critic, and newly proposed
Exact Penalty Optimization.
Reinforcement Learning (RL) has achieved superhuman performance in many
decision-making problems (Mnih et al., 2015; Vinyals et al., 2019). Typically,
the agent learns from trial and error and requires minimal prior knowledge of
the environment. Such a paradigm has natural advantages in mastering complex
skills for highly nonlinear systems like autonomous vehicles (Kiran et al.,
2021).
Nevertheless, concerns about systematic safety limit the widespread use of standard RL in real-world applications (Amodei et al., 2016). As an alternative, safe RL takes safety requirements as hard constraints and optimizes policies within the feasible domain. In recent years, it has been deemed a practical solution to resource allocation (Liu et al., 2021), robotic locomotion (Yang et al., 2022), etc.
There have also been studies introducing safe RL into autonomous driving (AD)
(Isele et al., 2018; Chen et al., 2021; Li et al., 2022). Despite those
ongoing efforts, a unified benchmark is of great relevance to facilitate
further research on safe AD. We notice some risk-sensitive simulated
environments (Li et al., 2021; Herman et al., 2021) have been proposed, but an
efficient safe RL toolkit is still absent for this community. Considering the
distinctiveness of AD-oriented tasks, common code-bases (Ray et al., 2019;
Yuan et al., 2021) lack the following pivotal characteristics:
(1) Being safety-critical.
The agent must maintain a zero cost-return as much as possible, since any inadmissible behavior in autopilot leads to catastrophic failures. By contrast, the previous code-bases are built for general purposes, with trajectory-based constraints and non-zero thresholds.
(2) Being sample-efficient.
Off-policy algorithms can better leverage past logs and human demonstrations,
which is crucial for AD. By contrast, the previous code-base requires tens of
millions of interactions due to its on-policy algorithms, like CPO and PPO-L
(Ray et al., 2019).
(3) Being up-to-date.
There has been a fast-growing body of RL-based safe control. Nevertheless, the
previous code-base merely contains older baselines (Achiam et al., 2017; Chow
et al., 2017) and lacks the latest advances.
(4) Being easy-to-use.
Most work on learning-based safe AD tends to incorporate domain-specific
knowledge into fundamental safe RL. Thus the toolkit is supposed to provide
off-the-shelf interfaces for extended studies. However, the modules of the
previous code-base are highly coupled and are implemented with a deprecated TensorFlow version.
To provide such a toolkit for safe RL algorithms and to understand which of them are best suited for AD-oriented tasks, our contributions in this work are summarized as the following three folds:
* •
We release SafeRL-Kit, which contains the latest advances in safe RL (Dalal et
al., 2018; Ha et al., 2020; Thananjeyan et al., 2021; Ma et al., 2021). All
algorithms are implemented efficiently under off-policy settings and with a
unified training framework.
* •
We propose a novel first-order method coined Exact Penalty Optimization (EPO)
and incorporate it into SafeRL-Kit. EPO utilizes a single penalty factor and a
ReLU operator to construct an equivalent unconstrained objective. Empirical
results show the simple technique is surprisingly effective and robust for AD-
oriented tasks.
* •
We benchmark SafeRL-Kit in a representative toy environment and a simulated
platform with realistic vehicle dynamics. To the best of our knowledge, this
paper is the first to provide unified off-policy safe RL baselines and a fair
comparison of them specific to AD.
## 2 Related Work
### 2.1 Safe RL Algorithms
A number of works tackle RL-based safe control for autonomous agents, and we
divide them into three genres. The first type of method, coined as safe policy
optimization, incorporates safety constraints into the standard RL objective
and yields a constrained sequential optimization problem (Chow et al., 2017;
Achiam et al., 2017; Zhang et al., 2020; Ma et al., 2021; Zhang et al., 2022).
The second type of method, coined as safety correction, projects initial
unsafe behaviors to the feasible region (Dalal et al., 2018; Zhao et al.,
2021). The third type of method, coined as safety recovery, learns an
additional pair of safe actor-critic to take over control when encountering
potential risks (Thananjeyan et al., 2021; Yang et al., 2022).
There have also been studies on safe RL specific to AD-oriented tasks. Isele
et al. (2018) utilize a prediction module to mask dangerous behaviors, which only works in discrete action spaces. Wen et al. (2020)
extend Constrained Policy Optimization (CPO) (Achiam et al., 2017) to AD and
employ synchronized parallel actors to accelerate the convergence speed for
on-policy CPO. Chen et al. (2021) take the ego-camera view as input and train
an additional recovery policy via a heuristic objective based on Hamilton-
Jacobi reachability. Li et al. (2022) propose a human-in-loop approach to
learn safe driving efficiently.
### 2.2 Safe RL Benchmarks
For general scenarios, a set of benchmarks is commonly used to evaluate the efficacy of safe RL algorithms. The classic environments (https://github.com/SvenGronauer/Bullet-Safety-Gym) include Robot with Limit Speed (Zhang et al., 2020), Circle and Gather (Achiam et al., 2017), etc. Safety-gym (https://github.com/openai/safety-gym) (Ray et al., 2019) contains several tasks (goal, button, push) and agents (point, car, doggo) that are representative of robot control problems. Meanwhile, its authors provide popular baselines (https://github.com/openai/safety-starter-agents), including CPO and some on-policy Lagrangian methods. Safe-control-gym (https://github.com/utiasDSL/safe-control-gym) (Yuan et al., 2021) bridges the gap between the control and RL communities; its authors also developed an open-source toolkit supporting both model-based and data-driven control techniques.
For AD-oriented tasks, there have been some existing environments for safe driving. Li et al. (2021) release MetaDrive (https://github.com/metadriverse/metadrive), which benchmarks reinforcement learning algorithms for vehicle autonomy, including safe exploitation and exploration. Herman et al. (2021) propose Learn-to-Race (https://github.com/learn-to-race/l2r), which focuses on safe control at high speed. Nevertheless, the community still lacks a set of strong safe RL baselines specific to AD, considering the distinctiveness depicted in Section 1. To our best knowledge, this paper is the first to provide unified off-policy safe RL baselines and a fair comparison of them for the purpose of autonomous driving.
## 3 Preliminaries
(a) Cost Signal = 0 (b) Cost Signal = 1
Figure 2: SpeedLimit Benchmark. The vehicle is rewarded for driving along the
avenue, but receives a cost signal if $vel>1.5m/s$.
A Markov Decision Process (MDP) (Sutton & Barto, 1998) is defined by a tuple
$(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\mu,\gamma)$.
${\mathcal{S}}$ and ${\mathcal{A}}$ denote the state space and the action
space respectively.
${\mathcal{P}}:{\mathcal{S}}\times\mathcal{A}\times\mathcal{S}\mapsto[0,1]$ is
the transition probability function to describe the dynamics of the system.
$\mathcal{R}:\mathcal{S}\times\mathcal{A}\mapsto\mathbb{R}$ is the reward
function. $\mu:\mathcal{S}\mapsto[0,1]$ is the initial state distribution.
$\gamma$ is the discount factor for future reward. A stationary policy
$\pi:\mathcal{S}\mapsto P(\mathcal{A})$ maps given states to probability distributions over the action space. The goal of standard RL is to find the optimal policy $\pi^{*}$
that maximizes the expected discounted return
$J_{R}(\pi)=\mathop{\mathbb{E}}_{\tau\sim\pi}\big{[}\sum^{\infty}_{t=0}\gamma^{t}R(s_{t},a_{t})\big{]},$
where $\tau=\\{(s_{t},a_{t})\\}_{t\geq 0}$ is a sample trajectory and
$\tau\sim\pi$ accounts for the distribution over trajectories depending on
$s_{0}\sim\mu,a_{t}\sim\pi(\cdot|s_{t}),s_{t+1}\sim P(\cdot|s_{t},a_{t})$.
A Constrained Markov Decision Process (CMDP) (Altman, 1999) extends MDP to
$(\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R},\mathcal{C},\mu,\gamma)$.
The cost function $\mathcal{C}:\mathcal{S}\times\mathcal{A}\mapsto[0,+\infty]$
reflects the violation of systematic safety. The goal of safe RL is to find
$\pi^{*}={\arg\max}_{\pi}J_{R}(\pi)\quad\mathrm{s.t.}\ \ \\{a_{t}\\}_{t\geq
0}\text{ is feasible}.$
In a CMDP, the cost function is typically constrained in one of the following two ways. The first is the _Instantaneous Constrained MDP_. This type of safe RL formulation requires the selected actions to satisfy the constraint at every decision-making step, namely $C(s_{t},a_{t})\leq\epsilon$. The second is the _Cumulative Constrained MDP_. This type of safe RL formulation requires the discounted sum of cost signals over the whole trajectory to stay within a certain threshold, namely
$J_{C}(\pi)=\mathop{\mathbb{E}}_{\tau\sim\pi}\big{[}\sum^{\infty}_{t=0}\gamma^{t}C(s_{t},a_{t})\big{]}\leq
d.$
## 4 Problem Setup
In this paper, we develop SafeRL-Kit to evaluate efficient RL algorithms for
safe autonomous driving on existing benchmarks. We simplify the cost function
as the following risk-indicator:
$C(s,a)=\begin{cases}1,&\text{if the transition is unsafe}\\\
0,&\text{otherwise}\end{cases}.$ (1)
This formulation is generalizable to different AD-oriented tasks without
cumbersome reward and cost shaping. The goal of the autonomous vehicle is to
reach the destination as fast as possible while adhering to zero cost signals
at every time step. Specifically, we conduct comparative evaluations on a
representative toy environment and a simulated platform with realistic vehicle
dynamics respectively.
(a) Cost Signal = 0 (b) Cost Signal = 1
Figure 3: MetaDrive Benchmark. The vehicle aims to reach virtual markers, but receives a cost signal if it collides with obstacles or other vehicles, or if it drives out of the road.
Table 1: Comparison of different safe reinforcement learning algorithms for AD-oriented tasks.
Algorithm | Constraint Type | Constraint Scope | Deterministic Policy | Stochastic Policy
---|---|---|---|---
CPO (Ray et al., 2019) | Cumulative | Trajectory-wise | $\times$ | $\surd$
PPO-L (Ray et al., 2019) | Cumulative | Trajectory-wise | $\times$ | $\surd$
TRPO-L (Ray et al., 2019) | Cumulative | Trajectory-wise | $\times$ | $\surd$
Safety Layer | Instantaneous | State-wise | $\surd$ | $\times$
Recovery RL | Cumulative | State-wise | $\surd$ | $\surd$
Off-policy Lagrangian | Cumulative | Trajectory-wise | $\surd$ | $\surd$
Feasible Actor-Critic | Cumulative | State-wise | $\surd$ | $\surd$
Exact Penalty Optimization | Cumulative | Both | $\surd$ | $\surd$
### 4.1 SpeedLimit Benchmark
The task is inspired by Zhang et al. (2020), as illustrated in Figure 2. In the SpeedLimit task, the agent is a four-wheeled race car whose observation comprises its ego position, velocity, and yaw. The selected action controls the Revolutions Per Minute (RPM) and steering of the wheels. The agent is rewarded for approaching $x_{dest}=+\infty$, and the cost function is
$C(s,a)=\begin{cases}1,&\text{if vehicle's velocity}>1.5m/s\\\
0,&\text{otherwise}\end{cases}.$ (2)
The toy environment is simple yet representative since speed control is a
classic problem in vehicle autonomy. Besides, the speed limit is easy to reach, so poorly constrained algorithms may violate the safety constraint at almost every time step. That is, the toy environment enables us to see which algorithms can effectively reduce the dense cost return and are best suited
for safe AD tasks.
### 4.2 MetaDrive Benchmark
This task is inspired by Li et al. (2021), as illustrated in Figure 3.
Metadrive is a compositional, lightweight and realistic platform for vehicle
autonomy. Most importantly, it provides pre-defined environments for safe
policy learning in autopilots. Concretely, the observation is encoded by a
vector containing ego-state, navigation information and surrounding
information detected by the Lidar. We control the speed and steering of the
car to hit virtual land markers for rewards, and the cost function is defined
as
$C(s,a)=\begin{cases}1,&\text{if collides or out of the road}\\\
0,&\text{otherwise}\end{cases}$ (3)
It is worth mentioning that we set the traffic density to twice that of the original paper to construct a more challenging scenario.
## 5 Efficient Safe RL Algorithms
### 5.1 Overall Implementation
The current version of SafeRL-Kit contains several of the latest RL-based methods,
including _Safety Layer_ (Dalal et al., 2018), _Recovery RL_ (Thananjeyan et
al., 2021), _Off-policy Lagrangian_ (Ha et al., 2020), _Feasible Actor-Critic_
(Ma et al., 2021) and newly proposed _Exact Penalty Optimization_. We compare
above methods along with some existing on-policy baselines (Ray et al., 2019)
in Table 1.
Before diving into algorithmic details, we first explain the overall
implementation of SafeRL-Kit and its benefits:
(1)
The adopted algorithms address safe policy learning from different
perspectives (Safety Layer for safety correction; Recovery RL for safety
recovery; Lagrangian, FAC, and EPO for constrained optimization). Thus, users
can combine AD-specific knowledge with the proper type of safe RL baselines in
their studies.
(2)
All the algorithms are implemented under the off-policy Actor-Critic
architecture. Thus, they enjoy better sample efficiency and can leverage human
demonstration if needed.
(3)
All the algorithms are implemented with a unified training framework. By
default, all networks are MLPs with (256,256) hidden layers activated via the
ReLU function. The essential updates of backbone networks follow TD3 (Fujimoto
et al., 2018) without pre-training processes. Thus, we can conduct a fair
comparison to see which of them are best suited for AD-oriented tasks.
### 5.2 Safety Layer
Safety Layer, added on top of the original policy network, conducts a
quadratic-programming-based constrained optimization to find the "nearest" action within the feasible region.
Specifically, Safety Layer utilizes a parametric linear model
$C(s_{t},a_{t})\approx g(s_{t};\omega)^{\top}a_{t}+c_{t-1}$ (4)
to approximate the single-step cost function with supervised training and
yields the following QP problem
$\displaystyle a_{t}^{*}={\arg\min}_{a}\ \frac{1}{2}||a-\mu_{\theta}(s_{t})||^{2}\quad\mathrm{s.t.}\quad g(s_{t};\omega)^{\top}a+c_{t-1}\leq\epsilon,$ (5)
which projects the unsafe action back to the feasible region.
Since there is only one compositional cost signal in our problem, the closed-
form solution of problem (5) is
$a_{t}^{*}=\mu_{\theta}(s_{t})-\bigg{[}\frac{g(s_{t};\omega)^{\top}\mu_{\theta}(s_{t})+c_{t-1}-\epsilon}{g(s_{t};\omega)^{\top}g(s_{t};\omega)}\bigg{]}^{+}g(s_{t};\omega)$
(6)
Thus, Safety Layer is the type of method that addresses state-wise,
instantaneous constraints.
Note that $g_{\omega}$ is trained from offline data in Dalal et al. (2018). SafeRL-Kit instead learns the linear model synchronously with the policy network, considering the side effects of distribution shift. We employ a warm-up stage in the training process to avoid meaningless, inaccurate corrections.
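A minimal sketch of the closed-form projection (6), assuming NumPy arrays for the action and the learned gradient $g(s;\omega)$ (the names are ours, not SafeRL-Kit's API):

```python
import numpy as np

def safety_layer_project(mu, g, c_prev, eps):
    """Project the proposed action mu back to the linearized feasible set."""
    lam = (g @ mu + c_prev - eps) / (g @ g)   # QP multiplier for one constraint
    return mu - max(0.0, float(lam)) * g      # [.]^+ clips inactive constraints
```

When the predicted cost $g^{\top}\mu + c_{t-1}$ already satisfies the bound, the multiplier is clipped to zero and the action passes through unchanged.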
Table 2: Hyper-parameters of different safety-aware algorithms in SafeRL-Kit.
Hyper-parameter | Safety Layer | Recovery RL | Lagrangian | FAC | EPO
---|---|---|---|---|---
Cost Limit | 0.02 | 0.1 | 0.1 | 0.1 | 0.1
Reward Discount | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
Cost Discount | 0.99 | 0.99 | 0.99 | 0.99 | 0.99
Warm-up Ratio | 0.2 | 0.2 | N/A | N/A | N/A
Batch Size | 256 | 256 | 256 | 256 | 256
Critic LR | 3E-4 | 3E-4 | 3E-4 | 3E-4 | 3E-4
Actor LR | 3E-4 | 3E-4 | 3E-4 | 3E-4 | 3E-4
Safe Critic LR | 3E-4 | 3E-4 | 3E-4 | 3E-4 | 3E-4
Safe Actor LR | N/A | 3E-4 | N/A | N/A | N/A
Multiplier LR | N/A | N/A | 1E-5 | 1E-5 | N/A
Multiplier Init | N/A | N/A | 0.0 | N/A | N/A
Policy Delay | 2 | 2 | 2 | 2 | 2
Multiplier Delay | N/A | N/A | N/A | 12 | N/A
Penalty Factor | N/A | N/A | N/A | N/A | 5
### 5.3 Recovery RL
The critical insight behind Recovery RL is to introduce an additional policy
that recovers the agent from potentially unsafe states. Consequently, it trains two independent
RL agents instead of solving a cumbersome constrained optimization problem.
Specifically, Recovery RL learns a safe critic to estimate the future
probability of constraint violation as
$Q^{\pi}_{\text{risk}}(s_{t},a_{t})=c_{t}+(1-c_{t})\gamma\mathbb{E}_{\pi}Q^{\pi}_{\text{risk}}(s_{t+1},a_{t+1}).$
(7)
This formulation is slightly different from the standard Bellman equation
since it assumes the episode terminates when the agent receives a cost signal.
We found in experiments that such early stopping makes it intractable for agents to master desirable skills in AD. Thus, we remove the early-stopping condition but still preserve the original formulation of $Q^{\pi}_{\text{risk}}$ in (7), since it bounds the safe critic from above and mitigates over-estimation in Q-learning.
In the phase of policy execution, the recovery policy takes over the control
when the predicted value of the safe critic exceeds the given threshold, as
$a_{t}=\begin{cases}\pi_{\text{task}}(s_{t}),&\text{if
}Q^{\pi}_{\text{risk}}\big{(}s_{t},\pi_{\text{task}}(s_{t})\big{)}\leq\epsilon\\\
\pi_{\text{risk}}(s_{t}),&\text{otherwise}\end{cases}$ (8)
The optimization objective of $\pi_{\text{task}}$ is to maximize the
cumulative rewards, whereas the goal of $\pi_{\text{risk}}$ is to minimize
$Q^{\pi}_{\text{risk}}$, namely to reduce the potential risk of the agent.
It is important to store $a_{\text{task}}$ and $a_{\text{risk}}$
simultaneously in the replay buffer, and utilize them to train
$\pi_{\text{task}}$ and $\pi_{\text{risk}}$ respectively in Recovery RL. This
technique ensures that $\pi_{\text{task}}$ can learn from the new MDP instead of proposing the same unsafe actions continuously.
Similar to Safety Layer, Recovery RL in SafeRL-Kit also has a warm-up stage
where $Q^{\pi}_{\text{risk}}$ is trained but is not utilized.
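A minimal sketch of the execution rule (8), with `pi_task`, `pi_risk`, and `q_risk` assumed callables wrapping the respective networks:

```python
def recovery_act(s, pi_task, pi_risk, q_risk, eps):
    """Select the executed action per Eq. (8)."""
    a_task = pi_task(s)
    a_exec = a_task if q_risk(s, a_task) <= eps else pi_risk(s)
    # Both actions are returned so the buffer can store them separately,
    # letting pi_task and pi_risk each train on their own proposals.
    return a_exec, a_task
```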
### 5.4 Off-policy Lagrangian
Lagrangian relaxation is commonly used to address constrained optimization problems. Safe RL, too, can be formulated as a constrained sequential optimization problem
$\displaystyle\mathop{\max}_{\pi}$
$\displaystyle\mathop{\mathbb{E}}_{s\sim\mu}V_{0}^{\pi}(s)$ (9)
$\displaystyle\mathrm{s.t.}$
$\displaystyle\mathop{\mathbb{E}}_{s\sim\mu}U^{\pi}_{0}(s)\leq\epsilon,$
where
$V^{\pi}_{0}(s)=\mathop{\mathbb{E}}_{\tau\sim\pi}\big{[}\sum^{\infty}_{t=0}\gamma^{t}r_{t}\big{|}s_{0}=s]$
and
$U^{\pi}_{0}(s)=\mathop{\mathbb{E}}_{\tau\sim\pi}\big{[}\sum^{\infty}_{t=0}\gamma^{t}c_{t}\big{|}s_{0}=s]$.
Strong duality holds for primal problem (9) (Paternain et al., 2022), thus it
can be tackled via the dual problem
$\mathop{\max}_{\lambda\geq
0}\mathop{\min}_{\pi}\mathop{\mathbb{E}}_{s\sim\mu}-V_{0}^{\pi}(s)+\lambda\big{(}U^{\pi}_{0}(s)-\epsilon\big{)}.$
(10)
The off-policy objective of problem (10) in the parametric space (Ha et al.,
2020) can be formulated as
$\mathop{\max}_{\lambda\geq
0}\mathop{\min}_{\theta}\mathbb{E}_{\mathcal{D}}-Q^{\pi}(s,\pi_{\theta}(s))+\lambda\big{(}Q^{\pi}_{c}(s,\pi_{\theta}(s))-\epsilon\big{)}.$
(11)
Stochastic primal-dual optimization (Luenberger et al., 1984) is applied here
to update the primal and dual variables alternately, as follows:
$\begin{cases}\theta\leftarrow\theta-\eta_{\theta}\nabla_{\theta}\mathbb{E}_{\mathcal{D}}\big{(}-Q^{\pi}(s,\pi_{\theta}(s))+\lambda
Q^{\pi}_{c}(s,\pi_{\theta}(s))\big{)}\\\
\lambda\leftarrow\big{[}\lambda+\eta_{\lambda}\mathbb{E}_{\mathcal{D}}\big{(}Q^{\pi}_{c}(s,\pi_{\theta}(s))-\epsilon\big{)}\big{]}^{+}\end{cases}$
(12)
Notably, the update timescale of the primal variables is required to be faster than that of the Lagrange multipliers. Thus, we set
$\eta_{\theta}\gg\eta_{\lambda}$ in SafeRL-Kit.
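A minimal PyTorch sketch of the alternating updates (12); the networks and optimizer are assumed to exist, and `lam` is kept as a clipped scalar whose step size `eta_lam` is much smaller than the actor learning rate:

```python
import torch

def lagrangian_step(actor, critic, safe_critic, actor_opt, lam, batch_s, eps, eta_lam):
    # Primal descent on theta with the multiplier held fixed.
    a = actor(batch_s)
    actor_loss = (-critic(batch_s, a) + lam * safe_critic(batch_s, a)).mean()
    actor_opt.zero_grad()
    actor_loss.backward()
    actor_opt.step()
    # Dual ascent on lambda at a slower timescale, clipped at zero.
    with torch.no_grad():
        viol = (safe_critic(batch_s, actor(batch_s)) - eps).mean()
    return max(0.0, lam + eta_lam * viol.item())
```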
### 5.5 Feasible Actor-Critic
The constraint of Off-policy Lagrangian in Section 5.4 is based on the
expectation of whole trajectories, which inevitably allows some unsafe roll-
outs. Ma et al. (2021) introduce a new concept, namely state-wise constraints on the cumulative cost-return, which reads
$\displaystyle\mathop{\max}_{\pi}$
$\displaystyle\mathop{\mathbb{E}}_{s\sim\mu}V_{0}^{\pi}(s)$ (13)
$\displaystyle\mathrm{s.t.}$ $\displaystyle U^{\pi}_{0}(s)\leq\epsilon,\forall
s\in\mathcal{I_{F}}.$
Here $s\in\mathcal{I_{F}}$ stands for all "feasible" initial states. Also, their theoretical results show that problem (13) is a stricter version of problem (9), which provides strong safety guarantees for state-wise safe
control.
The dual problem of (13) is derived by rescaling the state-wise constraints
and reads
$\mathop{\max}_{\lambda\geq
0}\mathop{\min}_{\pi}\mathop{\mathbb{E}}_{s\sim\mu}-V_{0}^{\pi}(s)+\lambda(s)\big{(}U^{\pi}_{0}(s)-\epsilon\big{)}.$
(14)
The distinctiveness of problem (14) is that there are infinitely many Lagrangian multipliers, which are state-dependent. In SafeRL-Kit, we employ a neural network $\lambda(s;\xi)$ activated by the _Softplus_ function to map a given state $s$ to its corresponding Lagrangian multiplier $\lambda(s)$.
The primal-dual update of the policy network is similar to (12); the update of the multiplier network is given by
$\xi\leftarrow\xi+\eta_{\xi}\nabla_{\xi}\mathbb{E}_{\mathcal{D}}\lambda(s;\xi)\big{(}Q^{\pi}_{c}(s,\pi_{\theta}(s))-\epsilon\big{)}.$
(15)
Besides, SafeRL-Kit also sets different interval schedules $m_{\pi}$ (for
$\pi_{\theta}$ delay steps) and $m_{\lambda}$ (for $\lambda_{\xi}$ delay
steps) to stabilize the training process (Ma et al., 2021).
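A minimal PyTorch sketch of the multiplier network and the ascent step (15); the architecture details are illustrative, not the exact SafeRL-Kit configuration:

```python
import torch
import torch.nn as nn

class MultiplierNet(nn.Module):
    """State-dependent Lagrange multiplier lambda(s; xi) >= 0 via Softplus."""
    def __init__(self, state_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Softplus(),
        )

    def forward(self, s):
        return self.net(s)

def multiplier_step(mult_net, mult_opt, safe_critic, actor, batch_s, eps):
    # Gradient ascent on xi per Eq. (15): minimize the negated objective.
    with torch.no_grad():
        qc = safe_critic(batch_s, actor(batch_s))   # constraint estimate
    loss = -(mult_net(batch_s) * (qc - eps)).mean()
    mult_opt.zero_grad()
    loss.backward()
    mult_opt.step()
```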
### 5.6 Exact Penalty Optimization
Algorithm 1 State-wise Exact Penalty Optimization
0: deterministic policy network $\pi(s;\theta)$; critic networks
$\hat{Q}(s,a;\phi)$ and $\hat{Q}_{c}(s,a;\varphi)$
1: for t in $1,2,...$ do
2: $a_{t}=\pi(s_{t};\theta)+\epsilon,\ \ \epsilon\sim\mathcal{N}(0,\sigma)$.
3: Apply $a_{t}$ to the environment.
4: Store the transition $(s_{t},a_{t},s_{t+1},r_{t},c_{t},d_{t})$ in
$\mathcal{B}$.
5: Sample a mini-batch of $N$ transitions from $\mathcal{B}$.
6:
$\varphi\leftarrow{\arg\min}_{\varphi}\mathop{\mathbb{E}}_{\mathcal{B}}\big{[}\hat{Q}_{c}(s,a;\varphi)-\big{(}c+\gamma_{c}(1-d)\hat{Q}_{c}(s^{\prime},\pi(s^{\prime};\theta);\varphi)\big{)}\big{]}^{2}$.
7:
$\phi\leftarrow{\arg\min}_{\phi}\mathop{\mathbb{E}}_{\mathcal{B}}\big{[}\hat{Q}(s,a;\phi)-\big{(}r+\gamma(1-d)\hat{Q}(s^{\prime},\pi(s^{\prime};\theta);\phi)\big{)}\big{]}^{2}$.
8:
$\theta\leftarrow{\arg\min}_{\theta}\mathop{\mathbb{E}}_{\mathcal{B}}\big{[}-\hat{Q}(s,\pi(s;\theta);\phi)+\kappa\cdot\max\\{0,\hat{Q}_{c}(s,\pi(s;\theta);\varphi)-\delta\\}\big{]}$.
9: end for
In this paper, we propose a simple-yet-effective approach motivated by the
exact penalty method.
###### Theorem 5.1.
Consider the following two problems
$\displaystyle\min f(x)\ \ \mathrm{s.t.}\ g_{i}(x)\leq 0,i=1,2,...$ (P)
$\displaystyle\min f(x)+\kappa\cdot\sum_{i}\max\\{0,g_{i}(x)\\}$ (Q)
Suppose $\lambda^{*}$ is the optimal Lagrange multiplier vector of problem
(P). If the penalty factor $\kappa\geq||\lambda^{*}||_{\infty}$, problem (P)
and problem (Q) share the same optimal solution set.
###### Proof.
See our recent work (Zhang et al., 2022). ∎
The above theorem enables us to construct an equivalent unconstrained objective whose minimizing points also solve the original constrained problem.
Meanwhile, the unconstrained problem can tackle multiple constraints with
exactly one consistent penalty factor.
Thus, we simplify Lagrangian-based methods (i.e., Off-policy Lagrangian and
FAC) with this technique, considering that both the single-constraint optimization problem (9) and the multi-constraint optimization problem (13) are suited to the exact penalty method of Theorem 5.1. In this way, we can employ a single minimization over the primal variables with a fixed penalty term instead of a
cumbersome min-max optimization over both primal and dual variables.
Below we summarize the state-wise Exact Penalty Optimization (EPO) in Algorithm 1 as an alternative to FAC, since FAC provides stricter safety guarantees but suffers from oscillation and instability of the multiplier network. The off-policy surrogate objective of state-wise EPO reads
$\ell(\theta)=\mathbb{E}_{\mathcal{D}}\big{[}-Q^{\pi}(s,\pi_{\theta}(s))+\kappa\big{[}Q^{\pi}_{c}(s,\pi_{\theta}(s))-\epsilon\big{]}^{+}\big{]},$
(16)
where $\kappa$ is a fixed, sufficiently large hyper-parameter.
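A minimal PyTorch-style sketch of the surrogate objective (16) follows; the function name and the critic/policy call signatures are assumptions for illustration rather than SafeRL-Kit's actual interfaces.

```python
import torch

def epo_actor_loss(q_net, qc_net, policy, states, kappa: float, epsilon: float):
    """Surrogate objective (16): -Q(s, pi(s)) + kappa * [Q_c(s, pi(s)) - eps]^+.

    q_net / qc_net are the reward and cost critics; kappa is a fixed,
    sufficiently large penalty factor (the paper suggests kappa > 5).
    """
    actions = policy(states)
    reward_term = -q_net(states, actions)
    # hinge penalty: active only where the state-wise cost estimate exceeds eps
    penalty = kappa * torch.clamp(qc_net(states, actions) - epsilon, min=0.0)
    return (reward_term + penalty).mean()
```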
## 6 Empirical Analysis
We benchmark RL-based algorithms on the SpeedLimit task (Zhang et al., 2020)
and the MetaDrive platform (Li et al., 2021). Below, we give a comparative
evaluation based on the empirical results.
Table 3: Mean performance with 95% confidence intervals (normal approximation)
for safety-aware algorithms on benchmarks.
Environment | Metric | Safety Layer | Recovery RL | Lagrangian | FAC | EPO
---|---|---|---|---|---|---
SpeedLimit | Ep-Reward | $651.59\pm 10.70$ | $623.67\pm 99.58$ | $565.50\pm 69.29$ | $631.55\pm 34.92$ | $\bm{684.86\pm 3.19}$
 | Ep-Cost | $76.30\pm 9.07$ | $187.14\pm 96.50$ | $7.28\pm 3.11$ | $7.83\pm 5.23$ | $5.44\pm 0.53$
 | CostRate | $0.33\pm 0.01$ | $0.43\pm 0.06$ | $0.06\pm 0.01$ | $0.07\pm 0.01$ | $0.02\pm 0.01$
MetaDrive | SuccessRate | $0.73\pm 0.05$ | $0.78\pm 0.06$ | $0.74\pm 0.05$ | $0.68\pm 0.04$ | $0.73\pm 0.05$
 | Ep-Cost | $12.91\pm 1.10$ | $14.18\pm 1.92$ | $9.23\pm 4.88$ | $3.29\pm 0.50$ | $4.29\pm 0.71$
 | CostRate | $0.04\pm 0.001$ | $0.05\pm 0.001$ | $0.02\pm 0.01$ | $0.01\pm 0.01$ | $0.01\pm 0.01$
#### Unconstrained Reference.
We utilize TD3 (Fujimoto et al., 2018) as the unconstrained reference for
upper bounds of reward performance and constraint violations. For the
SpeedLimit task (500 max_episode_horizon), TD3 exceeds the velocity threshold
at almost every step with a near 100% cost rate. For the MetaDrive environment
(1000 max_episode_horizon), the agent receives sparse cost signals when it
collides with obstacles or is out of the road. Besides, the cost signals are
encoded into the reward function; otherwise, it would be too hard to learn
desirable behaviors (Li et al., 2021). Consequently, TD3 with reward shaping
(TD3-RS) does not accumulate costs as high as it does in the toy environment.
#### Overall Performance.
The mean performances are summarized in Table 3, and the learning curves over
five seeds are shown in Figures 4 and 5. We conclude that Safety Layer and
Recovery RL are less effective at reducing the cost return. They still incur
around 10% safety violations in SpeedLimit, and the safety improvement in
MetaDrive is also limited. As for Safety Layer, the main reasons are that the
linear approximation to the cost function brings about non-negligible errors,
and the single-step correction is myopic for future risks. As for Recovery RL,
the estimation error of $Q_{\text{risk}}$ is probably the biggest problem
affecting the recovery effects. By contrast, Off-policy Lagrangian and FAC
achieve significantly lower cumulative costs. However, the Lagrangian-based
methods inherit problems from primal-dual ascent. For one thing, tuning the
Lagrangian multiplier causes oscillations in the learning curves. For another,
these algorithms are susceptible to the initialization and learning rate of
the Lagrangian multipliers. We conclude that constrained optimization still
outperforms safety correction and recovery if the hyper-parameters are set
appropriately. Finally, we find that the newly proposed EPO is surprisingly
effective for learning safe AD. In SpeedLimit, it converges quickly to a high
plateau while maintaining an almost zero cost return. In MetaDrive, it remains
competitive with SOTA baselines. We attribute this to EPO being an equivalent
form of FAC that reduces the state-dependent Lagrangian multipliers to a
single fixed hyper-parameter; the resulting consistent loss function
stabilizes training compared with primal-dual optimization.
#### Sensitivity Analysis.
We study the sensitivity of Lagrangian-based methods and EPO to their
hyper-parameters in Figures 6 and 7, respectively. We find that
Lagrangian-based methods are susceptible to the learning rate of the
Lagrangian multiplier(s) in stochastic primal-dual optimization. First, the
oscillating $\lambda$ causes non-negligible deviations in the learning curves.
Besides, increasing $\eta_{\lambda}$ may degrade the performance dramatically.
The phenomenon is especially pronounced in FAC, which has a multiplier network
to predict the state-dependent $\lambda(s;\xi)$. Thus, we suggest
$\eta_{\lambda}\ll\eta_{\theta}$ in practice. As for EPO, we find that if the
penalty factor $\kappa$ is too small, the cost return may fail to converge.
Nevertheless, if $\kappa$ is sufficiently large, the learning curves are
robust and almost identical. Thus, we suggest $\kappa>5$ in experiments and a
grid search for better performance.
#### Sample Complexity.
Considering the difficulty of the above two tasks, we run $5\times 10^{5}$ and
$1\times 10^{6}$ interactive steps, respectively, to obtain admissible
results. Notably, previous on-policy codebases require significantly more
samples for convergence; for example, Ray et al. (2019) run $1\times 10^{7}$
interactive steps even for toy environments. Thus, SafeRL-Kit with off-policy
implementations is much more sample-efficient, emphasizing its applicability
to data-expensive AD-oriented tasks.
## 7 Further Discussion
The released SafeRL-Kit contains several SOTA off-policy safe RL methods
suited for safety-critical autonomous driving. We conduct a comparative
evaluation of those baselines over one representative toy environment and one
simulated AD platform, respectively. The Exact Penalty Optimization proposed
in this paper is easy to implement and surprisingly effective on AD-oriented
tasks. We see two directions for future work on SafeRL-Kit:
* •
The off-policy implementation of SafeRL-Kit can naturally leverage offline
data, including past logs and human demonstrations, which are commonly used
and highly effective for AD-oriented tasks.
* •
We benchmark only safe RL methods with vector input (ego-state, navigation
information, Lidar signals, etc.) in this paper; vision-based AD remains less
studied in the current version of SafeRL-Kit.
Figure 4: Learning curves on the SpeedLimit benchmark (panels: (a) eval
episode reward, (b) eval episode cost, (c) training cost rate). The x-axis is
the number of interactions with the simulator (500,000 total).
Figure 5: Learning curves on the MetaDrive benchmark (panels: (a) eval success
rate, (b) eval episode cost, (c) training cost rate). The x-axis is the number
of interactions with the simulator (1,000,000 total).
Figure 6: Sensitivity studies of Lagrangian-based methods (panels: (a)
Reward-Lag-SpeedLimit, (b) Cost-Lag-SpeedLimit, (c) Reward-FAC-MetaDrive, (d)
Cost-FAC-MetaDrive). The first two panels show reward and cost of Off-policy
Lagrangian on the SpeedLimit task with different $\lambda$ learning rates. The
last two panels show success rate and cost of Feasible Actor-Critic on the
MetaDrive benchmark with different $\lambda(s;\xi)$ learning rates.
Figure 7: Sensitivity studies of Exact Penalty Optimization (panels: (a) eval
episode reward, (b) eval episode cost, (c)-(d) training cost rate). The first
two panels show reward and cost of EPO on the SpeedLimit task with different
penalty factors $\kappa$. The last two panels show the success rate and cost
of EPO on the MetaDrive benchmark with different penalty factors $\kappa$.
## References
* Achiam et al. (2017) Achiam, J., Held, D., Tamar, A., and Abbeel, P. Constrained policy optimization. In _International Conference on Machine Learning_ , pp. 22–31. PMLR, 2017.
* Altman (1999) Altman, E. _Constrained Markov decision processes_ , volume 7. CRC Press, 1999.
* Amodei et al. (2016) Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., and Mané, D. Concrete problems in ai safety. _arXiv preprint arXiv:1606.06565_ , 2016.
* Chen et al. (2021) Chen, B., Francis, J., Nyberg, J. O. E., and Herbert, S. L. Safe autonomous racing via approximate reachability on ego-vision. _arXiv preprint arXiv:2110.07699_ , 2021.
* Chow et al. (2017) Chow, Y., Ghavamzadeh, M., Janson, L., and Pavone, M. Risk-constrained reinforcement learning with percentile risk criteria. _The Journal of Machine Learning Research_ , 18(1):6070–6120, 2017.
* Dalal et al. (2018) Dalal, G., Dvijotham, K., Vecerik, M., Hester, T., Paduraru, C., and Tassa, Y. Safe exploration in continuous action spaces. _arXiv preprint arXiv:1801.08757_ , 2018.
* Fujimoto et al. (2018) Fujimoto, S., Hoof, H., and Meger, D. Addressing function approximation error in actor-critic methods. In _International conference on machine learning_ , pp. 1587–1596. PMLR, 2018.
* Ha et al. (2020) Ha, S., Xu, P., Tan, Z., Levine, S., and Tan, J. Learning to walk in the real world with minimal human effort. _arXiv preprint arXiv:2002.08550_ , 2020.
* Herman et al. (2021) Herman, J., Francis, J., Ganju, S., Chen, B., Koul, A., Gupta, A., Skabelkin, A., Zhukov, I., Kumskoy, M., and Nyberg, E. Learn-to-race: A multimodal control environment for autonomous racing. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_ , pp. 9793–9802, 2021.
* Isele et al. (2018) Isele, D., Nakhaei, A., and Fujimura, K. Safe reinforcement learning on autonomous vehicles. In _2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)_ , pp. 1–6. IEEE, 2018.
* Kiran et al. (2021) Kiran, B. R., Sobh, I., Talpaert, V., Mannion, P., Al Sallab, A. A., Yogamani, S., and Pérez, P. Deep reinforcement learning for autonomous driving: A survey. _IEEE Transactions on Intelligent Transportation Systems_ , 2021.
* Li et al. (2021) Li, Q., Peng, Z., Xue, Z., Zhang, Q., and Zhou, B. Metadrive: Composing diverse driving scenarios for generalizable reinforcement learning. _arXiv preprint arXiv:2109.12674_ , 2021.
* Li et al. (2022) Li, Q., Peng, Z., and Zhou, B. Efficient learning of safe driving policy via human-ai copilot optimization. _arXiv preprint arXiv:2202.10341_ , 2022.
* Liu et al. (2021) Liu, Y., Ding, J., and Liu, X. Resource allocation method for network slicing using constrained reinforcement learning. In _2021 IFIP Networking Conference (IFIP Networking)_ , pp. 1–3. IEEE, 2021.
* Luenberger et al. (1984) Luenberger, D. G., Ye, Y., et al. _Linear and nonlinear programming_ , volume 2. Springer, 1984.
* Ma et al. (2021) Ma, H., Guan, Y., Li, S. E., Zhang, X., Zheng, S., and Chen, J. Feasible actor-critic: Constrained reinforcement learning for ensuring statewise safety. _arXiv preprint arXiv:2105.10682_ , 2021.
* Mnih et al. (2015) Mnih, V., Kavukcuoglu, K., Silver, D., Rusu, A. A., Veness, J., Bellemare, M. G., Graves, A., Riedmiller, M., Fidjeland, A. K., Ostrovski, G., et al. Human-level control through deep reinforcement learning. _nature_ , 518(7540):529–533, 2015.
* Paternain et al. (2022) Paternain, S., Calvo-Fullana, M., Chamon, L. F., and Ribeiro, A. Safe policies for reinforcement learning via primal-dual methods. _IEEE Transactions on Automatic Control_ , 2022.
* Ray et al. (2019) Ray, A., Achiam, J., and Amodei, D. Benchmarking safe exploration in deep reinforcement learning. _arXiv preprint arXiv:1910.01708_ , 7:1, 2019.
* Sutton & Barto (1998) Sutton, R. S. and Barto, A. G. _Reinforcement learning: An introduction_. MIT press, 1998.
* Thananjeyan et al. (2021) Thananjeyan, B., Balakrishna, A., Nair, S., Luo, M., Srinivasan, K., Hwang, M., Gonzalez, J. E., Ibarz, J., Finn, C., and Goldberg, K. Recovery rl: Safe reinforcement learning with learned recovery zones. _IEEE Robotics and Automation Letters_ , 6(3):4915–4922, 2021.
* Vinyals et al. (2019) Vinyals, O., Babuschkin, I., Czarnecki, W. M., Mathieu, M., Dudzik, A., Chung, J., Choi, D. H., Powell, R., Ewalds, T., Georgiev, P., et al. Grandmaster level in starcraft ii using multi-agent reinforcement learning. _Nature_ , 575(7782):350–354, 2019.
* Wen et al. (2020) Wen, L., Duan, J., Li, S. E., Xu, S., and Peng, H. Safe reinforcement learning for autonomous vehicles through parallel constrained policy optimization. In _2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)_ , pp. 1–7. IEEE, 2020.
* Yang et al. (2022) Yang, T.-Y., Zhang, T., Luu, L., Ha, S., Tan, J., and Yu, W. Safe reinforcement learning for legged locomotion. _arXiv preprint arXiv:2203.02638_ , 2022.
* Yuan et al. (2021) Yuan, Z., Hall, A. W., Zhou, S., Brunke, L., Greeff, M., Panerati, J., and Schoellig, A. P. safe-control-gym: a unified benchmark suite for safe learning-based control and reinforcement learning. _arXiv preprint arXiv:2109.06325_ , 2021.
* Zhang et al. (2022) Zhang, L., Shen, L., Yang, L., Chen, S.-Y., Yuan, B., Wang, X., and Tao, D. Penalized proximal policy optimization for safe reinforcement learning. _arXiv preprint arXiv:2205.11814_ , 2022.
* Zhang et al. (2020) Zhang, Y., Vuong, Q., and Ross, K. W. First order constrained optimization in policy space. _arXiv preprint arXiv:2002.06506_ , 2020.
* Zhao et al. (2021) Zhao, W., He, T., and Liu, C. Model-free safe control for zero-violation reinforcement learning. In _5th Annual Conference on Robot Learning_ , 2021.
# Single-shot Hyper-parameter Optimization for Federated Learning: A General Algorithm & Analysis
Yi Zhou Parikshit Ram Theodoros Salonidis
Nathalie Baracaldo Horst Samulowitz Heiko Ludwig
IBM Research
###### Abstract
We address the relatively unexplored problem of hyper-parameter optimization
(HPO) for federated learning (FL-HPO). We introduce Federated Loss SuRface
Aggregation (FLoRA), a general FL-HPO solution framework that can address use
cases with tabular data and any machine learning (ML) model, including
gradient boosting training algorithms, thereby further expanding the scope of
FL-HPO. FLoRA enables single-shot FL-HPO: identifying a single set of good hyper-
parameters that are subsequently used in a single FL training. Thus, it
enables FL-HPO solutions with minimal additional communication overhead
compared to FL training without HPO. We theoretically characterize the
optimality gap of FLoRA, which explicitly accounts for the heterogeneous non-
iid nature of the parties’ local data distributions, a dominant characteristic
of FL systems. Our empirical evaluation of FLoRA for multiple ML algorithms on
seven OpenML datasets demonstrates significant model accuracy improvements
over the considered baseline, and robustness to an increasing number of
parties involved in FL-HPO training.
## 1 Introduction
Traditional machine learning (ML) approaches require training data to be
gathered at a central location where the learning algorithm runs. In
real-world scenarios, however, training data is often subject to privacy or
regulatory constraints restricting the way data can be shared, used and
transmitted. Examples of such regulations include the European General Data
Protection Regulation (GDPR), California Consumer Privacy Act (CCPA),
Cybersecurity Law of China (CLA) and HIPAA, among others. Federated learning
(FL), first proposed in McMahan et al. (2017b), has recently become a popular
approach to address privacy concerns by allowing collaborative training of ML
models among multiple parties where each party can keep its data private.
##### FL-HPO problem.
Despite the privacy protection FL brings, there are many open problems in the
FL domain (Kairouz et al., 2019; Khodak et al., 2021), one of which is
hyper-parameter optimization for FL. Existing FL systems require a user (or
all participating parties) to pre-set (agree on) multiple hyper-parameters
(HPs) (i) for the model being trained (such as the number of layers and batch
size for neural networks, or tree depth and number of trees in tree
ensembles), and (ii) for the aggregator (if such hyper-parameters exist).
Hyper-parameter optimization (HPO) for FL is important because the choice of
HPs can have a dramatic impact on performance (McMahan et al., 2017b).
While HPO has been widely studied in the centralized ML setting, it comes with
unique challenges in the FL setting. First, existing HPO techniques for
centralized training often make use of the entire dataset, which is not
available in FL. Secondly, they train a vast variety of models for a large
number of HP configurations which would be prohibitively expensive in terms of
communication and training time in FL settings. Thirdly, one important
challenge that has not been adequately explored in FL literature is support
for tabular data, which are widely used in enterprise settings (Ludwig et al.,
2020). One of the best models for this setting is based on gradient boosting
tree algorithms (Friedman, 2001) which are different from the stochastic
gradient descent algorithm used for neural networks. Recently, a few
approaches have been proposed for FL-HPO, however they focus on handling HPO
using personalization techniques (Khodak et al., 2021) and neural networks
(Khodak et al., 2020). To the best of our knowledge, there is no HPO approach
for FL systems to train non-neural network models, such as XGBoost (Chen and
Guestrin, 2016) that is particularly common in the enterprise setting.
##### Scope.
In this paper, we address the aforementioned challenges of FL-HPO. We focus on
the problem where the model HPs are shared across all parties and we seek a
set of HPs and train a single model that is eventually used by all parties for
testing/deployment. Moreover, we impose three further requirements that make
the problem more challenging: (C1) we do not make any assumption that two
models with different HPs can perform some form of “weight-sharing” (which is
a common technique used in various HPO and neural architecture search (NAS)
schemes for neural networks to reduce the computational overhead of HPO and
NAS), allowing our solution to be applied beyond neural networks (Khodak et
al., 2020). (C2) we seek to perform “single-shot” FL-HPO, where we have
limited resources (in the form of computation and communication overhead)
which allow training only a single model via federated learning (that is, a
single HP configuration), and (C3) we do not assume that parties have
independent and identically distributed (IID) data distributions.
##### Contributions.
Given the above FL-HPO problem setting, we make the following contributions:
* •
(§3) We present a novel framework Federated Loss SuRface Aggregation (FLoRA)
that leverages meta-learning techniques to utilize local and asynchronous HPO
on each party to perform single-shot HPO for the global FL-HPO problem.
* •
(§4) We provide theoretical guarantees for the set of HPs selected by FLoRA
covering both the IID and non-IID cases. To the best of our knowledge, this is
the first rigorous theoretical analysis of the FL-HPO problem, and also the
first optimality gap constructed in terms of the estimated loss given a target
distribution.
* •
(§5) We evaluate FLoRA on the FL-HPO of Gradient Boosted Decision Trees
(GBDTs), Support Vector Machines (SVMs) and Multi-layered Perceptrons (MLPs)
on seven classification datasets from OpenML (Vanschoren et al., 2013),
highlighting (i) its performance relative to the baseline, (ii) the effect of
various choices in this scheme, and (iii) the effect of the number of parties
on the performance.
## 2 Related work
##### Performance optimization of FL systems.
One of the main challenges in FL is achieving high accuracy and low
communication overhead. FedAvg (McMahan et al., 2017a) is a predominant
algorithm used for training in FL and several optimization schemes build on
it. It is executed in multiple global rounds. At each round, the clients
perform stochastic gradient descent (SGD) updates on their parameters based on
their local objective functions. They subsequently send their updates to the
server, which averages them and transmits their mean back to the clients.
Several approaches have been devised for optimizing the communication
performance of FL systems. Initially, communication optimizations included
performing multiple SGD local iterations at the clients and randomly selecting
a small subset of the clients to compute and send updates to the server
(McMahan et al., 2017a). Subsequently, compression techniques were used to
minimize the size of model updates to the server. It has been shown that the
accuracy and communication performance of these techniques depend highly on
their HPs (McMahan et al., 2017a).
##### FL-HPO approaches.
Recent optimization approaches adapt HPs such as the local learning rate at
each client (Koskela and Honkela, 2019; Mostafa, 2019; Reddi et al., 2020),
the number of local SGD iterations (which affect the frequency of server
updates) (Wang et al., 2019). In Dai et al. (2020, 2021), the authors address
Federated Bayesian Optimization. Although it uses HPO with multiple HPs, the
problem setup is quite different from federated learning: they focus on a
single party using information from other parties to accelerate its own
Bayesian Optimization, instead of building a model for all parties. Federated
Network Architecture Search (FNAS) approaches search for architectural HPs of
deep learning CNN models by running locally NAS algorithms and then
aggregating the NAS architecture weights and model weights using FedAvg (He et
al., 2020; Garg et al., 2020; Xu et al., 2020). These approaches have shown
empirical gains but lack theoretical analysis. Inspired by the NAS technique
of weight-sharing, Khodak et al. (2020, 2021) proposed FedEx, a FL-HPO
framework to accelerate a general HPO procedure, i.e., the successive halving
algorithm (SHA), for many SGD-based FL algorithms. FedEx focuses on building
personalized models for parties by tuning local HPs of the parties. They
provide a theoretical analysis for a special case of tuning a single HP
(the learning rate) in a convex optimization setting.
Our framework improves on the above approaches in several ways. 1) It is more
general, as it can tune multiple HPs and is applicable to non-SGD training
settings such as gradient boosting trees. This is achieved by treating FL-HPO
as a black-box HPO problem, which has been addressed in centralized HPO
literature using grid search, random search (Bergstra and Bengio, 2012) and
Bayesian Optimization approaches (Shahriari et al., 2016). The key challenge
is the requirement to perform computationally intensive evaluations on a large
number of HPO configurations, where each evaluation involves training a model
and scoring it on a validation dataset. In the distributed FL setting this
problem is exacerbated because validation sets are local to the parties and
each FL training/scoring evaluation is communication intensive. Therefore a
brute force application of centralized black-box HPO approaches that select
hyper-parameters in an outer loop and proceed with FL training evaluations is
not feasible. 2) It yields minimal HPO communication overhead. This is
achieved by building a loss surface from local asynchronous HPO at the parties
that yields a single optimized HP configuration used to train a global model
with a single FL training. 3) It is the first to theoretically characterize
the optimality gap in an FL-HPO setting, for the case we focus on in this
paper (creating a global model by tuning multiple global HPs).
## 3 Methodology
In the centralized ML setting, we would consider a model class $\mathcal{M}$
and its corresponding learning algorithm $\mathcal{A}$ parameterized
collectively with HPs $\boldsymbol{\theta}\in\boldsymbol{\Theta}$, and given a
training set $D$, we can learn a single model
$\mathcal{A}(\mathcal{M},\boldsymbol{\theta},D)\to m\in\mathcal{M}$. Given
some predictive loss $\mathcal{L}(m,D^{\prime})$ of any model $m$ scored on
some holdout set $D^{\prime}$, the centralized HPO problem can be stated as
$\min\nolimits_{\boldsymbol{\theta}\in\boldsymbol{\Theta}}\mathcal{L}(\mathcal{A}(\mathcal{M},\boldsymbol{\theta},D),D^{\prime}).$
(3.1)
In the most general FL setting, we have $p$ parties $P_{1},\dots,P_{p}$ each
with their private local training dataset $D_{i},i\in[p]=\\{1,2,\ldots,p\\}$.
Let $D=\cup_{i=1}^{p}D_{i}$ denote the aggregated training dataset and
$\overline{D}=\\{D_{i}\\}_{i\in[p]}$ denote the set of per-party datasets.
Each model class (and corresponding learning algorithm) is parameterized by
global HPs $\boldsymbol{\theta}_{G}\in\boldsymbol{\Theta}_{G}$ shared by all
parties and per-party local HPs
$\boldsymbol{\theta}_{L}^{(i)}\in\boldsymbol{\Theta}_{L},i\in[p]$ with
$\boldsymbol{\Theta}=\boldsymbol{\Theta}_{G}\times\boldsymbol{\Theta}_{L}$. FL
systems usually include an aggregator with its own set of HPs
$\boldsymbol{\phi}\in\boldsymbol{\Phi}$. Finally, we would have a FL algorithm
$\mathcal{F}\left(\mathcal{M},\boldsymbol{\phi},\boldsymbol{\theta}_{G},\\{\boldsymbol{\theta}_{L}^{(i)}\\}_{i\in[p]},\mathcal{A},\overline{D}\right)\to
m\in\mathcal{M},$ (3.2)
which takes as input all the relevant HPs and per-party datasets and generates
a model. In this case, the FL-HPO problem can be stated in the two following
ways depending on the desired goals: (i) Ideally, for a global holdout dataset
$D^{\prime}$ (a.k.a validation set, possibly from the same distribution as the
aggregated dataset $D$), the target problem is:
$\min_{\boldsymbol{\phi}\in\boldsymbol{\Phi},\boldsymbol{\theta}_{G}\in\boldsymbol{\Theta}_{G},\boldsymbol{\theta}_{L}^{(i)}\in\boldsymbol{\Theta}_{L},i\in[p]}\mathcal{L}\left(\mathcal{F}\left(\mathcal{M},\boldsymbol{\phi},\boldsymbol{\theta}_{G},\\{\boldsymbol{\theta}_{L}^{(i)}\\}_{i\in[p]},\mathcal{A},\overline{D}\right),D^{\prime}\right).$
(3.3)
(ii) An alternative target problem would involve per-party holdout datasets
$D_{i}^{\prime},i\in[p]$ as follows:
$\min_{\boldsymbol{\phi}\in\boldsymbol{\Phi},\boldsymbol{\theta}_{G}\in\boldsymbol{\Theta}_{G},\boldsymbol{\theta}_{L}^{(i)}\in\boldsymbol{\Theta}_{L},i\in[p]}\mathsf{Agg}\left(\left\\{\mathcal{L}\left(\mathcal{F}\left(\mathcal{M},\boldsymbol{\phi},\boldsymbol{\theta}_{G},\\{\boldsymbol{\theta}_{L}^{(i)}\\}_{i\in[p]},\mathcal{A},\overline{D}\right),D^{\prime}_{i}\right),i\in[p]\right\\}\right),$
(3.4)
where $\mathsf{Agg}:\mathbb{R}^{p}\to\mathbb{R}$ is some aggregation function
(such as average or maximum) that scalarizes the $p$ per-party predictive
losses.
Contrasting problem (3.1) with problems (3.3) & (3.4), we can see that FL-HPO
is significantly more complicated than the centralized HPO problem. In the
ensuing presentation, we focus on problem (3.3), although our proposed
single-shot FL-HPO scheme can also be applied and evaluated for problem (3.4).
We simplify
the FL-HPO problem in the following ways: (i) we assume that there is no
personalization so there are no per-party local HPs
$\boldsymbol{\theta}_{L}^{(i)},i\in[p]$, (ii) we only focus on the model class
HPs $\boldsymbol{\theta}_{G}$, deferring HPO for aggregator HPs
$\boldsymbol{\phi}$ for future work, and (iii) we assume there is a global
holdout/validation set $D^{\prime}$ which is only used to evaluate the final
global model’s performance but cannot be accessed during the HPO process.
Hence, for a fixed aggregator HP $\boldsymbol{\phi}$, the problem we will
study is:
$\min\nolimits_{\boldsymbol{\theta}_{G}\in\boldsymbol{\Theta}_{G}}\mathcal{L}\left(\mathcal{F}\left(\mathcal{M},\boldsymbol{\phi},\boldsymbol{\theta}_{G},\mathcal{A},\overline{D}\right),D^{\prime}\right).$
(3.5)
This problem appears similar to the centralized HPO problem (3.1). However,
note that the main challenges in (3.5) are (i) the need for a federated
training run for each set of HPs $\boldsymbol{\theta}_{G}$, and (ii) the need
to evaluate the trained model on the global validation set $D^{\prime}$ (which
is usually not available in the usual FL-HPO setting). Hence it is not practical
(from a communication overhead and functional perspective) to apply existing
off-the-shelf HPO schemes to problem (3.5). In the subsequent discussion, for
simplicity purposes, we will use $\boldsymbol{\theta}$ to denote the global
HPs, dropping the “$G$” subscript.
### 3.1 Leveraging local HPOs
Algorithm 1 Single-shot FL-HPO with Federated Loss Surface Aggregation (FLoRA)
1:
Input: $\boldsymbol{\Theta},\mathcal{M},\mathcal{A},\mathcal{F},\\{(D_{i},D_{i}^{\prime})\\}_{i\in[p]},T$
2: for each party $P_{i},i\in[p]$ do
3: Run HPO to generate $T$ (HP, loss) pairs
$E^{(i)}=\left\\{(\boldsymbol{\theta}_{t}^{(i)},\mathcal{L}_{t}^{(i)}),t\in[T]\right\\},$
(3.6) where
$\boldsymbol{\theta}_{t}^{(i)}\in\boldsymbol{\Theta},\mathcal{L}_{t}^{(i)}:=\mathcal{L}(\mathcal{A}(\mathcal{M},\boldsymbol{\theta}_{t}^{(i)},D_{i}),D_{i}^{\prime})$.
4: end for
5: Collect all $E=\\{E^{(i)},i\in[p]\\}$ in aggregator
6: Generate a unified loss surface
$\widehat{\ell}:\boldsymbol{\Theta}\to\mathbb{R}$ using $E$
7: Select best HP candidate
$\displaystyle\widehat{\boldsymbol{\theta}}^{\star}\leftarrow\arg\min\limits_{\boldsymbol{\theta}\in\boldsymbol{\Theta}}\widehat{\ell}(\boldsymbol{\theta}).$
(3.7)
8: Invoke federated training
$m\leftarrow\mathcal{F}(\mathcal{M},\widehat{\boldsymbol{\theta}}^{\star},\mathcal{A},\overline{D})$
9: Output: FL model $m$.
While it is impractical to apply off-the-shelf HPO solvers (such as Bayesian
Optimization (BO) (Shahriari et al., 2016), Hyperopt (Bergstra et al., 2011),
SMAC (Hutter et al., 2011), and the like), we wish to understand how we can
leverage local and asynchronous HPOs in each of the parties. We begin with a
simple but intuitive hypothesis underlying various meta-learning schemes for
HPO (Vanschoren, 2018; Wistuba et al., 2018): if a HP configuration
$\boldsymbol{\theta}$ has good performance for all parties independently, then
$\boldsymbol{\theta}$ is a strong candidate for federated training.
With this hypothesis, we present our proposed algorithm FLoRA in Algorithm 1.
In this scheme, we allow each party to perform HPO locally and asynchronously
with some adaptive HPO scheme such as BO (line 3). Then, at each party
$i\in[p]$, we collect all the attempted $T$ HPs
$\boldsymbol{\theta}_{t}^{(i)},t\in[T]=\\{1,2,\ldots,T\\}$ and their
corresponding predictive loss $\mathcal{L}_{t}^{(i)}$ into a set $E^{(i)}$
(line 3, equation (3.6)). Then these per-party sets of (HP, loss) pairs
$E^{(i)}$ are collected at the aggregator (line 5). This operation has at most
$O(pT)$ communication overhead (note that the number of HPs is usually much
smaller than the number of columns or rows in the per-party datasets). These
sets are then used to generate an aggregated loss surface
$\widehat{\ell}:\boldsymbol{\Theta}\to\mathbb{R}$ (line 6) which will then be
used to make the final single-shot HP recommendation
$\widehat{\boldsymbol{\theta}}^{\star}\in\boldsymbol{\Theta}$ (line 7) for the
federated training to create the final model $m\in\mathcal{M}$ (line 8). We
will discuss the generation of the aggregated loss surface in detail in §3.2.
Before that, we briefly want to discuss the motivation behind some of our
choices in Algorithm 1.
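The control flow of Algorithm 1 can be summarized in a short Python sketch. The helpers `local_hpo`, `fit_loss_surface`, and `federated_train` are hypothetical stand-ins for a party-local HPO routine (e.g., a BO wrapper), the loss-surface constructions of §3.2, and the FL training backend, respectively.

```python
def flora(parties, theta_space_sampler, local_hpo, fit_loss_surface,
          federated_train, T: int, n_candidates: int = 10_000):
    """Single-shot FL-HPO (Algorithm 1), assuming hypothetical helpers:
    - local_hpo(party, T) -> list of (theta, loss) pairs from party-local HPO
    - fit_loss_surface(E) -> callable ell_hat: theta -> estimated loss
    - federated_train(theta) -> final FL model
    """
    # lines 2-4: asynchronous local HPO at each party
    E = [pair for party in parties for pair in local_hpo(party, T)]
    # line 6: unified loss surface from all collected (HP, loss) pairs
    ell_hat = fit_loss_surface(E)
    # line 7: minimize the surface over a candidate pool (stand-in for argmin)
    candidates = [theta_space_sampler() for _ in range(n_candidates)]
    theta_star = min(candidates, key=ell_hat)
    # line 8: one federated training run with the selected HPs
    return federated_train(theta_star)
```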
##### Why adaptive HPO?
The reason to use adaptive HPO schemes instead of non-adaptive schemes such as
random search or grid search is that this allows us to efficiently approximate
the local loss surface more accurately (and with more certainty) in regions of
the HP space where the local performance is favorable instead of trying to
approximate the loss surface well over the complete HP space. This has
advantages both in terms of computational efficiency and loss surface
approximation.
##### Why asynchronous HPO?
Each party executes HPO asynchronously, without coordination with HPO results
from other parties or with the aggregator. This is in line with our objective
to minimize communication overhead. Although there could be strategies that
involve coordination between parties, they could involve many rounds of
communication. Our experimental results show that this approach is effective
for the datasets we evaluated.
### 3.2 Loss surface aggregation
Given the sets of (HP, loss) pairs
$E^{(i)}=\\{(\boldsymbol{\theta}_{t}^{(i)},\mathcal{L}_{t}^{(i)}),t\in[T]\\},i\in[p],$
at the aggregator, we wish to construct a loss surface
$\widehat{\ell}:\boldsymbol{\Theta}\to\mathbb{R}$ that best emulates the
(relative) performance loss $\widehat{\ell}(\boldsymbol{\theta})$ we would
observe when training the model on $\overline{D}$. Based on our hypothesis, we
want the loss surface to be such that it would have a relatively low
$\widehat{\ell}(\boldsymbol{\theta})$ if $\boldsymbol{\theta}$ has a low loss
for all parties simultaneously. However, because of the asynchronous and
adaptive nature of the local HPOs, for any HP
$\boldsymbol{\theta}\in\boldsymbol{\Theta}$, we would not have the
corresponding losses from all the parties. For that reason, we will model the
loss surfaces using regressors that try to map any HP to their corresponding
loss. In the following, we present four ways of constructing such loss
surfaces:
##### Single global model (SGM).
We merge all the sets $E=\cup_{i\in[p]}E^{(i)}$ and use it as a training set
for a regressor $f:\boldsymbol{\Theta}\to\mathbb{R}$, which considers the HPs
$\boldsymbol{\theta}\in\boldsymbol{\Theta}$ as the covariates and the
corresponding loss as the dependent variable. For example, we can train a
random forest regressor (Breiman, 2001) on this training set $E$. Then we can
define the loss surface
$\widehat{\ell}(\boldsymbol{\theta}):=f(\boldsymbol{\theta})$. While this loss
surface is simple to obtain, it may not handle non-IID party data
distributions well: it is actually overly optimistic – under the assumption
that every party generates unique HPs during the local HPO, this single global
loss surface would assign a low loss to any HP $\boldsymbol{\theta}$ that has
a low loss at any one of the parties. This implies that this loss surface may
end up recommending HPs that have low loss at just one of the parties, but not
necessarily at all parties.
##### Single global model with uncertainty (SGM+U).
Given the merged set $E=\cup_{i\in[p]}E^{(i)}$, we can train a regressor that
provides uncertainty quantification around its predictions (such as Gaussian
Process Regressor (Williams and Rasmussen, 2006)) as
$f:\boldsymbol{\Theta}\to\mathbb{R},u:\boldsymbol{\Theta}\to\mathbb{R}_{+}$,
where $f(\boldsymbol{\theta})$ is the mean prediction of the model at
$\boldsymbol{\theta}\in\boldsymbol{\Theta}$ while $u(\boldsymbol{\theta})$
quantifies the uncertainty around this prediction $f(\boldsymbol{\theta})$. We
define the loss surface as
$\widehat{\ell}(\boldsymbol{\theta}):=f(\boldsymbol{\theta})+\alpha\cdot
u(\boldsymbol{\theta})$ for some $\alpha>0$. This loss surface does prefer HPs
that have a low loss even in just one of the parties, but it penalizes a HP if
the model estimates high uncertainty around this HP. Usually, a high
uncertainty around a HP would be either because the training set $E$ does not
have many samples around this HP (implying that many parties did not view the
region containing this HP as one with low loss), or because there are multiple
samples in the region around this HP but parties do not collectively agree
that this is a promising region for HPs. Hence this makes SGM+U more desirable
than SGM, giving us a loss surface that estimates low loss for HPs that are
simultaneously thought to be promising to multiple parties.
##### Maximum of per-party local models (MPLM).
Instead of a single global model on the merged set $E$, we can instead train a
regressor $f^{(i)}:\boldsymbol{\Theta}\to\mathbb{R},i\in[p]$ with each of the
per-party set $E^{(i)}$. Given this, we can construct the loss surface as
$\widehat{\ell}(\boldsymbol{\theta}):=\max_{i\in[p]}f^{(i)}(\boldsymbol{\theta})$.
This can be seen as a much more pessimistic loss surface, assigning a low loss
to a HP only if it has a low loss estimate across all parties.
##### Average of per-party local models (APLM).
A less pessimistic version of MPLM would be to construct the loss surface as
the average of the per-party regressors $f^{(i)},i\in[p]$ instead of the
maximum, defined as
$\widehat{\ell}(\boldsymbol{\theta}):=\nicefrac{{1}}{{p}}\sum_{i=1}^{p}f^{(i)}(\boldsymbol{\theta})$.
This is also less optimistic than SGM since it will assign a low loss for a HP
only if its average across all per-party regressors is low, which implies that
all parties observed a relatively low loss around this HP.
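The four constructions can be prototyped in a few lines of scikit-learn. This is a minimal sketch that assumes the HP vectors have already been encoded as numeric arrays (categorical HPs would need an encoding step), and the helper names are ours.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor

def sgm_surface(E):
    """SGM: one regressor trained on the merged (HP, loss) pairs."""
    X = np.array([theta for theta, _ in E]); y = np.array([l for _, l in E])
    f = RandomForestRegressor().fit(X, y)
    return lambda theta: f.predict(np.atleast_2d(theta))[0]

def sgm_u_surface(E, alpha=1.0):
    """SGM+U: GP mean prediction plus alpha times the predictive std."""
    X = np.array([theta for theta, _ in E]); y = np.array([l for _, l in E])
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    def ell_hat(theta):
        mu, std = gp.predict(np.atleast_2d(theta), return_std=True)
        return mu[0] + alpha * std[0]
    return ell_hat

def per_party_surface(E_parties, aggregate=np.mean):
    """MPLM (aggregate=np.max) / APLM (aggregate=np.mean): per-party regressors."""
    models = []
    for E_i in E_parties:
        X = np.array([theta for theta, _ in E_i])
        y = np.array([l for _, l in E_i])
        models.append(RandomForestRegressor().fit(X, y))
    return lambda theta: aggregate(
        [m.predict(np.atleast_2d(theta))[0] for m in models])
```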
Intuitively, we believe that loss surfaces such as SGM+U or APLM should be the
most promising, while the extremely optimistic SGM and the extremely
pessimistic MPLM should be relatively less promising, with MPLM superior to
SGM. In the following section, we theoretically quantify the performance
guarantees for MPLM and APLM, and in §5, we evaluate all of these loss
surfaces empirically in the single-shot FL-HPO setting.
## 4 Optimality analysis
In this section, we provide a rigorous analysis of the sub-optimality of the
HP selected by FLoRA. Let us first define some notation we will use throughout
this section.
###### Definition 4.1 (Loss functions).
For a given set of parties’ data $\overline{D}=\\{D_{i}\\}_{i\in[p]}$ and any
$\boldsymbol{\theta}\in\boldsymbol{\Theta}$, the true target loss (any
predictive performance metric, such as, the training loss) can be expressed
as:
$\ell(\boldsymbol{\theta},\mathcal{D}):=\underbrace{\mathbb{E}_{(x,y)\sim\mathcal{D}}}_{\text{test
perf. of trained
model}}\mathcal{L}(\underbrace{\mathcal{A}(\boldsymbol{\theta},\overline{D})}_{\text{trained
model}},(x,y)).$ (4.1)
Here $\mathcal{D}$ is the data distribution of the test points. Let
$\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})$ be an estimate of the loss
defined in (4.1) given some validation (holdout) set $D^{\prime}$ sampled from
$\mathcal{D}$, which is the model performance metric during evaluation and/or
inference time.
We assume the parties’ training sets are collected before the federated
learning, such that $\overline{D}$ is fixed and unchanged during the HPO and
FL processes; in other words, we do not consider the streaming-data setting.
Now we are ready to provide a more general definition of the unified loss
surface constructed by FLoRA as follows:
###### Definition 4.2 (Unified loss surface).
Given the local loss surfaces
$\widehat{\ell}_{i}:\boldsymbol{\Theta}\to\mathbb{R}$ for each party $i\in[p]$
generated by $T$ (HP, loss) pairs
$\\{(\boldsymbol{\theta}^{(i)}_{t},\mathcal{L}_{t}^{(i)})\\}_{t\in[T]}$, we
can define the global loss surface
$\widehat{\ell}:\boldsymbol{\Theta}\to\mathbb{R}$ as
$\widehat{\ell}(\boldsymbol{\theta})=\textstyle{\sum}_{i=1}^{p}\alpha_{i}(\boldsymbol{\theta})\cdot\widehat{\ell}_{i}(\boldsymbol{\theta}),\alpha_{i}(\boldsymbol{\theta})\in[0,1],\textstyle{\sum}_{i=1}^{p}\alpha_{i}(\boldsymbol{\theta})=1.$
(4.2)
In particular,
* i)
If $\alpha_{i}(\boldsymbol{\theta})=\nicefrac{{1}}{{p}},\ \forall
i\in[p],\boldsymbol{\theta}\in\boldsymbol{\Theta}$, then this reduces to APLM
loss surface.
* ii)
If
$\alpha_{i}(\boldsymbol{\theta})=\mathbb{I}\left(\widehat{\ell}_{i}(\boldsymbol{\theta})=\max_{j\in[p]}\widehat{\ell}_{j}(\boldsymbol{\theta})\right)$,
then this reduces to the MPLM loss surface (assuming all
$\widehat{\ell}_{j}(\boldsymbol{\theta})$s are unique).
We formalize the distance metric used in our analysis to evaluate the distance
between two given data distributions.
###### Definition 4.3 (1-Wasserstein distance (Villani, 2003)).
For two distributions $\mu,\nu$ with bounded support, the 1-Wasserstein
distance is defined as
$\mathcal{W}_{1}(\mu,\nu):=\sup_{f\in\mathsf{F}_{1}}\mathbb{E}_{x\sim\mu}f(x)-\mathbb{E}_{x\sim\nu}f(x),$
(4.3)
where $\mathsf{F}_{1}=\\{f:f\text{ is continuous},\textsf{Lipschitz}(f)\leq
1\\}$.
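For one-dimensional empirical distributions, the 1-Wasserstein distance in (4.3) can be computed directly with SciPy. A small sanity check (our own example) uses two shifted Gaussians, for which the true distance equals the mean shift:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
mu_samples = rng.normal(loc=0.0, scale=1.0, size=5_000)  # samples from mu
nu_samples = rng.normal(loc=0.5, scale=1.0, size=5_000)  # samples from nu

# Empirical 1-Wasserstein distance; for two Gaussians with equal variance
# the true value is the mean shift, here 0.5.
print(wasserstein_distance(mu_samples, nu_samples))  # approx. 0.5
```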
To facilitate our analysis later, we make the following Lipschitzness
assumptions regarding the loss function $\tilde{\ell}$ and also the per-party
loss surface $\widehat{\ell}_{i}$.
###### Assumption 4.4 (Lipschitzness).
For a fixed data distribution $\mathcal{D}$ and
$\forall\boldsymbol{\theta},\boldsymbol{\theta}^{\prime}\in\bar{\boldsymbol{\Theta}}\subset\boldsymbol{\Theta}$,
we have
$\displaystyle|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\prime},\mathcal{D})|$
$\displaystyle\leq\tilde{L}(\mathcal{D})\cdot
d(\boldsymbol{\theta},\boldsymbol{\theta}^{\prime}),$ (4.4)
$\displaystyle|\widehat{\ell}_{i}(\boldsymbol{\theta})-\widehat{\ell}_{i}(\boldsymbol{\theta}^{\prime})|$
$\displaystyle\leq\widehat{L}_{i}\cdot
d(\boldsymbol{\theta},\boldsymbol{\theta}^{\prime}),$ (4.5)
where $d(\cdot,\cdot)$ is a certain distance metric defined over the hyper-
parameter search space, see Appendix A.1 for one definition. For a fixed set
of hyper-parameters
$\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}\subset\boldsymbol{\Theta}$
and some data distributions $\mathcal{D}$ and $\mathcal{D}^{\prime}$, we have
$|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}^{\prime})|\leq\tilde{\beta}(\boldsymbol{\theta})\cdot\mathcal{W}_{1}(\mathcal{D},\mathcal{D}^{\prime}).$
(4.6)
##### Remark.
Note that we explicitly use a
$\bar{\boldsymbol{\Theta}}\subset\boldsymbol{\Theta}$ to highlight that we
need Lipschitzness only in particular parts of the HP space. In fact, our
analysis only requires Lipschitzness at $\widehat{\boldsymbol{\theta}}^{\star}$
and the optimal HP $\boldsymbol{\theta}^{\star}$, and over a HP subspace
containing these two HPs together with the HPs tried in the local HPO runs,
i.e., the $\boldsymbol{\theta}_{t}^{(i)}$; this subspace usually does not
cover the entire HP search space. Moreover, the above Lipschitzness assumption
w.r.t. a general HP space, which could be a combination of continuous and
discrete variables, may be strong. We also show in Appendices A.2 and B.4 that
it can be relaxed to a milder assumption based on the modulus of continuity
without significantly affecting our main results. For simplicity, we can
always assume that $\tilde{L}(\mathcal{D})\leq\tilde{L},\ \forall\mathcal{D}$
and $\tilde{\beta}(\boldsymbol{\theta})\leq\tilde{\beta},\
\forall\boldsymbol{\theta}$.
Recall that the HP $\widehat{\boldsymbol{\theta}}^{\star}$ selected by FLoRA
is defined as in (3.7). We then define the optimal HP given by the estimated
loss function for a desired data distribution $\mathcal{D}$ we want to learn
as,
$\boldsymbol{\theta}^{\star}\in\arg\min_{\boldsymbol{\theta}\in\boldsymbol{\Theta}}\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}).$
(4.7)
We are interested in providing a bound for the following optimality gap:
$\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\star},\mathcal{D}).$
(4.8)
Note that this bound is the optimality gap for the output of FLoRA in terms of
the estimated loss $\tilde{\ell}$. We state our main result in the following
theorem. Informally speaking, we show how to bound the optimality gap by
picking the ‘worst-case’ HP setting that maximizes a combination of the
Wasserstein distances between the local data distributions and the actual
quality of the local HPO approximations across parties.
###### Theorem 4.5.
Consider the optimality gap defined in (4.8), where
$\widehat{\boldsymbol{\theta}}^{*}$ is selected by FLoRA with each party
$i\in[p]$ collecting $T$ (HP, loss) pairs
$\\{(\boldsymbol{\theta}_{t}^{(i)},\mathcal{L}_{t}^{(i)})\\}_{t\in[T]}$ during
the local HPO run. For a desired data distribution
$\mathcal{D}=\sum_{i=1}^{p}w_{i}\mathcal{D}_{i}$, where
$\\{\mathcal{D}_{i}\\}_{i\in[p]}$ are the sets of parties’ local data
distributions and $w_{i}\in[0,1],\forall i\in[p]$, we have
$\displaystyle\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\star},\mathcal{D})$
$\displaystyle\ \leq
2\max_{\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}}\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\left\\{\tilde{\beta}(\boldsymbol{\theta})\textstyle{\sum}_{j\in[p],j\not=i}w_{j}\mathcal{W}_{1}(\mathcal{D}_{j},\mathcal{D}_{i})+\left(\tilde{L}(\mathcal{D}_{i})+\widehat{L}_{i}\right)\min_{t\in[T]}d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})+\delta_{i}\right\\},$
(4.9)
where $\delta_{i}$ is the maximum per sample training error for the local loss
surface $\widehat{\ell}_{i}$, i.e.,
$\delta_{i}=\max_{t}|\mathcal{L}_{t}^{(i)}-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$.
In particular, when all parties have i.i.d. local data distributions, (4.9)
reduces to
$\displaystyle\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\star},\mathcal{D})\leq
2\max_{\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}}\sum_{i=1}^{p}\alpha_{i}(\boldsymbol{\theta})\left\\{\left(\tilde{L}(\mathcal{D}_{i})+\widehat{L}_{i}\right)\min\limits_{t\in[T]}d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})+\delta_{i}\right\\}.$
We make some observations regarding the above result. First, the first term in
our bound characterizes the error incurred by the differences among the
parties’ local data distributions, i.e., the magnitude of non-IIDness in a FL
system; in particular, it vanishes in the IID setting. Second, the last two
terms measure the quality of the local HPO approximation, which can be reduced
if a good loss surface is selected. Third,
$\min_{t\in[T]}d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})$ indicates
that the optimality gap depends only on the HP trial
$\boldsymbol{\theta}_{t}^{(i)}$ that is closest to the optimal HP setting.
Finally, if we assume each party’s training dataset $D_{i}$ is of size $n_{i}$
sampled as $D_{i}\sim\mathcal{D}_{i}^{n_{i}}$, we can view
$w_{i}=\tfrac{n_{i}}{n}$ where $n=\sum_{i=1}^{p}n_{i}$, i.e., with probability
$w_{i}$ the desired data distribution $\mathcal{D}$ is sampled from
$\mathcal{D}_{i}$.
In order to obtain the result in (4.9), we first analyze (4.8) in Proposition
4.6; see its proof in Appendix B. Note that the local loss surfaces
$\widehat{\ell}_{i},\ i\in[p]$ are computed at a certain test/validation set
sampled from the parties local data distribution $\mathcal{D}_{i}$. We
quantify the relationship between $\widehat{\ell}_{i}(\boldsymbol{\theta})$
and the estimated loss function
$\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})$ as follows:
$|\widehat{\ell}_{i}(\boldsymbol{\theta})-\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})|:=\epsilon_{i}(\boldsymbol{\theta},T).$
(4.10)
###### Proposition 4.6.
Let $\widehat{\boldsymbol{\theta}}^{\star}$ and $\boldsymbol{\theta}^{\star}$
be the two HPs defined in (3.7) and (4.7), respectively, and let
$\\{\mathcal{D}_{i}\\}_{i\in[p]}$ and $\mathcal{D}$ be the parties’ local data
distributions and the target (global) data distribution we want to learn. For
a given HP space such that
$\widehat{\boldsymbol{\theta}}^{\star},\boldsymbol{\theta}^{\star}\in\bar{\boldsymbol{\Theta}}\subset\boldsymbol{\Theta}$,
we have
$\displaystyle\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\star},\mathcal{D})\leq
2\max_{\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}}\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\left\\{\tilde{\beta}(\boldsymbol{\theta})\mathcal{W}_{1}(\mathcal{D},\mathcal{D}_{i})+\epsilon_{i}(\boldsymbol{\theta},T)\right\\}.$
(4.11)
We now dive into each term in (4.11) to provide tight bounds for
$\mathcal{W}_{1}(\mathcal{D},\mathcal{D}_{i})$ and
$\epsilon_{i}(\boldsymbol{\theta},T)$ in the following propositions. All the
proofs can be found in Appendix B.
###### Proposition 4.7.
Consider the 1-Wasserstein distance defined in (4.3). For the local data
distribution $\mathcal{D}_{i}$ of any party $i\in[p]$, and
$\mathcal{D}=\sum_{i=1}^{p}w_{i}\mathcal{D}_{i}$ for some
$w_{i}\in[0,1],\forall i\in[p]$, we have
$\mathcal{W}_{1}(\mathcal{D},\mathcal{D}_{i})\leq\textstyle{\sum}_{j\in[p],j\not=i}w_{j}\mathcal{W}_{1}(\mathcal{D}_{j},\mathcal{D}_{i}).$
(4.12)
In particular, when the $\mathcal{D}_{i},\ i\in[p]$, are i.i.d. data
distributions, i.e., all parties in the federated learning system possess
i.i.d. local data distributions – that is,
$\mathcal{W}_{1}(\mathcal{D}_{j},\mathcal{D}_{i})=0\ \forall i,j\in[p]$ – then
$\textstyle{\sum}_{j\in[p],j\not=i}w_{j}\mathcal{W}_{1}(\mathcal{D}_{j},\mathcal{D}_{i})=0$.
Therefore, $\mathcal{W}_{1}(\mathcal{D},\mathcal{D}_{i})=0,\ \forall i\in[p]$.
###### Proposition 4.8.
For any party $i,\ i\in[p]$, consider a (HP, loss) pair
$(\boldsymbol{\theta}_{t}^{(i)},\mathcal{L}_{t}^{(i)})$ collected during the
local HPO run for party $i$, for any
$\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}\subset\boldsymbol{\Theta}$,
we have
$\epsilon_{i}(\boldsymbol{\theta},T)\leq\left(\tilde{L}(\mathcal{D}_{i})+\widehat{L}_{i}\right)\min_{t\in[T]}d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})+\delta_{i},$
(4.13)
where
$\delta_{i}=\max_{t}|\mathcal{L}_{t}^{(i)}-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$
is the maximum per sample training error for the local loss surface
$\widehat{\ell}_{i}$.
Note that if we use non-parametric regression models as the loss surfaces
(such as Gaussian Processes, Random Forests, etc.), the per-sample training
error can be made arbitrarily small (that is $\delta_{i}\approx 0$), but at
the cost of increasing $\widehat{L}_{i}$ for $\widehat{\ell}_{i}$.
## 5 Empirical evaluation
Table 1: Comparison of different loss surfaces (the 4 rightmost columns) for
FLoRA relative to the baseline for single-shot 3-party FL-HPO in terms of the
relative regret (lower is better).
Aggregate | ML Method | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---
Regret inter-quartile range | HGB | [0.30, 0.47, 0.68] | [0.27, 0.54, 0.64] | [0.25, 0.43, 0.67] | [0.25, 0.50, 0.65]
 | SVM | [0.04, 0.38, 1.11] | [0.04, 0.48, 1.07] | [0.38, 0.91, 2.41] | [0.23, 0.54, 0.76]
 | MLP | [0.36, 0.80, 0.97] | [0.48, 0.99, 1.01] | [0.47, 0.89, 1.00] | [0.46, 0.79, 0.95]
 | Overall | [0.22, 0.53, 0.97] | [0.32, 0.55, 1.01] | [0.36, 0.61, 0.99] | [0.36, 0.57, 0.79]
FLoRA Wins/Ties/Losses | HGB | 6/0/1 | 6/0/1 | 7/0/0 | 7/0/0
 | SVM | 4/0/2 | 4/0/2 | 3/0/3 | 5/0/1
 | MLP | 6/0/1 | 4/1/2 | 5/1/1 | 6/0/1
 | Overall | 16/0/4 | 14/1/5 | 15/1/4 | 18/0/2
Wilcoxon signed-rank test, 1-sided (statistic, p-value) | HGB | (26, 0.02126) | (27, 0.01400) | (28, 0.00898) | (28, 0.00898)
 | SVM | (18, 0.05793) | (17, 0.08648) | (9, 0.62342) | (15, 0.17272)
 | MLP | (21, 0.11836) | (15, 0.17272) | (18, 0.05793) | (24, 0.04548)
 | Overall | (174, 0.00499) | (164, 0.00272) | (141, 0.03206) | (183.5, 0.00169)
In this section, we evaluate our proposed scheme FLoRA with different loss
surfaces for FL-HPO on a variety of ML models – histogram-based gradient
boosting (HGB) decision trees (Friedman, 2001), Support Vector Machines (SVMs)
with RBF kernel, and multi-layered perceptrons (MLPs) (using their respective
scikit-learn implementations (Pedregosa et al., 2011)) – on OpenML (Vanschoren
et al., 2013) classification problems.
in Appendix C.2. First, we fix the number of parties $p=3$ and compare FLoRA
to a baseline on $7$ datasets. Then we study the effect of increasing the
number of parties from $p=3$ up to $p=100$ on the performance of our proposed
scheme on $3$ datasets. The data is randomly split across parties. We also
evaluate FLoRA with different parameter choices, in particular, the number of
local HPO rounds and the communication overhead in the aggregation of the per-
party (HP, loss) pairs. Finally, we evaluate FLoRA in a real FL testbed, IBM
FL (Ludwig et al., 2020), using its default HP setting as a baseline. More
experimental results can be found in Appendix C.
##### Single-shot baseline.
To appropriately evaluate our proposed single-shot FL-HPO scheme, we need to
select a meaningful single-shot baseline. For this, we choose the default HP
configuration of scikit-learn as the single-shot baseline for two main
reasons: (i) the default HP configuration in scikit-learn is set manually
based on expert prior knowledge and extensive empirical evaluation, and (ii)
these are also used as the defaults in the Auto-Sklearn package (Feurer et
al., 2015, 2020), one of the leading open-source AutoML python packages, which
maintains a carefully selected portfolio of default configurations.
##### Dataset selection.
For our evaluation of single-shot HPO, we consider $7$ binary classification
datasets of varying sizes and characteristics from OpenML (Vanschoren et al.,
2013), chosen such that there is significant room for improvement over the
single-shot baseline performance. Specifically, we consider datasets with at
least $3\%$ potential improvement in balanced accuracy for gradient boosted
decision trees. See Appendix C.1 for details on the data. Note that this only
ensures room for improvement for HGB, while highlighting cases with no room
for improvement for SVM and MLP, as we see in our results.
##### (Dis-)Regarding other baselines.
While there are some existing schemes for FL-HPO (as discussed in §2), we are
unable to compare FLoRA to them for the following reasons: (i) As noted by
Khodak et al. (2021, Section 1, Related Work), existing schemes focus “on a
small number of hyperparameters (e.g. the step-size and sometimes one or two
more) in less general settings (studying small-scale problems or assuming
server-side validation data)” whereas we explicitly assume no access to such a
“server-side validation data”. (ii) Furthermore, we noted in §1 (C1), we do
not assume any “weight-sharing” type capability, and hence it is not clear how
FedEx (Khodak et al., 2021) can be applied to FL-HPO in the general setting.¹
¹Moreover, Khodak et al. (2021) claim that FedEx can handle architectural
hyper-parameters, but this is never demonstrated or discussed explicitly. In
contrast, our proposed algorithm can handle architectural hyper-parameters (as
we do with HGB (tree depth) and MLP (width of the layer)).
##### Implementation.
We consider two implementations for our empirical evaluation. In our first
three sets of experiments, we emulate the final FL step (Algorithm 1, line 8)
with centralized training on the pooled data. We chose this implementation
because we want to evaluate the final performance of any HP configuration
(baseline or recommended by FLoRA) in a statistically robust manner with
multiple train/validation splits (for example, via 10-fold cross-validation)
instead of evaluating the performance on a single train/validation split. This
form of evaluation is extremely expensive and generally not feasible in a real
FL system, but it allows us to evaluate how the performance of our single-shot
HP recommendation fares against that of the best-possible HP found via a
full-scale centralized HPO.
##### Evaluation metric.
In all datasets, we consider the balanced accuracy as the metric we wish to
maximize. For the local per-party HPOs (as well as the centralized HPO we
execute to compute the regret), we maximize the 10-fold cross-validated
balanced accuracy. For Tables 1-2, we report the relative regret, computed as
$\nicefrac{{(a^{\star}-a)}}{{(a^{\star}-b)}}$, where $a^{\star}$ is the best
metric obtained via the centralized HPO, $b$ is the result of the baseline,
and $a$ is the result of the HP recommended by FLoRA. The baseline has a
relative regret of 1, and smaller values imply better performance; a value
larger than 1 implies that the recommended HP performs worse than the
baseline.
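A small helper makes the metric concrete (the function and the numbers are our own illustration):

```python
def relative_regret(a_star: float, a: float, b: float) -> float:
    """(a* - a) / (a* - b): a* = best centralized-HPO metric,
    a = metric of FLoRA's recommended HP, b = baseline metric."""
    return (a_star - a) / (a_star - b)

# If centralized HPO reaches 0.90 balanced accuracy, the baseline 0.80,
# and FLoRA's recommendation 0.85, FLoRA recovers half the gap:
print(relative_regret(a_star=0.90, a=0.85, b=0.80))  # 0.5
```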
##### Comparison to single-shot baseline.
In our first set of experiments for 3-party FL-HPO ($p=3$), we compare our
proposed scheme with the baseline across different datasets, machine learning
models, and FLoRA loss surfaces. The aggregated results are presented in Table
1, with the individual results detailed in Appendix C.3. For each of the three
methods, we report the aggregate performance over all considered datasets in
terms of (i) the inter-quartile range, (ii) Wins/Ties/Losses of FLoRA w.r.t.
the single-shot baseline, and (iii) a one-sided Wilcoxon signed-rank test of
statistical significance, with the null hypothesis that the median of the
difference between the single-shot baseline and FLoRA is positive, against the
alternative that the difference is negative (implying FLoRA improves over the
baseline). Finally, we also report an “Overall” performance, further
aggregated across all ML models.
All FLoRA loss surfaces show strong performance w.r.t. the single-shot
baseline, with significantly more wins than losses and third-quartile relative
regret values less than 1 (indicating improvement over the baseline). All
FLoRA loss surfaces have an overall p-value of less than $0.05$, indicating
that we can reject the null hypothesis. Overall, APLM shows the best
performance among all loss surfaces, both in terms of Wins/Ties/Losses over
the baseline and in terms of the Wilcoxon signed-rank test, with the highest
statistic and a p-value close to $10^{-3}$. APLM also has a significantly
lower third quartile than all other loss surfaces. MPLM appears to have the
worst performance, but much of that is attributable to a couple of very hard
cases with SVM (see Appendix C.3 for a detailed discussion). Otherwise, MPLM
performs second best for FL-HPO with both HGB and MLP.
Table 2: Effect of increasing the number of parties on FLoRA with different loss surfaces for HGB. Data | $p$ | $\gamma_{p}$ | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---|---
EEG Eye State | 3 | 1.01 | 0.14 | 0.12 | 0.11 | 0.12
14980 samples | 6 | 1.01 | 0.07 | 0.00 | 0.07 | 0.09
| 10 | 1.03 | 0.08 | 0.00 | 0.16 | 0.01
| 25 | 1.08 | 0.35 | 0.92 | 0.17 | 0.04
| 50 | 1.20 | 0.20 | 0.23 | 0.67 | 0.12
Electricity | 3 | 1.01 | 0.17 | 0.14 | 0.09 | 0.12
45312 samples | 6 | 1.01 | 0.25 | 0.21 | 0.18 | 0.13
| 10 | 1.02 | 0.03 | 0.06 | 0.32 | 0.14
| 25 | 1.04 | 0.40 | 0.42 | 1.42 | 0.89
| 50 | 1.07 | 1.57 | 1.57 | 0.89 | 1.13
| 100 | 1.14 | 1.45 | 1.47 | 0.48 | 1.11
Pollen | 3 | 1.02 | 0.43 | 0.54 | 0.43 | 0.69
3848 samples | 6 | 1.10 | 1.02 | 0.91 | 0.54 | 0.56
| 10 | 1.16 | 1.05 | 0.73 | 0.75 | 1.12
##### Effect of increasing number of parties.
In the second set of experiments, we study the effect of increasing the number
of parties in the FL-HPO problem on 3 datasets with HGB. For each dataset, we
increase the number of parties $p$ until each party has at least 100
training samples. We present the relative regrets in Table 2. It also displays
$\gamma_{p}:=\nicefrac{{\left(1-\min_{i\in[p]}\mathcal{L}_{\star}^{(i)}\right)}}{{\left(1-\max_{i\in[p]}\mathcal{L}_{\star}^{(i)}\right)}}$,
where $\mathcal{L}_{\star}^{(i)}=\min_{t\in[T]}\mathcal{L}_{t}^{(i)}$ is the
minimum loss observed during the local asynchronous HPO at party $i$. This
ratio $\gamma_{p}$ is always greater than 1, and highlights the difference in
the observed performances across the parties. A ratio closer to 1 indicates
that all the parties have relatively similar performances on their respective
training data, while a ratio much higher than 1 indicates a significant
discrepancy between the per-party performances, implicitly reflecting the
difference in the per-party data distributions.
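For clarity, a minimal sketch of computing $\gamma_{p}$ from the per-party
loss histories (the values below are illustrative only):
# losses[i] holds the T losses L_t^(i) observed by party i during local HPO
losses = [
    [0.12, 0.10, 0.09],  # party 1
    [0.15, 0.13, 0.14],  # party 2
    [0.20, 0.11, 0.16],  # party 3
]
best = [min(run) for run in losses]          # L_star^(i) for each party
gamma_p = (1 - min(best)) / (1 - max(best))  # always >= 1
print(gamma_p)  # ~1.05 here; values near 1 indicate similar parties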
We notice that increasing the number of parties does not have a significant
effect on $\gamma_{p}$ for the Electricity dataset until $p=100$, but
$\gamma_{p}$ increases significantly earlier for the Pollen dataset (making
the problem harder). For the EEG eye state dataset, the increase in
$\gamma_{p}$ with increasing
$p$ is moderate until $p=50$. The results indicate that, with low or moderate
increase in $\gamma_{p}$ (EEG eye state, Electricity for moderate $p$), the
proposed scheme is able to achieve low relative regret – the increase in the
number of parties does not directly imply degradation in performance. However,
with significant increase in $\gamma_{p}$ (Pollen, Electricity with $p=50,100$
and EEG Eye State with $p=50$), we see a significant increase in the relative
regret (eventually going over 1 in a few cases). In these challenging cases,
MPLM (the most pessimistic loss function) shows the most graceful degradation
in relative regret compared to the remaining loss surfaces.
Figure 1: Effect of different choices on FLoRA with the APLM loss surface for
different methods and datasets: (a) number of local HPO rounds; (b) number of
(HP, loss) pairs communicated to the aggregator. More results and other loss
surfaces are presented in Appendices C.4 and C.5.
##### Effect of different choices in FLoRA.
In this set of experiments, we consider FLoRA with the APLM loss surface, and
ablate the effect of different choices in FLoRA on 2 datasets each for SVM and
MLP. First, we study the impact of the thoroughness of the per-party local
HPOs, quantified by the number of HPO rounds $T$ in Figure 1(a). The results
indicate that for really small $T$ ($<20$) the relative regret of FLoRA can be
very high. However, after that point, the relative regret converges to its
best possible value. We present the results for other loss surfaces in
Appendix C.4.
We also study the effect of the communication overhead of FLoRA for a fixed
level of local HPO thoroughness. We assume that each party performs $T=100$
rounds of local asynchronous HPO. However, instead of sending all $T$ (HP,
loss) pairs, we consider sending $T^{\prime}<T$ of the “best” (HP, loss) pairs
– that is, (HP, loss) pairs with the $T^{\prime}$ lowest losses. Changing the
value of $T^{\prime}$ trades off the communication overhead of the FLoRA step
where the aggregators collect the per-party loss pairs (Algorithm 1, line 5).
The results for this study are presented in Figure 1(b), and indicate that,
for really small $T^{\prime}$, the relative regret can be really high.
However, for a moderately high value of $T^{\prime}<T$, FLoRA converges to its
best possible performance. Results on other loss surfaces and further
discussion can be found in Appendix C.5.
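A minimal sketch of this selection step (the function name is ours, for
illustration):
def select_best_pairs(hp_loss_pairs, t_prime):
    # hp_loss_pairs: list of (theta, loss) tuples from T local HPO rounds;
    # keep only the t_prime pairs with the lowest losses before sending.
    return sorted(hp_loss_pairs, key=lambda pair: pair[1])[:t_prime]

pairs = [({"learning_rate": 0.10}, 0.21),
         ({"learning_rate": 0.03}, 0.18),
         ({"learning_rate": 0.30}, 0.35)]
print(select_best_pairs(pairs, t_prime=2))  # the two lowest-loss pairs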
Table 3: Performance of FLoRA with the IBM-FL system in terms of the balanced accuracy on a holdout test set (higher is better). The baseline is still the default HP configuration of HistGradientBoostingClassifier in scikit-learn. Data | # parties | # training data per party | Baseline | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---|---|---
Oil spill | $3$ | $200$ | 0.5895 | 0.7374 | 0.5909 | 0.7061 | 0.7332
EEG eye state | $3$ | $3,000$ | 0.8864 | 0.9153 | 0.9211 | 0.9251 | 0.9245
Electricity | $6$ | $4,000$ | 0.8448 | 0.8562 | 0.8627 | 0.8621 | 0.8624
##### Federated Learning testbed evaluation.
We now conduct experiments for the histogram-based gradient boosted tree model
in an FL testbed, utilizing the IBM FL library (Ludwig et al., 2020; Ong et
al., 2020). More specifically, we reserved $40\%$ of oil spill and electricity
and $20\%$ of EEG eye state as a global hold-out set used only for evaluating
the final FL model performance. Each party randomly sampled from the rest of
the original dataset to obtain its own training dataset. We use the same HP
search space as in Appendix C.2. We report the balanced accuracy of any HP
(baseline or recommended by FLoRA) on a single train/test split. Given
balanced accuracy as the evaluation metric, we utilize (1 - balanced accuracy)
as the loss $\mathcal{L}_{t}^{(i)}$ in Algorithm 1. Each party runs HPO to
generate $T=500$ (HP, loss) pairs and uses those pairs to generate a loss
surface either collaboratively or on its own, according to the different
aggregation procedures described in §3.2. Once the loss surface is generated,
the aggregator uses Hyperopt (Bergstra et al., 2011) to select the best HP
candidate and train a federated XGBoost model via the IBM FL library using the
selected HPs. Table 3 summarizes the experimental results for the $3$
datasets, indicating that FLoRA can significantly improve over the baseline in
the IBM FL testbed.
## 6 Conclusions
How to effectively select hyper-parameters in FL settings is a challenging
problem. In this paper, we introduced FLoRA, a single-shot FL-HPO algorithm
that can be applied to a variety of ML models. We provided a theoretical
analysis which includes a bound on the optimality gap incurred by the hyper-
parameter selection performed by FLoRA. Our experimental evaluation shows that
FLoRA can effectively produce hyper-parameter configurations that outperform
the baseline with just a single shot.
## References
* Bergstra and Bengio (2012) James Bergstra and Yoshua Bengio. Random search for hyper-parameter optimization. _Journal of Machine Learning Research_ , 13(Feb):281–305, 2012.
* Bergstra et al. (2011) James S Bergstra, Rémi Bardenet, Yoshua Bengio, and Balázs Kégl. Algorithms for hyper-parameter optimization. In _Advances in neural information processing systems_ , pages 2546–2554, 2011.
* Breiman (2001) Leo Breiman. Random forests. _Machine learning_ , 45(1):5–32, 2001.
* Chen and Guestrin (2016) Tianqi Chen and Carlos Guestrin. Xgboost: A scalable tree boosting system. In _Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , KDD ’16, pages 785–794, New York, NY, USA, 2016. ACM. ISBN 978-1-4503-4232-2. doi: 10.1145/2939672.2939785. URL http://doi.acm.org/10.1145/2939672.2939785.
* Dai et al. (2020) Z. Dai, B.K.H. Low, and P. Jaillet. Federated bayesian optimization via thompson sampling. _Advances in Neural Information Processing Systems_ , 33, 2020.
* Dai et al. (2021) Z. Dai, B.K.H. Low, and P. Jaillet. Differentially private federated bayesian optimization with distributed exploration. _Advances in Neural Information Processing Systems_ , 34, 2021.
* Feurer et al. (2015) Matthias Feurer, Aaron Klein, Katharina Eggensperger, Jost Springenberg, Manuel Blum, and Frank Hutter. Efficient and robust automated machine learning. In _Advances in Neural Information Processing Systems_ , pages 2962–2970, 2015.
* Feurer et al. (2020) Matthias Feurer, Katharina Eggensperger, Stefan Falkner, Marius Lindauer, and Frank Hutter. Auto-sklearn 2.0: The next generation. In _arXiv:2007.04074 [cs.LG]_ , 2020.
* Friedman (2001) Jerome H Friedman. Greedy function approximation: a gradient boosting machine. _Annals of statistics_ , pages 1189–1232, 2001.
* Garg et al. (2020) Anubhav Garg, Amit Kumar Saha, and Debo Dutta. Direct federated neural architecture search. _arxiv.2010.06223_ , 2020.
* He et al. (2020) Chaoyang He, Murali Annavaram, and Salman Avestimehr. Towards non-i.i.d. and invisible data with fednas: Federated deep learning via neural architecture search. _arxiv.2004.08546_ , 2020.
* Hutter et al. (2011) Frank Hutter, Holger H Hoos, and Kevin Leyton-Brown. Sequential model-based optimization for general algorithm configuration. In _International Conference on Learning and Intelligent Optimization_ , pages 507–523. Springer, 2011.
* Kairouz et al. (2019) Peter Kairouz, H Brendan McMahan, Brendan Avent, Aurélien Bellet, Mehdi Bennis, Arjun Nitin Bhagoji, Kallista Bonawitz, Zachary Charles, Graham Cormode, Rachel Cummings, et al. Advances and open problems in federated learning. _arXiv preprint arXiv:1912.04977_ , 2019.
* Khodak et al. (2020) Mikhail Khodak, Tian Li, Liam Li, M Balcan, Virginia Smith, and Ameet Talwalkar. Weight sharing for hyperparameter optimization in federated learning. In _Int. Workshop on Federated Learning for User Privacy and Data Confidentiality in Conjunction with ICML 2020_ , 2020.
* Khodak et al. (2021) Mikhail Khodak, Renbo Tu, Tian Li, Liam Li, Maria-Florina Balcan, Virginia Smith, and Ameet Talwalkar. Federated hyperparameter tuning: Challenges, baselines, and connections to weight-sharing. _arXiv preprint arXiv:2106.04502_ , 2021.
* Kingma and Ba (2015) Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In _ICLR (Poster)_ , 2015. URL http://arxiv.org/abs/1412.6980.
* Koskela and Honkela (2019) A. Koskela and A. Honkela. Learning rate adaptation for federated and differentially private learning. _arXiv preprint arXiv:1809.03832_ , 2019.
* Ludwig et al. (2020) Heiko Ludwig, Nathalie Baracaldo, Gegi Thomas, Yi Zhou, Ali Anwar, Shashank Rajamoni, Yuya Ong, Jayaram Radhakrishnan, Ashish Verma, Mathieu Sinn, et al. IBM Federated Learning: an enterprise framework white paper v0. 1. _arXiv preprint arXiv:2007.10987_ , 2020. URL https://github.com/IBM/federated-learning-lib.
* McMahan et al. (2017a) B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. Arcas. Communication-efficient learning of deep networks from decentralized data. In _Proc. International Conference on Artificial Intelligence and Statistics_ , pages 1273–1282, Ft. Lauderdale, FL, 20–22 Apr 2017a.
* McMahan et al. (2017b) Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In _Artificial intelligence and statistics_ , pages 1273–1282. PMLR, 2017b.
* Mostafa (2019) H. Mostafa. Robust federated learning through representation matching and adaptive hyper-parameters. _arXiv preprint arXiv:1912.13075_ , 2019.
* Oh et al. (2019) Changyong Oh, Jakub M Tomczak, Efstratios Gavves, and Max Welling. Combinatorial bayesian optimization using the graph cartesian product. In _Proceedings of the 33rd International Conference on Neural Information Processing Systems_ , pages 2914–2924, 2019.
* Ong et al. (2020) Yuya Jeremy Ong, Yi Zhou, Nathalie Baracaldo, and Heiko Ludwig. Adaptive histogram-based gradient boosted trees for federated learning. _arXiv preprint arXiv:2012.06670_ , 2020.
* Pedregosa et al. (2011) F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. Scikit-learn: Machine learning in Python. _Journal of Machine Learning Research_ , 12:2825–2830, 2011.
* Reddi et al. (2020) S.J. Reddi, Z. Charles, M. Zaheer, Z. Garrett, K. Rush, J. Konecny, S. Kumar, and H.B. McMahan. Adaptive federated optimization. In _International Conference on Learning Representations_ , 2020.
* Shahriari et al. (2016) B. Shahriari, K. Swersky, Z. Wang, R. P. Adams, and N. De Freitas. Taking the human out of the loop: A review of bayesian optimization. _Proceedings of the IEEE_ , 104(1):148–175, 2016.
* Vanschoren (2018) Joaquin Vanschoren. Meta-learning: A survey. _arXiv preprint arXiv:1810.03548_ , 2018.
* Vanschoren et al. (2013) Joaquin Vanschoren, Jan N. van Rijn, Bernd Bischl, and Luis Torgo. OpenML: Networked science in machine learning. _SIGKDD Explorations_ , 15(2):49–60, 2013. doi: 10.1145/2641190.2641198. URL http://doi.acm.org/10.1145/2641190.2641198.
* Villani (2003) Cedric Villani. Topics in optimal transportation.(books). _OR/MS Today_ , 30(3):66–67, 2003.
* Wang et al. (2019) Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin Leung, Christian Makaya, Ting He, and Kevin Chan. Adaptive federated learning in resource constrained edge computing systems. _Journal Selected Areas in Communications (JSAC)_ , 2019.
* Williams and Rasmussen (2006) Christopher K Williams and Carl Edward Rasmussen. _Gaussian processes for machine learning_ , volume 2. MIT press Cambridge, MA, 2006.
* Wistuba et al. (2018) Martin Wistuba, Nicolas Schilling, and Lars Schmidt-Thieme. Scalable gaussian process-based transfer surrogates for hyperparameter optimization. _Machine Learning_ , 107(1):43–78, 2018.
* Xu et al. (2020) Mengwei Xu, Yuxin Zhao, Kaigui Bian, Gang Huang, Qiaozhu Mei, and Xuanzhe Liu. Federated neural architecture search. _arxiv.2002.06352_ , 2020.
## Appendix A Technical Definitions
### A.1 Distance in $\boldsymbol{\Theta}$
Here we will define a distance metric
$d:\boldsymbol{\Theta}\times\boldsymbol{\Theta}\to\mathbb{R}_{+}$. Assuming we
have $m$ HPs, if $\boldsymbol{\Theta}\subset\mathbb{R}^{m}$, then there are
various distances available such as
$\|\boldsymbol{\theta}-\boldsymbol{\theta}^{\prime}\|_{\rho}$ (the
$\rho$-norm). The more general case is where we have $R$ continuous/real HPs,
$I$ integer HPs, and $C$ categorical HPs; $m=R+I+C$. In that case,
$\boldsymbol{\Theta}\subset\mathbb{R}^{R}\times\mathbb{Z}^{I}\times\mathbb{C}^{C}$,
and any
$\boldsymbol{\theta}=(\boldsymbol{\theta}_{\mathbb{R}},\boldsymbol{\theta}_{\mathbb{Z}},\boldsymbol{\theta}_{\mathbb{C}})\in\boldsymbol{\Theta}_{\mathbb{R}}\times\boldsymbol{\Theta}_{\mathbb{Z}}\times\boldsymbol{\Theta}_{\mathbb{C}}$,
where
$\boldsymbol{\theta}_{\mathbb{R}}\in\boldsymbol{\Theta}_{\mathbb{R}},\boldsymbol{\theta}_{\mathbb{Z}}\in\boldsymbol{\Theta}_{\mathbb{Z}},\boldsymbol{\theta}_{\mathbb{C}}\in\boldsymbol{\Theta}_{\mathbb{C}}$
respectively denote the continuous, integer and categorical HPs in
$\boldsymbol{\theta}$. Distances over $\mathbb{R}^{R}\times\mathbb{Z}^{I}$ are
available, such as the $\rho$-norm. Let
$d_{\mathbb{R},\mathbb{Z}}:(\boldsymbol{\Theta}_{\mathbb{R}}\times\boldsymbol{\Theta}_{\mathbb{Z}})\times(\boldsymbol{\Theta}_{\mathbb{R}}\times\boldsymbol{\Theta}_{\mathbb{Z}})\to\mathbb{R}_{+}$
be one such distance.
To define distances over categorical spaces, there are some techniques such as
one described by Oh et al. [2019]:
Assume that each of the $C$ HPs $\boldsymbol{\theta}_{\mathbb{C},k},k\in[C]$
has $n_{k}$ categories $\\{\xi_{k1},\xi_{k2},\ldots,\xi_{kn_{k}}\\}$. Then we
define a complete undirected graph $G_{k}=(V_{k},E_{k}),k\in[C]$ where
* •
There is a node $N_{kj}$ in $G_{k}$ for each category $\xi_{kj}$ for each
$j\in[n_{k}]$ and $V_{k}=\\{N_{k1},\ldots N_{kn_{k}}\\}$.
* •
There is an undirected edge $(N_{kj},N_{kj^{\prime}})$ for each pair
$j,j^{\prime}\in[n_{k}]$, and
$E_{k}=\\{(N_{kj},N_{kj^{\prime}}),j,j^{\prime}\in[n_{k}]\\}$.
Given the per-categorical HP graph $G_{k},k\in[C]$, we define the graph
Cartesian product $\mathsf{G}=\bigotimes_{k\in[C]}G_{k}$ and
$\mathsf{G}=(\mathsf{V},\mathsf{E})$ such that
* •
$\mathsf{V}=\\{\mathsf{N}_{(j_{1},j_{2},\ldots,j_{C})}:(\xi_{1j_{1}},\xi_{2j_{2}},\ldots\xi_{kj_{k}},\ldots,\xi_{Cj_{C}})\in\boldsymbol{\Theta}_{\mathbb{C}},j_{k}\in[n_{k}]\forall
k\in[C]\\}$.
* •
$\mathsf{E}=\\{(\mathsf{N}_{(j_{1},j_{2},\ldots,j_{C})},\mathsf{N}_{(j^{\prime}_{1},j^{\prime}_{2},\ldots,j^{\prime}_{C})}):\exists t\in[C]\text{ such that }\forall k\not=t,\ \xi_{kj_{k}}=\xi_{kj^{\prime}_{k}},\text{ and }(N_{tj_{t}},N_{tj^{\prime}_{t}})\in E_{t}\\}$.
Then for any
$\boldsymbol{\theta}_{\mathbb{C}},\boldsymbol{\theta}^{\prime}_{\mathbb{C}}\in\boldsymbol{\Theta}_{\mathbb{C}}$
with corresponding nodes $\mathsf{N},\mathsf{N}^{\prime}\in\mathsf{V}$, Oh et
al. [2019, Theorem 2.2.1] says that the length of the shortest path between
nodes $\mathsf{N}$ and $\mathsf{N}^{\prime}$ in $\mathsf{G}$ is a distance. We
can consider this distance as
$d_{\mathbb{C}}:\boldsymbol{\Theta}_{\mathbb{C}}\times\boldsymbol{\Theta}_{\mathbb{C}}\to\mathbb{R}_{+}$.
Of course, there are other ways of defining distances in the categorical
space.
Then we can define a distance
$d:(\boldsymbol{\Theta}_{\mathbb{R}}\times\boldsymbol{\Theta}_{\mathbb{Z}}\times\boldsymbol{\Theta}_{\mathbb{C}})\times(\boldsymbol{\Theta}_{\mathbb{R}}\times\boldsymbol{\Theta}_{\mathbb{Z}}\times\boldsymbol{\Theta}_{\mathbb{C}})\to\mathbb{R}_{+}$
between two HPs $\boldsymbol{\theta},\boldsymbol{\theta}^{\prime}$ as
$d(\boldsymbol{\theta},\boldsymbol{\theta}^{\prime})=d_{\mathbb{R},\mathbb{Z}}((\boldsymbol{\theta}_{\mathbb{R}},\boldsymbol{\theta}_{\mathbb{Z}}),(\boldsymbol{\theta}^{\prime}_{\mathbb{R}},\boldsymbol{\theta}^{\prime}_{\mathbb{Z}}))+d_{\mathbb{C}}(\boldsymbol{\theta}_{\mathbb{C}},\boldsymbol{\theta}^{\prime}_{\mathbb{C}}).$
(A.1)
###### Proposition A.1.
Given distance metrics $d_{\mathbb{R},\mathbb{Z}}$ and $d_{\mathbb{C}}$, the
function $d:\boldsymbol{\Theta}\times\boldsymbol{\Theta}\to\mathbb{R}_{+}$
defined in (A.1) is a valid distance metric.
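As an illustration, the following sketch implements (A.1) for the complete
graphs $G_{k}$ above, where the shortest-path distance in the Cartesian
product reduces to the Hamming distance over the categorical coordinates (the
dict layout of $\boldsymbol{\theta}$ below is ours, for illustration only):
import numpy as np

def mixed_hp_distance(theta, theta_prime, rho=2):
    # rho-norm over the continuous/integer coordinates ...
    x = np.asarray(theta["real_int"], dtype=float)
    y = np.asarray(theta_prime["real_int"], dtype=float)
    d_rz = np.linalg.norm(x - y, ord=rho)
    # ... plus the graph distance over categorical coordinates; for complete
    # graphs G_k this is the number of coordinates whose categories differ.
    d_c = sum(a != b for a, b in zip(theta["cat"], theta_prime["cat"]))
    return d_rz + d_c

theta = {"real_int": [0.1, 100], "cat": ["relu"]}
theta_prime = {"real_int": [0.3, 120], "cat": ["tanh"]}
print(mixed_hp_distance(theta, theta_prime))  # ~20.0 + 1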
### A.2 Continuity in the space of HPs $\boldsymbol{\Theta}$
In the simple case, we can assume Lipschitz continuity of the estimated loss
$\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})$ and the loss surfaces
$\widehat{\ell}_{i}(\boldsymbol{\theta}),i\in[p]$ as follows:
$\displaystyle|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\prime},\mathcal{D})|$
$\displaystyle\leq\tilde{L}(\mathcal{D})\cdot
d(\boldsymbol{\theta},\boldsymbol{\theta}^{\prime}),$ (A.2)
$\displaystyle|\widehat{\ell}_{i}(\boldsymbol{\theta})-\widehat{\ell}_{i}(\boldsymbol{\theta}^{\prime})|$
$\displaystyle\leq\widehat{L}_{i}\cdot
d(\boldsymbol{\theta},\boldsymbol{\theta}^{\prime}).$ (A.3)
For a more general treatment, we can consider the notion of a modulus of
continuity in the form of an increasing real-valued function
$\omega:\mathbb{R}_{+}\to\mathbb{R}_{+}$ with $\lim_{t\to
0}\omega(t)=\omega(0)=0$. Then we say that the estimated loss
$\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})$ and the loss surface
$\widehat{\ell}_{i}(\boldsymbol{\theta})$ admit $\tilde{\omega}_{\mathcal{D}}$
and $\widehat{\omega}$ as moduli of continuity (respectively) if
$\displaystyle|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\prime},\mathcal{D})|$
$\displaystyle\leq\tilde{\omega}_{\mathcal{D}}(d(\boldsymbol{\theta},\boldsymbol{\theta}^{\prime}))$
(A.4)
$\displaystyle|\widehat{\ell}_{i}(\boldsymbol{\theta})-\widehat{\ell}_{i}(\boldsymbol{\theta}^{\prime})|$
$\displaystyle\leq\widehat{\omega}(d(\boldsymbol{\theta},\boldsymbol{\theta}^{\prime})).$
(A.5)
If we further assume $\tilde{\omega}_{\mathcal{D}}$ and $\widehat{\omega}$ to
be concave, then these functions are sublinear as follows:
$\displaystyle\tilde{\omega}_{\mathcal{D}}(t)$
$\displaystyle\leq\tilde{A}_{\mathcal{D}}\cdot t+\tilde{B}_{\mathcal{D}},$
(A.6) $\displaystyle\widehat{\omega}(t)$ $\displaystyle\leq\widehat{A}\cdot
t+\widehat{B}.$ (A.7)
These conditions (indirectly) give us guarantees similar in spirit to those of
Lipschitz continuity, but in a more rigorous way.
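As a small worked example of (A.6)–(A.7), consider the concave modulus
$\omega(t)=\sqrt{t}$: by the AM–GM inequality, for any fixed $s>0$,
$\displaystyle\omega(t)=\sqrt{t}=\sqrt{\tfrac{t}{s}\cdot s}\leq\tfrac{1}{2}\left(\tfrac{t}{s}+s\right)=\tfrac{1}{2s}\,t+\tfrac{s}{2},$
so $\omega$ admits the sublinear bound with $A=\nicefrac{1}{2s}$ and
$B=\nicefrac{s}{2}$.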
## Appendix B Proofs for optimality analysis
We provide detailed proofs of the propositions stated in Section 4.
### B.1 Proof of Proposition 4.6
###### Proof.
Considering the definitions of $\widehat{\boldsymbol{\theta}}^{\star}$ and
$\boldsymbol{\theta}^{\star}$, we obtain
$\displaystyle\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\star},\mathcal{D})$
$\displaystyle=\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})-\widehat{\ell}(\widehat{\boldsymbol{\theta}}^{\star})+\widehat{\ell}(\widehat{\boldsymbol{\theta}}^{\star})-\widehat{\ell}(\boldsymbol{\theta}^{\star})+\widehat{\ell}(\boldsymbol{\theta}^{\star})-\tilde{\ell}(\boldsymbol{\theta}^{\star},\mathcal{D})$
$\displaystyle\leq
2\max_{\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}\subset\boldsymbol{\Theta}}\left|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\widehat{\ell}(\boldsymbol{\theta})\right|,$
where the inequality follows from the fact that
$\widehat{\ell}(\widehat{\boldsymbol{\theta}}^{\star})-\widehat{\ell}(\boldsymbol{\theta}^{\star})\leq
0$. Moreover, observe that for any
$\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}\subset\boldsymbol{\Theta}$,
by the definition of $\widehat{\ell}(\boldsymbol{\theta})$ in (4.2), we have
$\displaystyle|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\widehat{\ell}(\boldsymbol{\theta})|$
$\displaystyle=\left|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\cdot\widehat{\ell}_{i}(\boldsymbol{\theta})\right|$
$\displaystyle=\left|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\cdot\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})+\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\cdot\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})-\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\cdot\widehat{\ell}_{i}(\boldsymbol{\theta})\right|$
$\displaystyle\leq\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\left|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})\right|+\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\left|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})-\widehat{\ell}_{i}(\boldsymbol{\theta})\right|$
$\displaystyle\leq\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\tilde{\beta}(\boldsymbol{\theta})\mathcal{W}_{1}(\mathcal{D},\mathcal{D}_{i})+\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\epsilon_{i}(\boldsymbol{\theta},T),$
where the last inequality follows from assumption (4.6) and definition (4.10).
∎
### B.2 Proof of Proposition 4.7
###### Proof.
By the definition of the 1-Wasserstein distance in (4.3) and the fact that
$\mathcal{D}=\sum_{i\in[p]}w_{i}\mathcal{D}_{i}$, we obtain
$\displaystyle\mathcal{W}_{1}(\mathcal{D},\mathcal{D}_{i})$
$\displaystyle=\sup_{f\in\mathsf{F}_{1}}\mathbb{E}_{(x,y)\sim\mathcal{D}}f(x,y)-\mathbb{E}_{(x_{i},y_{i})\sim\mathcal{D}_{i}}f(x_{i},y_{i})$
$\displaystyle=\sup_{f\in\mathsf{F}_{1}}\textstyle{\sum}_{j\in[p]}w_{j}\mathbb{E}_{(x_{j},y_{j})\sim\mathcal{D}_{j}}f(x_{j},y_{j})-\mathbb{E}_{(x_{i},y_{i})\sim\mathcal{D}_{i}}f(x_{i},y_{i})$
$\displaystyle=\sup_{f\in\mathsf{F}_{1}}\textstyle{\sum}_{i\not=j,j\in[p]}w_{j}\left(\mathbb{E}_{(x_{j},y_{j})\sim\mathcal{D}_{j}}f(x_{j},y_{j})-\mathbb{E}_{(x_{i},y_{i})\sim\mathcal{D}_{i}}f(x_{i},y_{i})\right)$
$\displaystyle\leq\textstyle{\sum}_{i\not=j,j\in[p]}w_{j}\left(\sup_{f\in\mathsf{F}_{1}}\mathbb{E}_{(x_{j},y_{j})\sim\mathcal{D}_{j}}f(x_{j},y_{j})-\mathbb{E}_{(x_{i},y_{i})\sim\mathcal{D}_{i}}f(x_{i},y_{i})\right)$
$\displaystyle\leq\textstyle{\sum}_{i\not=j,j\in[p]}w_{j}\mathcal{W}_{1}(\mathcal{D}_{j},\mathcal{D}_{i}).$
∎
### B.3 Proof of Proposition 4.8
###### Proof.
By the definition of $\epsilon_{i}(\boldsymbol{\theta},T)$,
$\displaystyle\epsilon_{i}(\boldsymbol{\theta},T)$
$\displaystyle=\left|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})-\widehat{\ell}_{i}(\boldsymbol{\theta})\right|$
$\displaystyle=\left|\underbrace{\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})-\tilde{\ell}(\boldsymbol{\theta}_{t}^{(i)},\mathcal{D}_{i})}_{\text{Smoothness
of
}\tilde{\ell}}+\underbrace{\tilde{\ell}(\boldsymbol{\theta}_{t}^{(i)},\mathcal{D}_{i})-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})}_{\text{Modeling
error}}+\underbrace{\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})-\widehat{\ell}_{i}(\boldsymbol{\theta})}_{\text{Smoothness
of }\widehat{\ell}}\right|,$
where $\boldsymbol{\theta}_{t}^{(i)},t\in[T]$ is any one of the HPs tried
during the local HPO run on party $i\in[p]$.
First note that
$\displaystyle|\tilde{\ell}(\boldsymbol{\theta}_{t}^{(i)},\mathcal{D}_{i})-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$
$\displaystyle=|\mathcal{L}_{t}^{(i)}-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$
$\displaystyle\leq\max_{t}|\mathcal{L}_{t}^{(i)}-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$
$\displaystyle\leq\delta_{i}.$
In view of (4.4) and (4.6), we have
$\displaystyle\epsilon_{i}(\boldsymbol{\theta},T)$
$\displaystyle\leq\tilde{L}(\mathcal{D}_{i})d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})+\delta_{i}+\widehat{L}_{i}d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)}).$
Since this bound holds for every $t\in[T]$, taking the minimum over $t$
immediately implies the result in (4.13). ∎
### B.4 Proposition 4.8 using modulus of continuity instead of Lipschitz
continuity
###### Proposition B.1.
Assume that the estimated loss
$\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})$ and the loss surface
$\ell_{i}(\boldsymbol{\theta})$ admit concave functions
$\tilde{\omega}_{\mathcal{D}_{i}}$ and $\widehat{\omega}_{i}$ respectively as
a modulus of continuity with respect to
$\boldsymbol{\theta}\in\boldsymbol{\Theta}$ for each party $i\in[p]$. Then,
for any party $i,\ i\in[p]$, with the set of (HP, loss) pairs
$\\{(\boldsymbol{\theta}_{t}^{(i)},\mathcal{L}_{t}^{(i)})\\}_{t\in[T]}$
collected during the local HPO run for party $i$, for any
$\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}\subset\boldsymbol{\Theta}$,
there exists
$\tilde{A}_{\mathcal{D}_{i}},\widehat{A}_{i},\tilde{B}_{\mathcal{D}_{i}},\widehat{B}_{i}\geq
0$ such that
$\epsilon_{i}(\boldsymbol{\theta},T)\leq\left(\tilde{A}_{\mathcal{D}_{i}}+\widehat{A}_{i}\right)\min_{t\in[T]}d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})+\tilde{B}_{\mathcal{D}_{i}}+\widehat{B}_{i}+\delta_{i},$
(B.1)
where
$\delta_{i}=\max_{t}|\mathcal{L}_{t}^{(i)}-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$
is the maximum per sample training error for the local loss surface
$\widehat{\ell}_{i}$.
###### Proof.
By the definition of $\epsilon_{i}(\boldsymbol{\theta},T)$,
$\displaystyle\epsilon_{i}(\boldsymbol{\theta},T)$
$\displaystyle=\left|\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})-\widehat{\ell}_{i}(\boldsymbol{\theta})\right|$
$\displaystyle=\left|\underbrace{\tilde{\ell}(\boldsymbol{\theta},\mathcal{D}_{i})-\tilde{\ell}(\boldsymbol{\theta}_{t}^{(i)},\mathcal{D}_{i})}_{\text{Smoothness
of
}\tilde{\ell}}+\underbrace{\tilde{\ell}(\boldsymbol{\theta}_{t}^{(i)},\mathcal{D}_{i})-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})}_{\text{Modeling
error}}+\underbrace{\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})-\widehat{\ell}_{i}(\boldsymbol{\theta})}_{\text{Smoothness
of }\widehat{\ell}}\right|,$
where $\boldsymbol{\theta}_{t}^{(i)},t\in[T]$ is any one of the HPs tried
during the local HPO run on party $i\in[p]$.
First note that
$\displaystyle|\tilde{\ell}(\boldsymbol{\theta}_{t}^{(i)},\mathcal{D}_{i})-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$
$\displaystyle=|\mathcal{L}_{t}^{(i)}-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$
$\displaystyle\leq\max_{t}|\mathcal{L}_{t}^{(i)}-\widehat{\ell}_{i}(\boldsymbol{\theta}_{t}^{(i)})|$
$\displaystyle\leq\delta_{i}.$
In view of (A.4) and (A.5), we have
$\displaystyle\epsilon_{i}(\boldsymbol{\theta},T)$
$\displaystyle\leq\tilde{\omega}_{\mathcal{D}_{i}}(d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)}))+\delta_{i}+\widehat{\omega}(d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})),$
$\displaystyle\leq\delta_{i}+\min_{t\in[T]}\left(\tilde{\omega}_{\mathcal{D}_{i}}(d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)}))+\widehat{\omega}(d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)}))\right).$
Concavity of a function $\omega:[0,\infty]\to[0,\infty]$ implies that there
exist $A,B>0$ such that $\omega(t)\leq At+B$. Using that, we can find some
$\tilde{A}_{\mathcal{D}_{i}},\widehat{A}_{i},\tilde{B}_{\mathcal{D}_{i}},\widehat{B}_{i}>0$
which allow us to simplify the above to
$\displaystyle\epsilon_{i}(\boldsymbol{\theta},T)$
$\displaystyle\leq\delta_{i}+(\tilde{A}_{\mathcal{D}_{i}}+\widehat{A}_{i})\cdot\min_{t\in[T]}d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})+(\tilde{B}_{\mathcal{D}_{i}}+\widehat{B}_{i}).$
∎
### B.5 Relative regrets
As a byproduct, we can also provide a bound for the following relative regret
we use in our experiments.
###### Corollary B.2.
Assume $\widehat{\boldsymbol{\theta}}^{\star}$ and
$\boldsymbol{\theta}^{\star}$ are defined as in (3.7) and (4.7), and let
$\bar{\boldsymbol{\theta}}^{\star}$ and $\boldsymbol{\theta}_{b}$ be the
hyper-parameter settings selected by centralized HPO and some baseline
hyper-parameters, respectively. Then, for a given data distribution
$\mathcal{D}$, we can bound the relative regret as follows:
$\displaystyle\frac{\tilde{\ell}(\bar{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})}{\tilde{\ell}(\bar{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}_{b},\mathcal{D})}$
$\displaystyle\quad\leq\frac{2\max_{\boldsymbol{\theta}\in\bar{\boldsymbol{\Theta}}}\textstyle{\sum}_{i\in[p]}\alpha_{i}(\boldsymbol{\theta})\left\\{\tilde{\beta}(\boldsymbol{\theta})\textstyle{\sum}_{j\in[p],j\not=i}w_{j}\mathcal{W}_{1}(\mathcal{D}_{j},\mathcal{D}_{i})+\left(\tilde{L}(\mathcal{D}_{i})+\widehat{L}_{i}\right)\min_{t\in[T]}d(\boldsymbol{\theta},\boldsymbol{\theta}_{t}^{(i)})+\delta_{i}\right\\}}{\widehat{\ell}(\boldsymbol{\theta}_{b},\mathcal{D})-\widehat{\ell}(\bar{\boldsymbol{\theta}}^{\star},\mathcal{D})}.$
(B.2)
###### Proof.
By the definition of relative regret, we have
$\displaystyle\frac{\tilde{\ell}(\bar{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})}{\tilde{\ell}(\bar{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}_{b},\mathcal{D})}$
$\displaystyle=\frac{\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\bar{\boldsymbol{\theta}}^{\star},\mathcal{D})}{\tilde{\ell}(\boldsymbol{\theta}_{b},\mathcal{D})-\tilde{\ell}(\bar{\boldsymbol{\theta}}^{\star},\mathcal{D})}$
$\displaystyle\leq\frac{\tilde{\ell}(\widehat{\boldsymbol{\theta}}^{\star},\mathcal{D})-\tilde{\ell}(\boldsymbol{\theta}^{\star},\mathcal{D})}{\widehat{\ell}(\boldsymbol{\theta}_{b},\mathcal{D})-\widehat{\ell}(\bar{\boldsymbol{\theta}}^{\star},\mathcal{D})},$
where the last inequality follows from the fact that
$\boldsymbol{\theta}^{\star}$ is the minimizer of
$\tilde{\ell}(\boldsymbol{\theta},\mathcal{D})$. Moreover, in view of the
result in Theorem 4.5, the result in (B.2) follows. ∎
## Appendix C Experimental Setting
### C.1 Dataset details
The details of the binary classification datasets used in our evaluation are
reported in Table 4. We report the 10-fold cross-validated balanced accuracy
of the default HP configuration on each of the datasets with centralized
training. The “Gap” column in the results for all datasets and models in §C.3
denotes the difference between the best 10-fold cross-validated balanced
accuracy obtained via centralized HPO and the 10-fold cross-validated balanced
accuracy of the default HP configuration.
Table 4: OpenML binary classification dataset details Data | rows | columns | class sizes
---|---|---|---
EEG eye state | 14980 | 14 | (8257, 6723)
Electricity | 45312 | 8 | (26075, 19237)
Heart statlog | 270 | 13 | (150, 120)
Oil spill | 937 | 49 | (896, 41)
Pollen | 3848 | 5 | (1924, 1924)
Sonar | 208 | 61 | (111, 97)
PC3 | 1563 | 37 | (1403, 160)
### C.2 Search space
We use the search space definition used in the NeurIPS 2020 black-box
optimization challenge (https://bbochallenge.com/), described in detail in the
API documentation
(https://github.com/rdturnermtl/bbo_challenge_starter_kit/#configuration-space).
#### C.2.1 Histogram based Gradient Boosted Trees
Given this format for defining the HPO search space, we utilize the following
precise search space for the HistGradientBoostingClassifier in scikit-learn:
api_config = {
"max_iter": {"type": "int", "space": "linear", "range": (10, 200)},
"learning_rate": {"type": "real", "space": "log", "range": (1e-3, 1.0)},
"min_samples_leaf": {"type": "int", "space": "linear", "range": (1, 40)},
"l2_regularization": {"type": "real", "space": "log", "range": (1e-4, 1.0)},
}
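As an illustration of this search-space format, the following sketch draws a
random HP configuration from such an api_config dict (the sampler itself is
ours, not part of the challenge API):
import numpy as np

def sample_hp(api_config, rng=None):
    # Draw one configuration, handling the "linear"/"log" spaces and the
    # "int"/"real" types used in the search spaces of this appendix.
    rng = rng or np.random.default_rng()
    theta = {}
    for name, spec in api_config.items():
        lo, hi = spec["range"]
        if spec["space"] == "log":
            value = np.exp(rng.uniform(np.log(lo), np.log(hi)))
        else:  # "linear"
            value = rng.uniform(lo, hi)
        theta[name] = int(round(value)) if spec["type"] == "int" else float(value)
    return theta

print(sample_hp(api_config))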
The HP configuration we consider for the single-shot baseline described in §5
is as follows:
config = {
"max_iter": 100,
"learning_rate": 0.1,
"min_samples_leaf": 20,
"l2_regularization": 0,
}
#### C.2.2 Kernel SVM with RBF kernel
For SVC(kernel="rbf") in scikit-learn, we use the following search space:
api_config = {
"C": {"type": "real", "space": "log", "range": (0.01, 1000.0)},
"gamma": {"type": "real", "space": "log", "range": (1e-5, 10.0)},
"tol": {"type": "real", "space": "log", "range": (1e-5, 1e-1)},
}
The single-shot baseline we consider for SVC from Auto-sklearn [Feurer et al.,
2015] is:
config = {
"C": 1.0,
"gamma": 0.1,
"tol": 1e-3,
}
#### C.2.3 Multi-Layered Perceptrons
For the MLPClassifier(solver="adam") from scikit-learn, we consider both
architectural HPs such as hidden-layer-sizes as well as optimizer parameters
such as alpha and learning-rate-init for the Adam optimizer [Kingma and Ba,
2015]. We consider the following search space:
api_config = {
"hidden_layer_sizes": {"type": "int", "space": "linear", "range": (50, 200)},
"alpha": {"type": "real", "space": "log", "range": (1e-5, 1e1)},
"learning_rate_init": {"type": "real", "space": "log", "range": (1e-5, 1e-1)},
}
We utilize the following single-shot baseline:
config = {
"hidden_layer_sizes": 100,
"alpha": 1e-4,
"learning_rate_init": 1e-3,
}
We fix the remaining HPs of MLPClassifier to the values used by Auto-sklearn:
activation="relu",
early_stopping=True,
shuffle=True,
batch_size="auto",
tol=1e-4,
validation_fraction=0.1,
beta_1=0.9,
beta_2=0.999,
epsilon=1e-8,
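For reference, a sketch of instantiating the classifier with these fixed
values plus the baseline HPs above (all keyword arguments below are standard
MLPClassifier parameters):
from sklearn.neural_network import MLPClassifier

clf = MLPClassifier(
    solver="adam",
    hidden_layer_sizes=(100,),
    alpha=1e-4,
    learning_rate_init=1e-3,
    activation="relu",
    early_stopping=True,
    shuffle=True,
    batch_size="auto",
    tol=1e-4,
    validation_fraction=0.1,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)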
### C.3 Detailed results of comparison against baselines
Here we present the relevant details and the performance of FLoRA on the
FL-HPO of (i) histogram-based gradient boosted trees (HGB) in Table 5, (ii)
nonlinear support vector machines (SVM) in Table 6, and (iii) multi-layered
perceptrons (MLP) in Table 7. We use the search spaces and the single-shot
baselines presented in §C.2. We utilize all 7 datasets for each of the methods
except for the Electricity dataset with SVM, because of the infeasible amount
of time taken by SVM on this dataset. For each setup, we report the following:
* •
Performance of the single-shot baseline (“SSBaseline”),
* •
the best centralized HPO performance (“Best”),
* •
the available “Gap” for improvement,
* •
the minimum accuracy of the best local HP across all parties “PMin”
$:=\min_{i\in[p]}\max_{t}(1-\tilde{\ell}(\boldsymbol{\theta}_{t}^{(i)},\mathcal{D}_{i}))$
* •
the maximum accuracy of the best local HP across all parties “PMax”
$:=\max_{i\in[p]}\max_{t}(1-\tilde{\ell}(\boldsymbol{\theta}_{t}^{(i)},\mathcal{D}_{i}))$
* •
$\gamma_{p}=\nicefrac{{\text{PMax}}}{{\text{PMin}}}$, and finally
* •
the regret for each of the considered loss surfaces in FLoRA.
For each of the three methods, we also report the aggregate performance over
all considered datasets in terms of mean $\pm$ standard deviation
(“mean$\pm$std”), inter-quartile range (“IQR”), Wins/Ties/Losses of FLoRA with
respect to the single-shot baseline (“W/T/L”), and a one-sided Wilcoxon Signed
Ranked Test of statistical significance (“WSRT”) with the null hypothesis that
the median of the difference between the single-shot baseline and FLoRA is
positive against the alternative that the difference is negative (implying
FLoRA improves over the baseline). These aggregate metrics are collected in
Table 8 along with a set of final aggregate metrics across all datasets and
methods.
Table 5: HGB
Data | SSBaseline | Best | Gap | PMin | PMax
---|---|---|---|---|---
PC3 | 58.99 | 63.81 | 4.82 | 61.67 | 64.37
Pollen | 48.86 | 52.21 | 3.35 | 51.83 | 52.64
Electricity | 87.75 | 92.84 | 5.10 | 88.42 | 89.19
Sonar | 87.43 | 91.25 | 3.82 | 83.75 | 88.33
Heart Statlog | 79.42 | 85.58 | 6.17 | 78.00 | 86.50
Oil Spill | 63.22 | 74.58 | 11.36 | 68.16 | 82.16
EEG Eye State | 89.96 | 94.66 | 4.70 | 91.80 | 92.29
Data | $\gamma_{p}$ | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---
PC3 | 1.04 | 0.66 | 0.72 | 0.39 | 0.38
Pollen | 1.02 | 0.43 | 0.54 | 0.43 | 0.69
Electricity | 1.01 | 0.17 | 0.14 | 0.09 | 0.12
Sonar | 1.05 | 1.33 | 0.41 | 0.92 | 0.71
Heart Statlog | 1.11 | 0.69 | 0.55 | 0.89 | 0.50
Oil Spill | 1.21 | 0.47 | 1.13 | 0.46 | 0.61
EEG Eye State | 1.01 | 0.14 | 0.12 | 0.11 | 0.12
mean$\pm$std | | 0.56 $\pm$ 0.37 | 0.52 $\pm$ 0.32 | 0.47 $\pm$ 0.31 | 0.45 $\pm$ 0.23
IQR | | [0.30, 0.47, 0.68] | [0.27, 0.54, 0.64] | [0.25, 0.43, 0.67] | [0.25, 0.50, 0.65]
WTL | | 6/0/1 | 6/0/1 | 7/0/0 | 7/0/0
WSRT | | (26, 0.02126) | (27, 0.01400) | (28, 0.00898) | (28, 0.00898)
##### HGB.
The results in Table 5 indicate that, in almost all cases, with all loss
functions, FLoRA is able to improve upon the baseline to varying degrees
(there is only one case where SGM performs worse than the baseline on Sonar).
On average (across the datasets), SGM+U, MPLM and APLM perform better than SGM
as we expected. MPLM performs better than SGM both in terms of average and
standard deviation. Looking at the individual datasets, we see that, for
datasets with low $\gamma_{p}$ (EEG eye state, Electricity), all the proposed
loss surfaces have low relative regret, indicating that the problem is easier,
as expected. For datasets with high $\gamma_{p}$ (Heart statlog, Oil spill),
the relative regrets of all loss surfaces are higher (but still much smaller
than 1), indicating that our proposed single-shot scheme can show improvement
even in cases where there is significant difference in the per-party losses
(and hence datasets).
Table 6: SVM
Data | SSBaseline | Best | Gap | PMin | PMax
---|---|---|---|---|---
Pollen | 49.48 | 50.30 | 0.82 | 51.55 | 53.55
Sonar | 80.20 | 89.29 | 9.09 | 83.33 | 87.92
Heart Statlog | 83.67 | 84.92 | 1.25 | 77.00 | 88.00
Oil Spill | 82.76 | 86.54 | 3.78 | 77.14 | 88.45
EEG Eye State | 50.24 | 60.51 | 10.28 | 69.54 | 71.72
PC3 | 74.03 | 77.96 | 3.92 | 75.26 | 76.95
Data | $\gamma_{p}$ | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---
Pollen | 1.04 | 1.35 | 1.45 | 2.84 | 2.30
Sonar | 1.06 | 0.17 | 0.17 | 0.27 | 0.17
Heart Statlog | 1.14 | 0.00 | 0.00 | 6.80 | 0.67
Oil Spill | 1.15 | 1.28 | 1.16 | 1.12 | 0.41
EEG Eye State | 1.03 | -0.01 | -0.01 | -0.02 | -0.01
PC3 | 1.02 | 0.59 | 0.79 | 0.70 | 0.79
mean$\pm$std | | 0.56 $\pm$ 0.57 | 0.59 $\pm$ 0.58 | 1.95 $\pm$ 2.35 | 0.72 $\pm$ 0.76
IQR | | [0.04, 0.38, 1.11] | [0.04, 0.48, 1.07] | [0.38, 0.91, 2.41] | [0.23, 0.54, 0.76]
WTL | | 4/0/2 | 4/0/2 | 3/0/3 | 5/0/1
WSRT | | (18, 0.05793) | (17, 0.08648) | (9, 0.62342) | (15, 0.17272)
##### SVM.
For SVM we continue with the datasets selected using HGB (datasets with a
“Gap” of at least 3%). Of the 7 datasets (Table 4), we skip Electricity
because it takes a prohibitively long time for SVM to be trained on this
dataset with a single HP. So we consider 6 datasets in this evaluation and
present the corresponding results in Table 6. Of the 6, note that 2 of these
datasets (Pollen, Heart Statlog) have really small “Gap” (highlighted in red
in Table 6). Moreover, 2 of the datasets (Heart statlog, Oil Spill) also have
really high $\gamma_{p}$ indicating a high level of heterogeneity between the
per-party distributions (again highlighted in red). In this case, there are a
couple of datasets (Oil Spill and Pollen) where FLoRA is unable to show any
improvement over the single-shot baseline (see underlined entries in Table 6),
but both these cases either have a small or moderate “Gap” and/or have a high
$\gamma_{p}$. Moreover, in one case, MPLM incurs a regret of 6.8, but this is
a case with really high $\gamma_{p}=1.14$ – MPLM rejects any HP that has a low
score in even one of the parties, and in that process rejects all promising
HPs, since the local HPOs on these disparate distributions did not concentrate
on the same region of the HP space, thereby incurring a high MPLM loss in
almost all regions of the HP space where some local HPO focused.
expected hard cases, FLoRA is able to improve upon the baseline in most cases,
and achieve optimal performance (zero regret) in a few cases (EEG Eye State,
Heart Statlog).
Table 7: MLP-Adam
Data | SSBaseline | Best | Gap | PMin | PMax
---|---|---|---|---|---
Pollen | 50.39 | 51.26 | 0.87 | 51.46 | 52.23
Electricity | 76.95 | 78.06 | 1.11 | 77.01 | 77.39
Sonar | 61.63 | 79.32 | 17.69 | 69.17 | 78.75
Heart Statlog | 72.17 | 85.17 | 13.00 | 79.50 | 89.50
Oil Spill | 50.00 | 65.22 | 15.22 | 54.83 | 63.63
EEG Eye State | 49.99 | 51.66 | 1.67 | 50.02 | 51.84
PC3 | 50.00 | 59.56 | 9.56 | 53.47 | 56.60
Data | $\gamma_{p}$ | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---
Pollen | 1.02 | 1.88 | 1.45 | 1.45 | 1.31
Electricity | 1.00 | 0.24 | 0.41 | 0.16 | 0.53
Sonar | 1.14 | 0.26 | 0.55 | 0.52 | 0.39
Heart Statlog | 1.13 | 0.46 | 0.37 | 0.42 | 0.28
Oil Spill | 1.16 | 0.80 | 1.03 | 1.00 | 0.79
EEG Eye State | 1.04 | 0.99 | 0.99 | 0.99 | 0.99
PC3 | 1.06 | 0.96 | 1.00 | 0.89 | 0.90
mean$\pm$std | | 0.80 $\pm$ 0.53 | 0.83 $\pm$ 0.37 | 0.78 $\pm$ 0.40 | 0.74 $\pm$ 0.34
IQR | | [0.36, 0.80, 0.97] | [0.48, 0.99, 1.01] | [0.47, 0.89, 1.00] | [0.46, 0.79, 0.95]
WTL | | 6/0/1 | 4/1/2 | 5/1/1 | 6/0/1
WSRT | | (21, 0.11836) | (15, 0.17272) | (18, 0.05793) | (24, 0.04548)
##### MLP.
We consider all 7 datasets for the evaluation of FLoRA on FL-HPO for MLP HPs
and present the results in Table 7. As with SVM, there are a few datasets with
little room for improvement (“Gap”) and/or high $\gamma_{p}$, again
highlighted in red in Table 7. In some of these cases, FLoRA is unable to
improve upon the single-shot baseline (Pollen, EEG Eye State). Other than
these hard cases, FLoRA is again able to show significant improvement over the
single-shot baseline, with APLM performing the best.
Table 8: Aggregate Table Agg. | Method | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---
mean $\pm$ std. | HGB | 0.56 $\pm$ 0.37 | 0.52 $\pm$ 0.32 | 0.47 $\pm$ 0.31 | 0.45 $\pm$ 0.23
| SVM | 0.56 $\pm$ 0.57 | 0.59 $\pm$ 0.58 | 1.95 $\pm$ 2.35 | 0.72 $\pm$ 0.76
| MLP | 0.80 $\pm$ 0.53 | 0.83 $\pm$ 0.37 | 0.78 $\pm$ 0.40 | 0.74 $\pm$ 0.34
| Overall | 0.64 $\pm$ 0.51 | 0.64 $\pm$ 0.51 | 1.02 $\pm$ 1.46 | 0.63 $\pm$ 0.50
IQR | HGB | [0.30, 0.47, 0.68] | [0.27, 0.54, 0.64] | [0.25, 0.43, 0.67] | [0.25, 0.50, 0.65]
| SVM | [0.04, 0.38, 1.11] | [0.04, 0.48, 1.07] | [0.38, 0.91, 2.41] | [0.23, 0.54, 0.76]
| MLP | [0.36, 0.80, 0.97] | [0.48, 0.99, 1.01] | [0.47, 0.89, 1.00] | [0.46, 0.79, 0.95]
| Overall | [0.22, 0.53, 0.97] | [0.32, 0.55, 1.01] | [0.36, 0.61, 0.99] | [0.36, 0.57, 0.79]
W/T/L | HGB | 6/0/1 | 6/0/1 | 7/0/0 | 7/0/0
| SVM | 4/0/2 | 4/0/2 | 3/0/3 | 5/0/1
| MLP | 6/0/1 | 4/1/2 | 5/1/1 | 6/0/1
| Overall | 16/0/4 | 14/1/5 | 15/1/4 | 18/0/2
WSRT 1 sided | HGB | (26, 0.02126) | (27, 0.01400) | (28, 0.00898) | (28, 0.00898)
| SVM | (18, 0.05793) | (17, 0.08648) | (9, 0.62342) | (15, 0.17272)
| MLP | (21, 0.11836) | (15, 0.17272) | (18, 0.05793) | (24, 0.04548)
| Overall | (174, 0.00499) | (164, 0.00272) | (141, 0.03206) | (183.5, 0.00169)
##### Aggregate.
The results for all the methods and datasets are aggregated in Table 8. All
FLoRA loss surfaces show strong performance with respect to the single-shot
baseline, with significantly more wins than losses, and 3rd-quartile regret
values less than 1 (indicating improvement over the baseline). All FLoRA loss
surfaces have a p-value of less than $0.05$, indicating that we can reject the
null hypothesis. Overall, APLM shows the best performance over all loss
surfaces, both in terms of Wins/Ties/Losses over the baseline as well as in
terms of the Wilcoxon Signed Rank Test, with the highest statistic and a
p-value close to $10^{-3}$. APLM also has significantly lower 3rd-quartile
than all other loss surfaces. MPLM appears to have the worst performance, but
much of that is attributable to the really high regrets of 6.8 and 2.84 it
received for SVM with Heart Statlog and Pollen (both hard cases as discussed
earlier). Otherwise, MPLM performs second best both for FL-HPO with HGB and
MLP.
### C.4 Effect of the number of local HPO rounds per party
In this experiment, we report additional results to study the effect of the
“thoroughness” of the local HPO runs (in terms of the number of HPO rounds
$T$) on the overall performance of FLoRA for all the loss surfaces in Table 9.
In almost all cases, FLoRA does not require $T$ to be very large to gather
enough information about the local HPO loss surfaces and reach its best
possible performance.
Table 9: Effect of $T$. Method | data | $T$ | $\gamma_{p}$ | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---|---|---
MLP | Heart Statlog | 5 | 1.13 | 0.58 | 0.33 | 0.22 | 0.56
| | 10 | 1.13 | 0.33 | 0.16 | 0.60 | 0.39
| | 20 | 1.13 | 0.49 | 0.15 | 0.24 | 0.44
| | 40 | 1.13 | 0.44 | 0.30 | 0.42 | 0.29
| | 60 | 1.13 | 0.37 | 0.15 | 0.33 | 0.22
| | 80 | 1.13 | 0.35 | 0.40 | 0.35 | 0.26
MLP | Sonar | 5 | 1.14 | 0.38 | 0.38 | 0.51 | 0.78
| | 10 | 1.14 | 0.45 | 0.23 | 0.43 | 0.62
| | 20 | 1.14 | 0.39 | 0.24 | 0.36 | 0.30
| | 40 | 1.14 | 0.23 | 0.37 | 0.65 | 0.49
| | 60 | 1.14 | 0.53 | 0.14 | 0.34 | 0.48
| | 80 | 1.14 | 0.46 | 0.07 | 0.19 | 0.30
SVM | Sonar | 5 | 1.06 | 0.17 | 0.17 | 1.16 | 0.28
| | 10 | 1.06 | 0.17 | 0.43 | 0.34 | 0.27
| | 20 | 1.06 | 0.17 | 0.17 | 0.17 | 0.17
| | 40 | 1.06 | 0.17 | 0.17 | 0.22 | 0.27
| | 60 | 1.06 | 0.17 | 0.17 | 0.17 | 0.17
| | 80 | 1.06 | 0.17 | 0.11 | 0.27 | 0.17
SVM | EEG | 5 | 1.03 | -0.01 | -0.01 | 0.92 | 0.16
| | 10 | 1.03 | -0.01 | -0.01 | -0.01 | 0.14
| | 20 | 1.03 | -0.01 | -0.01 | -0.00 | -0.00
| | 40 | 1.03 | -0.02 | -0.02 | 0.01 | 0.00
| | 60 | 1.03 | -0.01 | -0.01 | 0.02 | 0.03
| | 80 | 1.03 | -0.01 | -0.01 | 0.01 | -0.0
### C.5 Effect of communication overhead
While in the previous experiment, we studied the effect of the thoroughness of
the local HPO runs on the performance of FLoRA, here we consider a subtly
different setup. We assume that each party performs $T=100$ rounds of local
asynchronous HPO. However, instead of sending all $T$ (HP, loss) pairs, we
consider sending $T^{\prime}<T$ of the “best” (HP, loss) pairs – that is, (HP,
loss) pairs with the $T^{\prime}$ lowest losses. Changing the value of
$T^{\prime}$ trades off the communication overhead of the FLoRA step where the
aggregators collect the per-party loss pairs (Algorithm 1, line 5). We
consider 2 datasets each for 2 of the methods (SVM, MLP) and all the loss
surfaces for FLoRA, and report all the results in Table 10.
Table 10: Effect of the number of best (HP, loss) pairs $T^{\prime}<T$ sent to aggregator by each party after doing local HPO with $T=100$. Method | data | $T^{\prime}<T$ | $\gamma_{p}$ | SGM | SGM+U | MPLM | APLM
---|---|---|---|---|---|---|---
MLP | Heart Statlog | 5 | 1.13 | 0.33 | 0.27 | 0.38 | 0.71
| | 10 | 1.13 | 0.35 | 0.31 | 0.33 | 1.72
| | 20 | 1.13 | 0.42 | 0.39 | 2.02 | 0.55
| | 40 | 1.13 | 0.34 | 0.44 | 0.88 | 0.51
| | 60 | 1.13 | 0.38 | 0.22 | 0.31 | 0.32
| | 80 | 1.13 | 0.34 | 0.38 | 0.22 | 0.33
MLP | Sonar | 5 | 1.14 | 0.39 | 0.50 | 1.78 | 0.65
| | 10 | 1.14 | 0.73 | 0.18 | 1.66 | 0.58
| | 20 | 1.14 | 0.20 | 0.41 | 1.23 | 0.37
| | 40 | 1.14 | 0.60 | 0.42 | 0.18 | 0.51
| | 60 | 1.14 | 0.10 | 0.33 | 0.55 | 0.26
| | 80 | 1.14 | 0.47 | 0.41 | 0.34 | 0.32
SVM | EEG Eye State | 5 | 1.03 | -0.02 | -0.01 | 0.39 | 1.02
| | 10 | 1.03 | -0.01 | -0.01 | 1.02 | 1.02
| | 20 | 1.03 | -0.01 | -0.01 | 0.01 | -0.01
| | 40 | 1.03 | -0.01 | -0.01 | -0.00 | -0.01
| | 60 | 1.03 | -0.01 | -0.01 | -0.01 | -0.01
| | 80 | 1.03 | -0.01 | -0.01 | -0.01 | -0.01
SVM | Sonar | 5 | 1.06 | 0.17 | 0.17 | 0.43 | 1.43
| | 10 | 1.06 | 0.17 | 0.17 | 0.17 | 0.17
| | 20 | 1.06 | 0.17 | 0.17 | 0.22 | 0.17
| | 40 | 1.06 | 0.17 | 0.38 | 0.27 | 0.17
| | 60 | 1.06 | 0.17 | 0.43 | 0.27 | 0.17
| | 80 | 1.06 | 0.17 | 0.43 | 0.27 | 0.17
# A new galaxy spectral energy distribution model consistent with the
evolution of dust
Kazuki Y. Nishida,1 Tsutomu T. Takeuchi,1,2 Takuma Nagata,1 and Ryosuke S.
Asano1
1Division of Particle and Astrophysical Science, Nagoya University, Furo-cho,
Chikusa-ku, Nagoya, 464–8602, Japan
2The Research Center for Statistical Machine Learning, The Institute of
Statistical Mathematics, 10-3 Midori-cho, Tachikawa, Tokyo 190–8562, Japan
E-mail: <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The spectral energy distribution (SED) of galaxies provides fundamental
information on the related physical processes. However, the SED is
significantly affected by dust in its interstellar medium. Dust is mainly
produced by asymptotic giant branch stars and Type II supernovae. In addition,
the dust mass increases through metal accretion, and the grain size changes
through collisions between grains. The contribution of each process
and the extinction depend on the size distribution. Therefore, the SED model
should treat the evolution of the dust mass and size distribution. In spite of
the importance of dust evolution, many previous SED models have not considered
the evolution of the total mass and size distribution in a physically
consistent manner. In this work, we constructed a new radiative transfer SED
model, based on our dust evolution model consistent with the chemical
evolution. To reduce the computational cost, we adopted the mega-grain and the
one-dimensional plane parallel galaxy approximation. As a fiducial case, we
calculated Milky Way-like galaxy SEDs at various ages under the closed-box
model. We found that a galaxy at the age of 100 Myr does not produce small
grains such as polycyclic aromatic hydrocarbons. After 1 Gyr, we observed a
drastic increase of infrared emission and attenuation caused by a rapid
increase of dust mass. This phenomenon can be treated appropriately for the
first time by our new model. This model can be used for the SED fitting to a
galaxy at any stage of evolution.
###### keywords:
dust, extinction – galaxies: evolution – ISM: evolution – radiative transfer –
galaxies: ISM – galaxies: disc
## 1 Introduction
The spectral energy distribution (SED) fitting is a fundamental method to
extract the information of the physical processes in galaxies (e.g., star
formation rate: SFR, stellar mass, dust mass) from observational data. Stars
emit photons with wavelengths ranging from ultraviolet (UV) to near-infrared
(NIR). Dust grains absorb and scatter the photons emitted from stars, and re-
emit the absorbed energy at mid-infrared (MIR) to far-infrared (FIR). In
addition to the radiative aspect of a galaxy, dust grains promote the
formation of hydrogen molecules on the surface of the grains (e.g., Hollenbach
& McKee, 1979; Hirashita & Ferrara, 2002; Cazaux et al., 2005). Since hydrogen
molecules are one of the fundamental ingredients of the star formation, dust
grains directly activate the star formation in galaxies.
Dust grains are solid particles consisting of elements heavier than helium,
and are produced by stellar mass loss and supernovae (SNe). Outflows from low-
and intermediate-mass stars during the thermally pulsing asymptotic giant
branch (TP-AGB) phase and Type II SNe (SNe II) are considered to be the
primary
sources of dust (e.g., Nozawa et al., 2007; Bianchi & Schneider, 2007;
Zhukovska et al., 2008). Dust grains are formed by condensation of heavy
elements in the atmosphere of massive stars, and only the grains that could
survive the reverse shock of the SN are finally expelled into the interstellar
medium (ISM). Then, the blast waves from SNe propagating in the ISM also
destroy dust grains (e.g., Jones et al., 1994, 1996; Nozawa et al., 2003;
Nozawa et al., 2006; Zhukovska et al., 2008; Yamasawa et al., 2011). This
destruction process has been confirmed by observations of several supernova
remnants (e.g., Borkowski et al., 2006; Arendt et al., 2010). Details of the
dust grain survival still remain controversial. It might depend on their
composition and size¹ (e.g., Nozawa et al., 2007; Gall et al., 2014; Slavin et
al., 2020), while others claim that it depends on the clumpiness of the ejecta
(e.g., Biscaro & Cherchneff, 2016). In addition, Matsuura et al. (2019) argue
that dust destruction by the SN is suppressed because atoms can stick to the
surviving dust grains during the passage of the forward shock region, which
can reform or grow dust grains. Observations of SN remnants (SNRs) do not give
a final answer, since the dust mass and composition are significantly
different among the observed SNRs.
¹Terminologies such as grain size, size distribution, etc. are often used in
articles related to dust. Throughout this paper, when we mention the “size” of
a dust grain, it always means the dust grain radius under the assumption of a
spherical shape.
In addition to the dust production from stars, dust growth in the ISM is also
an important process, and necessary to explain the large amount of dust
present in galaxies (e.g., Asano et al., 2013a; Zhukovska, 2014; Michałowski,
2015; Lesniewska & Michałowski, 2019). In the cold phase of the ISM, metals
are accreted onto the dust grain surfaces, which increases the size and total
mass of the dust (e.g., Dwek, 1998; Zhukovska et al., 2008; Michałowski et
al., 2010;
Hirashita & Kuo, 2011; Asano et al., 2013a). The size distribution of the dust
is also changed through the collisions between dust grains (e.g., Yan et al.,
2004; Jones et al., 1996; Hirashita & Yan, 2009; Kuo & Hirashita, 2012).
Which process controls the mass of dust varies greatly, depending on the age
and the environment of the galaxy, and it is still actively debated from
several different points of view. The SNe II dominates the dust production,
especially in very young galaxies, because SNe II have a shorter lifetime
($<30$ Myr) than AGB stars ($>150$ Mry) (e.g., Morgan & Edmunds, 2003;
Marchenko, 2006; Dwek et al., 2007; Valiante et al., 2009; Gall et al., 2011a,
b; Liu & Hirashita, 2019; De Looze et al., 2020; Burgarella et al., 2020;
Nanni et al., 2020). We should note, however, that the contribution of AGB
cannot be ignored even in galaxies with the age of 500 Myr, when the star
formation rate (SFR) is high (Valiante et al., 2009).
The debate of dust grain growth in high-$z$ galaxies has not been settled. In
high-$z$ galaxies, several studies claim that the process is not effective
because there is not enough time for growth, the gas density is low, and the
temperature is high (e.g., Ferrara et al., 2016; Ceccarelli et al., 2018).
Since dust
growth in the ISM is strongly affected by the metallicity in a galaxy (e.g.,
Inoue, 2003; Asano et al., 2013a), they claim that it might not be very
important in young galaxies with low metallicity. However, other studies have
shown that the dust mass in distant galaxies cannot be explained without
considering metal accretion (e.g., Pipino et al., 2011; Valiante et al., 2011;
Zhukovska, 2014; Michałowski, 2015; Mancini et al., 2015; Lesniewska &
Michałowski, 2019; Rouillé et al., 2020).
Asano et al. (2013a) defined the critical metallicity, $Z_{\mathrm{cr}}$, as
the metallicity of the ISM, at which the production rate of dust from stars
(AGB and SNe II) becomes equal to the mass growth rate in the ISM. When the
metallicity reaches $Z_{\mathrm{cr}}$, the dust mass increases suddenly and
nonlinearly. This rapid increase in dust mass is caused by the following
process (e.g., Hirashita & Yan, 2009; Asano et al., 2013a, b). First, the
metal accretion depends on the metallicity and total surface area of the dust
grains. As the dust size increases through the metal accretion, yet another
process of dust evolution in the ISM, shattering, is more likely to occur.
This process is basically the collision of grains with each other and
redistribute the mass of dust into smaller-sized grains. When the shattering
becomes effective, the total surface area of the dust grain per mass
increases, and the metal accretion becomes more efficient. This cycle leads to
the sharp increase of the total dust mass along with metallicity. Therefore,
not only the total dust mass, but also it is of vital importance to take into
account the grain size distribution to discuss the evolution of dust in
galaxies.
The absorption and scattering coefficients of dust as a function of wavelength
depend on the size and composition of grains. When a dust grain absorbs light,
its temperature rises and the absorbed energy is re-emitted at longer
wavelengths (mainly IR). The wavelength of the re-emission depends on the
instantaneous temperature of the grain, and the temperature strongly depends
on the size of the grain. Thus, as already mentioned, the mass, size, and
composition of dust grains play a fundamental role in shaping the SED of a
galaxy. Fitting the SED of distant galaxies with an empirical dust emission
model that does not include this evolution may lead to erroneous results;
this happens, for example, when we use a model in which the dust size
distribution is held constant.
Recently, some galaxies with a large amount of dust
($M_{\mathrm{dust}}>10^{6}~{}M_{\odot}$) have been observed at $z>6$ (e.g.,
Watson et al., 2015; Laporte et al., 2017; Tamura et al., 2019). With the
advent of the Atacama Large Millimeter/submillimeter Array (ALMA) and other
large facilities in this wavelength range, now is the proper moment to develop
a new SED model based on the theory of dust evolution.
The dust evolution in the ISM has been considered by a number of previous
studies (e.g., Dwek, 1998; Calura et al., 2008; da Cunha et al., 2010; Asano
et al., 2013a, b, 2014; Mancini et al., 2015; Schneider et al., 2016; Ginolfi
et al., 2018; De Vis et al., 2017, 2019; Hirashita & Aoyama, 2019; De Looze et
al., 2020; Nanni et al., 2020; Burgarella et al., 2020). We build on the
theoretical framework of dust evolution proposed by Asano et al. (2013a, b,
2014) and Nozawa et al. (2015) (hereafter the Asano model) to develop a new
radiative transfer SED model. The Asano model considers SNe II and AGB stars
as dust production sources, and includes not only metal accretion but also
shattering and coagulation as dust evolution processes in the ISM, which
enables us to determine the dust mass and size distribution at all galaxy ages
from first principles. Hirashita & Aoyama (2019) developed a dust evolution
model also based on the Asano model, but with better computational performance
for application to cosmological simulations. Note that the Asano model
considers different physical quantities (e.g., ambient gas density of SNe,
hydrogen gas density, and magnetic field strength) for various galaxies to
treat dust destruction by SNe and dust collisions. However, when considering
dust on cosmological scales, it is impossible to reach galaxy-scale
resolution. Therefore, Hirashita & Aoyama (2019) adopt many simplifications to
optimize their model for such simulations. In contrast, since we aim at
calculating the SED of an individual galaxy, we make maximal use of the Asano
model.
There have been several SED models that include the evolution of dust in the
ISM. For example, Schurer et al. (2009) present an SED model based on the dust
model of Calura et al. (2008). They calculate the chemical evolution in a
single gas phase and the dust evolution including metal accretion. However,
since Schurer et al. (2009) do not consider shattering and coagulation, the
rapid increase of the total dust mass does not occur. Version 3 of Pégase
(Fioc & Rocca-Volmerange, 2019) considers the evolution of the dust mass, and
can calculate not only the radiation from stars but also the extinction by
dust grains and the radiation of dust with a stochastic temperature
distribution. However, Pégase does not take into account the dust size
distribution, and assumes that the fraction and the size distribution of each
grain species do not evolve. In this paper, we construct a new galaxy SED
model that includes the dust evolution theory proposed by Asano et al. (2013a,
b, 2014). We adopt the mega-grain approximation (MGA) with a one-dimensional
plane-parallel galaxy (Varosi & Dwek, 1999; Inoue, 2005) to make the radiative
transfer calculation faster.
This paper is organized as follows. In Section 2, we introduce how to
calculate the SED for each component. In Section 3, as an example of our SED
model, we show the SED of a Milky Way (MW)-like galaxy. In Section 4, we
discuss the effect of parameters on the model SEDs. Section 5 is devoted to
the conclusions.
## 2 Methods: Construction of SED model
To construct a galaxy SED model, we combine the stellar SED calculated by
version 2 of Pégase (Fioc & Rocca-Volmerange, 1999, Pégase.2), the dust
evolution model based on Asano et al. (2013a, b, 2014), the dust attenuation
calculated by radiative transfer with the MGA in a one-dimensional galaxy
(Varosi & Dwek, 1999; Inoue, 2005), and the dust emission computed by a Monte
Carlo simulation. In this section, we present how each component is
calculated. §2.1 introduces the equations of mass evolution in a galaxy. §2.2
gives the details of the dust chemical evolution model. §2.3 is an overview of
the calculation of the stellar SED by Pégase.2. §2.4 and §2.5 describe the
dust properties with the mega-grain approximation and the radiative transfer
in a one-dimensional galaxy, respectively. In §2.6 and §2.7, we explain the
calculation of the dust temperature distribution by a Monte Carlo simulation
and of the dust emission from that distribution.
### 2.1 Equations governing galaxy evolution
We consider stars, gas, and dust grains as the components of a model galaxy.
For simplicity, we assume a one-zone galaxy model, where the physical
quantities vary uniformly over the entire galaxy. The time evolution of the
total stellar mass $M_{\ast}(t)$, the ISM mass $M_{\mathrm{ISM}}(t)$, the
metal mass $M_{\mathrm{Z}}(t)$, and the dust mass $M_{\mathrm{d}}(t)$ at a
galaxy age $t$ is represented as (Lisenfeld & Ferrara, 1998; Asano et al.,
2013a)
$\displaystyle\frac{\mathrm{d}M_{\ast}(t)}{\mathrm{d}t}=\mathrm{SFR}(t)-R(t),$ (1)

$\displaystyle\frac{\mathrm{d}M_{\mathrm{ISM}}(t)}{\mathrm{d}t}=-\mathrm{SFR}(t)+R(t)+\frac{\mathrm{d}M_{\mathrm{infall}}(t)}{\mathrm{d}t},$ (2)

$\displaystyle\frac{\mathrm{d}M_{\mathrm{Z}}(t)}{\mathrm{d}t}=-Z(t)\,\mathrm{SFR}(t)+R_{\mathrm{Z}}(t)+Y_{\mathrm{Z}}(t),$ (3)

$\displaystyle\frac{\mathrm{d}M_{\mathrm{d}}(t)}{\mathrm{d}t}=-D(t)\,\mathrm{SFR}(t)+Y_{\mathrm{d}}(t)-\Big{(}\frac{\mathrm{d}M_{\mathrm{d}}(t)}{\mathrm{d}t}\Big{)}_{\mathrm{SN}}+\Big{(}\frac{\mathrm{d}M_{\mathrm{d}}(t)}{\mathrm{d}t}\Big{)}_{\mathrm{acc}},$ (4)
where $\mathrm{SFR}(t)$ is the star formation rate, and $R(t)$ and
$R_{\mathrm{Z}}(t)$ are the masses of gas and metal, respectively, that were
taken into stars from the ISM and are returned to the ISM per unit time when
stars die. $\mathrm{d}M_{\mathrm{infall}}/\mathrm{d}t$ is the gas infall rate,
which is assumed to be zero in this paper except in §4.5. $Z(t)\equiv
M_{\mathrm{Z}}/M_{\mathrm{ISM}}$ is the metallicity, and $D(t)\equiv
M_{\mathrm{d}}/M_{\mathrm{ISM}}$ is the dust-to-gas mass ratio.
$Y_{\mathrm{Z}}(t)$ and $Y_{\mathrm{d}}(t)$ are the metal and dust masses
newly produced by stars per unit time, respectively.
$(\mathrm{d}M_{\mathrm{d}}(t)/\mathrm{d}t)_{\mathrm{SN}}$ and
$(\mathrm{d}M_{\mathrm{d}}(t)/\mathrm{d}t)_{\mathrm{acc}}$ are the changes of
the grain mass caused by SN shocks and by metal accretion, respectively. In
the Asano model, three phases are assumed in the ISM: the warm neutral medium
(WNM, with gas temperature $T_{\mathrm{gas}}=6000$ K and hydrogen number
density $n_{\mathrm{H}}=0.3~{}\mathrm{cm^{-3}}$), the cold neutral medium
(CNM, with $T_{\mathrm{gas}}=100$ K and $n_{\mathrm{H}}=30~{}\mathrm{cm^{-3}}$),
and molecular clouds (MC, with $T_{\mathrm{gas}}=25$ K and
$n_{\mathrm{H}}=300~{}\mathrm{cm^{-3}}$) (Nozawa et al., 2015). In the MC,
dust grains form icy mantles on their surfaces (e.g., Kalvans, 2017;
Ceccarelli et al., 2018). However, since the properties of the icy mantle are
not yet well understood, we do not consider its effect in this work. As for
the dust growth in the ISM, metal accretion occurs only in the CNM and MC,
while shattering and coagulation occur in all three phases. Because metal
accretion is effective in high-density regions, the grain growth in the MC is
more prominent. In this paper, we fix the phase fractions of the WNM, CNM, and
MC to $\eta_{\mathrm{WNM}}=0.5$, $\eta_{\mathrm{CNM}}=0.3$, and
$\eta_{\mathrm{MC}}=0.2$, respectively, the same values as those used by
Nozawa et al. (2015) to reproduce the MW extinction curve. These fractions are
held constant throughout the calculation of the dust model. At each time step,
the dust grains are redistributed into the ISM phases so that the mass
fraction of each phase is maintained; thus, we account for dust grain cycling
between the different ISM phases. We do not consider outflow effects in this
model. We assume that at the age $t=0$ the galaxy contains no stars, no
metals, and no dust, and contains only zero-metallicity gas (i.e.,
$M_{\ast}(0)=M_{\mathrm{Z}}(0)=M_{\mathrm{d}}(0)=0$, and $M_{\mathrm{ISM}}(0)$
is the total galaxy mass).
We adopt the Schmidt law (Schmidt, 1959), $\mathrm{SFR}(t)\propto
M^{n}_{\mathrm{ISM}}$ for SFR with $n=1$ for simplicity, as
$\mathrm{SFR}(t)=\frac{M_{\mathrm{ISM}}(t)}{\tau_{\mathrm{SF}}},$ (5)
where $\tau_{\mathrm{SF}}$ is the timescale of star formation. In this paper,
the initial galaxy mass $M_{\mathrm{ISM}}(0)$ and $\tau_{\mathrm{SF}}$ are set
to $10^{11}~{}\mathrm{M_{\odot}}$ and $3$ Gyr as fiducial values. $R(t)$,
$R_{\mathrm{Z}}(t)$, $Y_{\mathrm{Z}}(t)$, and $Y_{\mathrm{d}}(t)$ are
represented as
$\displaystyle R(t)=\int^{100~{}M_{\odot}}_{m_{\mathrm{min}}(t)}[m-\omega(m,Z(t-\tau_{m}))]\,\phi(m)\,\mathrm{SFR}(t-\tau_{m})\,\mathrm{d}m,$ (6)

$\displaystyle R_{\mathrm{Z}}(t)=\int^{100~{}M_{\odot}}_{m_{\mathrm{min}}(t)}[m-\omega(m,Z(t-\tau_{m}))]\,\phi(m)\,\mathrm{SFR}(t-\tau_{m})\,Z(t-\tau_{m})\,\mathrm{d}m,$ (7)

$\displaystyle Y_{\mathrm{Z}}(t)=\int^{100~{}M_{\odot}}_{m_{\mathrm{min}}(t)}m_{\mathrm{Z}}(m,Z(t-\tau_{m}))\,\phi(m)\,\mathrm{SFR}(t-\tau_{m})\,\mathrm{d}m,$ (8)

$\displaystyle Y_{\mathrm{d}}(t)=\int^{100~{}M_{\odot}}_{m_{\mathrm{min}}(t)}m_{\mathrm{d}}(m,Z(t-\tau_{m}))\,\phi(m)\,\mathrm{SFR}(t-\tau_{m})\,\mathrm{d}m,$ (9)
where $m_{\mathrm{min}}(t)$ is the lower limit of the mass of stars that end
their lives at time $t$, $\phi(m)$ is the initial mass function (IMF),
$\omega(m,Z(t-\tau_{m}))$ is the remnant mass that remains after a star dies,
and $m_{\mathrm{Z}}(m,Z(t-\tau_{m}))$ and $m_{\mathrm{d}}(m,Z(t-\tau_{m}))$
are the metal mass and the dust mass newly produced by a star of mass $m$ and
metallicity $Z(t-\tau_{m})$, respectively. As for $\omega$ and
$m_{\mathrm{Z}}$, we adopt Ventura et al. (2013) for AGB stars with mass
$m=1$–$8~{}M_{\odot}$ and metallicity $Z=(0.015$, 0.4) $Z_{\odot}$, and
Kobayashi et al. (2006) for SNe II with progenitor mass
$m=13$–$40~{}M_{\odot}$ and metallicity $Z=(0.0$, 0.05, 0.3, 1.0) $Z_{\odot}$.
We interpolate and extrapolate all data tables over mass and metallicity in
this paper. $\tau_{m}$ is the lifetime of a star with mass $m$ and we use the
following equation by Raiteri et al. (1996),
$\log\tau_{m}=a_{0}(Z)+a_{1}(Z)\log m+a_{2}(Z)(\log m)^{2},$ (10)
with

$\displaystyle a_{0}(Z)=10.13+0.07547\log Z-0.008084(\log Z)^{2},$ (11)

$\displaystyle a_{1}(Z)=-4.424-0.7939\log Z-0.1187(\log Z)^{2},$ (12)

$\displaystyle a_{2}(Z)=1.262+0.3385\log Z+0.05417(\log Z)^{2}.$ (13)
This equation was obtained by fitting the stellar evolution calculations of
the Padova group (Alongi et al., 1993; Bressan et al., 1993; Bertelli et al.,
1994) for stars in the mass range 0.6–120 $M_{\odot}$ and the metallicity
range 0.0004–0.05. We use the Salpeter IMF (Salpeter, 1955):
$\phi(m)\propto m^{-2.35}.$ (14)
The IMF is normalized as
$\int^{100~{}M_{\odot}}_{0.1~{}M_{\odot}}\phi(m)m~{}\mathrm{d}m=1~{}M_{\odot}.$
(15)
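To make the bookkeeping of §2.1 concrete, the following Python sketch (not
part of any published code) implements the Salpeter IMF normalization of
Equation (15), the lifetime fit of Equations (10)–(13), and a forward-Euler
integration of Equations (1), (2), and (5). The return term $R(t)$ is set to
zero for brevity, since it requires the full yield tables, so this is a
closed-box toy rather than the actual model.

```python
import numpy as np

# Salpeter IMF phi(m) ~ m^-2.35, normalized so that the integral of
# m*phi(m) from 0.1 to 100 M_sun equals 1 M_sun (Eq. 15).
M_LO, M_HI = 0.1, 100.0                      # integration range [M_sun]
A_IMF = 0.35 / (M_LO**-0.35 - M_HI**-0.35)   # analytic normalization

def phi(m):
    """Salpeter IMF (Eq. 14) with the normalization of Eq. (15)."""
    return A_IMF * m**-2.35

def log10_lifetime(m, Z):
    """log10 of the stellar lifetime in yr (Raiteri et al. 1996; Eqs. 10-13)."""
    lz, lm = np.log10(Z), np.log10(m)
    a0 = 10.13 + 0.07547 * lz - 0.008084 * lz**2
    a1 = -4.424 - 0.7939 * lz - 0.1187 * lz**2
    a2 = 1.262 + 0.3385 * lz + 0.05417 * lz**2
    return a0 + a1 * lm + a2 * lm**2

# Forward-Euler integration of Eqs. (1)-(2) with the Schmidt law (Eq. 5).
# R(t) is set to zero here (closed-box toy without the yield tables).
tau_SF = 3e9                        # star-formation timescale [yr]
M_ISM, M_star = 1e11, 0.0           # fiducial initial conditions [M_sun]
dt = 1e6                            # time step [yr]
for _ in range(int(13e9 / dt)):
    SFR = M_ISM / tau_SF            # Eq. (5), Schmidt law with n = 1
    M_star += SFR * dt              # Eq. (1) with R(t) = 0
    M_ISM -= SFR * dt               # Eq. (2), no infall
print(f"M_star = {M_star:.3e} M_sun, M_ISM = {M_ISM:.3e} M_sun")
```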
### 2.2 Dust evolution model
In this paper, we adopt the Asano model. The Asano model takes into account
ten dust species: C, Si, SiO$_{2}$, Fe, FeS, Al$_{2}$O$_{3}$, MgO,
MgSiO$_{3}$, Mg$_{2}$SiO$_{4}$, and Fe$_{3}$O$_{4}$ (Nozawa et al., 2007;
Zhukovska et al., 2008). Since the Asano model
calculates the evolution for each dust species, the dust composition evolves
with time. For simplicity in this work, we divide dust grains into two
representative families, silicate and carbonaceous grains, for the grain
growth and grain-grain collision, because the optical properties of other dust
species are not well understood yet. Further, among the carbonaceous grains,
the smaller ones are treated as polycyclic aromatic hydrocarbon (PAH) grains.
The fraction of graphite in carbonaceous dust grain is obtained by the
following formula (Draine & Li, 2007),
$f_{\mathrm{gra}}=\begin{cases}0.01&(a<50~{}\mathrm{\mathring{A}})\\\
0.01+0.99\left[1-\left(\frac{50~{}\mathrm{\mathring{A}}}{a}\right)^{3}\right]&(a>50~{}\mathrm{\mathring{A}})\end{cases}.$
(16)
where $a$ is the dust grain radius, and the fraction of PAHs is defined as
$f_{\mathrm{PAH}}=1-f_{\mathrm{gra}}$. The PAHs are further divided into
ionized and neutral PAHs, which have different optical properties; the
fraction of ionized PAHs is taken from Figure 7 of Li & Draine (2001). A part
of the carbonaceous grains could consist of amorphous carbon. According to
Nozawa et al. (2015), graphite is suitable for reproducing the attenuation
curves of nearby galaxies like the MW, whereas amorphous carbon is suitable
for galaxies that do not show the 2175 Å bump in the attenuation curve, such
as high-$z$ quasars. Since the
differences between these compositions are not well understood yet, we only
consider graphite grains in this work. It is also possible to take into
account amorphous carbon in our SED model.
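For reference, Equation (16) translates directly into a small function; the
only assumption here is that the radius argument is given in Angstrom.

```python
def f_graphite(a_angstrom):
    """Graphite fraction of carbonaceous grains, Eq. (16) (Draine & Li 2007).

    The grain radius is given in Angstrom; the PAH fraction follows as
    f_PAH = 1 - f_graphite.
    """
    if a_angstrom < 50.0:
        return 0.01
    return 0.01 + 0.99 * (1.0 - (50.0 / a_angstrom)**3)
```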
This model considers only AGB stars and SNe II as the sources of dust grains.
For simplicity, we do not consider the contribution of SNe Ia, because they
are thought to be a very minor contributor to the total dust mass (e.g.,
Calura et al., 2008).
The dust size distribution is represented by the number distribution
$f_{\mathrm{X}}(a,t)$ and the mass distribution
$\rho_{\mathrm{X}}(m_{\mathrm{d}},t)$. $f_{\mathrm{X}}(a,t)\,\mathrm{d}a$ and
$\rho_{\mathrm{X}}(m_{\mathrm{d}},t)\,\mathrm{d}m_{\mathrm{d}}$ are the number
and mass densities of dust grains with radii in $[a,a+\mathrm{d}a]$ and masses
in $[m_{\mathrm{d}},m_{\mathrm{d}}+\mathrm{d}m_{\mathrm{d}}]$ at time $t$,
respectively. Here 'X' denotes the dust species (C: carbonaceous or Si:
silicate grains), $a$ is the grain radius, and $m_{\mathrm{d}}$ is the grain
mass. We assume that dust grains are spherical with a constant material
density $s$, so the grain mass is
$m_{\mathrm{d}}=\frac{4}{3}\pi a^{3}s.$ (17)
The relation of dust number and mass density is expressed as
$\rho_{\mathrm{X}}(m_{\mathrm{d}},t)\,\mathrm{d}m_{\mathrm{d}}=m_{\mathrm{d}}f_{\mathrm{X}}(a,t)\,\mathrm{d}a.$
(18)
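Equations (17) and (18) imply a simple change of variables between the two
representations; a minimal sketch, assuming radii in cm and a bulk density
$s$ in g cm$^{-3}$, is:

```python
import numpy as np

def mass_density_from_number(a, f_a, s):
    """Convert a number size distribution f_X(a) into the mass
    distribution rho_X(m_d) using Eqs. (17)-(18).

    a   : grain radii [cm]
    f_a : number density per unit radius, f_X(a)
    s   : bulk grain material density [g cm^-3]
    """
    m_d = (4.0 / 3.0) * np.pi * a**3 * s   # Eq. (17)
    dm_da = 4.0 * np.pi * a**2 * s         # dm_d/da
    rho_m = m_d * f_a / dm_da              # Eq. (18): rho dm = m_d f da
    return m_d, rho_m
```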
In the initial condition of the galaxy, the dust mass at all sizes is set to
zero. Therefore, in the first time step of the computation, dust grains are
produced only by stars. In the subsequent time steps, the dust size
distribution produced by the stars evolves through the dust evolution
processes in the ISM. In the following, we explain the details of the Asano
model.
#### 2.2.1 Dust production by AGB
AGB stars are the final phase of the evolution of low- and intermediate-mass
stars ($<8~{}M_{\odot}$). They have a carbon-oxygen core and burn hydrogen and
helium in shells surrounding the core. They release heavy elements into the
ISM, and dust grains form in the ejecta. The dust size distribution produced
by AGB stars depends on the progenitor mass, and Winters et al. (1997)
suggested that it is represented by a log-normal distribution with a peak at
$\sim 0.1~{}\mathrm{\mu m}$. Further, Yasuda & Kozasa (2012) calculated the
dust formation by AGB stars with a hydrodynamical simulation including SiC
production. They suggest that the mass distribution per unit logarithmic bin,
$a^{4}f(a)$, is described by a log-normal distribution with a peak at 0.2–0.3
$\mu$m. We assume that the size distribution of dust grains from AGB stars is
log-normal with a peak at 0.1 $\mu$m and a logarithmic standard deviation of
$\sigma=0.47$; this shape reproduces Figure 7 of Yasuda & Kozasa (2012). We
assume the same size distribution for all dust species.
As for the dust mass produced by AGB stars, we adopt Ventura et al. (2012) and
Ventura et al. (2013). They consider AGB stars with a mass range of 1–8
$M_{\odot}$ and a metallicity range of $Z=(0.05$, 0.4) $Z_{\odot}$. The
condensation fraction of the key elements is $\sim 0.3$ for silicate with
progenitor stellar mass $M_{\mathrm{AGB}}=6~{}M_{\odot}$ and initial
metallicity $Z=0.05~{}Z_{\odot}$, and $\sim 0.05$ for carbon with
$M_{\mathrm{AGB}}=3~{}M_{\odot}$ (Ventura et al., 2012). We interpolate and
extrapolate their data to obtain the dust yield at the required stellar mass
and metallicity, as in §2.1.
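One possible parameterization of the assumed log-normal is sketched below,
written so that the number of grains per logarithmic radius bin, $a f(a)$,
peaks exactly at 0.1 $\mu$m; the text does not specify in which representation
the peak is defined, so this is one reading, with an arbitrary normalization.

```python
import numpy as np

def f_agb(a, a_peak=1.0e-5, sigma=0.47):
    """Assumed log-normal size distribution of AGB dust (arbitrary
    normalization). With this form, the number of grains per logarithmic
    bin, a*f(a), peaks exactly at a_peak (0.1 um = 1e-5 cm by default)."""
    return np.exp(-0.5 * (np.log(a / a_peak) / sigma)**2) / a
```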
#### 2.2.2 Dust production by SNe II
Massive stars end their lives as supernovae (SNe), and dust grains are formed
in the SN ejecta. The composition of the dust grains is determined by the
element synthesis in the star and by the mechanism of the explosion (Nozawa et
al., 2003).
Furthermore, the reverse shock of SN destroys the dust grains by sputtering
(Nozawa et al., 2007; Bianchi & Schneider, 2007). Dust destruction by SN
reverse shock is still under debate and there is no common agreement (e.g.,
Gall et al., 2014; Biscaro & Cherchneff, 2016; Matsuura et al., 2019; Slavin
et al., 2020), but we use Nozawa et al. (2007) as a working hypothesis in this
paper. Nozawa et al. (2007) calculates the grain size distribution produced by
SNe II, and we adopt it for SNe in a progenitor mass range of 13–30
$M_{\odot}$. They calculate only production by SNe II from zero-metallicity
star, but the size distribution and composition of dust produced by the SNe II
are less dependent on the metallicity of the progenitor stars (e.g., Todini &
Ferrara, 2001; Kozasa et al., 2009). Therefore, we assume that the dust
production by SNe does not depend on metallicity, and we interpolate and
extrapolate the data over the progenitor stellar mass.
Nozawa et al. (2007) discusses two extreme cases for the structure of the
cores of progenitor stars, mixed and unmixed. According to Hirashita et al.
(2005), the unmixed model gives a better fit to the observed high-$z$
extinction curve of SDSS J1048+4637 at $z=6.2$ (Maiolino et al., 2004). Thus,
we adopt the unmixed model in this paper. The condensation fraction is about
0.003–0.006 (Nozawa et al., 2007).
#### 2.2.3 Dust destruction by supernova shock
Dust grains in the ISM are partially destroyed by SN shocks (e.g., Jones et
al., 1996; Nozawa et al., 2006). The SN shocks decrease the total dust mass,
and change the size distribution through the sputtering process (Nozawa et
al., 2006). Sputtering is separated into thermal and non-thermal sputtering:
thermal sputtering is caused by the thermal motion of hot gas, while
non-thermal sputtering is caused by the relative motion between the gas and
dust grains. Both depend on the grain size and gas density, as well as on the
gas temperature (for thermal sputtering) and the relative velocity between
dust and gas (for non-thermal sputtering).
We adopt the result by Yamasawa et al. (2011) for the treatment of the SN
destruction. The grain number density after the destruction by SN shocks,
$f_{\mathrm{X}}^{\prime}(a,t)$, is formulated as
$f_{\mathrm{X}}^{\prime}(a,t)=\int^{a_{\mathrm{max}}}_{a}\eta_{\mathrm{X}}(a,a^{\prime})f_{\mathrm{X}}(a^{\prime},t)\,\mathrm{d}a^{\prime}.$
(19)
where $\eta_{\mathrm{X}}(a,a^{\prime})$ is the conversion efficiency of SN
sputtering, defined as the rate at which dust grains with radii
$[a^{\prime},a^{\prime}+\mathrm{d}a^{\prime}]$ are converted into grains with
radii $[a,a+\mathrm{d}a]$. $a_{\mathrm{max}}$ is the maximum radius of dust
grains, and we adopt $a_{\mathrm{max}}=8~{}\mathrm{\mu m}$ (Asano et al.,
2013b). This value is large enough to represent the maximum size produced by
shattering and coagulation (Hirashita & Yan, 2009). Yamasawa et al. (2011)
calculate $\eta_{\mathrm{X}}$ by the method developed by Nozawa et al. (2006).
In this process, the size of dust grains can only be reduced by destruction,
so $\eta_{\mathrm{X}}=0$ for $a>a^{\prime}$. Equation (19) represents the
amount of dust with radii $[a,a+\mathrm{d}a]$ gained through the SN
destruction of dust larger than $a$. The actual upper limit of the integration
corresponds to the maximum dust size of the distribution before the shock
passes. The change of the grain number density caused by SN shocks is
represented as
$\displaystyle\mathrm{d}f_{\mathrm{X}}(a,t)=f_{\mathrm{X}}^{\prime}(a,t)-[1-\eta_{\mathrm{X}}(a,a)]f_{\mathrm{X}}(a,t)=\int_{0}^{a_{\mathrm{max}}}\eta_{\mathrm{X}}(a,a^{\prime})f_{\mathrm{X}}(a^{\prime},t)\,\mathrm{d}a^{\prime}-f_{\mathrm{X}}(a,t).$ (20)
The change of the grain mass density by SN shocks at grain radius $a$ and time
$t$ is represented via Equation (20) as

$\displaystyle\left(\frac{\mathrm{d}\rho_{\mathrm{X}}(m_{\mathrm{d}},t)}{\mathrm{d}t}\right)_{\mathrm{SN}}=m_{\mathrm{d}}\frac{\mathrm{d}f_{\mathrm{X}}(a,t)}{\mathrm{d}t}=-\tau_{\mathrm{SN,X}}^{-1}\Big{[}\rho_{\mathrm{X}}(m_{\mathrm{d}},t)-m_{\mathrm{d}}\int^{a_{\mathrm{max}}}_{0}\eta_{\mathrm{X}}(a,a^{\prime})f_{\mathrm{X}}(a^{\prime},t)\,\mathrm{d}a^{\prime}\Big{]}.$ (21)
Integrating this equation over $a$ and summing over the dust species recovers
the third term on the right-hand side of Equation (4).
The timescale of dust destruction by SNe, $\tau_{\mathrm{SN}}(t)$, is
expressed as
$\tau_{\mathrm{SN}}(t)=\frac{M_{\mathrm{ISM}}(t)}{\epsilon m_{\mathrm{swept}}\gamma_{\mathrm{SN}}(t)},$ (22)
where $\epsilon$ is the efficiency of dust destruction by SN shocks, and
$\gamma_{\mathrm{SN}}(t)$ is the SN rate, expressed following McKee (1989) and
Nozawa et al. (2006) as
$\gamma_{\mathrm{SN}}(t)=\int^{40~{}M_{\odot}}_{\max(m_{\mathrm{min}}(t),8~{}M_{\odot})}\phi(m)\mathrm{SFR}(t-\tau_{\mathrm{m}})\mathrm{d}m.$
(23)
The integration range is determined by when the SNe can occur (Heger et al.,
2003). When $t<\tau(40~{}M_{\odot})$, $\gamma_{\mathrm{SN}}(t)=0$. We assume
$\epsilon=0.1$ (McKee, 1989; Nozawa et al., 2006).
$m_{\mathrm{swept}}$ is the ISM mass swept up by an SN shock.
$m_{\mathrm{swept}}$ depends on the density and metallicity of the ISM (Nozawa
et al., 2006; Yamasawa et al., 2011). When the ISM density is high,
$m_{\mathrm{swept}}$ is small because there are many particles that decelerate
the SN shock. When the metallicity is high, efficient metal line cooling
results in a faster shock deceleration and a smaller $m_{\mathrm{swept}}$. We
use the following formula fitted by Yamasawa et al. (2011),
$m_{\mathrm{swept}}=1535n^{-0.202}_{\mathrm{SN}}\left[\left(Z/Z_{\odot}\right)+0.039\right]^{-0.289}~{}M_{\odot},$
(24)
where $n_{\mathrm{SN}}$ is the ISM density surrounding SNe. The fitting
accuracy is within 16% for $0.03~{}\mathrm{cm}^{-3}\leq n_{\mathrm{SN}}\leq
30~{}\mathrm{cm}^{-3}$ and for $10^{-4}\leq Z/Z_{\odot}\leq 1.0$ (Yamasawa et
al., 2011), and we use $n_{\mathrm{SN}}=1.0~{}\mathrm{cm^{-3}}$.
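Equations (22)–(24) combine into a short helper. The input units ($M_{\mathrm{ISM}}$
in $M_{\odot}$, the SN rate $\gamma_{\mathrm{SN}}$ per yr) are assumptions for
illustration, not statements from the original.

```python
def m_swept(Z_rel, n_SN=1.0):
    """ISM mass swept by one SN shock in M_sun (Yamasawa et al. 2011; Eq. 24).
    Z_rel = Z/Z_sun; n_SN = ambient ISM density [cm^-3]."""
    return 1535.0 * n_SN**-0.202 * (Z_rel + 0.039)**-0.289

def tau_SN(M_ISM, gamma_SN, Z_rel, eps=0.1, n_SN=1.0):
    """Dust destruction timescale in yr (Eq. 22), given the SN rate
    gamma_SN [yr^-1] from Eq. (23) and the efficiency eps = 0.1."""
    return M_ISM / (eps * m_swept(Z_rel, n_SN) * gamma_SN)
```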
#### 2.2.4 Grain growth by metal accretion
In the cold phase of the ISM, the metal in the gas phase is accreted onto the
pre-existing dust grain surface, which increases the radius of the dust grain
and the total mass of the dust, known as grain growth (e.g., Dwek & Scalo,
1980; Draine, 2009; Jones & Nuth, 2011). We assume that grain growth occurs in
the CNM ($T_{\mathrm{gas}}=100$ K and $n_{\mathrm{H}}=30~{}\mathrm{cm^{-3}}$)
and the MC ($T_{\mathrm{gas}}=25$ K and
$n_{\mathrm{H}}=300~{}\mathrm{cm^{-3}}$). The total mass fraction in these ISM
phases is assumed to be 0.5 (Nozawa et al., 2015). We treat only refractory
grains (silicate and carbonaceous dust) and do not consider the icy mantle. We
assume that a grain instantly becomes a sphere with a smooth surface, and we
adopt only the geometric cross-section, i.e., we do not consider the effect of
the Coulomb interaction. Here, we consider only two species of dust grains:
carbonaceous grains with key element C and silicate grains with key element
Si. Jones & Nuth (2011) indicate that in an H$_{2}$-, CO-, and H$_{2}$O-rich
environment, the accretion of Si, Fe, and Mg forms silicates through complex
chemical reactions such as ice formation. However, they also note that the
spectrum of silicates formed in such a scenario is inconsistent with actual
observations. Since the chemical properties are not well understood, we assume
in this paper that only the key element X accretes onto dust whose key element
is X. Since grain growth requires a sufficient amount of metals and dust
grains in the ISM, it hardly occurs in very young galaxies, but it generally
becomes efficient at around 1 Gyr (Asano et al., 2013a).
In the following, we introduce the formalism of the size evolution by
Hirashita & Kuo (2011). The rate at which an atom of element X collides with
the surface of a dust grain of radius $a$ is expressed as follows (Evans,
1994):
$\mathcal{R}=\pi a^{2}n_{\mathrm{X}}(t)v_{\mathrm{th}},$ (25)
where $n_{\mathrm{X}}(t)$ is the number density of the key element X in gas
phase and $v_{\mathrm{th}}$ is the thermal velocity
$v_{\mathrm{th}}=\left(\frac{8kT_{\mathrm{gas}}}{\pi
m_{\mathrm{X}}}\right)^{1/2},$ (26)
where $k$ is the Boltzmann constant, $T_{\mathrm{gas}}$ is the gas temperature
and $m_{\mathrm{X}}$ is the atomic mass of the key element X. In reality,
since metals other than the corresponding key element may accrete onto the
grain surface, Equation (25) represents the accretion rate associated with the
key element. The evolution of grain mass
$\mathrm{d}m_{\mathrm{d}}(a,t)/\mathrm{d}t$ is
$\frac{\mathrm{d}{m_{\mathrm{d}}(a,t)}}{\mathrm{d}{t}}=g_{\mathrm{X}}^{-1}m_{\mathrm{X}}\alpha_{\mathrm{acc}}\mathcal{R},$
(27)
where $g_{\mathrm{X}}$ is the mass fraction of the key element X in a specific
grain species (silicate: 0.166, graphite: 1.00). We assume Mg1.1Fe0.9SiO4 for
the composition of silicate (Draine & Lee, 1984). $\alpha_{\mathrm{acc}}$ is
the sticking probability of atoms that collide with grains. It is very
difficult to quantify whether the sticking atoms become part of the grain
(e.g., Jones & Nuth, 2011). This value may be close to 1 in low-temperature
environments (e.g., Zhukovska et al., 2008), so we set
$\alpha_{\mathrm{acc}}=1$ for simplicity.
$n_{\mathrm{X}}(t)$ is estimated as
$n_{\mathrm{X}}(t)=\frac{\rho_{\mathrm{ISM}}^{\mathrm{eff}}}{m_{\mathrm{X}}}\frac{M_{\mathrm{X}}(t)-g_{\mathrm{X}}M_{\mathrm{d,X}}(t)}{M_{\mathrm{ISM}}(t)},$
(28)
where $M_{\mathrm{X}}(t)$ is the total mass of element X (including gas and
dust), $M_{\mathrm{ISM}}(t)$ is the total mass of gas, $M_{\mathrm{d,X}}(t)$
is the dust mass associated with element X, and
$\rho^{\mathrm{eff}}_{\mathrm{ISM}}$ is the effective ISM mass density, i.e.,
the average mass density of the clouds where the accretion process occurs.
$\rho^{\mathrm{eff}}_{\mathrm{ISM}}$ is calculated as
$\rho_{\mathrm{ISM}}^{\mathrm{eff}}=\mu m_{\mathrm{H}}n_{\mathrm{H,acc}}$,
where $\mu=1.4$ is the mean atomic weight, $m_{\mathrm{H}}$ is the hydrogen
atom mass, and $n_{\mathrm{H,acc}}$ is the mean hydrogen number density in the
ISM where the accretion process takes place. When
$\eta=\eta_{\mathrm{CNM}}+\eta_{\mathrm{MC}}=0.5$, $n_{\mathrm{H,acc}}$ is 130
$\mathrm{cm^{-3}}$. The second factor on the right-hand side is the ratio of
the gas-phase mass of element X to the total ISM mass. From Equations
(25)–(28), the grain mass
growth rate is
$\displaystyle\frac{\mathrm{d}m_{\mathrm{d}}(a,t)}{\mathrm{d}t}=\frac{\pi a^{2}m_{\mathrm{X}}\alpha_{\mathrm{acc}}v_{\mathrm{th}}}{g_{\mathrm{X}}}n_{\mathrm{X}}(t)=\frac{\pi a^{2}\alpha_{\mathrm{acc}}\rho_{\mathrm{ISM}}^{\mathrm{eff}}v_{\mathrm{th}}}{g_{\mathrm{X}}}\frac{M_{\mathrm{X}}(t)-g_{\mathrm{X}}M_{\mathrm{d,X}}(t)}{M_{\mathrm{ISM}}(t)}.$ (29)
The total mass growth rate by the accretion process,
$\left(\mathrm{d}M_{\mathrm{d}}(t)/\mathrm{d}t\right)_{\mathrm{acc}}$, is
calculated by integrating Equation (29) over the grain radius and summing over
all dust species in all ISM phases. From Equations (17) and (29), the grain
radius growth rate is represented as
$\frac{\mathrm{d}{a}}{\mathrm{d}{t}}=\frac{\alpha_{\mathrm{acc}}\rho_{\mathrm{ISM}}^{\mathrm{eff}}v_{\mathrm{th}}}{4sg_{\mathrm{X}}}\frac{M_{\mathrm{X}}(t)-g_{\mathrm{X}}M_{\mathrm{d,X}}(t)}{M_{\mathrm{ISM}}(t)}.$
(30)
In the computation, we solve this equation for each size bin at each time
step. By transferring all the dust that was in a given radius bin before metal
accretion to the corresponding radius bin after accretion, the evolution of
the size distribution by metal accretion can be calculated. The number of dust
grains does not change in this process.
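A minimal sketch of Equations (26) and (30) in cgs units follows; note that
the growth rate is independent of $a$, which is why every size bin can be
shifted by the same radius increment per step.

```python
import numpy as np

K_B = 1.380649e-16   # Boltzmann constant [erg K^-1]
M_P = 1.6726e-24     # proton mass [g], used here as 1 amu

def radius_growth_rate(T_gas, m_X_amu, s, g_X, M_X, M_dX, M_ISM,
                       n_H_acc=130.0, alpha_acc=1.0, mu=1.4):
    """da/dt [cm s^-1] from Eqs. (26) and (30), assuming cgs units.
    m_X_amu: atomic mass of the key element X in amu; s: grain material
    density [g cm^-3]; g_X: mass fraction of X in the grain species.
    M_X, M_dX, M_ISM may be in any common unit (only ratios enter)."""
    v_th = np.sqrt(8.0 * K_B * T_gas / (np.pi * m_X_amu * M_P))  # Eq. (26)
    rho_eff = mu * M_P * n_H_acc            # effective ISM mass density
    gas_frac = (M_X - g_X * M_dX) / M_ISM   # gas-phase fraction of X
    # Eq. (30); independent of the grain radius a.
    return alpha_acc * rho_eff * v_th * gas_frac / (4.0 * s * g_X)
```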
#### 2.2.5 Grain-grain collision
We consider two types of grain-grain collisions: shattering and coagulation.
They only change the dust size distribution and conserve the total grain mass
in the ISM. Which process occurs is determined by the relative velocity of the
two colliding grains: when the relative velocity is large, shattering tends to
occur; when it is small, coagulation occurs. Relative velocities between dust
grains can be caused by the ubiquitous ISM turbulence (e.g., Draine &
Anderson, 1985; Ossenkopf, 1993; Lazarian & Yan, 2002; McKee & Ostriker, 2007;
Hirashita & Yan, 2009; Ormel et al., 2009). Furthermore, because dust grains
are thought to be magnetized (Arons & Max, 1975), it is necessary to consider
the motion of grains due to magnetohydrodynamic (MHD) turbulence. We consider
dust collisions by applying the relative velocities of grains in MHD
turbulence calculated by Yan et al. (2004). The velocity is calculated taking
into account gas drag (hydro drag) and gyroresonance. When a grain collides
with a relative velocity higher than the threshold, it is shattered into small
pieces. Since larger grains are strongly affected by turbulence, they have
large relative velocities, which can lead to shattering. Yan et al. (2004)
indicate that grains with $a>10^{-6}$ cm can be accelerated to velocities (1–2
$\mathrm{km\,s^{-1}}$) close to the shattering threshold in the CNM. In the
WNM, gyroresonance accelerates grains with $a>2$–$3\times
10^{-5}~{}\mathrm{cm}$ to high relative velocities ($\sim
20~{}\mathrm{km\,s^{-1}}$). In contrast, small grains have small relative
velocities and undergo coagulation. When the relative velocity is small, the
collision cross-section is also small, so a high-density environment (e.g.,
the MC) is required for coagulation. We consider silicate and graphite as the
grain species in grain-grain collisions, and grains collide only with grains
of the same species. Furthermore, we treat spherical grains with a constant
density. Hirashita & Yan (2009) calculated shattering and coagulation in
various ISM phases (including the CNM, WNM, and MC), and the Asano model
applies the same method.
We consider four types of grain-grain collision geometries; in other words,
the relative velocity takes four forms. This treatment is the same as in Jones
et al. (1994) and Hirashita & Yan (2009). Consider two colliding grains with
radii $a_{1}$ and $a_{2}$, called grain 1 and grain 2, respectively, and
denote their masses as $m_{1}$ and $m_{2}$. The relative collision velocities
between grains 1 and 2 are as follows:
* •
front collision ($v_{1,2}=v_{1}+v_{2}$)
* •
back-end collision ($v_{1,2}=|v_{1}-v_{2}|$)
* •
side collision ($v_{1,2}=v_{1}$)
* •
another side collision ($v_{1,2}=v_{2}$)
where $v_{1}$ and $v_{2}$ are the velocities of the grains with radii $a_{1}$
and $a_{2}$, respectively. We assume that collisions in all directions have
the same probability.
Jones et al. (1996) suggest that shattering significantly affects the grain
size distribution in the ISM. The time evolution of the grain mass density by
the shattering process is

$\displaystyle\left[\frac{\mathrm{d}\rho_{\mathrm{X}}(m_{\mathrm{d}},t)}{\mathrm{d}t}\right]_{\mathrm{shat}}=-m_{\mathrm{d}}\rho_{\mathrm{X}}(m_{\mathrm{d}},t)\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\alpha\left[m_{\mathrm{d}},m_{1}\right]\rho_{\mathrm{X}}(m_{1},t)\,\mathrm{d}m_{1}+\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\alpha\left[m_{1},m_{2}\right]m^{1,2}_{\mathrm{shat}}(m_{\mathrm{d}})\,\rho_{\mathrm{X}}(m_{1},t)\rho_{\mathrm{X}}(m_{2},t)\,\mathrm{d}m_{1}\mathrm{d}m_{2},$ (31)
where $\alpha[m_{1},m_{2}]$ is the collision frequency normalized by the two
grain masses and the grain number density, expressed as
$\alpha[m_{1},m_{2}]=\begin{cases}0&(v_{1,2}<v_{\mathrm{shat}})\\\
\frac{\sigma_{1,2}v_{1,2}}{m_{1}m_{2}}&(v_{1,2}>v_{\mathrm{shat}})\end{cases},$
(32)
$m_{\mathrm{shat}}^{1,2}(m_{\mathrm{d}})$ represents the total mass of
fragments with masses between $m_{\mathrm{d}}$ and
$m_{\mathrm{d}}+\mathrm{d}m_{\mathrm{d}}$ resulting from the collision between
grains 1 and 2. We assume that the size distribution of shattered fragments is
proportional to $a^{-3.3}$ (Hellyer, 1970; Jones et al., 1996). $\sigma$ is
the collisional cross-section, represented as
$\sigma_{1,2}=\beta\pi(a_{1}+a_{2})^{2},$ (33)
where $\beta$ is the coefficient connecting the collisional cross-section to
the geometric cross-section; we assume $\beta=1$ for simplicity.
$v_{\mathrm{shat}}$ is the shattering threshold velocity, for which we assume
$1.2~{}\mathrm{km\,s^{-1}}$ and $2.7~{}\mathrm{km\,s^{-1}}$ for silicate and
graphite grains, respectively (Jones et al., 1996). $a_{\mathrm{min}}$ and
$a_{\mathrm{max}}$ are the minimum and maximum radii, for which we adopt
$a_{\mathrm{min}}=0.0003~{}\mathrm{\mu m}$ and
$a_{\mathrm{max}}=8~{}\mathrm{\mu m}$, respectively (Asano et al., 2013b). The
minimum grain radius in the ISM is not well constrained, but even with
$a_{\mathrm{min}}=0.001~{}\mathrm{\mu m}$ the dust size distribution does not
change significantly (Hirashita, 2012).
The first term on the right-hand side of Equation (31) represents the decrease
of grains with mass $m_{\mathrm{d}}$ due to destruction in collisions with
other grains. The second term represents the increase at mass $m_{\mathrm{d}}$
due to the fragments resulting from collisions between grains 1 and 2.
Shattering does not produce fragments larger than the original grain, so this
term only contributes if either grain 1 or grain 2 is heavier than
$m_{\mathrm{d}}$.
Coagulation occurs when the relative velocity is low. The time evolution for
coagulation is expressed in a form similar to that for shattering,

$\displaystyle\left[\frac{\mathrm{d}\rho_{\mathrm{X}}(m_{\mathrm{d}},t)}{\mathrm{d}t}\right]_{\mathrm{coag}}=-m_{\mathrm{d}}\rho_{\mathrm{X}}(m_{\mathrm{d}},t)\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\alpha\left[m_{\mathrm{d}},m_{1}\right]\rho_{\mathrm{X}}(m_{1},t)\,\mathrm{d}m_{1}+\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\alpha[m_{1},m_{2}]\,m_{\mathrm{coag}}^{1,2}(m_{\mathrm{d}})\,\rho_{\mathrm{X}}(m_{1},t)\rho_{\mathrm{X}}(m_{2},t)\,\mathrm{d}m_{1}\mathrm{d}m_{2},$ (34)

with
$\alpha[m_{1},m_{2}]=\begin{cases}\frac{\sigma_{1,2}v_{1,2}}{m_{1}m_{2}}&(v_{1,2}<v_{\mathrm{coag}})\\\
0&(v_{1,2}>v_{\mathrm{coag}})\end{cases},$ (35)
where $m_{\mathrm{coag}}^{1,2}$ is the total mass of coagulated grains:
$m_{\mathrm{coag}}^{1,2}(m_{\mathrm{d}})=\begin{cases}m_{\mathrm{d}}&\mathrm{when}~{}m_{\mathrm{d}}\leq m_{1}+m_{2}<m_{\mathrm{d}}+\mathrm{d}m_{\mathrm{d}}\\\ 0&\mathrm{otherwise}\end{cases}.$ (36)
We use Equation (33) as the collisional cross-section for coagulation.
$v_{\mathrm{coag}}$ is the threshold velocity of coagulation, and grains with
higher relative velocity do not stick. Chokshi et al. (1993) calculate the
threshold velocity as $10^{-3}$–$10^{-1}~{}\mathrm{km\,s^{-1}}$ and it depends
on the grain size. Here we assume that dust grains are smooth spheres, but in
reality grains are fluffy (Ossenkopf, 1993). It has been suggested that the
coagulation threshold velocity is higher in that case, because fluffiness
increases the cross-section of grain collisions (Ormel et al., 2009; Hirashita
& Kobayashi, 2013). In addition, Asano et al. (2014) indicate that the
coagulation threshold suppresses the production of large grains, yielding
smaller grains ($a<0.01~{}\mathrm{\mu m}$) than in the Mathis et al. (1977)
distribution. Therefore, in this paper we assume that coagulation can occur at
all relative velocities, without setting a coagulation threshold. The first
term of Equation (34) indicates the decrease of grains with mass
$m_{\mathrm{d}}$ by coagulation with other grains. The second term indicates
the increase of grains with mass $m_{\mathrm{d}}$ by coagulation between
grains 1 and 2. Since coagulation works effectively on small grains, it
becomes important once shattering has become effective and small dust is
abundant. Coagulation shifts the dust size distribution toward larger sizes.
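For illustration, the collision kernels of Equations (32), (33), and (35)
(the latter without a threshold, as adopted here) can be written as:

```python
import numpy as np

def alpha_shattering(m1, m2, a1, a2, v12, v_shat, beta=1.0):
    """Shattering kernel, Eqs. (32)-(33): zero below the threshold
    velocity, sigma*v/(m1*m2) above it. v_shat is 1.2 km/s for silicate
    and 2.7 km/s for graphite (Jones et al. 1996); consistent units
    (e.g., cgs) are assumed throughout."""
    if v12 <= v_shat:
        return 0.0
    sigma = beta * np.pi * (a1 + a2)**2   # collisional cross-section
    return sigma * v12 / (m1 * m2)

def alpha_coagulation(m1, m2, a1, a2, v12):
    """Coagulation kernel, Eq. (35), with no threshold velocity,
    following the assumption adopted in this paper."""
    sigma = np.pi * (a1 + a2)**2
    return sigma * v12 / (m1 * m2)
```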
#### 2.2.6 Result of dust evolution model
We show the size distribution of all dust grains (including carbon and
silicate) calculated with the star formation timescale
$\tau_{\mathrm{SF}}=3$ Gyr and total galaxy mass $10^{11}~{}M_{\odot}$ in
Figure 1. We note that the total galaxy mass is merely a normalization for our
dust model and can be rescaled freely.
Figure 1: The time evolution of all species of dust grain size distribution.
Blue, orange, green, and purple curves indicate the age of 100 Myr, 1 Gyr, 5
Gyr, and 13 Gyr, respectively. Black line represents the slope of the MRN
distribution.
In this representation, the ISM phase fractions are $\eta_{\mathrm{WNM}}=0.5$,
$\eta_{\mathrm{CNM}}=0.3$, and $\eta_{\mathrm{MC}}=0.2$. The black curve
indicates the slope of the grain size distribution suggested by Mathis et al.
(1977), which reproduces the MW extinction curve. This is known as the MRN
distribution, expressed by a single power law,
$f(a)\mathrm{d}a\propto a^{-3.5}\mathrm{d}a~{}(0.005~{}\mathrm{\mu m}<a<0.25~{}\mathrm{\mu m}).$ (37)
The overview of the time evolution of dust size distribution is as follows.
* •
$<100$ Myr
* –
Dust production from SNe dominates, and the original size distribution of SN
dust is reflected in the overall dust size distribution.
* •
100 Myr–1 Gyr
* –
Metal accretion dominates the evolution of dust.
* –
Shattering and coagulation become effective, and consequently dust mass
rapidly increases because of the increase of the total surface area per dust
mass.
* –
Production of PAHs is dominated by shattering.
* •
1–5 Gyr
* –
Shattering and coagulation become more effective.
* –
Metal accretion also becomes more effective thanks to the increased amount of
small dust continuously generated by shattering.
* –
The increase of dust mass is the most rapid between 1 Gyr and 2 Gyr.
* •
5–13 Gyr
* –
The production of dust grain by stars decreases, and shattering and
coagulation dominate the evolution of the grain size distribution.
Details of each step in the evolution of the dust size distribution are
explained as follows. At the age of 100 Myr, only a tiny amount of grains
exists in the ISM, and the slope of the size distribution is completely
different from that of the MRN distribution. In particular, PAHs are not
produced in early galaxies. Figure 2 shows how the different processes (i.e.,
production by AGB stars and SNe, and grain growth in the ISM) contribute to
the increase of the total PAH mass. We note that the contributions of
destruction processes, including SN shocks and astration, are not shown here.
The dust destruction process depends only on the grain size and species, so it
works in the same way for dust from any production source; therefore, even if
dust reduction is taken into account, the ratio of dust mass among the sources
remains the same. In galaxies younger than 100 Myr, short-lived SNe II
(lifetime $\sim 10^{6}$–$10^{7}$ yr) are the main source of dust supply (e.g.,
Maiolino et al., 2004; Hiraki & Hirak, 2008), as the galaxy is too young for
stars to have evolved into the AGB phase (lifetime $\sim 10^{8}$–$10^{9}$ yr)
(e.g., Morgan & Edmunds, 2003; Marchenko, 2006). Nevertheless, the smallest
grains such as PAHs are supplied by SNe, though the amount is very small
(e.g., Nozawa et al., 2007).
Figure 2: The evolution of the PAH mass for each production source. The total
galaxy mass is $10^{11}~{}M_{\odot}$. Blue, orange, and green curves represent
the PAH mass produced by SNe, AGB stars, and evolution in the ISM,
respectively.
As the chemical evolution proceeds in the galaxy, the amount of metal in the
ISM increases. Very young galaxies ($\mbox{age}\simeq 20$ Myr) have only a
small supply of dust from SNe. When the galaxy age reaches $\sim 100$ Myr, the
smallest grains (PAHs) are gradually formed by shattering, and the PAH mass
increases. The galaxy must evolve until it reaches the critical metallicity
for dust growth to work effectively (Inoue, 2011; Asano et al., 2013a). Since
AGB stars provide larger dust grains ($>0.1~{}\mathrm{\mu m}$), their
contribution to PAHs is not significant (Winters et al., 1997; Yasuda &
Kozasa, 2012).
At 1 Gyr, the total dust mass continues to increase gradually, while the PAH
mass starts to increase significantly, because shattering in the ISM becomes
effective. The bump at $10^{-3}$–$10^{-2}~{}\mathrm{\mu m}$ in Figure 1 is the
consequence of the activated shattering process. We show the evolution of the
total dust mass of the model galaxy in Figure 3. Solid and dashed lines
represent the case with dust grain evolution in the ISM (fiducial) and the
case without evolution (no evolution), respectively. Figure 3 clearly
demonstrates that, if dust grains evolve in the ISM, the dust mass rapidly
increases by metal accretion at 1–2 Gyr.
Figure 3: The evolution of the dust-to-gas mass ratio calculated by the Asano
model.
It has been suggested that in the MW-like galaxy model, when the metallicity
exceeds 0.1 $Z_{\odot}$, the metal accretion process becomes effective and the
dust mass drastically increases (Asano et al., 2013a). When the metal
accretion becomes effective, grain-grain collisions in the ISM become more
likely to occur, and shattering and coagulation also become effective. The
shattering process results in a significant increase in the amount of small
dust grains, including PAHs. Since the metal accretion depends
on the total surface area of dust grain (Equation (25)), the shattering
promotes the accretion. Such a dust growth cycle causes the dust mass to
increase nonlinearly. This cycle is effective between 1 Gyr and 2 Gyr in this
model galaxy.
After that, the mass of dust increases and peaks at $\sim 3$ Gyr. This peak
time depends on the timescale of star formation $\tau_{\mathrm{SF}}=3$ Gyr.
After 3 Gyr, the dust mass decreases due to the destruction by SN shocks. The
smaller the dust size, the more effectively the SN destruction works (Nozawa
et al., 2006). In addition, production of dust from stars also decreases due
to the decrease of the SFR. Thus, in total, the dust mass gradually decreases
by SN shock and astration. As the production of dust by stars decreases,
coagulation dominates the evolution of dust size distribution. Due to the SN
shock destruction and coagulation, the dust size distribution is biased toward
larger radii. For galaxies with fully grown dust after 5 Gyr, the dust grain
size distribution finally converges to a function similar to those obtained
from observations of nearby galaxies (e.g., Schurer et al., 2009). A galaxy
with an age of 5–13 Gyr has a dust size distribution with a power-law slope
similar to that of the MRN distribution.
In contrast, in the no evolution case, the total grain mass does not increase
rapidly; it increases only by stellar production, at a roughly constant rate,
up to $\tau_{\mathrm{SF}}=3$ Gyr. At ages $<1$ Gyr, the no evolution case has
a larger grain mass than the fiducial case, because the no evolution case does
not include the destruction of dust by SN shocks. After 3 Gyr, the dust mass
decreases by astration. As described above, if the evolution of dust in the
ISM is not taken into account, the rapid increase of the dust mass at 1–2 Gyr
does not appear.
### 2.3 Stellar SED
We use version 2 of Pégase (Fioc & Rocca-Volmerange, 1999, Pégase.2) to
produce stellar SEDs. Pégase calculates the stellar emission with the stellar
population synthesis (SPS) method using simple stellar populations (SSPs). An
SSP represents the time variation of the SED of a single coeval stellar
population with a single metallicity and abundance pattern. The monochromatic
luminosity per unit wavelength of an SSP is expressed as
$L_{\lambda}^{\mathrm{SSP}}(t,Z)=\int^{m_{\mathrm{max}}}_{m_{\mathrm{min}}}L_{\lambda}^{\mathrm{star}}(T_{\mathrm{eff}}(t,m),\log
g(t,m),Z)\phi(m)~{}\mathrm{d}m,$ (38)
where $L^{\mathrm{star}}_{\lambda}$ is the monochromatic luminosity of a star
with mass $m$, effective temperature $T_{\mathrm{eff}}(t,m)$, surface gravity
$g(t,m)$, and metallicity $Z$ at a galaxy age $t$ (e.g., Conroy, 2013).
$m_{\mathrm{max}}$ and $m_{\mathrm{min}}$ are the upper and lower limits of
the stellar mass, set to 100 $M_{\odot}$ and 0.1 $M_{\odot}$, the same as the
IMF integration range. The effective temperature $T_{\mathrm{eff}}(t,m)$ and
the surface gravity $\log g(t,m)$ are taken from the stellar evolutionary
tracks. Pégase.2 uses evolutionary tracks based on the Padova tracks (Bressan
et al., 1993; Fagotto et al., 1994b, c, a; Girardi et al., 1996). The
metallicity of the ISM $Z$ evolves with the galaxy age $t$ and is calculated
from the Woosley & Weaver (1995) SN II models. Since evolutionary track tables
are prepared only for the metallicities
$Z=(0.005, 0.02, 0.2, 0.4, 1.0, 2.5, 5.0)~{}Z_{\odot}$, interpolated values
are used. Pégase assumes that a star releases metals into the ISM only at the
end of its life, i.e., the recycling is not instantaneous. The library of
stellar spectra used by Pégase.2 is divided into two parts according to the
effective temperature $T_{\mathrm{eff}}$. For
$T_{\mathrm{eff}}<50000~{}\mathrm{K}$, the library comes from Lejeune et al.
(1997, 1998, corrected version (BaSeL-2.0)).
The monochromatic luminosity from all stars at time $t$ is calculated by
weighting $L_{\lambda}^{\mathrm{SSP}}$ at SSP age $t^{\prime}$ with the star
formation rate,
$L_{\lambda}(t)=\int^{t^{\prime}=t}_{t^{\prime}=0}\int^{Z=Z_{\mathrm{max}}(t-t^{\prime})}_{Z=0}\mathrm{SFR}(t-t^{\prime})L_{\lambda}^{\mathrm{SSP}}(t^{\prime},Z(t-t^{\prime}))~{}\mathrm{d}Z\mathrm{d}t^{\prime},$
(39)
where $Z_{\mathrm{max}}(t-t^{\prime})$ is the maximum metallicity at time
$t-t^{\prime}$. In order to account for all stars, from those born at the
galaxy's birth to those just born, the integral runs over time. The argument
$t-t^{\prime}$ is the epoch at which the stars of age $t^{\prime}$ were born.
We choose the Schmidt law (Schmidt, 1959, Equation (5)) for the SFR.
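A discretized reading of Equation (39) is sketched below. Since each birth
epoch has a single ISM metallicity, the metallicity integral collapses to
evaluating $Z(t-t^{\prime})$, which is the simplification assumed here; all
function arguments are hypothetical placeholders.

```python
def galaxy_luminosity(t, ages, dt, sfr_of, Z_of, L_ssp):
    """Discretized Eq. (39): weight the SSP luminosity per unit formed
    mass, L_ssp(t', Z), by the SFR at the birth epoch t - t'.

    ages   : uniform grid of SSP ages t' [yr] with spacing dt
    sfr_of : callable, SFR(t) [M_sun / yr]
    Z_of   : callable, ISM metallicity Z(t)
    L_ssp  : callable, monochromatic SSP luminosity L(t', Z)
    """
    L = 0.0
    for tp in ages:
        if tp > t:            # no stars older than the galaxy itself
            break
        t_birth = t - tp      # epoch at which stars of age tp were born
        L += sfr_of(t_birth) * L_ssp(tp, Z_of(t_birth)) * dt
    return L
```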
### 2.4 Dust properties
Radiative transfer is the method used to calculate the propagation of
radiative energy in systems of various sizes, from isolated gas clouds to
galaxies.
ISM, radiation is mainly affected by absorption and scattering by dust grains.
One of the easiest ways to calculate radiative transfer is to assume that the
ISM has a homogeneous distribution. However, it has been observed that actual
galaxies have a more complex structure in general (e.g., Field et al., 1969;
McKee & Ostriker, 1977). If a homogeneous dust distribution is assumed, the
optical depth of the dust is larger than that in the case of inhomogeneous
distribution. In other words, if the dust mass is estimated with a homogeneous
distribution, the attenuation per dust grain is overestimated, and the dust
mass would be underestimated as a consequence. Therefore, in this paper, we
consider a clumpy dust distribution.
The calculation of three-dimensional radiative transfer with clumpy dust
usually requires a substantial computational cost. Neufeld (1991) and Hobson &
Padman (1993) introduced a method that solves the radiative transfer with the
MGA in a one-dimensional plane-parallel galaxy (Varosi & Dwek, 1999; Inoue,
2005). The MGA treats a dusty region as a kiloparsec-size huge grain called a
mega-grain, and regards its absorption and scattering as behaving in the same
way as for ordinary grains, with effective optical properties. We approximate
the complex distribution of stars, dust grains, and gas in the model galaxy in
this way to simplify costly calculations in three-dimensional space. In this
approximation, the distribution of young stars is clumpy and the young stars
are embedded in mega-grains, whereas old stars are assumed to be distributed
smoothly and diffusely. The light emitted by young stars is therefore more
strongly attenuated than the light emitted by old stars, due to the
surrounding mega-grains. Inoue (2005) investigated the effect of changing the
young-star age criterion $t_{\mathrm{y}}$, concluded that 10 Myr best fits the
MW attenuation, and we adopt this value in this paper. Assuming thermal and
chemical equilibrium at temperatures $T<10^{4}$ K, the ISM is represented by
two phases, the WNM and CNM (e.g., Wolfire et al., 2003; Koyama & Inutsuka,
2002). The relation between the thermal pressure and the hydrogen density is
expressed by fitting the phase diagram (Inoue, 2005) as
$\displaystyle\frac{p/k}{10^{4}~{}\mathrm{K\,cm^{-3}}}$
$\displaystyle=\frac{n_{\mathrm{H,WNM}}}{1~{}\mathrm{cm^{-3}}},~{}$
$\displaystyle(\mathrm{WNM})$ (40)
$\displaystyle\frac{p/k}{10^{4.5}~{}\mathrm{K\,cm^{-3}}}$
$\displaystyle=\left(\frac{n_{\mathrm{H,CNM}}}{10^{3}~{}\mathrm{cm^{-3}}}\right)^{0.7},~{}$
$\displaystyle(\mathrm{CNM})$ (41)
where $p$ is the pressure. $n_{\mathrm{H,WNM}}$ and $n_{\mathrm{H,CNM}}$ are
the hydrogen density of WNM and CNM, respectively. We regard the WNM as a
homogeneous interclump medium and the CNM as a clump. The clump radius
$r_{\mathrm{cl}}$ is calculated by assuming it to be self-gravitating (Inoue,
2005),
$r_{\mathrm{cl}}=\frac{1}{\rho_{\mathrm{cl}}}\sqrt{\frac{15p}{4\pi
G}}=\frac{1}{\mu m_{\mathrm{p}}n_{\mathrm{H,CNM}}}\sqrt{\frac{15p}{4\pi
G}}\sim 10.4~{}\mathrm{pc},$ (42)
where $\rho_{\mathrm{cl}}=\mu m_{\mathrm{p}}n_{\mathrm{H,CNM}}$ is the clump
density, $\mu$ is the mean atomic weight, $G$ is the gravitational constant,
and $m_{\mathrm{p}}$ is the proton mass. The clumps are embedded in the
interclump medium, and we assume all clumps are spherical with a constant
radius and density. In this approximation, we use the mass absorption and
scattering coefficients and the scattering asymmetry parameter of the dust
grains, $k_{\mathrm{abs}}$, $k_{\mathrm{scat}}$, and $g_{\mathrm{d}}$
respectively, averaged over the dust size distribution calculated by the Asano
model:
$\displaystyle k_{\mathrm{abs}}=\frac{\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\pi a^{2}Q_{\mathrm{abs}}(a)f(a)\,\mathrm{d}a}{\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}m_{\mathrm{d}}(a)f(a)\,\mathrm{d}a},$ (43)

$\displaystyle k_{\mathrm{scat}}=\frac{\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\pi a^{2}Q_{\mathrm{scat}}(a)f(a)\,\mathrm{d}a}{\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}m_{\mathrm{d}}(a)f(a)\,\mathrm{d}a},$ (44)

$\displaystyle g_{\mathrm{d}}=\frac{\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}g(a)\,\pi a^{2}Q_{\mathrm{scat}}(a)f(a)\,\mathrm{d}a}{\int^{a_{\mathrm{max}}}_{a_{\mathrm{min}}}\pi a^{2}Q_{\mathrm{scat}}(a)f(a)\,\mathrm{d}a},$ (45)
where $f(a)$ is the dust number distribution, $Q_{\mathrm{abs}}(a)$ and
$Q_{\mathrm{scat}}(a)$ are the absorption and scattering efficiencies, and
$g(a)$ is the scattering asymmetry parameter of a grain. In this model,
$Q_{\mathrm{abs}}(a)$, $Q_{\mathrm{scat}}(a)$, and $g(a)$ are calculated with
the Mie theory (Bohren & Huffman, 1983). We adopt Draine & Lee (1984) and Laor
& Draine (1993) for silicate and graphite, and Li & Draine (2001) for PAHs, as
the optical parameters. The mass extinction coefficient is defined as
$k_{\mathrm{d}}=k_{\mathrm{abs}}+k_{\mathrm{scat}}.$ (46)
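Given tabulated efficiencies on a size grid, Equations (43)–(46) are
straightforward quadratures; a sketch with trapezoidal integration (an
implementation choice, not from the original) is:

```python
import numpy as np

def _trapz(y, x):
    """Simple trapezoidal quadrature."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def averaged_dust_properties(a, f_a, Q_abs, Q_scat, g, s):
    """Size-averaged opacities and asymmetry parameter, Eqs. (43)-(46).
    a, f_a: size grid [cm] and number distribution; Q_abs, Q_scat, g:
    efficiencies and asymmetry on the same grid; s: bulk density [g/cm^3]."""
    geo = np.pi * a**2                     # geometric cross-section
    m_d = (4.0 / 3.0) * np.pi * a**3 * s   # grain mass, Eq. (17)
    mass_norm = _trapz(m_d * f_a, a)
    k_abs = _trapz(geo * Q_abs * f_a, a) / mass_norm    # Eq. (43)
    k_scat = _trapz(geo * Q_scat * f_a, a) / mass_norm  # Eq. (44)
    g_d = (_trapz(g * geo * Q_scat * f_a, a)
           / _trapz(geo * Q_scat * f_a, a))             # Eq. (45)
    return k_abs, k_scat, k_abs + k_scat, g_d           # k_d from Eq. (46)
```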
In the MGA, we replace the optical properties, namely the extinction
coefficient per unit length $\kappa$, the scattering albedo $\omega$, and the
scattering asymmetry parameter $g$, with effective ones. The optical depth of
a clump relative to the interclump medium is
$\tau_{\mathrm{cl}}=(\rho_{\mathrm{cl}}-\rho_{\mathrm{icm}})k_{\mathrm{d}}Dr_{\mathrm{cl}},$
(47)
where $D$ is the dust-to-gas mass ratio calculated by the Asano model. The
extinction coefficient per unit length of the medium due to clumps is
$\kappa_{\mathrm{mg}}=n_{\mathrm{cl}}\pi
r^{2}_{\mathrm{cl}}P_{\mathrm{int}}(\tau_{\mathrm{cl}})=\frac{3f_{\mathrm{cl}}}{4r_{\mathrm{cl}}}P_{\mathrm{int}}(\tau_{\mathrm{cl}}),$
(48)
where $n_{\mathrm{cl}}$ is the number density of clumps and $f_{\mathrm{cl}}$
is the clump filling fraction,
$f_{\mathrm{cl}}=\frac{n_{\mathrm{H}}-n_{\mathrm{H,WNM}}}{n_{\mathrm{H,CNM}}-n_{\mathrm{H,WNM}}}.$
(49)
We assume that the mean hydrogen number density in the galaxy,
$n_{\mathrm{H}}$, has a constant value of 1 $\mathrm{cm^{-3}}$.
$P_{\mathrm{int}}(\tau)$ is the interaction probability of parallel light with
a sphere of optical depth $\tau$, represented as
$P_{\mathrm{int}}(\tau)=1-\frac{1}{2\tau^{2}}+\left(\frac{1}{\tau}+\frac{1}{2\tau^{2}}\right)e^{-2\tau}.$
(50)
This equation is obtained by integrating the light incident on the sphere in
all directions and taking the ratio to the case where the optical depth of the
sphere is zero (see appendix C of Varosi & Dwek (1999) for details). The
extinction coefficient of interclump medium is
$\kappa_{\mathrm{icm}}=k_{\mathrm{d}}D\rho_{\mathrm{icm}}.$ (51)
Thus, the effective extinction coefficient is expressed as
$\kappa_{\mathrm{eff}}=\kappa_{\mathrm{mg}}+\kappa_{\mathrm{icm}}.$ (52)
The scattering albedo of a clump is
$\omega_{\mathrm{cl}}=\omega_{\mathrm{d}}P_{\mathrm{esc}}(\tau_{\mathrm{cl}},\omega_{\mathrm{d}}),$
(53)
where $\omega_{\mathrm{d}}=k_{\mathrm{scat}}/k_{\mathrm{d}}$ is the scattering
albedo of ordinary grains averaged over the grain size distribution, and
$P_{\mathrm{esc}}(\tau,\omega)=\frac{\frac{3}{4\tau}P_{\mathrm{int}}(\tau)}{1-\omega\left[1-\frac{3}{4\tau}P_{\mathrm{int}}(\tau)\right]},$
(54)
is the photon escape probability from a sphere. The effective scattering
albedo is
$\omega_{\mathrm{eff}}=\frac{\omega_{\mathrm{cl}}\kappa_{\mathrm{mg}}+\omega_{\mathrm{d}}\kappa_{\mathrm{icm}}}{\kappa_{\mathrm{eff}}}.$
(55)
The light entering a clump is scattered by the dust inside it and escapes in
various directions. The optical parameter indicating the direction in which
light escapes is called the asymmetry parameter of the clump, defined as
$g_{\mathrm{cl}}=\langle\cos\theta_{\mathrm{esc}}\rangle$, where
$\theta_{\mathrm{esc}}$ is the angle between the incident and escape
directions. $g_{\mathrm{cl}}$ is obtained by fitting the Monte Carlo results
of Varosi & Dwek (1999) and is represented by the following empirical formula,
$g_{\mathrm{cl}}(\tau_{\mathrm{cl}},\omega_{\mathrm{cl}},g_{\mathrm{d}})=g_{\mathrm{d}}-C\left(1-\frac{1+e^{-B/A}}{1+e^{(\tau_{\mathrm{cl}}-B)/A}}\right),$
(56)
where
$\displaystyle A\equiv 1.5+4g_{\mathrm{d}}^{3}+2\omega_{\mathrm{d}}\sqrt{g_{\mathrm{d}}}\exp(-5g_{\mathrm{d}}),$ (57)

$\displaystyle B\equiv 2-g_{\mathrm{d}}(1-g_{\mathrm{d}})-2\omega_{\mathrm{d}}g_{\mathrm{d}},$ (58)

$\displaystyle C\equiv\frac{1}{3-\sqrt{2g_{\mathrm{d}}}-2\omega_{\mathrm{d}}g_{\mathrm{d}}(1-g_{\mathrm{d}})}.$ (59)
The effective asymmetry parameter is
$g_{\mathrm{eff}}=\frac{g_{\mathrm{cl}}\kappa_{\mathrm{mg}}+g_{\mathrm{d}}\kappa_{\mathrm{icm}}}{\kappa_{\mathrm{eff}}}.$
(60)
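The chain from the clump optical depth to the effective medium properties
(Equations (47)–(60)) can be condensed into one function; the sketch below
assumes cgs inputs and is meant to show the data flow, not to reproduce the
published code.

```python
import numpy as np

def P_int(tau):
    """Interaction probability of a sphere of optical depth tau (Eq. 50)."""
    return (1.0 - 1.0 / (2.0 * tau**2)
            + (1.0 / tau + 1.0 / (2.0 * tau**2)) * np.exp(-2.0 * tau))

def P_esc(tau, omega):
    """Photon escape probability from a sphere (Eq. 54)."""
    p = 0.75 * P_int(tau) / tau
    return p / (1.0 - omega * (1.0 - p))

def mga_effective(k_d, omega_d, g_d, D, rho_cl, rho_icm, r_cl, f_cl):
    """Effective extinction, albedo, and asymmetry of the clumpy ISM in
    the mega-grain approximation (Eqs. 47-49, 51-53, 55-60)."""
    tau_cl = (rho_cl - rho_icm) * k_d * D * r_cl               # Eq. (47)
    kappa_mg = 0.75 * f_cl / r_cl * P_int(tau_cl)              # Eq. (48)
    kappa_icm = k_d * D * rho_icm                              # Eq. (51)
    kappa_eff = kappa_mg + kappa_icm                           # Eq. (52)
    omega_cl = omega_d * P_esc(tau_cl, omega_d)                # Eq. (53)
    omega_eff = (omega_cl * kappa_mg
                 + omega_d * kappa_icm) / kappa_eff            # Eq. (55)
    # Clump asymmetry parameter, Varosi & Dwek (1999) fit (Eqs. 56-59).
    A = 1.5 + 4.0 * g_d**3 + 2.0 * omega_d * np.sqrt(g_d) * np.exp(-5.0 * g_d)
    B = 2.0 - g_d * (1.0 - g_d) - 2.0 * omega_d * g_d
    C = 1.0 / (3.0 - np.sqrt(2.0 * g_d) - 2.0 * omega_d * g_d * (1.0 - g_d))
    g_cl = g_d - C * (1.0 - (1.0 + np.exp(-B / A))
                      / (1.0 + np.exp((tau_cl - B) / A)))
    g_eff = (g_cl * kappa_mg + g_d * kappa_icm) / kappa_eff    # Eq. (60)
    return kappa_eff, omega_eff, g_eff
```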
### 2.5 Radiative transfer in a one-dimensional galaxy
Figure 4: The geometry of a one-dimensional plane-parallel galaxy.
We assume a one-dimensional plane-parallel galaxy along the $z$-axis for
solving the radiative transfer, as shown in Figure 4. We set two kinds of disks
in the model. One is a gas+dust disk containing young stars (disk 1). Stars are
born in cold, dense regions (CNM), so we assume young stars are surrounded by
clumps, while disk 1 is also filled with the interclump medium. Disk 1 has a
constant density of stars, gas, and grains. The optical depth $\tau$ is defined with
constant effective extinction coefficient $\kappa_{\mathrm{eff}}$ as
$\mathrm{d}\tau=-\kappa_{\mathrm{eff}}\,\mathrm{d}z,$ (61)
with $\tau=0$ at $z=h_{\mathrm{d}}$ and
$\tau=\kappa_{\mathrm{eff}}h_{\mathrm{d}}$ at $z=0$, where $2h_{\mathrm{d}}$
is the thickness of disk 1. The other disk contains only exponentially and
smoothly distributed old stars (disk 2). The thickness of disk 2 is
$4h_{\mathrm{d}}$, twice that of disk 1. These two disks
are stacked so that their centers are aligned. In the above condition,
radiative transfer is formulated as,
$\mu\frac{\mathrm{d}{I(\tau,\mu)}}{\mathrm{d}{\tau}}=-I(\tau,\mu)+S(\tau,\mu),$
(62)
where $I(\tau,\mu)$ is the specific intensity at $\tau$ and
$\mu\equiv\cos\theta$, with $\theta$ the angle between the ray and the
$z$-axis. The source function $S$ is represented as
$S(\tau,\mu)=\frac{\eta_{\ast}(\tau)}{\kappa_{\mathrm{eff}}}+\omega_{\mathrm{eff}}\int^{1}_{-1}I(\tau,\mu^{\prime})\Phi(g_{\mathrm{eff}},\mu,\mu^{\prime})\mathrm{d}\mu^{\prime},$
(63)
where $\eta_{\ast}$ is the stellar emissivity and $\Phi$ is the scattering
phase function. Here we adopt the Henyey-Greenstein phase function (Henyey &
Greenstein, 1941).
The first term on the right-hand side of Equation (63) represents the
intensity of light emitted by a star that escapes from the clump where the
star was born. The second term represents the integral, over the incoming
directions, of the light scattered by dust grains into the direction $\mu$.
The boundary conditions in this galaxy at $z=0$ and $z=h_{\mathrm{d}}$ are
$\displaystyle I(\tau=\kappa_{\mathrm{eff}}h_{\mathrm{d}},\mu)$
$\displaystyle=I(\tau=\kappa_{\mathrm{eff}}h_{\mathrm{d}},-\mu),$ (64)
$\displaystyle I(\tau=0,\mu<0)$
$\displaystyle=-\frac{\int^{\infty}_{h_{\mathrm{d}}}\eta_{\ast}(z)\,\mathrm{d}z}{\mu}.$
(65)
The stellar emissivity is normalized by
$\int^{\infty}_{-\infty}\eta_{\ast}(z)\mathrm{d}z=1.$ (66)
The intrinsic emissivity from young stars in disk 1,
$\eta^{\mathrm{young}}_{\ast}$, is $1/(2h_{\mathrm{d}})$ for $|z|\leq
h_{\mathrm{d}}$ and zero for $|z|>h_{\mathrm{d}}$ because it is normalized by
Equation (66). The energy emitted by a young star is absorbed by the dust in
the clump surrounding the star. Since the escape probability from the clump is
given by Equation (54), the emissivity from young stars is represented as
$\eta^{\mathrm{young}}_{\ast}=\begin{cases}P_{\mathrm{esc}}(\tau_{\mathrm{cl}},\omega_{\mathrm{d}})/2h_{\mathrm{d}}&(|z|\leq
h_{\mathrm{d}})\\ 0&(|z|>h_{\mathrm{d}})\end{cases}.$ (67)
The old stars in disk 2 are distributed exponentially along the $z$-axis. From
the normalization in Equation (66), the emissivity from the old stars is
$\eta_{\mathrm{\ast}}^{\mathrm{old}}(z)=\frac{e^{-|z|/2h_{\mathrm{d}}}}{4h_{\mathrm{d}}}.$
(68)
The total stellar emissivity at $z$ is represented as
$\eta_{\ast}(z)=f_{\mathrm{y}}(t)\eta_{\mathrm{\ast}}^{\mathrm{young}}(z)+(1-f_{\mathrm{y}}(t))\eta_{\mathrm{\ast}}^{\mathrm{old}}(z),$
(69)
where $f_{\mathrm{y}}(t)$ is the luminosity fraction emitted by young stars at
age $t$, calculated by
$f_{\mathrm{y}}(t)=\frac{\int^{\mathrm{min}[t_{\mathrm{y}},t]}_{0}\int^{\mathrm{Z}_{\mathrm{max}}(t-t^{\prime})}_{0}\mathrm{SFR}(t-t^{\prime})L_{\lambda}^{\mathrm{SSP}}(t^{\prime},\mathrm{Z}(t-t^{\prime}))\,\mathrm{dZ}\,\mathrm{d}t^{\prime}}{L_{\lambda}(t)}.$
(70)
In the radiative transfer calculation, we solve Equations (62) and (63)
iteratively until the ratio of the source function to its value in the
previous iteration, in all directions on the galaxy surface, converges to
within $10^{-10}$.
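To illustrate the iteration, the sketch below integrates Equation (62) as
written, applies the boundary conditions (64)-(65), and repeats the formal
solution until the source function stops changing. The first-order
discretisation, the precomputed phase matrix, the $\mu$ grid on $[-1,1]$, and
all names are our assumptions, not the implementation actually used in this
paper.

```python
import numpy as np

def iterate_source_function(tau_max, mu_grid, w_mu, eta_over_kappa, flux_above,
                            omega_eff, phase, n_tau=200, tol=1e-10,
                            max_iter=5000):
    """Source-function iteration for Eqs. (62)-(65). `eta_over_kappa` is the
    emissivity term of Eq. (63) on the tau grid, `flux_above` is the integral
    in Eq. (65), and `phase[k, j]` is an assumed precomputed phase matrix
    coupling directions mu'(k) and mu(j). mu = 0 is assumed absent."""
    tau = np.linspace(0.0, tau_max, n_tau)
    dtau = tau[1] - tau[0]
    n_mu = len(mu_grid)
    I = np.zeros((n_tau, n_mu))
    S = np.tile(eta_over_kappa[:, None], (1, n_mu))
    for _ in range(max_iter):
        for j, mu in enumerate(mu_grid):
            if mu < 0:                                # Eq. (65): given at tau=0
                I[0, j] = -flux_above / mu
                for i in range(1, n_tau):             # integrate Eq. (62) inwards
                    I[i, j] = I[i - 1, j] + dtau * (S[i - 1, j] - I[i - 1, j]) / mu
        for j, mu in enumerate(mu_grid):
            if mu > 0:                                # Eq. (64): mirror at midplane
                k = int(np.argmin(np.abs(mu_grid + mu)))
                I[-1, j] = I[-1, k]
                for i in range(n_tau - 2, -1, -1):    # integrate back outwards
                    I[i, j] = I[i + 1, j] - dtau * (S[i + 1, j] - I[i + 1, j]) / mu
        S_new = eta_over_kappa[:, None] + omega_eff * (I * w_mu) @ phase
        converged = np.max(np.abs(S_new - S)
                           / np.maximum(np.abs(S_new), 1e-300)) < tol
        S = S_new
        if converged:
            break
    return tau, I, S
```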
### 2.6 Dust temperature distribution
The UV and optical photons emitted by stars heat dust grains. The heated dust
grains release the energy by emission from MIR to FIR wavelength photons.
Large grains have an equilibrium temperature determined by the stellar
radiation field. In contrast, very small grains cannot establish radiative
equilibrium and do not have a stable equilibrium temperature (Draine &
Anderson, 1985; Draine & Li, 2001; Li & Draine, 2001; Takeuchi et al., 2003,
2005; Horn et al., 2007). Since they have small heat capacities, they are
easily heated by photons and then rapidly cooled. Thus, the (instantaneous)
temperature of very small grains is inevitably stochastic, and we calculate the
temperature distribution by Monte Carlo simulation.
#### 2.6.1 Stochastic heating
The rate at which a dust grain absorbs photons in the energy range
$[E,E+\mathrm{d}E]$ and time interval $[t,t+\mathrm{d}t]$ is expressed as
$\mathrm{d}p(a,\lambda)=\pi
a^{2}Q_{\mathrm{abs}}(a,\lambda)\bar{u}_{\lambda}\frac{\lambda^{3}}{h_{\mathrm{p}}^{2}c}\mathrm{d}E\mathrm{d}t,$
(71)
where $\bar{u}_{\lambda}$ is the mean energy density per wavelength in a
galaxy, $h_{\mathrm{p}}$ is the Planck constant, and $c$ is the speed of light
(e.g., Draine & Anderson, 1985; Takeuchi et al., 2003, 2005). In reality, the
energy density depends strongly on the spatial position in a galaxy; however,
accounting for a different energy density at every position would make the
computation time enormous. Thus, we use the mean energy density
$\bar{u}_{\lambda}$, which is calculated in the same way as Fioc &
Rocca-Volmerange (2019):
$L_{\lambda}^{0}-L_{\mathrm{obs}}=c\bar{u}_{\lambda}k_{\mathrm{abs}}M_{\mathrm{d}},$
(72)
where $L_{\lambda}^{0}$ and $L_{\mathrm{obs}}$ are the intrinsic stellar
luminosity and the observed luminosity calculated by the radiative transfer.
Therefore, the mean energy density is represented as
$\bar{u}_{\lambda}=\frac{L_{\lambda}^{0}-L_{\mathrm{obs}}}{ck_{\mathrm{abs}}M_{\mathrm{d}}}.$
(73)
Equation (71) can be regarded as a probability density distribution if the
time interval $\mathrm{d}t$ is appropriately small. For each dust size,
$\mathrm{d}t$ is determined so that the maximum absorption probability among
all wavelengths is 0.01,
$\mathrm{d}t(a)=0.01\left[\max_{\lambda}\left(\pi
a^{2}Q_{\mathrm{abs}}(a,\lambda)\bar{u}_{\lambda}\frac{\lambda^{3}}{h_{\mathrm{p}}^{2}c}\mathrm{d}E\right)\right]^{-1}.$
(74)
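As a sketch of how Equations (71) and (74) translate into one Monte Carlo
step (the variable names and the cgs unit choices are ours):

```python
import numpy as np

H_P = 6.626e-27      # Planck constant [erg s]
C_LIGHT = 2.998e10   # speed of light [cm/s]
rng = np.random.default_rng(0)

def absorption_probability(a, lam, q_abs, u_bar, dE, dt):
    """Per-bin photon absorption probability dp of Eq. (71); `lam`,
    `q_abs`, `u_bar` and `dE` are arrays over the wavelength grid."""
    return np.pi * a**2 * q_abs * u_bar * lam**3 / (H_P**2 * C_LIGHT) * dE * dt

def time_step(a, lam, q_abs, u_bar, dE):
    """Eq. (74): dt chosen so the largest per-wavelength probability is 0.01."""
    rate = np.pi * a**2 * q_abs * u_bar * lam**3 / (H_P**2 * C_LIGHT) * dE
    return 0.01 / rate.max()

def mc_heating_step(E_grain, lam, dp):
    """One Monte Carlo step: each wavelength bin absorbs a photon when a
    uniform draw falls below dp, adding h*c/lambda to the enthalpy (Eq. 75)."""
    hit = rng.random(lam.size) < dp
    return E_grain + np.sum(H_P * C_LIGHT / lam[hit])
```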
For simplicity, we assume that the energy of an absorbed photon is totally
used to heat the dust grains, represented as
$E(T+\Delta T)=E(T)+\frac{h_{\mathrm{p}}c}{\lambda},$ (75)
where $E(T)$ is the enthalpy of dust grains at temperature $T$ and $\Delta T$
is the increment of temperature. We adopt the Debye model for calculating the
enthalpy of dust grains (Li & Draine, 2001). The enthalpy of silicate and
graphite grains are
$\displaystyle E_{\mathrm{sil}}(T)$
$\displaystyle=(N_{\mathrm{atom}}-2)k\left[2f_{2}\left(\frac{T}{500~{}\mathrm{K}}\right)+f_{3}\left(\frac{T}{1500~{}\mathrm{K}}\right)\right],$
(76) $\displaystyle E_{\mathrm{gra}}(T)$
$\displaystyle=(N_{\mathrm{C}}-2)k\left[f_{2}\left(\frac{T}{863~{}\mathrm{K}}\right)+2f_{2}\left(\frac{T}{2504~{}\mathrm{K}}\right)\right],$
(77)
where
$f_{n}(x)\equiv n\int^{1}_{0}\frac{y^{n}\,\mathrm{d}y}{\exp(y/x)-1}.$ (78)
The subscripts 'sil' and 'gra' represent the silicate and graphite grains,
respectively. Equation (78) is the $n$-dimensional Debye function.
$N_{\mathrm{atom}}$ and $N_{\mathrm{C}}$ are the numbers of atoms in a grain;
they are expressed as
$N=\dfrac{\frac{4}{3}\pi a^{3}\rho N_{\mathrm{A}}}{M},$ (79)
where $\rho$ is the mass density, $M$ is the molar mass, and $N_{\mathrm{A}}$
is the Avogadro constant. For carbonaceous (graphite or PAH) grains,
$\rho=2.26~\mathrm{g/cm^{3}}$ and $M=12.0~\mathrm{g/mol}$ (Draine & Lee,
1984). For silicate grains, $\rho=3.50~\mathrm{g/cm^{3}}$ and
$M=172.25~\mathrm{g/mol}$ (Li & Draine, 2001). For polycyclic aromatic
hydrocarbon (PAH) grains, we consider the C-C bond modes to be the same as in
graphite, and a C-H bond mode component is added to Equation (77),
$E_{\mathrm{pah}}(T)=E_{\mathrm{gra}}+\frac{\mathrm{H}}{\mathrm{C}}N_{\mathrm{C}}\sum^{3}_{j=1}\left(\frac{h_{\mathrm{p}}\nu_{j}}{\exp(h_{\mathrm{p}}\nu_{j}/kT)-1}\right).$
(80)
The index $j$ represents the C-H out-of-plane bending modes
($\nu_{1}/c=886~{}\mathrm{cm^{-1}}$), in-plane bending modes
($\nu_{2}/c=1161~{}\mathrm{cm^{-1}}$), and stretching modes
($\nu_{3}/c=3030~{}\mathrm{cm^{-1}}$), respectively (Draine & Li, 2001).
$\frac{\mathrm{H}}{\mathrm{C}}$ is the hydrogen-to-carbon ratio. We adopt the
following empirical formula (Li & Draine, 2001),
$\frac{\mathrm{H}}{\mathrm{C}}=\begin{cases}0.5&(N_{\mathrm{C}}<25)\\\
\frac{0.5}{\sqrt{N_{\mathrm{C}}/25}}&(25<N_{\mathrm{C}}<100)\\\
0.25&(N_{\mathrm{C}}>100)\end{cases}.$ (81)
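The enthalpy functions above can be sketched as follows. Note that Equations
(76)-(77) as printed carry units of erg K$^{-1}$; in the sketch we multiply
each Debye term by its Debye temperature, which we read as the intended form
since it recovers the Dulong-Petit limit $3(N-2)kT$ at high temperature. That
restoration, like the names below, is our assumption.

```python
import numpy as np
from scipy.integrate import quad

K_B = 1.381e-16  # Boltzmann constant [erg/K]

def f_debye(n, x):
    """n-dimensional Debye function, Eq. (78)."""
    val, _ = quad(lambda y: y**n / np.expm1(y / x), 0.0, 1.0)
    return n * val

def enthalpy_silicate(T, n_atom):
    """Debye-model silicate enthalpy, Eq. (76), Theta factors restored."""
    return (n_atom - 2) * K_B * (2 * 500.0 * f_debye(2, T / 500.0)
                                 + 1500.0 * f_debye(3, T / 1500.0))

def enthalpy_graphite(T, n_c):
    """Graphite enthalpy, Eq. (77), Theta factors restored."""
    return (n_c - 2) * K_B * (863.0 * f_debye(2, T / 863.0)
                              + 2 * 2504.0 * f_debye(2, T / 2504.0))

def h_to_c(n_c):
    """Empirical H/C ratio of PAHs, Eq. (81)."""
    if n_c < 25:
        return 0.5
    if n_c < 100:
        return 0.5 / np.sqrt(n_c / 25.0)
    return 0.25
```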
#### 2.6.2 Dust cooling
The emission of a dust grain with radius $a$ is formulated as
$4\pi\epsilon(T,a)=4\pi\left(\pi a^{2}\right)\int
Q_{\mathrm{abs}}(\lambda)\frac{2h_{\mathrm{p}}c^{2}}{\lambda^{5}}\frac{\mathrm{d}\lambda}{\exp\left(\frac{h_{\mathrm{p}}c}{\lambda
kT}\right)-1},$ (82)
where $T$ is the temperature of the dust grain and $\epsilon(T,a)$ is the
emission power per unit time per unit solid angle. If dust grains do not
absorb energy while cooling, the emitted energy is balanced by the change in
internal energy, and the following equation holds,
$\frac{\mathrm{d}E(T,a)}{\mathrm{d}T}\frac{\mathrm{d}T}{\mathrm{d}t}=-4\pi\epsilon(T,a).$
(83)
This equation cannot be solved analytically, but we can obtain the temperature
variation by numerical calculation.
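A minimal sketch of this numerical integration, assuming callables for the
emitted power of Equation (82) and for the heat capacity
$\mathrm{d}E/\mathrm{d}T$ (all names are ours):

```python
from scipy.integrate import solve_ivp

def cooling_curve(T0, t_span, emitted_power, heat_capacity):
    """Integrate Eq. (83): dT/dt = -4*pi*eps(T, a) / (dE/dT).
    `emitted_power(T)` should return 4*pi*eps(T, a) from Eq. (82) and
    `heat_capacity(T)` should return dE/dT for the grain in question."""
    def rhs(t, T):
        return [-emitted_power(T[0]) / heat_capacity(T[0])]
    sol = solve_ivp(rhs, t_span, [T0], rtol=1e-8, atol=1e-8)
    return sol.t, sol.y[0]
```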
#### 2.6.3 Result of dust temperature distribution
Figure 5 shows the resulting dust temperature distribution of silicate grains.
Figure 5: Temperature distribution of the several grain sizes of silicate
calculated by Monte Carlo calculation. The blue, orange, green, red, and
purple curves indicate $3.98\times 10^{-8}$, $1.26\times 10^{-7}$, $3.98\times
10^{-7}$, $1.26\times 10^{-6}$, and $3.98\times 10^{-5}$ cm grains,
respectively.
The galaxy is the face-on ($\mu=1$) MW-like galaxy model at the age of 13 Gyr.
The condition of the galaxy is the same as in §2.2.6. The radius of the
galaxy, $R_{\mathrm{gal}}$, is 10 kpc, and the scale height of dust,
$h_{\mathrm{d}}$, is 150 pc, which is the typical scale height of cold dust in
the MW (e.g., Binney & Merrifield, 1998).
The temperatures of small grains are very widely distributed, from 1 to
4,000 K. When the grain size becomes larger, the temperature range becomes
narrower and approaches the equilibrium temperature. The equilibrium
temperature is represented by Draine & Lee (1984) and Takeuchi et al. (2003) as
$T_{\mathrm{eq}}\simeq\left(\frac{h_{\mathrm{p}}c}{\pi
k}\right)\left[\frac{945u}{960\pi(2\pi Aa)h_{\mathrm{p}}c}\right]^{1/6},$ (84)
and
$u\equiv\int^{\infty}_{0}u_{\lambda}\mathrm{d}\lambda,$ (85)
where $h_{\mathrm{p}}$ is the Planck constant, and we adopt
$A_{\mathrm{sil}}=1.34\times 10^{-3}$ cm for silicate grains (Drapatz &
Michel, 1977) and $A_{\mathrm{C}}=3.20\times 10^{-3}$ cm for carbonaceous
grains (Draine & Lee, 1984). When the grain size is $3.98\times 10^{-5}$ cm,
the equilibrium temperature of the grain is about 19 K by Equation (84),
consistent with the result of the Monte Carlo calculation.
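Equation (84) is straightforward to evaluate; the sketch below assumes the
integrated energy density $u$ of Equation (85) is supplied from the radiative
transfer result.

```python
import numpy as np

H_P, C_LIGHT, K_B = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def t_equilibrium(u, a, A):
    """Equilibrium grain temperature, Eq. (84); u in erg/cm^3, grain
    radius a in cm, A the material constant (A_sil or A_C above)."""
    return (H_P * C_LIGHT / (np.pi * K_B)) * (
        945.0 * u / (960.0 * np.pi * (2.0 * np.pi * A * a) * H_P * C_LIGHT)
    ) ** (1.0 / 6.0)
```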
Figure 6 shows the temperature distribution of graphite. Graphite grains have
a broader temperature distribution than the silicate grains in Figure 5. The
difference comes from the difference in internal energy between silicate and
graphite.
Figure 6: Temperature distribution of the several grain sizes of graphite
grain. Calculation parameters and color coordinates are the same as Figure 5.
We show the temperature distribution of PAHs in Figure 7. Comparing the PAH
temperature distribution with that of graphite, PAHs stay in a narrower
temperature range because PAHs have an additional term in the equation of
internal energy (Equation 80). Almost the same behavior is seen in the results
of Draine & Li (2007). From the temperature distributions, some dust grains
might exceed the sublimation temperature, which is 1500 K or higher (e.g.,
Baskin & Laor, 2018). If we assume that grains above 1500 K have sublimated
and calculate the effect by removing them from the galaxy, the mass of the
sublimated grains is only a few percent of the total dust mass. Thus, the
effect of the sublimation on the result is negligible, and we do not consider
the sublimation temperature in our model for simplicity.
Figure 7: Temperature distribution of the several grain sizes of PAHs.
Calculation parameters are the same as Figure 5. The blue, orange, green, and
red curves indicate $3.98\times 10^{-8}$, $1.26\times 10^{-7}$, $3.98\times
10^{-7}$, $1.00\times 10^{-6}$ cm grains, respectively.
### 2.7 Dust radiation
The dust radiation depends on the temperature distribution
$\frac{\mathrm{d}{P_{i}(a)}}{\mathrm{d}{T}}$ calculated by the method of the
above sections. The monochromatic luminosity of a dust grain of species $i$
(silicate, graphite, neutral PAH, or ionized PAH) is expressed as
$L_{i}^{\mathrm{grain}}(a,\lambda)=4\pi a^{2}\pi\int
Q_{\mathrm{abs}}^{i}(\lambda)B_{\lambda}(T)\frac{\mathrm{d}{P_{i}(a)}}{\mathrm{d}{T}}~{}\mathrm{d}T,$
(86)
where $B_{\lambda}$ is the blackbody radiation and $Q_{\mathrm{abs}}^{i}$ is
the absorption coefficient of dust species $i$. Total luminosity at wavelength
$\lambda$ is represented as,
$L(\lambda)=\sum_{i}Q_{\mathrm{abs}}^{i}(\lambda)\int
L_{i}^{\mathrm{grain}}(a,\lambda)f_{i}(a)~{}\mathrm{d}a.$ (87)
$f_{i}(a)$ is the dust number distribution of dust species $i$ from the dust
evolution model.
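The sketch below evaluates Equation (86) on a temperature grid; for Equation
(87) we apply the absorption coefficient only once, reading its reappearance
in (87) as already contained in (86) — that reading, and all names, are our
assumptions.

```python
import numpy as np

H_P, C_LIGHT, K_B = 6.626e-27, 2.998e10, 1.381e-16  # cgs constants

def planck(lam, T):
    """Planck function B_lambda(T) in cgs units."""
    return 2 * H_P * C_LIGHT**2 / lam**5 / np.expm1(H_P * C_LIGHT / (lam * K_B * T))

def grain_luminosity(a, lam, q_abs, T_grid, dP_dT):
    """Eq. (86): monochromatic luminosity of one grain of radius `a`,
    averaging the Planck function over the temperature distribution."""
    B = planck(lam[None, :], T_grid[:, None])            # shape (n_T, n_lam)
    return 4 * np.pi * a**2 * np.pi * q_abs * np.trapz(B * dP_dT[:, None],
                                                       T_grid, axis=0)

def total_luminosity(a_grid, lam, q_abs_i, f_i, T_grid, dP_dT_i):
    """Eq. (87) for one dust species: integrate the grain luminosities over
    the size distribution f_i(a); a loop over species would then sum them."""
    L = np.array([grain_luminosity(a, lam, q_abs_i, T_grid, dP_dT_i[k])
                  for k, a in enumerate(a_grid)])        # shape (n_a, n_lam)
    return np.trapz(L * f_i[:, None], a_grid, axis=0)
```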
## 3 Results: Milky Way-like galaxy model SED
In Figure 8, we show the result of our model for a face-on ($\mu=1$) MW-like
galaxy (§2.2.6) at the age of 13 Gyr (the same setting as §2.6.3).
Figure 8: The result of our SED model with MW-like galaxy parameters at an age
of 13 Gyr. The black curve represents the overall emission of the galaxy.
Other color curves express each dust species (blue: ionized PAH, orange:
neutral PAH, green: silicate, red: graphite).
Each curve in Figure 8 represents the corresponding emission species. At a
wavelength of 912 $\mathrm{\mathring{A}}$, we see the cutoff of the Lyman
break. The UV to near-IR wavelength region is dominated by stellar emission.
Numerous PAH lines are prominent in the mid-IR, and the far-IR range is
dominated by the continuum emission from large graphite grains. The emission
of silicate is weaker than that of graphite at wavelengths of
$200~\mathrm{\mu m}$ or less, and is effective only in the longer wavelength
range. The temperatures of graphite and silicate fitted by a grey body are
28 K and 26 K, respectively. The difference in emission and temperature
between graphite and silicate is caused by the number of grains.
Figure 9 shows the time evolution of our galaxy SED model with the MW-like
galaxy model parameters.
Figure 9: The evolution of the SED of a MW-like galaxy. Parameters are the
same as Figure 8. Blue, orange, green, red, and purple curves indicate age of
100 Myr, 1, 5, 10, 13 Gyr, respectively.
The purple curve represents the SED of a MW-like galaxy at the age of 13 Gyr,
as Figure 8. Blue, orange, green, and red curves represent the age of 100 Myr,
1, 5, and 10 Gyr, respectively. Figure 9 shows that the UV region emitted by
stars monotonically decreases with the evolution. Since we assume a closed
box, the gas mass decreases monotonically as it is consumed by star formation.
The SFR is proportional to the gas mass, hence the SFR also decreases
monotonically, and with it the UV radiation. The overview of the time
evolution of the SED is as follows.
* •
<100 Myr
* –
Stellar emission dominates the SED and PAH emission does not exist yet.
* •
100 Myr–1 Gyr
* –
Stellar emission still dominates the SED, but the dust emission including PAH
gradually becomes prominent.
* •
1–5 Gyr
* –
Dust emission dominates the SED and dust emission becomes strongest at this
age.
* •
5–13 Gyr
* –
The emission both from stars and dust gradually decreases, along with the
decline of the star formation rate.
The details of the SED evolution are explained below.
At 100 Myr, since only a very small amount of dust has been produced, stellar
radiation is not attenuated, and dust radiation is weak. In particular, PAHs
are not produced in young galaxies, hence the MIR radiation is very faint. In
an SED model that assumes a constant size distribution, without considering
the evolution of the dust size distribution (e.g., Schurer et al., 2009), many
PAHs appear even in such young galaxies, and different conclusions are
deduced.
The evolution of metallicity, dust mass and bolometric luminosity for each
component are shown in Figure 10.
Figure 10: The evolution of metallicity, dust mass, and bolometric luminosity
of each component of the galaxy. Parameters are set to be the same as Figure
8. Metallicity, dust mass, and bolometric luminosities are normalized by solar
metallicity $Z_{\odot}=0.02$ (Anders & Grevesse, 1989), maximum value of it,
and overall bolometric luminosity, respectively. The calculation was performed
with the age of the logarithmic scale bin.
The dust mass is normalized with respect to its maximum value. Dust mass and
luminosity are tightly correlated. Here we adopt $Z_{\odot}=0.02$ (Anders &
Grevesse, 1989). At 1 Gyr, the dust mass is gradually increasing, and along
with this, the IR radiation from dust becomes prominent. Because the PAH mass
increases through dust evolution in the ISM, their characteristic mid-infrared
line emission can be seen.
The dust emission becomes strongest at 3 Gyr if we adopt the star formation
timescale $\tau_{\mathrm{SF}}=3$ Gyr. As predicted, in the MW-like model, when
the metallicity exceeds 0.1 $Z_{\odot}$, the metal accretion process becomes
effective and the dust mass increases (Asano et al., 2013a). Star formation is
active, but the UV continuum from young stars is strongly attenuated due to
the increase of dust mass.
After 3 Gyr, the dust mass decreases due to the destruction by SN shocks and
astration. Dust radiation also decreases with the age of the galaxy. This is
not only due to the reduction of dust mass, but also due to the decline of the
UV light from young stars to heat dust grains.
Focusing on the metallicity, it reaches 1.6 $Z_{\odot}$ at 13 Gyr, which is
larger than the solar metallicity. This is due to the assumption of the closed
box model. In the closed box model, there is no inflow of gas and outflow of
ISM and the metallicity increases monotonically. However, considering the
infall model, the metallicity is reduced because the ISM is diluted by the
inflow of gas (Erb et al., 2006).
We should note the difference in the evolution of each species of dust grains
in Figure 10. The increase of graphite emission is more gradual than that of
the other components. When the metallicity of a galaxy exceeds the critical
metallicity for accretion onto the dust surface, the dust mass and emission
rise sharply (Asano et al., 2013a). Shattering produces small dust grains and
makes the surface area of dust larger, and consequently leads to the boost of
the accretion efficiency. However, since we regard almost all small
carbonaceous grains as PAHs, the mass of graphite grains does not show a
discontinuous increase. The bolometric luminosity of dust emission is
dominated by graphite in all epochs.
## 4 Discussion
### 4.1 Effect of star formation timescale
The effect of star formation (SF) timescale $\tau_{\mathrm{SF}}$ is shown in
Figure 11.
Figure 11: The effect of star formation timescale $\tau_{\mathrm{SF}}$ on the
galaxy SED. The geometrical model parameters are the same as Figure 8. The
galaxy age is increasing from left to right, SF timescale increases from top
to bottom. The age and SF timescale are written on each plot. Black, orange,
green, red thick curves represent overall, graphite, silicate, and PAHs
luminosity, respectively. Blue thin curve is an intrinsic (unattenuated)
stellar emission.
In Figure 11, the SEDs are calculated with different SF timescales and ages
from Figure 8, while the geometrical parameters are kept the same. The galaxy
age increases from left to right across the panels (100 Myr, 1, and 10 Gyr),
and the SF timescale increases from top to bottom ($\tau_{\mathrm{SF}}=$ 0.5,
1, 5 Gyr). Three characteristic trends are observed in Figure 11. First, the
UV light emitted by stars and the IR light emitted by graphite and silicate
grains at the age of 100 Myr decrease with increasing $\tau_{\mathrm{SF}}$.
This is because the age of the galaxy is small compared with the SF timescale,
and the longer the SF timescale, the fewer stars have formed. In these young
galaxies, PAHs are not yet produced, and PAH emission is not observed in the
model for any $\tau_{\mathrm{SF}}$. This indicates that the dust mass in early
galaxies is dominated by production from stars instead of accretion processes.
Second, the overall bolometric luminosities tend to be stronger when the age
of the galaxy is equal to the SF timescale. In this evolutionary phase, the
SFR is still large and a large amount of dust exists in the galaxy.
Lastly, when the age of the galaxy is older than the SF timescale, the galaxy
has very weak stellar emission due to the consumption of most of the gas in
the ISM, which is the ingredient of star formation. The dust emission is also
very weak in the galaxy because of both the decreasing dust mass and the
declining UV light that heats the dust grains.
### 4.2 Effect of geometrical parameters
Figure 12 shows the effect of changing the dust scale height of galaxy
$h_{\mathrm{d}}$ for our galaxy SED model at an age of 13 Gyr.
Figure 12: Galaxy SEDs with various dust scale heights $h_{\mathrm{d}}$ at the
age of 13 Gyr. The star formation history is the same as Figure 8. Blue,
orange, and green curves represent 75, 150 (fiducial), and 300 pc,
respectively.
The SFH is the same as Figure 8. Blue, orange, and green curves represent
$h_{\mathrm{d}}=$ 75, 150 (fiducial), and 300 pc, respectively. Intrinsic
stellar radiation does not depend on $h_{\mathrm{d}}$. Since the optical depth
of the galaxy is defined as $\tau=\kappa_{\mathrm{eff}}h_{\mathrm{d}}$, $\tau$
increases as $h_{\mathrm{d}}$ increases. Then, the absorption by the dust
grain becomes stronger, and the observed UV radiation becomes weaker. Since
the energy absorbed by the dust grain increases, the radiation in the IR
region becomes stronger.
Since our SED model assumes an axisymmetric one-dimensional disk, the galaxy
has no structure in the radial direction, and the model does not determine the
radius. In reality, however, when the radius of the galaxy changes and the
volume changes, the density of the dust clumps changes and the optical depth
also changes. In our model, the optical depth depends on the clump filling
fraction (Equation (49)), and we assume that $n_{\mathrm{H}}$ is constant.
Therefore, for a galaxy whose volume implies an $n_{\mathrm{H}}$ significantly
different from 1 cm$^{-3}$, this effect should be taken into account; it is
not implemented in our model and is left for future work.
### 4.3 Effect of the ISM phase fraction
In the current model, we consider three phases in the ISM: WNM, CNM, and MC.
Figures 13 and 14 show the effect of the ISM phase fractions on the dust size
distribution. The parameters except the ISM fractions are the same as the
MW-like galaxy model. The value of the fraction of the cold region, $\eta$, is
changed while keeping the ratio in the cold region constant at
$\eta_{\mathrm{CNM}}:\eta_{\mathrm{MC}}=3:2$ in this section.
Figure 13: The dust size distribution with cold ISM region fraction $\eta=0.5$
(fiducial, solid), 1 (dashed), and 0 (dot-dashed). Blue, orange, and purple
curves are the age of 100 Myr, 1, and 13 Gyr galaxies, respectively. Note, the
100 Myr galaxy has three overlapping curves.
Solid, dashed, and dot-dashed curves represent fiducial, $\eta=1$
($\eta_{\mathrm{WNM}}=0.0$, $\eta_{\mathrm{CNM}}=0.6$, and
$\eta_{\mathrm{MC}}=0.4$), and $\eta=0$ ($\eta_{\mathrm{WNM}}=1.0$,
$\eta_{\mathrm{CNM}}=0.0$, and $\eta_{\mathrm{MC}}=0.0$) case, respectively.
For the 100 Myr galaxy, there is no difference among the three cases, since
stellar production dominates the dust size distribution and is not affected by
the ISM fractions.
For the 1 Gyr galaxy, the size distributions of the small-$\eta$ cases have a
smaller amount of grains with radii $>2\times 10^{-1}~\mathrm{\mu m}$ because
shattering is more likely to occur thanks to collisions between larger grains.
The bump at $10^{-3}$–$10^{-2}~\mathrm{\mu m}$ is generated by accretion onto
the surfaces of grains larger than $10^{-3}~\mathrm{\mu m}$. The bump is not
observed in the $\eta=0$ case, because in this case the accretion process on
grains in the cold regions is not included. The $\eta=1$ result has a larger
bump than the fiducial case. This is because shattering is not effective yet,
and the larger the fraction of cold regions, the more effective the metal
accretion. On the contrary, large amounts of intermediate-size grains
($2\times 10^{-2}$–$2\times 10^{-1}~\mathrm{\mu m}$) are found in the
small-$\eta$ case. This results from the coagulation process in the WNM.
For the 13 Gyr galaxy, only a small amount of grains is observed in the
$\eta=0$ case. When grain evolution occurs only in the WNM, the strong
shattering process generates a large amount of small grains. Small dust grains
are largely destroyed by SN shocks (Nozawa et al., 2006), hence the mass of
dust grains effectively decreases. The large bump at $10^{-1}~\mathrm{\mu m}$
is caused by the balance between strong shattering and coagulation. At grain
sizes of $<1~\mathrm{\mu m}$, the size distribution of the $\eta=1$ case has a
smaller amount of dust. This is because the shattering is weak in the $\eta=1$
case and the metal accretion does not occur as effectively as in the fiducial
case, because the WNM is not considered in the calculation of the dust
evolution. Further, the maximum grain radius in the $\eta=1$ case reaches
$>1~\mathrm{\mu m}$ owing to the weak shattering efficiency in the CNM and MC.
The effect of $\eta$ on the total dust mass of the MW-like galaxy model is
shown in Figure 14.
Figure 14: The evolution of total dust mass with $\eta=0.5$ (fiducial, orange
solid), 1 (blue dashed), and 0 (green dot-dashed).
A very small total dust mass is observed in the $\eta=0$ case because this
case does not consider any mass-increasing process other than the production
from stars. Around 1 Gyr, the $\eta=1$ case has a larger total dust mass than
the fiducial case. This is because shattering is still less effective at this
age, and metal accretion dominates the increase of the total dust mass. After
1 Gyr, the increase of the total dust mass in the $\eta=1$ case is slower than
that in the fiducial case, since the rapid increase cycle is less effective in
the $\eta=1$ case. The total mass at 13 Gyr is determined by the balance
between the destruction by SN shocks and the dust growth by metal accretion.
Figure 15 shows the galaxy SED at 13 Gyr for the three values of the cold ISM
region fraction.
Figure 15: The galaxy SED at 13 Gyr with $\eta=0.5$ (fiducial, orange solid),
1 (blue dashed), and 0 (green dot-dashed). The parameters are the same as the
MW-like galaxy model except $\eta$.
In the $\eta=0$ case, the SED shows weaker dust attenuation and dust radiation
than the fiducial case, since the dust mass at all radii is smaller than in
the fiducial case (Figure 14). Though the difference in total dust mass
between the fiducial and the $\eta=1$ case is small, the size distributions of
the two cases differ greatly. The $\eta=1$ case has many large dust grains and
few small dust grains. The attenuation at a wavelength of
$0.1~\mathrm{\mu m}$ is dominated mainly by grains with radii
$a<1~\mathrm{\mu m}$. As large grains have large heat capacities, the
radiation from large grains is weaker than that from small grains. Therefore,
the $\eta=1$ case has weaker attenuation in the UV region and weaker radiation
in the IR region than the fiducial case.
### 4.4 Effect of the coagulation threshold
Coagulation can occur when the relative velocity of grains is slower than the
threshold velocity $v_{\mathrm{coag}}$. However, we do not adopt the
coagulation threshold, in order to reproduce the dust size distribution of the
MW (Asano et al., 2013a; Nozawa et al., 2015). If $v_{\mathrm{coag}}$ is
adopted in the dust model, the small-radius grains, which have lower relative
velocities, are more likely to coagulate. Conversely, the large-radius grains
have large relative velocities and cannot coagulate. Hirashita & Yan (2009)
show that the grain radius increases only up to 0.01–0.1 $\mathrm{\mu m}$,
because grains with radii $a>0.1~\mathrm{\mu m}$ have relative velocities
larger than $v_{\mathrm{coag}}$. Therefore, a lower coagulation threshold
velocity suppresses the effect of coagulation. First, we show the effect of
$v_{\mathrm{coag}}$ on the total dust grain mass in Figure 16. The dashed and
solid lines represent the fiducial case (no $v_{\mathrm{coag}}$) and the
adopted $v_{\mathrm{coag}}$ case (we call it the suppressed coagulation case).
Galaxy parameters are the same as §2.2.6 except $v_{\mathrm{coag}}$. We
calculate the coagulation velocity threshold with the same formula as
Hirashita & Yan (2009). $v_{\mathrm{coag}}$ between grains 1 and 2 is
represented as
$v_{\mathrm{coag}}=21.4\left[\frac{a^{3}_{1}+a^{3}_{2}}{(a_{1}+a_{2})^{3}}\right]^{1/2}\frac{\gamma^{5/6}}{E^{1/3}R_{1,2}^{5/6}s^{1/2}},$
(88)
where subscripts 1 and 2 represent the values for grains 1 and 2,
$R_{1,2}\equiv a_{1}a_{2}/(a_{1}+a_{2})$ is the reduced radius of the grains,
$\gamma$ is the surface energy per unit area, and $E$ is related to the
Poisson ratios ($\nu_{1}$ and $\nu_{2}$) and the Young moduli ($E_{1}$ and
$E_{2}$) by $1/E\equiv(1-\nu_{1})^{2}/E_{1}+(1-\nu_{2})^{2}/E_{2}$. The values
of $\gamma$, $\nu$, and $E$ are 25 $\mathrm{erg/cm^{2}}$, 0.17, and $5.4\times
10^{11}~\mathrm{dyn/cm^{2}}$ for silicate, and 12 $\mathrm{erg/cm^{2}}$, 0.5,
and $3.4\times 10^{10}~\mathrm{dyn/cm^{2}}$ for graphite, from Chokshi et al.
(1993).
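For reference, a direct transcription of Equation (88) follows. We read the
symbol $s$, which is not defined in the text, as the bulk material density of
the grains, following Hirashita & Yan (2009) — an assumption on our part, as
are all names below.

```python
import numpy as np

def v_coag(a1, a2, gamma, E, s):
    """Coagulation threshold velocity between grains 1 and 2, Eq. (88).
    gamma: surface energy per unit area [erg/cm^2]; E: reduced elastic
    modulus [dyn/cm^2]; s: bulk material density [g/cm^3] (our reading).
    cgs units are assumed throughout."""
    R12 = a1 * a2 / (a1 + a2)                          # reduced radius
    shape = np.sqrt((a1**3 + a2**3) / (a1 + a2)**3)
    return 21.4 * shape * gamma**(5/6) / (E**(1/3) * R12**(5/6) * np.sqrt(s))
```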
Figure 16: The effect of coagulation threshold velocity for total dust mass.
Solid and dashed curves represent suppressed coagulation and fiducial case,
respectively. The galaxy properties are the same as the MW-like galaxy model
except $v_{\mathrm{coag}}$.
Figure 16 shows that the effect of $v_{\mathrm{coag}}$ on the total dust mass
is very small. Coagulation itself decreases the total surface area of grains
and suppresses the cross section of the metal accretion. On the other hand, as
the grain size increases, shattering is more likely to occur, and
smaller-radius grains increase. Since these effects are balanced, coagulation
only slightly suppresses the increase in total dust mass. Coagulation rarely
occurs in young galaxies (age $<1$ Gyr), and becomes effective after the
shattering process becomes effective. Therefore, when the coagulation becomes
effective, the rapid increase in the total dust grain mass has already
finished, and the coagulation does not significantly affect the total mass,
but only changes the size distribution of the dust grains.
Second, we show the effect of $v_{\mathrm{coag}}$ on the dust size
distribution in Figure 17.
Figure 17: The effect of $v_{\mathrm{coag}}$ for dust size distribution. Blue,
orange, green, and purple curves represent the dust grain size distribution
with the age of 100 Myr, 1 Gyr, 5 Gyr, and 13 Gyr, respectively. Solid and
dashed lines represent suppressed coagulation and fiducial case, respectively.
At the ages of 100 Myr and 1 Gyr, since effective dust evolution in the ISM
has not started yet, there is no difference in the dust size distribution
between the two cases. On the other hand, $v_{\mathrm{coag}}$ strongly affects
the dust distribution after 1 Gyr. $v_{\mathrm{coag}}$ suppresses the
coagulation between larger grains and determines the maximum radius of the
dust grains. Coagulation shifts the size distribution to larger sizes; thus,
in the suppressed coagulation case, the dust size distribution is biased
toward smaller sizes, and the slope also differs from the MRN distribution.
Therefore, adopting a low $v_{\mathrm{coag}}$ in the MW-like galaxy model
leads the dust size distribution to differ from the MRN distribution, and thus
we do not adopt $v_{\mathrm{coag}}$ in our model.
### 4.5 Comparison between closed-box and infall model
The comparison between the results of the closed-box and infall models is
shown in Figure 18. We adopt the following equation for the infall rate
(Inoue, 2011):
2011):
$\frac{\mathrm{d}{M_{\mathrm{infall}}}}{\mathrm{d}{t}}=\frac{M_{\mathrm{infall}}}{\tau_{\mathrm{infall}}}\exp\left(-\frac{t}{\tau_{\mathrm{infall}}}\right),$
(89)
where $\tau_{\mathrm{infall}}$ is the timescale of infall, and
$M_{\mathrm{infall}}$ is the total mass that flows into the galaxy by infall
as $t\rightarrow\infty$. For the infall model, the initial mass of the galaxy
is set to zero, and primordial (zero-metallicity) gas falls onto the galaxy
with $M_{\mathrm{infall}}=10^{11}~M_{\odot}$ and $\tau_{\mathrm{infall}}=6$ Gyr.
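A small sketch of Equation (89), together with a toy gas budget, illustrates
why the SFR of the infall model peaks near $\tau_{\mathrm{infall}}$. The
explicit Euler update and the neglect of mass returned by stars are
simplifications of ours, not part of the model above.

```python
import numpy as np

def infall_rate(t, M_infall=1e11, tau_infall=6.0):
    """Gas infall rate of Eq. (89); t and tau_infall in Gyr, M in M_sun."""
    return M_infall / tau_infall * np.exp(-t / tau_infall)

def gas_history(t_grid, tau_SF=3.0, closed_box=True, M0=1e11):
    """Toy gas budget: dM_gas/dt = -M_gas/tau_SF (+ infall), explicit Euler
    on a uniform time grid. The SFR is then M_gas/tau_SF (Schmidt law)."""
    dt = t_grid[1] - t_grid[0]
    M = M0 if closed_box else 0.0
    masses = []
    for t in t_grid:
        masses.append(M)
        M += (-M / tau_SF + (0.0 if closed_box else infall_rate(t))) * dt
    return np.array(masses)
```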
Figure 18: The comparison of MW-like model galaxy SED at age
$t_{\mathrm{gal}}=13$ Gyr with infall and closed-box model. The orange curve
represents closed-box model (same as Figure 8), the blue curve represents
infall model with infall time scale $\tau_{\mathrm{infall}}=6$ Gyr.
The time evolution of the SFR and dust mass is plotted in Figure 19. The star
formation history is very different between the two models.
Figure 19: The time evolution of dust mass and SFR of the galaxy with closed-
box and infall model (same galaxies as Figure 18). Colors represent the
difference of quantities: the ratio of $\mathrm{SFR}(t_{\mathrm{gal}})$ and
maximum value of it (red), and the ratio of dust mass $M_{\mathrm{dust}}$ and
maximum value of it (blue). The solid and dashed curves represent closed-box
and infall models, respectively.
While the SFR of the closed-box model monotonically decreases, the SFR of the
infall model gradually increases and reaches a peak at $t_{\mathrm{gal}}=5$
Gyr (close to the infall timescale $\tau_{\mathrm{infall}}=6$ Gyr), after
which the SFR decreases gradually. Since the SFR at 13 Gyr of the infall model
is higher than that of the closed-box model, the fraction of younger stars is
increased in the infall model, and hence the luminosity in the UV region
becomes stronger. On the other hand, the continuum at near-IR wavelengths
emitted from old stars becomes slightly weaker due to the smaller amount of
old stars. The metallicity is 1.6 $Z_{\odot}$ in the closed-box model, while
it is 0.86 $Z_{\odot}$ in the infall model, which is closer to the solar
metallicity.
In the infall model, the peak of the dust mass comes later because of the
different star formation history. This leads to an increase of the IR emission
emitted by dust grains. In general, the infall model tends to delay the
evolution of the galaxy.
### 4.6 Radio emission
Our model does not include radio emission because, in a normal galaxy, the
luminosity of the radio region is only $<10^{-4}$ of the overall bolometric
luminosity (Condon, 1992). Shortward of $\sim 1$ mm, radio emission is swamped
by dust emission for normal galaxies. The radio is mainly emitted by
synchrotron radiation from relativistic electrons accelerated in supernova
remnants and by free-free emission from H II regions, which are ionized by the
radiation of massive young stars (Klein et al., 1988; Carlstrom & Kronberg,
1991). Since both radio sources are associated with SN explosions, their
radiation is considered to depend on the SN rate (Condon, 1992). In addition,
many galaxies with strong synchrotron radiation from jets of active galactic
nuclei have been observed (e.g., Carilli et al., 1991; Laing & Bridle, 2002);
we will take these into account in future work.
## 5 Conclusions
In this paper, we constructed a new galaxy SED model including the dust
evolution in galaxies consistent with the chemical evolution (Asano et al.,
2013a, b, 2014). The dust model considers several evolutionary processes: the
dust production by AGB stars and SNe II, the destruction by SN shocks in the
ISM, the grain growth by metal accretion onto grain surfaces, and the two
types of grain-grain collision, shattering and coagulation. The stellar
radiation is calculated by PÉGASE.2 (Fioc & Rocca-Volmerange, 1999). Based on
this, we constructed a radiative transfer model with a one-dimensional
plane-parallel geometry equipped with the mega-grain approximation for fast
computation (Varosi & Dwek, 1999; Inoue, 2005). For the radiation from dust,
we take into account the stochastic heating of dust grains by Monte Carlo
simulation. As a fiducial model, we assumed the Schmidt law with star
formation time scale $\tau_{\mathrm{SF}}=3~{}\mathrm{Gyr}$, the Salpeter IMF
(Salpeter, 1955), and the closed box model. The ISM phase fractions were set
as $\eta_{\mathrm{WNM}}=0.5$, $\eta_{\mathrm{CNM}}=0.3$, and
$\eta_{\mathrm{MC}}=0.2$, scale height of dust is $h_{\mathrm{d}}=150$ pc, and
the threshold of coagulation velocity is removed. Our model indicates that
early galaxies ($\sim 100$ Myr) produce a small amount of dust. In particular,
the PAHs, which dominate the MIR wavelength region, have not been produced yet
at this age. The SED at the age of 100 Myr is therefore dominated by stellar
emission. Then the dust mass and emission explosively increase at the age of
about 3 Gyr. Subsequently, the dust mass decreases, and the emission from both
the stars and the dust declines along with the star formation rate. Since this
model treats the evolution of dust appropriately, we can apply it to any age
of a galaxy as far as the model assumptions are valid.
## Acknowledgements
First of all, we offer our sincere thanks to the anonymous referee for her/his
enormous effort to read through the article and invaluably important comments
and suggestions that improved the quality of the paper very much. We are
grateful to the colleagues in the Lab for fruitful discussions and comments.
We thank H. Kobayashi and A.K. Inoue for helpful comments on the coding of
dust evolution model. This work has been supported by JSPS Grants-in-Aid for
Scientific Research (17H01110, 19H05076, and 21H01128). This work has also
been supported in part by the Sumitomo Foundation Fiscal 2018 Grant for Basic
Science Research Projects (180923), and the Collaboration Funding of the
Institute of Statistical Mathematics “New Development of the Studies on Galaxy
Evolution with a Method of Data Science”.
## Data Availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
## References
* Alongi et al. (1993) Alongi M., Bertelli G., Bressan A., Chiosi C., Fagotto F., Greggio L., Nasi E., 1993, A&AS, 97, 851
* Anders & Grevesse (1989) Anders E., Grevesse N., 1989, Geochim. Cosmochim. Acta, 53, 197
* Arendt et al. (2010) Arendt R. G., et al., 2010, ApJ, 725, 585
* Arons & Max (1975) Arons J., Max C. E., 1975, Astrophys. J., 196, L77
* Asano et al. (2013a) Asano R. S., Takeuchi T. T., Hirashita H., Inoue A. K., 2013a, Earth, Planets Sp., 65, 213
* Asano et al. (2013b) Asano R. S., Takeuchi T. T., Hirashita H., Nozawa T., 2013b, MNRAS, 432, 637
* Asano et al. (2014) Asano R. S., Takeuchi T. T., Hirashita H., Nozawa T., 2014, Mon. Not. R. Astron. Soc., 440, 134
* Baskin & Laor (2018) Baskin A., Laor A., 2018, Mon. Not. R. Astron. Soc., 474, 1970
* Bertelli et al. (1994) Bertelli G., Bressan A., Chiosi C., Fagotto F., Nasi E., 1994, A&AS, 106, 275
* Bianchi & Schneider (2007) Bianchi S., Schneider R., 2007, Mon. Not. R. Astron. Soc., 378, 973
* Binney & Merrifield (1998) Binney J., Merrifield M., 1998, Galactic Astronomy
* Biscaro & Cherchneff (2016) Biscaro C., Cherchneff I., 2016, Astron. Astrophys., 589, 1
* Bohren & Huffman (1983) Bohren C. F., Huffman D. R., 1983, Absorption and scattering of light by small particles. New York: Wiley, https://ui.adsabs.harvard.edu/abs/1983asls.book.....B
* Borkowski et al. (2006) Borkowski K. J., et al., 2006, ApJ, 642, L141
* Bressan et al. (1993) Bressan A., Fagotto F., Bertelli G., Chiosi C., 1993, A&AS, 100, 647
* Burgarella et al. (2020) Burgarella D., Nanni A., Hirashita H., Theulé P., Inoue A. K., Takeuchi T. T., 2020, Astron. Astrophys., 637, A32
* Calura et al. (2008) Calura F., Pipino A., Matteucci F., 2008, A&A, 479, 669
* Carilli et al. (1991) Carilli C. L., Perley R. A., Dreher J. W., Leahy J. P., 1991, ApJ, 383, 554
* Carlstrom & Kronberg (1991) Carlstrom J. E., Kronberg P. P., 1991, ApJ, 366, 422
* Cazaux et al. (2005) Cazaux S. M., Caselli P., Walmsley M., Tielens A. G., 2005, Proc. Int. Astron. Union, 1, 325
* Ceccarelli et al. (2018) Ceccarelli C., Viti S., Balucani N., Taquet V., 2018, Mon. Not. R. Astron. Soc., 476, 1371
* Chokshi et al. (1993) Chokshi A., Tielens A. G. G. M., Hollenbach D., 1993, ApJ, 407, 806
* Condon (1992) Condon J. J., 1992, ARA&A, 30, 575
* Conroy (2013) Conroy C., 2013, ARA&A, 51, 393
* De Looze et al. (2020) De Looze I., et al., 2020, Mon. Not. R. Astron. Soc., 496, 3668
* De Vis et al. (2017) De Vis P., et al., 2017, Mon. Not. R. Astron. Soc., 471, 1743
* De Vis et al. (2019) De Vis P., et al., 2019, Astron. Astrophys., 623, A5
* Draine (2009) Draine B. T., 2009, Space Sci. Rev., 143, 333
* Draine & Anderson (1985) Draine B. T., Anderson N., 1985, ApJ, 292, 494
* Draine & Lee (1984) Draine B. T., Lee H. M., 1984, ApJ, 285, 89
* Draine & Li (2001) Draine B. T., Li A., 2001, ApJ, 551, 807
* Draine & Li (2007) Draine B. T., Li A., 2007, ApJ, 657, 810
* Drapatz & Michel (1977) Drapatz S., Michel K., 1977, A&A, 56, 353
* Dwek (1998) Dwek E., 1998, ApJ, 501, 643
* Dwek & Scalo (1980) Dwek E., Scalo J. M., 1980, ApJ, 239, 193
* Dwek et al. (2007) Dwek E., Galliano F., Jones A. P., 2007, ApJ, 662, 927
* Erb et al. (2006) Erb D. K., Shapley A. E., Pettini M., Steidel C. C., Reddy N. A., Adelberger K. L., 2006, Astrophys. J., 644, 813
* Evans (1994) Evans A., 1994, The dusty universe. Wiley, Chichester, https://ui.adsabs.harvard.edu/abs/1994duun.book.....E
* Fagotto et al. (1994a) Fagotto F., Bressan A., Bertelli G., Chiosi C., 1994a, A&AS, 104, 365
* Fagotto et al. (1994b) Fagotto F., Bressan A., Bertelli G., Chiosi C., 1994b, A&AS, 105, 29
* Fagotto et al. (1994c) Fagotto F., Bressan A., Bertelli G., Chiosi C., 1994c, A&AS, 105, 39
* Ferrara et al. (2016) Ferrara A., Viti S., Ceccarelli C., 2016, Mon. Not. R. Astron. Soc. Lett., 463, L112
* Field et al. (1969) Field G. B., Goldsmith D. W., Habing H. J., 1969, ApJ, 155, L149
* Fioc & Rocca-Volmerange (1999) Fioc M., Rocca-Volmerange B., 1999, arXiv e-prints
* Fioc & Rocca-Volmerange (2019) Fioc M., Rocca-Volmerange B., 2019, A&A, 623, A143
* Gall et al. (2011a) Gall C., Andersen A. C., Hjorth J., 2011a, Astron. Astrophys., 528, A13
* Gall et al. (2011b) Gall C., Andersen A. C., Hjorth J., 2011b, Astron. Astrophys., 528, A14
* Gall et al. (2014) Gall C., et al., 2014, Nature, 511, 326
* Ginolfi et al. (2018) Ginolfi M., Graziani L., Schneider R., Marassi S., Valiante R., Dell’Agli F., Ventura P., Hunt L. K., 2018, Mon. Not. R. Astron. Soc., 473, 4538
* Girardi et al. (1996) Girardi L., Bressan A., Chiosi C., Bertelli G., Nasi E., 1996, A&AS, 117, 113
* Heger et al. (2003) Heger A., Fryer C. L., Woosley S. E., Langer N., Hartmann D. H., 2003, ApJ, 591, 288
* Hellyer (1970) Hellyer B., 1970, Obs., 90, 55
* Henyey & Greenstein (1941) Henyey L. C., Greenstein J. L., 1941, Nature, 147, 613
* Hiraki & Hirak (2008) Hiraki A., Hirak H., 2008, Rev. Mex. Física, 54, 44
* Hirashita (2012) Hirashita H., 2012, MNRAS, 422, 1263
* Hirashita & Aoyama (2019) Hirashita H., Aoyama S., 2019, MNRAS, 482, 2555
* Hirashita & Ferrara (2002) Hirashita H., Ferrara A., 2002, Mon. Not. R. Astron. Soc., 337, 921
* Hirashita & Kobayashi (2013) Hirashita H., Kobayashi H., 2013, Earth, Planets Sp., 65, 1083
* Hirashita & Kuo (2011) Hirashita H., Kuo T. M., 2011, MNRAS, 416, 1340
* Hirashita & Yan (2009) Hirashita H., Yan H., 2009, MNRAS, 394, 1061
* Hirashita et al. (2005) Hirashita H., Nozawa T., Kozasa T., Ishii T. T., Takeuchi T. T., 2005, Mon. Not. R. Astron. Soc., 357, 1077
* Hobson & Padman (1993) Hobson M. P., Padman R., 1993, MNRAS, 264, 161
* Hollenbach & McKee (1979) Hollenbach D., McKee C. F., 1979, ApJS, 41, 555
* Horn et al. (2007) Horn K., Perets H. B., Biham O., 2007, arXiv e-prints
* Inoue (2003) Inoue A. K., 2003, Publ. Astron. Soc. Japan, 55, 901
* Inoue (2005) Inoue A. K., 2005, Mon. Not. R. Astron. Soc., 359, 171
* Inoue (2011) Inoue A. K., 2011, Earth, Planets Sp., 63, 1027
* Jones & Nuth (2011) Jones A. P., Nuth J. A., 2011, A&A, 530, 1
* Jones et al. (1994) Jones A. P., Tielens A. G. G. M., Hollenbach D. J., McKee C. F., 1994, ApJ, 433, 797
* Jones et al. (1996) Jones A. P., Tielens A. G. G. M., Hollenbach D. J., 1996, ApJ, 469, 740
* Kalvans (2017) Kalvans J., 2017, Proc. Int. Astron. Union, 13, 374
* Klein et al. (1988) Klein U., Wielebinski R., Morsi H. W., 1988, A&A, 190, 41
* Kobayashi et al. (2006) Kobayashi C., Umeda H., Nomoto K., Tominaga N., Ohkubo T., 2006, ApJ, 653, 1145
* Koyama & Inutsuka (2002) Koyama H., Inutsuka S., 2002, in 8th Asian-Pacific Reg. Meet. Vol. II. pp 159–160
* Kozasa et al. (2009) Kozasa T., Nozawa T., Tominaga N., Umeda H., Maeda K., Nomoto K., 2009, in Cosm. Dust - Near Far. p. 43
* Kuo & Hirashita (2012) Kuo T. M., Hirashita H., 2012, Mon. Not. R. Astron. Soc. Lett., 424, 34
* Laing & Bridle (2002) Laing R. A., Bridle A. H., 2002, MNRAS, 336, 1161
* Laor & Draine (1993) Laor A., Draine B. T., 1993, ApJ, 402, 441
* Laporte et al. (2017) Laporte N., et al., 2017, ApJ, 837, L21
* Lazarian & Yan (2002) Lazarian A., Yan H., 2002, Astrophys. J., 566, L105
* Lejeune et al. (1997) Lejeune T., Cuisinier F., Buser R., 1997, A&AS, 125, 229
* Lejeune et al. (1998) Lejeune T., Cuisinier F., Buser R., 1998, Astron. Astrophys. Suppl. Ser., 130, 65
* Lesniewska & Michałowski (2019) Lesniewska A., Michałowski M. J., 2019, Astron. Astrophys., 624, 4
* Li & Draine (2001) Li A., Draine B. T., 2001, Astrophys. J., 554, 778
* Lisenfeld & Ferrara (1998) Lisenfeld U., Ferrara A., 1998, ApJ, 496, 145
* Liu & Hirashita (2019) Liu H.-M., Hirashita H., 2019, Mon. Not. R. Astron. Soc., 490, 540
* Maiolino et al. (2004) Maiolino R., Schneider R., Oliva E., Bianchi S., Ferrara A., Mannucci F., Pedani M., Roca Sogorb M., 2004, Nature, 431, 533
* Mancini et al. (2015) Mancini M., Schneider R., Graziani L., Valiante R., Dayal P., Maio U., Ciardi B., Hunt L. K., 2015, Mon. Not. R. Astron. Soc., 451, L70
* Marchenko (2006) Marchenko S. V., 2006, in Stellar Evol. Low Met. Mass Loss, Explos. Cosmol.. p. 299, https://ui.adsabs.harvard.edu/abs/2006ASPC..353..299M
* Mathis et al. (1977) Mathis J. S., Rumpl W., Nordsieck K. H., 1977, ApJ, 217, 425
* Matsuura et al. (2019) Matsuura M., et al., 2019, Mon. Not. R. Astron. Soc., 482, 1715
* McKee (1989) McKee C. F., 1989, ApJ, 345, 782
* McKee & Ostriker (1977) McKee C. F., Ostriker J. P., 1977, ApJ, 218, 148
* McKee & Ostriker (2007) McKee C. F., Ostriker E. C., 2007, Annu. Rev. Astron. Astrophys., 45, 565
* Michałowski (2015) Michałowski M. J., 2015, Astron. Astrophys., 577, 1
* Michałowski et al. (2010) Michałowski M., Watson D., Hjorth J., 2010, ApJ, 712, 942
* Morgan & Edmunds (2003) Morgan H. L., Edmunds M. G., 2003, MNRAS, 343, 427
* Nanni et al. (2020) Nanni A., Burgarella D., Theulé P., Côté B., Hirashita H., 2020, Astron. Astrophys., 641, A168
* Neufeld (1991) Neufeld D. A., 1991, ApJ, 370, L85
* Nozawa et al. (2003) Nozawa T., Kozasa T., Umeda H., Maeda K., Nomoto K., 2003, Astrophys. J., 598, 785
* Nozawa et al. (2006) Nozawa T., Kozasa T., Habe A., 2006, Astrophys. J., 648, 435
* Nozawa et al. (2007) Nozawa T., Kozasa T., Habe A., Dwek E., Umeda H., Tominaga N., Maeda K., Nomoto K., 2007, Astrophys. J., 666, 955
* Nozawa et al. (2015) Nozawa T., Asano R. S., Hirashita H., Takeuchi T. T., 2015, Mon. Not. R. Astron. Soc. Lett., 447, L16
* Ormel et al. (2009) Ormel C. W., Paszun D., Dominik C., Tielens A. G. G. M., 2009, Astron. Astrophys., 502, 845
* Ossenkopf (1993) Ossenkopf V., 1993, Astron. Astrophys., 280, 617
* Pipino et al. (2011) Pipino A., Fan X. L., Matteucci F., Calura F., Silva L., Granato G., Maiolino R., 2011, A&A, 525, A61
* Raiteri et al. (1996) Raiteri C. M., Villata M., Navarro J. F., 1996, A&A, 315, 105
* Rouillé et al. (2020) Rouillé G., Jäger C., Henning T., 2020, Astrophys. J., 892, 96
* Salpeter (1955) Salpeter E. E., 1955, ApJ, 121, 161
* Schmidt (1959) Schmidt M., 1959, ApJ, 129, 243
* Schneider et al. (2016) Schneider R., Hunt L., Valiante R., 2016, Mon. Not. R. Astron. Soc., 457, 1842
* Schurer et al. (2009) Schurer A., Calura F., Silva L., Pipino A., Granato G. L., Matteucci F., Maiolino R., 2009, MNRAS, 394, 2001
* Slavin et al. (2020) Slavin J. D., Dwek E., Mac Low M.-M., Hill A. S., 2020, Astrophys. J., 902, 135
* Takeuchi et al. (2003) Takeuchi T. T., Hirashita H., Ishii T. T., Hunt L. K., Ferrara A., 2003, Mon. Not. R. Astron. Soc., 343, 839
* Takeuchi et al. (2005) Takeuchi T. T., Ishii T. T., Nozawa T., Kozasa T., Hirashita H., 2005, MNRAS, 362, 592
* Tamura et al. (2019) Tamura Y., et al., 2019, ApJ, 874, 27
* Todini & Ferrara (2001) Todini P., Ferrara A., 2001, MNRAS, 325, 726
* Valiante et al. (2009) Valiante R., Schneider R., Bianchi S., Andersen A. C., 2009, MNRAS, 397, 1661
* Valiante et al. (2011) Valiante R., Schneider R., Salvadori S., Bianchi S., 2011, MNRAS, 416, 1916
* Varosi & Dwek (1999) Varosi F., Dwek E., 1999, ApJ, 523, 265
* Ventura et al. (2012) Ventura P., et al., 2012, MNRAS, 2357, 2345
* Ventura et al. (2013) Ventura P., Criscienzo M. D., Carini R., Antona F. D., 2013, MNRAS, 3653, 3642
* Watson et al. (2015) Watson D., Christensen L., Knudsen K. K., Richard J., Gallazzi A., Michałowski M., 2015, Nature, 519, 327
* Winters et al. (1997) Winters J. M., Fleischer A. J., Le Bertre T., Sedlmayr E., 1997, A&A, 326, 305
* Wolfire et al. (2003) Wolfire M. G., McKee C. F., Hollenbach D., Tielens A. G. G. M., 2003, Astrophys. J., 587, 278
* Woosley & Weaver (1995) Woosley S. E., Weaver T. A., 1995, Astrophys. J. Suppl. Ser., 101, 181
* Yamasawa et al. (2011) Yamasawa D., Habe A., Kozasa T., Nozawa T., Hirashita H., Umeda H., Nomoto K., 2011, ApJ, 735
* Yan et al. (2004) Yan H., Lazarian A., Draine B. T., 2004, Astrophys. J., 616, 895
* Yasuda & Kozasa (2012) Yasuda Y., Kozasa T., 2012, ApJ, 745, 159
* Zhukovska (2014) Zhukovska S., 2014, Astron. Astrophys., 562, 1
* Zhukovska et al. (2008) Zhukovska S., Gail H. P., Trieloff M., 2008, A&A, 479, 453
* da Cunha et al. (2010) da Cunha E., Eminian C., Charlot S., Blaizot J., 2010, Mon. Not. R. Astron. Soc., 403, 1894
# Asymmetric Co-teaching with Multi-view Consensus
for Noisy Label Learning
Fengbei Liu1, Yuanhong Chen1, Chong Wang1, Yu Tian2, Gustavo Carneiro3
1 Australian Institute for Machine Learning, University of Adelaide
2 Harvard Medical School, Harvard University
3 CVSSP, University of Surrey
###### Abstract
Learning with noisy labels has become an important research topic in computer
vision where state-of-the-art (SOTA) methods explore: 1) prediction
disagreement with co-teaching strategy that updates two models when they
disagree on the prediction of training samples; and 2) sample selection to
divide the training set into clean and noisy sets based on small training
loss. However, the quick convergence of co-teaching models to select the same
clean subsets, combined with the relatively fast overfitting of noisy labels,
may induce the wrong selection of noisy-label samples as clean, leading to an
inevitable confirmation bias that damages accuracy. In this paper, we
introduce our noisy-label learning approach, called Asymmetric Co-teaching
(AsyCo), which introduces a novel prediction disagreement that produces more
consistently divergent results between the co-teaching models, and a new
sample selection approach that does not require the small-loss assumption,
enabling better robustness to confirmation bias than previous methods. More
specifically, the new prediction disagreement is achieved with the use of
different training strategies, where one model is trained with multi-class
learning and the other with multi-label learning. Also, the new sample
selection is based on multi-view consensus, which uses the label views from
training labels and model predictions to divide the training set into clean
and noisy for training the multi-class model and to re-label the training
samples with multiple top-ranked labels for training the multi-label model.
Extensive experiments on synthetic and real-world noisy-label datasets show
that AsyCo improves over current SOTA methods.
## 1 Introduction
Figure 1: Comparison of methods Decoupling [20], Co-teaching+ [36], JoCoR
[29], and our AsyCo. AsyCo co-teaches the multi-class model A and the multi-
label model B with different training strategies (denoted by the different
colours of A&B). The training samples for A and B, represented by the green
and red arrows, are formed by our proposed multi-view consensus that uses
label views from the training set and model predictions to estimate the
variables $\mathbf{w}$ and $\hat{\mathbf{y}}$, which selects clean/noisy
samples for training A and iteratively re-labels samples for training B,
respectively.
Deep neural networks (DNNs) have achieved remarkable success in many fields,
including computer vision [15, 11], natural language processing (NLP) [7, 35]
and medical image analysis [17, 28]. However, the methods from those fields
often require massive amounts of high-quality annotated data for supervised
training [6], which is challenging and expensive to acquire. To alleviate this
problem, some datasets have been annotated via crowdsourcing [32], from search
engines [27], or with NLP from radiology reports [28]. Although these cheaper
annotation processes enable the construction of large-scale datasets, they
inevitably introduce noisy labels for model training, resulting in DNN model
performance degradation. Therefore, novel learning algorithms are required to
robustly train DNN models when training sets contain noisy labels.
Previous methods tackle noisy-label learning from different perspectives. For
example, some approaches focus on prediction disagreement [36, 29, 20], which
rely on jointly training two models to update their parameters when they
disagree on the predictions of the same training samples. These two models
generally use the same training strategy, so even though they are trained
using samples with divergent predictions, both models will quickly converge to
select similar clean samples during training, which neutralises the
effectiveness of prediction disagreement. Other noisy-label learning methods
are based on sample selection [16, 9, 1] to find clean and noisy-label samples
that are treated differently in the training process. Sample-selection
approaches usually assume that samples with small training losses are
associated with clean labels, which is an assumption verified only at early
training stages [18, 37]. However, such assumption is unwarranted in later
training stages because DNN models can overfit any type of noisy label after a
certain number of epochs, essentially reducing the training loss for all
training samples. State-of-the-art (SOTA) noisy-label learning approaches [16]
have been designed to depend on both prediction disagreement and sample
selection methods to achieve better performance than either method alone.
Nevertheless, these SOTA methods are still affected by the fast convergence of
both models and label noise overfitting, which raises the following questions:
1) Are there more effective ways to maximise the prediction disagreement
between both models, so they consistently produce divergent results during the
training procedure? 2) Is there a sample selection approach that can better
integrate prediction disagreements than the small loss strategy?
Motivated by traditional multi-view learning [3, 26] and multi-label learning
[24], we propose a new noisy-label learning method that aims to answer the two
questions above. Our method, named Asymmetric Co-teaching (AsyCo) and depicted
in Fig. 1, is based on two models trained with different learning strategies
to maximise their prediction disagreement. One model, the classification net,
is trained with conventional multi-class learning by minimising a cross
entropy loss and provides single-class predictions; the other, the reference
net, is trained with a binary cross entropy loss to enable multi-label
learning, which is used to estimate the top-ranked labels that represent the
potentially clean candidate labels for each training sample. The original
training labels and the predictions by the classification and reference nets
enable the formation of three label views for each training sample, allowing us to
formulate the multi-view consensus that is tightly integrated with the
prediction disagreement to select clean and noisy samples for training the
multi-class model and to iteratively re-label samples with multiple top-ranked
labels for training the multi-label model. In summary, our main contributions
are:
* •
The new noisy-label co-teaching method AsyCo, designed to maximise the
prediction disagreement between the training of a multi-class and a multi-
label model; and
* •
The novel multi-view consensus that uses the disagreements between training
labels and model predictions to select clean and noisy samples for training
the multi-class model and to iteratively re-label samples with multiple top-
ranked labels for training the multi-label model.
We conduct extensive experiments on both synthetic and real-world noisy
datasets that show that AsyCo provides substantial improvements over previous
state-of-the-art (SOTA) methods.
## 2 Related Work
Prediction disagreement approaches seek to maximise model performance by
exploring the prediction disagreements between models trained from the same
training set. In general, these methods [20, 36, 29, 13] train two models
using samples that have different predictions from both models to mitigate the
problem of confirmation bias (i.e., a mistake being reinforced by further
training from the same mistake) that particularly affects single-model
training. Furthermore, the cross teaching of two models can help escape local
minima. Most of the prediction-disagreement methods also rely on sample-
selection techniques, as we explain below, but in general, they use the same
training strategy to train two models, which limits the ability of these
approaches to maximise the divergence between the models.
Sample selection approaches aim to automatically classify training samples
into clean or noisy and treat them differently during the training process.
Previous papers [18, 37] have shown that when training with noisy labels, DNNs
fit the samples with clean labels first and gradually overfit the samples
with noisy labels later. Such training loss characterisation allowed
researchers to assume that samples with clean labels have small losses,
particularly at early training stages – this is known as the small-loss
assumption. For example, M-correction [1] automatically selects clean samples
by modelling the training loss distribution with a Beta Mixture model (BMM).
Sample selection has been combined with prediction disagreement in several
works, such as Co-teaching [9] and Co-teaching+ [36] that train two networks
simultaneously, where in each mini-batch, it selects small-loss samples to be
used in the training of the other model. JoCoR [29] improves upon Co-teaching+
by using a contrastive loss to jointly train both models. DivideMix [16] has
advanced the area with a similar combination of sample selection and
prediction disagreement using semi-supervised learning, co-teaching and small-
loss detection with a Gaussian Mixture Model (GMM). InstanceGM [8] combines
graphical model with DivideMix to achieve promising results. These methods
show that sample selection based on the small-loss assumption is one of the
core components for achieving SOTA performance. However, the small loss signal
used to select samples is poorly integrated with prediction disagreement since
both models will quickly converge to produce similar loss values for all
training samples, resulting in little disagreement between models, which
increases the risk of confirmation bias.
Transition matrix methods aim to estimate a noise transition matrix to
guarantee that the classifier learned from the noisy data is consistent with
the optimal classifier [31, 22, 5]. F-correction [22] uses a two-step solution
to heuristically estimate the noise transition matrix. T-revision [31] argues
that anchor points are not necessary for estimating the transition matrix and
proposes a solution for selecting reliable samples to replace anchor points.
kMEIDTM [5] proposes an anchor-free method for estimating an instance-dependent
transition matrix by applying manifold regularization during training. The
main issue with the methods above is that it is challenging to estimate the
transition matrix accurately, particularly an instance-dependent transition
matrix that contains little support from the training set. Furthermore, real-
world scenarios often contain out-of-distribution samples that are hard to
represent in the transition matrix.
Multi-view learning (MVL) studies the integration of knowledge from different
views of the data to capture consensus and complementary information across
different views. Traditional MVL methods [3, 26] aimed to encourage the
convergence of patterns from different views. For example, Co-training [3]
uses two views of web-pages (i.e., text and hyperlinks on web-pages) to allow
the use of inexpensive unlabelled data to augment a small labelled dataset.
Considering that the quality and importance of different views could vary for
real-world applications, recent methods [10] weight the contribution of each
view based on the estimated uncertainty. In our paper, we explore this multi-
view learning strategy to select clean and noisy samples and to iteratively
re-label training samples, where the views are represented by the training
labels, and the predictions by the two models that are trained using different
learning strategies.
## 3 Method
### 3.1 Problem Definition
We denote the noisy training set as
$\mathcal{D}=\\{(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\\}_{i=1}^{|\mathcal{D}|}$,
where $\mathbf{x}_{i}\in\mathcal{X}\subset\mathbb{R}^{H\times W\times C}$ is
the input image of size $H\times W$ with $C$ colour channels, and
$\tilde{\mathbf{y}}_{i}\in\mathcal{Y}\subset\\{0,1\\}^{|\mathcal{Y}|}$ is the
one-hot (or multi-class) label representation. The goal is to learn the
classification net $n_{\theta}:\mathcal{X}\to\mathcal{L}$, parameterised by
$\theta\in\Theta$, that outputs the logits
$\mathbf{l}\in\mathcal{L}\subset\mathbb{R}^{|\mathcal{Y}|}$ for an image
$\mathbf{x}\in\mathcal{X}$. Following the prediction-disagreement strategy, we
also define the reference net denoted by $r_{\phi}:\mathcal{X}\to\mathcal{L}$,
parameterised by $\phi\in\Phi$, to be jointly trained with $n_{\theta}(.)$.
AsyCo111Algorithm in supplementary material. is based on alternating the
training of the multi-class model $n_{\theta}(.)$ and the multi-label model
$r_{\phi}(.)$, which allows the formation of three label views for the
training samples $\\{\mathbf{x}_{i}\\}_{i=1}^{|\mathcal{D}|}$: 1) the original
training label $\tilde{\mathbf{y}}_{i}$, 2) the classification net multi-class
prediction $\tilde{\mathbf{y}}^{(n)}_{i}$, and 3) the reference net multi-
label prediction $\tilde{\mathbf{y}}^{(r)}_{i}$. Using these views, we
introduce new methods to estimate the sample-selection variable $\mathbf{w}$
that classifies training samples into clean or noisy, and the re-labelling
variable $\hat{\mathbf{y}}$ that holds multiple top-ranked labels for training
samples, where $\mathbf{w}$ is used for training the multi-class model
$n_{\theta}(.)$, and $\hat{\mathbf{y}}$ for training the multi-label model
$r_{\phi}(.)$. Fig. 2 depicts AsyCo, in comparison with prediction
disagreement methods based on co-teaching and small-loss sample selection.
### 3.2 Asymmetric Co-teaching Optimisation
Figure 2: Comparison between traditional small-loss sample selection (top) and
our AsyCo, consisting of prediction disagreement between the multi-class model
A and multi-label model B (bottom). Traditional methods utilise the small-
loss assumption for classifying samples as clean or noisy, while our multi-
view sample selection uses prediction disagreements to update the sample-
selection variable $\mathbf{w}$ for classifying samples as clean, noisy or
unmatched (U) to train the classification net A. Our multi-view re-labelling
selects ambiguous samples and maximises disagreement by updating the re-
labelling variable $\mathbf{\hat{y}}$ for training the reference net B.
Our Asymmetric co-teaching optimisation trains a multi-class model with the
usual cross-entropy (CE), but the other model is trained with multi-label
learning [23], which associates samples with multiple labels and utilises binary
cross-entropy (BCE) to train each label independently. We have two goals
with the multi-label model: 1) maximise the disagreement with the multi-class
model, and 2) formulate a mechanism to find the most likely clean labels by
selecting multiple top-ranked labels of training samples. While the first goal
is motivated by the training strategy differences, the second goal is
motivated by the hypothesis that a possible cause of the overfitting of noisy
labels is the single-class constraint that forces multi-class models to fit
only one class. By removing this constraint, the true clean label is likely to
be within the top-ranked candidate labels222Training strategy visualization in
supplementary material.. Our AsyCo optimisation starts with a warmup stage of
supervised learning to train both networks with:
$\begin{split}\theta^{\dagger}&=\arg\min_{\theta}\sum_{(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathcal{D}}\ell_{\mathrm{CE}}(\tilde{\mathbf{y}}_{i},\sigma_{sm}(n_{\theta}(\mathbf{x}_{i}))),\\\
\phi^{\dagger}&=\arg\min_{\phi}\sum_{(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathcal{D}}\ell_{\mathrm{BCE}}(\tilde{\mathbf{y}}_{i},\sigma_{sg}(r_{\phi}(\mathbf{x}_{i}))),\end{split}$
(1)
where $\sigma_{sm}(.)$ and $\sigma_{sg}(.)$ are the softmax and sigmoid
activation functions, respectively, $\ell_{\mathrm{CE}}(.)$ represents the CE
loss for multi-class learning, and $\ell_{\mathrm{BCE}}$ denotes the BCE loss
for multi-label learning. The two models from (1) will provide predictions as
follows:
$\begin{split}\tilde{\mathbf{y}}_{i}^{(n)}&=\mathrm{OneHot}(n_{\theta^{\dagger}}(\mathbf{x}_{i})),\\\
\tilde{\mathbf{y}}_{i}^{(r)}&=\mathrm{TopK}(r_{\phi^{\dagger}}(\mathbf{x}_{i})),\end{split}$
(2)
where $\tilde{\mathbf{y}}_{i}^{(n)}\in\mathcal{Y}$ is the one-hot single-label
prediction by $n_{\theta^{\dagger}}(\mathbf{x}_{i})$, and
$\tilde{\mathbf{y}}_{i}^{(r)}\in\\{0,1\\}^{|\mathcal{Y}|}$ is the top-$K$
multi-label prediction of $r_{\phi^{\dagger}}(\mathbf{x}_{i})$ (i.e., the
largest $K$ values from $r_{\phi^{\dagger}}(.)$ will set
$\tilde{\mathbf{y}}_{i}^{(r)}$ to $1$ and the rest are set to $0$). However,
removing the single-class constraint from multi-class classification
inevitably weakens the model performance. Thus, we aim to extract useful
information from top-ranked candidate labels to help train $n_{\theta}$
with multi-view consensus, explained below, which uses the label views
produced by the predictions from $n_{\theta}$ and $r_{\phi}$ and the training
labels, to select samples for training $n_{\theta}$ and re-label samples for
training $r_{\phi}$.
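To make this concrete, below is a minimal PyTorch sketch of the warmup losses in Eq. (1) and the label-view extraction in Eq. (2). This is an illustration under assumptions rather than the released code: `net_n` and `net_r` stand in for any backbones returning logits of shape (batch, num_classes), and the softmax/sigmoid activations of Eq. (1) are folded into PyTorch's loss functions.

```python
import torch
import torch.nn.functional as F

def warmup_losses(net_n, net_r, x, y_tilde):
    """Eq. (1): CE loss for the classification net, BCE for the reference net.

    y_tilde: one-hot training labels, shape (batch, num_classes).
    """
    loss_ce = F.cross_entropy(net_n(x), y_tilde.argmax(dim=1))
    loss_bce = F.binary_cross_entropy_with_logits(net_r(x), y_tilde.float())
    return loss_ce, loss_bce

def label_views(net_n, net_r, x, k):
    """Eq. (2): one-hot prediction of net_n and top-K prediction of net_r."""
    with torch.no_grad():
        logits_n, logits_r = net_n(x), net_r(x)
        y_n = F.one_hot(logits_n.argmax(dim=1), num_classes=logits_n.shape[1])
        topk_idx = logits_r.topk(k, dim=1).indices
        y_r = torch.zeros_like(logits_r).scatter_(1, topk_idx, 1.0)
    return y_n, y_r
```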
### 3.3 Multi-view Consensus
One of the objectives of maximising prediction disagreement between models is
to improve sample selection accuracy for co-teaching. We propose a new sample
selection based on multi-view consensus, where each sample $\mathbf{x}_{i}$
has three label views: the single-label training label
$\tilde{\mathbf{y}}_{i}$, the single-label one-hot prediction
$\tilde{\mathbf{y}}_{i}^{(n)}$, and the multi-label top-$K$ prediction
$\tilde{\mathbf{y}}_{i}^{(r)}$. These multiple views allow us to build
training subsets given prediction disagreements, as shown in Tab. 1, where the
Agreement Degree (AG) score is defined as:
$\text{AG}(\tilde{\mathbf{y}},\tilde{\mathbf{y}}^{(n)},\tilde{\mathbf{y}}^{(r)})=\tilde{\mathbf{y}}^{\top}\tilde{\mathbf{y}}^{(n)}+{\tilde{\mathbf{y}}^{(n)}}^{\top}\tilde{\mathbf{y}}^{(r)}+\tilde{\mathbf{y}}^{\top}\tilde{\mathbf{y}}^{(r)}$
(3)
Table 1: Three possible label views: the training label
$\tilde{\mathbf{y}}_{i}$, the single-label one-hot prediction
$\tilde{\mathbf{y}}_{i}^{(n)}$, and the multi-label top-$K$ prediction
$\tilde{\mathbf{y}}_{i}^{(r)}$. The combination of these multiple views form
the subsets, defined in the first column, with agreement scores
$\text{AG}(.)$, from (3), in the last column.
Subsets | $\tilde{\mathbf{y}}^{\top}\tilde{\mathbf{y}}^{(n)}$ | ${\tilde{\mathbf{y}}^{(n)}}^{\top}\tilde{\mathbf{y}}^{(r)}$ | $\tilde{\mathbf{y}}^{\top}\tilde{\mathbf{y}}^{(r)}$ | $\text{AG}(.)$
---|---|---|---|---
Core (C) | 1 | 1 | 1 | 3
Side-Core (SC) | 0 | 1 | 1 | 2
NY | 1 | 0 | 0 | 1
NR | 0 | 1 | 0 | 1
RY | 0 | 0 | 1 | 1
Unmatched (U) | 0 | 0 | 0 | 0
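The pairwise dot products in Tab. 1 and the AG score of Eq. (3) reduce to a few tensor operations. The sketch below is our illustration, assuming all label views are encoded as 0-1 tensors of shape (batch, num_classes); since $\tilde{\mathbf{y}}$ and $\tilde{\mathbf{y}}^{(n)}$ are one-hot, every agreement term is 0 or 1.

```python
import torch

def agreements(y_tilde, y_n, y_r):
    """Pairwise view agreements and the AG score of Eq. (3)."""
    a_tn = (y_tilde * y_n).sum(dim=1)  # training label vs. net_n prediction
    a_nr = (y_n * y_r).sum(dim=1)      # net_n prediction vs. net_r top-K
    a_tr = (y_tilde * y_r).sum(dim=1)  # training label vs. net_r top-K
    return a_tn, a_nr, a_tr, a_tn + a_nr + a_tr

# Subset masks follow Tab. 1, e.g., Core requires all three agreements:
# a_tn, a_nr, a_tr, ag = agreements(y_tilde, y_n, y_r)
# core = (a_tn == 1) & (a_nr == 1) & (a_tr == 1)  # AG = 3
```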
The training of the classification net $n_{\theta}(.)$ has the goals of
producing the testing model and of maximising the disagreement with
$r_{\phi}(.)$. This training employs a semi-supervised learning strategy [2],
which requires the division of the training set into clean and noisy sets.
Unlike previous methods that rely on the small-loss assumption to classify
training samples into clean or noisy [16, 9, 1], we utilize the subsets
created by prediction disagreements from the multiple label views shown in
Tab. 1. For training $n_{\theta}(.)$, we first discard all samples in the
subset $\mathrm{Unmatched}$ given their high level of uncertainty because both
models disagree with each other and with the training label. For the remaining
samples, we seek label agreements between pairs of views beyond each model's
own prediction. More specifically, training samples are classified as clean when
$\tilde{\mathbf{y}}^{\top}\tilde{\mathbf{y}}^{(r)}=1$, which indicates that
the training label matches one of the top ranked predictions by $r_{\phi}(.)$.
Such agreement from label views $\tilde{\mathbf{y}}$ and
$\tilde{\mathbf{y}}^{(r)}$ indicates that the training label
$\tilde{\mathbf{y}}$ is within the top-ranked predictions by $r_{\phi}(.)$,
but may not match the prediction by $n_{\theta}(.)$. Therefore, classifying
such samples as clean can help maximise the disagreement with $r_{\phi}$ and
alleviate confirmation bias. The remaining samples with
$\tilde{\mathbf{y}}^{\top}\tilde{\mathbf{y}}^{(r)}=0$ are classified as noisy
because of the insufficient support by $r_{\phi}(.)$ for the training label
$\tilde{\mathbf{y}}$. Therefore, based on the criterion described above, the
classification net $n_{\theta}$ is trained with
$\\{\mathrm{C},\mathrm{SC},\mathrm{RY}\\}$ as clean and
$\\{\mathrm{NY},\mathrm{NR}\\}$ as noisy, defined by the following sample-
selection variable:
$\mathbf{w}_{i}=\left\\{\begin{array}[]{lll}+1,&\text{ if
}\text{AG}(\tilde{\mathbf{y}}_{i},\tilde{\mathbf{y}}^{(n)}_{i},\tilde{\mathbf{y}}^{(r)}_{i})>0\text{
and }\tilde{\mathbf{y}}_{i}^{\top}\tilde{\mathbf{y}}_{i}^{(r)}=1,\\\ 0,&\text{
if
}\text{AG}(\tilde{\mathbf{y}}_{i},\tilde{\mathbf{y}}^{(n)}_{i},\tilde{\mathbf{y}}^{(r)}_{i})>0\text{
and }\tilde{\mathbf{y}}_{i}^{\top}\tilde{\mathbf{y}}_{i}^{(r)}=0,\\\
-1,&\text{ if
}\text{AG}(\tilde{\mathbf{y}}_{i},\tilde{\mathbf{y}}^{(n)}_{i},\tilde{\mathbf{y}}^{(r)}_{i})=0,\end{array}\right.$
(4)
where $\mathbf{w}_{i}\in\\{+1,0,-1\\}$ denotes a clean, noisy, and unmatched
training sample, respectively.
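A minimal sketch of Eq. (4) follows, again assuming the three label views are 0-1 tensors as above; the subset comments refer to Tab. 1.

```python
import torch

def sample_selection(y_tilde, y_n, y_r):
    """Eq. (4): w = +1 (clean), 0 (noisy), -1 (unmatched, discarded)."""
    a_tr = (y_tilde * y_r).sum(dim=1)
    ag = (y_tilde * y_n).sum(dim=1) + (y_n * y_r).sum(dim=1) + a_tr
    w = torch.full_like(ag, -1)        # AG = 0: Unmatched (U)
    w[(ag > 0) & (a_tr == 1)] = 1      # clean subsets: C, SC, RY
    w[(ag > 0) & (a_tr == 0)] = 0      # noisy subsets: NY, NR
    return w
```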
The training of $n_{\theta}(.)$ is performed by
$\begin{split}\theta^{*}&=\arg\min_{\theta}\sum_{\begin{subarray}{c}(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathcal{D}\\\
\mathbf{w}_{i}=+1\end{subarray}}\ell_{CE}(\tilde{\mathbf{y}}_{i},\sigma_{sm}(n_{\theta}(\mathbf{x}_{i})))\\\
&+\lambda\sum_{\begin{subarray}{c}(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathcal{D}\\\
\mathbf{w}_{i}=0\end{subarray}}\ell_{MSE}(\upsilon(\sigma_{sm}(n_{\theta}(\mathbf{x}_{i})),T),\sigma_{sm}(n_{\theta}(\mathbf{x}_{i}))),\end{split}$
(5)
where $\upsilon(.,T)$ is a sharpening function [16] parameterised by the
temperature $T$, $\lambda$ is the weight that controls the strength of the
unsupervised learning on the noisy-labelled samples, and $\ell_{MSE}(.)$
denotes the mean square error loss function.
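As an illustration of Eq. (5), the sketch below implements the two loss terms. The power-and-renormalise form of the sharpening function follows DivideMix [16], which the paper cites for $\upsilon(.,T)$; detaching the sharpened target and passing the two subsets as separate logit tensors are our assumptions.

```python
import torch
import torch.nn.functional as F

def sharpen(p, T):
    """Temperature sharpening of probability vectors (DivideMix-style)."""
    p_t = p ** (1.0 / T)
    return p_t / p_t.sum(dim=1, keepdim=True)

def classification_net_loss(logits_clean, y_clean, logits_noisy, T, lam):
    """Eq. (5): supervised CE on w=+1 samples plus a weighted MSE
    consistency term on w=0 samples."""
    ce = F.cross_entropy(logits_clean, y_clean.argmax(dim=1))
    p = F.softmax(logits_noisy, dim=1)
    mse = F.mse_loss(p, sharpen(p, T).detach())  # sharpened target is fixed
    return ce + lam * mse
```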
The training of the reference net $r_{\phi}(.)$ has the goals of maximising
the disagreement with $n_{\theta}(.)$ using the multi-view consensus from Tab.
1, and maintaining the top-ranked labels of training samples as clean label
candidates. To achieve that, we focus on designing a new supervisory training
signal by re-labelling the samples where predictions by $n_{\theta}(.)$ and
$r_{\phi}(.)$ match (i.e.,
${\tilde{\mathbf{y}}^{(n)}}^{\top}\tilde{\mathbf{y}}^{(r)}=1$) and the
prediction by $n_{\theta}(.)$ does not match the training label
$\tilde{\mathbf{y}}$ (i.e.,
${\tilde{\mathbf{y}}}^{\top}\tilde{\mathbf{y}}^{(n)}=0$). The training samples
that meet this condition can be regarded as hard to fit by $n_{\theta}(.)$,
with the top-ranked predictions by $\tilde{\mathbf{y}}^{(r)}$ being likely to
contain the hidden clean label. The conditions above indicate that we select
samples from $\mathrm{SC}\bigcup\mathrm{NR}$ in Tab. 1 for re-labelling. For
samples in $\mathrm{SC}$, since $n_{\theta}(.)$ is trained with supervised
learning in (5), the maximisation of prediction disagreement is achieved by
re-labelling the sample to $\tilde{\mathbf{y}}^{(n)}$. For samples in
$\mathrm{NR}$, $n_{\theta}(.)$ is trained with unsupervised learning in (5),
so the prediction disagreement is maximised by re-labelling the sample to
$\tilde{\mathbf{y}}+\tilde{\mathbf{y}}^{(n)}$, forming a multi-label target.
We define the re-labelling variable $\hat{\mathbf{y}}$ to represent the new
supervisory training signal, as follows:
$\hat{\mathbf{y}}_{i}=\left\\{\begin{array}[]{lll}\tilde{\mathbf{y}}_{i}^{(n)},&\text{
if }(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathrm{SideCore},\\\
\tilde{\mathbf{y}}_{i}+\tilde{\mathbf{y}}_{i}^{(n)},&\text{ if
}(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathrm{NR},\\\
\tilde{\mathbf{y}}_{i},&\text{otherwise},\end{array}\right.$ (6)
with training of $r_{\phi}(.)$ achieved with:
$\phi^{*}=\arg\min_{\phi}\sum_{i=1}^{|\mathcal{D}|}\ell_{BCE}(\hat{\mathbf{y}}_{i},\sigma_{sg}(r_{\phi}(\mathbf{x}_{i}))).$
(7)
Note that this re-labelling is iteratively done at every epoch. The testing
procedure depends exclusively on the classification net $n_{\theta}(.)$.
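A minimal sketch of the re-labelling rule in Eq. (6) and the BCE objective of Eq. (7) is given below, assuming 0-1 label-view tensors as before; subset membership is recomputed from the pairwise agreements of Tab. 1.

```python
import torch
import torch.nn.functional as F

def relabel(y_tilde, y_n, y_r):
    """Eq. (6): SC samples take the net_n prediction; NR samples take the
    multi-label union of the training label and the net_n prediction."""
    a_tn = (y_tilde * y_n).sum(dim=1)
    a_nr = (y_n * y_r).sum(dim=1)
    a_tr = (y_tilde * y_r).sum(dim=1)
    y_hat = y_tilde.clone().float()
    sc = (a_tn == 0) & (a_nr == 1) & (a_tr == 1)   # Side-Core
    nr = (a_tn == 0) & (a_nr == 1) & (a_tr == 0)   # NR
    y_hat[sc] = y_n[sc].float()
    y_hat[nr] = (y_tilde[nr] + y_n[nr]).float()    # disjoint labels, still 0-1
    return y_hat

def reference_net_loss(logits_r, y_hat):
    """Eq. (7): BCE against the re-labelled multi-label targets."""
    return F.binary_cross_entropy_with_logits(logits_r, y_hat)
```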
## 4 Experiments
We show the results of extensive experiments on instance-dependent synthetic
noise benchmarks with datasets CIFAR10 and CIFAR100 [14] with various noise
rates and on three real-world datasets, namely: Animal-10N [27], Red Mini-
ImageNet [12] and Clothing1M [32].
### 4.1 Datasets
CIFAR10/100. For CIFAR10 and CIFAR100 [14], the training set contains 50K
images and the testing set contains 10K images of size 32 $\times$ 32 $\times$ 3.
CIFAR10 has 10 classes and CIFAR100 has 100 classes. We follow previous work
[30] for generating instance-dependent noise with rates in {0.2, 0.3, 0.4,
0.5}. Red Mini-ImageNet is proposed by [12] based on Mini-ImageNet [6]. The
images and their corresponding labels are annotated by Google Cloud Data
Labelling Service. This dataset was proposed to study real-world web-based
noisy labels. Red Mini-ImageNet has 100 classes, each containing 600
images from ImageNet. The images are resized to 32 $\times$ 32 from the
original 84 $\times$ 84 pixels to allow a fair comparison with other baselines
[33, 12]. We test our method on noise rates in {20%, 40%, 60%, 80%}.
Animal-10N is a real-world dataset proposed in [27], which contains 10 animal
species with similar appearances (wolf and coyote, hamster and guinea pig, etc.). The
training set contains 50K images and the testing set 10K, where we follow the
same setup as [27]. Clothing1M is a real-world dataset with 1M images and 14
classes. The labels are generated from surrounding text, with an estimated
noise ratio of 38.5%. We follow a common setup using a training image size of
224 $\times$ 224 pixels. The dataset also contains clean training, clean
validation and clean test sets with 50K, 14K and 10K images, respectively. We
do not use the clean training and validation sets; only the clean test set is
used for measuring model performance.
Methods | CIFAR10 | CIFAR100
---|---|---
0.2 | 0.3 | 0.4 | 0.5 | 0.2 | 0.3 | 0.4 | 0.5
CE | 75.81 | 69.15 | 62.45 | 39.42 | 30.42 | 24.15 | 21.34 | 14.42
Mixup [38] | 73.17 | 70.02 | 61.56 | 48.95 | 32.92 | 29.76 | 25.92 | 21.31
Forward [22] | 74.64 | 69.75 | 60.21 | 46.27 | 36.38 | 33.17 | 26.75 | 19.27
T-Revision [31] | 76.15 | 70.36 | 64.09 | 49.02 | 37.24 | 36.54 | 27.23 | 22.54
Reweight [19] | 76.23 | 70.12 | 62.58 | 45.46 | 36.73 | 31.91 | 28.39 | 20.23
PTD-R-V [30] | 76.58 | 72.77 | 59.50 | 56.32 | 65.33 | 64.56 | 59.73 | 56.80
Decoupling [20] | 78.71 | 75.17 | 61.73 | 50.43 | 36.53 | 30.93 | 27.85 | 19.59
Co-teaching [9] | 80.96 | 78.56 | 73.41 | 45.92 | 37.96 | 33.43 | 28.04 | 23.97
MentorNet [13] | 81.03 | 77.22 | 71.83 | 47.89 | 38.91 | 34.23 | 31.89 | 24.15
CausalNL [34] | 81.79 | 80.75 | 77.98 | 78.63 | 41.47 | 40.98 | 34.02 | 32.13
CAL [40] | 92.01 | - | 84.96 | - | 69.11 | - | 63.17 | -
kMEIDTM [5] | 92.26 | 90.73 | 85.94 | 73.77 | 69.16 | 66.76 | 63.46 | 59.18
DivideMix [16] $\theta^{(1)}$ test † | 94.62 | 94.49 | 93.50 | 89.07 | 74.43 | 73.53 | 69.18 | 57.52
Ours | 96.00 | 95.82 | 95.01 | 94.13 | 76.02 | 74.02 | 68.96 | 60.35
DivideMix [16] † | 94.80 | 94.60 | 94.53 | 93.04 | 77.07 | 76.33 | 70.80 | 58.61
Ours 2$\times n_{\theta}$ test | 96.56 | 96.11 | 95.53 | 94.86 | 78.50 | 77.32 | 73.32 | 65.96
Table 2: Test accuracy (%) of different methods on CIFAR10/100 with instance-
dependent noise [30]. Results reproduced from publicly available code are
presented with $\dagger$. Best single/ensemble inference results are labelled
with red/green.
### 4.2 Implementation
For CIFAR10/100 and Red Mini-ImageNet we use Preact-ResNet18 [11] and train it
for 200 epochs with SGD with momentum=0.9, weight decay=5e-4 and batch
size=128. The initial learning rate is 0.02 and reduced by a factor of 10
after 150 epochs. The warmup period for all three datasets is 10 epochs. We
set $\lambda=25$ in (5) for CIFAR10 and Red Mini-ImageNet, and $\lambda=100$
for CIFAR100. In (2), we set $K=1$ for CIFAR10 and $K=3$ for CIFAR100 and Red
Mini-ImageNet. These values are fixed for all noise rates. For data
augmentations, we use random cropping and random horizontal flipping for all
three datasets.
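For reproducibility, the optimiser and schedule described above map directly onto standard PyTorch components; the sketch below assumes `model` is the Preact-ResNet18 in a 200-epoch run.

```python
import torch

def build_optimizer(model):
    """SGD with momentum 0.9, weight decay 5e-4, initial lr 0.02,
    reduced by a factor of 10 after epoch 150 (of 200)."""
    opt = torch.optim.SGD(model.parameters(), lr=0.02,
                          momentum=0.9, weight_decay=5e-4)
    sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[150],
                                                 gamma=0.1)
    return opt, sched
```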
For Animal 10N, we follow a common setup used by previous methods with a
VGG-19BN [25] architecture, trained for 100 epochs with SGD with momentum=0.9,
weight decay=5e-4 and batch size=128. The initial learning rate is 0.02, and
reduced by a factor of 10 after 50 epochs. The warmup period is 10 epochs. We
set $\lambda=25$ and $K=2$. For data augmentations, we use random cropping and
random horizontal flipping.
For Clothing1M, we use ImageNet [6] pre-trained ResNet50 [11] and train it for
80 epochs with SGD with momentum=0.9, weight decay=1e-3 and batch size=32. The
warmup period is 1 epoch. The initial learning rate is set to 0.002 and
reduced by a factor of 10 after 40 epochs. Following DivideMix [16], we also
sample 1000 mini-batches from the training set to ensure the training set is
pseudo balanced. We set $K=4$. For data augmentation, we first resize the
image to 256 $\times$ 256 pixels, then random crop to 224 $\times$ 224 and
random horizontal flipping.
For the semi-supervised training of $n_{\theta}(.)$, we use MixMatch [2] from
DivideMix [16]. We also extend our method to train two $n_{\theta}(.)$ models
and use ensemble prediction at inference time, similarly to DivideMix [16]. We
denote this variant as $2\times n_{\theta}$. Our code is implemented in
PyTorch [21] and all experiments are performed on an RTX 3090 GPU333Timing
comparison of different sample-selection strategies in supplementary material.
### 4.3 Comparison with SOTA Methods
We compare our AsyCo with the following methods: 1) CE, which trains the
classification network with standard CE loss on the noisy dataset; 2) Mixup
[38], which employs mixup on the noisy dataset; 3) Forward [22], which
estimates the noise transition matrix in a two-stage training pattern; 4)
T-Revision [31], which finds reliable samples to replace anchor points for
estimating transition matrix; 5) Reweight [19], which utilizes a class-
dependent transition matrix to correct the loss function; 6) PTD-R-V [30],
which proposes a part-dependent transition matrix for accurate estimation; 7)
Decoupling [20], which trains two networks on samples whose predictions from
the two networks differ; 8) Co-teaching [9], which trains two networks and
selects small-loss samples as clean samples; 9) MentorNet [13], which utilizes
a teacher network for selecting noisy samples; 10) CausalNL [34], which
discovers a causal relationship in noisy dataset and combines it with Co-
Teaching; 11) CAL [40], which uses second-order statistics with a new loss
function; 12) kMEIDTM [5], which learns instance-dependent transition matrix
by applying manifold regularization during the training; 13) DivideMix [16],
which combines semi-supervised learning, sample selection and Co-Teaching to
achieve SOTA results; 14) FaMUS [33], which is a meta-learning method that
learns the weight of training samples to improve the meta-learning update
process; 15) Nested [4], which is a novel feature compression method that uses
nested dropout to regularize features when training with noisy labels (this
approach can be combined with existing techniques such as Co-Teaching [9]); and
16) PLC [39], a method that produces soft pseudo labels when learning
with label noise.
### 4.4 Experiment Results
Synthetic Noise Benchmarks. The experimental results of our proposed AsyCo
with instance-dependent noise on CIFAR10/100 are shown in Tab. 2. We reproduce
DivideMix [16] in this setup with single model at inference time denoted by
$\theta^{(1)}$ and also the original ensemble inference. Compared with the
best baselines, our method achieves large improvements for all noise rates. On
CIFAR10, we achieve $\approx 1.5\%$ improvements for low noise rates and
$\approx 1\%$ to $5\%$ improvements for high noise rates. For CIFAR100, we
improve between $\approx 1.5\%$ and $\approx 7\%$ for many noise rates. Note
that our result is achieved without using small-loss sample selection, which
is a fundamental technique for most noisy label learning methods [16, 9, 13].
The superior performance of AsyCo indicates that our multi-view consensus for
sample selection and top-rank re-labelling are effective when learning with
label noise.
Method | Noise rate
---|---
0.2 | 0.4 | 0.6 | 0.8
CE | 47.36 | 42.70 | 37.30 | 29.76
Mixup [38] | 49.10 | 46.40 | 40.58 | 33.58
DivideMix [16] | 50.96 | 46.72 | 43.14 | 34.50
MentorMix [12] | 51.02 | 47.14 | 43.80 | 33.46
FaMUS [33] | 51.42 | 48.06 | 45.10 | 35.50
Ours | 59.40 | 55.08 | 49.78 | 41.02
Ours 2$\times n_{\theta}$ test | 61.98 | 57.46 | 51.86 | 42.58
Table 3: Test accuracy (%) of different methods on Red Mini-ImageNet with
different noise rates. Baselines results are from FaMUS [33]. Best results
with single/ensemble inferences are labelled with red/green.
Method | Accuracy
---|---
CE | 79.4
Nested [4] | 81.3
Dropout + CE [4] | 81.1
SELFIE [27] | 81.8
PLC [39] | 83.4
Nested + Co-Teaching [4] | 84.1
Ours | 85.6
Ours 2$\times n_{\theta}$ | 86.3
Table 4: Test accuracy (%) of different methods on Animal-10N. Baselines
results are presented with Nested Dropout [4]. Best single/ensemble inference
results are labelled with red/green.
Single | Methods | CE | Forward [22] | PTD-R-V [30] | ELR [18] | kMEIDTM [5] | Ours
---|---|---|---|---|---|---|---
Accuracy | 68.94 | 69.84 | 71.67 | 72.87 | 73.34 | 73.60
Ensemble | Methods | Co-Teaching [9] | Co-Teaching+ [36] | JoCoR [29] | CausalNL [34] | DivideMix [16] | Ours 2$\times n_{\theta}$
Accuracy | 69.21 | 59.3 | 70.3 | 72.24 | 74.60 | 74.43
Table 5: Test accuracy (%) of different methods on Clothing1M. Best
single/ensemble inference results are labelled with red/green.
Real-world Noisy-label Datasets. In Tab. 3, we present results on Red Mini-
ImageNet [12]. Our method achieves SOTA results for all noise rates with 4% to
8% improvements in single model inference and 7% to 10% in ensemble inference.
The improvement is significant compared with FaMUS [33] with a gap of more
than 6%. Compared with DivideMix [16], our method achieves between 6% and 10%
improvements. In Tab. 4, we present the results for Animal 10N [27], where the
previous SOTA method was Nested Dropout + Co-Teaching [4], which achieves
84.1% accuracy. Our method achieves 85.6% accuracy, which is 1.5% higher than
the previous SOTA. Additionally, our ensemble version achieves 86.3% accuracy,
a further 0.7% improvement over our single-model inference, yielding a new
SOTA result. In Tab. 5, we show our result on Clothing1M [32]. In the single
model setup, our model outperforms all previous SOTA methods. In the ensemble
inference setup, our model shows comparable performance with the SOTA method
DivideMix [16] and outperforms all other methods. Compared with other methods
based on prediction disagreement [9, 36, 29], our model improves by at least
3%. The performance on these three real-world datasets indicates the
superiority of our proposed AsyCo.
## 5 Ablation Study
For the ablation study, we first visualise the training losses of subsets from
Tab. 1 that are used by our multi-view consensus approach. We also compare the
accuracy of GMM selected clean samples and our multi-view selected samples.
Then we test alternative approaches for multi-view sample selection and re-
labelling. We perform all ablation experiments on the instance-dependent
CIFAR10/100 [30].
Figure 3: (a) and (c) show sample-loss histograms for the subsets in Tab. 1 on
CIFAR100 with 0.2 and 0.5 instance-dependent noise after warmup; the vertical
dotted line is the GMM threshold. (b) and (d) show the accuracy of the clean
set selected by the GMM and by our multi-view strategy, as well as how often
the hidden clean label lies within the top-ranked predictions of $r_{\phi}(.)$,
with and without multi-view re-labelling.
Model | Ablation | CIFAR10 | CIFAR100
---|---|---|---
0.2 | 0.3 | 0.4 | 0.5 | 0.2 | 0.3 | 0.4 | 0.5
$n_{\theta}$ | $\mathbf{w}_{i}=0$ if $(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathrm{RY}$ | 93.28 | 93.85 | 92.54 | 82.60 | 73.58 | 71.51 | 65.51 | 56.65
$\mathbf{w}_{i}=0$ if $(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathrm{U}$ | 95.71 | 94.88 | 94.34 | 91.60 | 75.10 | 72.64 | 67.42 | 57.55
$\mathbf{w}_{i}=+1$ if $(\mathbf{x}_{i},\tilde{\mathbf{y}}_{i})\in\mathrm{U}$ | 95.20 | 95.14 | 94.72 | 90.27 | 75.34 | 73.21 | 66.09 | 55.95
Small-loss subsets | 92.37 | 91.80 | 90.93 | 78.53 | 70.10 | 69.52 | 64.69 | 56.35
$r_{\phi}$ | CE | 95.22 | 94.83 | 83.48 | 64.96 | 73.33 | 69.29 | 63.82 | 54.83
Frozen after warmup | 91.19 | 88.97 | 84.72 | 67.57 | 68.73 | 65.36 | 58.88 | 48.13
$\mathbf{\hat{y}_{i}}=\mathbf{\tilde{y}_{i}}$ | 95.42 | 94.69 | 90.53 | 84.95 | 74.43 | 71.75 | 62.25 | 53.69
$\mathbf{\hat{y}_{i}}=\mathbf{\tilde{y}^{(n)}_{i}}$ | 94.29 | 94.23 | 94.13 | 93.67 | 74.55 | 73.71 | 68.21 | 57.84
AsyCo original result: | 96.00 | 95.82 | 95.01 | 94.13 | 76.02 | 74.02 | 68.96 | 60.35
Table 6: Ablation study for the classification net $n_{\theta}$ and reference
net $r_{\phi}$.
Fig. 3(a) and Fig. 3(c) show the loss histograms after warmup for each subset
in Tab. 1. To compare with small-loss sample selection approaches, we adopt
the sample-selection approach by DivideMix [16] that is based on a Gaussian
Mixture Model (GMM) to divide the training set into clean and noisy subsets
(the vertical black dotted line is the threshold estimated by DivideMix).
These graphs show that the subsets’ loss histograms are relatively consistent
in different noise rates. Specifically, $\mathrm{C}$ always has the smallest
loss values among all subsets, which shows that our multi-view sample
selection is able to confidently extract clean samples. We also observe that
$\mathrm{NY}$ has small loss values in both graphs. However, using
$\mathrm{NY}$ as clean set does not produce promising performance, as shown in
Tab. 6, row ’Small-loss subsets’, which represents the use of almost all
samples in C and NY as clean samples (since they are on the left-hand side of
the GMM threshold). This indicates that the small-loss samples in
$\mathrm{NY}$ are likely to contain overfitted noisy-label samples, whereas
our multi-view sample selection successfully avoids selecting these samples.
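For reference, the GMM-based small-loss baseline used in this comparison can be sketched as follows, following DivideMix's approach of fitting a two-component mixture to per-sample losses; the 0.5 posterior threshold is an assumption.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_clean_mask(losses, threshold=0.5):
    """Fit a 2-component 1-D GMM on per-sample losses and mark samples with
    high posterior probability under the low-mean component as clean."""
    losses = np.asarray(losses, dtype=np.float64).reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(losses)
    clean_comp = int(np.argmin(gmm.means_.ravel()))
    return gmm.predict_proba(losses)[:, clean_comp] > threshold
```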
In Fig. 3(b) and Fig. 3(d), we show the accuracy of the clean set selected by
the GMM-based small-loss strategy of DivideMix and by our multi-view consensus
during the training stages. We observe that multi-view selection performs
consistently better than GMM in both graphs. We also validate the accuracy of
the hidden clean label produced by the top ranked predictions of $r_{\phi}(.)$
by comparing the re-labelling produced by Eq. (6) against no re-labelling
(i.e., training $r_{\phi}(.)$ with the original training labels). Our
multi-view re-labelling consistently improves the label accuracy over time,
which indicates the effectiveness of our method.
Tab. 6 shows a study on the selection of different subsets from Tab. 1 for the
sample-selection when training the classification net $n_{\theta}(.)$. First,
we test the importance of classifying the samples in $\mathrm{RY}$ as clean
for training $n_{\theta}(.)$ by, instead, treating these samples as noisy in
Eq. (5) (i.e., by setting $\mathbf{w}_{i}=0$). This new sample selection
causes a large drop in performance for all cases, which suggests that
$\mathrm{RY}$ contains informative samples that are helpful for training
$n_{\theta}(.)$. Second, we test whether using the unmatched samples in
$\mathrm{U}$ can improve model training, where we include them as clean or
noisy samples by setting $\mathbf{w}_{i}=+1,0$, respectively. Both studies
lead to worse results compared to the original AsyCo that discards
$\mathrm{U}$ samples (see last row). Despite this result, we also notice that
in low noise rates (0.2, 0.3), treating $\mathrm{U}$ as clean leads to
slightly better accuracy than treating $\mathrm{U}$ as noisy. These results
suggest that the high uncertainty and lack of view agreements by the samples
in $\mathrm{U}$ lead to poor supervisory training signal, which means that
discarding these samples is currently the best option. Finally, the histograms
of Fig. 3 indicate that $\mathrm{NY}$ also contains small-loss samples.
Therefore, we make the traditional small-loss assumption to train our AsyCo
and use the subsets $\mathrm{C}$ and $\mathrm{NY}$ as clean and treat the
other subsets as noisy. As shown in the “Small-loss subsets” row of Tab. 6, the
accuracy is substantially lower, which suggests that the small-loss samples
may contain overfitted noisy-label samples.
We analyse the training of $r_{\phi}(.)$ with different training losses and
re-labelling strategies in Tab. 6. We first study how the multi-label training
loss provided by the BCE loss helps mitigate label noise by training our
reference net $r_{\phi}(.)$ with the CE loss $\ell_{CE}(.)$ in Eq. (1) and
(7), while keeping the multi-view sample selection and re-labelling strategies
unchanged. We observe that training $r_{\phi}(.)$ with $\ell_{CE}(.)$
leads to a significant drop in accuracy in most cases: for CIFAR10 with
low noise rate (20% and 30%), $\ell_{CE}(.)$ maintains the accuracy of
$\ell_{BCE}(.)$, but for larger noise rates, such as 40% and 50%,
$\ell_{CE}(.)$ is not competitive with $\ell_{BCE}(.)$ because it reduces the
prediction disagreements between $n_{\theta}(.)$ and $r_{\phi}(.)$,
facilitating the overfitting to the same noisy-label samples by both models.
For CIFAR100, $\ell_{CE}(.)$ leads to worse results than $\ell_{BCE}(.)$ for
all cases. These results suggest that to effectively co-teach two models with
prediction disagreement, the use of different training strategies is an
important component. Next, we study a training, where $r_{\phi}(.)$ is frozen
after warmup, but we still train $n_{\theta}(.)$. The result drops
significantly which indicates that $r_{\phi}(.)$ needs to be trained in
conjunction with $n_{\theta}(.)$ to achieve reasonable performance. We study
different re-labelling strategies by first setting
$\hat{\mathbf{y}}_{i}=\tilde{\mathbf{y}}_{i}$ for training $r_{\phi}(.)$, which
leads to comparable results for low noise rates but worse results for high
noise rates, suggesting that training with $\tilde{\mathbf{y}}$ alone is
not enough to achieve good performance. Finally, by setting
$\hat{\mathbf{y}}_{i}=\mathbf{\tilde{y}}^{(n)}$, we obtain strong results that
are nonetheless slightly worse than our proposed re-labelling from Eq. (6).
## 6 Conclusion
In this work, we introduced a new noisy label learning method called AsyCo.
Unlike previous SOTA noisy label learning methods that train two models with
the same strategy and select small-loss samples, AsyCo explores two different
training strategies and uses multi-view consensus for sample selection. We show
in experiments that AsyCo outperforms previous methods in both synthetic and
real-world benchmarks. In the ablation study, we explore various subset
selection strategies for sample selection and re-labelling, which show the
importance of our design decisions. For future work, we will explore lighter
models for the reference net as only rank prediction is required. We will also
explore out-of-distribution (OOD) samples in noisy label learning because our
method currently assumes all samples are in-distribution.
## References
* [1] Eric Arazo, Diego Ortego, Paul Albert, Noel O’Connor, and Kevin McGuinness. Unsupervised label noise modeling and loss correction. In International conference on machine learning, pages 312–321. PMLR, 2019.
* [2] David Berthelot, Nicholas Carlini, Ian Goodfellow, Nicolas Papernot, Avital Oliver, and Colin A Raffel. Mixmatch: A holistic approach to semi-supervised learning. Advances in neural information processing systems, 32, 2019.
* [3] Avrim Blum and Tom Mitchell. Combining labeled and unlabeled data with co-training. In Proceedings of the eleventh annual conference on Computational learning theory, pages 92–100, 1998.
* [4] Yingyi Chen, Xi Shen, Shell Xu Hu, and Johan AK Suykens. Boosting co-teaching with compression regularization for label noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2688–2692, 2021.
* [5] De Cheng, Tongliang Liu, Yixiong Ning, Nannan Wang, Bo Han, Gang Niu, Xinbo Gao, and Masashi Sugiyama. Instance-dependent label-noise learning with manifold-regularized transition matrix estimation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 16630–16639, 2022.
* [6] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009.
* [7] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.
* [8] Arpit Garg, Cuong Nguyen, Rafael Felix, Thanh-Toan Do, and Gustavo Carneiro. Instance-dependent noisy label learning via graphical modelling. arXiv preprint arXiv:2209.00906, 2022.
* [9] Bo Han, Quanming Yao, Xingrui Yu, Gang Niu, Miao Xu, Weihua Hu, Ivor Tsang, and Masashi Sugiyama. Co-teaching: Robust training of deep neural networks with extremely noisy labels. Advances in neural information processing systems, 31, 2018.
* [10] Zongbo Han, Changqing Zhang, Huazhu Fu, and Joey Tianyi Zhou. Trusted multi-view classification. arXiv preprint arXiv:2102.02051, 2021.
* [11] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learningfor image recognition. ComputerScience, 2015.
* [12] Lu Jiang, Di Huang, Mason Liu, and Weilong Yang. Beyond synthetic noise: Deep learning on controlled noisy labels. In International Conference on Machine Learning, pages 4804–4815. PMLR, 2020.
* [13] Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, and Li Fei-Fei. Mentornet: Learning data-driven curriculum for very deep neural networks on corrupted labels. In International conference on machine learning, pages 2304–2313. PMLR, 2018.
* [14] Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009\.
* [15] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep convolutional neural networks. Communications of the ACM, 60(6):84–90, 2017.
* [16] Junnan Li, Richard Socher, and Steven CH Hoi. Dividemix: Learning with noisy labels as semi-supervised learning. arXiv preprint arXiv:2002.07394, 2020.
* [17] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I Sánchez. A survey on deep learning in medical image analysis. Medical image analysis, 42:60–88, 2017.
* [18] Sheng Liu, Jonathan Niles-Weed, Narges Razavian, and Carlos Fernandez-Granda. Early-learning regularization prevents memorization of noisy labels. Advances in neural information processing systems, 33:20331–20342, 2020.
* [19] Tongliang Liu and Dacheng Tao. Classification with noisy labels by importance reweighting. IEEE Transactions on pattern analysis and machine intelligence, 38(3):447–461, 2015.
* [20] Eran Malach and Shai Shalev-Shwartz. Decoupling” when to update” from” how to update”. Advances in neural information processing systems, 30, 2017.
* [21] Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. Pytorch: An imperative style, high-performance deep learning library. Advances in neural information processing systems, 32, 2019.
* [22] Giorgio Patrini, Alessandro Rozza, Aditya Krishna Menon, Richard Nock, and Lizhen Qu. Making deep neural networks robust to label noise: A loss correction approach. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1944–1952, 2017.
* [23] Tal Ridnik, Emanuel Ben-Baruch, Nadav Zamir, Asaf Noy, Itamar Friedman, Matan Protter, and Lihi Zelnik-Manor. Asymmetric loss for multi-label classification. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 82–91, 2021.
* [24] Min Shi, Yufei Tang, Xingquan Zhu, and Jianxun Liu. Multi-label graph convolutional network representation learning. IEEE Transactions on Big Data, 2020.
* [25] Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
* [26] Vikas Sindhwani, Partha Niyogi, and Mikhail Belkin. A co-regularization approach to semi-supervised learning with multiple views. In Proceedings of ICML workshop on learning with multiple views, volume 2005, pages 74–79. Citeseer, 2005.
* [27] Hwanjun Song, Minseok Kim, and Jae-Gil Lee. Selfie: Refurbishing unclean samples for robust deep learning. In International Conference on Machine Learning, pages 5907–5915. PMLR, 2019.
* [28] Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, Mohammadhadi Bagheri, and Ronald M Summers. Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2097–2106, 2017.
* [29] Hongxin Wei, Lei Feng, Xiangyu Chen, and Bo An. Combating noisy labels by agreement: A joint training method with co-regularization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 13726–13735, 2020.
* [30] Xiaobo Xia, Tongliang Liu, Bo Han, Nannan Wang, Mingming Gong, Haifeng Liu, Gang Niu, Dacheng Tao, and Masashi Sugiyama. Part-dependent label noise: Towards instance-dependent label noise. Advances in Neural Information Processing Systems, 33:7597–7610, 2020.
* [31] Xiaobo Xia, Tongliang Liu, Nannan Wang, Bo Han, Chen Gong, Gang Niu, and Masashi Sugiyama. Are anchor points really indispensable in label-noise learning? Advances in Neural Information Processing Systems, 32, 2019.
* [32] Tong Xiao, Tian Xia, Yi Yang, Chang Huang, and Xiaogang Wang. Learning from massive noisy labeled data for image classification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2691–2699, 2015.
* [33] Youjiang Xu, Linchao Zhu, Lu Jiang, and Yi Yang. Faster meta update strategy for noise-robust deep learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 144–153, 2021.
* [34] Yu Yao, Tongliang Liu, Mingming Gong, Bo Han, Gang Niu, and Kun Zhang. Instance-dependent label-noise learning under a structural causal model. Advances in Neural Information Processing Systems, 34:4409–4420, 2021.
* [35] Tom Young, Devamanyu Hazarika, Soujanya Poria, and Erik Cambria. Recent trends in deep learning based natural language processing. ieee Computational intelligenCe magazine, 13(3):55–75, 2018.
* [36] Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor Tsang, and Masashi Sugiyama. How does disagreement help generalization against label corruption? In International Conference on Machine Learning, pages 7164–7173. PMLR, 2019.
* [37] Chiyuan Zhang, Samy Bengio, Moritz Hardt, Benjamin Recht, and Oriol Vinyals. Understanding deep learning (still) requires rethinking generalization. Communications of the ACM, 64(3):107–115, 2021.
* [38] Hongyi Zhang, Moustapha Cisse, Yann N Dauphin, and David Lopez-Paz. mixup: Beyond empirical risk minimization. arXiv preprint arXiv:1710.09412, 2017.
* [39] Yikai Zhang, Songzhu Zheng, Pengxiang Wu, Mayank Goswami, and Chao Chen. Learning with feature-dependent label noise: A progressive approach. arXiv preprint arXiv:2103.07756, 2021.
* [40] Zhaowei Zhu, Tongliang Liu, and Yang Liu. A second-order approach to learning with instance-dependent label noise. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10113–10123, 2021.
# Asphalt Concrete Characterization Using Digital Image Correlation: A
Systematic Review of Best Practices, Applications, and Future Vision
Siqi Wang, Ph.D., Zehui Zhu, Ph.D., Tao Ma, Ph.D., Jianwei Fan, Ph.D.
###### Abstract
Digital Image Correlation (DIC) is an optical technique that measures
displacement and strain by tracking pattern movement in a sequence of captured
images during testing. DIC has gained recognition in asphalt pavement
engineering since the early 2000s. However, users often perceive the DIC
technique as an out-of-box tool and lack a thorough understanding of its
operational and measurement principles. This article presents a state-of-the-art
review of DIC as a crucial tool for laboratory testing of asphalt concrete
(AC), primarily focusing on the widely utilized 2D-DIC and 3D-DIC techniques.
To address frequently asked questions from users, the review thoroughly
examines the optimal methods for preparing speckle patterns, configuring
single-camera or dual-camera imaging systems, conducting DIC analyses, and
exploring various applications. Furthermore, emerging DIC methodologies such
as Digital Volume Correlation and deep-learning-based DIC are introduced,
highlighting their potential for future applications in pavement engineering.
The article also provides a comprehensive and reliable flowchart for
implementing DIC in AC characterization. Finally, critical directions for
future research are presented.
###### keywords:
Asphalt concrete, Digital image correlation, Fracture mechanics, Deep learning, Digital volume correlation
††journal: Journal of Testing and Evaluation
[inst1]organization=Department of Road Engineering, School of Transportation,
Southeast University,addressline=Jiangning District, city=Nanjing,
state=Jiangsu, postcode=211189, country=China
[inst2]organization=Department of Civil and Environmental Engineering,
University of Illinois Urbana-Champaign,addressline=205 North Mathews Avenue,
city=Urbana, state=IL, postcode=61801, country=United States
††footnotetext: Accepted for publication in Journal of Testing and Evaluation.
DOI: 10.1520/JTE20230485.
## 1 Introduction
Digital Image Correlation (DIC) is an optical-based method used for measuring
displacements and strains in various materials. DIC functions by tracing
patterns across a sequence of surface images of the specimen during testing
[1]. This is achieved using subset-based matching, wherein gray value
correspondences are extracted by identifying their resemblances [2, 3]. In
1982, Peters and Ranson [4] introduced the concept of extracting local surface
deformations from single-camera images of planar specimens. A mathematical
framework was proposed to convert digitized 2-D images into full-field
displacement measurements, which is now known as two-dimensional digital image
correlation (2D-DIC). Professor Sutton further contributed to the field by
exploring implementation algorithms and applications [5, 6, 7]. However, it
became evident in the mid-1980s that 2D-DIC was limited to flat specimens and
single-plane deformations, which did not align with the requirements of most
engineering studies. Consequently, the necessity for stereovision systems
capable of capturing full-field three-dimensional displacement measurements on
surfaces emerged. In the early 1990s, Professors Chao and Sutton developed the
3D-DIC stereovision system and successfully conducted experiments using it [8,
9].
The accuracy and practicality of DIC have garnered attention within the
asphalt pavement engineering community since the early 2000s. In 2002, Seo et
al. [10] pioneered the application of 2D-DIC to analyze the stress-strain
behavior of the fracture process zone in monotonic and cyclic tests. Since
then, DIC has become a prominent tool for evaluating the material properties
of AC, validating experimental procedures, and verifying theoretical models
[11, 12, 13, 14, 15, 16, 17]. Figure 1 illustrates the number of scientific
papers retrieved from the Web of Science (Science Citation Index Expanded) by
using the search term _digital image correlation asphalt_ , covering the
period from 2002 to 2023. The increasing number of published articles
demonstrates the growing adoption of DIC in characterizing AC, particularly
after 2014. It is important to note that 2D-DIC has received considerably more
attention than 3D-DIC, primarily due to its simpler implementation (e.g.,
single-camera setup without the requirement of stereo camera calibration)
[18]. The dominance of 2D-DIC is evident as it accounts for over 95% of the
published articles in this field. Remarkably, the initial publication
employing 3D-DIC emerged as recently as 2017 [19].
Figure 1: Number of articles published using DIC in AC characterization.
Vendors commonly offer integrated software that enables users to obtain
displacement and strain measurements. Consequently, users often perceive the
DIC technique as an out-of-box tool and lack a thorough understanding of its
operational and measurement principles. However, the accuracy of displacement
and strain measurements obtained through DIC is significantly influenced by
the specific implementation details employed [1]. Common inquiries from users
encompass various aspects, such as the optimal preparation of the specimen speckle pattern and imaging system setup to attain the highest accuracy, approaches for
assessing the precision of the DIC system, strategies for selecting user
inputs in DIC analysis, understanding the underlying algorithms used by the
DIC technique, and methods for post-processing and interpolating the measured
displacement or strain maps to characterize AC. Hence, performing a thorough
review of DIC can contribute to bolstering confidence in its usage and
fostering standardization within the pavement engineering community.
This article provides a comprehensive and in-depth review of DIC as a crucial
tool for laboratory testing of AC. The primary focus of this study centers
around the widely employed 2D-DIC and 3D-DIC techniques. The article
thoroughly examines the best practices pertaining to specimen speckle pattern
preparation, the configuration of single-camera or dual-camera imaging
systems, and the meticulous execution of DIC analyses. To enhance readers’
understanding of the utility of DIC in their own work, the article documents
experiences from over 100 publications spanning the past two decades, focusing
on applying DIC-measured full-field displacement and strain maps for AC
characterization. Furthermore, the article explores emerging DIC
methodologies, including Digital Volume Correlation (DVC) and deep-learning-
based DIC, which have not yet been adopted by the pavement engineering
community but exhibit significant potential for future applications. Lastly,
the article provides a flowchart intended to serve as a comprehensive and
reliable reference for future DIC implementation in AC characterization.
## 2 Specimen Preparation
A crucial factor for accurate DIC measurements is the quality of the speckle
pattern in use. The arrangement of speckles must possess specific attributes,
as highlighted by Dong et al. in their comprehensive analysis [20]. The term
”high contrast” refers to the necessity of observing variations in grayscale
intensities, resulting in significant intensity gradients among the speckles.
The condition of ”randomness” requires the absence of any repetitive or
periodic elements within the speckle configuration. This absence is vital for
achieving comprehensive displacement mapping across the entire field of view.
”Isotropy” mandates that the speckle arrangement remains unbiased in all
directions. Both the speckles and the spaces between them should maintain
consistent dimensions across various orientations, as noted by Reu [21]. To
prevent aliasing artifacts, it’s advisable to use speckle granules sized
around three to five pixels or slightly larger [22]. Lastly, the concept of
”stability” entails the firm adherence of the speckle pattern to the sample’s
surface. This adherence ensures that the pattern deforms coherently with the
sample, even during significant translations and deformations. This stability
should be upheld without causing noticeable changes in both geometric
arrangement and grayscale characteristics.
The ongoing scientific discourse pertains to whether the natural texture of AC
specimens meets the specified requirements. Xing et al. [23] proposed that the
natural texture of asphalt mixtures is suitable for DIC analysis, particularly
in the case of AC with a small nominal maximum aggregate size (NMAS). This
perspective has received subsequent validation from various investigations,
including Guo et al. [24]. In contrast, Yuan et al. [25] conducted a
systematic comparison of DIC measurements using both artificially generated
patterns and natural textures. Their findings indicated that the natural
texture exhibited an error rate over three times greater than that of
artificially generated speckle patterns. Moreover, the broader DIC research
community tends to favor the generation of artificial speckle patterns to
enhance measurement reliability and consistency [1, 20]. Thus, the subsequent
discussion will center on best practices for creating artificial speckle
patterns.
### 2.1 Artificial Speckle Pattern Fabrication
Doll et al. [26] examined the performance of three commonly employed
methods in creating artificial speckle patterns for AC specimens. The
evaluation included a comprehensive analysis of the intensity histograms,
noise components, and measurement accuracy for simple motions involving rigid
body translation and rotation. The three speckle pattern fabrication
techniques assessed were as follows: 1) smoothing the sample surface using
sandpaper and an airbrush, followed by the application of several light layers
of white paint and a final layer of black paint [10]; 2) applying a black
paint layer to the surface and then generating the speckle pattern by spraying
white paint on top of it; and 3) applying a thin layer of plaster to the
specimen surface to fill in any holes (i.e., voids), followed by applying the
speckle pattern on the plaster layer.
The results showed that while there were no significant differences between
the two patterns where the paint was applied directly to the asphalt, the
pattern on plaster gave better results. However, using a coating can introduce
other drawbacks, as the material at the surface may not behave the same way as
the material underneath it, leading to inaccurate measurements. During
fracture experiments, the authors observed that the plaster coating did not
behave like the asphalt material, resulting in the peeling off of the plaster
and inaccurate measurements. Hence, the authors suggest using the direct
application of white and black paints without any coating, despite the
resulting increase in measurement error caused by surface irregularities, as
it enables accurate measurement of the material’s displacements [26].
LePage et al. [27] conducted an inquiry into whether superior results for DIC
are achieved with white-on-black or black-on-white painted speckle patterns.
Their findings identified the optimal speckle pattern composition as a white
paint basecoat overlaid with black speckles. The study highlighted that black
paint’s greater concealing capacity and white paint’s undertone due to
Rayleigh scattering contributed to heightened contrast of the black speckles.
Consequently, this increased contrast led to suggestions for reduced subset
sizes, narrower correlation confidence intervals, higher mean intensity
gradients, and ultimately more accurate displacement measurements (with a 24%
decrease in median normalized false displacement). As a result, the
recommended painted speckle pattern entails a thin white paint basecoat, equal
coverage for the basecoat and black speckles, and a speckle density of around
50% [27].
Regarding the fabrication of speckle patterns, both spray bottles and
airbrushes are commonly employed tools, as depicted in Figure 2. Nonetheless,
there exists a variation in the size of resulting speckle granules, with spray
bottle techniques generally yielding larger granules compared to airbrush
methods. To exemplify, in their work, Doll et al. [28] utilized an airbrush to
achieve a spatial resolution of 8 µm/pixel (i.e., the dimension of the pixel
size representing the area covered on the specimen), allowing differentiation
between strains in aggregate particles and the asphalt matrix regions between
particles. Conversely, for assessing far-field strain and displacement fields,
assuming homogeneous in-plane deformation, Doll et al. [29] utilized a spray
can, attaining an approximate spatial resolution of 40 µm/pixel.
The dimensions of the nozzle, the gap between the nozzle and the specimen
surface, air pressure, and solution viscosity are all pivotal factors that may
impact the distribution of speckle sizes as well as the standard deviation of
this distribution [30]. Conducting preliminary trials is advisable to ensure
that speckle granules measure around three to five pixels in size or
marginally larger [22].
Figure 2: Schematic illustration of speckle pattern fabrication using a spray
can or airbrush.
### 2.2 Speckle Pattern Quality Assessment
Different operators using various techniques to fabricate speckle patterns may
result in different qualities, necessitating a quality assessment. Two
categories of parameters are used to assess speckle patterns: local and
global. Local parameters, such as subset entropy ($\delta$), sum of square of
subset intensity gradients (SSSIG), and mean subset fluctuation ($S_{f}$), are
designed to quantify individual subsets of the pattern and can assist with
selecting the optimal subset sizes. On the other hand, global parameters,
including mean intensity gradient (MIG) and Shannon entropy, quantify the
entire speckle pattern. SSSIG and MIG are the most cited local and global
metrics, respectively, owing to their solid theoretical foundations and
straightforward formulations [20, 31, 32, 33].
To calculate SSSIG, Equation 1 is used. The threshold of SSSIG can be
determined based on the desired level of accuracy. Further information can be
found in [31]. SSSIG is frequently employed to assist in selecting the optimal
subset size for DIC analysis, which will be discussed in a subsequent section.
It should be noted, however, that SSSIG cannot distinguish between random and
periodic speckle patterns as it only focuses on the local speckle pattern
within an individual subset.
$SSSIG=\sum_{i=1}^{N}\sum_{j=1}^{N}\left[f_{x,y}(\mathbf{x}_{ij})\right]^{2}$ (1)
where $f_{x,y}(\mathbf{x}_{ij})$ is the first derivative of the intensity gray value at pixel $\mathbf{x}_{ij}$ in the $x$- or $y$-direction, and $N$ is the subset size.
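To make Equation 1 concrete, the following minimal sketch evaluates SSSIG for a single grayscale subset, using central differences as the intensity derivative; the function name and the random-subset example are illustrative only:

```python
import numpy as np

def sssig(subset):
    """Sum of square of subset intensity gradients (Equation 1),
    evaluated separately for the x- and y-direction derivatives."""
    f = subset.astype(float)
    fy, fx = np.gradient(f)           # central-difference d/dy, d/dx
    return np.sum(fx ** 2), np.sum(fy ** 2)

# Example on a random speckle-like 31x31 subset (M = 15)
rng = np.random.default_rng(0)
sssig_x, sssig_y = sssig(rng.integers(0, 256, (31, 31)))
```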
Equation 2 outlines the mathematical formula for calculating MIG. It is
essential to note that a high value of MIG indicates a good speckle pattern. A
recent study by Zhu and Al-Qadi [17] found that a minimum MIG value of 25
produced a small displacement error.
$MIG=\frac{\sum_{i=1}^{W}\sum_{j=1}^{H}|\nabla f(\mathbf{x}_{ij})|}{W\times H}$ (2)
where $W$ and $H$ are the image width and height in pixels, respectively; $|\nabla f(\mathbf{x}_{ij})|=\sqrt{f_{x}^{2}(\mathbf{x}_{ij})+f_{y}^{2}(\mathbf{x}_{ij})}$; $f_{x}(\mathbf{x}_{ij})$ and $f_{y}(\mathbf{x}_{ij})$ are the intensity derivatives at pixel $\mathbf{x}_{ij}$ in the $x$- and $y$-direction, respectively.
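As an illustration, MIG reduces to a few lines of array code; this sketch assumes a grayscale image and central-difference derivatives, and the comment reflects the MIG threshold of 25 reported above:

```python
import numpy as np

def mean_intensity_gradient(image):
    """Mean intensity gradient (Equation 2) of a grayscale speckle image."""
    f = image.astype(float)
    fy, fx = np.gradient(f)            # intensity derivatives f_y, f_x
    return np.mean(np.hypot(fx, fy))   # mean of |grad f| over W x H pixels

# A pattern with MIG below ~25 likely needs to be resprayed [17]
```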
## 3 2D-DIC
The 2D-DIC method is a popular optical measurement technology due to its
simple setup, minimal environmental prerequisites, and extensive sensitivity
and resolution capabilities. It has become the primary DIC technology used in
asphalt concrete characterization. However, limitations of the method include
in-plane deformation measurement only, the need for a randomly distributed
gray intensity on the object surface, and reliance on imaging system quality
[34]. This section will discuss best practices for 2D-DIC imaging system
setup, algorithms, and applications in asphalt concrete characterization.
### 3.1 Imaging System
A frequently employed 2D-DIC setup comprises a camera, illumination system,
computer, and post-processing software. 2D-DIC requires high-quality images
and control of imaging parameters for accurate measurements.
First, to achieve optimal image quality and accurate measurement in 2D-DIC, it
is essential to determine the optimal camera-object distance by fitting the
Region-of-Interest (ROI) to the Field-of-View (FOV) as much as possible. The
pinhole camera model (Figure 3) can be used to calculate the optimal distance
between the camera and the object, given the selected lens.
Figure 3: Pinhole camera model.
For example, to measure the deformation of an AC specimen during a static
loading test, with a specimen size of approximately 50$\times$50 mm, a
standard 4 Megapixel camera with a sensor size of 2048$\times$2048 pixels and
a pixel size of 5 $\mu$m is used. The actual dimension is slightly increased
to ensure the object remains in view throughout the test. Using a 35 mm wide
lens, the optimal distance required to fit the ROI within the FOV can be
calculated (Equation 3):
$OD=\frac{wf}{S_{w}}+f=\frac{60\times 35}{2048\times\frac{5}{1000}}+35\approx 240\,\text{mm}$ (3)
where $OD$ is the optimal distance between the object and the sensor; $w$ is
the dimension of the AC specimen, in this case, its width; $S_{w}$ is the
corresponding dimension on the camera sensor, calculated as the number of
pixels multiplied by the size of a pixel on the sensor, which is typically
provided in the camera parameters; and $f$ is the focal length of the lens
[35].
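The calculation generalizes directly. A small helper (a sketch; the parameter names are illustrative) reproduces the worked example:

```python
def optimal_distance(w_mm, f_mm, n_pixels, pixel_size_um):
    """Optimal camera-object distance (Equation 3), returned in mm."""
    s_w = n_pixels * pixel_size_um / 1000.0   # sensor dimension in mm
    return w_mm * f_mm / s_w + f_mm

# Worked example above: 60 mm FOV, 35 mm lens, 2048 px, 5 um pixels
print(optimal_distance(60, 35, 2048, 5))      # ~240 mm
```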
Then, for precise 2D-DIC measurements, it’s essential for the camera’s CCD
sensor and the object’s surface to be aligned in parallel. Additionally, any
out-of-plane movement of the specimen during loading must be minimal, as
emphasized by Sutton et al. [36]. Therefore, careful adjustment and
positioning of the camera are essential, which can be challenging due to the
lack of proper tools.
A frequently employed method relies on the conventional computer vision camera
calibration process [37]. Initially, at least ten calibration images are
acquired using a standardized calibration plate. Subsequently, the calibration
plate is held against the specimen to ensure its parallel alignment with the
specimen. Next, a calibration procedure is executed, employing the final
captured image and readily accessible tools, such as the MATLAB Camera
Calibration Toolbox. Finally, iterative adjustments to the camera position are
made and the aforementioned steps are iteratively repeated until an acceptable
configuration is attained. It is essential to emphasize that this approach
yields an accuracy within a tolerance of approximately $\pm 2^{\circ}$ [38].
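A minimal sketch of this iterative check using OpenCV is given below; the chessboard geometry, square size, and file layout are assumptions for illustration, and the tilt of the final plate-on-specimen view is read from its extrinsic rotation:

```python
import glob

import cv2
import numpy as np

# A sketch of the iterative parallelism check. Assumptions (not from the
# paper): a 9x6 chessboard with 5 mm squares, calibration images under
# calib/, and the last image showing the plate held flat on the specimen.
pattern, square_mm = (9, 6), 5.0
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square_mm

obj_pts, img_pts, size = [], [], None
for fname in sorted(glob.glob("calib/*.png")):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = gray.shape[::-1]

_, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size,
                                               None, None)

# Tilt of the last (plate-on-specimen) view: the angle between the plate
# normal and the optical axis should approach 0 deg when parallel.
R, _ = cv2.Rodrigues(rvecs[-1])
tilt = np.degrees(np.arccos(abs(R[2, 2])))
print(f"tilt relative to sensor plane: {tilt:.2f} deg")
```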
Wittevrongel et al. [38] developed a high-precision rotation stage (Figure 4)
using stepper motors to control the $\phi$ and $\theta$ angles, achieving precise control with 200 steps per revolution. The $\psi$ angle is omitted, as rotational movements within the plane are permissible in 2D-DIC. The research findings demonstrate that the camera can be positioned accurately with a perpendicular precision of approximately $\pm 0.15^{\circ}$.
Figure 4: Mechanical camera positioning tool: (a) the camera’s placement in relation to the specimen’s surface; (b) diagram.
Lastly, obtaining high-quality images involves adjusting the aperture, sensor
sensitivity (ISO), and exposure time (shutter speed) to achieve sharp, well-
illuminated images with minimal noise [39, 35]. These three parameters are
commonly referred to as the “exposure triangle” and are determined by the
properties of the lens and sensor. The relationship between these parameters
is illustrated in Figure 5.
Figure 5: Exposure triangle.
The ISO, or sensor sensitivity, is a property of the camera sensor. Increasing
the ISO makes the sensor more sensitive to light and increases the image
noise. To minimize noise, the ISO value should be kept at its factory default
(usually ISO 100) and not changed.
The aperture, a property of the lens, controls the amount of light that enters
the camera. A larger aperture (smaller f-number) allows more light to enter but narrows the depth of field, which may cause the background to be blurred. For flat objects like
most AC specimens, positioned perpendicularly to the camera, narrow depths of
field are generally sufficient. However, more complex structures with curved
surfaces or different components may require a smaller aperture.
Moreover, to ensure a sharp image during a real experiment, the exposure time
must be short enough to freeze the motion of the object being photographed. This is
crucial for accurate measurements. Increasing sensor sensitivity is not
recommended to avoid adding noise. Therefore, the solution is to add
artificial light to brighten the scene.
### 3.2 Algorithm
#### 3.2.1 Fundamental Principles
DIC operates by monitoring pattern displacements in a series of images through
subset-based matching, which identifies gray value correlations by comparing
similarities. As depicted in Figure 7, when computing point $P$ displacements,
a square subset of pixels $((2M+1)\times(2M+1))$ is selected from the
reference image and matched with the deformed image. The subset size ($M$) is
a crucial parameter in DIC analysis as it directly affects measurement
accuracy. For dependable correlation analysis, the subset dimensions must
balance distinct intensity patterns and accurate approximation of deformations
using $1^{st}$ or $2^{nd}$-order subset shape functions. This balancing act
calls for a compromise between employing larger or smaller subset sizes, as
discussed by Pan et al. [31]. SSSIG, which assesses speckle pattern quality,
is commonly used to select the optimal subset size. Figure 6 illustrates a
flowchart outlining the process to identify the ideal subset size. This
involves commencing with a smaller subset size and progressively enlarging it
until the SSSIG surpasses the predefined threshold. The threshold can be
determined based on the desired level of accuracy following the procedures
presented in Pan et al. [31].
Figure 6: Flowchart of using SSSIG to select optimal subset size.
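A compact sketch of that loop, with the SSSIG evaluation inlined (the starting half-width and the cap are illustrative choices):

```python
import numpy as np

def sssig_xy(subset):
    fy, fx = np.gradient(subset.astype(float))
    return np.sum(fx ** 2), np.sum(fy ** 2)

def optimal_subset_size(image, x0, y0, threshold, m_start=5, m_max=60):
    """Grow the subset half-width M until SSSIG in both directions
    exceeds `threshold` (flowchart of Figure 6); the threshold follows
    from the desired accuracy per Pan et al. [31]."""
    for m in range(m_start, m_max + 1):
        sub = image[y0 - m:y0 + m + 1, x0 - m:x0 + m + 1]
        if min(sssig_xy(sub)) >= threshold:
            return 2 * m + 1          # subset size in pixels
    return 2 * m_max + 1              # fall back to the largest size tried
```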
In the reference image, it’s essential to designate an ROI that is
subsequently partitioned into equidistant grids. The computation of
displacements at each grid point facilitates the derivation of the
displacement field.
The matching is attained by searching for a correlation coefficient extremum.
Equation 4 lists commonly used correlation criteria. Compared to cross-
correlation (CC) and sum-of-squared differences (SSD), the zero-normalized
cross-correlation (ZNCC) and zero-normalized sum-of-squared differences
(ZNSSD) offer better performance against noise. They are also insensitive to
lighting fluctuations (e.g., offset and linear scale) [34].
Figure 7: Area-based matching.
$\begin{aligned}C_{SSD}&=\sum_{i=-M}^{M}\sum_{j=-M}^{M}\left[f(x_{i},y_{j})-g(x_{i}^{\prime},y_{j}^{\prime})\right]^{2}\\ C_{ZNSSD}&=\sum_{i=-M}^{M}\sum_{j=-M}^{M}\left[\frac{f(x_{i},y_{j})-f_{m}}{\sqrt{\sum_{i=-M}^{M}\sum_{j=-M}^{M}[f(x_{i},y_{j})-f_{m}]^{2}}}-\frac{g(x_{i}^{\prime},y_{j}^{\prime})-g_{m}}{\sqrt{\sum_{i=-M}^{M}\sum_{j=-M}^{M}[g(x_{i}^{\prime},y_{j}^{\prime})-g_{m}]^{2}}}\right]^{2}\\ C_{CC}&=\sum_{i=-M}^{M}\sum_{j=-M}^{M}f(x_{i},y_{j})g(x_{i}^{\prime},y_{j}^{\prime})\\ C_{ZNCC}&=\frac{\sum_{i=-M}^{M}\sum_{j=-M}^{M}[f(x_{i},y_{j})-f_{m}][g(x_{i}^{\prime},y_{j}^{\prime})-g_{m}]}{\sqrt{\sum_{i=-M}^{M}\sum_{j=-M}^{M}[f(x_{i},y_{j})-f_{m}]^{2}}\sqrt{\sum_{i=-M}^{M}\sum_{j=-M}^{M}[g(x_{i}^{\prime},y_{j}^{\prime})-g_{m}]^{2}}}\end{aligned}$ (4)
where $f(x_{i},y_{j})$ is gray value at $(x_{i},y_{j})$ in the reference
subset. $g(x_{i}^{\prime},y_{j}^{\prime})$ is gray value at
$(x_{i}^{\prime},y_{j}^{\prime})$ in the deformed subset. $f_{m}$ and $g_{m}$
are mean gray values of the reference and deformed subset, respectively.
A ZNSSD cost approaching zero indicates a favorable
match. ZNCC is directly related to ZNSSD. Equation 5 indicates that a
$C_{ZNCC}$ value of 1 signifies a perfect match, while a value of 0 denotes no
correlation.
$C_{ZNCC}=1-0.5C_{ZNSSD}$ (5)
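These criteria translate directly into array code. The sketch below assumes the reference and deformed subsets have already been sampled at the same size:

```python
import numpy as np

def zncc(f_sub, g_sub):
    """Zero-normalized cross-correlation (Equation 4):
    1 = perfect match, 0 = no correlation."""
    f = f_sub.astype(float) - f_sub.mean()
    g = g_sub.astype(float) - g_sub.mean()
    return np.sum(f * g) / np.sqrt(np.sum(f ** 2) * np.sum(g ** 2))

def znssd(f_sub, g_sub):
    """ZNSSD cost via the identity of Equation 5; approaches 0 for a
    good match."""
    return 2.0 * (1.0 - zncc(f_sub, g_sub))
```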
In Equation 4, the reference point $(x_{i},y_{j})$ is associated with the
deformed point $g(x_{i}^{\prime},y_{j}^{\prime})$ through a mapping function.
This function can be either $1^{st}$-order (as in Equation 6) or
$2^{nd}$-order (as in Equation 7). The $2^{nd}$-order function has the ability
to approximate more intricate displacements compared to the $1^{st}$-order
one.
$\begin{bmatrix}x_{i}^{\prime}\\ y_{j}^{\prime}\end{bmatrix}=\begin{bmatrix}x_{0}\\ y_{0}\end{bmatrix}+\begin{bmatrix}1+u_{x}&u_{y}&u\\ v_{x}&1+v_{y}&v\end{bmatrix}\begin{bmatrix}\Delta x\\ \Delta y\\ 1\end{bmatrix}$ (6)
$\begin{bmatrix}x_{i}^{\prime}\\ y_{j}^{\prime}\end{bmatrix}=\begin{bmatrix}x_{0}\\ y_{0}\end{bmatrix}+\begin{bmatrix}1+u_{x}&u_{y}&\frac{1}{2}u_{xx}&\frac{1}{2}u_{yy}&u_{xy}&u\\ v_{x}&1+v_{y}&\frac{1}{2}v_{xx}&\frac{1}{2}v_{yy}&v_{xy}&v\end{bmatrix}\begin{bmatrix}\Delta x\\ \Delta y\\ \Delta x^{2}\\ \Delta y^{2}\\ \Delta x\Delta y\\ 1\end{bmatrix}$ (7)
Here, $u$ and $v$ represent the horizontal and vertical displacement
components for the subset center $(x_{0},y_{0})$, respectively. The quantities
$\Delta x=x_{i}-x_{0}$ and $\Delta y=y_{j}-y_{0}$ are defined. Additionally,
$u_{x}$, $u_{y}$, $v_{x}$, and $v_{y}$ signify the components of
$1^{st}$-order displacement gradients. Furthermore, $u_{xx}$, $u_{yy}$,
$u_{xy}$, $v_{xx}$, $v_{yy}$, and $v_{xy}$ denote the components of
$2^{nd}$-order displacement gradients. This paper employs the notation
$\mathbf{p}$ to represent the desired displacement vector, which consists of
either 6 or 12 unknown parameters.
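As a concrete illustration of Equation 6, the sketch below warps a reference subset grid with a first-order shape function; the function name and the stretch example are illustrative:

```python
import numpy as np

def warp_first_order(p, x0, y0, dx, dy):
    """Map reference subset points into the deformed image with the
    first-order shape function of Equation 6.

    p = (u, v, u_x, u_y, v_x, v_y); dx, dy are offsets from the
    subset centre (x0, y0)."""
    u, v, ux, uy, vx, vy = p
    x_def = x0 + dx + u + ux * dx + uy * dy
    y_def = y0 + dy + v + vx * dx + vy * dy
    return x_def, y_def

# Example: pure 10% horizontal stretch of a 21x21 subset grid
dy, dx = np.mgrid[-10:11, -10:11]
xd, yd = warp_first_order((0, 0, 0.1, 0, 0, 0), 100, 100, dx, dy)
```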
Given the aforementioned definitions, it is evident that the computation of
$\mathbf{p}$ entails an optimization task involving a user-defined cost
function, such as Equation 4 and Equation 5. DIC employs the Newton–Raphson
(NR) iterative approach for optimization, as outlined in Equation 8.
$\mathbf{p}=\mathbf{p}_{0}-\frac{\nabla C(\mathbf{p}_{0})}{\nabla\nabla
C(\mathbf{p}_{0})}$ (8)
Here, $\mathbf{p}_{0}$ represents the initial estimation of the displacement
vector, while $\mathbf{p}$ denotes the subsequent iterative solution. The
symbol $\nabla C(\mathbf{p}_{0})$ corresponds to the $1^{st}$-order
derivatives of the cost function, and the term $\nabla\nabla
C(\mathbf{p}_{0})$ refers to the Hessian matrix [34].
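The pieces above can be assembled for a single subset as in the following sketch: the ZNSSD cost of Equation 4 is minimized over the six first-order parameters of Equation 6, with cubic-spline interpolation supplying subpixel gray values. A quasi-Newton routine stands in here for the Newton–Raphson scheme of Equation 8, so this is an illustration rather than a production DIC solver:

```python
import numpy as np
from scipy.ndimage import map_coordinates
from scipy.optimize import minimize

def register_subset(f_img, g_img, x0, y0, M, p0=None):
    """Find p = (u, v, u_x, u_y, v_x, v_y) for one subset by minimizing
    the ZNSSD cost (Equation 4) under the first-order shape function
    (Equation 6)."""
    dy, dx = np.mgrid[-M:M + 1, -M:M + 1]
    f = f_img[y0 - M:y0 + M + 1, x0 - M:x0 + M + 1].astype(float)
    fz = f - f.mean()
    fn = np.sqrt(np.sum(fz ** 2))

    def znssd(p):
        u, v, ux, uy, vx, vy = p
        xs = x0 + dx + u + ux * dx + uy * dy       # Equation 6
        ys = y0 + dy + v + vx * dx + vy * dy
        g = map_coordinates(g_img.astype(float), [ys, xs], order=3)
        gz = g - g.mean()
        gn = np.sqrt(np.sum(gz ** 2))
        return np.sum((fz / fn - gz / gn) ** 2)

    p0 = np.zeros(6) if p0 is None else p0
    return minimize(znssd, p0, method="BFGS").x
```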
#### 3.2.2 RG-DIC
The preceding section exclusively delineated the process of computing
$\mathbf{p}$ for an individual point. To achieve full-field displacement
measurement, the reliability-guided digital image correlation (RG-DIC)
technique is widely employed [40]. This approach has been integrated into
open-source solutions such as Ncorr [40, 41].
The process commences with the determination of an initial displacement vector
estimate for a user-defined reference point. To achieve this, one might employ
normalized cross-correlation or the scale-invariant feature transform (SIFT)
technique for an informed initial estimation. Following this, the algorithm
computes $\mathbf{p}_{seed}$ and its associated correlation coefficient.
Subsequently, the algorithm computes the displacement vectors and correlation
coefficients for the four adjacent points of the seed point, utilizing
$\mathbf{p}_{seed}$ as the initial approximation. These computed correlation
coefficients are incorporated into a priority queue. The subsequent step
involves extracting the highest-correlation point from the queue and utilizing
its corresponding $\mathbf{p}$ as the starting point to compute displacements
for its neighboring points, if they are yet to be computed. This process
iterates until the priority queue becomes empty, signifying the calculation of
all points within the ROI.
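The queue-driven ordering can be sketched with a binary heap. Here `compute(point, p0)` is a hypothetical stand-in for the per-point correlation step (for example, the subset registration sketched earlier), returning the point’s ZNCC value and solved vector $\mathbf{p}$:

```python
import heapq

def rg_dic(points, compute, seed):
    """Reliability-guided ordering: displacements spread outward from
    the seed, always continuing from the already-computed point with
    the highest correlation coefficient."""
    results = {}
    c0, p0 = compute(seed, None)
    results[seed] = p0
    heap = [(-c0, seed, p0)]                 # max-heap on ZNCC
    while heap:
        _, (i, j), p = heapq.heappop(heap)
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb in points and nb not in results:
                c, p_nb = compute(nb, p)     # neighbour seeded by p
                results[nb] = p_nb
                heapq.heappush(heap, (-c, nb, p_nb))
    return results
```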
Due to its incorporation of correlation analysis focused on points with the
highest correlation, the RG-DIC approach exhibits resilience to minor image
discontinuities. This attribute enhances its efficacy in analyzing images of
AC specimen surfaces, which often exhibit irregularities arising from factors
like air voids and diverse aggregate orientation.
The RG-DIC method may encounter challenges when dealing with substantial
discontinuities, such as cracks, within the deformed image. This issue
commonly arises during the analysis of deformed images obtained during the
post-peak load stage of AC testing. To mitigate the decorrelation problem, Zhu
and Al-Qadi [17] introduced the multi-seed incremental approach. In this
context, “multi-seed” entails manually placing seed points on all partitions
artificially created by the significant discontinuities, while “incremental
analysis” involves using an intermediate deformed image as an updated
reference image if the deformed image exhibits severe decorrelation with the
original reference image. When correctly implemented, the multi-seed
incremental RG-DIC analysis consistently attains high accuracy, even in the
presence of substantial discontinuities (such as cracks) in the deformed
image.
#### 3.2.3 Computation of Strains
In the realm of AC characterization, complete strain distributions frequently
hold greater significance and desirability compared to displacement fields.
However, strains are more challenging to resolve than displacement fields due
to their sensitivity to noise caused by differentiation [34, 41]. Thus, it is
necessary to smooth displacement fields before calculating strain fields.
An illustration of this concept is the strain window technique introduced by
Pan et al. [42], wherein displacement gradients and Green-Lagrangian strains
are computed through a least squares plane fit applied to a subset of
displacement information. Subsequently, the algorithm solves an overdetermined system of equations to ascertain the strains, offering flexibility in
adjusting the size of the subset window. More details can be found elsewhere
[41, 42, 43].
Once the parameters are solved, they are employed to compute $e_{xx}$, $e_{xy}$, and $e_{yy}$ as per Equation 9. This procedure is then
extended across the entire displacement field to derive the corresponding
strain field.
$\begin{aligned}e_{xx}&=\frac{1}{2}\left(2\frac{\partial u}{\partial x}+\left(\frac{\partial u}{\partial x}\right)^{2}+\left(\frac{\partial v}{\partial x}\right)^{2}\right)\\ e_{xy}&=\frac{1}{2}\left(\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}+\frac{\partial u}{\partial x}\frac{\partial u}{\partial y}+\frac{\partial v}{\partial x}\frac{\partial v}{\partial y}\right)\\ e_{yy}&=\frac{1}{2}\left(2\frac{\partial v}{\partial y}+\left(\frac{\partial u}{\partial y}\right)^{2}+\left(\frac{\partial v}{\partial y}\right)^{2}\right)\end{aligned}$ (9)
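A minimal sketch of that pipeline for one strain window follows: a least-squares plane fit supplies the displacement gradients, which then feed Equation 9 (a unit-spaced grid of DIC displacements is assumed):

```python
import numpy as np

def strains_from_window(u_win, v_win):
    """Plane fit over a displacement window (the strain window technique
    of Pan et al. [42]), then Green-Lagrangian strains via Equation 9.
    u_win, v_win are square arrays of DIC displacements on a unit grid."""
    n = u_win.shape[0]
    y, x = np.mgrid[:n, :n]
    A = np.column_stack([np.ones(n * n), x.ravel(), y.ravel()])
    # plane u = a0 + ux*x + uy*y (and likewise for v)
    _, ux, uy = np.linalg.lstsq(A, u_win.ravel(), rcond=None)[0]
    _, vx, vy = np.linalg.lstsq(A, v_win.ravel(), rcond=None)[0]
    exx = 0.5 * (2 * ux + ux ** 2 + vx ** 2)
    exy = 0.5 * (uy + vx + ux * uy + vx * vy)
    eyy = 0.5 * (2 * vy + uy ** 2 + vy ** 2)
    return exx, exy, eyy
```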
It is vital to emphasize that achieving reliable and precise full-field strain
estimation necessitates the careful selection of an appropriate local strain
calculation window size. For uniform deformation, a larger window size for
strain calculation is preferable. However, when dealing with non-uniform
deformation, the choice of strain calculation window size should be
deliberate, considering the interplay between strain accuracy and smoothness.
A small window might not effectively mitigate displacement noise, whereas an
overly large window might yield an impractical linear deformation
approximation within the strain calculation window [34, 42, 44].
#### 3.2.4 Software
The effective implementation of algorithms is crucial for 2D-DIC analysis.
Table 1 presents a compilation of presently accessible non-commercial 2D-DIC
software. It is essential to note that this list exclusively comprises
software with supporting peer-reviewed research papers, and there may be other
available choices.
Software | Authors | First Release | User Interface | Open Source | Free | Language | Citations (Oct 2023)
---|---|---|---|---|---|---|---
Ncorr | Blaber et al. [41] | 2015 | Yes | Yes | Yes | MATLAB | 1,629
DICe | Turner et al. [45] | 2015 | Yes | Yes | Yes | C++ | 12
$\mu$DIC | Olufsen et al. [46] | 2019 | No | Yes | Yes | Python | 36
OpenCorr | Jiang [47] | 2021 | No | Yes | Yes | C++ | 6
iCorrVision-2D | de Deus Filho et al. [48] | 2022 | Yes | Yes | Yes | Python | 7
Table 1: 2D-DIC Software.
### 3.3 Applications
The 2D-DIC technique is extensively used in the asphalt pavement field to
assess the material properties of AC. It is recognized as a practical and
robust method for quantifying the deformation of AC specimens in diverse
laboratory testing environments. A review of over 100 academic papers, with a
specific emphasis on publications from 2017 onwards, revealed that the semi-
circular bending (SCB) test, indirect tensile test (IDT), three-point bending
(3PB) test, and four-point bending (4PB) test were the most frequently
observed applications of the 2D-DIC technique. A limited number of studies
employed the single-edge notched bending (SENB) test, disk-shaped compact
tension (DCT) test, direct tension test (DTT), and triaxial repeated creep
test methodologies.
Table 2 summarizes the application of 2D-DIC in reviewed academic papers.
Broadly, the applications of 2D-DIC can be classified into three categories:
direct application of 2D-DIC-generated displacement or strain maps, derivation
of mechanistic parameters for AC material properties, and tracking of crack
propagation or damage evolution. The following sections provide detailed
discussions of these applications.
Type | Test | Applications | Articles
---|---|---|---
Fracture | SCB | Strain & Displacement | [49, 50, 51, 52, 19]
 | | Mechanistic Parameters | [53, 54, 55, 56, 16, 57, 28, 29]
 | | Crack Propagation | [53, 58, 59, 56, 54, 60, 55, 61, 62]
 | 3PB | Strain & Displacement | [63, 64]
 | | Crack Propagation | [65, 66, 67]
 | SENB | Strain & Displacement | [68]
 | | Crack Propagation | [69]
 | DCT | Strain & Displacement | [70, 19, 64]
Fatigue | 4PB | Mechanistic Parameters | [71]
 | | Crack Propagation | [72, 71, 73, 74, 75, 76, 77]
 | SCB | Strain & Displacement | [78, 50, 79, 80, 81, 82, 83]
 | | Crack Propagation | [84, 79, 61, 15, 83, 62]
 | Flexural fatigue test | Crack Propagation | [85]
Strength | IDT | Strain & Displacement | [86, 87, 23, 88, 89, 90]
 | | Damage Evolution | [56, 24, 91, 92]
Others | Pull-off adhesion test | Mechanistic Parameters | [93]
 | DTT | Strain & Displacement | [94, 95, 96]
 | | Damage Evolution | [95, 97, 98]
 | Freeze-thaw test | Strain & Displacement | [99]
 | Light healing test | Strain & Displacement | [100]
 | 2T3C HCA | Displacement | [101]
Table 2: Applications of 2D-DIC in AC laboratory tests.
#### 3.3.1 Direct Application of Strain/Displacement Fields
The predominant focus in the reviewed papers involves the direct application
of DIC-measured strain or displacement maps. These applications can be broadly
categorized into two groups based on measurement scope: global, involving the
entire displacement or strain fields of the specimen, and local, focused
solely on the area of interest.
In global applications, researchers analyze chosen displacement or strain
components (e.g., horizontal, vertical) at various time points during specimen
loading to gain qualitative insights into material mechanical properties. This
includes identifying high-strain zones on the specimen surface often
associated with cracks or damage zones, which will be elaborated upon in the
subsequent section dedicated to crack propagation measurement via DIC [102,
91, 56, 103, 12]. Another significant application involves employing DIC as a
validation tool for numerical models or displacement/strain sensors. DIC-
measured displacement or strain fields serve as the “ground truth” to validate
these models or sensors [104, 78, 105, 106, 107]. Additionally, DIC is
employed as a measurement tool for deformation assessment, such as quantifying
permanent deformations in repeated load uniaxial tests or cyclic uniaxial
compression tests and determining vertical deformations in AC specimens during
IDT [106, 108, 90].
In the context of local applications, researchers focus on specific regions
within DIC-derived displacement and strain maps. A prominent use case involves
the measurement of Crack Mouth Opening Displacement (CMOD) and the
investigation of aggregate crushing near the load point. CMOD serves to
characterize the displacement alteration perpendicular to the crack plane
within a fatigued-notched region of a specimen subjected to fracture toughness
testing [109]. CMOD measurements are taken along the loading axis or the
specimen’s surface, quantifying the difference between the initial and final
crack openings. Traditionally, this is done with a physical clip-on
displacement gauge attached to opposite sides of the crack mouth to generate a
load-CMOD curve for fracture energy calculation. The application of DIC for
measuring CMOD obviates the need for a physical gauge, replacing it with
virtual measurement points positioned on opposite sides of the specimen [64].
DIC-derived CMOD is a crucial metric for assessing AC’s fracture
characteristics and crack resistance. Specifically, fracture energy,
representing the energy required to generate a unit area of crack, is a
significant parameter in this context. It can be determined by calculating the
area under the load-CMOD curve and subsequently dividing it by the ligament
area, approximately equivalent to the product of the crack length and specimen
thickness [109, 64, 54, 103, 110, 111, 112].
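The post-processing itself is a short numerical integration. The sketch below assumes the load-CMOD samples and the ligament dimensions are already extracted, and keeps the unit conversions explicit:

```python
import numpy as np

def fracture_energy(load_kN, cmod_mm, ligament_mm, thickness_mm):
    """Fracture energy as the area under the load-CMOD curve divided by
    the ligament area (ligament length x specimen thickness)."""
    work = np.trapz(load_kN, cmod_mm)               # kN*mm = J
    area = ligament_mm * thickness_mm * 1e-6        # mm^2 -> m^2
    return work / area / 1000.0                     # kJ/m^2
```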
Another crucial local application involves the investigation of potential
aggregate crushing near the load point. This is of paramount importance due to
the significant energy dissipation associated with such crushing, potentially
resulting in an overestimation of energy contributions to fracture or fatigue
crack formation. Such overestimations are undesirable when employing energy-
based parameters for material property comparisons. For example, Doll [26]
performed SCB tests, wherein strain fields in the vicinity of the loading
point were visually compared at multiple time points throughout the loading
procedure. Their observations revealed no substantial (i.e., order of
magnitude) differences, confirming the absence of significant aggregate
crushing. A similar investigation can be found in the work of Yang et al. [80].
#### 3.3.2 Tracking Crack Propagation
Crack propagation monitoring in fracture or fatigue tests is a significant
application of DIC [113, 114, 115]. This application can be categorized into
four distinct approaches based on the underlying principles: visualization-
based, empirical-mechanistic-based, mechanistic-based, and computer-vision-
based.
Figure 8: Tracking crack propagation (a) visualization-based approach; (b)
strain thresholding approach; (c) deviation point assumption; (d) CTOD-based
mechanistic approach; (e) CrackPropNet.
In the visualization-based approach, researchers typically employ strain maps
to discern crack tips and boundaries [102, 91, 56, 103, 12]. High strains are
typically visually detected and categorized as zones of damage or cracks
(Figure 8 _(a)_). This method’s advantage lies in its simplicity, enabling
crack identification by a human expert without the need for subsequent
processing of DIC-derived strain fields. However, its drawback is its
subjective nature, leading to potential variations in crack propagation
assessment among different individuals. Additionally, domain expertise is
essential, as individuals lacking familiarity with the specific materials and
testing procedures may misinterpret crack identifications.
In the empirical mechanistic approach, researchers often make use of empirical
assumptions, such as strain thresholds or deviation points on relative
displacement curves, to define the initiation of cracking. For example,
Safavizadeh and Kim [75] employed a threshold of 9,000 $\mu\epsilon$ for
$e_{xx}$ and 6,000 $\mu\epsilon$ for $e_{yy}$ to detect vertical and
interfacial cracks in double-layer grid-reinforced asphalt concrete notched
beams under four-point bending fatigue loading (Figure 8 _(b)_). Buttlar et
al. [14] assumed that the deviation point on a relative displacement versus
number of cycles curve indicated failure at the layer interface in a double
shear test (Figure 8 _(c)_). The advantage of this approach is its minimal
post-processing requirements. However, it has the drawback of subjectivity and
a reliance on domain-specific knowledge. Furthermore, the empirical
assumptions are often challenging to validate or may remain unverifiable.
In the mechanics theory-based approach, researchers utilize fundamental
mechanics theory to identify cracks. For example, Zhu and Al-Qadi [17]
proposed employing the critical crack tip opening displacement (CTOD) to
define the onset of cleavage fracture (Figure 8 _(d)_). This proposed
threshold holds physical significance and can be readily determined from DIC
measurements. The advantage of this method lies in its reliance on well-
established fundamental theory, reduced dependence on user inputs, and a
higher likelihood of accurately representing actual cracks. However, it is
more complex to implement compared to previous approaches and is less amenable
to automation. It’s important to note that Zhu’s approach is specifically
applicable to fracture tests and has been validated only for mode I fracture.
Further research is recommended to develop methods suitable for fatigue tests
and other fracture modes.
In the context of low-level computer vision, researchers treat strain or
displacement maps as image data and apply classical computer vision techniques
such as thresholding, edge detection, and blob extraction to identify cracks
[116, 117, 118]. This approach primarily relies on detecting abrupt changes in
strain patterns, akin to sharp intensity changes in regular images. Its
advantages include reduced subjectivity compared to prior methods and the
potential for automation. However, a drawback lies in the necessity of using
thresholds, which lack physical meaning and cannot be explicitly validated for
their specific values. Another computer vision approach is based on optical
flow, which characterizes the apparent motion patterns of speckle patterns
[119]. Zhu and Al-Qadi [120] introduced CrackPropNet, a deep neural network
built upon optical flow principles (Figure 8 _(e)_). This network was trained
on a comprehensive image dataset encompassing various types of crack behavior
in AC. CrackPropNet takes a reference image and a deformed image as inputs,
producing a probability map representing crack edges as its output. Notably,
CrackPropNet achieves a high crack detection accuracy on AC specimen surfaces
while maintaining a rapid processing speed of 26 frames per second.
It is important to emphasize that after measuring crack propagation,
additional post-processing techniques can be employed to extract additional
insights. For instance, one can construct an R-curve, which plots the crack
growth resistance against the crack extension [109]. This R-curve-based
approach acknowledges the fact that the fracture resistance of AC may not
remain constant throughout the process of crack propagation [53, 121]. Crack
propagation measurement via DIC can also contribute to Paris’s law, a
prominent fatigue crack growth model that describes the connection between
crack growth rate and stress intensity factor for asphalt concrete, as
represented in Equation 10 [83, 102, 118].
$\frac{da}{dN}=A(\Delta K)^{n}$ (10)
where $a$ represents the crack length, $N$ is the loading cycle, and $A$ and
$n$ denote the material parameters in Paris’ law, while $\Delta K$ signifies
the stress intensity factor amplitude [122]. $A$ and $n$ find wide application
in various contexts, including the prediction of reflective cracking and the
design of asphalt overlays [123, 124].
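Given DIC-derived crack-growth data, $A$ and $n$ follow from a straight-line fit in log-log space; a minimal sketch (array names are illustrative):

```python
import numpy as np

def fit_paris_law(delta_k, dadn):
    """Fit A and n in da/dN = A * (dK)^n (Equation 10) by linear
    regression of log(da/dN) on log(dK)."""
    n, log_a = np.polyfit(np.log(delta_k), np.log(dadn), 1)
    return np.exp(log_a), n
```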
#### 3.3.3 Derivation of Mechanistic Parameters
DIC-measured displacement or strain maps facilitate the determination of
mechanistic parameters. These parameters can be categorized into two groups
based on their processing complexity: direct and secondary. Direct parameters,
such as CTOD and the fracture process zone (FPZ), require limited post-processing. In contrast, secondary parameters, including strain energy density, the stress intensity factor (SIF), and the J-integral, necessitate substantial
post-processing. Further elaboration on these parameters is provided in
subsequent sections.
Figure 9: Combining mechanistic theory and DIC (a) stress-strain curve; (b)
locate crack tip and select a pair of reference points for CTOD measurement;
(c) eFPZ; (d) line J-integral around a notch; (e) viscous and pseudo
displacement fields on a SCB specimen surface.
CTOD functions as a fracture mechanics parameter, particularly pertinent to
elastic-plastic materials, signifying the material’s resistance to crack
propagation through strain at the crack tip [125]. Physically measuring CTOD
is often challenging; nevertheless, it can be deduced using a conventional
plastic hinge model or a J-integral based model [126, 109, 127, 128].
Moreover, accurately measuring CTOD using DIC is widely recognized as
challenging due to the precise crack tip location and suitable reference point
selection difficulties [129]. Zhu and Al-Qadi [17] adopted a method proposed
by Vasco-Olmo et al. [129] for measuring CTOD using DIC-measured displacement
field data from a monotonic SCB test (Figure 9 _(b)_). This approach entails
two steps: first, locating the crack tip by plotting profiles of horizontal
displacement perpendicular to the crack plane and determining the crack tip
coordinates at the intersection of these profiles. Second, determining CTOD by
defining a pair of reference points after locating the crack tip. By plotting
relative displacements for various pairs of reference points, the appropriate
reference point can be identified by identifying a stable plateau region,
indicating the end of the strip-yield zone. It should be noted that this
approach has only been applied to mode I fracture tests.
The DIC-measured strain field enables the determination of the stress-strain
curve and the computation of strain energy density. The stress-strain curve
characterizes AC’s response to loading, offering information on its strength,
stiffness, ductility, and failure thresholds. Strain energy denotes the energy
stored within a deforming material, while strain energy density represents the energy stored per unit volume, corresponding to the area under the stress-strain curve. Strain energy density has been employed for fracture toughness
prediction, fatigue damage characterization, and assessment of non-fracture-
related energy dissipation [29, 130, 131].
Asphalt concrete is a viscoelastic material, demonstrating both instantaneous
elastic behavior and time-dependent viscous properties. This implies the
presence of energy dissipation within the material when subjected to loading.
Furthermore, AC’s modulus is not constant, introducing complexity in deriving
a stress field from the DIC-measured strain field. In accordance with
established viscoelastic theory, the constitutive response of viscoelastic
media can be expressed using the convolution Equation 11 [132].
$\sigma_{ij}(\xi)=\int_{0}^{\xi}E_{ijkl}(\xi-\xi^{\prime})\frac{\partial\epsilon_{kl}(\xi^{\prime})}{\partial\xi^{\prime}}d\xi^{\prime}$
(11)
where $\sigma_{ij}$ and $\epsilon_{ij}$ represent stress and strain tensor
components, $E_{ijkl}$ denotes the stiffness modulus components that depend on
both time and temperature, and $\xi$ is a dimensionless time reduction
parameter. $\xi$ is defined for thermo-rheologically simple materials as
$\xi=t/a_{T}$, with $a_{T}$ representing a shift factor derived from the
Williams–Landel–Ferry equation [133, 134].
In a scenario where strain is directly determined as a function of applied
load using DIC, the stress history can be determined by evaluating the
convolution integral described in Equation 11 at each load increment. To
employ Equation 11, it remains imperative to possess explicit knowledge of the
temperature and time-dependent modulus $E$, typically expressed as a Prony
series fit based on experimental data (Equation 12).
$E(t)=E_{e}+\sum_{n=1}^{N}E_{n}e^{-\frac{t}{\rho_{n}}}$ (12)
The equilibrium modulus, denoted as $E_{e}$, is a key parameter, while $E_{n}$
represents the Prony coefficients, and $\rho_{n}$ corresponds to the
relaxation times.
Following this, one can plot the stress-strain curve (Figure 9 _(a)_) and
compute the strain energy density using Equation 13.
$W=\int_{0}^{\epsilon}\sigma_{ij}d\epsilon_{ij}$ (13)
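A discrete sketch of Equations 11 through 13 for a uniaxial strain history is shown below, assuming the Prony coefficients are known and the time axis has already been mapped to reduced time $\xi$:

```python
import numpy as np

def relaxation_modulus(t, Ee, En, rho):
    """Prony-series relaxation modulus E(t) (Equation 12)."""
    t = np.atleast_1d(t).astype(float)[:, None]
    return Ee + np.sum(En * np.exp(-t / rho), axis=1)

def stress_history(xi, eps, Ee, En, rho):
    """Hereditary integral of Equation 11 (uniaxial) as a discrete sum:
    sigma(xi_k) ~ sum_j E(xi_k - xi_j) * d(eps_j)."""
    sig = np.zeros_like(eps, dtype=float)
    deps = np.diff(eps, prepend=eps[0])
    for k in range(len(xi)):
        sig[k] = np.sum(relaxation_modulus(xi[k] - xi[:k + 1], Ee, En, rho)
                        * deps[:k + 1])
    return sig

# Strain energy density (Equation 13): W = np.trapz(sig, eps)
```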
The DIC-measured strain field facilitates the study of the FPZ, a region near
the crack tip where material undergoes damage, even in the absence of complete
cracking. This damage may manifest as microcracks, void formation, significant
plastic deformation, or large-scale shearing (shear bands) [135]. For AC, a
strain-based approach inspired by Wu et al. [136]’s work on concrete is
considered effective. In this approach, the FPZ is defined as the zone where
strains surpass a specified threshold value known as the tensile strain
capacity, representing the maximum strain the material can endure before crack
formation [28]. As precise tensile strain capacity values for different AC
mixes are often unavailable, researchers commonly adopt a consistent, albeit
arbitrary, threshold for comparative purposes. Consequently, instead of
obtaining an absolute measurement of the FPZ extent for each mix, researchers
calculate an estimated Fracture Process Zone (eFPZ), allowing for meaningful
comparisons (Figure 9 _(c)_). Doll et al. [28] have proposed thresholds of
3000 $\mu\epsilon$ at 25$^{\circ}$C and 1500 $\mu\epsilon$ at $-12^{\circ}$C.
Furthermore, DIC measurements allow for the computation of classical fracture
parameters, including SIF and J-integral. In the context of linear elastic
fracture mechanics, SIF serves as a predictive tool for assessing stress
distributions near crack or notch tips induced by external loading or residual
stresses. This parameter is contingent upon specimen geometry and loading
conditions. For example, in mode I fracture, the onset of crack propagation is
posited to arise when the applied stress intensity factor, denoted as $K_{I}$,
surpasses a critical threshold known as fracture toughness ($K_{Ic}$) [109].
In experimental settings, DIC is employed to acquire displacement data,
assuming the material behaves elastically. Subsequently, a least squares
regression is executed using Equation 14 to calculate $K_{I}$. However, it is
important to note that asphalt material exhibits pronounced viscoelastic
behavior, in contrast to the assumptions underlying Equation 14, which
pertain to purely elastic materials. In cases of viscoelasticity, a similar
approach can be adopted with the utilization of pseudo displacements as
defined in Equation 15 [137] (Figure 9 _(e)_). A least squares regression is
then applied to these pseudo displacements using Equation 14, yielding
$K_{IR}$ and accounting for rigid body motion. It is essential to acknowledge
that the aforementioned procedure assumes a constant modulus for the asphalt
material, which is erroneous in situations where the material exhibits high
viscosity [26].
$\begin{bmatrix}u_{x}\\ u_{y}\end{bmatrix}=\frac{K_{I}}{2\mu}\sqrt{\frac{r}{2\pi}}\begin{bmatrix}\cos\left(\frac{\theta}{2}\right)\left[\kappa-1+2\sin^{2}\left(\frac{\theta}{2}\right)\right]\\ \sin\left(\frac{\theta}{2}\right)\left[\kappa+1-2\cos^{2}\left(\frac{\theta}{2}\right)\right]\end{bmatrix}+\begin{bmatrix}u_{x0}-\theta_{0}y\\ u_{y0}+\theta_{0}x\end{bmatrix}$ (14)
Here, $\nu$ represents the Poisson ratio, $\mu$ stands for the shear modulus, $u_{x0}$ and $u_{y0}$ pertain to rigid translation, and $\theta_{0}$ represents rigid rotation. For plane strain, $\kappa=3-4\nu$; for plane stress, $\kappa=\frac{3-\nu}{1+\nu}$.
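Because Equation 14 is linear in $K_{I}$, $u_{x0}$, $u_{y0}$, and $\theta_{0}$, the regression reduces to a single least-squares solve. The sketch below operates on crack-tip-centred DIC points for the elastic case; the same call applies unchanged to the pseudo displacements of Equation 15:

```python
import numpy as np

def fit_k1(x, y, ux, uy, mu, kappa):
    """Least-squares fit of K_I plus rigid-body terms to mode I DIC
    displacements (Equation 14). x, y, ux, uy are flat arrays of
    crack-tip-centred coordinates and measured displacements."""
    r = np.hypot(x, y)
    th = np.arctan2(y, x)
    gx = np.sqrt(r / (2 * np.pi)) / (2 * mu) * np.cos(th / 2) * (
        kappa - 1 + 2 * np.sin(th / 2) ** 2)
    gy = np.sqrt(r / (2 * np.pi)) / (2 * mu) * np.sin(th / 2) * (
        kappa + 1 - 2 * np.cos(th / 2) ** 2)
    one, zero = np.ones_like(x), np.zeros_like(x)
    # unknowns: [K_I, u_x0, u_y0, theta_0]
    A = np.block([[gx[:, None], one[:, None], zero[:, None], -y[:, None]],
                  [gy[:, None], zero[:, None], one[:, None], x[:, None]]])
    b = np.concatenate([ux, uy])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```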
$u_{i}^{R}=\frac{1}{E^{R}}\int_{0}^{t}E(t-t^{\prime})\frac{\partial
u_{i}}{\partial t^{\prime}}dt^{\prime}$ (15)
$E^{R}$ represents the reference modulus, which is typically selected as an
arbitrary value, often taken as the instantaneous modulus denoted by $E_{0}$,
while $E(t)$ signifies the relaxation function [138, 139].
The J-integral, unlike the SIF, is applicable to a broader range of material
behaviors, including linear elastic, non-linear elastic, and plastic materials
(under the condition of no unloading). It is valid for situations without body
forces and in the context of 2D deformation fields. Theoretically, three
primary methods exist for measuring the J-integral of a material. Firstly, the
J-integral can be determined by analyzing DIC-measured displacement and strain
fields (Figure 9 _(d)_). Secondly, it can be obtained through tests involving
multiple specimens with varied pre-crack (i.e., notch) lengths, while keeping
other test parameters controlled [140]. Thirdly, the J-integral can be
measured using a single specimen by directly measuring the crack length [141].
Since this section focuses on the application of DIC, the details of the first
approach will be presented. The J-integral exhibits path independence for
hyperelastic materials when subjected to monotonic loading conditions.
However, it loses this path-independence property if the material dissipates
energy within the bulk. Schapery [137] introduced an alternative formulation
known as the pseudo J-integral, which is suitable for viscoelastic, homogeneous materials. It is calculated along a line contour, which may pose numerical challenges when dealing with displacement derivatives from discrete data, such as those obtained from DIC measurements. An alternative expression (Equation
16) is derived using Green’s theorem (Divergence theorem) and is valid under
specific conditions, including the absence of thermal strain, body forces, and
traction along the crack faces [142, 26]. It is crucial to highlight that the
path independence of the J-integral relies on the assumption of material
homogeneity, whereas AC exhibits heterogeneity.
$J=\int_{A}\left[\sigma_{ij}u_{j,1}-W\delta_{1i}\right]q_{1,i}\,dA$ (16)
where $\sigma_{ij}$ is the stress component, $u_{j,1}$ is the displacement gradient in the crack direction $x_{1}$, and $W$ is the strain energy density of AC ($W=\int_{0}^{\epsilon}\sigma_{ij}d\epsilon_{ij}$). The expression involves the weighting parameter $q_{1}$, which takes a value of 1 along the inner contour and 0 along the outer contour [142].
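On a uniform DIC grid, Equation 16 becomes a weighted sum. The sketch below assumes the stress components, crack-direction displacement gradients, strain energy density field, and taper function $q_{1}$ are available as 2-D arrays:

```python
import numpy as np

def domain_j_integral(sxx, sxy, syy, dudx, dvdx, W, q1, h):
    """Discrete evaluation of Equation 16 on a grid of spacing h, with
    the crack aligned with x. q1 is 1 on the inner contour and decays
    to 0 on the outer one."""
    q1y, q1x = np.gradient(q1, h)                  # q_{1,i}
    term_x = (sxx * dudx + sxy * dvdx - W) * q1x   # i = x
    term_y = (sxy * dudx + syy * dvdx) * q1y       # i = y
    return np.sum(term_x + term_y) * h * h         # dA = h^2
```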
## 4 3D-DIC
The 2D-DIC method has been extensively studied in the pavement engineering
community for characterizing asphalt concrete. However, it has specific
requirements regarding specimen deformation, loading devices, and measuring
systems. In cases where the surface of the specimen is not planar or
experiences three-dimensional deformation following loading, the application
of the 2D-DIC method is unfeasible [34]. In response to these constraints, the
3D-DIC method, often referred to as stereo-DIC, has emerged as a solution [8].
This technique uses two synchronized cameras, or a single camera equipped with
a custom light-splitting apparatus, grounded in binocular stereo vision
principles. This section discusses best practices for 3D-DIC imaging system
setup, algorithms, and applications in asphalt concrete characterization.
### 4.1 Imaging System
A frequently employed 3D-DIC setup comprises two synchronized cameras, an
illumination setup, a computer, and post-processing software (see Figure
10). Notably, the recommendations and guidelines discussed in a previous
section regarding the setup of a 2D-DIC imaging system also apply to 3D-DIC.
In addition, this section presents additional techniques to effectively set up
a 3D-DIC imaging system.
Figure 10: 3D-DIC setup.
First, one essential requirement for 3D-DIC is the simultaneous acquisition of
stereo images. However, achieving precise synchronization with minimal delay
can be challenging. Generally, industrial cameras that support specific
digital interface standards, such as CoaXPress, which has a built-in
synchronization capability, are required to meet this demand [143].
Second, it is vital to determine the stereo angle ($\alpha$), which refers to
the angular difference between the two camera views (Figure 10). Typically,
narrower stereo angles (shorter baseline) enhance in-plane measurement
accuracy but increase uncertainty in out-of-plane measurements. In scenarios
emphasizing strain assessment, a narrower stereo angle is commonly favored.
However, for improved out-of-plane results, it is recommended to use a larger
stereo angle (longer baseline). When using a wide-angle lens, a stereo angle
of at least $25^{\circ}$ is advisable [144].
### 4.2 Algorithm
#### 4.2.1 Stereo Calibration
To ensure accurate and high-quality 3D-DIC measurements, precise calibration
of the two-camera unit used for simultaneous image capture is crucial. Camera
calibration furnishes essential parameters for triangulation, encompassing
intrinsic details (such as center point, lens distortion coefficients, and
individual camera focal lengths) and extrinsic factors (including translation
and rotation between the dual cameras). A widely adopted and reliable method
for calibration was proposed by Zhang [37], which employs a 2D planar pattern.
This technique is known for its high accuracy and ease of use, and it has
become the standard method for calibration in most 3D-DIC techniques [1].
The calibration process involves utilizing the captured stereo pairs of the 2D
planar pattern. It is important to consider that the distortion-free
projection of a 3D point
$\widetilde{\mathbf{P}}_{w}=[x_{1}^{w},x_{2}^{w},x_{3}^{w},1]^{T}$ onto the
camera sensor $\widetilde{\mathbf{p}}_{c}=[x_{1}^{c},x_{2}^{c},1]^{T}$ can be
represented by Equation 17.
$s^{c}\widetilde{\mathbf{p}}_{c}=\mathbf{K}_{c}[\mathbf{R}_{c}|\mathbf{t}_{c}]\widetilde{\mathbf{P}}_{w}=\begin{bmatrix}f_{1}^{c}&\gamma^{c}&c_{1}^{c}\\ 0&f_{2}^{c}&c_{2}^{c}\\ 0&0&1\end{bmatrix}[\mathbf{R}_{c}|\mathbf{t}_{c}]\widetilde{\mathbf{P}}_{w}$ (17)
Here, the subscript and superscript $c$ are used to indicate the camera
indices. The symbol $s^{c}$ signifies a scaling factor, while $\mathbf{R}_{c}$
and $\mathbf{t}_{c}$ represent the rotation matrix and translation vector.
These parameters facilitate the transformation from the world coordinate
system to the camera coordinate system. $\mathbf{K}_{c}$ stands for the
camera’s intrinsic matrix, wherein $f_{1}^{c}$ and $f_{2}^{c}$ denote the
focal lengths in pixels, $c_{1}^{c}$ and $c_{2}^{c}$ represent the pixel
coordinates of the principal point (optical center), and $\gamma^{c}$
signifies the skew factor.
To achieve accurate calibration, it is essential to consider the non-linear
optical distortion caused by the lens. A widely employed distortion model
comprises radial and tangential distortions. These distortion coefficients are
specific to each camera and are thus included as part of the camera’s
intrinsic parameters.
Subsequently, the stereo extrinsic parameters ($\mathbf{R}$ and $\mathbf{t}$)
can be ascertained by evaluating the transformation equations connecting each
camera, as described in Equations 18 and 19.
$\mathbf{R}=\mathbf{R}_{r}\mathbf{R}_{l}^{-1}$ (18)
$\mathbf{t}=\mathbf{t}_{r}-\mathbf{R}_{r}\mathbf{R}_{l}^{-1}\mathbf{t}_{l}$
(19)
where subscripts $l$ and $r$ represent the left and right cameras,
respectively.
It is worth noting that stereo camera calibration can be conveniently
performed using available tools such as Matlab’s Stereo Camera Calibrator.
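As an alternative to Matlab's tool, the same calibration can be sketched with OpenCV. The example below is illustrative and assumes checkerboard corners have already been detected for each stereo pair (e.g., with `cv2.findChessboardCorners`); `cv2.stereoCalibrate` then returns the stereo extrinsics of Equations 18 and 19 directly.

```python
import cv2

def stereo_calibrate(obj_pts, img_pts_l, img_pts_r, image_size):
    """Sketch of stereo calibration with OpenCV (inputs assumed pre-detected).

    obj_pts              : list of (N, 3) target corner coordinates per view
    img_pts_l, img_pts_r : lists of (N, 2) detected corners per camera
    image_size           : (width, height) of the images in pixels
    """
    # Calibrate each camera individually: intrinsics K and distortion d.
    _, k_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, image_size,
                                            None, None)
    _, k_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, image_size,
                                            None, None)
    # Stereo calibration recovers the extrinsics (R, t) between the cameras,
    # i.e., the quantities given by Equations 18 and 19.
    _, k_l, d_l, k_r, d_r, R, t, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_l, img_pts_r, k_l, d_l, k_r, d_r, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return k_l, d_l, k_r, d_r, R, t
```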
#### 4.2.2 Stereo Correlation
The foundation of 3D-DIC lies in correlation algorithms to establish
correspondence between points within left and right images. This correlation
analysis comprises two key stages: stereo matching and temporal matching (or
tracking). Stereo matching’s primary goal is precise alignment of identical
physical points present in the left and right camera images. Meanwhile,
temporal matching tracks these identical points across successive images
captured by the same camera at different instances or conditions. For temporal
matching, the established subset-based 2D-DIC algorithm can be employed.
Stereo matching is notably more intricate due to substantial perspective
distortion between images captured by distinct cameras, rendering it the most
challenging aspect of stereo vision measurement. To guarantee accurate and
efficient 3D deformation measurements, crucial factors such as matching
strategy, correlation algorithm, shape function, and initial estimation need
careful consideration within the context of stereo matching [1].
In stereo matching, the non-linear perspective projection frequently leads to
substantial inaccuracies when employing first-order shape functions,
especially when dealing with large subset sizes and considerable stereo
angles. To tackle this issue, adopting $2^{nd}$-order shape functions in
stereo matching is advisable, as it can enhance accuracy. For instance, Gao et
al. [145] introduced the inverse compositional Gauss-Newton (IC-GN2)
algorithm, employing $2^{nd}$-order shape functions. Concerning temporal
matching, the IC-GN algorithm employing first-order shape functions is
typically favored due to its greater computational efficiency.
Subset-based matching algorithms face challenges in estimating an initial
guess when images from different cameras experience significant deformations
caused by a large stereo angle. To overcome this limitation, feature-based
matching techniques have been introduced. One such technique is the scale-
invariant feature transformation (SIFT) algorithm, which allows for fast and
accurate stereo matching in these scenarios [146, 147].
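A minimal sketch of such feature-based seeding with OpenCV is shown below; the matched point pairs can serve as initial guesses for subset-based stereo matching. The ratio-test threshold and overall structure are illustrative choices, not those of the cited implementations.

```python
import cv2

def sift_initial_guess(img_left, img_right, ratio=0.75):
    """SIFT matching between stereo images to seed subset-based correlation."""
    sift = cv2.SIFT_create()
    kp_l, des_l = sift.detectAndCompute(img_left, None)
    kp_r, des_r = sift.detectAndCompute(img_right, None)
    matcher = cv2.BFMatcher()
    pairs = []
    for match in matcher.knnMatch(des_l, des_r, k=2):
        if len(match) < 2:
            continue
        m, n = match
        # Lowe's ratio test filters ambiguous matches.
        if m.distance < ratio * n.distance:
            pairs.append((kp_l[m.queryIdx].pt, kp_r[m.trainIdx].pt))
    return pairs
```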
In the context of 3D-DIC, three prevalent matching strategies are employed.
The initial approach (depicted in Figure 11 _(a)_) entails matching the left
reference image with the right reference image (for the initial state), and
subsequently aligning all left and right deformed images to their
corresponding initial states (temporal matching). The second strategy (Figure
11 _(b)_) involves correlating all deformed left and right images with the
left reference image. The third strategy (Figure 11 _(c)_) compares the
deformed left images to the left reference image via temporal matching,
followed by matching each corresponding right image with the present left
image through stereo matching. Amid these strategies, the first one, which
conducts the computationally intensive stereo matching only once, is often
deemed the most effective choice for practical measurements [1].
Figure 11: Strategies for stereo-correlations in 3D-DIC.
#### 4.2.3 Reconstruction
After the correlation process, each point in the image is now matched relative
to the reference image. By incorporating the calibrated parameters of the
stereo camera-unit, the classic triangulation method can be utilized to
recover the 3D coordinates of measurement points. Within the domain of 3D-DIC,
four prevalent techniques are utilized for 3D reconstruction: the least square
method, optimal method, mid-point method, and geometrical optimal method [148,
149, 150, 151]. A comparative analysis by Zhong et al. [152] established that
the least square method stands out for its superior computational efficiency
while maintaining measurement accuracy.
Figure 12: 3D reconstruction based on triangulation.
Computation with the least square method is straightforward. Initially, the world
coordinate system is established in alignment with the left camera coordinate
system. Illustrated in Figure 12, $\mathbf{R}_{L}$ corresponds to an identity
matrix, and $\mathbf{t}_{L}$ represents a zero vector.
$(\mathbf{R}_{R},\mathbf{t}_{R})$ corresponds to
$(\mathbf{R}_{C},\mathbf{t}_{C})$, as obtained through stereo calibration. The
point under computation is labeled as $Q$. Adopting the premise of a pinhole
camera model, the correlation between the left image coordinates
$(x_{l},y_{l})$ and the world coordinates $(X_{Q},Y_{Q},Z_{Q})$ is expressed
as elucidated in Equation 20.
$\begin{bmatrix}x_{l}\\ y_{l}\end{bmatrix}=\begin{bmatrix}f_{x}^{L}&s_{L}&c_{x}^{L}\\ 0&f_{y}^{L}&c_{y}^{L}\end{bmatrix}\begin{bmatrix}X_{Q}/Z_{Q}\\ Y_{Q}/Z_{Q}\\ 1\end{bmatrix}$ (20)
Given $\mathbf{R}_{c}=\begin{bmatrix}R_{11}&R_{12}&R_{13}\\ R_{21}&R_{22}&R_{23}\\ R_{31}&R_{32}&R_{33}\end{bmatrix}$ and $\mathbf{t}_{c}=\begin{bmatrix}t_{x}\\ t_{y}\\ t_{z}\end{bmatrix}$, the relationship between the right image coordinates $(x_{r},y_{r})$ and the world coordinates $(X_{Q},Y_{Q},Z_{Q})$ can be described by Equation 21.
$\begin{bmatrix}x_{r}\\ y_{r}\end{bmatrix}=\begin{bmatrix}f_{x}^{R}&s_{R}&c_{x}^{R}\\ 0&f_{y}^{R}&c_{y}^{R}\end{bmatrix}\begin{bmatrix}\frac{R_{11}X_{Q}+R_{12}Y_{Q}+R_{13}Z_{Q}+t_{x}}{R_{31}X_{Q}+R_{32}Y_{Q}+R_{33}Z_{Q}+t_{z}}\\ \frac{R_{21}X_{Q}+R_{22}Y_{Q}+R_{23}Z_{Q}+t_{y}}{R_{31}X_{Q}+R_{32}Y_{Q}+R_{33}Z_{Q}+t_{z}}\\ 1\end{bmatrix}$ (21)
By integrating Equations 20 and 21, the world coordinates
$(X_{Q},Y_{Q},Z_{Q})$ can be reconstructed using Equation 22.
$\begin{bmatrix}X_{Q}&Y_{Q}&Z_{Q}\end{bmatrix}^{T}=(\mathbf{M}^{T}\mathbf{M})^{-1}\mathbf{M}^{T}\mathbf{b}$ (22)
$\mathbf{M}$ and $\mathbf{b}$ are given by Equations 23 and 24, respectively.
It is important to note that all the parameters in $\mathbf{M}$ and
$\mathbf{b}$ were obtained during the stereo calibration process.
$\mathbf{M}=\begin{bmatrix}f_{x}^{L}&s_{L}&c_{x}^{L}-x_{l}\\ 0&f_{y}^{L}&c_{y}^{L}-y_{l}\\ R_{11}f_{x}^{R}+R_{21}s_{R}+R_{31}(c_{x}^{R}-x_{r})&R_{12}f_{x}^{R}+R_{22}s_{R}+R_{32}(c_{x}^{R}-x_{r})&R_{13}f_{x}^{R}+R_{23}s_{R}+R_{33}(c_{x}^{R}-x_{r})\\ R_{21}f_{y}^{R}+R_{31}(c_{y}^{R}-y_{r})&R_{22}f_{y}^{R}+R_{32}(c_{y}^{R}-y_{r})&R_{23}f_{y}^{R}+R_{33}(c_{y}^{R}-y_{r})\end{bmatrix}$ (23)
$\mathbf{b}=\begin{bmatrix}0\\ 0\\ -(t_{x}f_{x}^{R}+t_{y}s_{R}+t_{z}(c_{x}^{R}-x_{r}))\\ -(t_{y}f_{y}^{R}+t_{z}(c_{y}^{R}-y_{r}))\end{bmatrix}$ (24)
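The sketch below assembles $\mathbf{M}$ and $\mathbf{b}$ from Equations 23 and 24 and solves Equation 22 for one point. The function signature is illustrative and assumes the intrinsic matrices follow the layout of Equations 20 and 21.

```python
import numpy as np

def triangulate_point(xl, yl, xr, yr, k_l, k_r, R, t):
    """Least-squares triangulation of one point (Equations 22-24, sketch)."""
    fxL, sL, cxL = k_l[0, 0], k_l[0, 1], k_l[0, 2]
    fyL, cyL = k_l[1, 1], k_l[1, 2]
    fxR, sR, cxR = k_r[0, 0], k_r[0, 1], k_r[0, 2]
    fyR, cyR = k_r[1, 1], k_r[1, 2]
    tx, ty, tz = np.asarray(t).ravel()
    M = np.array([
        [fxL, sL, cxL - xl],
        [0.0, fyL, cyL - yl],
        [R[0, 0] * fxR + R[1, 0] * sR + R[2, 0] * (cxR - xr),
         R[0, 1] * fxR + R[1, 1] * sR + R[2, 1] * (cxR - xr),
         R[0, 2] * fxR + R[1, 2] * sR + R[2, 2] * (cxR - xr)],
        [R[1, 0] * fyR + R[2, 0] * (cyR - yr),
         R[1, 1] * fyR + R[2, 1] * (cyR - yr),
         R[1, 2] * fyR + R[2, 2] * (cyR - yr)],
    ])
    b = np.array([0.0, 0.0,
                  -(tx * fxR + ty * sR + tz * (cxR - xr)),
                  -(ty * fyR + tz * (cyR - yr))])
    # Equation 22: solved via lstsq, equivalent to (M^T M)^{-1} M^T b.
    X, *_ = np.linalg.lstsq(M, b, rcond=None)
    return X  # (X_Q, Y_Q, Z_Q)
```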
#### 4.2.4 Computation of Displacements and Strains
The previously acquired 3D coordinates serve as the basis for constructing
complete displacement and strain maps. Displacements are individually computed
for each point, while for each triangular element, strains are determined
through the application of the Cosserat point element method [153, 154]. The
vertices’ positional vectors of each triangular element are employed to
calculate the deformation gradient tensor $\mathbf{F}$. Utilizing
$\mathbf{F}$, both the right and left Cauchy-Green deformation tensors
($\mathbf{C}=\mathbf{F}^{T}\mathbf{F}$ and
$\mathbf{B}=\mathbf{F}\mathbf{F}^{T}$) are derived, alongside the Green-
Lagrangian and Eulerian-Almansi strain tensors
($\mathbf{E}=0.5(\mathbf{C}-\mathbf{I})$ and
$\mathbf{e}=0.5(\mathbf{I}-\mathbf{B}^{-1})$, respectively). By analyzing
these tensors, principal components and directions are determined, leading to
the derivation of principal stretches ($\lambda_{i}$) and strains ($E_{i}$ and
$e_{i}$), as well as measures like equivalent (Von-Mises) strain, maximal
shear strain, and area change.
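A simplified planar sketch of this element-wise computation is given below: it recovers $\mathbf{F}$ from the edge vectors of a single triangle and derives the strain measures listed above. The full Cosserat point element formulation in [153, 154] operates on 3D vertex positions, so this 2D version is illustrative only.

```python
import numpy as np

def triangle_strains(x_ref, x_def):
    """Strain measures for one triangular element (planar sketch).

    x_ref, x_def : (3, 2) vertex coordinates before and after deformation
    Returns the Green-Lagrangian and Eulerian-Almansi strain tensors and
    the principal stretches.
    """
    # Edge vectors spanning the element in each configuration (as columns).
    dX = np.stack([x_ref[1] - x_ref[0], x_ref[2] - x_ref[0]], axis=1)
    dx = np.stack([x_def[1] - x_def[0], x_def[2] - x_def[0]], axis=1)
    F = dx @ np.linalg.inv(dX)                  # deformation gradient
    C = F.T @ F                                 # right Cauchy-Green tensor
    B = F @ F.T                                 # left Cauchy-Green tensor
    I = np.eye(2)
    E = 0.5 * (C - I)                           # Green-Lagrangian strain
    e = 0.5 * (I - np.linalg.inv(B))            # Eulerian-Almansi strain
    stretches = np.sqrt(np.linalg.eigvalsh(C))  # principal stretches
    return E, e, stretches
```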
#### 4.2.5 Software
The effective implementation of algorithms is crucial for 3D-DIC analysis.
Table 3 presents a compilation of presently accessible non-commercial 3D-DIC
software. It is essential to note that this list exclusively comprises
software with supporting peer-reviewed research papers, and there may be other
available choices.
Software | Authors | First Release | User Interface | Open Source | Free | Language | Citations (Oct 2023)
---|---|---|---|---|---|---|---
DICe | Turner et al. [45] | 2015 | Yes | Yes | Yes | C++ | 12
MultiDIC | Solav et al. [153] | 2018 | Yes | Yes | Yes | Matlab | 142
DuoDIC | Solav and Silverstein [154] | 2022 | Yes | Yes | Yes | Matlab | 4
iCorrVision-3D | Nunes et al. [155] | 2022 | Yes | Yes | Yes | Python | 2
Table 3: 3D-DIC Software.
### 4.3 Applications
The 3D-DIC technique offers advantages over the simpler 2D-DIC technique, as
it does not require the specimen surface to be planar or the out-of-plane
displacement to be negligible during testing. Yuan et al. [25]
investigated the out-of-plane deformation in both monotonic (fracture) and
cyclic (fatigue) SCB tests using 3D-DIC. The results showed that in the
fracture test, the out-of-plane displacement ranged from -0.45 mm to 0.45 mm,
while in the fatigue test, it fluctuated between -1 mm and 0.95 mm. In a
separate study, Cheng et al. [118] arrived at a similar conclusion. These
findings indicate that the widely accepted assumption of negligible out-of-
plane displacement in SCB tests may not be valid, further highlighting the
advantages of 3D-DIC.
However, the adoption of the 3D-DIC technique in characterizing AC has been
limited, with less than 5% of journal publications since 2002 utilizing this
technique. The first instance of such usage was reported in 2017 [19]. This
limited adoption can be primarily attributed to the requirement of two
synchronized cameras and a relatively complex camera calibration process [18].
Table 4 summarizes the applications of 3D-DIC in laboratory characterization
of AC. The current applications of 3D-DIC closely resemble those of the 2D
alternative, emphasizing the monitoring of displacement and strain map
evolution during testing and the tracking of crack propagation. Future
research may investigate the application of 3D-DIC in laboratory tests that
are not amenable to 2D-DIC. For example, the use of Linear Variable
Differential Transformers (LVDTs) is a common method for monitoring vertical
displacement in dynamic modulus (E*) tests on cylindrical specimens. However,
LVDTs require periodic calibration, labor-intensive installation, and
extensive training. Additionally, they provide only a limited number of
discrete measurement points on the specimen’s surface [156]. Conversely,
adopting 3D-DIC may allow for full-field displacement data acquisition,
potentially eliminating the need for LVDTs. Furthermore, it is crucial to
evaluate the validity of the assumption of negligible out-of-plane deformation
in tests other than the SCB.
Type | Test | Applications | Publications
---|---|---|---
Fracture | SCB | Strain & Displacement | [157, 118, 158, 25, 159]
Fracture | SCB | Crack Propagation | [118]
Fracture | Double cantilever beam (DCB) | Mechanistic Parameters | [160]
Fatigue | 4PB | Strain & Displacement | [161]
Fatigue | 4PB | Crack Propagation | [161]
Fatigue | SCB | Strain & Displacement | [25]
Strength | IDT | Strain & Displacement | [102]
Strength | IDT | Damage Evolution | [102]
Others | Repeated load uniaxial test | Displacement | [106]
Table 4: Applications of 3D-DIC in AC laboratory tests.
## 5 Emerging DIC Techniques
### 5.1 Digital Volume Correlation
2D-DIC and 3D-DIC are restricted to measuring surface displacements and
strains. However, because AC is heterogeneous, surface measurements may not
represent the behavior within the bulk, which can lead to inconclusive
results. This section discusses DVC, which enables displacement and strain
mapping within the interior of loaded samples [162, 163, 164]. Although the
asphalt pavement engineering community has not yet adopted the DVC technique,
the information presented here intends to promote future research in this
field.
Figure 13: The overall digital volume correlation process.
The initial stage in applying DVC involves acquiring 3D images of unloaded and
loaded specimens. X-ray computed tomography (CT) is the prevailing technique
used for imaging, where a series of 2D X-ray images are used to generate 3D
images of the specimens [164]. It is important to mention that CT has been
employed to investigate the internal structure of AC [165, 166, 167].
Additionally, other imaging techniques, such as magnetic resonance imaging
(MRI) and optical coherence tomography (OCT), can also be utilized [168, 169].
As depicted in Figure 13, the DVC process begins with the choice of an ROI
encompassing the points requiring displacement determination.
The next step is the estimation of displacement vectors at each measurement
point. This is achieved through the correlation of a reference (unloaded)
image volume with a target (deformed) image volume. Like 2D- and 3D-DIC
methods, the calculation of displacement in DVC entails determining a
combination of transformations (e.g., translation, shear, rotation) that
minimizes a cost function, such as the sum-of-squares correlation coefficient
(SSCC) or normalized cross-correlation coefficient (NCCC) cost function [170,
171, 172].
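The core of this step can be sketched for integer-voxel displacements as a brute-force NCC search, as below; practical DVC codes add subvoxel interpolation and gradient-based optimization. All names are illustrative, and the subvolume is assumed to lie within the volume bounds.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation coefficient between two subvolumes."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_integer_shift(ref_vol, def_vol, center, half, search=5):
    """Integer-voxel DVC search maximizing NCC over candidate translations.

    ref_vol, def_vol : 3D image volumes (e.g., from X-ray CT)
    center, half     : subvolume center (z, y, x) and half-size in voxels
    search           : search radius in voxels
    """
    z, y, x = center
    ref = ref_vol[z - half:z + half + 1, y - half:y + half + 1,
                  x - half:x + half + 1]
    best, best_shift = -np.inf, (0, 0, 0)
    for dz in range(-search, search + 1):
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                cand = def_vol[z + dz - half:z + dz + half + 1,
                               y + dy - half:y + dy + half + 1,
                               x + dx - half:x + dx + half + 1]
                score = ncc(ref, cand)
                if score > best:
                    best, best_shift = score, (dz, dy, dx)
    return best_shift, best
```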
In the last phase, strains are assessed at all measurement sites by analyzing
the deformation gradients within the local neighborhood. The strain tensor
at each point $\mathbf{p}$ is calculated by fitting a $2^{nd}$-order Taylor
series expansion of the displacement vector field using a group of nearby
points through a least squares approach [173, 171].
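For brevity, a first-order variant of this fit is sketched below (the cited works use a second-order expansion): the local displacement gradient at a point is recovered by least squares over neighboring measurement points, and the small-strain tensor follows from its symmetric part.

```python
import numpy as np

def local_strain(points, disps, p0):
    """Least-squares strain at p0 from nearby DVC measurement points (sketch).

    points : (n, 3) coordinates of neighboring measurement points
    disps  : (n, 3) displacement vectors at those points
    p0     : (3,) evaluation point
    """
    dp = points - p0                             # offsets from p0
    A = np.hstack([np.ones((len(dp), 1)), dp])   # columns: [1 | dx dy dz]
    G = np.empty((3, 3))
    for i in range(3):                           # fit each component u_i
        coef, *_ = np.linalg.lstsq(A, disps[:, i], rcond=None)
        G[i] = coef[1:]                          # row i holds grad(u_i)
    return 0.5 * (G + G.T)                       # small-strain tensor
```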
DVC presents promising solutions to common challenges in asphalt pavement
engineering, including the validation of surface strain maps’ representation
of strain distribution across the entire specimen and the assessment of the
correspondence between surface crack propagation measurements through DIC and
actual three-dimensional crack propagation. Additionally, DVC demonstrates
extensive prospective applications encompassing internal granular material
movement tracking, internal strain quantification, crack initiation and
fracture monitoring, computation of SIF along crack fronts, and analysis of
fatigue crack closure effects, among others. Nevertheless, it is important to
acknowledge that acquiring high-resolution 3D images capable of supporting DVC
can be particularly challenging, especially given the complex and
heterogeneous nature of AC [174].
### 5.2 Deep-Learning-Based DIC
DIC is an iterative optimization procedure that requires substantial
computational resources, resulting in extended calculation times. It also
involves user inputs, including ROI, seed locations, subset size, and strain
calculation window size, making it a non-automatic process. However, deep
learning presents a solution to these challenges, enabling faster and fully
automated DIC analysis, known as an end-to-end process.
To facilitate a comprehensive discussion on recent advancements in deep-
learning-based DIC, it is crucial to introduce the concept of optical flow.
Optical flow refers to the perceived displacement field derived from two views
of a scene. It arises from the relative motion between specimens and the
camera in the scene, encompassing movement and deformation [175]. Notably, DIC
and optical flow share common ground as both methodologies aim to determine
the movement of pixels or features across a sequence of images. Deep learning
techniques, including Convolutional Neural Networks (CNN), Recurrent Neural
Networks (RNN), and Transformers, have been widely employed by researchers for
optical flow estimation. Table 5 provides a concise overview of the most
influential works in this area. Furthermore, the table presents the average
endpoint error (AEPE) of these works on a well-known benchmark dataset, Sintel
[176].
Method | Year | Algorithm | AEPE (in pixels)
---|---|---|---
FlowNet [177] | 2015 | Supervised; CNN | S (8.43); C (8.81)
FlowNet2.0 [178] | 2017 | Supervised; CNN | 6.02
PWC-Net [179] | 2018 | Supervised; CNN | 5.04
UnFlow [180] | 2018 | Unsupervised; CNN | 10.22
RAFT [181] | 2020 | Supervised; RNN | 2.86
FlowFormer [182] | 2022 | Supervised; Transformer | 2.09
Table 5: Deep learning methods for optical flow estimation.
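As an illustration of how such a pretrained optical-flow backbone might be queried on speckle image pairs, the sketch below uses the RAFT model shipped with torchvision (version 0.12 or later); the image tensors are placeholders, and no claim is made that this reproduces the training or evaluation protocols behind Table 5.

```python
import torch
from torchvision.models.optical_flow import Raft_Large_Weights, raft_large

# Load a RAFT model pretrained for optical flow estimation.
weights = Raft_Large_Weights.DEFAULT
model = raft_large(weights=weights).eval()
transforms = weights.transforms()

# Placeholder image batches (N, 3, H, W); H and W must be divisible by 8.
img1 = torch.randint(0, 256, (1, 3, 360, 640), dtype=torch.uint8)
img2 = torch.randint(0, 256, (1, 3, 360, 640), dtype=torch.uint8)
img1, img2 = transforms(img1, img2)

with torch.no_grad():
    flows = model(img1, img2)   # list of iterative refinements
flow = flows[-1]                # final estimate, (N, 2, H, W), in pixels
```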
In 2021, Boukhtache et al. [183] introduced StrainNet, a deep-learning-based
DIC method, for accurately determining in-plane subpixel displacement fields
from pairs of reference and deformed speckle images. By fine-tuning FlowNet-S
using a synthetic speckle dataset, StrainNet achieved a high level of
accuracy, with a mean absolute error (MAE) of 0.0299 pixels. This accuracy is
comparable to conventional 2D-DIC methods while also significantly reducing
computation time. In a subsequent publication, the authors proposed a
lightweight version called StrainNet-l [184]. This version significantly
reduced the number of parameters from 38.68 million to 0.67 million while
maintaining a similar level of accuracy. StrainNet-l achieved an MAE of 0.0312
pixels, demonstrating that parameter reduction did not compromise its
performance. Wang and Zhao [185] made further advancements in improving the
accuracy of displacement field determination by training a CNN similar to the
U-Net architecture. They utilized a synthetic dataset generated using Hermite
finite elements. The resulting network, named DIC-
Net, achieved an impressive MAE of 0.0130 pixels, indicating a significant
improvement in accuracy compared to previous methods. It is crucial to
emphasize that the aforementioned networks are designed exclusively for
retrieving displacement fields. To obtain strain maps, it is necessary to
convolve the displacement fields with suitable derivative filters. However,
Yang et al. [186] introduced an end-to-end network that directly measures
strain maps using a FlowNet-S-like architecture.
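A minimal sketch of this derivative-filter post-processing is shown below, assuming displacement maps `ux` and `uy` predicted by such a network. It applies Savitzky-Golay derivative kernels, a differentiator commonly used for DIC strain computation [44]; the window and polynomial order values are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d, savgol_coeffs

def strain_from_displacements(ux, uy, window=9, order=2, step=1.0):
    """Small strains from displacement maps via derivative filters (sketch)."""
    d = savgol_coeffs(window, order, deriv=1, delta=step)  # 1D derivative kernel
    kx = d[None, :]                     # differentiate along x (columns)
    ky = d[:, None]                     # differentiate along y (rows)
    exx = convolve2d(ux, kx, mode='same', boundary='symm')
    eyy = convolve2d(uy, ky, mode='same', boundary='symm')
    exy = 0.5 * (convolve2d(ux, ky, mode='same', boundary='symm')
                 + convolve2d(uy, kx, mode='same', boundary='symm'))
    return exx, eyy, exy
```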
All the previously mentioned networks are applicable only to 2D-DIC, where a
reference image and a deformed image are used as input. However, Wang et al.
[187] developed a network called StrainNet-3D, designed explicitly for
displacement retrieval from stereo images, similar to 3D-DIC. This approach
incorporates an affine-transformation-based method for calculating disparities
and a lightweight CNN for subpixel correlation. To ascertain three-dimensional
displacement, the CNN-derived disparities and temporal optical flow are
employed, guided by principles from stereo vision. Additionally, an optional
refiner network can be employed to enhance the accuracy of the results
further. StrainNet-3D achieved comparable accuracy to conventional 3D-DIC,
with a mean absolute error (MAE) of 0.0146 pixels compared to 0.0110 pixels.
Notably, StrainNet-3D exhibited improved efficiency as it does not require
camera calibration and enables faster calculation.
Currently, deep-learning-based DIC methods utilizing RNN or transformer
architectures have not been observed in the literature. However, based on the
facts presented in Table 5, these architectures have the potential to enhance
the accuracy of deep-learning-based DIC approaches significantly.
Furthermore, it is worth noting that the pavement engineering community has
not yet adopted these recent advancements in DIC using deep learning.
Nevertheless, considering the advantages offered, such as improved
computational efficiency, full automation, and elimination of user inputs
compared to conventional DIC methods, it is recommended to initiate
investigations into the viability of employing these deep-learning-based DIC
methods for AC characterization.
## 6 Flowchart of DIC Implementation for AC Characterization
The presented flowchart (Figure 14) serves as a comprehensive and reliable
reference for implementing DIC in characterizing AC. It is based on a
synthesis of best practices derived from the literature discussed in this
paper. It is highly recommended to refer to the flowchart in conjunction with
the detailed explanations provided in the respective sections of this paper
for a thorough understanding of the implementation process.
Figure 14: Flowchart for DIC implementation in AC characterization: synthesis
of best practices from literature.
## 7 Summary and Recommendations for Future Research
This article presents a comprehensive review of DIC as a critical tool for
laboratory testing of AC. The focus is primarily on the widely used 2D-DIC and
3D-DIC techniques. The study thoroughly investigates best practices related to
speckle pattern preparation, configuration of single-camera or dual-camera
imaging systems, and meticulous execution of DIC analyses. Additionally,
emerging DIC methodologies, such as DVC and deep-learning-based DIC, are
introduced, highlighting their potential for future applications in pavement
engineering. Lastly, a flowchart is provided as a comprehensive and reliable
reference for implementing DIC in AC characterization.
The key takeaways are summarized as follows:
* 1.
_Speckle Pattern Preparation_. The optimal painted speckle pattern for AC
specimens consists of black speckles applied onto a thin white basecoat. The
speckle granules should ideally be 3-5 pixels or larger in size. SSSIG and MIG
serve as effective indices for assessing the quality of the speckle pattern.
* 2.
_Imaging System Configuration_. The optimal camera-specimen distance can be
determined through mathematical calculations. To capture high-quality images
with minimal noise, the three parameters of the exposure triangle, namely
aperture, ISO, and shutter speed, need to be adjusted. Artificial lighting is
often required to enhance the brightness of the scene. In 2D-DIC, it is
advisable to employ a mechanical camera positioning tool to ensure parallel
alignment between the camera CCD sensor and the object surface. For 3D-DIC,
precise synchronization and stereo calibration of the dual-camera setup before
the experiment are essential to achieve accurate measurements. A narrower
stereo angle is preferable for in-plane measurement accuracy, while a larger
angle is preferred for improved out-of-plane results.
* 3.
_Algorithm_. In 2D-DIC, subset-based matching is performed between reference
and deformed images. In parallel, 3D-DIC encompasses stereo matching, which
strives to precisely align corresponding physical points in the images of the
left and right cameras, and temporal matching, which monitors these identical
points across successive images taken by the same camera under varying
conditions or time frames. Open-source software options are readily accessible
for both 2D- and 3D-DIC analyses.
* 4.
_Applications_. DIC has gained widespread utility in fracture, fatigue, and
strength testing. Its applications fall into three main groups: direct use of
DIC-generated displacement or strain maps, derivation of mechanistic
parameters, and tracking of crack propagation or damage evolution.
The following directions are recommended for future research:
* 1.
There is ongoing debate concerning whether the natural texture of AC
specimens satisfies the prescribed speckle criteria. In light of discrepant findings in
existing research, it is advisable to conduct further investigations into the
circumstances under which natural texture may be employed, considering factors
such as mixture characteristics, imaging system configuration, and precision
requirements.
* 2.
Most of the reviewed articles primarily employed DIC for displacement and
strain measurement, with minimal post-processing. Nevertheless, it is crucial
to recognize that more meaningful and quantitative results, such as
mechanistic parameters and precise crack propagation paths, can be derived
through supplementary post-processing methods detailed in Section 3.3.
* 3.
The prevailing approaches for tracking and quantifying crack propagation with
DIC predominantly rely on visual or empirical methodologies. It is advisable
to investigate the integration of fundamental mechanistic theories with DIC
for the measurement of cracks in mode II fracture, mixed-mode fracture, and
fatigue tests. Additionally, the combination of computer vision and
fundamental mechanistic theories appears promising for achieving both high
reliability and automation.
* 4.
Present methods for computing pseudo SIF and J-integral using strain fields
obtained through DIC rely on the assumption of constant modulus and material
homogeneity, respectively. Nevertheless, under high viscosity, the constant
modulus assumption breaks down, and AC is inherently heterogeneous. Therefore,
it is recommended to investigate approaches for computing pseudo SIF and
J-integral under conditions where these assumptions do not hold.
* 5.
The utilization of 3D-DIC in AC characterization remains limited, accounting
for less than 5% of published articles in the field. Future research efforts
could focus on implementing 3D-DIC in additional laboratory tests,
particularly those where 2D-DIC is not feasible. Moreover, it is crucial to
assess the validity of the assumption of negligible out-of-plane deformation
in tests other than the SCB test and establish distinct guidelines for
determining the appropriate use of 2D- or 3D-DIC in AC characterization.
* 6.
DIC has seen little application in large- or full-scale tests of AC. Cement
concrete researchers have previously utilized DIC in
such assessments. Notable challenges that must be addressed include optimizing
the imaging system setup to minimize vibrations and achieve adequate spatial
resolution, preparing specimens suitable for large-scale testing, and
identifying the valuable insights attainable through DIC analysis.
* 7.
Both 2D-DIC and 3D-DIC are limited to surface displacement and strain
measurements, which may yield inconclusive results due to the heterogeneous
nature of AC. To overcome this limitation, it is recommended to investigate
the potential of DVC as a tool for mapping displacement and strain within the
interior of loaded AC samples.
* 8.
Deep-learning-based DIC methods have demonstrated enhanced computational
efficiency, full automation, and reduced dependence on user inputs when
compared to conventional DIC techniques. Therefore, it is recommended to
explore the viability of utilizing these deep-learning-based DIC methods for
characterizing AC.
## Acknowledgements
The authors extend their appreciation to the anonymous reviewers for their
valuable feedback, which significantly enhanced the quality of this paper.
This work received no external funding. Any commercial products mentioned in
this paper do not represent endorsements from the authors.
## References
* Pan [2018] B. Pan, Digital image correlation for surface deformation measurement: historical developments, recent advances and future goals, Measurement Science and Technology 29 (2018) 082001. doi:https://doi.org/10.1088/1361-6501/aac55b.
* Chu et al. [1985] T. Chu, W. Ranson, M. A. Sutton, Applications of digital-image-correlation techniques to experimental mechanics, Experimental mechanics 25 (1985) 232–244. doi:https://doi.org/10.1007/BF02325092.
* Pan [2011] B. Pan, Recent progress in digital image correlation, Experimental mechanics 51 (2011) 1223–1235. doi:https://doi.org/10.1007/s11340-010-9418-3.
* Peters and Ranson [1982] W. Peters, W. Ranson, Digital imaging techniques in experimental stress analysis, Optical engineering 21 (1982) 427–431. doi:https://doi.org/10.1117/12.7972925.
* Sutton et al. [1983] M. A. Sutton, W. Wolters, W. Peters, W. Ranson, S. McNeill, Determination of displacements using an improved digital correlation method, Image and vision computing 1 (1983) 133–139. doi:https://doi.org/10.1016/0262-8856(83)90064-1.
* Peters et al. [1983] W. Peters, W. Ranson, M. Sutton, T. Chu, J. Anderson, Application of digital correlation methods to rigid body mechanics, Optical Engineering 22 (1983) 738–742. doi:https://doi.org/10.1117/12.7973231.
* He et al. [1984] Z. He, M. Sutton, W. Ranson, W. Peters, Two-dimensional fluid-velocity measurements by use of digital-speckle correlation techniques, experimental mechanics 24 (1984) 117–121. doi:https://doi.org/10.1007/BF02324993.
* Luo et al. [1993] P. Luo, Y. Chao, M. Sutton, W.-H. Peters, Accurate measurement of three-dimensional deformations in deformable and rigid bodies using computer vision, Experimental mechanics 33 (1993) 123–132. doi:https://doi.org/10.1007/BF02322488.
* Luo et al. [1994] P.-F. Luo, Y. J. Chao, M. A. Sutton, Application of stereo vision to three-dimensional deformation analyses in fracture experiments, Optical Engineering 33 (1994) 981–990. doi:https://doi.org/10.1117/12.160877.
* Seo et al. [2002] Y. Seo, Y. Kim, M. W. Witczak, R. Bonaquist, Application of digital image correlation method to mechanical testing of asphalt-aggregate mixtures, Transportation Research Record 1789 (2002) 162–172. doi:https://doi.org/10.3141/1789-18.
* Chehab et al. [2007] G. R. Chehab, Y. Seo, Y. R. Kim, Viscoelastoplastic damage characterization of asphalt–aggregate mixtures using digital image correlation, International Journal of Geomechanics 7 (2007) 111–118. doi:https://doi.org/10.1061/(ASCE)1532-3641(2007)7:2(111).
* Birgisson et al. [2008] B. Birgisson, A. Montepara, E. Romeo, R. Roncella, J. Napier, G. Tebaldi, Determination and prediction of crack patterns in hot mix asphalt (hma) mixtures, Engineering Fracture Mechanics 75 (2008) 664–673. doi:https://doi.org/10.1016/j.engfracmech.2007.02.003.
* Birgisson et al. [2009] B. Birgisson, A. Montepara, E. Romeo, R. Roncella, R. Roque, G. Tebaldi, An optical strain measurement system for asphalt mixtures, Materials and Structures 42 (2009) 427–441. doi:https://doi.org/10.1617/s11527-008-9392-8.
* Buttlar et al. [2014] W. G. Buttlar, B. C. Hill, Y. R. Kim, M. E. Kutay, A. Millien, A. Montepara, G. H. Paulino, C. Petit, I. O. Pop, E. Romeo, et al., Digital image correlation techniques to investigate strain fields and cracking phenomena in asphalt materials, Materials and Structures 47 (2014) 1373–1390. doi:https://doi.org/10.1617/s11527-014-0362-z.
* Safavizadeh et al. [2017] S. Safavizadeh, A. Wargo, Y. R. Kim, Utilizing digital image correlation (dic) in asphalt pavement testing, Journal of Testing and Evaluation 46 (2017) 984–998. doi:https://doi.org/10.1520/JTE20160262.
* Rivera-Pérez et al. [2021] J. Rivera-Pérez, H. Ozer, J. Lambros, I. L. Al-Qadi, Illinois flexibility index test: effect of specimen geometry and test configuration on the asphalt concrete damage zone, Journal of Transportation Engineering, Part B: Pavements 147 (2021) 04020085. doi:https://doi.org/10.1061/JPEODX.0000243.
* Zhu and Al-Qadi [2023] Z. Zhu, I. L. Al-Qadi, Crack detection of asphalt concrete using combined fracture mechanics and digital image correlation, Journal of Transportation Engineering, Part B: Pavements 149 (2023) 04023012. doi:https://doi.org/10.1061/JPEODX.PVENG-1249.
* Zhu and Al-Qadi [2024] Z. Zhu, I. L. Al-Qadi, Sift-aided rectified 2d-dic for displacement and strain measurements in asphalt concrete testing, Journal of Transportation Engineering, Part B: Pavements (2024) _[Forthcoming]_. doi:https://doi.org/10.1061/JPEODX.PVENG-1401.
* Stewart et al. [2017] C. M. Stewart, J. G. Reyes, V. M. Garcia, Comparison of fracture test standards for a super pave dense-graded hot mix asphalt, Engineering Fracture Mechanics 169 (2017) 262–275. doi:https://doi.org/10.1016/j.engfracmech.2016.10.016.
* Dong and Pan [2017] Y. Dong, B. Pan, A review of speckle pattern fabrication and assessment for digital image correlation, Experimental Mechanics 57 (2017) 1161–1181. doi:https://doi.org/10.1007/s11340-017-0283-1.
* Reu [2014] P. Reu, All about speckles: speckle size measurement, Experimental Techniques 38 (2014) 1–2. doi:https://doi.org/10.1111/ext.12110.
* Reu [2015] P. Reu, All about speckles: contrast, Experimental Techniques 39 (2015) 1–2. doi:https://doi.org/10.1111/ext.12126.
* Xing et al. [2017] C. Xing, Y. Tan, X. Liu, K. Anupam, T. Scarpas, Research on local deformation property of asphalt mixture using digital image correlation, Construction and Building Materials 140 (2017) 416–423. doi:https://doi.org/10.1016/j.conbuildmat.2017.02.108.
* Guo et al. [2020] Q. Guo, H. Wang, Y. Gao, Y. Jiao, F. Liu, Z. Dong, Investigation of the low-temperature properties and cracking resistance of fiber-reinforced asphalt concrete using the dic technique, Engineering Fracture Mechanics 229 (2020) 106951. doi:https://doi.org/10.1016/j.engfracmech.2020.106951.
* Yuan et al. [2020] F. Yuan, L. Cheng, X. Shao, Z. Dong, L. Zhang, G. Wu, X. He, Full-field measurement and fracture and fatigue characterizations of asphalt concrete based on the scb test and stereo-dic, Engineering Fracture Mechanics 235 (2020) 107127. doi:https://doi.org/10.1016/j.engfracmech.2020.107127.
* Doll [2015] B. Doll, Evaluation of viscous effects in crack tip fields in recycled asphalt pavement materials using digital image correlation, University of Illinois at Urbana-Champaign, 2015.
* LePage et al. [2017] W. LePage, J. Shaw, S. Daly, Optimum paint sequence for speckle patterns in digital image correlation, Experimental Techniques 41 (2017) 557–563. doi:https://doi.org/10.1007/s40799-017-0192-3.
* Doll et al. [2017a] B. Doll, H. Ozer, J. Rivera-Perez, I. L. Al-Qadi, J. Lambros, Damage zone development in heterogeneous asphalt concrete, Engineering Fracture Mechanics 182 (2017a) 356–371. doi:https://doi.org/10.1016/j.engfracmech.2017.06.002.
* Doll et al. [2017b] B. Doll, H. Ozer, J. J. Rivera-Perez, I. L. Al-Qadi, J. Lambros, Investigation of viscoelastic fracture fields in asphalt mixtures using digital image correlation, International Journal of Fracture 205 (2017b) 37–56. doi:https://doi.org/10.1007/s10704-017-0180-8.
* Lionello and Cristofolini [2014] G. Lionello, L. Cristofolini, A practical approach to optimizing the preparation of speckle patterns for digital-image correlation, Measurement Science and Technology 25 (2014) 107001. doi:https://doi.org/10.1088/0957-0233/25/10/107001.
* Pan et al. [2008] B. Pan, H. Xie, Z. Wang, K. Qian, Z. Wang, Study on subset size selection in digital image correlation for speckle patterns, Optics express 16 (2008) 7037–7048. doi:https://doi.org/10.1364/OE.16.007037.
* Pan et al. [2010] B. Pan, Z. Lu, H. Xie, Mean intensity gradient: an effective global parameter for quality assessment of the speckle patterns used in digital image correlation, Optics and Lasers in Engineering 48 (2010) 469–477. doi:https://doi.org/10.1016/j.optlaseng.2009.08.010.
* Neggers et al. [2016] J. Neggers, B. Blaysat, J. P. Hoefnagels, M. G. Geers, On image gradients in digital image correlation, International Journal for Numerical Methods in Engineering 105 (2016) 243–260. doi:https://doi.org/10.1002/nme.4971.
* Pan et al. [2009] B. Pan, K. Qian, H. Xie, A. Asundi, Two-dimensional digital image correlation for in-plane displacement and strain measurement: a review, Measurement science and technology 20 (2009) 062001. doi:https://doi.org/10.1088/0957-0233/20/6/062001.
* Jones et al. [2018] E. M. Jones, M. A. Iadicola, et al., A good practices guide for digital image correlation, International Digital Image Correlation Society 10 (2018) 308–312. doi:https://doi.org/10.32720/idics/gpg.ed1/print.format.
* Sutton et al. [2000] M. A. Sutton, S. R. McNeill, J. D. Helm, Y. J. Chao, Advances in two-dimensional and three-dimensional computer vision, Photomechanics (2000) 323–372. doi:https://doi.org/10.1007/3-540-48800-6\\_10.
* Zhang [2000] Z. Zhang, A flexible new technique for camera calibration, IEEE Transactions on pattern analysis and machine intelligence 22 (2000) 1330–1334. doi:https://doi.org/10.1109/34.888718.
* Wittevrongel et al. [2015] L. Wittevrongel, M. Badaloni, R. Balcaen, P. Lava, D. Debruyne, Evaluation of methodologies for compensation of out of plane motions in a 2d digital image correlation setup, Strain 51 (2015) 357–369. doi:https://doi.org/10.1111/str.12146.
* Gruen and Huang [2013] A. Gruen, T. S. Huang, Calibration and orientation of cameras in computer vision, volume 34, Springer Science & Business Media, 2013.
* Pan [2009] B. Pan, Reliability-guided digital image correlation for image deformation measurement, Applied optics 48 (2009) 1535–1542. doi:https://doi.org/10.1364/AO.48.001535.
* Blaber et al. [2015] J. Blaber, B. Adair, A. Antoniou, Ncorr: open-source 2d digital image correlation matlab software, Experimental Mechanics 55 (2015) 1105–1122. doi:https://doi.org/10.1007/s11340-015-0009-1.
* Pan et al. [2009] B. Pan, A. Asundi, H. Xie, J. Gao, Digital image correlation using iterative least squares and pointwise least squares for displacement field and strain field measurements, Optics and Lasers in Engineering 47 (2009) 865–874. doi:https://doi.org/10.1016/j.optlaseng.2008.10.014.
* Eberly [2000] D. Eberly, Least squares fitting of data, Chapel Hill, NC: Magic Software (2000) 1–10.
* Pan et al. [2007] B. Pan, H. Xie, Z. Guo, T. Hua, Full-field strain measurement using a two-dimensional savitzky-golay digital differentiator in digital image correlation, Optical Engineering 46 (2007) 033601–033601. doi:https://doi.org/10.1117/1.2714926.
* Turner et al. [2015] D. Turner, P. Crozier, P. Reu, Digital image correlation engine, Technical Report, Sandia National Laboratories (SNL), Albuquerque, NM, and Livermore, CA …, 2015.
* Olufsen et al. [2020] S. N. Olufsen, M. E. Andersen, E. Fagerholt, $\mu$dic: An open-source toolkit for digital image correlation, SoftwareX 11 (2020) 100391. doi:https://doi.org/10.1016/j.softx.2019.100391.
* Jiang [2023] Z. Jiang, Opencorr: An open source library for research and development of digital image correlation, Optics and Lasers in Engineering 165 (2023) 107566. doi:https://doi.org/10.1016/j.optlaseng.2023.107566.
* de Deus Filho et al. [2022] J. C. A. de Deus Filho, L. C. da Silva Nunes, J. M. C. Xavier, icorrvision-2d: An integrated python-based open-source digital image correlation software for in-plane measurements (part 1), SoftwareX 19 (2022) 101131. doi:https://doi.org/10.1016/j.softx.2022.101131.
* Radeef et al. [2022] H. Radeef, N. Hassan, M. Mahmud, K. Usman, C. Ismail, Z. Al Saffar, H. Abbas, Influence of ageing and moisture damage on the illinois flexibility index value of polymer modified asphalt mixture, Physics and Chemistry of the Earth, Parts A/B/C 128 (2022) 103248. doi:https://doi.org/10.1016/j.pce.2022.103248.
* Wang et al. [2022] L. Wang, Y. Liu, L. Zhang, A multiscale study of moisture influence on the crumb rubber asphalt mixture interface, Applied Sciences 12 (2022) 6940. doi:https://doi.org/10.3390/app12146940.
* Wu et al. [2022] B. Wu, Z. Pei, P. Xiao, K. Lou, X. Wu, Influence of fiber-asphalt interface property on crack resistance of asphalt mixture, Case Studies in Construction Materials 17 (2022) e01703. doi:https://doi.org/10.1016/j.cscm.2022.e01703.
* Cui et al. [2022] S. Cui, N. Guo, L. Wang, Z. You, Y. Tan, Z. Guo, X. Luo, Z. Chen, Effect of freeze–thaw cycles on the pavement performance of sbs modified and composite crumb rubber modified asphalt mixtures, Construction and Building Materials 342 (2022) 127799. doi:https://doi.org/10.1016/j.conbuildmat.2022.127799.
* Radeef et al. [2023] H. R. Radeef, N. A. Hassan, M. Z. H. Mahmud, Z. H. Al Saffar, H. F. Abass, A. R. Z. Abidin, C. R. Ismail, Fracture resistance of polymeric wastes modified asphalt using r-curve and digital image correlation, Theoretical and Applied Fracture Mechanics 123 (2023) 103691. doi:https://doi.org/10.1016/j.tafmec.2022.103691.
* Hu et al. [2022] G. Hu, Q. Yang, X. Qiu, D. Zhang, W. Zhang, S. Xiao, J. Xu, Use of dic and ae for investigating fracture behaviors of cold recycled asphalt emulsion mixtures with 100% rap, Construction and Building Materials 344 (2022) 128278. doi:https://doi.org/10.1016/j.conbuildmat.2022.128278.
* Pei et al. [2021] Z. Pei, K. Lou, H. Kong, B. Wu, X. Wu, P. Xiao, Y. Qi, Effects of fiber diameter on crack resistance of asphalt mixtures reinforced by basalt fibers based on digital image correlation technology, Materials 14 (2021) 7426. doi:https://doi.org/10.3390/ma14237426.
* Al-Qadi et al. [2022] I. L. Al-Qadi, I. M. Said, U. M. Ali, J. R. Kaddo, Cracking prediction of asphalt concrete using fracture and strength tests, International Journal of Pavement Engineering 23 (2022) 3333–3345. doi:https://doi.org/10.1080/10298436.2021.1892108.
* Zhu et al. [2020] X. Zhu, Y. Fan, Y. Yu, F. A. Gilabert, Crack propagation and microwave healing of ferrite-filled asphalt mixes based on image correlation observations, Construction and Building Materials 262 (2020) 119978. doi:https://doi.org/10.1016/j.conbuildmat.2020.119978.
* Kong et al. [2023] L. Kong, D. Ren, S. Zhou, Z. He, C. Ai, C. Yan, Evaluating the evolution of fiber-reinforced emulsified asphalt cold-recycled mixture damage using digital image correlation, International Journal of Pavement Engineering 24 (2023) 2176495. doi:https://doi.org/10.1080/10298436.2023.2176495.
* Wu et al. [2022] X. Wu, C. Kou, P. Xiao, Z. Wu, A. Kang, Performance evaluation of hot mix asphalt reinforced by basalt fibers with various diameters, Journal of Testing and Evaluation 50 (2022) 1920–1933. doi:https://doi.org/10.1520/JTE20210431.
* Asghar et al. [2022] M. F. Asghar, M. J. Khattak, A. Olayinka, Evaluation of fracture performance of polyvinyl alcohol fiber reinforced hot mix asphalt, Construction and Building Materials 350 (2022) 128741. doi:https://doi.org/10.1016/j.conbuildmat.2022.128741.
* Zhu et al. [2020] X. Zhu, F. Ye, Y. Cai, B. Birgisson, Y. Yu, Digital image correlation-based investigation of self-healing properties of ferrite-filled open-graded friction course asphalt mixture, Construction and Building Materials 234 (2020) 117378. doi:https://doi.org/10.1016/j.jclepro.2019.05.353.
* Zhou et al. [2017] Z. Zhou, X. Gu, F. Ni, Q. Li, X. Ma, Cracking resistance characterization of asphalt concrete containing reclaimed asphalt pavement at intermediate temperatures, Transportation Research Record 2633 (2017) 46–57. doi:https://doi.org/10.3141/2633-07.
* Wu et al. [2022] H. Wu, L. Wang, J. Hu, X. Luo, Evaluation of low-temperature performance of sbs/cr composite modified-asphalt mixture under aging and freeze–thaw cycles, Journal of Materials in Civil Engineering 34 (2022) 04022239. doi:https://doi.org/10.1061/(ASCE)MT.1943-5533.0004395.
* Hill et al. [2017] B. Hill, O. Giraldo-Londoño, G. Paulino, W. Buttlar, Inverse estimation of cohesive fracture properties of asphalt mixtures using an optimization approach, Experimental mechanics 57 (2017) 637–648. doi:https://doi.org/10.1007/s11340-017-0257-3.
* Lin et al. [2023] Q. Lin, Z. Liu, J. Sun, L. Yu, Comprehensive modification of emulsified asphalt on improving mechanical properties of crumb rubber concrete, Construction and Building Materials 369 (2023) 130555. doi:https://doi.org/10.1016/j.conbuildmat.2023.130555.
* Wang et al. [2022] L. Wang, X. Bai, et al., Study on the low temperature cracking mechanism of steel slag asphalt mixture by macroscale and microscale tests, Advances in Materials Science and Engineering 2022 (2022). doi:https://doi.org/10.1155/2022/4875276.
* Cullen et al. [2021] S. Cullen, D. Offenbacker, A. Ali, Y. Mehta, C. Decarlo, M. Elshaer, Assessing the impact of geosynthetic interlayers on laboratory cracking and delamination of hot-mix asphalt mixtures, Transportation Research Record 2675 (2021) 148–160. doi:https://doi.org/10.1177/0361198121996712.
* Wang et al. [2021] L. Wang, S. Cui, L. Feng, Research on the influence of ultraviolet aging on the interfacial cracking characteristics of warm mix crumb rubber modified asphalt mortar, Construction and Building Materials 281 (2021) 122556. doi:https://doi.org/10.1016/j.conbuildmat.2021.122556.
* Wang et al. [2020] L. Wang, M. Shan, C. Chang, X. Zhou, The macro-and meso-cracking characteristics of warm mix crumb rubber asphalt mastics before and after aging, Construction and Building Materials 262 (2020) 120724. doi:https://doi.org/10.1016/j.conbuildmat.2020.120724.
* Wang et al. [2018] Z. Wang, Q. Dai, S. Guo, Microwave-healing performance of modified asphalt mixtures with flake graphite and exfoliated graphite nanoplatelet, Construction and Building Materials 187 (2018) 865–875. doi:https://doi.org/10.1016/j.conbuildmat.2018.06.210.
* Pedraza et al. [2019] A. Pedraza, H. Di Benedetto, C. Sauzéat, S. Pouget, Fracture properties of multirecycled asphalt mixes from four-point bending test using digital image correlation and back calculation, Journal of Testing and Evaluation 47 (2019) 20180524. doi:https://doi.org/10.1520/JTE20180524.
* Safavizadeh et al. [2022] S. A. Safavizadeh, S.-H. Cho, Y. R. Kim, Interface shear strength and shear fatigue resistance of fibreglass grid-reinforced asphalt concrete test specimens, International Journal of Pavement Engineering 23 (2022) 2531–2542. doi:https://doi.org/10.1080/10298436.2020.1861447.
* Sudarsanan et al. [2019] N. Sudarsanan, A. Arulrajah, R. Karpurapu, V. Amrithalingam, Digital image correlation technique for measurement of surface strains in reinforced asphalt concrete beams under fatigue loading, Journal of Materials in Civil Engineering 31 (2019) 04019135. doi:https://doi.org/10.1061/(ASCE)MT.1943-5533.0002743.
* Kumar V and Saride [2017] V. Kumar V, S. Saride, Use of digital image correlation for the evaluation of flexural fatigue behavior of asphalt beams with geosynthetic interlayers, Transportation Research Record 2631 (2017) 55–64. doi:https://doi.org/10.3141/2631-06.
* Safavizadeh and Kim [2017] S. A. Safavizadeh, Y. R. Kim, Dic technique to investigate crack propagation in grid-reinforced asphalt specimens, Journal of Materials in Civil Engineering 29 (2017) 04017011. doi:https://doi.org/10.1061/(ASCE)MT.1943-5533.0001839.
* Wargo et al. [2017] A. Wargo, S. A. Safavizadeh, Y. R. Kim, Comparing the performance of fiberglass grid with composite interlayer systems in asphalt concrete, Transportation Research Record 2631 (2017) 123–132. doi:https://doi.org/10.3141/2631-14.
* Safavizadeh et al. [2015] S. A. Safavizadeh, A. Wargo, M. Guddati, Y. R. Kim, Investigating reflective cracking mechanisms in grid-reinforced asphalt specimens: Use of four-point bending notched beam fatigue tests and digital image correlation, Transportation Research Record 2507 (2015) 29–38. doi:https://doi.org/10.3141/2507-04.
* Radeef et al. [2022] H. R. Radeef, N. A. Hassan, M. Z. H. Mahmud, A. R. Z. Abidin, R. P. Jaya, C. R. Ismail, H. F. Abbas, Linear viscoelastic response of semi-circular asphalt sample based on digital image correlation and xfem, Measurement 192 (2022) 110866. doi:https://doi.org/10.1016/j.measurement.2022.110866.
* Radeef et al. [2021] H. R. Radeef, N. A. Hassan, M. Z. H. Mahmud, A. R. Z. Abidin, C. R. Ismail, H. F. Abbas, Z. H. Al-Saffar, Characterisation of cracking resistance in modified hot mix asphalt under repeated loading using digital image analysis, Theoretical and Applied Fracture Mechanics 116 (2021) 103130. doi:https://doi.org/10.1016/j.tafmec.2021.103130.
* Yang et al. [2021] S. Yang, J. Jiang, Z. Leng, F. Ni, Feasibility and performance of the semi-circular bending test in evaluating the low-temperature performance of asphalt mortar, Construction and Building Materials 269 (2021) 121305. doi:https://doi.org/10.1016/j.conbuildmat.2020.121305.
* Jiang et al. [2019] J. Jiang, F. Ni, F. Wu, H. Sadek, Q. Lv, Evaluation of the healing potential of asphalt mixtures based on a modified semi-circular bending test, Construction and Building Materials 196 (2019) 284–294. doi:https://doi.org/10.1016/j.conbuildmat.2018.10.220.
* Jiang et al. [2018] J. Jiang, F. Ni, Q. Dong, Y. Zhao, K. Xu, Fatigue damage model of stone matrix asphalt with polymer modified binder based on tensile strain evolution and residual strength degradation using digital image correlation methods, Measurement 123 (2018) 30–38. doi:https://doi.org/10.1016/j.measurement.2018.03.037.
* Zhang et al. [2018] J. Zhang, M. Sakhaeifar, D. N. Little, A. Bhasin, Y.-R. Kim, Characterization of crack growth rate of sulfur-extended asphalt mixtures using cyclic semicircular bending test, Journal of Materials in Civil Engineering 30 (2018) 04018311. doi:https://doi.org/10.1061/(ASCE)MT.1943-5533.0002517.
* Jiang-san et al. [2022] H. Jiang-san, W. Lan, L. Xin, Anti-fatigue performance of warm-mixed rubber powder modified asphalt mixture based on the dic technique, Construction and Building Materials 335 (2022) 127489. doi:https://doi.org/10.1016/j.conbuildmat.2022.127489.
* Saride and Kumar [2019] S. Saride, V. V. Kumar, Estimation of service life of geosynthetic-reinforced asphalt overlays from beam and large-scale fatigue tests, Journal of Testing and Evaluation 47 (2019) 2693–2716. doi:https://doi.org/10.1520/JTE20170605.
# When Strings Surprise

Nissan Itzhaki¹,²,³ and Uri Peleg²

¹ Department of Physics, Princeton University, Princeton, NJ 08544, USA
² School of Physics and Astronomy, Tel Aviv University, Ramat Aviv, 69978, Israel
³ School of Natural Sciences, Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540, USA
###### Abstract
We argue that on-shell excitations with large negative energies are created
rapidly when the string coupling increases with time. This does not indicate
an inconsistency in string theory since the negative energy on-shell
excitation is always entangled with an on-shell excitation with a positive
energy. The total energy of this energy-EPR state vanishes. We discuss the
reason the energy-EPR states appear in string theory and the role they might
play in black hole physics.
It is fair to say that thirty years after Joe Polchinski posed the question
[1] there is still confusion about what string theory is. This short note aims
to add to the confusion.
We claim that in soft backgrounds ($\alpha^{\prime}R\ll 1$ and
$g_{s}=e^{\Phi}\ll 1$) in which the string coupling grows slowly with time,
on-shell excitations with large negative energies are rapidly created. These
excitations do not imply inconsistency since they do not appear by themselves.
An on-shell excitation with large negative energy is always created together
with another on-shell excitation with positive energy such that the total
energy vanishes. Schematically, the state takes the form
$|\Psi\rangle=\int dE_{1}|E_{1}\rangle\otimes|E_{2}=-E_{1}\rangle.$ (1)
Namely, we argue that string theory admits energy-EPR states where the total
energy (rather than total spin in [2, 3]) vanishes. The vacuum in quantum
field theory is filled with pairs of this form. The difference is that here
the excitations are on-shell. This means, in particular, that an observer can
take her time in making a precise measurement of the energy of, say, the first
excitation and obtain a negative value, $E_{1}<0$.
We start by considering the simplest background in which $\partial_{\mu}\Phi$
is time-like and points to the future: a time-like linear dilaton background
$ds^{2}=-dt^{2}+dx^{2}+dy^{2}+dz^{2}+\mbox{Compact},\quad\Phi=Qt.$ (2)
We work with $\alpha^{\prime}=1$ and take $Q>0$, so that the string coupling
constant, $g_{s}=e^{Qt}$, blows up in the future. We consider $Q\ll 1$ which
means that, at least naively, a low-energy effective description is expected
to be valid. As we shall see the creation of the energy-EPR states occurs long
before approaching the strong coupling region at large $t$. To illustrate the
existence of states like (1) we need only one non-compact space-like
direction. We take the number of non-compact space-like directions to be three
to emphasize that this could happen in our world. (As we shall see, the process that leads to (1) is a local process that requires only that locally $\partial_{t}\Phi>0$; hence measuring an on-shell excitation with a large negative energy does not mean we are heading towards a singularity in the future.)
For any positive $Q$, no matter how small, this background admits classical string solutions to the equations of motion and Virasoro constraints that are absent for $Q=0$ [4]
$t(\sigma,\tau)=t_{0}+Q\ln\left(\frac{1}{2}\cosh\left(\frac{\sigma}{Q}\right)+\frac{1}{2}\cosh\left(\frac{\tau}{Q}\right)\right),$
(3)
and $x=x_{0}+\sigma$, $y=y_{0}$, $z=z_{0}$ with $-\infty<\sigma,\tau<\infty$. This solution describes a closed folded string, an Instant Folded String (IFS), that is created at $t=t_{0}$, $x=x_{0}$, $y=y_{0}$, and $z=z_{0}$. The string expands rapidly, and asymptotically the fold travels at the speed of light. See Figure 1.
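One can check symbolically that (3) solves the constraints. The following minimal sympy sketch evaluates, on the solution, the leading Virasoro term and the dilaton improvement term in the forms made explicit in (4) and (5) below, and confirms that they cancel; the overall sign with which the improvement term enters is convention dependent and is assumed here.

```python
import sympy as sp

sigma, tau, t0 = sp.symbols('sigma tau t_0', real=True)
Q = sp.symbols('Q', positive=True)

# The solution (3): t(sigma, tau), with x = x0 + sigma
t = t0 + Q * sp.log(sp.cosh(sigma / Q) / 2 + sp.cosh(tau / Q) / 2)
x = sigma  # the constant x0 drops out of all derivatives

def d(f, s):
    """Worldsheet light-cone derivative d_pm = (d_tau +- d_sigma)/2."""
    return (sp.diff(f, tau) + s * sp.diff(f, sigma)) / 2

for s in (+1, -1):
    leading = -d(t, s)**2 + d(x, s)**2   # g_mn dX^m dX^n with g = diag(-1, +1)
    improvement = Q * d(d(t, s), s)      # alpha' dPhi d^2 X^mu with alpha' = 1
    print(sp.simplify((leading - improvement).rewrite(sp.exp)))  # prints 0 twice
```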
At the fold, $\tau=0$, we have $\partial_{\tau}t=0$. Therefore, the same solution in the upper half-plane ($-\infty<\sigma<\infty$, $0\leq\tau<\infty$) with a Neumann boundary condition at $\tau=0$ describes an Instant Open String (IOS); we thank I. Klebanov for this observation. Like the fold of the IFS, the endpoints of the IOS travel faster than light.
Figure 1: The IFS solution. The green arrows represent the null energy flux at
the fold, pointing backward in time to indicate negative energy. The flux
becomes more negative over time. This feeds energy into the bulk of the IFS,
allowing it to grow and become macroscopic.
The origin of the discontinuity in $Q$ – the fact that a solution exists for
any $Q>0$ and does not exist for $Q=0$ – is the following. By definition,
$\alpha^{\prime}$ corrections associated with the target space curvature, $H$
field, and second derivatives cannot dominate, in soft backgrounds, the
leading Virasoro constraints,
$g_{\mu\nu}\partial_{\pm}X^{\mu}\partial_{\pm}X^{\nu}.$ (4)
The dilaton gradient, however, is different. It is formally subleading in the
$\alpha^{\prime}$ expansion, but since it contributes to the Virasoro
constraints a linear term in $X^{\mu}$,
$\alpha^{\prime}\partial_{\mu}\Phi\partial_{\pm}^{2}X^{\mu},$ (5)
it can dominate (4) for a short (world sheet) time, even if
$\partial_{\mu}\Phi$ is small. This is what allows the string to fold. Near
the fold (5) dominates (4), and away from the fold (4) dominates. There is a
sense in which a time-like dilaton gradient violates the equivalence
principle: it triggers the creation of light strings that are simply absent when the gradient vanishes. (A space-like dilaton gradient also leads to solutions for any $Q\neq 0$ that are absent when $Q=0$ [7]; the difference is that those solutions have infinite energy, so they do not affect the low-energy dynamics.)
One might argue that the existence of instant strings (IFSs and/or IOSs) is
not that surprising. The background in which they are created is time-
dependent, and it is often the case that time dependence leads to the creation
of states from the vacuum. Hence it is natural to wonder if this is yet
another stringy version of the Schwinger mechanism [5] in which the role of
the electric field is played by $Q$. (The open string Schwinger mechanism was discussed in [8], and a closed string analog, in which the $H$ field plays the role of the electric field, was discussed in [9].) There are, in fact, some
crucial differences between instant string creation and the Schwinger
mechanism. We find it useful to discuss these differences as they emphasize
the unique features of instant strings and why they decay into (1).
The first difference is the creation scale. In the Schwinger mechanism, the
creation scale grows as the electric field is decreased. In particular, the
creation scale blows up as the electric field vanishes. Consequently
calculating basic quantities, such as the production rate, is challenging for
a varying electric field. For instant strings, the situation is the opposite.
As is clear from (3) the creation scale is $Q$. Hence as we decrease $Q$ the
instant string creation process becomes more and more local. Thus as long as
the curvature and the second derivatives of $\Phi$ are small they cannot
affect local properties of the IFS. For example, in the time-like linear
dilaton background (2) the IFS production rate is [6]
$\Gamma_{IFS}\sim\frac{Q^{2}}{g_{s}^{2}},$ (6)
where $g_{s}$ is the string coupling at $t_{0}$ (in the appendix we present further evidence for this equation). The locality argument above implies that
in a more general background, with a time-like $\partial_{\mu}\Phi$ that
points to the future, it is
$\Gamma_{IFS}\sim\frac{(\partial_{\mu}\Phi)^{2}}{g_{s}^{2}}.$ (7)
Moreover, the instant string solution is expected to be well approximated by
(3) at distances shorter than the curvature scale.
This locality argument also implies that the instant strings are not related
to the perturbative tachyons that usually appear in (2) since the length scale
associated with the creation of these tachyons, $1/Q$, is much larger than the
scale associated with the creation of instant strings, $Q$. Indeed a small
second derivative of $\Phi$ can render the tachyon massless (or massive), but,
as discussed above, it will have little effect on the instant strings.
The second difference is that in the Schwinger effect, the role of the
electric field is twofold. It triggers the pair creation, and it also feeds
the pair with energy after the creation. It accelerates the electron, say, to
the left while accelerating the positron to the right. In the instant string
case, $Q$ is only the trigger for its creation, but it does not feed it with
energy after the creation. The instant string feeds itself. Namely, the
instant string is created from the vacuum with zero energy (and momentum),
and, as apparent from the existence of the zero modes $t_{0}$ and $x_{0}$, the
total energy and momentum remain zero at later times. This was verified in a
direct calculation [10]. As expected, away from the fold the only non-
vanishing component of the energy-momentum tensor is $T_{uv}$ (with $u=t+x$ and $v=t-x$), which in the small $Q$ limit takes a particularly simple form (the exact energy-momentum tensor can be found in [10]):
$T_{uv}=\frac{1}{2\pi\alpha^{\prime}}\Theta(u)\Theta(v),$ (8)
associated with the tension of the folded string. Energy-momentum conservation
fixes $T_{uu}$ and $T_{vv}$ at the fold to be
$T_{uu}=-\frac{1}{2\pi\alpha^{\prime}}\Theta(v)v\delta(u),\quad
T_{vv}=-\frac{1}{2\pi\alpha^{\prime}}\Theta(u)u\delta(v),$ (9)
which implies that at the fold there is a negative null flux. Evidently, the
way the instant string feeds itself is by transferring energy from the folds
toward the bulk of the string allowing it to grow with time.
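To see how (9) arises, note that conservation in light-cone coordinates reads $\partial_{v}T_{uu}+\partial_{u}T_{uv}=0$, together with its $u\leftrightarrow v$ counterpart. Using (8), $\partial_{v}T_{uu}=-\frac{1}{2\pi\alpha^{\prime}}\delta(u)\Theta(v)$, and integrating in $v$ with the condition that $T_{uu}$ vanishes before the string is created gives $T_{uu}=-\frac{1}{2\pi\alpha^{\prime}}\Theta(v)v\delta(u)$; the expression for $T_{vv}$ follows in the same way.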
The instant string solution is quite unusual. On the one hand, it describes a
light state with $E=P=0$. On the other hand, even from afar, it does not look
at all like a particle. In particular, it becomes macroscopic at late times,
and so, at finite string coupling, it can split. The IFS splits into two
folded strings (see Figure 2), and the IOS splits into two open strings.
Figure 2: IFS decay, the offset of the breaking point from the center, $\Delta
x$, determines the distribution of the bulk energy between the two components,
leading to $E_{1}=-E_{2}=\frac{\Delta x}{2\pi\alpha^{\prime}}$. The total
momentum of each component is due to the folds $P_{2}=-P_{1}=\frac{\Delta
t}{2\pi\alpha^{\prime}}$. The widths of each component’s bulk are respectively
$L_{1}=\Delta t+\Delta x$ and $L_{2}=\Delta t-\Delta x$. The blue and green
arrows at the folds in each time slice represent the energy and momentum
associated with the inner and outer folds.
Since the total momentum and energy of an instant string vanish we have
$E_{2}=-E_{1},~{}~{}~{}~{}P_{2}=-P_{1}.$ (10)
If the splitting takes place right in the middle of the instant string then
$E_{1}=E_{2}=0$. If the splitting occurs at some point to the right (left) of
the middle point then $E_{2}=-E_{1}>(<)0$. The splitting of an instant string
is a local process that does not depend on the location of the splitting (as
long as it is away from the fold). Hence the wave function of the two strings
associated with an instant string that splits at time $\Delta t$ after its
creation is well approximated by
$|\Psi(\Delta t)\rangle\sim\int_{-\frac{\Delta
t}{2\pi\alpha^{\prime}}}^{\frac{\Delta
t}{2\pi\alpha^{\prime}}}dE_{1}\left|E_{1}=\Delta
x/2\pi\alpha^{\prime},P_{1}=-\Delta
t/2\pi\alpha^{\prime}\right\rangle\otimes\left|E_{2}=-E_{1},P_{2}=-P_{1}\right\rangle,$
(11)
which can be viewed as an energy-EPR state. Note that one of the strings
always has a negative energy and that this negative energy is typically quite
large. It is of the order of $\Delta t/\alpha^{\prime}$. Using [11] we can estimate that $\Delta t\sim l_{s}/g_{s}$ and so $E_{1}\sim M_{s}/g_{s}$.
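Explicitly, restoring units with $\alpha^{\prime}=l_{s}^{2}$ and $M_{s}=1/l_{s}$, $|E_{1}|\sim\frac{\Delta t}{2\pi\alpha^{\prime}}\sim\frac{l_{s}/g_{s}}{2\pi l_{s}^{2}}\sim\frac{M_{s}}{g_{s}}.$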
Although the state $\left|E_{1},P_{1}\right\rangle$ has fixed quantum numbers,
and, unlike the IFS, from afar it does look like a particle, it cannot be
described by a $(1,1)$ vertex operator (even if $E_{1}>0$). The reason is that
while the total energy and momentum associated with this state do not vary in time, there is quite a bit of dynamics involved in the time evolution of
$\left|E_{1},P_{1}\right\rangle$. The simplest way to see this is to consider
the energy-momentum tensor associated with $\left|E_{1},P_{1}\right\rangle$.
This state has two folds (or ends, in the case of the IOS). One inherited from
the instant string and another due to the splitting. Causality implies that
the fold that was inherited from the instant string is not aware of the
splitting. Hence the negative null flux at the fold is still, say,
$T_{uu}=-\frac{1}{2\pi\alpha^{\prime}}\Theta(v)v\delta(u),$ (12)
and, in particular, it decreases with time (it becomes more negative). The
bulk of the string still contributes $T_{uv}=\frac{1}{2\pi\alpha^{\prime}}$,
and so by energy-momentum conservation at the new fold there is a positive
null flux
$T_{uu}=\frac{1}{2\pi\alpha^{\prime}}(v-L_{2})\Theta(v-L_{2})\delta(u-L_{1}),$
(13)
with $L_{1}=\Delta t+\Delta x$ and $L_{2}=\Delta t-\Delta x,$ that grows with
time (see Figure 2). The mechanism by which $\left|E_{1},P_{1}\right\rangle$
evolves in time is by transferring energy from the fold inherited from the IFS
to the new fold through the bulk of the string.
For $L_{1}\gg l_{s}$, we expect $\left|E_{1},P_{1}\right\rangle$ to split
further. It is natural to expect the splitting to stop when $L_{1}\sim l_{s}$.
This suggests that the final state associated with the decay of an instant
string involves two states, inherited from the fold of the IFS, with negative
energy of the order of $-M_{s}/g_{s}$. They point in opposite directions and
are entangled with many soft modes with positive energy. The total energy of
this energy-EPR state vanishes.
We would like to end with some questions:
$\bullet$ What are the possible imprints of the energy-EPR states? In cosmological scenarios that involve a time-dependent dilaton, these states are created, but they do not contribute to the average time evolution. The main contribution in cosmology is due to the IFSs (before they decay), which induce negative pressure at no energy density cost [12]. The implications of this will be discussed elsewhere [13]. The energy-EPR states do appear to be relevant
for fluctuations. It should be interesting to study the differences between
the fluctuations associated with the energy-EPR states and standard
cosmological fluctuations and see if there is a sharp prediction that can be
made.
Another possibility is a direct detection of an on-shell excitation with
negative energy. Unfortunately, at least in the IFS decay case, the negative
energy excitation appears to couple only via gravity to the standard model
fields which makes detection unrealistic. Note that since the energy is
negative, the gravitational shock wave produced by such an excitation induces
a time advance, which could lead to causality violation. To violate causality we
need, however, to have several such excitations and control their production
location and momenta. This does not appear to be an easy task given the way
they are produced.
$\bullet$ Why do these energy-EPR states appear in string theory? A possible
answer is related to the fact that the dilaton determines the amount of
classical or coarse-grained entropy via $G_{N}^{-1}\sim e^{-2\Phi}.$ As a
result, when the dilaton varies in time, so does the classical entropy. This
appears to be the source of the radiation of quantum or fine-grained entropy
in the form of energy-EPR states. The IFS appears to play the role of a
convertor as it converts the coarse-grained entropy into a fine-grained
entropy.
If we define $\Psi=e^{-\Phi}$, the coarse-grained entropy scales like $\Psi^{2}$, and, since $(\partial\Psi)^{2}=e^{-2\Phi}(\partial_{\mu}\Phi)^{2}=(\partial_{\mu}\Phi)^{2}/g_{s}^{2}$, (7) implies that $(\partial\Psi)^{2}$ determines the fine-grained entropy production. It is, therefore, natural to dub $\Psi$ an entropon. The term “entropon” appears in the condensed matter literature in
the context of active solids [14], which are solids that involve self-
propelled excitations. Amusingly, there seems to be some analogy between
active matter and string theory with time-dependent dilaton. Standard closed
string modes are the analog of phonons. Both are excitations that are present
even when the solid is inactive. When the dilaton grows with time string
theory becomes active. It includes new self-propelled excitations: the instant
strings that grow by feeding themselves. Their decay products, the energy-EPR
states, dominate the entropy production.
$\bullet$ Are there implications for black holes? The BH background in which this is easiest to address is that of near-extremal NS5-branes [15], i.e. the 2D BH [16, 17,
18, 19]. The region behind the horizon of such a BH includes a time-like
$\partial_{\mu}\Phi$ that points to the future. In [6] it was shown that the
production rate (7) implies that the number of IFSs an infalling observer
encounters on the way to the singularity is of the order of the BH entropy.
Here we claim that an IFS decays into an energy-EPR state, which means that
the Bekenstein-Hawking entropy associated with near extremal NS5-branes is of
the order of the fine-grained entropy associated with the energy-EPR states
that are created inside the BH. Combining this with [20], assuming that the
energy-EPR state also forms a wormhole, we seem to conclude that near extremal
NS5-branes are filled with tiny wormholes. In the context of JT gravity, a
related claim was made in [21]. Since the energy-EPR states involve
excitations with negative energy, the nature of these wormholes, if they
exist, is likely to be nonstandard.
Acknowledgements: We thank D. Gross, A. Hashimoto, G. Horowitz, V. Hubeny, J.
Minahan, I. Klebanov, H. Ooguri, M. Rangamani, and A. Sen for discussions.
Work supported in part by the ISF (grant number 256/22). This research was
supported in part by grant NSF PHY-2309135 to the Kavli Institute for
Theoretical Physics (KITP).
## Appendix A Evidence for (6)
A production rate that scales like $1/g_{s}^{2}$ should leave its mark on the
sphere partition function. In particular, if the initial state at $t=t_{i}$ is
the vacuum, $|0(t_{i})\rangle$, then the amplitude to remain in the vacuum at
a later time $t_{f}$ is exponentially suppressed due to IFS production
$\langle
0(t_{i})|0(t_{f})\rangle=\exp\left(-V\int_{t_{i}}^{t_{f}}\Gamma_{IFS}(t)~{}dt\right)\sim\exp\left(-VQe^{-2Qt_{i}}\right),$
(14)
where $V$ is the volume and we used (6). We assume here that $t_{f}-t_{i}\gg 1/Q$, which implies that the integral is dominated by $t_{i}$. We also take
$Q\ll 1$ which means that the dilute IFS-gas approximation is valid and that
interactions among the IFSs can be neglected. In this case (14) should be
compared with the time-like linear dilaton sphere partition function.
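Explicitly, with $\Gamma_{IFS}(t)\sim Q^{2}e^{-2Qt}$ from (6), $V\int_{t_{i}}^{t_{f}}\Gamma_{IFS}(t)\,dt\sim\frac{VQ}{2}\left(e^{-2Qt_{i}}-e^{-2Qt_{f}}\right)\approx\frac{VQ}{2}\,e^{-2Qt_{i}},$ which reproduces the exponent in (14) up to an order-one factor.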
In string theory, however, $t_{i}$ and $t_{f}$ cannot be finite. We must take
$t_{i}=-\infty$ and $t_{f}=\infty$, which gives $0$. A way to put an effective
cutoff at $t_{i}$ in string theory is to add a Liouville potential. In the space-like linear dilaton case, both options for the Liouville wall ($\exp(b\phi)$ and
$\exp(\phi/b)$) cut off the strong coupling region. In the time-like case, we
can either cut the strong coupling region (in the future, in our case) or the
weak coupling region. For this particular calculation, we have to cut off the
weak coupling region since there $\Gamma_{IFS}$ blows up, and assume that the
singularity in the future does not matter for this calculation since there
$\Gamma_{IFS}$ vanishes.
As usual (see e.g. [22]) the full partition function $Z_{vac}$ is related to
the single string partition function, $Z_{1}$, via
$Z_{vac}=\exp(Z_{1}).$ (15)
Thus, for $Q\ll 1$, the time-like Liouville $Z_{1}$ should be compared with
$Z_{1}=-Qe^{-2Qt_{i}}.$ (16)
It appears that an agreement with the IFS considerations requires that $Z_{1}$ is
real and negative. This is not standard for the partition function in a time-
like direction, which usually is imaginary.
Time-like Liouville theory, however, is not a standard theory. In particular,
its relation to space-like Liouville, via an analytic continuation of $b$, is
rather subtle [23, 24]. Luckily, using the Coulomb gas approach, $Z_{1}$ was
calculated in time-like Liouville by Giribet [25], who found
$Z_{1}=\frac{(1+b^{2})(\pi\Lambda\gamma(-b^{2}))^{Q/b}}{\pi^{3}Q\gamma(-b^{2})\gamma(-b^{-2})}.$
(17)
KPZ scaling [26] relates the $(\pi\Lambda\gamma(-b^{2}))^{Q/b}$ with the
$e^{-2Qt_{i}}$ in (16). The comparison we are left with is between the $-Q$ in
(16) and the factor of $(1+b^{2})/Q\gamma(-b^{2})\gamma(-b^{-2})$ in (17),
which, for $Q\ll 1$, indeed agree, up to a numerical factor that we cannot
determine at the moment. To check the numerical factor one needs to calculate
$\Gamma_{IFS}$ in the presence of the Liouville wall.
## References
* [1] J. Polchinski, “What is string theory?,” [arXiv:hep-th/9411028 [hep-th]].
* [2] D. Bohm, “Quantum Theory,” Prentice Hall, Englewood Cliffs, 1951.
* [3] J. S. Bell, “On the Einstein-Podolsky-Rosen paradox,” Physics Physique Fizika 1, 195-200 (1964) doi:10.1103/PhysicsPhysiqueFizika.1.195
* [4] N. Itzhaki, “Stringy instability inside the black hole,” JHEP 10, 145 (2018) doi:10.1007/JHEP10(2018)145 [arXiv:1808.02259 [hep-th]].
* [5] J. S. Schwinger, “On gauge invariance and vacuum polarization,” Phys. Rev. 82, 664-679 (1951) doi:10.1103/PhysRev.82.664
* [6] A. Hashimoto, N. Itzhaki and U. Peleg, “A Worldsheet Description of Instant Folded Strings,” JHEP 02, 088 (2023) doi:10.1007/JHEP02(2023)088 [arXiv:2209.04988 [hep-th]].
* [7] J. M. Maldacena, “Long strings in two dimensional string theory and non-singlets in the matrix model,” JHEP 09, 078 (2005) doi:10.1088/1126-6708/2005/09/078 [arXiv:hep-th/0503112 [hep-th]].
* [8] C. Bachas and M. Porrati, “Pair creation of open strings in an electric field,” Phys. Lett. B 296, 77-84 (1992) doi:10.1016/0370-2693(92)90806-F [arXiv:hep-th/9209032 [hep-th]].
* [9] F. Dowker, J. P. Gauntlett, G. W. Gibbons and G. T. Horowitz, “Nucleation of p-branes and fundamental strings,” Phys. Rev. D 53, 7115-7128 (1996) doi:10.1103/PhysRevD.53.7115 [arXiv:hep-th/9512154 [hep-th]].
* [10] K. Attali and N. Itzhaki, “The Averaged Null Energy Condition and the Black Hole Interior in String Theory,” Nucl. Phys. B 943, 114631 (2019) doi:10.1016/j.nuclphysb.2019.114631 [arXiv:1811.12117 [hep-th]].
* [11] J. Dai and J. Polchinski, “The Decay of Macroscopic Fundamental Strings,” Phys. Lett. B 220, 387-390 (1989) doi:10.1016/0370-2693(89)90892-7
* [12] N. Itzhaki, “String Theory and The Arrow of Time,” JHEP 03, 192 (2021) doi:10.1007/JHEP03(2021)192 [arXiv:2101.10142 [hep-th]].
* [13] N. Itzhaki and U. Peleg, to appear.
* [14] L. Caprini, U. Marini Bettolo Marconi, A. Puglisi and H. Löwen, “Entropons as collective excitations in active solids,” J. Chem. Phys. 159, 041102 (2023).
* [15] J. M. Maldacena and A. Strominger, “Semiclassical decay of near extremal five-branes,” JHEP 12, 008 (1997) doi:10.1088/1126-6708/1997/12/008 [arXiv:hep-th/9710014 [hep-th]].
* [16] E. Witten, “On string theory and black holes,” Phys. Rev. D 44, 314-324 (1991) doi:10.1103/PhysRevD.44.314
* [17] G. Mandal, A. M. Sengupta and S. R. Wadia, “Classical solutions of two-dimensional string theory,” Mod. Phys. Lett. A 6, 1685-1692 (1991) doi:10.1142/S0217732391001822
* [18] S. Elitzur, A. Forge and E. Rabinovici, “Some global aspects of string compactifications,” Nucl. Phys. B 359, 581-610 (1991) doi:10.1016/0550-3213(91)90073-7
* [19] R. Dijkgraaf, H. L. Verlinde and E. P. Verlinde, “String propagation in a black hole geometry,” Nucl. Phys. B 371, 269-314 (1992) doi:10.1016/0550-3213(92)90237-6
* [20] J. Maldacena and L. Susskind, “Cool horizons for entangled black holes,” Fortsch. Phys. 61, 781-811 (2013) doi:10.1002/prop.201300020 [arXiv:1306.0533 [hep-th]].
* [21] D. Stanford and Z. Yang, “Firewalls from wormholes,” [arXiv:2208.01625 [hep-th]].
* [22] J. Polchinski, “String Theory. Vol. 1: An Introduction to the Bosonic String,” Cambridge University Press, 1998.
* [23] A. B. Zamolodchikov, “Three-point function in the minimal Liouville gravity,” Theor. Math. Phys. 142, 183-196 (2005) doi:10.1007/s11232-005-0003-3 [arXiv:hep-th/0505063 [hep-th]].
* [24] D. Harlow, J. Maltz and E. Witten, “Analytic Continuation of Liouville Theory,” JHEP 12, 071 (2011) doi:10.1007/JHEP12(2011)071 [arXiv:1108.4417 [hep-th]].
* [25] G. Giribet, “On the timelike Liouville three-point function,” Phys. Rev. D 85, 086009 (2012) doi:10.1103/PhysRevD.85.086009 [arXiv:1110.6118 [hep-th]].
* [26] V. G. Knizhnik, A. M. Polyakov and A. B. Zamolodchikov, “Fractal Structure of 2D Quantum Gravity,” Mod. Phys. Lett. A 3, 819 (1988) doi:10.1142/S0217732388000982
|
# Explainable Incipient Fault Detection Systems for Photovoltaic Panels
Seshapalli Sairam, Seshadhri Srinivasan, Giancarlo Marafioti, B. Subathra, Geir Mathisen, Korkut Bekiroglu

S. Sairam is with the Dept. of Instrumentation and Control Engineering, Kalasalingam Academy for Research and Education, Krishnankoil, Srivilliputtur, India (e-mail: <EMAIL_ADDRESS>). S. Srinivasan is with GE Corporate Research Center, Bangalore-560068 (e-mail: <EMAIL_ADDRESS>). G. Marafioti is with SINTEF, Cybernetics (e-mail: <EMAIL_ADDRESS>). B. Subathra is with the Dept. of Instrumentation and Control Engineering, Kalasalingam Academy for Research and Education, Krishnankoil, Srivilliputtur, India (e-mail: <EMAIL_ADDRESS>). G. Mathisen is with the Norwegian University of Science and Technology, Faculty of Engineering, Cybernetics (e-mail: <EMAIL_ADDRESS>). K. Bekiroglu is with the College of Engineering, SUNY Polytechnic Institute, Utica 13503, NY, USA (e-mail: <EMAIL_ADDRESS>).
###### Abstract
This paper presents an eXplainable Fault Detection and Diagnosis System (XFDDS) for incipient faults in PV panels. The XFDDS is a hybrid approach that combines the model-based and data-driven frameworks. Model-based FDD for PV panels lacks high fidelity models at low irradiance conditions for detecting incipient faults. To overcome this, a novel irradiance-based three diode model (IB3DM) is proposed. It is a nine-parameter model that provides higher accuracy even at low irradiance conditions, an important aspect for distinguishing incipient faults from noise. To exploit PV data, extreme gradient boosting (XGBoost) is used due to its ability to detect incipient faults. Lack of explainability, feature variability across sample instances, and false alarms are challenges with data-driven FDD methods. These shortcomings are overcome by hybridizing XGBoost with the IB3DM and by using eXplainable Artificial Intelligence (XAI) techniques. To combine XGBoost and the IB3DM, a fault-signature metric is proposed that helps reduce false alarms and also triggers explanations on detecting incipient faults. To provide explainability, an XAI application is developed. It uses the local interpretable model-agnostic explanations (LIME) framework and provides explanations of the classifier outputs for each data instance. These explanations help field engineers/technicians perform troubleshooting and maintenance operations. The proposed XFDDS is illustrated using experiments on different PV technologies, and our results demonstrate its benefits.
###### Index Terms:
Explainable Artificial Intelligence (XAI), incipient fault, eXplainable Fault Detection and Diagnosis System (XFDDS), eXtreme Gradient Boosting (XGBoost).
## I Introduction
### I-A Motivation
Exponential growth in photovoltaic (PV) deployments has raised interest in their reliable operation [1]. As PV panels are installed in harsh environments and subjected to varying weather conditions, they are prone to diverse faults (permanent, incipient, and intermittent) with different severity levels [2]. Such faults could diminish energy production, accelerate aging, and even cause fire hazards [3]. Therefore, detecting and locating faults early (at the incipient stage) is pivotal for the PV panel’s reliable operation [4]. Detecting incipient faults is challenging, as their signatures are less evident due to low magnitude, and the problem is accentuated at low irradiance conditions. Moreover, incipient faults are quite intermittent and show up only for short durations, which makes detecting them even more challenging. Nevertheless, incipient faults could develop into severe faults in the long run if left undetected/unattended, leading to costly replacements and maintenance operations [5]. Consequently, detecting incipient faults has gained significant traction recently [6]. While fault-detection and diagnosis (FDD) systems are proven to improve PV system reliability [7], a few challenges need to be addressed for detecting incipient faults: (i) high fidelity PV models providing good accuracy at low irradiance conditions are required, (ii) existing FDD methods cannot explain their decisions to field engineers/technicians, and (iii) it is difficult to distinguish between false alarms and incipient faults. Our objective in this paper is to propose an eXplainable Fault Detection and Diagnosis System (XFDDS) for incipient faults in PV panels that addresses these challenges.
### I-B Literature Review
The FDD methods in the literature can be broadly categorized as model-based (MB), signal-based (SB), and data-driven (DD) [8]. The MB methods use PV panel models (sets of nonlinear equations) followed by signal analysis (e.g., correlation analysis) on the input-output data from the model to detect faults [9]. The single-diode model (SDM) [10], double-diode model (DDM) [11], and three-diode models (TDM) [12, 13] are widely used in FDD systems. While existing models provide good accuracy at high irradiance, their accuracy degrades at low irradiance conditions. The SB methods use fault signatures from sensor data to detect faults [14]. Widely used SB methods are: statistical signal processing [15], I-V (current-voltage) characteristics analysis [16], power loss analysis [17], and voltage and current measurements [18]. More recently, SB-FDD methods using two-stage support vector machines [19], multi-signal decomposition, and fuzzy inference systems [20] have also been proposed. The DD methods, using labeled fault data and artificial intelligence (AI) techniques, have shown promise in improving detection accuracy due to their powerful model representation capabilities. The DD methods leverage historical labeled data and powerful AI models to perform multi-class regression or classification, which is quite important for detecting faults [21]. In the literature, FDD methods using AI models such as random forest [22], collaborative filtering [23], extreme gradient boosting [24], and similar techniques have been proposed (see [25] and references therein). Despite these advances, to the best of our knowledge, detecting incipient faults while addressing the fundamental challenges of accuracy at low irradiance conditions, lack of explainability of decisions to field engineers/technicians, and distinguishing false alarms from incipient faults remains rather unexplored in the literature.
### I-C Contributions
This paper proposes an XFDDS for PV panels addressing the challenges with FDD systems. It is a hybrid method that combines model-based and data-driven approaches. To overcome accuracy challenges under low irradiance conditions and detect incipient faults, an irradiance-based three diode model (IB3DM) is proposed. The model inherently uses irradiance and temperature in its parameter computations, thereby increasing its accuracy even at low irradiance conditions. For fault explainability and for distinguishing false alarms from incipient faults, the IB3DM is combined with data-based approaches that perform multi-class classification. This paper uses an extreme gradient boosting (XGBoost) based multi-class classifier due to its suitability for detecting incipient faults. As XGBoost cannot explain its decisions to the field technician/engineer, recently developed eXplainable AI (XAI) techniques are used. XAI extends the capabilities of AI techniques by providing explanations of decisions on individual data instances [26], a key aspect in incipient fault detection. We show that these explanations are very useful for field engineers/technicians to understand the fault causes and fault type. The local interpretable model-agnostic explanations (LIME) approach is used [27] to provide the explanations. The main idea is to perturb the features and compute, for individual samples, the feature importances and variable thresholds on which the fault classification is based. The main contributions are:
1. A novel three diode model, called the Irradiance Based Three Diode Model (IB3DM), which inherently captures the influences of solar irradiance and ambient temperature;
2. Design of an XFDDS leveraging the accuracy of the IB3DM, XGBoost, and LIME;
3. Illustration of the IB3DM and XFDDS using experiments and simulations on different PV technologies.
The remainder of the paper is organized as follows. The components of the XFDDS are explained in Section II. The IB3DM for the PV panel and its parameter computation are explained in Section III. The XFDDS methodology is presented in Section IV. Results are presented in Section V, and conclusions in Section VI.
## II Explainable Fault Detection and Diagnosis System
The main challenges with existing FDD techniques are:
* (C1)
Lack of high fidelity models capturing PV panel performance at low irradiance conditions;
* (C2)
Existing FDD methods offer no explanation to field engineers/technicians of why a particular sample was classified as faulty, nor of the variable thresholds on which this decision is based;
* (C3)
Data-based models compute feature importance for a particular fault on the global data, whereas incipient faults are intermittent and there are inconsistencies within data instances as well;
* (C4)
Data-based models cause false alarms due to misclassification.
Figure 1: Explainable fault detection system
The XFDDS proposed in our work addresses the challenges (C1)-(C4); its schematic is shown in Fig. 1. Its main components are: the climate service, the IB3DM, the fault-signature metric, the XGBoost classifier, XAI triggers, the sample store, and the XAI application. The IB3DM uses the climate service (a web application) to obtain solar irradiance and temperature for predicting the PV outputs (voltage, current, and power). Exploiting the IB3DM’s model accuracy, a fault-signature metric is defined (see Section IV-B) which serves as a trigger for obtaining explanations from the XAI application; such samples are stored in the sample store, a local cache. The XGBoost classifier is an ensemble of classification and regression trees (CART) created using boosting techniques [24]. XGBoost is selected as the data-based model in our application, as it naturally fits the incipient fault-detection framework, as detailed later. Two challenges with XGBoost are lack of explainability and false alarms [28]. Moreover, the feature importances for a particular fault are computed based on global data; in contrast, incipient faults are intermittent, with data varying among fault samples (the feature inconsistency problem). The XAI application generates explanations for individual data instances, which is very important for incipient faults. These explanations help the user identify the fault types and the variable thresholds based on which the fault was detected. In what follows, the IB3DM is first proposed and then the XFDDS approach is illustrated.
## III Novel Irradiance-based three diode model
Our model parameters depend on irradiance and module temperature, which addresses challenge (C1). Therefore, we call our model the irradiance-based three diode model (IB3DM). The IB3DM is an extension of the TDM proposed in [12, 13], wherein $I_{P}$, the light-generated current, depends on irradiance and module temperature. Further, in the IB3DM the ideality factors are not fixed; rather, they are obtained as a solution to an optimization problem by specifying bounds ([0, 2]). This is a deviation from existing works on three diode models, where higher ideality factors are used, leading to low fill factors that could be achieved only in industrial-grade panels; this makes existing TDMs unsuitable for residential PV panels.
### III-A Equivalent Circuit and Model Parameters
Our idea is to propose a three diode model that accurately captures the PV
cells’ performance even under low irradiance conditions.
###### Remark 1.
In the IB3DM, the source current is modelled as a dependent current source that is a function of irradiance and module temperature, denoted by $I_{P}(G,T)$. The photo-generated current has two parts: the first part, a premultiplier, is linearly dependent on the irradiance and acts as a scaling factor for the second part, which depends on panel temperature.
Suppose the nominal photo-generated current is denoted by $I_{p,n}$; then the dependence of the photo current on the irradiance is given by,
$I_{P}(G,T)=\Bigg{(}\frac{G}{G_{STC}}\Bigg{)}\Big{(}I_{p,n}+K_{I}\times(T-T_{r})\Big{)}$
(1)
where $K_{I}$ is a constant computed from data-sheets. The source current is
in parallel to three diodes with a series resistance $R_{s}$ and shunt
resistance $R_{sh}$, as shown in Fig. 2. With this modification, the current
source is a function of irradiance and module temperature. From the equivalent
circuit, the output current is given by
$I=I_{P}-I_{l1}-I_{l2}-I_{l3}-I_{sh}$ (2)
The current through the shunt resistance $R_{sh}$ in Fig. 2 is,
$I_{sh}=\frac{V+IR_{s}}{R_{sh}}$ (3)
The diode saturation currents can be calculated as:
$I_{01}=\frac{I_{S}+K_{I}\times(T-T_{r})}{exp\big{(}\frac{V_{OC}+K_{V}\times(T-T_{r})}{{V_{t}\times
n_{01}}}\big{)}-1},$ (4)
$I_{02}=\frac{I_{S}+K_{I}\times(T-T_{r})}{exp\big{(}\frac{V_{OC}+K_{V}\times(T-T_{r})}{{V_{t}\times
n_{02}}}\big{)}-1},$ (5)
$I_{03}=\frac{I_{S}+K_{I}\times(T-T_{r})}{exp\big{(}\frac{V_{OC}+K_{V}\times(T-T_{r})}{{V_{t}\times
n_{03}}}\big{)}-1},$ (6)
The saturation currents strongly depend on the temperature, as indicated by equations (4)-(6). Note that the coefficient $K_{V}$ is taken from the manufacturer’s data-sheet and used to compute the I-V curve for different temperatures, as seen in equations (4)-(6). The junction thermal voltage is
given by,
$V_{t}=\frac{N_{s}kT}{q}.$ (7)
Combining equations (1)-(7), one can obtain the equations relating the output
current, output voltage, and model parameters for the IB3DM:
$\begin{split}I&=I_{P}-I_{01}\Bigg{(}exp\Big{(}{\frac{V+IR_{s}}{n_{01}V_{t}}}\Big{)}-1\Bigg{)}\\\
&-I_{02}\Bigg{(}exp\Big{(}{\frac{V+IR_{s}}{n_{02}V_{t}}}\Big{)}-1\Bigg{)}\\\
&-I_{03}\Bigg{(}exp\Big{(}{\frac{V+IR_{s}}{n_{03}V_{t}}}\Big{)}-1\Bigg{)}-\frac{V+IR_{s}}{R_{sh}},\end{split}$
(8)
where $n_{01}$, $n_{02}$, and $n_{03}$ are diode ideality factors.
Consequently, the IB3DM has nine parameters given by
$\mathcal{P}_{IB3DM}=\\{I_{P},I_{l1},I_{l2},I_{l3},R_{S},R_{sh},n_{01},n_{02},n_{03}\\}$
which should be obtained from I-V curve data. The next step is to compute the model parameters, and an optimization-based approach is proposed, as detailed in the next section.
Figure 2: Equivalent circuit of IB3DM.
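To make the model concrete, the following minimal Python sketch evaluates (1)-(8): it solves the implicit equation (8) for the output current at a given voltage with a bracketed root finder. The numerical parameter values are illustrative placeholders, not fitted values from this paper.

```python
import numpy as np
from scipy.optimize import brentq

k, q = 1.380649e-23, 1.602176634e-19  # Boltzmann constant, electron charge

def ib3dm_current(V, p, G, T, G_stc=1000.0, T_r=298.15):
    """Output current I of the IB3DM at voltage V, irradiance G (W/m^2)
    and module temperature T (K), obtained by solving equation (8)."""
    Vt = p['Ns'] * k * T / q                             # thermal voltage (7)
    Ip = (G / G_stc) * (p['Ipn'] + p['KI'] * (T - T_r))  # photo current (1)

    def residual(I):  # equation (8) rearranged to residual form
        diodes = sum(p[f'I0{j}'] * (np.exp((V + I * p['Rs']) / (p[f'n0{j}'] * Vt)) - 1)
                     for j in (1, 2, 3))
        return Ip - diodes - (V + I * p['Rs']) / p['Rsh'] - I

    return brentq(residual, -2 * p['Ipn'], 2 * p['Ipn'])  # bracket the root

# Illustrative (not fitted) parameters for a 36-cell panel
params = dict(Ns=36, Ipn=8.2, KI=3.2e-3, Rs=0.35, Rsh=320.0,
              I01=1e-9, I02=1e-9, I03=1e-9, n01=1.0, n02=1.4, n03=2.0)
print(ib3dm_current(V=15.0, p=params, G=200.0, T=300.0))  # low-irradiance point
```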
### III-B The IB3DM Parameter Computation
To compute the IB3DM’s model parameters, first $I_{P}$ is calculated using $G$ and $T$ from equation (1). Next, we define the model residual between the experimental I-V data and the estimated model,
$\begin{split}f_{m}(V,I,\mathcal{P})&=I-I_{P}+I_{01}\Bigg{(}exp\Big{(}{\frac{V+IR_{s}}{n_{01}V_{t}}}\Big{)}-1\Bigg{)}\\\
&+I_{02}\Bigg{(}exp\Big{(}{\frac{V+IR_{s}}{n_{02}V_{t}}}\Big{)}-1\Bigg{)}\\\
&+I_{03}\Bigg{(}exp\Big{(}{\frac{V+IR_{s}}{n_{03}V_{t}}}\Big{)}-1\Bigg{)}+\frac{V+IR_{s}}{R_{sh}}\end{split}$
(9)
It measures the difference between experimental I-V curve data and the one
calculated using the model, i.e.,
${f_{m}(V,I,\mathcal{P})}=I_{measured}-I_{calculated}$ with $V$ varying over
the operating range. To compute the model parameters, we utilize the root mean square error (RMSE) as the metric,
$\mathcal{J}=\sqrt{\frac{1}{N_{e}}\sum_{i=1}^{N_{e}}f_{m}(V_{i},I_{i},\mathcal{P})^{2}},$
where $N_{e}$ is the number of experimental I-V samples.
The parameter computation problem is modelled as an optimization problem given by,
$\underset{\mathcal{P}_{IB3DM}}{\operatorname{min}}\ \mathcal{J}\quad\text{subject to the IB3DM model equations (1)-(8)},$
(10)
Clearly, (10) is nonlinear and non-convex, and it is computationally complex to solve using conventional optimization techniques. Meta-heuristic approaches are usually used for this type of optimization problem [29]. Our analysis utilizes five different meta-heuristic algorithms, which are explained in the results section.
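As an illustration, a minimal sketch of (10) using one such meta-heuristic, scipy's differential evolution, is given below. It reuses the `ib3dm_current` routine sketched in Section III-A; the parameter bounds are illustrative assumptions (the ideality-factor lower bound is raised slightly above zero for numerical stability).

```python
import numpy as np
from scipy.optimize import differential_evolution

def fit_ib3dm(V_meas, I_meas, G, T, Ns=36, KI=3.2e-3):
    """Fit the nine IB3DM parameters by minimizing the RMSE J of (10)."""
    # theta = [Ipn, I01, I02, I03, Rs, Rsh, n01, n02, n03]
    bounds = [(0.1, 10), (1e-12, 1e-6), (1e-12, 1e-6), (1e-12, 1e-6),
              (1e-3, 2), (10, 5000), (0.5, 2), (0.5, 2), (0.5, 2)]

    def rmse(theta):
        p = dict(Ns=Ns, KI=KI, Ipn=theta[0], I01=theta[1], I02=theta[2],
                 I03=theta[3], Rs=theta[4], Rsh=theta[5],
                 n01=theta[6], n02=theta[7], n03=theta[8])
        try:
            I_model = [ib3dm_current(v, p, G, T) for v in V_meas]
        except ValueError:      # root not bracketed for a poor candidate
            return 1e6          # penalize infeasible parameter sets
        return float(np.sqrt(np.mean((np.asarray(I_meas) - I_model) ** 2)))

    return differential_evolution(rmse, bounds, seed=0, maxiter=300)
```

The returned `OptimizeResult` exposes the fitted parameter vector as `result.x` and the achieved RMSE as `result.fun`.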
## IV Explainable Fault Detection and Diagnosis Methodology
### IV-A Extreme Gradient Boosting for Fault Classification
Gradient boosting decision trees form a powerful machine learning algorithm wherein an ensemble of multiple weak learners forms a strong learner [30]. The main idea of XGBoost is that the training set in the current iteration is related to the learning results from previous iterations, and the weights on data samples are adjusted in each iteration. These features naturally fit the incipient fault-detection scenario, as labeled fault data become available sequentially, and weight adaptation on each sample increases model accuracy. Furthermore, in XGBoost the current decision tree (leaf) is fitted based on the residuals from the previous trees, following the gradient boosting principle wherein new decision trees are constructed to correlate with the negative gradient of the loss function. A detailed description of XGBoost can be found in [31, 32].
_This investigation uses XGBoost for detecting two incipient faults: (i) the
line-to-line (LL) fault, which denotes a short-circuit within a string or
across multiple PV strings, and (ii) partial shading._ Two different
classification cases are considered: binary and multi-class classification. In
binary classification, whether a given sample corresponds to a faulty or
healthy operating condition is predicted without considering the fault type
(LL or partial shading), whereas in multi-class classification the fault types
are also included as additional classes. A minimal training sketch is given
below.
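The sketch below trains such a classifier with the xgboost package; the feature layout follows the text (voltage, current, irradiance, power), while the data file names and split ratio are illustrative assumptions:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# columns: [voltage, current, irradiance, power]; labels: 0 healthy, 1 LL, 2 shading
X = np.load("pv_features.npy")        # hypothetical feature file
y = np.load("pv_labels.npy")          # hypothetical label file
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, stratify=y)

clf = XGBClassifier(n_estimators=130)  # learner count reported for the multi-class case
clf.fit(X_tr, y_tr)
print("validation accuracy:", (clf.predict(X_val) == y_val).mean())
```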
Figure 3: Confusion Matrix for Binary Classification
In binary classification, voltage, current, irradiance, and power are the
inputs, and a binary variable indicating whether the sample is faulty or
healthy is the output. Dichotomous search optimization was used for tuning the
hyper-parameters, resulting in 100 learners. During training, the classifier
showed 93-95% accuracy, and validation accuracy was 86-93%. The confusion
matrix for the binary classifier is shown in Fig. 3. One can see that the
classifier performs extremely well for binary classification.
Figure 4: Confusion Matrix for Multi-Class Classification
In multi-class classification, additional labels on the fault type (LL or
partial shading) are added to the data-set. Then the XGBoost classifier is
trained to identify the fault label. The XGBoost hyper-parameters were
optimized using dichotomous search, resulting in 130 learners. During
training, the classifier showed 88-92% accuracy, and 78-82% during validation.
The confusion matrix for the multi-class classification during validation is
shown in Fig. 4. While its accuracy is reasonable, the challenges (C2)-(C4)
are not addressed by XGBoost.
### IV-B Explainable Fault Detection
To provide local explanations and address challenges (C2)-(C4), the data-
driven approach is first fused with the model-based approach by proposing a
fault-signature metric (FSM) given by
$\sigma(G,P)=\gamma(G)\,\exp\Big({\frac{-\|G-G_{s}\|_{\ell_{2}}}{\Sigma_{G}}}\Big)^{-1}\times\exp\Big({\frac{-\|P-\hat{P}\|_{\ell_{2}}}{\Sigma_{P}}}\Big)^{-1},$ (11)
where $\gamma(G)$ is a scaling factor depending on the irradiance, and
$\Sigma_{G}$ and $\Sigma_{P}$ are the variances of the actual solar irradiance
and of the power generated by the solar panel, respectively. Also,
$\|\cdot\|_{\ell_{2}}$ denotes the $\ell_{2}$ norm, and $\hat{P}$ is the power
estimated by the IB3DM, so the signature captures deviations in power. The
fault-signature metric serves two purposes: (i) it weights low variability at
low irradiance conditions higher than at high irradiance conditions, which
helps overcome noise and prevent false alarms, and (ii) it triggers
explanations on detecting an incipient fault by using thresholds on the FSM,
as sketched below.
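A direct transcription of (11) into code might look as follows; this is a sketch in which the scaling function $\gamma$, the reference irradiance $G_{s}$, and the trigger threshold are user-supplied quantities not fixed by the paper:

```python
import numpy as np

def fault_signature(G, P_meas, P_hat, G_s, Sigma_G, Sigma_P, gamma):
    """Fault-signature metric sigma(G, P) of Eq. (11) for scalar samples,
    where abs(.) plays the role of the l2 norm."""
    w_G = np.exp(-abs(G - G_s) / Sigma_G) ** -1          # irradiance weighting
    w_P = np.exp(-abs(P_meas - P_hat) / Sigma_P) ** -1   # power-deviation weighting
    return gamma(G) * w_G * w_P

def explanation_triggered(fsm_value, threshold):
    """Request an XAI explanation once the FSM exceeds the threshold."""
    return fsm_value > threshold
```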
The XFDDS uses the FSM in two ways. First, it eliminates false alarms by
comparing the FSM with the XGBoost results. Second, event triggers for fault
explanations are generated using the FSM: once it exceeds a known threshold,
explanations are requested from the XAI application and such samples are
stored in the sample store.
On receiving a trigger, the XAI application is activated; it uses the local
interpretable model-agnostic explanations (LIME) [27] framework. LIME relies
on surrogate modelling: the model is treated as a black box, and the features
are perturbed to find the feature importance for a particular sample. The data
instance is perturbed, samples are generated from the data-set distribution,
and these are weighted by their distances from the current point. Then feature
selection is applied to keep the relevant variables, and a linear model is
trained on the weighted data-set automatically within the algorithm. Once
trained, the surrogate explains to the user which variables, and which
thresholds on them, made the model decide that a particular sample instance is
a fault/normal operation. This is quite useful in detecting incipient faults,
as they are intermittent and occur only for a few fault samples.
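Continuing the earlier training sketch (it assumes the trained classifier `clf` and the arrays `X_tr`, `X_val` from above), a minimal LIME usage with the lime package is:

```python
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["voltage", "current", "irradiance", "power"]
explainer = LimeTabularExplainer(
    X_tr, feature_names=feature_names,
    class_names=["healthy", "LL", "shading"], mode="classification")

# explain one FSM-triggered sample with the trained XGBoost model
exp = explainer.explain_instance(X_val[0], clf.predict_proba, num_features=4)
for rule, weight in exp.as_list():   # e.g. ("current <= 7.44", 0.31)
    print(rule, weight)
```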
_While the core of the XFDDS is still the XGBoost, the XAI extracts
explanations of why a particular sample was classified as faulty/healthy._
Further, the thresholds on the variables that drove these decisions are also
provided, which is quite useful in identifying even the fault type. For
example, an LL fault is characterized by high voltage but lower current and
power, and power drops due to circulating currents are very hard to predict.
Moreover, intermittent LL faults are difficult to catch for an FDDS. With the
XAI, however, incipient LL faults can be reasoned out.
Figure 5: Experimental setup using a PV module.
## V Results
The proposed XFDDS can be implemented on simple hardware, as illustrated in
this section. In our experiments, fault detection is implemented by
interfacing embedded hardware with a computer. However, the computer could be
replaced with any system-on-chip (SoC) with limited computing power. The
computer interfaces with the web application directly and obtains the weather
data; the weather station recordings are exported as comma-separated values.
The ATmega 328P processor is used as the data-acquisition unit, which
interfaces with the computer through the software application and receives the
measurements from the sensors. Measurements were obtained from the current
sensor (INA169), temperature sensor (DS18B20), and voltage sensor (F031-06).
The data was transmitted to the computer over UART (Universal Asynchronous
Receiver/Transmitter), a serial communication mode, into PLX-DAQ, and the
measurements are stored in a database for further processing.
The Maxim IC DS18B20 is a 1-wire digital temperature sensor that reports
temperature in Celsius with 9-12 bit precision and has a working range of -55
to 125$\mathrm{\SIUnitSymbolCelsius}$. A rheostat is used as the load, and the
experiments were conducted at the International Research Center, Kalasalingam
University, India. The PV panel and sensors were interfaced with the computer,
and Python was used to implement the fault-detection scheme (see Fig. 5); a
minimal acquisition sketch follows.
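The sketch below uses pyserial to read such sensor frames and log them; the port name, baud rate, and frame layout are assumptions for illustration, not the exact setup used here:

```python
import csv
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 9600, timeout=2) as ser, \
        open("pv_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    while True:
        raw = ser.readline().decode("ascii", errors="ignore").strip()
        if not raw:
            continue
        try:                                   # assumed frame: voltage,current,temperature
            v, i, t = map(float, raw.split(","))
        except ValueError:
            continue                           # skip malformed frames
        writer.writerow([v, i, t, v * i])      # log power as well
```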
### V-A The IB3DM Model Parameter Estimation
As stated earlier, the IB3DM parameter computation requires solving a
nonlinear and non-convex optimization problem in (10). To overcome
computational difficulties, five different meta-heuristic algorithms are used:
(i) firefly, (ii) particle swarm optimization (PSO), (iii) teaching-learning
based optimization (TLBO), (iv) biogeography based optimization (BBO), and (v)
shuffled frog leaping algorithm (SFLA) to estimate the model parameters of the
IB3DM.
The parameters are computed for two different PV technologies: monocrystalline
(STM5-20/36) at 1000 $\mathrm{W}\text{\,}{\mathrm{m}}^{-2}$ and
33$\mathrm{\SIUnitSymbolDegree}$, and polycrystalline (Solartech SPM-020P-R)
at 1000 $\mathrm{W}\text{\,}{\mathrm{m}}^{-2}$ and
45$\mathrm{\SIUnitSymbolDegree}$, with each panel having 36 cells in series.
The model parameters computed for the monocrystalline panel with the SDM, DDM,
CD3DM, and IB3DM using the five meta-heuristic algorithms are shown in Tab. I,
together with the RMSE values ($\mathcal{J}$). One can observe that the IB3DM
offers better accuracy than the existing models, as evinced by its low RMSE
values. In addition, the firefly algorithm provides the best estimates of the
I-V curves.
Figure 6: (a) V-I Curve of the SDM, DDM, C3DM versus IB3DM, (b) P-V Curve of
SDM, DDM, C3DM, and IB3DM (Monocrystalline).
Figure 7: (a) V-I Curve of the SDM, DDM, C3DM versus IB3DM, (b) P-V Curve of SDM, DDM, C3DM, and IB3DM (Polycrystalline PV panel).
Table I: Comparison of meta-heuristic algorithms for computing the model parameters of the monocrystalline panel
Model | Algorithm | $I_{P}(A)$ | $I_{01}(A)$ | $I_{02}(A)$ | $I_{03}(A)$ | $n_{1}$ | $n_{2}$ | $n_{3}$ | $R_{s}(\Omega)$ | $R_{sh}(\Omega)$ | $K$ | $RMSE$
---|---|---|---|---|---|---|---|---|---|---|---|---
| Firefly | 1.6644 | 1.55E-06 | - | - | 1.94 | - | - | 0.176 | 752.29 | - | 0.012245
| PSO | 1.6626 | 2.87E-06 | - | - | 1.87 | - | - | 0.3917 | 599.78 | - | 0.060392
SDM | TLBO | 1.6634 | 2.86E-06 | - | - | 1.67 | - | - | 0.4255 | 598.55 | - | 0.042516
| BBO | 1.6605 | 9.08E-07 | - | - | 1.99 | - | - | 0.6372 | 642.66 | - | 0.065356
| SFLA | 1.663 | 6.02E-06 | - | - | 1.96 | - | - | 0.2385 | 690.27 | - | 0.083196
| Firefly | 1.6645 | 1.71E-06 | 3.01E-12 | - | 1.85 | 1.72 | - | 0.2396 | 739.49 | - | 0.010945
| PSO | 1.7029 | 3.08E-05 | 6.24E-05 | - | 1.64 | 1.55 | - | 0.1263 | 606.28 | - | 0.052871
DDM | TLBO | 1.6638 | 1.14E-10 | 5.21E-06 | - | 1.52 | 1.91 | - | 0.5215 | 695.23 | - | 0.042925
| BBO | 1.7029 | 3.08E-05 | 6.24E-05 | - | 1.94 | 1.55 | - | 0.3263 | 606.28 | - | 0.085258
| SFLA | 1.6613 | 5.66E-06 | 2.24E-08 | - | 1.76 | 1.89 | - | 0.2199 | 673.52 | - | 0.092876
| Firefly | 1.6645 | 2.71E-06 | 3.01E-12 | 1.08E-05 | 1.72 | 1.57 | 1.47 | 0.296 | 739.49 | 0.0092 | 0.004852
| PSO | 1.7029 | 1.08E-05 | 2.24E-05 | 1.72E-05 | 1.52 | 1.22 | 1.42 | 0.2263 | 656.28 | 0.0272 | 0.018471
CD3DM | TLBO | 1.6638 | 2.24E-10 | 3.31E-06 | 4.24E-08 | 1.49 | 1.19 | 1.24 | 0.3215 | 595.23 | 0.0231 | 0.009229
| BBO | 1.7029 | 1.98E-05 | 2.41E-05 | 3.23E-08 | 1.76 | 1.28 | 1.62 | 0.2363 | 596.28 | 0.0185 | 0.024528
| SFLA | 1.6613 | 2.66E-06 | 5.24E-08 | 4.76E-06 | 1.62 | 1.24 | 1.32 | 0.2699 | 473.52 | 0.0289 | 0.050476
| Firefly | 1.6633 | 2.93E-06 | 5.10E-15 | 1.54E-07 | 1.35 | 1.46 | 1.24 | 0.0917 | 804.43 | - | 0.005463
| PSO | 1.7133 | 6.88E-04 | 1.80E-10 | 1.63E-09 | 1.02 | 1.09 | 1.14 | 0.3618 | 477.24 | - | 0.007824
IB3DM | TLBO | 1.6622 | 1.89E-08 | 8.67E-08 | 1.19E-05 | 1.08 | 1.06 | 1.15 | 0.3917 | 761.51 | - | 0.005936
| BBO | 1.6683 | 8.52E-06 | 4.13E-06 | 2.85E-04 | 1.03 | 1.12 | 1.39 | 0.2511 | 570.46 | - | 0.008193
| SFLA | 1.6683 | 8.52E-06 | 4.13E-06 | 2.85E-04 | 1.13 | 1.18 | 1.29 | 0.3511 | 570.46 | - | 0.008262
Table II: Comparison of meta-heuristic algorithms for computing the model parameters of the polycrystalline panel
Model | Algorithm | $I_{P}(A)$ | $I_{01}(A)$ | $I_{02}(A)$ | $I_{03}(A)$ | $n_{1}$ | $n_{2}$ | $n_{3}$ | $R_{s}(\Omega)$ | $R_{sh}(\Omega)$ | $K$ | $RMSE$
---|---|---|---|---|---|---|---|---|---|---|---|---
| Firefly | 1.047 | 4.43E-05 | - | - | 1.85 | - | - | 1.7763 | 612.53 | - | 0.030324
| PSO | 1.0458 | 5.86E-05 | - | - | 1.96 | - | - | 1.7286 | 680.97 | - | 0.549642
SDM | TLBO | 1.0345 | 1.91E-05 | - | - | 1.86 | - | - | 1.9521 | 684.96 | - | 0.495359
| BBO | 1.0343 | 1.43E-04 | - | - | 1.97 | - | - | 1.4589 | 653.48 | - | 0.576291
| SFLA | 1.0466 | 1.31E-04 | - | - | 1.98 | - | - | 1.5583 | 699.79 | - | 0.762183
| Firefly | 1.0435 | 5.15E-05 | 6.03E-13 | - | 1.83 | 1.65 | - | 1.7621 | 611.84 | - | 0.027368
| PSO | 1.0676 | 7.88E-05 | 6.03E-08 | - | 1.91 | 1.71 | - | 1.7272 | 591.83 | - | 0.389554
DDM | TLBO | 1.0323 | 2.14E-05 | 5.37E-06 | - | 1.79 | 1.69 | - | 1.1532 | 692.98 | - | 0.069885
| BBO | 1.0881 | 1.55E-05 | 6.62E-04 | - | 1.93 | 1.79 | - | 1.0243 | 641.61 | - | 0.401265
| SFLA | 1.0971 | 1.67E-04 | 3.53E-05 | - | 1.95 | 1.82 | - | 1.0004 | 358.15 | - | 0.495239
| Firefly | 1.0745 | 1.62E-06 | 1.92E-12 | 3.28E-05 | 1.62 | 1.48 | 1.28 | 1.0235 | 798.25 | 0.0052 | 0.011619
| PSO | 1.0929 | 6.54E-05 | 1.54E-05 | 1.92E-05 | 1.85 | 1.62 | 1.59 | 1.0563 | 696.28 | 0.0237 | 0.387132
CD3DM | TLBO | 1.6538 | 1.52E-10 | 2.26E-06 | 5.42E-08 | 1.68 | 1.34 | 1.45 | 1.5235 | 495.23 | 0.0259 | 0.065293
| BBO | 1.0029 | 1.98E-05 | 2.41E-05 | 3.23E-08 | 1.89 | 1.67 | 1.62 | 1.0063 | 606.28 | 0.0262 | 0.537698
| SFLA | 1.0513 | 3.66E-08 | 3.24E-06 | 3.76E-09 | 1.91 | 1.73 | 1.71 | 1.0039 | 623.52 | 0.0325 | 0.553676
| Firefly | 1.047 | 2.27E-10 | 2.39E-04 | 2.03E-12 | 1.42 | 1.37 | 1.06 | 1.0954 | 729.53 | - | 0.004856
| PSO | 1.049 | 8.33E-06 | 4.64E-04 | 3.70E-12 | 1.66 | 1.54 | 1.23 | 1.9449 | 693.99 | - | 0.006806
IB3DM | TLBO | 1.042 | 3.32E-06 | 1.06E-04 | 1.17E-05 | 1.53 | 1.45 | 1.03 | 1.2107 | 671.01 | - | 0.005077
| BBO | 1.039 | 1.43E-05 | 4.11E-05 | 4.67E-05 | 1.72 | 1.62 | 1.35 | 1.5124 | 697.74 | - | 0.007493
| SFLA | 1.045 | 2.63E-05 | 2.92E-07 | 2.26E-05 | 1.79 | 1.67 | 1.49 | 1.9119 | 691.77 | - | 0.007994
A comparison of the I-V and P-V curves, with model parameters computed by the
firefly algorithm, for the different PV models (SDM, DDM, CD3DM, and IB3DM) is
shown in Fig. 6. The accuracy provided by the IB3DM at the MPP is shown in the
zoomed portion; accuracy is similarly high at low and high irradiance
conditions. The IB3DM model parameters and their comparison with the other PV
models for the polycrystalline panel (SPM-020P-R) are shown in Tab. II. The
low RMSE values are indicative of the accuracy provided by the IB3DM, and the
firefly algorithm again provides the best model parameters among the
meta-heuristic techniques considered. The I-V and P-V curves for the different
diode models are shown in Fig. 7. These results demonstrate the IB3DM's
ability to provide model accuracy across different irradiance levels and PV
technologies.
### V-B Explainable Incipient Fault Detection
Having computed the model parameters, the next step is to fuse the IB3DM with
XAI to implement the XFDDS. First, we show the ability of the IB3DM to detect
faults; extensions providing explanations are then presented. Two studies are
used to illustrate the XFDDS capabilities:
1. (i)
A single cell in a module is partially shaded in a PV panel consisting of 36
cells in series;
2. (ii)
Most cells in an array are partially shaded (8 among 36) in different
proportions (10-90%).
These two cases cover most scenarios envisaged during PV panel operation, and
the incipient faults were created artificially for the study.
### V-C Case Study 1
This study considers a fault in the polycrystalline panel (Solartech
SPM-020P-R) using the IB3DM. To illustrate the IB3DM's ability to detect the
fault, the I-V and P-V curves are computed at different irradiance levels
(1000 $\mathrm{W}\text{\,}{\mathrm{m}}^{-2}$, 800
$\mathrm{W}\text{\,}{\mathrm{m}}^{-2}$, and 450
$\mathrm{W}\text{\,}{\mathrm{m}}^{-2}$), and are shown in Fig. 9 (a) and (b),
respectively. One can see that the IB3DM predictions exactly coincide with the
experimental I-V and P-V curves, whereas the model errors are high for the
other models. Furthermore, the slope of the P-V curves at lower voltages is
indicative of the incipient fault, as illustrated by the fault signatures at
low irradiance conditions (see Fig. 8). Using this fault-signature metric, the
XFDDS can avoid false alarms (false positives). This result illustrates the
ability of the IB3DM to detect partial shading in a single cell and avoid
false alarms raised by the XGBoost classifier. Nevertheless, detecting
incipient faults with the IB3DM alone remains challenging.
Figure 8: Fault signature for different irradiance.
Figure 9: (a) V-I Curve for different irradiance, and (b) P-V Curve for
different irradiance (polycrystalline).
### V-D Case Study 2
The IB3DM's ability to detect faults in a _monocrystalline array_ with partial
shading of different magnitudes across the PV panel is illustrated next.
Variations in the power curves with the SDM, DDM, CD3DM, and IB3DM for
irradiance levels of 0-1000 $\mathrm{W}\text{\,}{\mathrm{m}}^{-2}$ are
considered. Due to partial shading, the voltage reduction is not severe,
whereas the current reduction is quite high, which is reflected in the power
curve as well. This is illustrated in Fig. 10 and is indicative of a fault; it
is observed in the fault-signature metric as well. These findings, when
combined with the XGBoost-based classifier, could reduce misclassification and
false alarms.
Figure 10: Power curve for Partial shading condition for PV panel
There are two shortcomings with IB3DM-based detection alone. First, it
requires sufficiently many significant sample averages to detect incipient
faults, which is seldom possible due to their intermittent nature. Second, it
can neither identify fault types nor provide explanations.
### V-E XFDDS for Incipient Faults
The XFDDS is implemented on two different XGBoost configurations: binary and
multi-class classification. The XGBoost leverages the data, and false alarms
are averted using the IB3DM fault-signature metric. In this study, a
monocrystalline panel string consisting of 56 panels at the International
Research Center, Kalasalingam University, is used as the pilot to demonstrate
the XFDDS. In binary classification, the XGBoost classifies whether a given
test sample is faulty or healthy. This is compared with the fault-signature
metric outcomes, and explanations are triggered using a threshold on the FSM
together with the XGBoost classifier outcomes.
In our study, the partial shading fault was created artificially by covering
panels with paper. A sampling time of 1 $\mathrm{s}$ is used, and the fault is
created at sample 2400 for a duration of 32 minutes. These samples are passed
to the IB3DM, which detects the presence of the incipient fault through its
fault signature. This triggers the XAI application to generate explanations.
The explanations for the partial shading fault are shown in Fig. 11 and are
very intuitive for detecting incipient partial shading: the irradiance (G
$\geq$ 896) and the voltage (V $\geq$ 141.29 $\mathrm{V}$) are high, but the
current ($\leq$ 9.77 $\mathrm{A}$) and the power ($\leq$ 1354 $\mathrm{W}$)
are low. This is indicative of a partial shading fault for the field engineer.
Figure 11: Partial shading fault
Similarly, the LL fault was created by shorting the lines within a single
string for a short duration. The IB3DM generates the fault signature from the
power curve, which triggers the explanations shown in Fig. 12: current values
below 7.44 $\mathrm{A}$ and power values below 1112.52 $\mathrm{W}$, while the
voltage is above 141.29 $\mathrm{V}$, help the XFDDS decide that there is an
LL fault.
Figure 12: Line-to-Line Fault
Explanations for a sample that XGBoost falsely detected as positive are shown
in Fig. 13. Here the decision was based on the current value, which was less
than 7.84 $\mathrm{A}$. However, the power level being greater than 1112.89
$\mathrm{W}$ shows that this is not a fault. In this way, false positives can
be avoided through explanations, and the behaviour is flagged to the field
engineer as an intermittent load condition.
Figure 13: Line-to-Line Fault Explanation
Figure 14: Explanations for LL-fault (Multi-Label Classification)
Next, the XFDDS was applied to the multi-class classification problem. In this
case, the fault labels were given as input as well, and the IB3DM was used to
trigger explanations using thresholds on the fault-signature metric. The
explanations for the incipient LL fault are shown in Fig. 14. Here the XGBoost
identifies the LL fault, and the causes are illustrated by the current (less
than 7.84 $\mathrm{A}$) and power (less than 1122.38 $\mathrm{W}$).
Figure 15: Explanations for Partial Shading Faults
The explanations for partial shading faults are shown in Fig. 15. They are
very intuitive: the current and power are less than a threshold, whereas the
irradiance is greater than 896 $\mathrm{W}\text{\,}{\mathrm{m}}^{-2}$, so the
fault can be easily understood by field technicians. Furthermore, the
threshold values on current, power, irradiance, and voltage are reported to
the field engineers. This result also shows that the explanations and the
thresholds are oblivious to the classification setting, i.e., binary or
multi-class.
## VI Conclusions
This paper presented an explainable fault-detection and diagnosis system
(XFDDS) for detecting incipient faults in PV panels. Its main components are
the irradiance-based three-diode model (IB3DM) and an eXplainable artificial
intelligence (XAI) application. The IB3DM uses irradiance and temperature to
compute its model parameters, thereby increasing its accuracy even at low
irradiation conditions. The model parameters were computed by solving a
non-convex constrained optimization problem with five different meta-heuristic
approaches, and our results demonstrated the model fidelity of the IB3DM even
at low irradiance. To exploit the aggregated data from PV panels, an extreme
gradient boosting (XGBoost) based classifier was used; it had two
shortcomings: false alarms and lack of explainability. False alarms were
reduced by combining the IB3DM with the XGBoost classifier through the
proposed fault-signature metric (FSM). Explanations were provided by extending
the XGBoost with local interpretable model-agnostic explanations (LIME) in the
XAI application; they are quite useful in identifying faults and planning
maintenance operations. The implementation, deployment, and demonstration of
the proposed XFDDS on multiple PV technologies illustrated its capabilities.
Extending the XFDDS to include multiple faults and translating the approach to
edge devices are the future course of this investigation.
## References
* [1] T. Rajesh, K. Tamilselvan, A. Vijayalakshmi, C. N. Kumar, and K. A. Reddy, “Design and implementation of an automatic solar tracking system for a monocrystalline silicon material panel using mppt algorithm,” _Materials Today: Proceedings_ , 2020.
* [2] B. K. Karmakar and A. K. Pradhan, “Detection and classification of faults in solar pv array using thevenin equivalent resistance,” _IEEE Journal of Photovoltaics_ , vol. 10, no. 2, pp. 644–654, 2020.
* [3] Y. Zhao, R. Ball, J. Mosesian, J.-F. de Palma, and B. Lehman, “Graph-based semi-supervised learning for fault detection and classification in solar photovoltaic arrays,” _IEEE Transactions on Power Electronics_ , vol. 30, no. 5, pp. 2848–2858, 2014.
* [4] D. S. Pillai, F. Blaabjerg, and N. Rajasekar, “A comparative evaluation of advanced fault detection approaches for pv systems,” _IEEE Journal of Photovoltaics_ , vol. 9, no. 2, pp. 513–527, 2019.
* [5] B. Jin, D. Li, S. Srinivasan, S.-K. Ng, K. Poolla, and A. Sangiovanni-Vincentelli, “Detecting and diagnosing incipient building faults using uncertainty information from deep neural networks,” in _2019 IEEE International Conference on Prognostics and Health Management (ICPHM)_. IEEE, 2019, pp. 1–8.
* [6] E. Garoudja, F. Harrou, Y. Sun, K. Kara, A. Chouder, and S. Silvestre, “Statistical fault detection in photovoltaic systems,” _Solar Energy_ , vol. 150, pp. 485–499, 2017.
* [7] A. Mellit, G. M. Tina, and S. A. Kalogirou, “Fault detection and diagnosis methods for photovoltaic systems: A review,” _Renewable and Sustainable Energy Reviews_ , vol. 91, pp. 1–17, 2018.
* [8] Z. Gao, C. Cecati, and S. X. Ding, “A survey of fault diagnosis and fault-tolerant techniques—part i: Fault diagnosis with model-based and signal-based approaches,” _IEEE Transactions on Industrial Electronics_ , vol. 62, no. 6, pp. 3757–3767, 2015.
* [9] Y. Chaibi, M. Malvoni, A. Chouder, M. Boussetta, and M. Salhi, “Simple and efficient approach to detect and diagnose electrical faults and partial shading in photovoltaic systems,” _Energy Conversion and Management_ , vol. 196, pp. 330–343, 2019.
* [10] S. Shongwe and M. Hanif, “Comparative analysis of different single-diode pv modeling methods,” _IEEE Journal of photovoltaics_ , vol. 5, no. 3, pp. 938–946, 2015.
* [11] F. Bradaschia, M. C. Cavalcanti, A. J. do Nascimento, E. A. da Silva, and G. M. de Souza Azevedo, “Parameter identification for pv modules based on an environment-dependent double-diode model,” _IEEE Journal of Photovoltaics_ , vol. 9, no. 5, pp. 1388–1397, 2019.
* [12] V. Khanna, B. Das, D. Bisht, P. Singh _et al._ , “A three diode model for industrial solar cells and estimation of solar cell parameters using pso algorithm,” _Renewable Energy_ , vol. 78, pp. 105–113, 2015.
* [13] M. H. Qais, H. M. Hasanien, and S. Alghuwainem, “Identification of electrical parameters for three-diode photovoltaic model using analytical and sunflower optimization algorithm,” _Applied Energy_, vol. 250, pp. 109–117, 2019.
* [14] A. Triki-Lahiani, A. B.-B. Abdelghani, and I. Slama-Belkhodja, “Fault detection and monitoring systems for photovoltaic installations: A review,” _Renewable and Sustainable Energy Reviews_, vol. 82, pp. 2680–2692, 2018.
* [15] M. Davarifar, A. Rabhi, A. El-Hajjaji, and M. Dahmane, “Real-time model base fault diagnosis of pv panels using statistical signal processing,” in _2013 International Conference on Renewable Energy Research and Applications (ICRERA)_. IEEE, 2013, pp. 599–604.
* [16] S. Fadhel, C. Delpha, D. Diallo, I. Bahri, A. Migan, M. Trabelsi, and M. Mimouni, “Pv shading fault detection and classification based on iv curve using principal component analysis: Application to isolated pv system,” _Solar Energy_ , vol. 179, pp. 1–10, 2019.
* [17] M. H. Ali, A. Rabhi, A. El Hajjaji, and G. M. Tina, “Real time fault detection in photovoltaic systems,” _Energy Procedia_, vol. 111, pp. 914–923, 2017.
* [18] L. Chen and X. Wang, “Adaptive fault localization in photovoltaic systems,” _IEEE Transactions on Smart Grid_ , vol. 9, no. 6, pp. 6752–6763, 2017.
* [19] Z. Yi and A. H. Etemadi, “Line-to-line fault detection for photovoltaic arrays based on multiresolution signal decomposition and two-stage support vector machine,” _IEEE Transactions on Industrial Electronics_ , vol. 64, no. 11, pp. 8546–8556, 2017.
* [20] ——, “Fault detection for photovoltaic systems based on multi-resolution signal decomposition and fuzzy inference systems,” _IEEE Transactions on Smart Grid_ , vol. 8, no. 3, pp. 1274–1283, 2016.
* [21] Y. Zhao, T. Li, X. Zhang, and C. Zhang, “Artificial intelligence-based fault detection and diagnosis methods for building energy systems: Advantages, challenges and the future,” _Renewable and Sustainable Energy Reviews_ , vol. 109, pp. 85–101, 2019.
* [22] K. Dhibi, R. Fezai, M. Mansouri, M. Trabelsi, A. Kouadri, K. Bouzara, H. Nounou, and M. Nounou, “Reduced kernel random forest technique for fault detection and classification in grid-tied pv systems,” _IEEE Journal of Photovoltaics_ , 2020.
* [23] Y. Zhao, D. Li, T. Lu, Q. Lv, N. Gu, and L. Shang, “Collaborative fault detection for large-scale photovoltaic systems,” _IEEE Transactions on Sustainable Energy_ , 2020.
* [24] Y. Gan, Z. Chen, L. Wu, C. Long, S. Cheng, and P. Lin, “A novel fault diagnosis method for pv arrays using extreme gradient boosting classifier,” 2019.
* [25] A. Ebrahimifakhar, A. Kabirikopaei, and D. Yuill, “Data-driven fault detection and diagnosis for packaged rooftop units using statistical machine learning classification methods,” _Energy and Buildings_, vol. 225, p. 110318, 2020.
* [26] X.-H. Li, C. C. Cao, Y. Shi, W. Bai, H. Gao, L. Qiu, C. Wang, Y. Gao, S. Zhang, X. Xue _et al._, “A survey of data-driven and knowledge-aware explainable ai,” _IEEE Transactions on Knowledge and Data Engineering_, 2020.
* [27] S. Bramhall, H. Horn, M. Tieu, and N. Lohia, “Qlime-a quadratic local interpretable model-agnostic explanation approach,” _SMU Data Science Review_ , vol. 3, no. 1, p. 4, 2020.
* [28] A. Torres-Barrán, Á. Alonso, and J. R. Dorronsoro, “Regression tree ensembles for wind energy and solar radiation prediction,” _Neurocomputing_ , vol. 326, pp. 151–160, 2019.
* [29] V. J. Chin, Z. Salam, and K. Ishaque, “Cell modelling and model parameters estimation techniques for photovoltaic simulator application: A review,” _Applied Energy_ , vol. 154, pp. 500–519, 2015.
* [30] R. Sun, G. Wang, W. Zhang, L.-T. Hsu, and W. Y. Ochieng, “A gradient boosting decision tree based gps signal reception classification algorithm,” _Applied Soft Computing_ , vol. 86, p. 105942, 2020.
* [31] J. Fan, X. Wang, L. Wu, H. Zhou, F. Zhang, X. Yu, X. Lu, and Y. Xiang, “Comparison of support vector machine and extreme gradient boosting for predicting daily global solar radiation using temperature and precipitation in humid subtropical climates: A case study in china,” _Energy Conversion and Management_ , vol. 164, pp. 102–111, 2018.
* [32] N. Sapountzoglou, J. Lago, and B. Raison, “Fault diagnosis in low voltage smart distribution grids using gradient boosting trees,” _Electric Power Systems Research_ , vol. 182, p. 106254, 2020.
# One-parameter discrete-time Calogero-Moser system
Umpon Jairuk† and Sikarin Yoo-Kong∗
† _Division of Physics, Faculty of Science and Technology,_
_Rajamangala University of Technology Thanyaburi, Rangsit-Nakornnayok Road,_
_Pathumthani, Thailand 12110._
∗ _The Institute for Fundamental Study (IF), Naresuan University (NU),_
_99 Moo 9, Tha Pho, Mueang Phitsanulok, Phitsanulok, Thailand, 65000_
###### Abstract
We present a new type of integrable one-dimensional many-body system called
the one-parameter Calogero-Moser (CM) system. At the discrete level, Lax pairs
with a parameter are introduced, and the discrete-time equations of motion are
obtained together with their corresponding discrete-time Lagrangian. The
integrability of this new system is captured through the discrete Lagrangian
closure relation, by employing a connection with the temporal Lax matrices of
the discrete-time Ruijsenaars-Schneider (RS) system, through the exact
solution, and through the existence of the classical r-matrix. Under the
appropriate limit on the parameter, here approaching zero, the standard CM
system is retrieved in both the discrete-time and continuous-time settings.
## 1 Introduction
The Calogero-Moser (CM) system is a mathematical model that describes the
motion of a one-dimensional system of particles interacting through long-range
forces [1, 2]. The CM system is an integrable system which exhibits rich
symmetries and possesses a sufficient number of conserved quantities,
according to Liouville's notion of integrability, to construct exact
solutions. The equations of motion of the CM system for the simplest type of
interaction, known as the rational case, are
$\ddot{x}_{i}=\sum_{j=1\atop j\neq i}^{N}\frac{1}{(x_{i}-x_{j})^{3}}\;,\;\;\;i=1,2,3,...,N\;,$
(1.1)
where $x_{i}$ is the position of the $i^{th}$ particle.
The Ruijsenaars-Schneider (RS) system is another integrable one-dimensional
system of particles with long-range interaction [3, 4]. For the simplest
interaction, namely the rational case, the equations of motion are given by
$\ddot{x}_{i}+\sum_{j=1\atop j\neq i}^{N}\dot{x}_{i}\dot{x}_{j}\left(\frac{1}{x_{i}-x_{j}+\lambda}+\frac{1}{x_{i}-x_{j}-\lambda}-\frac{2}{x_{i}-x_{j}}\right)=0\;,\;\;\;i=1,2,3,...,N\;,$
(1.2)
where $\lambda$ is a parameter. Under the limit $\lambda\to 0$, the CM system
is recovered, so the RS system can be treated as a “one-parameter
generalisation” of the CM system.
In 1994, the time-discretised version of the CM system was introduced by
Nijhoff and Pang [5]. In the rational case, the discrete-time equations of
motion are given by
$\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\widetilde{x}_{k}}+\frac{1}{x_{i}-\underset{\widetilde{}}{x}_{k}}\right)-\sum\limits_{k=1\atop k\neq i}^{N}\frac{2}{x_{i}-x_{k}}=0,$ (1.3)
where $\widetilde{x}_{i}=x_{i}(n+1)$ is a forward shift and
$\underset{\widetilde{}}{x}_{i}=x_{i}(n-1)$ is a backward shift. The
integrability of the system can be captured in the same sense as for the
continuous system, in terms of the classical r-matrix, the existence of the
exact solution, and the existence of a sufficient set of invariants.
Soon after, the time-discretised version of the RS system was introduced [6].
In the rational case, the discrete-time equations of motion are given by
$\prod_{j=1\atop j\neq i}^{N}\frac{x_{i}-x_{j}+\lambda}{x_{i}-x_{j}-\lambda}=\prod_{j=1}^{N}\frac{(x_{j}-\widetilde{x}_{j})(x_{i}-\underset{\widetilde{}}{x}_{j}+\lambda)}{(x_{j}-\underset{\widetilde{}}{x}_{j})(x_{i}-\widetilde{x}_{j}+\lambda)}\;.$
(1.4)
Again, under the limit $\lambda\to 0$, the discrete-time CM system is
recovered, and the discrete-time RS system can likewise be treated as the
“one-parameter generalisation” of the discrete-time CM system.
Recently, a new hallmark of integrability has been promoted, known as
multi-dimensional consistency. On the level of the discrete-time equations of
motion, multi-dimensional consistency can be inferred as consistency around
the cube [7, 8]. On the level of the Hamiltonians, the feature can be captured
through commuting Hamiltonian flows, as a direct consequence of the involution
in Liouville integrability [9]. Alternatively, on the level of Lagrangians,
multi-dimensional consistency can be expressed through the Lagrangian closure
relation, as a direct result of varying the action with respect to the
independent variables. Since the closure relation for the Lagrangian 1-form
will play a major role in this paper as an integrability criterion, we shall
spend a bit more space deriving it. Let $\boldsymbol{n}$ be a vector in the
lattice and let $\boldsymbol{e}_{i}$ be a unit vector in the $i^{th}$
direction, so that an elementary shift in the $i^{th}$ direction on the
lattice is $\boldsymbol{n}\to\boldsymbol{n}+\boldsymbol{e}_{i}$. The
discrete-time Lagrangians can then be expressed in the form
$\mathcal{L}_{i}(\boldsymbol{n})=\mathcal{L}_{i}(\boldsymbol{x}(\boldsymbol{n}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}))\;,$
(1.5)
where $\boldsymbol{x}=\\{x_{1},x_{2},...,x_{N}\\}$. The discrete-time action
is defined as
$S=S[\boldsymbol{x}(\boldsymbol{n}):\Gamma]=\sum_{\boldsymbol{n}\in\Gamma}\mathcal{L}_{i}(\boldsymbol{x}(\boldsymbol{n}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}))\;,$
(1.6)
where $\Gamma$ is an arbitrary discrete curve, see figure 1. Next, we shall
consider another discrete curve $\Gamma^{\prime}$ sharing the same endpoints
with the discrete curve $\Gamma$ and the action is given by
$S^{\prime}=S[\boldsymbol{x}(\boldsymbol{n}):\Gamma^{\prime}]=\sum_{\boldsymbol{n}\in\Gamma^{\prime}}\mathcal{L}_{i}(\boldsymbol{x}(\boldsymbol{n}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}))\;.$
(1.7)
Of course, this can be viewed as the variation of independent variables
$\boldsymbol{n}\to\boldsymbol{n}+\Delta\boldsymbol{n}$ of the action
$\displaystyle S^{\prime}$ $\displaystyle=$ $\displaystyle
S-\mathcal{L}_{i}(\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{j}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}+\boldsymbol{e}_{j}))+\mathcal{L}_{i}(\boldsymbol{x}(\boldsymbol{n}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}))$
(1.8)
$\displaystyle+\mathcal{L}_{j}(\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{j}+\boldsymbol{e}_{i}))-\mathcal{L}_{j}(\boldsymbol{x}(\boldsymbol{n}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{j}))\;.$
The least action principle requires $\delta S=S^{\prime}-S=0$ resulting in
$\displaystyle 0$ $\displaystyle=$
$\displaystyle\mathcal{L}_{i}(\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{j}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}+\boldsymbol{e}_{j}))-\mathcal{L}_{i}(\boldsymbol{x}(\boldsymbol{n}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}))$
(1.9)
$\displaystyle-\mathcal{L}_{j}(\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{i}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{j}+\boldsymbol{e}_{i}))+\mathcal{L}_{j}(\boldsymbol{x}(\boldsymbol{n}),\boldsymbol{x}(\boldsymbol{n}+\boldsymbol{e}_{j}))\;,$
which is the closure relation for the discrete-time Lagrangian 1-form.
Equivalently, for a two-dimensional lattice (see figure 2), (1.9) can be
re-expressed in the form
$\widehat{\mathcal{L}(\boldsymbol{x},\widetilde{\boldsymbol{x}})}-\mathcal{L}(\boldsymbol{x},\widetilde{\boldsymbol{x}})-\widetilde{\mathcal{L}(\boldsymbol{x},\widehat{\boldsymbol{x}})}+\mathcal{L}(\boldsymbol{x},\widehat{\boldsymbol{x}})=0\;.$
(1.10)
Figure 1: Arbitrary discrete curves on the space of independent variables.
Figure 2: The local variation of the discrete curve on the space of two
independent variables.
In this work, we propose a new type of one-parameter CM system, distinct from
the RS system, and study its integrability through the existence of the exact
solution, the classical r-matrix, and the closure relation. The structure of
the paper is as follows. In section 2, the two compatible one-parameter
discrete-time CM systems are obtained from the Lax equations. In section 3,
the discrete-time Lagrangians are established and the closure relation is
obtained directly via the connection between the RS temporal Lax matrices and
the Lagrangian. In section 4, the classical r-matrix for the one-parameter
discrete-time CM system is considered. In section 5, the construction of the
exact solution is carefully derived. In section 6, the continuum limit is
performed on the one-parameter discrete-time CM system, resulting in the
one-parameter continuous-time CM system. The final section provides a summary
and possible further investigations.
## 2 One-parameter discrete-time CM system
In this section, we construct the discrete-time CM system with a parameter
$\lambda$. First, we introduce the spatial Lax matrix
$\boldsymbol{L}_{\lambda}$ together with two temporal matrices $\boldsymbol{M}$
and $\boldsymbol{N}$ as follows
$\displaystyle\boldsymbol{L}_{\lambda}$ $\displaystyle=$
$\displaystyle\sum_{i,j=1}^{N}\frac{1}{x_{i}-x_{j}+\lambda}E_{ij}\;,$ (2.1a)
$\displaystyle\boldsymbol{M}$ $\displaystyle=$
$\displaystyle\sum_{i,j=1}^{N}\frac{1}{\widetilde{x}_{i}-x_{j}}E_{ij}\;,$
(2.1b) $\displaystyle\boldsymbol{N}$ $\displaystyle=$
$\displaystyle\sum_{i,j=1}^{N}\frac{1}{\widehat{x}_{i}-x_{j}}E_{ij}\;,$ (2.1c)
where $x_{i}(n,m)$ is the position of the $i^{th}$ particle, $N$ is the number
of particles in the system, and $E_{ij}$ is the matrix with entries
$(E_{ij})_{kl}=\delta_{ik}\delta_{jl}$. Here, $\widehat{x}_{i}=x_{i}(m+1)$ is
a forward shift and $\underset{\widehat{}}{x}_{i}=x_{i}(m-1)$ is a backward
shift in the second discrete-time variable.
_Discrete flow-$n$ direction_: The compatibility between (2.1a) and (2.1b)
gives us
$\displaystyle\widetilde{\boldsymbol{L}_{\lambda}}\boldsymbol{M}$
$\displaystyle=$ $\displaystyle\boldsymbol{M}\boldsymbol{L}_{\lambda}$
$\displaystyle\sum\limits_{i,j=1}^{N}\sum\limits_{k,\ell=1}^{N}\frac{1}{(\widetilde{x}_{i}-\widetilde{x}_{j}+\lambda)}\frac{1}{(\widetilde{x}_{k}-x_{\ell})}E_{ij}E_{k\ell}$
$\displaystyle=$
$\displaystyle\sum\limits_{i,j=1}^{N}\sum\limits_{k,\ell=1}^{N}\frac{1}{(\widetilde{x}_{i}-x_{j})}\frac{1}{(x_{k}-x_{\ell}+\lambda)}E_{ij}E_{k\ell}\;$
$\displaystyle\sum\limits_{i,\ell=1}^{N}\sum\limits_{k=1}^{N}\frac{1}{(\widetilde{x}_{i}-\widetilde{x}_{k}+\lambda)(\widetilde{x}_{k}-x_{\ell})}E_{i\ell}$
$\displaystyle=$
$\displaystyle\sum\limits_{i,\ell=1}^{N}\sum\limits_{k=1}^{N}\frac{1}{(\widetilde{x}_{i}-x_{k})(x_{k}-x_{\ell}+\lambda)}E_{i\ell}\;.$
(2.2a) Equating the $(i,\ell)$ entries and taking out the common factor
$1/(\widetilde{x}_{i}-x_{\ell}+\lambda)$ via partial fractions, we obtain
$\sum\limits_{k=1}^{N}\left(\frac{1}{\widetilde{x}_{i}-\widetilde{x}_{k}+\lambda}-\frac{1}{\widetilde{x}_{i}-x_{k}}\right)=\sum\limits_{k=1}^{N}\left(\frac{1}{x_{k}-x_{\ell}+\lambda}-\frac{1}{\widetilde{x}_{k}-x_{\ell}}\right)\;.$
(2.2b) We see that the two sides of (2.2b) depend on different free indices
and, therefore, the relation holds if
$\sum\limits_{k=1}^{N}\left(\frac{1}{\widetilde{x}_{i}-\widetilde{x}_{k}+\lambda}-\frac{1}{\widetilde{x}_{i}-x_{k}}\right)\equiv\widetilde{p}\;,$
(2.2c) where $p=p(n)$ is independent of the particle indices and is a function
of the discrete time variable $n$ alone. Taking a backward shift on (2.2c), we
have
$\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-x_{k}+\lambda}-\frac{1}{x_{i}-\underset{\widetilde{}}{x}_{k}}\right)=p\;.$
(2.2d) Automatically, from the right-hand side of (2.2b), we have
$p=\sum\limits_{k=1}^{N}\left(\frac{1}{x_{\ell}-\widetilde{x}_{k}}-\frac{1}{x_{\ell}-x_{k}-\lambda}\right)\;.$
(2.2e) Now, it is not difficult to see that (2.2d) and (2.2e) together give
$\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\widetilde{x}_{k}}+\frac{1}{x_{i}-\underset{\widetilde{}}{x}_{k}}\right)-\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-x_{k}+\lambda}+\frac{1}{x_{i}-x_{k}-\lambda}\right)=0\;,$
(2.2f) which will be treated as a one-parameter discrete-time CM system.
Under the limit $\lambda\to 0$ (note that the $k=i$ terms in the second sum
cancel pairwise), one obtains
$\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\widetilde{x}_{k}}+\frac{1}{x_{i}-\underset{\widetilde{}}{x}_{k}}\right)-\sum\limits_{k=1\atop k\neq i}^{N}\frac{2}{x_{i}-x_{k}}=0\;,$ (2.2g)
which is nothing but the standard discrete-time CM system in the $n$ direction.
_Discrete flow-$m$ direction_: The compatibility between (2.1a) and (2.1c)
gives us
$\displaystyle\widehat{\boldsymbol{L}_{\lambda}}\boldsymbol{N}$
$\displaystyle=$ $\displaystyle\boldsymbol{N}\boldsymbol{L}_{\lambda}$
$\displaystyle\sum\limits_{i,j=1}^{N}\sum\limits_{k,\ell=1}^{N}\frac{1}{(\widehat{x}_{i}-\widehat{x}_{j}+\lambda)}\frac{1}{(\widehat{x}_{k}-x_{\ell})}E_{ij}E_{k\ell}$
$\displaystyle=$
$\displaystyle\sum\limits_{i,j=1}^{N}\sum\limits_{k,\ell=1}^{N}\frac{1}{(\widehat{x}_{i}-x_{j})}\frac{1}{(x_{k}-x_{\ell}+\lambda)}E_{ij}E_{k\ell}\;$
$\displaystyle\sum\limits_{i,\ell=1}^{N}\sum\limits_{k=1}^{N}\frac{1}{(\widehat{x}_{i}-\widehat{x}_{k}+\lambda)(\widehat{x}_{k}-x_{\ell})}E_{i\ell}$
$\displaystyle=$
$\displaystyle\sum\limits_{i,\ell=1}^{N}\sum\limits_{k=1}^{N}\frac{1}{(\widehat{x}_{i}-x_{k})(x_{k}-x_{\ell}+\lambda)}E_{i\ell}\;.$
(2.3a)
Again, equating entries and taking out the common factor, we obtain
$\sum\limits_{k=1}^{N}\left(\frac{1}{\widehat{x}_{i}-\widehat{x}_{k}+\lambda}-\frac{1}{\widehat{x}_{i}-x_{k}}\right)=\sum\limits_{k=1}^{N}\left(\frac{1}{x_{k}-x_{\ell}+\lambda}-\frac{1}{\widehat{x}_{k}-x_{\ell}}\right)\;.$
(2.3b)
The situation is similar to the previous discrete flow: the two sides of
(2.3b) depend on different free indices, and the relation holds if
$\sum\limits_{k=1}^{N}\left(\frac{1}{\widehat{x}_{i}-\widehat{x}_{k}+\lambda}-\frac{1}{\widehat{x}_{i}-x_{k}}\right)\equiv\widehat{q}\;,$
(2.3c)
where $q=q(m)$ is independent of the particle indices and is a function of the
discrete time variable $m$ alone. Computing a backward shift on (2.3c), we
obtain
$\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-x_{k}+\lambda}-\frac{1}{x_{i}-\underset{\widehat{}}{x}_{k}}\right)=q\;.$
(2.3d)
From the right-hand side of (2.3b), we have
$q=\sum\limits_{k=1}^{N}\left(\frac{1}{x_{\ell}-\widehat{x}_{k}}-\frac{1}{x_{\ell}-x_{k}-\lambda}\right)\;.$
(2.3e)
Therefore, (2.3d) and (2.3e) give
$\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\widehat{x}_{k}}+\frac{1}{x_{i}-\underset{\widehat{}}{x}_{k}}\right)-\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-x_{k}+\lambda}+\frac{1}{x_{i}-x_{k}-\lambda}\right)=0\;,$
(2.3f)
which will be treated as the one-parameter discrete-time CM system in the
$m$-direction and, under the limit $\lambda\to 0$, we obtain
$\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\widehat{x}_{k}}+\frac{1}{x_{i}-\underset{\widehat{}}{x}_{k}}\right)-\sum\limits_{k=1\atop k\neq i}^{N}\frac{2}{x_{i}-x_{k}}=0\;,$ (2.3g)
which is the discrete-time CM system in the $m$ direction.
_Commutativity between discrete flows_: The two discrete-time dynamics will be
consistent if the compatibility between (2.1b) and (2.1c),
$\displaystyle\widehat{\boldsymbol{M}}\boldsymbol{N}$ $\displaystyle=$
$\displaystyle\widetilde{\boldsymbol{N}}\boldsymbol{M}\;,$ (2.4a) holds. This
gives us the set of equations
$p-q=\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\widetilde{x}_{k}}-\frac{1}{x_{i}-\widehat{x}_{k}}\right)\;,$
(2.4b) and
$p-q=\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\underset{\widetilde{}}{x}_{k}}-\frac{1}{x_{i}-\underset{\widehat{}}{x}_{k}}\right)\;,$
(2.4c) which will be called corner equations. Imposing (2.4b) = (2.4c), we
obtain
$\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\widetilde{x}_{k}}-\frac{1}{x_{i}-\underset{\widetilde{}}{x}_{k}}\right)=\sum\limits_{k=1}^{N}\left(\frac{1}{x_{i}-\widehat{x}_{k}}-\frac{1}{x_{i}-\underset{\widehat{}}{x}_{k}}\right)\;,$
(2.4d)
which is a constraint equation giving the connection between one discrete flow
and the other.
## 3 Integrability: the closure relation
In this section, we will show that the one-parameter discrete-time CM systems
in the previous section are integrable in the sense that their discrete-time
Lagrangians satisfy the closure relation as a consequence of the least action
principle with respect to the independent variables [10, 11, 12, 13, 14, 15,
16, 17].
It is not difficult to see that (2.2f) and (2.3f) can be obtained from the
discrete Euler-Lagrange equations [12]
$\displaystyle\widetilde{\frac{\partial\mathcal{L}_{n}(x,\widetilde{x})}{\partial
x_{i}}}+\frac{\partial\mathcal{L}_{n}(x,\widetilde{x})}{\partial{\widetilde{x}_{i}}}=0\;,$
(3.1)
$\displaystyle\widehat{\frac{\partial\mathcal{L}_{m}(x,\widehat{x})}{\partial
x_{i}}}+\frac{\partial\mathcal{L}_{m}(x,\widehat{x})}{\partial{\widehat{x}_{i}}}=0\;,$
(3.2)
where
$\displaystyle\mathcal{L}_{n}{(x,\widetilde{x})}=-\sum\limits_{i,j=1}^{N}\ln\left|x_{i}-\widetilde{x}_{j}\right|+\sum\limits_{i,j=1}^{N}\ln\left|x_{i}-x_{j}+\lambda\right|+p(\Xi-\widetilde{\Xi})\;,$
(3.3)
and
$\displaystyle\mathcal{L}_{m}{(x,\widehat{x})}=-\sum\limits_{i,j=1}^{N}\ln\left|x_{i}-\widehat{x}_{j}\right|+\sum\limits_{i,j=1}^{N}\ln\left|x_{i}-x_{j}+\lambda\right|+q(\Xi-\widehat{\Xi})\;.$
(3.4)
Here $\Xi=\sum\limits_{i=1}^{N}x_{i}$ is a centre of mass variable.
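Before turning to the closure relation itself, the following is a quick symbolic check (our own sketch, for $N=2$) that the discrete Euler-Lagrange equation (3.1) applied to the Lagrangian (3.3) indeed reproduces the equation of motion (2.2f); the three levels $x$, $y$, $z$ stand for $x$, $\widetilde{x}$, $\widetilde{\widetilde{x}}$:

```python
import sympy as sp

lam, p = sp.symbols("lam p")
x = sp.symbols("x1 x2"); y = sp.symbols("y1 y2"); z = sp.symbols("z1 z2")

def L(a, b):
    """Discrete-time Lagrangian (3.3) with levels a = x and b = x-tilde."""
    s = -sum(sp.log(ai - bj) for ai in a for bj in b)
    s += sum(sp.log(ai - aj + lam) for ai in a for aj in a)
    return s + p * (sum(a) - sum(b))

for i in range(2):
    # discrete Euler-Lagrange equation (3.1) evaluated at the levels (x, y, z)
    el = sp.diff(L(y, z), y[i]) + sp.diff(L(x, y), y[i])
    # equation of motion (2.2f) written at the shifted level
    eom = sum(1/(y[i] - zj) for zj in z) + sum(1/(y[i] - xj) for xj in x) \
        - sum(1/(y[i] - yj + lam) + 1/(y[i] - yj - lam) for yj in y)
    print(sp.simplify(el + eom) == 0)   # True for each i
```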
To show that the Lagrangian closure relation holds for the one-parameter
discrete-time CM model, we employ a connection between the temporal Lax matrix
and the Lagrangian, as in the case of the standard discrete-time CM model
[12]. An interesting point is that, for the present system, one can obtain the
discrete-time Lagrangian from the relation
$\mathcal{L}(x,\widetilde{x})=\ln\left|\det\boldsymbol{M}_{RS}\right|$ (see
appendix A for the explicit computation), where $\boldsymbol{M}_{RS}$ is the
temporal matrix of the RS model given by
$\displaystyle\boldsymbol{M}_{RS}=\sum\limits_{i,j=1}^{N}\frac{\widetilde{h}_{i}h_{j}}{\widetilde{x}_{i}-x_{j}+\lambda}E_{ij}\;,$
(3.5)
where $h_{i}=h_{i}(n,m)$ are auxiliary variables which can be determined [6].
Suppose there is another temporal matrix given by
$\displaystyle\boldsymbol{N}_{RS}=\sum\limits_{i,j=1}^{N}\frac{\widehat{h}_{i}h_{j}}{\widehat{x}_{i}-x_{j}+\lambda}E_{ij}\;,$
(3.6)
and both $\boldsymbol{M}_{RS}$ and $\boldsymbol{N}_{RS}$ satisfy
$\displaystyle\widehat{\boldsymbol{M}}_{RS}\boldsymbol{N}_{RS}=\widetilde{\boldsymbol{N}}_{RS}\boldsymbol{M}_{RS}.\;$
(3.7)
Taking the determinant and then the logarithm of (3.7), one obtains
$\displaystyle\ln\left|\det\widehat{\boldsymbol{M}}_{RS}\right|+\ln\left|\det\boldsymbol{N}_{RS}\right|=\ln\left|\det\widetilde{\boldsymbol{N}}_{RS}\right|+\ln\left|\det\boldsymbol{M}_{RS}\right|\;,$
(3.8)
resulting in the closure relation (1.10).
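Since the determinant of a product is the product of the determinants, (3.8) is an immediate consequence of (3.7); the short numerical sketch below (ours, not part of the paper) checks this with random matrices constrained to satisfy (3.7):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.random((n, n))
N = rng.random((n, n))
N_tilde = rng.random((n, n))
M_hat = N_tilde @ M @ np.linalg.inv(N)   # enforce M_hat N = N_tilde M, Eq. (3.7)

lhs = np.log(abs(np.linalg.det(M_hat))) + np.log(abs(np.linalg.det(N)))
rhs = np.log(abs(np.linalg.det(N_tilde))) + np.log(abs(np.linalg.det(M)))
print(np.isclose(lhs, rhs))              # True: the closure relation (3.8)
```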
## 4 Integrability: the classical r-matrix
In this section, we shall construct the classical r-matrix for the one-
parameter discrete-time CM system. We first rewrite the spatial Lax matrix as
$\displaystyle\boldsymbol{L}_{\lambda}$ $\displaystyle=$
$\displaystyle\sum\limits_{i=1}^{N}\frac{1}{\lambda}E_{ii}-\sum\limits_{i,j=1\atop
j\neq i}^{N}\frac{1}{x_{i}-x_{j}+\lambda}E_{ij}\;.$ (4.1)
Next, we recall the spatial Lax matrix of the standard CM system [18], given
by
$\displaystyle\boldsymbol{L}=\sum\limits_{i=1}^{N}P_{i}E_{ii}-\sum\limits_{i,j=1\atop j\neq i}^{N}\frac{1}{x_{i}-x_{j}}E_{ij}\;,$ (4.2)
where $P_{i}$ is the momentum variable of the $i^{th}$ particle. With this
where $P_{i}$ is the momentum variable for $i^{th}$ particle. With this
structure, one finds that the classical r-matrix can be computed through the
relation
$\displaystyle\\{\boldsymbol{L}\overset{\otimes}{,}\boldsymbol{L}\\}$
$\displaystyle=$
$\displaystyle[r_{12},\boldsymbol{L}\otimes\mathds{1}]-[r_{12},\mathds{1}\otimes\boldsymbol{L}]\;,$
(4.3)
where $r_{12}$ is the classical r-matrix for the CM system. Comparing (4.1)
with (4.2), one immediately finds the classical r-matrix $r_{12}^{\lambda}$
for the one-parameter discrete-time CM system upon replacing
$P_{i}\to\frac{1}{\lambda}$ and
$\frac{1}{x_{i}-x_{j}}\to\frac{1}{x_{i}-x_{j}+\lambda}$
$\displaystyle\\{\boldsymbol{L}_{\lambda}\overset{\otimes}{,}\boldsymbol{L}_{\lambda}\\}$
$\displaystyle=$
$\displaystyle\left[r_{12}^{\lambda},\boldsymbol{L}_{\lambda}\otimes\mathds{1}\right]-\left[r_{12}^{\lambda},\mathds{1}\otimes\boldsymbol{L}_{\lambda}\right]\;.$
(4.4)
We note here that, under the limit $\lambda\to 0$, the classical r-matrix
$r_{12}^{\lambda}$ does not reduce to the standard classical r-matrix. This
problem arises from the fact that the spatial Lax matrix (4.1) is a fake one,
since it does not provide the integrals of motion through the relation
$I_{n}=\frac{1}{n!}Tr\boldsymbol{L_{\lambda}}^{n}$.
## 5 Integrability: the exact solution
In this section, we construct the exact solution $\\{x_{i}(n)\\}$ with
initial values $\\{x_{i}(0)\\}$ and $\\{x_{i}(1)=\widetilde{x}_{i}(0)\\}$. We
first rewrite the Lax matrices in the forms
$\displaystyle\boldsymbol{X}\boldsymbol{L}-\boldsymbol{L}\boldsymbol{X}+\lambda\boldsymbol{L}=\boldsymbol{E},\;$
(5.1a)
$\displaystyle\widetilde{\boldsymbol{X}}\boldsymbol{M}-\boldsymbol{M}\boldsymbol{X}=\boldsymbol{E}\;,$
(5.1b) where $\boldsymbol{X}=\sum\limits_{i=1}^{N}x_{i}E_{ii}$ and
$\boldsymbol{E}=\sum\limits_{i,j=1}^{N}E_{ij}$. Moreover, we have
$\displaystyle(\widetilde{\boldsymbol{L}}-\boldsymbol{M})\boldsymbol{E}=0\;,$
(5.1c) and $\displaystyle\boldsymbol{E}(\boldsymbol{L}-\boldsymbol{M})=0\;,$
(5.1d) which also give the equations of motion. Let us write
$\boldsymbol{M}=\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}$ and
$\boldsymbol{L}=\boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{-1}$, where
$\boldsymbol{U}$ is an invertible matrix. Left-multiplying (5.1b) by
$\widetilde{\boldsymbol{U}}^{-1}$ and right-multiplying by $\boldsymbol{U}$
leads to
$\displaystyle\widetilde{\boldsymbol{X}}\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}-\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}\boldsymbol{X}$
$\displaystyle=$ $\displaystyle\boldsymbol{E}\;$
$\displaystyle\widetilde{\boldsymbol{U}}^{-1}\widetilde{\boldsymbol{X}}\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}-\boldsymbol{U}^{-1}\boldsymbol{X}$
$\displaystyle=$
$\displaystyle\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\;$
$\displaystyle\widetilde{\boldsymbol{U}}^{-1}\widetilde{\boldsymbol{X}}\widetilde{\boldsymbol{U}}-\boldsymbol{U}^{-1}\boldsymbol{X}\boldsymbol{U}$
$\displaystyle=$
$\displaystyle\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\boldsymbol{U}\;$
$\displaystyle\widetilde{\boldsymbol{Y}}-\boldsymbol{Y}$ $\displaystyle=$
$\displaystyle\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\boldsymbol{U},\;$
(5.1e) where $\boldsymbol{Y}=\boldsymbol{U}^{-1}\boldsymbol{X}\boldsymbol{U}$
and $\widetilde{\boldsymbol{Y}}=\widetilde{\boldsymbol{U}}^{-1}\widetilde{\boldsymbol{X}}\widetilde{\boldsymbol{U}}$.
We also find that (5.1a) gives
$\displaystyle\boldsymbol{X}\boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{-1}-\boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{-1}\boldsymbol{X}+\lambda\boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{-1}$
$\displaystyle=$ $\displaystyle\boldsymbol{E}\;$
$\displaystyle\boldsymbol{X}\boldsymbol{U}\boldsymbol{\Lambda}-\boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{-1}\boldsymbol{X}\boldsymbol{U}+\lambda\boldsymbol{U}\boldsymbol{\Lambda}$
$\displaystyle=$ $\displaystyle\boldsymbol{E}\boldsymbol{U}\;$
$\displaystyle\boldsymbol{U}^{-1}\boldsymbol{X}\boldsymbol{U}\boldsymbol{\Lambda}-\boldsymbol{\Lambda}\boldsymbol{U}^{-1}\boldsymbol{X}\boldsymbol{U}+\lambda\boldsymbol{\Lambda}$
$\displaystyle=$
$\displaystyle\boldsymbol{U}^{-1}\boldsymbol{E}\boldsymbol{U}\;$
$\displaystyle\boldsymbol{Y}\boldsymbol{\Lambda}-\boldsymbol{\Lambda}\boldsymbol{Y}+\lambda\boldsymbol{\Lambda}$
$\displaystyle=$
$\displaystyle\boldsymbol{U}^{-1}\boldsymbol{E}\boldsymbol{U}\;,$ (5.1f) and
(5.1c) gives
$\displaystyle\left(\widetilde{\boldsymbol{U}}\boldsymbol{\Lambda}\widetilde{\boldsymbol{U}}^{-1}-\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}\right)\boldsymbol{E}$
$\displaystyle=$ $\displaystyle 0\;$
$\displaystyle\widetilde{\boldsymbol{U}}\boldsymbol{\Lambda}\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}-\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}\boldsymbol{E}$
$\displaystyle=$ $\displaystyle 0\;$
$\displaystyle\widetilde{\boldsymbol{U}}^{-1}\widetilde{\boldsymbol{U}}\boldsymbol{\Lambda}\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}-\widetilde{\boldsymbol{U}}^{-1}\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}\boldsymbol{E}$
$\displaystyle=$ $\displaystyle 0\;$
$\displaystyle\boldsymbol{\Lambda}\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}-\boldsymbol{U}^{-1}\boldsymbol{E}$
$\displaystyle=$ $\displaystyle 0\;$
$\displaystyle\boldsymbol{U}^{-1}\boldsymbol{E}\boldsymbol{U}$
$\displaystyle=$
$\displaystyle\boldsymbol{\Lambda}\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\boldsymbol{U}\;.$
(5.1g) Substituting (5.1g) into (5.1f), we obtain
$\displaystyle\boldsymbol{Y}\boldsymbol{\Lambda}-\boldsymbol{\Lambda}\boldsymbol{Y}+\lambda\boldsymbol{\Lambda}$
$\displaystyle=$
$\displaystyle\boldsymbol{\Lambda}\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\boldsymbol{U}\;.$
(5.1h) To eliminate the invertible matrix $\boldsymbol{U}$ and
$\boldsymbol{E}$ on the right hand side of (5.1h), we use (5.1d) which can be
expressed in the form
$\displaystyle\boldsymbol{E}\left(\boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{-1}-\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}\right)$
$\displaystyle=$ $\displaystyle 0\;$
$\displaystyle\boldsymbol{E}\boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{-1}-\boldsymbol{E}\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}$
$\displaystyle=$ $\displaystyle 0\;$
$\displaystyle\boldsymbol{E}\boldsymbol{U}\boldsymbol{\Lambda}\boldsymbol{U}^{-1}\boldsymbol{U}-\boldsymbol{E}\widetilde{\boldsymbol{U}}\boldsymbol{U}^{-1}\boldsymbol{U}$
$\displaystyle=$ $\displaystyle 0\;$
$\displaystyle\boldsymbol{E}\boldsymbol{U}\boldsymbol{\Lambda}-\boldsymbol{E}\widetilde{\boldsymbol{U}}$
$\displaystyle=$ $\displaystyle 0\;$
$\displaystyle\boldsymbol{U}^{-1}\boldsymbol{E}\widetilde{\boldsymbol{U}}$
$\displaystyle=$
$\displaystyle\boldsymbol{U}^{-1}\boldsymbol{E}\boldsymbol{U}\boldsymbol{\Lambda}\;.$
(5.1i) Since
$\boldsymbol{U}^{-1}\boldsymbol{E}\widetilde{\boldsymbol{U}}=\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\boldsymbol{U}$,
we then obtain
$\displaystyle\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\boldsymbol{U}$
$\displaystyle=$
$\displaystyle\boldsymbol{U}^{-1}\boldsymbol{E}\boldsymbol{U}\boldsymbol{\Lambda}\;.$
(5.1j)
Substituting (5.1j) into (5.1e), one finds
$\displaystyle\widetilde{\boldsymbol{Y}}-\boldsymbol{Y}$ $\displaystyle=$
$\displaystyle\boldsymbol{U}^{-1}\boldsymbol{E}\boldsymbol{U}\boldsymbol{\Lambda}\;.\;$
(5.2)
Rearranging (5.1h), we obtain
$\displaystyle\boldsymbol{\Lambda}^{-1}\boldsymbol{Y}\boldsymbol{\Lambda}-\boldsymbol{\Lambda}^{-1}\boldsymbol{\Lambda}\boldsymbol{Y}+\boldsymbol{\Lambda}^{-1}\lambda\boldsymbol{\Lambda}=\boldsymbol{\Lambda}^{-1}\boldsymbol{\Lambda}\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\boldsymbol{U}\;,$
$\displaystyle\boldsymbol{\Lambda}^{-1}\boldsymbol{Y}\boldsymbol{\Lambda}-\boldsymbol{Y}+\lambda=\widetilde{\boldsymbol{U}}^{-1}\boldsymbol{E}\boldsymbol{U}\;.$
(5.3)
Substituting (5.1e) into (5.3), we get
$\displaystyle\boldsymbol{\Lambda}^{-1}\boldsymbol{Y}\boldsymbol{\Lambda}-\boldsymbol{Y}+\lambda=\widetilde{\boldsymbol{Y}}-\boldsymbol{Y}\;,$
$\displaystyle\widetilde{\boldsymbol{Y}}=\boldsymbol{\Lambda}^{-1}\boldsymbol{Y}\boldsymbol{\Lambda}+\lambda\;.$
(5.4)
Hence, proceeding $n$ steps in the first discrete-time direction, we find that
$\displaystyle\widetilde{\widetilde{\boldsymbol{Y}}}=\boldsymbol{\Lambda}^{-1}\widetilde{\boldsymbol{Y}}\boldsymbol{\Lambda}+\lambda=\boldsymbol{\Lambda}^{-1}\left[\boldsymbol{\Lambda}^{-1}\boldsymbol{Y}\boldsymbol{\Lambda}+\lambda\right]\boldsymbol{\Lambda}+\lambda=\left(\boldsymbol{\Lambda}^{-1}\right)^{2}\boldsymbol{Y}\boldsymbol{\Lambda}^{2}+2\lambda\;,$
$\displaystyle\vdots$
$\displaystyle\boldsymbol{Y}(n)=\left(\boldsymbol{\Lambda}\right)^{-n}\boldsymbol{Y}\boldsymbol{\Lambda}^{n}+n\lambda\;$
(5.5)
and, of course, after $m$ steps in the second discrete-time direction,
$\displaystyle\boldsymbol{Y}(m)=\boldsymbol{\Lambda}^{-m}\boldsymbol{Y}\boldsymbol{\Lambda}^{m}+m\lambda\;.$
(5.6)
Then, at any $(n,m)$ steps, we have
$\displaystyle\boldsymbol{Y}(n,m)=\left(p+\boldsymbol{\Lambda}\right)^{-n}\left(q+\boldsymbol{\Lambda}\right)^{-m}\boldsymbol{Y}(0,0)\left(q+\boldsymbol{\Lambda}\right)^{m}\left(p+\boldsymbol{\Lambda}\right)^{n}+(n+m)\lambda\;.$
(5.7)
It is not difficult to see that, in the limit $\lambda\to 0$, one
obtains
$\displaystyle\boldsymbol{Y}(n,m)=\left(p+\boldsymbol{\Lambda}\right)^{-n}\left(q+\boldsymbol{\Lambda}\right)^{-m}\boldsymbol{Y}(0,0)\left(q+\boldsymbol{\Lambda}\right)^{m}\left(p+\boldsymbol{\Lambda}\right)^{n}\;,$
(5.8)
which is nothing but a standard solution of the discrete-time CM system [5].
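This closed-form solution is easy to verify numerically by iterating the one-step map (5.4) in each discrete-time direction; a minimal sketch, where the matrix size, the diagonal $\boldsymbol{\Lambda}$ and the parameter values are illustrative choices, not taken from the text:

```python
# Numerical check of the closed-form solution (5.7): iterate the one-step
# map Y -> (p+Lambda)^{-1} Y (p+Lambda) + lam*I  n times, then the
# analogous q-map m times, and compare with the closed form.
import numpy as np
from numpy.linalg import inv, matrix_power as mpow

Nsize, n, m = 4, 3, 5
lam, p, q = 0.1, 1.3, 0.7
rng = np.random.default_rng(0)

L = np.diag(rng.uniform(1.0, 2.0, Nsize))   # a generic diagonal Lambda
Y0 = rng.normal(size=(Nsize, Nsize))        # Y(0,0)
I = np.eye(Nsize)

def step(Y, shift):
    S = shift * I + L                       # (p + Lambda) or (q + Lambda)
    return inv(S) @ Y @ S + lam * I

Y = Y0.copy()
for _ in range(n):
    Y = step(Y, p)                          # n steps in the first direction
for _ in range(m):
    Y = step(Y, q)                          # m steps in the second direction

Sp, Sq = p * I + L, q * I + L
Y_closed = (mpow(inv(Sp), n) @ mpow(inv(Sq), m) @ Y0
            @ mpow(Sq, m) @ mpow(Sp, n) + (n + m) * lam * I)
print(np.allclose(Y, Y_closed))             # expected: True
```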
## 6 The continuum limit
In this section, we consider the continuum limit of the one-parameter
discrete-time CM system investigated in the previous sections.
Since there are two discrete-time variables $(n,m)$, we may perform a naive
continuum limit [5] on each of these variables, resulting in the one-parameter
continuous-time CM system. To perform this continuum limit, we define
$x_{i}=Z_{i}+n\Delta$, where $\Delta$ is a small parameter. Consequently, we
also have $\widetilde{x}_{i}=\widetilde{Z}_{i}+(n+1)\Delta$ and
$\undertilde{x}_{i}=\undertilde{Z}_{i}+(n-1)\Delta$, where the under-tilde denotes the backward shift.
Then (2.2f) becomes
$\displaystyle\sum\limits_{k=1}^{N}\left(\frac{1}{Z_{i}-\widetilde{Z}_{k}-\Delta}+\frac{1}{Z_{i}-\undertilde{Z}_{k}+\Delta}\right)-\sum\limits_{k=1\atop k\neq i}^{N}\left(\frac{1}{Z_{i}-Z_{k}+\lambda}+\frac{1}{Z_{i}-Z_{k}-\lambda}\right)=0\;,$
(6.1)
or
$\displaystyle\left(\frac{1}{Z_{i}-\widetilde{Z}_{i}-\Delta}+\frac{1}{Z_{i}-\undertilde{Z}_{i}+\Delta}\right)+\sum\limits_{k=1\atop k\neq i}^{N}\left(\frac{1}{Z_{i}-\widetilde{Z}_{k}-\Delta}+\frac{1}{Z_{i}-\undertilde{Z}_{k}+\Delta}-\frac{1}{Z_{i}-Z_{k}+\lambda}-\frac{1}{Z_{i}-Z_{k}-\lambda}\right)=0\;.$
(6.2)
Expanding the shifted variables in the time-step parameter $\varepsilon$, we get
$\displaystyle\widetilde{Z}_{i}=Z_{i}+\varepsilon\frac{dZ_{i}}{dt}+\frac{\varepsilon^{2}}{2}\frac{d^{2}Z_{i}}{dt^{2}}+\ldots\;,$ (6.3)
$\displaystyle\undertilde{Z}_{i}=Z_{i}-\varepsilon\frac{dZ_{i}}{dt}+\frac{\varepsilon^{2}}{2}\frac{d^{2}Z_{i}}{dt^{2}}+\ldots\;,$ (6.4)
where $\varepsilon$ is the time-step parameter. Then, the first two terms in
(6.2) can be expressed in the form
$\displaystyle\left(\frac{1}{Z_{i}-\widetilde{Z}_{i}-\Delta}+\frac{1}{Z_{i}-\undertilde{Z}_{i}+\Delta}\right)=\frac{\varepsilon^{2}}{\Delta^{2}}\frac{d^{2}Z_{i}}{dt^{2}}+\ldots\;.$
(6.5)
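The leading term in (6.5) can be checked symbolically; a small sketch, where the symbols $Z1$ and $Z2$ are illustrative stand-ins for $dZ_{i}/dt$ and $d^{2}Z_{i}/dt^{2}$:

```python
# Symbolic check of the leading term in (6.5): substitute the truncated
# expansions (6.3)-(6.4) and expand in the time-step parameter eps.
import sympy as sp

eps, Delta, Z, Z1, Z2 = sp.symbols('eps Delta Z Z1 Z2')  # Z1 ~ dZ/dt, Z2 ~ d2Z/dt2

Zt = Z + eps*Z1 + eps**2/2*Z2    # tilde-Z (forward shift)
Zu = Z - eps*Z1 + eps**2/2*Z2    # under-tilde-Z (backward shift)

expr = 1/(Z - Zt - Delta) + 1/(Z - Zu + Delta)
lead = sp.series(expr, eps, 0, 3).removeO().expand()
print(sp.simplify(lead))   # -> Z2*eps**2/Delta**2, i.e. (eps^2/Delta^2) d2Z/dt2
```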
We also find that
$\displaystyle\sum\limits_{k=1\atop k\neq i}^{N}\left(\frac{1}{Z_{i}-\widetilde{Z}_{k}-\Delta}+\frac{1}{Z_{i}-\undertilde{Z}_{k}+\Delta}\right)=\sum\limits_{k=1\atop k\neq i}^{N}\left(\frac{2}{Z_{i}-Z_{k}}+\frac{2}{(Z_{i}-Z_{k})^{3}}\left(\varepsilon\frac{dZ_{k}}{dt}+\Delta\right)^{2}+\ldots\right)\;.$
(6.6)
If $\varepsilon\approx\Delta^{2}$, one finds that
$\displaystyle\sum\limits_{k=1\atop k\neq i}^{N}\left(\frac{1}{Z_{i}-\widetilde{Z}_{k}-\Delta}+\frac{1}{Z_{i}-\undertilde{Z}_{k}+\Delta}\right)\approx\sum\limits_{k=1\atop k\neq i}^{N}\left(\frac{2}{Z_{i}-Z_{k}}+\frac{2\Delta^{2}}{(Z_{i}-Z_{k})^{3}}\right)\;.$
(6.7)
Finally, the continuous version of the one-parameter CM system is given by
$\displaystyle\frac{d^{2}Z_{i}}{dt^{2}}+\sum\limits_{k=1\atop k\neq i}^{N}\left(g^{\prime}\left[\frac{2}{Z_{i}-Z_{k}}-\frac{1}{Z_{i}-Z_{k}+\lambda}-\frac{1}{Z_{i}-Z_{k}-\lambda}\right]+\frac{2g}{(Z_{i}-Z_{k})^{3}}\right)=0\;,$
(6.8)
where $g\equiv\frac{\Delta^{4}}{\varepsilon^{2}}$ and
$g^{\prime}\equiv\frac{\Delta^{2}}{\varepsilon^{2}}$. Therefore, in the
limit $\lambda\to 0$, we have
$\displaystyle\frac{d^{2}Z_{i}}{dt^{2}}+2g\sum\limits_{k=1\atop k\neq i}^{N}\frac{1}{(Z_{i}-Z_{k})^{3}}=0\;,$ (6.9)
which is actually a standard continuous CM system. The Lagrangian associated with (6.8) is given by
$\displaystyle\mathscr{L}_{\lambda}=\sum\limits_{i=1}^{N}\left(\frac{\partial Z_{i}}{\partial t}\right)^{2}-\frac{1}{2}\sum\limits_{i,k=1\atop k\neq i}^{N}\frac{g}{(Z_{i}-Z_{k})^{2}}-g^{\prime}\sum\limits_{i,k=1\atop k\neq i}^{N}\left(\ln\left|Z_{i}-Z_{k}+\lambda\right|+\ln\left|Z_{i}-Z_{k}\right|\right)$
(6.10)
with the Euler-Lagrange equation
$\displaystyle\frac{\partial\mathscr{L}_{\lambda}}{\partial
Z_{i}}-\frac{\partial}{\partial
t}\left(\frac{\partial\mathscr{L}_{\lambda}}{\partial(\frac{\partial
Z_{i}}{\partial t})}\right)=0\;.$ (6.11)
Of course, in the limit $\lambda\to 0$,
$\lim_{\lambda\to 0}\mathscr{L}_{\lambda}=\mathscr{L}=\sum\limits_{i=1}^{N}\left(\frac{\partial Z_{i}}{\partial t}\right)^{2}+\sum\limits_{i,k=1\atop k\neq i}^{N}\frac{g}{(Z_{i}-Z_{k})^{2}}\;,$ (6.12)
the standard Lagrangian for the CM system is recovered (we note that the CM
system in this equation comes with the opposite sign to the standard one).
In addition, the Hamiltonian of the one-parameter continuous-time CM system
can be written in the form
$\displaystyle\mathscr{H}_{\lambda}=\sum\limits_{i=1}^{N}P_{i}^{2}+\frac{1}{2}\sum\limits_{i,k=1\atop
k\neq i}^{N}\frac{g}{(Z_{i}-Z_{k})^{2}}+g^{\prime}\sum\limits_{i,k=1\atop
k\neq
i}^{N}\left(\ln\left|Z_{i}-Z_{k}+\lambda\right|+\ln\left|Z_{i}-Z_{k}\right|\right),\;$
(6.13)
where $P_{i}=\frac{\partial Z_{i}}{\partial t}$ is the momentum variable for
the $i^{th}$ particle.
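Since (6.8) is an ordinary system of second-order ODEs, it can be integrated directly; a minimal sketch, where the values of $g$, $g^{\prime}$, $\lambda$, the time span and the initial data are all illustrative (letting $\lambda\to 0$ makes the bracket vanish, so the flow approaches the standard CM system (6.9)):

```python
# A minimal integration of the one-parameter equation of motion (6.8);
# gp stands for g', lam for lambda. All values below are illustrative.
import numpy as np
from scipy.integrate import solve_ivp

Np, g, gp, lam = 3, 1.0, 1.0, 0.05

def rhs(t, y):
    Z, V = y[:Np], y[Np:]
    A = np.zeros(Np)
    for i in range(Np):
        for k in range(Np):
            if k == i:
                continue
            d = Z[i] - Z[k]
            # d2Z_i/dt2 = -(g'[bracket] + 2g/d^3), cf. (6.8)
            A[i] -= gp * (2.0/d - 1.0/(d + lam) - 1.0/(d - lam))
            A[i] -= 2.0 * g / d**3
    return np.concatenate([V, A])

y0 = np.array([-1.0, 0.1, 1.2,    # positions Z_i(0)
                0.3, -0.1, 0.2])  # velocities dZ_i/dt(0)
sol = solve_ivp(rhs, (0.0, 0.5), y0, rtol=1e-9, atol=1e-9)
print(sol.y[:Np, -1])             # particle positions at t = 0.5
```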
## 7 Summary
In the present work, we proposed a new type of integrable one-dimensional many-body
system, called the one-parameter (or deformed) discrete-time CM system.
In the limit $\lambda\to 0$, the standard CM system is recovered in both the
discrete and continuous cases. In Figure 3, we provide a diagram of the
connections among all CM-type systems. We would rank our model on the same level
as the RS system, since both systems contain a parameter.
Figure 3: The connections among the one-parameter CM, RS, CM and Goldfish
systems; the arrows of the diagram are labelled by the limits $\lambda\to 0$ and $\lambda\to\infty$.
We also note that the continuous system obtained in Section 6 is just the
first member of the CM hierarchy [12]. A question that can be addressed here is:
how are the other members of the hierarchy deformed? Moreover, one can also
try to study the integrability conditions as well as the quantum properties of
the system. Further investigation is needed, and we shall address these points
elsewhere.
## Appendix A The connection between Lagrangian and $\boldsymbol{M}_{RS}$ of
the RS model
In this appendix, we derive the connection between the one-parameter
discrete-time Lagrangian and the matrix $\boldsymbol{M}_{RS}$ given by
$\displaystyle\boldsymbol{M}_{RS}=\sum\limits_{i,j=1}^{N}\frac{\widetilde{h}_{i}h_{j}}{\widetilde{x}_{i}-x_{j}+\lambda}E_{ij}\;.$
(A.1)
For simplicity, we shall first consider the case of a $2\times 2$ matrix, given
by
$\boldsymbol{M}_{RS}=\begin{bmatrix}\frac{\widetilde{h}_{1}h_{1}}{\widetilde{x}_{1}-x_{1}+\lambda}&\frac{\widetilde{h}_{1}h_{2}}{\widetilde{x}_{1}-x_{2}+\lambda}\\\
\frac{\widetilde{h}_{2}h_{1}}{\widetilde{x}_{2}-x_{1}+\lambda}&\frac{\widetilde{h}_{2}h_{2}}{\widetilde{x}_{2}-x_{2}+\lambda}\end{bmatrix}\;.$
Then, we compute the determinant
$\displaystyle\det\boldsymbol{M}_{RS}=\frac{\widetilde{h}_{1}h_{1}\widetilde{h}_{2}h_{2}}{(\widetilde{x}_{1}-x_{1}+\lambda)(\widetilde{x}_{2}-x_{2}+\lambda)}-\frac{\widetilde{h}_{2}h_{1}\widetilde{h}_{1}h_{2}}{(\widetilde{x}_{2}-x_{1}+\lambda)(\widetilde{x}_{1}-x_{2}+\lambda)}$
$\displaystyle=h_{1}\widetilde{h}_{1}h_{2}\widetilde{h}_{2}\left[\frac{1}{(\widetilde{x}_{1}-x_{1}+\lambda)(\widetilde{x}_{2}-x_{2}+\lambda)}-\frac{1}{(\widetilde{x}_{2}-x_{1}+\lambda)(\widetilde{x}_{1}-x_{2}+\lambda)}\right]$
$\displaystyle=h_{1}\widetilde{h}_{1}h_{2}\widetilde{h}_{2}\left[\frac{(\widetilde{x}_{2}-x_{1}+\lambda)(\widetilde{x}_{1}-x_{2}+\lambda)-(\widetilde{x}_{1}-x_{1}+\lambda)(\widetilde{x}_{2}-x_{2}+\lambda)}{\prod\limits_{i,j=1}^{2}(\widetilde{x}_{i}-x_{j}+\lambda)}\right]\;.$ (A.2)
Equation (A.2) can be further simplified as
$\displaystyle\det\boldsymbol{M}_{RS}=h_{1}\widetilde{h}_{1}h_{2}\widetilde{h}_{2}\left[\frac{(\widetilde{x}_{2}-\widetilde{x}_{1})(x_{1}-x_{2})}{\prod\limits_{i,j=1}^{2}(\widetilde{x}_{i}-x_{j}+\lambda)}\right]\;.$
(A.3)
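We note that (A.3) is a Cauchy-type determinant identity that holds for arbitrary values of $h_{i}$, $\widetilde{h}_{i}$, $x_{i}$ and $\widetilde{x}_{i}$, so it can be spot-checked numerically; a short sketch with random, purely illustrative values:

```python
# Numerical spot-check of the 2x2 determinant identity (A.3) with random
# (otherwise unconstrained) values of h_i, h~_i (ht), x_i, x~_i (xt), lambda.
import numpy as np

rng = np.random.default_rng(1)
h, ht = rng.normal(size=2), rng.normal(size=2)
x, xt = rng.normal(size=2), rng.normal(size=2)
lam = 0.7

M = np.array([[ht[i]*h[j]/(xt[i] - x[j] + lam) for j in range(2)]
              for i in range(2)])
lhs = np.linalg.det(M)
rhs = (h[0]*ht[0]*h[1]*ht[1]*(xt[1] - xt[0])*(x[0] - x[1])
       / np.prod([xt[i] - x[j] + lam for i in range(2) for j in range(2)]))
print(np.isclose(lhs, rhs))   # expected: True
```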
Recalling the relations [13]
$\displaystyle h^{2}_{i}$ $\displaystyle=$
$\displaystyle-\frac{\prod\limits_{j=1}^{N}(x_{i}-x_{j}+\lambda)(x_{i}-\widetilde{x}_{j}-\lambda)}{\prod\limits_{\mathop{i,j=1}\limits_{j\neq
i}}^{N}(x_{i}-x_{j})\prod\limits_{j=1}^{N}(x_{i}-\widetilde{x}_{j})},\;$ (A.4)
$\displaystyle{\widetilde{h}_{i}}^{2}$ $\displaystyle=$
$\displaystyle-\frac{\prod\limits_{j=1}^{N}(\widetilde{x}_{i}-x_{j}+\lambda)(\widetilde{x}_{i}-\widetilde{x}_{j}-\lambda)}{\prod\limits_{\mathop{i,j=1}\limits_{j\neq
i}}^{N}(\widetilde{x}_{i}-\widetilde{x}_{j})\prod\limits_{j=1}^{N}(\widetilde{x}_{i}-x_{j})}\;,$
(A.5)
then, for $i,j=1,2$, we have
$\displaystyle h^{2}_{1}=-\frac{(x_{1}-x_{1}+\lambda)(x_{1}-x_{2}+\lambda)(x_{1}-\widetilde{x}_{1}-\lambda)(x_{1}-\widetilde{x}_{2}-\lambda)}{(x_{1}-x_{2})(x_{1}-\widetilde{x}_{1})(x_{1}-\widetilde{x}_{2})},\;$ (A.6)
$\displaystyle{\widetilde{h}_{1}}^{2}=-\frac{(\widetilde{x}_{1}-x_{1}+\lambda)(\widetilde{x}_{1}-x_{2}+\lambda)(\widetilde{x}_{1}-\widetilde{x}_{1}-\lambda)(\widetilde{x}_{1}-\widetilde{x}_{2}-\lambda)}{(\widetilde{x}_{1}-\widetilde{x}_{2})(\widetilde{x}_{1}-x_{1})(\widetilde{x}_{1}-x_{2})},\;$ (A.7)
$\displaystyle h^{2}_{2}=-\frac{(x_{2}-x_{1}+\lambda)(x_{2}-x_{2}+\lambda)(x_{2}-\widetilde{x}_{1}-\lambda)(x_{2}-\widetilde{x}_{2}-\lambda)}{(x_{2}-x_{1})(x_{2}-\widetilde{x}_{1})(x_{2}-\widetilde{x}_{2})},\;$ (A.8)
$\displaystyle{\widetilde{h}_{2}}^{2}=-\frac{(\widetilde{x}_{2}-x_{1}+\lambda)(\widetilde{x}_{2}-x_{2}+\lambda)(\widetilde{x}_{2}-\widetilde{x}_{1}-\lambda)(\widetilde{x}_{2}-\widetilde{x}_{2}-\lambda)}{(\widetilde{x}_{2}-\widetilde{x}_{1})(\widetilde{x}_{2}-x_{1})(\widetilde{x}_{2}-x_{2})}\;.$ (A.9)
Taking logarithms, we get
$\displaystyle\ln|h_{1}|$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left[\ln|\lambda|+\ln|x_{1}-x_{2}+\lambda|+\ln|x_{1}-\widetilde{x}_{1}-\lambda|\right.$
(A.10)
$\displaystyle\left.+\ln|x_{1}-\widetilde{x}_{2}-\lambda|-\ln|x_{1}-x_{2}|-\ln|x_{1}-\widetilde{x}_{1}|-\ln|x_{1}-\widetilde{x}_{2}|\right],\;\;\;$
$\displaystyle\ln|\widetilde{h}_{1}|$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left[\ln|\lambda|+\ln|\widetilde{x}_{1}-x_{1}+\lambda|+\ln|\widetilde{x}_{1}-x_{2}+\lambda|\right.$
(A.11)
$\displaystyle\left.+\ln|\widetilde{x}_{1}-\widetilde{x}_{2}-\lambda|-\ln|\widetilde{x}_{1}-\widetilde{x}_{2}|-\ln|\widetilde{x}_{1}-x_{1}|-\ln|\widetilde{x}_{1}-x_{2}|\right],\;\;\;$
$\displaystyle\ln|h_{2}|$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left[\ln|\lambda|+\ln|x_{2}-x_{1}+\lambda|+\ln|x_{2}-\widetilde{x}_{1}-\lambda|\right.$
(A.12)
$\displaystyle\left.+\ln|x_{2}-\widetilde{x}_{2}-\lambda|-\ln|x_{2}-x_{1}|-\ln|x_{2}-\widetilde{x}_{1}|-\ln|x_{2}-\widetilde{x}_{2}|\right],\;\;\;$
$\displaystyle\ln|\widetilde{h}_{2}|$ $\displaystyle=$
$\displaystyle\frac{1}{2}\left[\ln|\lambda|+\ln|\widetilde{x}_{2}-x_{1}+\lambda|+\ln|\widetilde{x}_{2}-x_{2}+\lambda|\right.$
(A.13)
$\displaystyle\left.+\ln|\widetilde{x}_{2}-\widetilde{x}_{1}-\lambda|-\ln|\widetilde{x}_{2}-\widetilde{x}_{1}|-\ln|\widetilde{x}_{2}-x_{1}|-\ln|\widetilde{x}_{2}-x_{2}|\right]\;.\;\;\;$
Hence,
$\displaystyle\ln\left|\det\boldsymbol{M}_{RS}\right|=\ln|h_{1}|+\ln|\widetilde{h}_{1}|+\ln|h_{2}|+\ln|\widetilde{h}_{2}|+\ln|\widetilde{x}_{2}-\widetilde{x}_{1}|+\ln|x_{1}-x_{2}|-\sum\limits_{i,j=1}^{2}\ln|\widetilde{x}_{i}-x_{j}+\lambda|\;$ (A.14)
$\displaystyle=2\ln|\lambda|-\sum\limits_{i,j=1}^{2}\ln|\widetilde{x}_{i}-x_{j}+\lambda|\;$
$\displaystyle=\sum\limits_{i,j=1}^{2}\ln|x_{i}-x_{j}+\lambda|-\sum\limits_{i,j=1}^{2}\ln|x_{i}-\widetilde{x}_{j}|\;.$
Analogously, for $N$ particles, i.e., an $N\times N$ matrix, we have
$\ln\left|\det\boldsymbol{M}_{RS}\right|=\sum\limits_{i,j=1}^{N}\ln|x_{i}-x_{j}+\lambda|-\sum\limits_{i,j=1}^{N}\ln|x_{i}-\widetilde{x}_{j}|\;,$
(A.15)
which is indeed the discrete-time Lagrangian for the one-parameter CM system.
Acknowledgements
> Umpon Jairuk would like to thank the Rajamangala University of Technology
> Thanyaburi (RMUTT) for financial support under the Personnel Development Fund
> in 2023.
## References
* [1] Calogero F, 1975, _Exactly Solvable One-Dimensional Many-Body Problems_ , Lett. nuovo cimento, 13(11), pp. 411-416.
* [2] Moser J, 1975, _Three Integrable Hamiltonian Systems Connected with Isospectral Deformations_ , Adv. Math, 16, pp. 197-220.
* [3] Ruijsenaars S M N, 1987, _Complete Integrability of Relativistic Calogero-Moser Systems and Elliptic Function Identities_ , Commun. Math. Phys., 110, pp. 191-213.
* [4] Schneider H, 1987, _Integrable Relativistic N-Particle Systems in an External Potential_ , Phys. D: Nonlinear Phenom., pp. 203-209.
* [5] Nijhoff F W and Pang G, 1994, _A Time-Discretized Version of the Calogero-Moser Model_ , Phys. Lett. A, 191, pp. 101-107.
* [6] Nijhoff F W, Ragnisco O and Kuznetsov V 1996, _Integrable Time-Discretisation of the Ruijsenaars-Schneider Model_ , Commun. Math. Phys., 176, pp. 681-700.
* [7] Nijhoff F W and Walker A J, 2001, _The Discrete and Continuous Painlevé VI Hierarchy and the Garnier Systems_ , Glasgow Math. J., A43, pp. 109-123.
* [8] Nijhoff F W, 2002, _Lax Pair for the Adler(Lattice Krichever–Novikov) System_ , Phys. Lett., A297, pp. 49-58.
* [9] Babelon O, Bernard D and Talon M, 2003, _Introduction to Classical Integrable Systems_ , Cambridge University ISBN:9780521822671.
* [10] Lobb S B, Nijhoff F W and Quispe G R W, 2009, _Lagrangian Multiform Structure for the Lattice KP System_ , J. Phys. A: Math. Theor., 42, 472002.
* [11] Lobb S B and Nijhoff F W, 2010, _Lagrangian Multiform Structure for the Lattice Gel’Fand–Dikii Hierarchy_ , J. Phys. A: Math. Theor., 43, 072003.
* [12] Yoo-Kong S, Lobb S and Nijhoff F, 2011, _Discrete-time Calogero–Moser System and Lagrangian 1-form Structure_ , J. Phys. A: Math. Theor., 44, 365203.
* [13] Yoo-Kong S and Nijhoff F, 2013, _Discrete-time Ruijsenaars-Schneider system and Lagrangian 1-form structure_ , arXiv:1112.4576v2 [nlin.SI].
* [14] Jairuk U, Yoo-Kong S and Tanasittikosol M, 2016, _The Lagrangian Structure of Calogero's Goldfish Model_ , Theoret. and Math. Phys., 183(2), pp. 665-683.
* [15] Jairuk U, Yoo-Kong S and Tanasittikosol M, 2017, _On the Lagrangian 1-form Structure of the Hyperbolic Calogero-Moser System_ , Rep. Math. Phys., 79(3), pp.299-330.
* [16] Piensuk W and Yoo-Kong S, 2021, _Geodesic Compatibility: Goldfish Systems_ , Rep. Math. Phys., 87(1), pp.45-58.
* [17] Boll R, Petrera M and Suris Y B, 2015, _Multi-time Lagrangian 1-forms for families of Bäcklund Transformations. Relativistic Toda-type Systems_ , J. Phys. A: Math. Theor., 48, 085203.
* [18] Avan J and Talon M, 1993, _Classical R-matrix Structure for the Calogero Model_ , Phys. Lett., B303, pp. 33-37.
# Mobility-Aware Smart Charging of Electric
Bus Fleets
Ahmadreza Moradipari$^{2,1}$, Nathaniel Tucker$^{2,1}$, Tuo Zhang$^{2}$, Gustavo Cezar$^{3}$ and Mahnoosh Alizadeh$^{2}$
$^{2}$Department of Electrical and Computer Engineering, University of California, Santa Barbara, California, 93106, USA
$^{3}$SLAC National Accelerator Laboratory, GISMo Group, California, 94025, USA
$^{1}$Authors have equal contribution
###### Abstract
We study the joint route assignment and charge scheduling problem of a transit
system dispatcher operating a fleet of electric buses in order to maximize
solar energy integration and reduce energy costs. Specifically, we consider a
complex bus transit system with preexisting routes, limited charging
infrastructure, limited number of electric buses, and time-varying electricity
rates. We present a mixed integer linear program (MILP) that yields the
minimal cost daily operation strategy for the fleet (i.e., route assignments
and charging schedules using daily solar forecasts). We present numerical
results from a real-world case study with Stanford University’s Marguerite
Shuttle (a large-scale electric bus fleet) to demonstrate the validity of our
solution and highlight the significant cost savings compared to the status
quo.
## I Introduction
Due to the potential reduction in operational costs [1], elimination of
tailpipe emissions [2], and encouragement from government agencies [3],
transit systems have started to purchase electric buses over the traditional
diesel or compressed natural gas (CNG) buses. At surface level, replacing
traditional buses with electric buses might seem like a simple task; however,
there are many obstacles preventing a transit system from simply assigning
electric buses to existing routes that were previously served by diesel buses.
The two most fundamental obstacles are the restricted travel distance and
lengthy recharge time of electric buses. Even with recent advances in electric
transportation and battery technology, modern electric buses are commonly
restricted to operate within 20%-95% state of charge (SOC) to prevent
stressing the batteries and reducing lifespan [4]. Combining this SOC
limitation with the high cost of large battery packs, most electric buses are
currently inferior to diesel/CNG buses in operational range. Second, the
recharging process of an electric bus takes significantly more time than the
refueling process of a diesel/CNG bus [4]. Additionally, due to the lengthy
recharge time and limited charging infrastructure, the transit system
dispatcher must be mindful of how the fleet’s recharging infrastructure is
managed in order to provide adequate energy to serve routes.
Despite the aforementioned challenges, the promise of eliminating large
amounts of greenhouse gas emissions from transit buses has enticed early
adopters to operate fleets of electric buses since the early 21st century [1];
however, it is likely that these electric bus fleets are operating
suboptimally in their recharging strategies and route assignments [5].
Accordingly, there has been increasing interest in the optimal operation and
infrastructure planning of electric bus fleets.
The first category of work that studies optimized charging for electric bus
fleets considers the assignment of buses to routes as given, i.e., the times
at which each bus is parked and is available to recharge is predetermined.
Specifically, the authors of [6] present an optimization model for installing
charging infrastructure and sizing batteries for a cost-effective electric bus
fleet. Similarly, the authors of [5] consider infrastructure planning as well
as fleet composition and the recharging process, with the goal of minimizing
total cost of ownership (TCO) of the fleet. Moving away from infrastructure
planning, the authors of [7] present a method to minimize battery aging costs
of an electric bus fleet recharging at nighttime. The authors of [8] present
the cost savings from controlling the charging thresholds for a fleet of
electric buses serving one route continuously in Tallahassee, Florida.
Similarly, the authors of [9] present a MILP framework for scheduling bus
charging and show the potential cost savings from an electric bus fleet in
Davis, California. Furthermore, [10] presents a charging strategy for electric
buses with fast charging infrastructure.
Considering both route assignment and charge scheduling (i.e., the mobility-
aware setting), the authors of [11] present a k-greedy solution method to
maximize travel distance of each electric bus within the fleet. A work similar
to ours, [12], presents a linear formulation for route assignment and charge
scheduling; however, the aim is to minimize the number of electric buses
needed to replace an existing diesel fleet. Hence, the variability of
electricity costs is not considered.
Similar to the aforementioned papers, the work presented in this manuscript
considers both the route assignment and charge scheduling problem of an
electric bus fleet. However, the presented approach is able to improve upon
previous mobility-aware work by accounting for time-varying electricity
prices, utilizing on-site solar energy generation, and providing a minimal
cost schedule for the fleet’s daily operation.
Organization: Section II describes the problem of a fleet dispatcher operating
a fleet of electric buses and proposes a mixed integer linear program (MILP)
formulation that solves for the minimal cost route assignments and recharging
schedule. Section III presents the results of the MILP for the real-world
example of Stanford’s Marguerite Shuttle Transit System.
Figure 1: Primary service area for Stanford University’s Marguerite Shuttle.
Trip origins at Caltrain Palo Alto Transit Center (star). Full system map
available at: https://transportation.stanford.edu/marguerite
## II Problem Description
We consider a fleet dispatcher attempting to optimize an electric bus transit
system. Specifically, the fleet dispatcher aims to assign electric buses to
serve the daily trips and schedule the recharging of the buses to minimize
electricity cost (e.g., recharging during the inexpensive electricity rates of
nighttime or when solar generation is abundant while still fulfilling all
required bus routes). In the following, we consider the case where the
physical infrastructure (e.g., buses, chargers, parking spots, etc.) and time-
tables (e.g., routes, stops, start/end times, etc.) are already established
within the transit system, but not yet optimized for the aforementioned
objective (as is the case for the Stanford University Marguerite Shuttle,
discussed in Section III). Given the transit system’s fixed time-table and
electric bus infrastructure, the fleet dispatcher seeks to answer questions
such as the following:
1. Which electric bus should be assigned to each route at each time?
2. When should each electric bus be recharged?
3. Does the system need to utilize spare diesel buses to supplement the electric buses?
4. Would more infrastructure benefit the daily operation of the electric bus fleet?
5. What size of on-site solar generation system is needed to fully supply the fleet with renewable energy?
Let us consider the Stanford Marguerite Shuttle Transit System (Figure 1)
which consists of 38 electric buses, 23 diesel buses, 23 electric bus
chargers, and a total of 20 daily routes. Currently, the assignment of buses to
routes and their recharging strategy follows rules adopted by operators that
work well in practice by ensuring sufficient charge is available for service.
However, as we demonstrate in our numerical case study, the current assignment
results in significant losses for the transit system in terms of daily
operational costs and can be improved upon through a joint charge and route
assignment policy. As such, in order to optimize the decision making problem
of the fleet dispatcher, we formulate a mixed-integer-linear-program (MILP) to
solve for both the optimal recharging schedules and route assignments for an
electric bus transit system.
### II-A MILP Formulation
In the electric bus transit system, we consider one central transit center
(i.e., bus depot) from which all the buses start and finish their routes as
well as recharge. The buses are required to serve numerous routes throughout
the service area, and each route must be served multiple times each day (i.e.,
the electric bus fleet is required to fulfill multiple trips for each route).
We denote $\mathcal{S}$ as the set of scheduled trips across all routes that
need to be fulfilled. For each trip $i\in\mathcal{S}$, let $a_{i}$ and $b_{i}$
denote the start and end time of trip $i$. More specifically, these are the
times that a bus leaves the depot and later returns if serving trip $i$. If
trip $i$ is a one-way route that does not loop back to the depot, we account
for the extra duration for the bus to return to the depot in $b_{i}$
accordingly (i.e., the trip end time $b_{i}$ accounts for “deadhead” travel).
Similarly, if a route does not start at the depot, we account for the deadhead
travel time to the starting location in $a_{i}$.
In order to capture the state of charge of each bus at any time $t$, we
discretize the day into $T$ time steps (e.g., five minute intervals) and
$\mathcal{T}$ is the set of time steps for an entire day. Furthermore, let
$d_{i}$ be the energy consumption per time step for a bus serving trip $i$
(while we assume that varying traffic conditions across different routes can
affect energy consumption rates, we assume that the buses are identical in
their energy consumption when they serve the same route). Let $\mathcal{K}$ be
the set of electric buses and $\mathcal{N}$ be the set of electric bus
chargers installed at the central depot. For each charger $n\in\mathcal{N}$,
$u_{n}$ is the charging rate. Additionally, let ${\bf
p}=[p(t)]_{t\in\mathcal{T}}$ be the vector of electricity prices for an entire
day. We denote as $E^{k}_{min}$ and $E^{k}_{max}$ the minimum and maximum
energy levels for bus $k$, respectively. The fleet dispatcher usually sets
$E^{k}_{min}>0,\forall k\in\mathcal{K}$ for safety precautions. Let $g(t)$ be
the available on-site solar generation at time $t$, which we assume is known
at the time of dispatch. Moreover, we assume that the electricity used from
the on-site solar generation is free for the operator. Last, we denote the
initial energy level of bus $k$ as $e_{0}^{k}$.
Next, we describe the decision variables used in the MILP formulation. We set
the binary variable $X_{i}^{k}(t)$ to $1$ if bus $k$ is serving trip $i$ at
time $t$ and $0$ otherwise. We set the binary variable $Z_{k}(t)$ to $1$ if
bus $k$ is charging at time $t$ and $0$ otherwise. We set the binary variable
$Y_{n}^{k}(t)$ to $1$ if bus $k$ is occupying charger $n$ at time $t$ and $0$
otherwise. We use the variable $E^{k}(t)$ to track the energy level of bus $k$
at time $t$. Lastly, let $V(t)$ be the total amount of electricity that the
dispatcher purchases from the grid at time $t$, and $S(t)$ be the amount of
electricity that buses obtain from the available on-site solar generation at
time $t$. With the necessary notation and decision variables, the joint
charging and routing MILP for the electric bus fleet can be formulated as
follows:
$\displaystyle\text{Minimize}\quad\sum_{t\in\mathcal{T}}p(t)V(t)$ (1a)
Subject to:
$\displaystyle Z^{k}(t)+\sum_{i\in\mathcal{S}}X^{k}_{i}(t)\leq 1,\quad\forall k\in\mathcal{K},\ t\in\mathcal{T}$ (1b)
$\displaystyle\sum_{k\in\mathcal{K}}X_{i}^{k}(t)=1,\quad\forall i\in\mathcal{S},\ t\in[a_{i},b_{i}]$ (1c)
$\displaystyle X_{i}^{k}(t+1)=X_{i}^{k}(t),\quad\forall i\in\mathcal{S},\ k\in\mathcal{K},\ t\in[a_{i},b_{i}-1]$ (1d)
$\displaystyle\sum_{k\in\mathcal{K}}Y_{n}^{k}(t)\leq 1,\quad\forall n\in\mathcal{N},\ t\in\mathcal{T}$ (1e)
$\displaystyle\sum_{n\in\mathcal{N}}Y_{n}^{k}(t)=Z^{k}(t),\quad\forall k\in\mathcal{K},\ t\in\mathcal{T}$ (1f)
$\displaystyle E^{k}(t)=E^{k}(t-1)+\sum_{n\in\mathcal{N}}u_{n}Y_{n}^{k}(t)-\sum_{i\in\mathcal{S}}d_{i}X_{i}^{k}(t),\quad\forall k\in\mathcal{K},\ t\in\mathcal{T}$ (1g)
$\displaystyle\sum_{n\in\mathcal{N}}\sum_{k\in\mathcal{K}}Y_{n}^{k}(t)u_{n}=V(t)+S(t),\quad\forall t\in\mathcal{T}$ (1h)
$\displaystyle E^{k}_{min}\leq E^{k}(t)\leq E^{k}_{max},\quad\forall k\in\mathcal{K},\ t\in\mathcal{T}$ (1i)
$\displaystyle X_{i}^{k}(t)\in\{0,1\},\quad\forall i\in\mathcal{S},\ k\in\mathcal{K},\ t\in\mathcal{T}$ (1j)
$\displaystyle Y_{n}^{k}(t)\in\{0,1\},\quad\forall n\in\mathcal{N},\ k\in\mathcal{K},\ t\in\mathcal{T}$ (1k)
$\displaystyle Z^{k}(t)\in\{0,1\},\quad\forall k\in\mathcal{K},\ t\in\mathcal{T}$ (1l)
$\displaystyle 0\leq S(t)\leq g(t),\quad\forall t\in\mathcal{T}$ (1m)
$\displaystyle E^{k}(0)=e_{0}^{k},\quad\forall k\in\mathcal{K}$ (1n)
$\displaystyle E^{k}(T)=e_{0}^{k},\quad\forall k\in\mathcal{K}.$ (1o)
The objective in equation (1a) aims to minimize the daily electricity cost of
recharging the bus fleet. Constraint (1b) ensures that a bus is either
charging, serving a trip, or parked in the depot (without charging).
Constraint (1c) ensures that all the required daily trips will be served by a
bus. Constraint (1d) ensures that one unique bus will serve each trip (i.e., a
trip cannot be interrupted to switch buses). Constraint (1e) ensures that a
bus can only occupy one charger per time slot. Constraint (1f) guarantees that
if a bus is occupying a charger, then it is charging. Constraint (1g)
calculates the energy level of each bus in each time epoch. Specifically, the
energy level at time $t$ is equal to the energy level at time $t-1$ plus the
charged energy if the bus was charging or minus the spent energy if the bus
was serving a trip. Constraint (1h) ensures that buses obtain electricity from
either the grid or on-site solar. Constraint (1i) ensures that the buses
operate above a desired minimum energy threshold. Constraints (1j)-(1l) are
binary constraints on the decision variables. Constraint (1m) ensures that the
solar energy used by the bus fleet is less than or equal to available solar
generation at time $t$. Lastly, constraint (1n) sets the initial energy of
each bus and constraint (1o) ensures that the energy level of the fleet
returns to the initial value so the same route assignments and charge schedule
can be used for the next day.
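To make the formulation concrete, the sketch below builds the same decision variables and constraints with the open-source PuLP modeling package and its bundled CBC solver. All input data (trips, chargers, rates, solar) are small illustrative placeholders rather than Marguerite system parameters, and the energy variable is indexed so that $E^{k}(0)$ is the initial level as in (1n); the actual experiments in Section III were run in Matlab with CVX and Mosek.

```python
# A compact PuLP sketch of the MILP (1a)-(1o); all input data below are
# illustrative placeholders, not the Marguerite system parameters.
import pulp

T = 24                                  # time steps (hourly here)
K = range(2)                            # buses
N = range(1)                            # chargers
trips = {0: (2, 5), 1: (8, 12)}         # trip i -> (a_i, b_i)
d = {0: 10.0, 1: 8.0}                   # energy use per step on trip i
u = {0: 40.0}                           # charger rate u_n per step
p = [0.08]*8 + [0.16]*10 + [0.08]*6     # electricity prices p(t)
g = [0.0]*6 + [30.0]*8 + [0.0]*10       # solar forecast g(t)
Emin, Emax, e0 = 40.0, 200.0, 150.0

m = pulp.LpProblem("bus_fleet", pulp.LpMinimize)
X = pulp.LpVariable.dicts("X", (trips, K, range(T)), cat="Binary")
Y = pulp.LpVariable.dicts("Y", (N, K, range(T)), cat="Binary")
Z = pulp.LpVariable.dicts("Z", (K, range(T)), cat="Binary")
E = pulp.LpVariable.dicts("E", (K, range(T+1)), Emin, Emax)  # (1i) via bounds
V = pulp.LpVariable.dicts("V", range(T), 0)
S = pulp.LpVariable.dicts("S", range(T), 0)

m += pulp.lpSum(p[t]*V[t] for t in range(T))                       # (1a)
for t in range(T):
    for k in K:
        m += Z[k][t] + pulp.lpSum(X[i][k][t] for i in trips) <= 1  # (1b)
        m += pulp.lpSum(Y[n][k][t] for n in N) == Z[k][t]          # (1f)
        m += (E[k][t+1] == E[k][t]
              + pulp.lpSum(u[n]*Y[n][k][t] for n in N)
              - pulp.lpSum(d[i]*X[i][k][t] for i in trips))        # (1g)
    for n in N:
        m += pulp.lpSum(Y[n][k][t] for k in K) <= 1                # (1e)
    m += pulp.lpSum(u[n]*Y[n][k][t] for n in N for k in K) == V[t] + S[t]  # (1h)
    m += S[t] <= g[t]                                              # (1m)
for i, (a, b) in trips.items():
    for t in range(a, b+1):
        m += pulp.lpSum(X[i][k][t] for k in K) == 1                # (1c)
    for k in K:
        for t in range(a, b):
            m += X[i][k][t+1] == X[i][k][t]                        # (1d)
for k in K:
    m += E[k][0] == e0                                             # (1n)
    m += E[k][T] == e0                                             # (1o)

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```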
### II-B Behind-the-Meter Solar Integration
To exploit free on-site solar energy and to avoid injecting excess power back
into the distribution grid, the fleet dispatcher prioritizes recharging the
buses during periods when solar generation is available. Only if there is not
enough solar energy should the fleet dispatcher purchase electricity
from the grid. As stated in Section II-A, to accommodate behind-the-meter
solar integration, the dispatcher’s MILP formulation makes use of a daily
solar forecast, $g(t)|_{t=1,\dots,T}$. This can be estimated from forecast
models, including those that use weather forecasts, and previous years’ solar
irradiance data. We note that if the solar generation is over-estimated, then
the fleet will have to purchase more expensive grid energy potentially during
peak times such as midday. As such, a conservative estimate is preferred as
cheaper electricity can be procured in the late night period. Future work
could investigate moving-horizon solution methods to account for stochastic
solar generation and update the route and charge assignments in real-time as
solar energy data becomes available.
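One simple way to obtain such a conservative estimate is to take a low per-time-step quantile of historical daily generation profiles; a small sketch, where the placeholder array `hist` stands in for measured profiles:

```python
# Conservative day-ahead solar forecast: a low per-time-step quantile of
# historical daily profiles (rows: days, columns: time steps). The data
# below are illustrative placeholders for measured generation.
import numpy as np

rng = np.random.default_rng(7)
hist = np.clip(rng.normal(500, 150, size=(30, 288)), 0, None)
g = np.quantile(hist, 0.25, axis=0)   # 25th percentile per time step
```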
## III Case Study
As stated in the introduction, the motivation for the proposed MILP for
electric bus fleets is the real-world Stanford Marguerite Shuttle Transit
System (Figure 1). The Marguerite Shuttle System is free, open to the public,
and operates seven days a week all year traversing the Stanford campus and
surrounding areas. More specific information can be found at
https://transportation.stanford.edu/marguerite.
### III-A Stanford Marguerite Shuttle System Information
Currently, the Marguerite fleet consists of 23 diesel buses and 38 electric
buses from BYD split into 10 K7 models with battery capacity of 197kWh, 10 K9
models and 18 K9M models, both with 324kWh battery capacity. Additionally, the
central depot is equipped with 23 double port electric bus chargers where each
port can deliver up to 40kW. Each bus can be charged from one or two ports for
a total power of 80kW. For the electricity rates, we consider PG&E’s E-20
electricity rate structure for off-peak, partial-peak, and peak hours. The
electricity rates are given in Table I.
TABLE I: PG&E E-20 Rate Structure
Time Interval | Label | Price
---|---|---
12:00am-8:30am | Off-Peak | $0.08422/kWh
8:30am-12:00pm | Partial-Peak | $0.11356/kWh
12:00pm-6:00pm | Peak | $0.16127/kWh
6:00pm-9:30pm | Partial-Peak | $0.11356/kWh
9:30pm-12:00am | Off-Peak | $0.08422/kWh
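For example, with five-minute time steps ($T=288$), the rate structure of Table I can be expanded into the price vector ${\bf p}$ used in objective (1a); a short sketch using the E-20 values from the table:

```python
# Build the 5-minute TOU price vector p(t) from Table I (288 steps per day).
import numpy as np

def minutes(hhmm):
    h, m = map(int, hhmm.split(":"))
    return 60*h + m

# (start, end, $/kWh) periods covering the whole day, per Table I
periods = [("00:00", "08:30", 0.08422),
           ("08:30", "12:00", 0.11356),
           ("12:00", "18:00", 0.16127),
           ("18:00", "21:30", 0.11356),
           ("21:30", "24:00", 0.08422)]

step = 5                                    # minutes per time step
p = np.empty(24*60 // step)
for start, end, rate in periods:
    p[minutes(start)//step : minutes(end)//step] = rate
print(len(p), p[0], p[150])                 # 288 steps; off-peak; peak
```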
Furthermore, the Marguerite Shuttle system serves up to 20 unique routes on
any given day. Across all 20 routes, 15 of them are mainly fulfilled by
electric buses, meaning that the electric bus fleet is required to make 352
trips per day, during weekdays. The specific routes and mileages are listed in
Table II. For the purposes of this numerical example, the solar forecast used
was an average daily solar generation calculated from October 2019 with a
maximum generation of 1 MW. The solar forecast is displayed in Figure 2.
TABLE II: Stanford Marguerite Shuttle Route Information
Route Name | Daily Trips | Trip Miles
---|---|---
C Line | 33 | 7.00
C Limited | 11 | 4.60
MC Line (AM/PM) | 46 | 3.00
MC Line (Mid Day) | 11 | 5.10
P Line (AM/PM) | 56 | 2.50
P Line (Mid Day) | 11 | 4.00
Research Park (AM/PM) | 24 | 10.40
X Express (AM) | 12 | 1.20
X Line | 44 | 4.60
X Limited (AM) | 10 | 2.00
X Limited (PM) | 10 | 1.50
Y Express (PM) | 20 | 1.20
Y Line | 44 | 4.60
Y Limited (AM) | 10 | 2.40
Y Limited (PM) | 10 | 2.00
Totals | 352 trips/day | 1431.50 miles/day
Figure 2: Average daily solar generation for a 1 MW on-site installation. Data
averaged from CAISO renewable database in October 2019.
### III-B Simulation Results
The proposed MILP was implemented in Matlab making use of CVX and Mosek. All
numerical experiments were run on a laptop with 16 GB of RAM and 3.5 GHz Intel
i7 processor. This section reports on the charging schedule, route
assignments, and cost savings when comparing the proposed MILP solution with
on-site solar generation, without on-site solar generation, and the status quo
(i.e., the status quo is the actual operations of the Stanford Marguerite
Fleet from 7-October-2019) which does not yet exploit free on-site solar
generation.
Figure 3 presents the energy levels of each bus in the fleet during the day
when the dispatch is generated through our proposed MILP. Time on the x-axis
begins at 5:00am, as this is the start of the earliest route that must be
fulfilled. The left plot shows the energy levels of the buses when the MILP is
not utilizing on-site solar generation. The right plot shows the battery
levels of the buses when the MILP accounts for on-site solar generation. It
will become more clear when examining Figure 4 that the buses charge more
during midday in the right plot than the left, to make use of the free on-site
solar.
Figure 3: Left: Battery levels for each electric bus when considering a fleet
without available on-site solar. Right: Battery levels for each electric bus
when optimizing with available on-site solar generation.
Figure 4 presents the total charging power of the fleet across the entire day.
The red curve presents the total charging power for the MILP solution that
does not exploit on-site solar generation. Conversely, the blue plot shows the
fleet’s total charging power from the MILP solution that does account for on-
site solar generation. It is clear from this plot that the solution that
accounts for on-site solar (blue) is able to charge in the middle of the day
when solar is abundant; however, the solution that does not exploit solar
(red) does not charge during the midday as the electricity prices are highest
at this time. Instead, the fleet has a spike in charging power in the evening
when electricity rates are decreased. This large transient in the evening
could be detrimental to grid stability, increase in harmonics, accelerate
aging of grid assets (i.e. transformers) and could potentially lead to demand
charges for the fleet dispatcher due to high power consumption. As such, the
solution making use of on-site solar generation with a forecasting method is
preferable.
Figure 4: Total charging power of the fleet throughout the day. Blue: Solution
accounting for on-site solar generation. Red: Solution does not include on-
site solar generation.
Last, Figure 5 presents the daily electricity costs for the three different
test cases. Case A: Status Quo. We had access to the data from the operations
of the Stanford Marguerite fleet on 7-October-2019 and calculated the cost of
charging the fleet under the E-20 rate structure. As such, under normal
operation, the daily operational cost was $715.10 USD. Case B corresponds to
the solution of the proposed MILP with the same routes, buses, and chargers as
Case A; however, the mobility-aware solution reassigned buses to new trips and
rescheduled the charging of each bus. In Case B, the MILP solution did not
account for on-site solar and the daily cost was $267.90 USD. Last, Case C was
identical to Case B; however, the MILP accounted for the on-site solar
generation and had access to the daily solar forecast. As such, the daily cost
was reduced to $61.89 USD. From these results, it is evident that the fleet
dispatcher benefits from the MILP formulation for routing and charging (a
$62.5\%$ decrease in cost in Case B and a $91.3\%$ decrease in Case C).
Figure 5: Price comparison for three different regimes. Case A: Status Quo,
electric bus charging data obtained from the real implementation (Stanford
Marguerite Shuttle) on 7-Oct-2019. Case B: Mobility-Aware MILP solution for
the same routes and buses as Case A, without on-site solar generation. Case C:
Mobility-Aware MILP solution for the same routes and buses as Case A, with
on-site solar generation.
## IV Conclusion
In this paper, we investigated the joint route assignment and charge
scheduling problem of a transit system dispatcher operating a fleet of
electric buses in order to maximize solar energy integration and reduce energy
costs. We considered a complex bus transit system with preexisting routes,
limited charging infrastructure, limited number of electric buses, and time-
varying electricity rates. We presented a mixed integer linear program (MILP)
that yields route assignments and charging schedules using daily solar
forecasts. We presented numerical results from a real-world case study with
Stanford University’s Marguerite Shuttle to demonstrate the cost-saving
benefits of our solution and highlight the significant cost savings compared
to the status quo.
Future work includes investigating a moving-horizon solution approach to
account for stochastic solar generation. Additionally, we would like to add
traditional diesel routes to the optimization to further minimize emissions
and to expand the clean operation of the electric bus fleet. Further future
work can include performing field test experiments with real buses during
operational hours, determining the optimal solar capacity to fully charge the
electric bus fleet, and quantifying the value and size of an on-site solar and
battery combination for resiliency.
## Acknowledgment
The authors would like to thank the Stanford Transportation team for the
support, discussions, and information about operations. This work was funded
by the California Energy Commission under grant EPC-17-020. SLAC National
Accelerator Laboratory is operated for the US Department of Energy by Stanford
University under Contract DE-AC02-76SF00515.
## References
* [1] J. Horrox and M. Casale, “Electric Buses: Lessons from Cities Pioneering Clean Transportation,” October 2019, U.S. PIRG Education Fund, Environment America Research & Policy Center, Frontier Group.
* [2] “Greenhouse gases, Regulated Emissions, and Energy use in Transportation (GREET) Model,” Argonne National Laboratory. https://greet.es.anl.gov/.
* [3] “California transitioning to all-electric public bus fleet by 2040,” December 2018, California Air Resources Board. https://ww2.arb.ca.gov/.
* [4] “Battery Bus Range - It’s All in the Math,” Mass Transit. https://www.masstransitmag.com/bus/article/12131451/battery-bus-range-its-all-in-the-math.
* [5] M. Rogge, E. van der Hurk, A. Larsen, and D. U. Sauer, “Electric bus fleet size and mix problem with optimization of charging infrastructure,” _Applied Energy_ , vol. 211, pp. 282 – 295, 2018. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0306261917316355
* [6] A. Kunith, R. Mendelevitch, and D. Goehlich, “Electrification of a city bus network—an optimization model for cost-effective placing of charging infrastructure and battery sizing of fast-charging electric bus systems,” _International Journal of Sustainable Transportation_ , vol. 11, no. 10, pp. 707–720, 2017. [Online]. Available: https://doi.org/10.1080/15568318.2017.1310962
* [7] A. Houbbadi, R. Trigui, S. Pelissier, E. Redondo-Iglesias, and T. Bouton, “Optimal scheduling to manage an electric bus fleet overnight charging,” _Energies_ , vol. 12, no. 14, p. 2727, 2019.
* [8] N. Qin, A. Gusrialdi, R. P. Brooker, T. Ali _et al._ , “Numerical analysis of electric bus fast charging strategies for demand charge reduction,” _Transportation Research Part A: Policy and Practice_ , vol. 94, pp. 386–396, 2016.
* [9] Y. Wang, Y. Huang, J. Xu, and N. Barclay, “Optimal recharging scheduling for urban electric buses: A case study in davis,” _Transportation Research Part E: Logistics and Transportation Review_ , vol. 100, pp. 115–132, 2017.
* [10] H. Chen, Z. Hu, Z. Xu, J. Li, H. Zhang, X. Xia, K. Ning, and M. Peng, “Coordinated charging strategies for electric bus fast charging stations,” in _2016 IEEE PES Asia-Pacific Power and Energy Engineering Conference (APPEEC)_. IEEE, 2016, pp. 1174–1179.
* [11] T. Paul and H. Yamada, “Operation and charging scheduling of electric buses in a city bus route network,” in _17th International IEEE Conference on Intelligent Transportation Systems (ITSC)_ , Oct 2014, pp. 2780–2786.
* [12] M. Janovec and M. Koháni, “Exact approach to the electric bus fleet scheduling,” _Transportation Research Procedia_ , vol. 40, pp. 1380–1387, 2019.
Dedicated to Anna and Jack
Thomassen formulated the following conjecture: Every $3$-connected cubic graph has a red-blue vertex coloring such that the blue subgraph has maximum degree at most $1$
(that is, it consists of a matching and some isolated vertices) and the red
subgraph has minimum degree at least $1$ and contains no $3$-edge path.
Since all monochromatic components are small in this coloring and there is a certain irregularity, we call such a coloring crumby.
Recently, Bellitto, Klimošová, Merker, Witkowski and Yuditsky [2] constructed an infinite family refuting the above conjecture.
Their prototype counterexample is $2$-connected, planar, but contains a $K_4$-minor and also a $5$-cycle.
This leaves the above conjecture open for some important graph classes: outerplanar graphs, $K_4$-minor-free graphs, bipartite graphs.
In this regard, we prove that $2$-connected outerplanar graphs, subdivisions of $K_4$ and $1$-subdivisions of cubic graphs admit crumby colorings.
A subdivision of $G$ is genuine if every edge is subdivided at least once.
We show that every genuine subdivision of any subcubic graph admits a crumby coloring.
We slightly generalise some of these results and formulate a few conjectures.
§ INTRODUCTION
Our notations and terminology mostly follow the Graph Theory book by Bondy and Murty [3].
In particular, we call a graph subcubic if it has maximum degree at most 3 and $P_k$ denotes a path on $k$ vertices.
Thomassen [8] gave an intricate inductive proof of the following result, a special case of Wegner's conjecture [10]: the square of every planar cubic graph is $7$-colorable.
In the same paper, Thomassen formulated an attractive conjecture, which would imply the aforementioned result.
Every $3$-connected cubic graph has a red-blue vertex coloring such that
the blue subgraph has maximum degree at most $1$
(that is, it consists of a matching and some isolated vertices) and the red
subgraph has minimum degree at least $1$ and contains no $3$-edge path.
Since every monochromatic subgraph is small, while the conditions in the two colors are asymmetric and thus make the coloring somewhat irregular, we call this a crumby coloring.
From a classical graph decomposition point of view, we are seeking graph classes such that every member admits a crumby coloring.
As was remarked early on, the $3$-prism does not admit a crumby coloring. For some time, it looked as if this were the only counterexample.
Supporting this, Barát [1] showed that every subcubic tree has a crumby coloring.
However, Bellitto et al. [2] found a construction that produces an infinite family of $3$-connected cubic counterexamples and also $2$-connected planar graphs without crumby colorings.
This fact gives evidence that the intricate induction by Thomassen is somewhat unavoidable.
On the other hand, it leaves open the possibility that crumby colorings might exist for some important graph classes. For instance, outerplanar graphs or bipartite graphs.
Indeed, in Section <ref> we show that any $2$-connected subcubic outerplanar graph admits a crumby coloring even if an arbitrary vertex's color is prescribed.
The fact that we can prescribe the color of a vertex is useful in the following sense. We believe that crumby colorings exist for every subcubic outerplanar graph.
However, there are various difficulties in extending the results on $2$-connected graphs to all outerplanar graphs.
In a general outerplanar graph there might be trees attached to $2$-connected blocks or between them.
Since Conjecture <ref> holds for trees, there is some hope of combining these two results as building bricks; this is where the extra freedom of prescribing the color of a vertex comes into the picture.
The following theorem is a straightforward strengthening of a result of Barát [1]. It is routine to check that the original proof carries over verbatim to this version.
Every subcubic tree admits a crumby coloring such that the color of a leaf is prescribed.
We strengthen this result further in Section <ref>.
This allows us to significantly decrease the number of problematic attached trees.
As a weakening of Conjecture <ref>, we conjecture that every $K_4$-minor-free graph admits a crumby coloring. This class is interesting for several reasons. Since outerplanar graphs are $K_4$- and $K_{2,3}$-minor-free, this would be a natural extension step from outerplanar graphs. It also accords with the fact that all known counterexamples to Conjecture <ref> contain $K_4$-minors. In contrast, we show a crumby coloring of any subdivision of $K_4$ in Section <ref>.
However, we first prove that a special class of bipartite graphs admit crumby colorings in Section <ref>.
They are the $1$-subdivisions of cubic graphs, that arise from cubic graphs by adding an extra vertex on each edge.
In this way, we form bipartite graphs, where the vertices in one class have degree 2 and in the other class degree 3.
Motivated by these observations, we introduced the notion of a genuine subdivision of a graph $G$ in the latter part of Section <ref>. That is, a graph $H$ which one gets from $G$ by subdividing every edge of $G$ with at least one vertex. As a generalization, we prove that every genuine subdivision of any subcubic graph admits a crumby coloring.
A crucial idea in both proofs is to use a maximum matching of the original graph. To this end, we employ the famous Edmonds-Gallai decomposition theorem.
§ BIPARTITE GRAPHS AND SUBDIVISIONS
Despite the infinite family of counterexamples in [2], we still believe that Conjecture <ref> holds for most subcubic graphs.
We pose the following conjecture.
Every subcubic bipartite graph admits a crumby coloring.
We can prove this for a special class of bipartite graphs, where the degrees are all 2 in one class and 3 in the other class.
In the proof, we apply the Edmonds-Gallai decomposition theorem [4, 5], which gives us information about the structure of the maximum matchings of a graph $G$.
We recall that $P_k$ denotes a path with $k$ vertices and $N(X)$ denotes the set of neighbors of a vertex set $X$.
A graph $G$ is hypomatchable or factor-critical if for every vertex $x$, the graph $G-x$ has a perfect matching.
Let $G$ be a graph and let $A\subseteq V(G)$ be the collection of all vertices $v$ such that there exists a maximum size matching which does not cover $v$.
Set $B=N(A)$ and $C=V(G)\setminus (A \cup B)$. Now
$(i)$ Every odd component $O$ of $G-B$ is hypomatchable and $V(O)\subseteq A$.
$(ii)$ Every even component $Q$ of $G-B$ has a perfect matching and $V(Q)\subseteq C$.
$(iii)$ For every nonempty $X\subseteq B$, the set $N(X)$ contains vertices in more than $|X|$ odd components of $G-B$.
In what follows, we study subdivisions of cubic graphs.
If we add precisely one new vertex on each edge, then the resulting graph is a $1$-subdivision.
We support Conjecture <ref> by proving the following theorem.
Let $S(G)$ be the $1$-subdivision of a cubic graph $G$.
The bipartite graph $S(G)$ admits a crumby coloring.
The idea of the proof is to color the original vertices (in $G$) red and color the subdivision vertices blue.
If $G$ admits a perfect matching $M$, then we recolor the subdivision vertices on $M$ to red.
This results in a crumby coloring consisting of red $P_3$-s and blue singletons.
We refer to this idea later as the standard process.
For instance, every 2-edge-connected cubic graph $G$ admits a perfect matching by Petersen's Theorem.
If the graph $S(G)$ is the 1-subdivision of such $G$, then the standard process gives a crumby coloring of $S(G)$.
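The standard process is easy to carry out mechanically; a small sketch using networkx, where the Petersen graph serves as an illustrative cubic, bridgeless example:

```python
# A sketch of the "standard process" on a concrete example: the Petersen
# graph is cubic and bridgeless, so it has a perfect matching.
import networkx as nx

G = nx.petersen_graph()
M = nx.max_weight_matching(G, maxcardinality=True)  # perfect matching (5 edges)

S = nx.Graph()                                      # the 1-subdivision S(G)
for u, v in G.edges():
    S.add_edge(u, ("sub", u, v))
    S.add_edge(("sub", u, v), v)

# original vertices red, subdivision vertices blue ...
color = {w: "blue" if isinstance(w, tuple) else "red" for w in S}
# ... then recolor the subdivision vertex of every matching edge red
for u, v in M:
    key = ("sub", u, v) if ("sub", u, v) in S else ("sub", v, u)
    color[key] = "red"

red = S.subgraph([w for w in S if color[w] == "red"])
blue = S.subgraph([w for w in S if color[w] == "blue"])
print(all(len(c) == 3 for c in nx.connected_components(red)),  # red P3-s
      all(deg == 0 for _, deg in blue.degree()))               # blue singletons
```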
In what follows, we modify this simple idea to the general case, where $G$ is any cubic graph.
If $G$ does not possess a perfect matching, we can still consider a maximum size matching in $G$ and use the Edmonds-Gallai decomposition.
Let $G$ be a cubic graph, and let $B$ be the set given by the Edmonds-Gallai theorem.
Any isolated vertex in $B$ must be connected to at least two odd components of $G-B$.
The third edge might go to a third odd component, an even component or to one of the first two odd components.
Initially, let every vertex of $G$ be red and
every subdivision vertex blue.
We recolor a few vertices as follows.
In every even component, there exists a perfect matching and we recolor the subdivision vertices on the matching edges to red.
Consider the vertex sets $A$ and $B$ of the Edmonds-Gallai decomposition.
Contract the components of $A$ to vertices to get $A^*$.
The bipartite graph $(A^*,B)$ satisfies the Hall-condition by property $(iii)$. Therefore, we find a matching $M$ covering $B$.
We recolor the subdivision vertices of the matching edges in $M$ to red.
We continue with the odd components corresponding to the vertices of $A^*$ saturated by $M$.
In these components, we use property $(i)$ and find an almost perfect matching.
The subdivision vertices on these matching edges are colored red as well.
So far, we have only created red $P_3$-s separated by blue singletons.
What is left to consider is the union of odd components corresponding to unsaturated vertices of $A^*$.
Let $H$ be such a component.
Let $x$ be an arbitrary vertex and consider a perfect matching in $H-x$ by property $(i)$.
We recolor the subdivision vertices on these matching edges to red.
Let $y$ be a neighbor of $x$ in $H$ and denote the subdivision vertex on the edge $xy$ by $v_{xy}$.
Let $x$ and $v_{xy}$ be red and $y$ blue.
Let us check that the neighborhood of $y$ remains a crumby coloring.
Since there was a matching edge $zy$ and both $z$ and $v_{yz}$ are red, this is appropriate.
There is a third edge $wy$ incident to $y$, and the subdivision vertex on this edge is blue forming a blue $P_2$ together with $y$, this is also appropriate.
After doing this for every remaining odd component $H$, a crumby coloring of $S(G)$ arises.
Next, we complement the previous result.
Here we allow all longer subdivisions.
Let $G$ be a cubic graph.
Let $H$ be an arbitrary subdivision of $G$ such that every edge is subdivided at least twice.
The graph $H$ admits a crumby coloring.
Let us color the original vertices of $G$ blue.
We find that almost any subdivided edge admits a crumby coloring such that the end-vertices are singleton blues.
The only exception is the case with 4 subdivision vertices.
In particular, we use the following colorings for the internal vertices:
$rr$, $rrr$, $rrrb$, $rrbrr$, $rrrbrr$, $rrbbrrr$, $rrbrrbrr$ etc.
Let us use these particular colorings on $H$.
We might create some blue stars with 2 or 3 leaves.
Apart from that, the coloring satisfies the crumby conditions.
Now we recolor the problematic blue centers of these stars red.
If vertex $c$ is such a center, and there was a blue 3-star at $c$, then we recolor the neighbor $n_1$ of $c$ red and recolor the neighbor $n_2$ of $n_1$ blue.
If vertex $c$ was the center of a blue 2-star, then we have to consider two cases according to the red neighbor $v$ of $c$.
If $v$ was the end-vertex of a red $P_3$, then we do the same recoloring as in the previous case, but also recolor $v$ to blue.
If $v$ was the end-vertex of a red $P_2$, then the recoloring of $c$ creates a red $P_3$ and we are done.
The process terminates with a crumby coloring of $H$.
Motivated by the results of this section, we introduce the following notion.
A graph $H$ is a genuine subdivision of a graph $G$ if every edge of $G$ contains at least one subdivision vertex.
Every genuine subdivision $S(G)$ of any subcubic graph $G$ admits a crumby coloring.
We may assume that $G$ is connected (otherwise one can repeat the same argument on the connected components).
Our proof uses the same ideas as the proof of Theorem <ref>. To make the argument more transparent,
we prove a lemma assuming $G$ has a perfect matching.
Let $G$ be a subcubic graph, which has a perfect matching.
Every genuine subdivision $S(G)$ of $G$ admits a crumby coloring.
First, let us color the vertices of $G$ red and suppose that $G$ has a perfect matching $M$. Our aim is to color the vertices on all the edges outside of $M$, satisfying the crumby conditions, such that every vertex of $G$ remains red, but none of them is the end of a red $P_3$. In the last step of the proof, we show that the vertices on the edges of $M$ can be colored in such a way that the possible problems (i.e., remaining isolated red vertices of $G$) disappear.
Since $G$ is a subcubic graph, the subgraph $G-M$ is a disjoint union of cycles and paths.
In both cases, we give a crumby coloring by taking one edge of $G-M$ at a time (which is a path on at least 3 vertices in $S(G)$) and coloring its vertices such that none of the endpoints becomes the end of a red $P_3$, but most of them get a red neighbor. In Table <ref>, we summarize colorings of the paths $P_k$ on $k$ vertices for different purposes, which we use later. Note that we highlighted in capital letters those cases in which the aim is not attainable (for $k\ge 8$, all of the aims are attainable).
\[
\begin{array}{|c|c|c|c|}
\hline
k & \makecell{\mathrm{all~of~the} \\ \mathrm{endpoints~are} \\ \mathrm{singleton~red}} & \makecell{\mathrm{all~of~the} \\ \mathrm{endpoints~are} \\ \mathrm{in~a~red~} K_2} & \makecell{\mathrm{one ~endpoint~is} \\ \mathrm{a~singleton~red,} \\ \mathrm{and~the~other}\\ \mathrm{is~in~a~red~} K_2} \\
\hline
3 & rbr & RBR & RBR \\
\hline
4 & rbbr & RRBR & rrbr \\
\hline
5 & RRBBR & rrbrr & rrbbr \\
\hline
6 & rbrrbr & rrbbrr & RRBBRR \\
\hline
7 & rbrrbbr & RRBRRBR & rrbrrbr \\
\hline
8 & rbbrrbbr & rrbrrbrr & rrbbrrbr \\
\hline
\end{array}
\]
Crumby colorings of $P_k$ for different purposes
In the first step of the proof, we go through the edges of all the path components (starting from a leaf) and all cycles (in a cyclic order) and use the second column of Table <ref> on every edge in order to leave as few red singletons as possible. Thus a red singleton either has degree 3 and an incident edge outside $M$ with exactly one subdivision vertex; or it has degree 2 and the only edge of its path component has 1, 2 or 5 subdivision vertices; or it has degree 1, in which case we have not considered it yet.
In the final correction step, we color the vertices along the edges of $M$ using the second column of Table <ref>. We emphasize that after this, there is still no red vertex of $G$ that is an end of a red $P_3$ (some of them can be the middle vertex of a red $P_3$). There are four situations in which this does not eliminate all the singleton red vertices. In the sequel, we resolve each of these situations.
Case 1: Both $u$ and $v$ are red singletons and there is exactly 1 subdivision vertex $x$ along $uv$. If $d(u)=2$, then we color $u$ to blue and $x$ to red.
If $d(u)=3$, then there is an incident edge with 1 subdivision vertex $y$, hence we can recolor $y$ and $x$ red and $u$ blue.
Case 2: Both $u$ and $v$ are red singletons and there are exactly 5 subdivision vertices $x_1,x_2,x_3,x_4,x_5$ along $uv$. If $d(u)=2$, then we color $u$, $x_3$ and $x_4$ blue and $x_1$, $x_2$ and $x_5$ red. If $d(u)=3$, then there is an incident edge with 1 subdivision vertex $y$, hence we can recolor $y$ red, and use the same coloring along $uv$ as before.
Case 3: Both $u$ and $v$ are red singletons and there are exactly 2 subdivision vertices $x_1,x_2$ along $uv$. We color $u$ and $v$ blue and $x_1$ and $x_2$ red. If $u$ or $v$ has degree 3, then there is an incident edge with 1 subdivision vertex $y$, hence we can recolor $y$ red.
Case 4: Suppose that $v$ is a red singleton, $u$ has a red neighbor, and there is exactly 1 subdivision vertex $x$ along $uv$. We color $v$ blue and $x$ red. Again, if $u$ or $v$ has degree 3, then there is an incident edge with 1 subdivision vertex $y$, hence we can recolor $y$ red.
Notice that the recolorings in the previous cases cannot create a large blue component, since the other edges incident to $u$ cannot have two blue vertices next to $u$. Lastly, if $d(u)=d(v)=1$, then by the connectivity assumption, this path is the whole $G$ hence it has a crumby coloring. This concludes the case that $G$ has a perfect matching.
By Lemma <ref>, we may assume that $G$ does not have a perfect matching, and we use the Edmonds-Gallai decomposition. The idea is to fix a maximal matching $M$ of $G$ and color the vertices along the edges of the subgraph $G-M$ first. Let $A' \subset A$ consist of those odd components which have exactly one vertex unsaturated by $M$. For every $O_i\in A'$, denote the uncovered vertex by $z_i$. We can perform the first step of the proof of Lemma <ref> on the restricted subgraph of $S(G)$ on the vertex set $V' = V(S(G)) \setminus \bigcup_{O_i \in A'} \{z_i\}$. This results in an almost crumby coloring; the only exceptions can be some red singletons of $G$. Note that there may be some red $P_3$ components, but the vertices of $G$ can never be their endpoints.
In the second step, we color the vertices along the edges incident to each $z_i$. As before, we use the second column of Table <ref> to color those vertices, but make sure that $z_i$ surely gets a red neighbor. This can be done, except when all of the edges incident with $z_i$ have exactly 1 subdivision vertex. But in that case, we recolor these subdivision vertices red and $z_i$ blue.
Now only the vertices along the edges of $M$ remain uncolored. It is time to perform the final correction step of the proof of Lemma <ref>, which concludes the proof.
One can observe that in the proof of Theorem <ref> we actually do not rely on the existence of subdivision vertices on those edges of the original subcubic graph $G$ that have an endpoint of degree $1$.
§ OUTERPLANAR GRAPHS
We know that Conjecture <ref> holds for trees and fails in general for $2$-connected planar graphs.
A natural minor-closed class between the aforementioned classes is the class of outerplanar graphs.
As the first step, we prove the following.
Let $G$ be a $2$-connected subcubic outerplanar graph and let $v$ be a vertex of $G$. We may prescribe the color of $v$ and find a crumby coloring of $G$.
We consider $G$ together with its outerplanar embedding. An ear decomposition of a $2$-connected graph $G$ is a series of graphs $G_0, G_1, \dots, G_k=G$ such that $G_0$ is a cycle and $G_i=G_{i-1}\cup E_i$, where each $E_i$ is a path that has its two endpoints in common with $G_{i-1}$. We may assume that $G_0$ is a bounded face containing the vertex $v$, and if $d(v)=3$, then let $v$ be an endpoint of $E_1$.
Since $G$ is a 2-connected outerplanar graph, it has an open ear decomposition such that on each ear the attachment vertices (endpoints) are adjacent.
The endpoints of the ears are different by the subcubic property.
In general, we start the coloring process with $G_0$.
There is an exceptional situation though.
If $d(v)=3$, then we immediately add the other bounded face containing $v$ as the first ear to form $G_1$.
We first show that the starting subgraph ($G_0$ or $G_1$ depending on the degree of $v$) of $G$ has a crumby coloring.
Secondly, we show that if $G_i$ has a crumby coloring, then $G_{i+1}$ also admits a crumby coloring in which the colors of the vertices of $G_i$ are unchanged except possibly the endpoints of the ear $E_{i+1}$.
This procedure leads to a crumby coloring of $G$.
During the coloring process, we establish and maintain a significant property of our crumby coloring.
Namely, we never color two adjacent vertices of degree 2 (with respect to the current subgraph) blue, unless we know that there is no later ear with this pair of endpoints. Let us call this property $(\star)$.
We use the shorthand $r$ for red and $b$ for blue.
Starting the procedure: If $d(v)=2$, then the only bounded face of $G$ containing vertex $v$ is a cycle of length $k\ge 3$. We know that $G_0=C_k$ has a crumby coloring, but we need more.
For $k=5$, we must observe that there exist adjacent vertices $x$ and $y$ in $C_5$ which are not the endpoints of any ear (vertex $x$ might be the endpoint of an ear $E_i$, but in that case $y$ is not the other endpoint).
Therefore, we color $x$ and $y$ blue and the remaining 3 vertices red in order to establish $(\star)$. The required crumby colorings of $C_k$ are shown in the following table for $k\le 8$.
For larger $k$, we obtain a crumby coloring by starting with $rrb$ and continuing with the crumby coloring of $C_{k-3}$.
\[
\begin{array}{|c|c|c|c|c|c|c|}
\hline
k & 3 & 4 & 5 & 6 & 7 & 8 \\
\hline
\mathrm{crumby~coloring~of~}C_k & rrb & rrrb & rrrbb & rrbrrb & rrbrrrb & rrrbrrrb \\
\hline
\end{array}
\]
If $k\ne 5$, then we can rotate the crumby colorings given above so that $v$ gets its prescribed color.
We notice the following for $k=5$.
If the prescribed color of $v$ is red, then we can choose two adjacent vertices $x$ and $y$ of the $5$-face containing $v$ (distinct from $v$), for which there is no (later) ear connecting them.
We color $x$ and $y$ blue and the rest red to establish $(\star)$.
If $v$ is supposed to be blue, then $(\star)$ holds immediately as we rotate the given coloring of $C_5$ to make $v$ blue.
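The recursion behind the colorings of $C_k$ is easy to make explicit; here is a minimal Python sketch (our own illustration, writing the coloring as a plain color string read around the cycle).

```python
# A sketch of the recursive construction of the crumby colorings of C_k used
# above: base cases taken from the table, "rrb" blocks prepended for larger k.
BASE = {3: "rrb", 4: "rrrb", 5: "rrrbb", 6: "rrbrrb", 7: "rrbrrrb", 8: "rrrbrrrb"}

def cycle_coloring(k: int) -> str:
    if k < 3:
        raise ValueError("a cycle has at least 3 vertices")
    return BASE[k] if k <= 8 else "rrb" + cycle_coloring(k - 3)

print(cycle_coloring(11))  # rrbrrrbrrrb
```

Each prepended $rrb$ block contributes one red $P_2$ and one blue singleton, so the crumby conditions are preserved.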
If $d(v)=3$, then we show a crumby coloring of $G_1$, the subgraph spanned by the two bounded faces containing $v$.
One endpoint of the first ear $E_1$ is $v$.
We denote the other endpoint by $u$.
Suppose that the boundary of $G_0$ is $(u, w_1, w_2, \dots, w_k, v)$, where $u$ and $v$ are adjacent, and the internal points of $E_1$ are $(z_1,z_2,\dots,z_{\ell})$ from $u$ to $v$.
Firstly, we give a crumby coloring for $k=\ell=2$.
There are two cases depending on the prescribed color of $v$, see Figure <ref>.
Crumby colorings for $k=\ell=2$
In the remaining cases, we color $u$ and $v$ differently and
assume the prescribed color of $v$ to be red.
If it was blue, then $u$ plays the role of $v$. In the following table, we summarize the initial colorings depending on the values of $k$ and $\ell$.
\[
\begin{array}{|c|c|c|c|}
\hline
~ & \makecell{\ell=3d+1 ~(d \in \mathbb{N}) \\ \mathrm{the~color~of~} \\ (z_1,z_2,\dots,z_{\ell})} & \makecell{\ell=3d+2 ~(d \in \mathbb{N}) \\ \mathrm{the~color~of~} \\ (z_1,z_2,\dots,z_{\ell})} & \makecell{\ell=3d+3 ~(d \in \mathbb{N}) \\ \mathrm{the~color~of~} \\ (z_1,z_2,\dots,z_{\ell})} \\
\hline
\makecell{k=3c+1 ~ (c\in \mathbb{N}) \\ \mathrm{let~the~color~of~} \\ (w_1,w_2,\dots,w_k) \\ \mathrm{be~} (rrb)^{c}r} & (rrb)^{d}r & \makecell{\mathrm{if~}d=0:~ br\\ \mathrm{if~}d\ge1:~ (rrb)^{d-1}rrrbr} & (rrb)^{d}rrb \\
\hline
\makecell{k=3c+2 ~ (c\in \mathbb{N}) \\ \mathrm{let~the~color~of~} \\ (w_1,w_2,\dots,w_k) \\ \mathrm{be~} \underbrace{(rrb)^{c-1}rrr}_{\mathrm{if~} c~\ge~1}br} & (rrb)^{d}r & \makecell{\mathrm{if~}d=0:~ br~(\mathrm{for~}\\ c=0,\mathrm{~see~Figure~}\ref{kl2}) \\ \mathrm{if~}d\ge1:~ (rrb)^{d-1}rrrbr} & (rrb)^{d}rrb \\
\hline
\makecell{k=3c+3 ~ (c\in \mathbb{N}) \\ \mathrm{let~the~color~of~} \\ (w_1,w_2,\dots,w_k) \\ \mathrm{be~} (rrb)^{c}rrb} & (rrb)^{d}r & \makecell{\mathrm{if~}d=0:~ br \\ \mathrm{if~}d\ge1:~ (rrb)^{d-1}rrrbr} & \makecell{\mathrm{if~}d=0:~ brr \\ \mathrm{if~}d\ge1:~ (rrb)^{d-1}rrrbrr} \\
\hline
\end{array}
\]
The crumby colorings we use if $d(v)=3$, depending on the values of $k$ and $\ell$
It is immediate that these are crumby colorings and have $(\star)$, thus the coloring process can start.
Adding a new ear: Let us assume that $G_i$ has already been colored and $(\star)$ holds.
We consider the next ear $E_{i+1}=(x,z_1,z_2,\dots,z_{\ell},y)$ of the ear decomposition.
Property $(\star)$ implies that the colors of the endpoints of this ear cannot both be blue.
We assume $x$ is red.
Case $\ell=1$, $rb$: If $x$ is not an endpoint of any red $P_3$, then we color $z_1$ red.
Otherwise, if $y$ is a singleton blue vertex, then we color $z_1$ blue. In the remaining case, we interchange the colors of $x$ and $y$ (their previous components remain admissible) and color $z_1$ red.
Case $\ell=1$, $rr$: We color $z_1$ blue.
Case $\ell=2$, $rb$:
In the following table, we summarize the possibilities and give a suitable coloring for the endpoints and the internal points of the ear.
\[
\begin{array}{|c|c|c|c|c|}
\hline
\ell=2,~rb: & \makecell{x \mathrm{~is~an~endpoint} \\ \mathrm{of~a~red~} P_3 \\ \& \\ y \mathrm{~is~a~singleton} \\ \mathrm{~blue~vertex}} & \makecell{x \mathrm{~is~an~endpoint} \\ \mathrm{of~a~red~} P_3 \\ \& \\ y \mathrm{~is~in~a~blue} \\ K_2 \mathrm{~component}} & \makecell{x \mathrm{~is~not~an~endpoint} \\ \mathrm{of~a~red~} P_3 \\ \& \\ y \mathrm{~is~a~singleton} \\ \mathrm{~blue~vertex}} & \makecell{x \mathrm{~is~not~an~endpoint} \\ \mathrm{of~a~red~} P_3 \\ \& \\ y \mathrm{~is~in~a~blue} \\ K_2 \mathrm{~component}} \\
\hline
\makecell{\mathrm{color~of~} \\ (x,z_1,z_2,y)} & brrb & brrr & rrbb & rrbr \\
\hline
\end{array}
\]
Case $\ell=2$, $rr$: If both $x$ and $y$ are red, then at most one of them can be an endpoint of a red $P_3$.
We may assume that there is no red $P_3$ in $G_i$ ending in $x$. We color $z_1$ red and $z_2$ blue.
Case $\ell=3$, $rb$: We color $z_1,z_2,z_3$ to $brr$.
Case $\ell=3$, $rr$: We may assume $x$ is not an endpoint of any red $P_3$.
If there is no (later) ear with endpoints $z_2$ and $z_3$, then we color $z_1,z_2,z_3$ to $rbb$, maintaining $(\star)$.
On the other hand, if there exists an ear with endpoints $z_2$ and $z_3$ together with $m$ internal points (denote them by
$w_1,\dots,w_m$), then we merge the two ears and add them to
$G_i$ in one step.
We give a crumby coloring of the resulting graph in the following table.
Independently of the value of $m$, we color $z_1$ and $z_3$ blue in order to avoid conflicts with the rest of the coloring.
We color $z_2$ and $w_1$ red.
The coloring of $(w_1,w_2,\dots,w_m)$ is only shown for $m\le 6$ in the following table.
For greater $m$, we use the crumby coloring for $m-3$ and add $brr$ at the end.
\[
\begin{array}{|c|c|c|c|c|c|c|}
\hline
\ell=3,~rr: & m=1 & m=2 & m=3 & m=4 & m=5 & m=6 \\
\hline
\makecell{\mathrm{color~of~} \\ (w_1,w_2,\dots,w_m)} & r & rr & rrb & rbrr & rrbrr & rrbrrr \\
\hline
\end{array}
\]
Case $\ell=4$, $rb$: We color $z_1,z_2,z_3,z_4$ to $brrr$.
Case $\ell=4$, $rr$: We color $z_1,z_2,z_3,z_4$ to $brrb$.
Case $\ell=5$, $rb$: Depending on the type of the components of $x$ and $y$ we need to color the points of the next ear a bit differently. In the following table, we summarize the possibilities and give a suitable coloring for the endpoints and the internal points of the ear.
\[
\begin{array}{|c|c|c|c|}
\hline
\ell=5,~rb: & \makecell{x \mathrm{~is~an~endpoint} \\ \mathrm{of~a~red~} P_3 \\ \& \\ y \mathrm{~is~a~singleton} \\ \mathrm{~blue~vertex}} & \makecell{x \mathrm{~is~an~endpoint} \\ \mathrm{of~a~red~} P_3 \\ \& \\ y \mathrm{~is~in~a~blue} \\ K_2 \mathrm{~component}} & \makecell{x \mathrm{~is~not~an~endpoint} \\ \mathrm{of~a~red~} P_3} \\
\hline
\makecell{\mathrm{color~of~} \\ (x,z_1,z_2,z_3,z_4,z_5,y)} & brrbrrb & brrbrrr & rrbrrrb \\
\hline
\end{array}
\]
Case $\ell=5$, $rr$: We color $z_1,z_2,z_3,z_4,z_5$ to $brrrb$.
Case $\ell=6$, $rb$: We color $z_1,z_2,z_3,z_4,z_5,z_6$ to $brrbrr$.
Case $\ell=6$, $rr$: We may assume $x$ is not an endpoint of any red $P_3$. We color
$z_1,z_2,z_3,z_4,z_5,z_6$ to $rbrrrb$.
For $\ell\ge 7$, we create a crumby coloring using the cases for smaller values of $\ell$.
We start with $brr$ and continue with the given coloring of the ear with $\ell-3$ internal points. By starting with $brr$, we reduce to a similar situation with $\ell-3$ internal points, in which $z_3$ takes over the role of $x$.
We remark that $z_3$ is in a red $K_2$ component, thus cannot be an endpoint of a red $P_3$.
A general outerplanar graph is not necessarily 2-connected. It is glued together from 2-connected blocks in a tree-like manner.
Some of the edges can form a tree hanging from a vertex of a block or connect a number of $2$-connected outerplanar components. In our case, the maximum degree 3 condition gives some extra structural information.
We are convinced that the natural extension of Theorem <ref> to all subcubic outerplanar graphs holds.
Every outerplanar graph with maximum degree $3$ admits a crumby coloring.
Considering this problem, one gets the impression that particular small trees attached to the vertices of a 2-connected outerplanar graph cause the difficulty. It turns out that most trees do not cause any problems at all.
To prove our statement, we also need the following.
Any subcubic tree $T$ admits a crumby coloring such that the color of an arbitrary vertex of degree $2$ is prescribed, unless $T=P_3$.
If $T=P_3$, then the middle vertex cannot be blue in a crumby coloring. Therefore, this is an exception. From now on, we assume that $T$ has at least 4 vertices. Every tree admits a crumby coloring by Theorem <ref>. Let us suppose that $T$ is a minimal example of a tree, which has a vertex $v$ of degree 2 such that in any crumby coloring of $T$, the color of $v$ must be red. We think of $v$ as the root, and denote the two neighbors of $v$ by $x$ and $y$.
If any of the neighbors of $v$ is of degree 2, say $x$, then we can delete the edge $vx$ and consider the two remaining trees $T_v$ rooted at $v$ and $T_x$ rooted at $x$. We get a contradiction by using Theorem <ref> with prescribed color red on $x$ and blue on $v$ in the respective trees.
Since $T$ has at least 4 vertices, we may assume that $d_T(x)=3$. As before, we get a contradiction if the color of $x$ can be red in a crumby coloring of $T_x$, since we can color $v$ blue and use Theorem <ref> on $T_v$. Therefore, let us suppose that $T_x$ is a tree, for which the degree 2 vertex $x$ can only be colored blue in a crumby coloring. Denote the neighbors of $x$ in $T_x$ by $z$ and $w$.
Due to the same reasons as above, the degree of $z$ and $w$ cannot be 2 in $T_x$. It cannot be 1 either, since in that case $T_x$ has a crumby coloring in which that leaf is prescribed red. Consequently $x$ is also red, which is a contradiction. Hence $d_{T_x}(z)=d_{T_x}(w)=3$, and by the minimality of $T$, we know that $T_z$ admits a crumby coloring such that the degree 2 vertex $z$ is blue. Now we may delete the edge $xz$, precolor the degree 1 vertex $x$ red, and find a crumby coloring of a subgraph of $T_x$. However, we can add back the edge $xz$, giving a crumby coloring of $T_x$ with red $x$, a contradiction. The same holds for $T_w$, but there is one exception: when both $T_z$ and $T_w$ are isomorphic to $P_3$. In Figure <ref>, we give a crumby coloring of $T_x$ so that $x$ is red, which concludes the proof.
A crumby coloring of $T_x$ such that $x$ is red
If $G$ is a graph that admits a crumby coloring, and $T$ is an arbitrary tree with a leaf $v$, then let $G_T$ denote a graph which we get by identifying $v$ with any vertex of $G$.
Observe that if an attachment tree $T$ is not $K_2$ or $K_{1,3}$, then it is trivial to get a crumby coloring of $G_T$. The key idea is to assign different colors to $v$ and its only neighbor $x$ inside $T$. Consider a crumby coloring of $G$, therefore the color of $v$ is given, and color $x$ differently. By Theorem <ref> and Theorem <ref> (depending on $d_T(x)$), we can extend this coloring to a crumby coloring of $T-v$ which results in a crumby coloring of $G_T$.
Therefore, it is indifferent with respect to crumby colorings to attach trees, which are not isomorphic to $K_2$ or $K_{1,3}$. In the sequel, we assume that every attachment tree is either $K_2$ or $K_{1,3}$.
Now, we prove a basic instance of Conjecture <ref> relying on Theorem <ref>.
Let $C$ be a cycle with vertices $v_1,\dots,v_k$, and attach arbitrary trees $T_i$ to the vertices $v_i$ of $C$ for $i\in I$, where $I\subseteq [k]$. The resulting graph $G$ admits a crumby coloring.
We may assume that each attachment tree is isomorphic to $K_2$ or $K_{1,3}$ by Remark <ref>.
Our arguments slightly vary depending on some properties of $G$, thus we explain them separately.
Notice that some vertices of $C$ have attachments and some do not.
In the latter case, the vertex is called empty. First, let us assume that there are no empty vertices at all.
We notice that the case of even $k$ is simple.
We color the vertices of $C$ alternately red and blue.
This gives the prescribed color of a leaf $v_i$ in the tree $T_i$. We color $T_i$ using Theorem <ref> for each $i=1,\dots,k$.
These colorings together form a crumby coloring of $G$.
Assume now that $k$ is odd.
We try to reuse the previous strategy by cutting off two consecutive vertices $v_i$ and $v_{i+1}$ and the trees $T_i$ and $T_{i+1}$ from $G$.
We notice that the remaining graph $H$ admits a crumby coloring by the previous argument. In particular, the first and last vertex on $C-\{v_i,v_{i+1}\}$ receive the same color.
For every $j$ between 1 and $k$, the tree $T_j-v_j$ admits a crumby coloring.
Let us record for every $j$ the color of $u_j$, the neighbor of $v_j$ in $T_j$.
Since $k$ is odd, there is an index $\ell$ such that $u_{\ell}$ and $u_{\ell+1}$ received the same color, say blue.
Now we color $v_{\ell}$ and $v_{\ell+1}$ red and cut the cycle $C$ by removing $\{v_{\ell},v_{\ell+1}\}$. We color $H$ as before such that we color the first and last vertex on $C-\{v_{\ell},v_{\ell+1}\}$ blue.
If $u_{\ell}$ was red, then we interchange colors accordingly.
Altogether, a crumby coloring of $G$ arises.
Assume finally that $C$ has empty vertices. Unless there are no attachment trees at all (in which case the statement is easy), we can find two consecutive vertices of $C$, say $v_1$ and $v_2$, such that there is a tree attached to $v_1$, but $v_2$ has none. We use the following algorithm to color the vertices on $C$, starting by coloring $v_1$ red and $v_2$ blue. Our aim is to color the vertices along $C$ alternately, except in one case: when after a blue vertex we color an empty vertex red.
In that case, the next vertex must also be red.
Observe that if a red vertex is non-empty, then no matter if the tree is $K_2$ or $K_{1,3}$, we can color its vertices maintaining the crumby property. If $v_{i-1}$ is blue, and $v_i$ is an empty red, then $v_{i+1}$ must also be red.
However, we can ensure that $v_{i+1}$ is not an end of a red $P_3$. Only two problems can occur during this algorithm.
Both of them might happen, when we color $v_k$.
If $v_k$ is blue, then $v_1$ might remain a red singleton. However, this cannot happen due to the existence of $T_1$. Otherwise, if $v_k$ is red, then we might create a large red component.
If $T_1=K_2$, then the leaf of $T_1$ can be blue.
Hence the red component cannot contain a red $P_4$, since $v_k$ was not an end of a red $P_3$. If $T_1=K_{1,3}$, then the center of $T_1$ must be red, which causes a problem if $v_{k-1}$ is an empty red or $T_k=K_{1,3}$. If we created a red $P_4$, then we recolor $v_1$ to blue and color the remaining vertices in $T_1$ red.
Using the ideas of the previous proofs, we can show Conjecture $\ref{outerplanar}$ for a few other classes.
For instance, this is the case if $G$ is glued together from $2$-connected pieces in a tree-like fashion by single edges or paths of any length. Actually, the paths might be replaced by any tree, as long as the first vertex outside of a $2$-connected piece has degree $2$. Even if the degree is $3$, our algorithm works except when the tree part between two $2$-connected components is precisely $P_3$, see Figure $\ref{f:fa_p3}$.
In these good cases, we use Theorem $\ref{t:treesplus}$ and Theorem $\ref{t:fa2foku}$ as follows.
We first color a $2$-connected outerplanar subgraph $G_1$.
There is at least one vertex of attachment $x_1$, where a tree $T$ is glued on $G_1$.
Let $y_1$ be the neighbor of $x_1$ in $T$, which we know has degree $1$ or $2$ in $T-x_1$.
We prescribe the color of $y_1$ to be different from that of $x_1$.
We continue this way, until the entire graph is colored.
Problematic situation between two 2-connected outerplanar components $G_1$ and $G_2$
§ SUBDIVISIONS OF THE COMPLETE GRAPH ON 4 VERTICES
Here we consider subdivisions of $K_4$, which have played an interesting role in coloring problems [6].
As a strengthening of Hajós' conjecture, Toft [9] posed the problem whether every 4-chromatic graph contains a totally odd subdivision of $K_4$.
Thomassen [7] and independently Zang [11] gave an affirmative answer.
Bellitto et al. [2] constructed planar graphs refuting Conjecture <ref>.
Characteristically, those counterexamples have $K_4$-minors.
Therefore, we study whether this property has fundamental importance.
We conjecture that every $K_4$-minor-free subcubic graph possesses a crumby coloring.
On the other hand, we show that one topological appearance of $K_4$ is not yet an obstacle.
As the core of the problem, we first consider $\le2$-subdivisions of $K_4$.
That is, every edge contains 0, 1 or 2 subdivision vertices.
It is straightforward to give a computer-assisted proof, which we did; we include it as an appendix. However, here we opt for a human proof argument.
Let $G$ be a subdivision of $K_4$ such that every edge is divided into at most $3$ parts. The graph $G$ admits a crumby coloring.
Let $V(K_4)=\{A,B,C,D\}$.
Every edge of $K_4$ may remain intact or might be subdivided by either one or two new internal vertices.
Our arguments are organized by the number of intact edges.
If there are no intact edges (genuine subdivision), then color the vertices of $K_4$ red and every subdivision vertex blue.
Since the red vertices are isolated, we must recolor some internal vertices red.
If there are two independent edges of $K_4$ with one internal vertex each, then recolor these internal vertices red.
Otherwise, there exists a vertex of $K_4$, vertex $B$ say, with at least two incident original edges $BC$ and $BD$ with two internal vertices.
There are two possibilities regarding the number of internal points on $AD$ and $AC$ as one can see in Figure <ref>.
If there is only one internal vertex on one of these edges, then change its color to red, and change the color of two internal vertices on $BD$ and $BC$ as shown. In the other case, one can create a crumby coloring just like in the picture on the right. Note that every dashed edge has at least one internal vertex and these are blue. Hence we obtain a crumby coloring.
Crumby colorings of genuine subdivisions of $K_4$, where vertex $B$ is incident with 2 edges with 2 internal vertices
Next assume there is precisely one intact edge, $AB$ say.
If $CD$ contains exactly one internal vertex $x$, then color $x$ and the vertices of $K_4$ red, and color the other internal vertices blue to get a crumby coloring.
Thus we may assume that $CD$ contains two internal vertices.
If any of the remaining 4 edges of $K_4$ contains two internal vertices, then there is a path on 7 vertices formed along these two particular edges of $K_4$.
We color the vertices of the path $rrbrrbr$ starting from $C$.
It can be extended to a crumby coloring by coloring the vertices of $K_4$ red and the internal vertices blue. In Figure <ref>, we illustrate such a coloring and also cover the only remaining case. Again all dashed edges contain blue internal vertices.
Extendable coloring of the path and a crumby coloring of the remaining case
Suppose there are exactly two intact edges of $K_4$.
If these two edges are independent, then again color the vertices of $K_4$ red and the internal points blue.
This results in a crumby coloring.
Therefore, we may assume that the intact edges are $AB$ and $BD$. If one of the edges incident to $C$ contains two internal vertices, then color the internal vertex adjacent to $C$ red along with $A,B,C,D$. Color the other vertices blue. In Figure <ref>, we give crumby colorings for the case where all three edges incident to $C$ have exactly one subdivision vertex. There are two such cases depending on the number of internal vertices on $AD$.
Two intact edges and all three edges incident to $C$ have exactly one subdivision vertex
Assume there are at least three intact edges.
Either three of them form a path on the vertices of $K_4$, or there are exactly three intact edges, which are either incident with the same vertex of $K_4$ or form a triangle. In Figure <ref>, we give crumby colorings for the latter two cases by coloring the internal vertices on the dotted edge red and on the dashed edges blue.
Precisely three intact edges incident to the same vertex or forming a triangle
Let us suppose that there is a path on the vertices of $K_4$, which consists of three intact edges.
We may assume that these are $AB$, $BC$ and $CD$.
Our idea is to color $A$ and $C$ red and $B$ blue (the color of $D$ might vary), and depending on the number of internal vertices on the remaining edges (which again form a path) color the vertices of this path suitably. Let $(i,j,k)$ denote the case, in which the number of internal vertices on $CA$, $AD$ and $DB$ are exactly $i$, $j$ and $k$, in that order.
There are three subcases depending on the value of $i$. In the following tables, we summarize the possibilities and give crumby colorings.
\[
\begin{array}{|c|c|}
\hline
(j,k) & \mathrm{color~of~the~path} \\
\hline
(0,0) & R~bb~R~R~B \\
(0,1) & R~bb~R~R~b~B \\
(0,2) & R~bb~R~R~rb~B \\
(1,0) & R~br~R~b~R~B \\
(1,1) & R~br~R~b~R~b~B \\
(1,2) & R~br~R~b~R~rb~B \\
(2,0) & R~br~R~bb~R~B \\
(2,1) & R~br~R~bb~R~b~B \\
(2,2) & R~br~R~bb~R~rb~B \\
\hline
\end{array}
\]
Crumby colorings if there are three undivided edges which form a path and $i=2$
\[
\begin{array}{|c|c|}
\hline
(j,k) & \mathrm{color~of~the~path} \\
\hline
(0,0) & R~b~R~R~B \\
(0,1) & R~b~R~R~b~B \\
(0,2) & R~b~R~R~rb~B \\
(1,0) & R~b~R~b~B~R \\
(1,1) & R~b~R~b~B~r~R \\
(1,2) & R~r~B~b~R~bb~R \\
(2,0) & R~b~R~rb~R~B \\
(2,1) & R~b~R~rb~R~b~B \\
(2,2) & R~b~R~rb~R~rb~B \\
\hline
\end{array}
\]
Crumby colorings if there are three undivided edges which form a path and $i=1$
\[
\begin{array}{|c|c|}
\hline
(j,k) & \mathrm{color~of~the~path} \\
\hline
(0,0) & B~R~R~R \\
(0,1) & B~R~R~b~R \\
(0,2) & B~R~R~bb~R \\
(1,0) & B~R~b~R~R \\
(1,1) & R~B~r~R~r~B \\
(1,2) & B~R~b~R~rb~R \\
(2,0) & B~R~bb~R~R \\
(2,1) & B~R~br~R~b~R \\
(2,2) & B~R~br~R~rb~R \\
\hline
\end{array}
\]
Crumby colorings if there are three intact edges which form a path and $i=0$
This finishes all cases of the lemma.
Using the solutions for the restricted cases in the previous lemma, we can prove that all subdivisions of $K_4$ admit crumby colorings.
Let $G$ be a subdivision of $K_4$. The graph $G$ admits a crumby coloring.
Let $G'$ be the following reduction of $G$.
Independently, on each edge of $K_4$, we replace the $k$ subdivision vertices by $k \bmod 3$ vertices.
By Lemma <ref>, there is a crumby coloring of $G'$.
We extend the coloring of $G'$ independently on each edge of $K_4$. If along an edge of $K_4$ both colors appear, then we can find two consecutive vertices $x$ and $y$ with different colors.
We insert the necessary number of blocks of $brr$ between $x$ and $y$ such that it remains a crumby coloring.
Otherwise, the considered edge of $K_4$ is monochromatic.
If every vertex along the edge is red, then we insert $rbr$ between two adjacent ones. Now we continue the extension (if necessary) just like in the previous case, since there exist consecutive vertices with different colors.
A monochromatic blue edge means two blue vertices in $K_4$.
In that case, we insert $rrr$ between them.
Again, we continue the extension (if necessary) just like in the first case, since there exist consecutive vertices with different colors.
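The extension step can be phrased very concretely. The sketch below is our own illustration; in particular, the orientation convention for the inserted block is an assumption, while the proof only asserts that a suitable insertion exists.

```python
# A sketch of the extension step: between two adjacent vertices of different
# colors we insert whole three-vertex blocks, matching the k mod 3 reduction.
# The orientation of the block ("brr" vs "rrb") is chosen so that each block
# contributes one blue singleton and one red P2; this convention is our
# illustration -- the proof only asserts that a suitable insertion exists.
def extend_between(left: str, right: str, blocks: int) -> str:
    assert left != right and left in "rb" and right in "rb"
    unit = "brr" if left == "r" else "rrb"
    return left + unit * blocks + right

print(extend_between("r", "b", 2))  # rbrrbrrb
```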
In a slightly opposite direction, motivated by our proof idea of Theorem <ref>, we pose the following conjecture.
Every $K_4$-minor-free graph admits a crumby coloring.
§ ACKNOWLEDGEMENTS
The first author would like to thank Bjarne Toft for 25 years of friendship and encouragement. He influenced our work in Section <ref>. The first author was partially supported by ERC Advanced Grant "GeoScape" and NKFIH Grant K. 131529.
The second author was supported in part by the Hungarian National Research, Development
and Innovation Office, OTKA grant no. SNN 132625 and K 124950.
[1] J. Barát. Decomposition of cubic graphs related to Wegner's conjecture. Discrete Math. 342(5) (2019), 1520–1527.
[2] T. Bellitto, T. Klimošová, M. Merker, M. Witkowski, Y. Yuditsky. Counterexamples to Thomassen's conjecture on decomposition of cubic graphs.
[3] J.A. Bondy, U.S.R. Murty. Graph Theory. Springer-Verlag, London, XII+663 pages, (2008).
[4] J. Edmonds. Paths, trees, and flowers. Can. J. Math. 17 (1965), 449–467.
[5] T. Gallai. Maximale Systeme unabhängiger Kanten. Magyar Tud. Akad. Mat. Kutató Int. Közl. 9 (1964), 401–413.
[6] T.R. Jensen, B. Toft. Graph Coloring Problems. Wiley (1994).
[7] C. Thomassen. Totally odd $K_4$-subdivisions in $4$-chromatic graphs. Combinatorica 21 (2001), 417–443.
[8] C. Thomassen. The square of a planar cubic graph is 7-colorable. J. Combin. Theory Ser. B 128 (2017), 192–218.
[9] B. Toft. Problem 11, in: Recent Advances in Graph Theory, Academia Praha (1975), 543–544.
[10] G. Wegner. Graphs with given diameter and a coloring problem. Technical Report, University of Dortmund, (1977).
[11] W. Zang. Proof of Toft's conjecture: every graph containing no fully odd $K_4$ is $3$-colorable. J. Combin. Optim. 2 (1998), 117–188.
§ APPENDIX
Computer-assisted proof of Lemma <ref>
# Learning MRI Artifact Removal With Unpaired Data
Siyuan Liu1, Kim-Han Thung1, Liangqiong Qu1, Weili Lin1, Dinggang Shen1, Pew-Thian Yap1,✉, and the UNC/UMN Baby Connectome Project Consortium
###### Abstract
Retrospective artifact correction (RAC) improves image quality post-acquisition and enhances image usability. Recent machine-learning-driven techniques for RAC are predominantly based on supervised learning, so their practical utility can be limited, as paired artifact-free and artifact-corrupted images are typically insufficient or even non-existent.
Here we show that unwanted image artifacts can be disentangled and removed
from an image via an RAC neural network learned with unpaired data. This
implies that our method does not require matching artifact-corrupted data to
be either collected via acquisition or generated via simulation. Experimental
results demonstrate that our method is remarkably effective in removing
artifacts and retaining anatomical details in images with different contrasts.
Department of Radiology and Biomedical Research Imaging Center (BRIC),
University of North Carolina at Chapel Hill, NC, U.S.A.
††footnotetext: ✉ Corresponding author: Pew-Thian Yap<EMAIL_ADDRESS>
## Introduction
Structural magnetic resonance imaging (sMRI) captures high spatial-resolution
details of brain anatomy, but is susceptible to artifacts caused for example
by eye and head motions[1], especially when scanning pediatric, elderly,
claustrophobic, and epileptic patients[2]. Artifacts can result in unusable
images and hence cause financial losses for imaging studies[3]. Motion
artifact correction[4] can be used to remove artifacts, improve image quality,
and increase the amount of usable images. This is particularly important in
view of the fact that the accuracy and reliability of subsequent analysis or
diagnosis can be jeopardized by poor image quality.
Methods for correction of motion artifacts can be prospective or
retrospective. Prospective techniques[5, 6, 7, 8, 9, 10] utilize either
optical tracking of target markers placed on the head or continuously
reacquired images from dedicated navigator scans for real-time motion
prediction[8]. However, prospective methods require additional hardware and/or
scanner modifications. Motion markers can cause patient discomfort and optical
tracking requires expensive hardware, needs clear visibility of target
markers, and may be sensitive to facial movements. Moreover, these methods
typically assume quiescent periods with minimal motion and reacquire data when
this condition is not met. This prolongs acquisition time without necessarily
bringing substantial improvements to image quality when there is little
motion.
In contrast, retrospective artifact correction (RAC)[4] can be used to remove
artifacts, improve image quality, and enhance image usability without
requiring additional hardware, as motion estimation and correction are
considered a part of the image reconstruction process. Retrospective
techniques can be acquisition-based or software-based. One representative
technique of acquisition-based methods is PROPELLER[11], which is a self-
navigation technique that utilizes a number of concentric blades rotated at
different angles to cover the whole k-space. The k-space center is repeatedly
sampled and used as self-navigators for the estimation of rotation and
translation information. It has been shown to be effective for 2D motion
correction[12]. However, acquisition-based methods such as PROPELLER can only
estimate in-plane motion parameters, and the through-plane motion might
disrupt signals across slices. Moreover, such acquisition-based techniques often require additional purposefully designed navigator sequences and more complicated sequence designs, prolong acquisition times, and impose additional constraints on imaging parameters (e.g., TR/TE/TI).
Software-based methods for post-acquisition RAC[4] can be used to remove
artifacts without modifying sequences, mounting markers, or constraining acquisition parameters. They are inexpensive post-processing methods that can be readily incorporated across all scanners.
deep neural networks (DNNs), such as convolutional neural networks (CNNs),
have demonstrated great potential for simultaneous removal of a variety of
artifacts irrespective of the acquisition scheme[13, 14]. CNNs are typically
trained in a supervised manner, which in RAC requires paired artifact-
corrupted and artifact-free images. Such paired data can be collected by
scanning the same subjects without and with motions, which can be impractical,
costly, and time-consuming. Artifact-corrupted images can also be generated by
adding simulated artifacts to artifact-free images [15, 16, 17, 18]. However,
simulations might not accurately and sufficiently reflect all possible forms
of real artifacts.
In this paper, we consider the artifact removal problem as image translation
from an artifact-corrupted domain to an artifact-free domain. We draw inspiration from unsupervised image translation techniques, such as UNIT[19],
CycleGAN[20], BicycleGAN[21], and Pix2Pix[22], which employ auto-encoders to
learn invertible mappings between domain pairs using unpaired images. We
introduce an end-to-end disentangled unsupervised cycle-consistent adversarial
network (DUNCAN), which can be trained using unpaired data for flexible and
simultaneous removal of various sMRI artifacts. We employ cycle translations
between artifact-corrupted and artifact-free domains, where each cycle
translation is defined as a forward translation from one domain to its target
domain, followed by a backward translation from the target domain to the
original domain. Both the forward and backward translations are realized with
auto-encoders. Note that each MR image, even one deemed good in quality, may inevitably contain some artifacts. Therefore, we assume that images from
either artifact-corrupted or artifact-free domains are composed of an
anatomical content component, residing in a domain-invariant content space,
and an artifact component, residing in a domain-specific artifact space. The
auto-encoders disentangle these two components in an image via two kinds of
encoders in each domain translation mapping, i.e., a content encoder, which
captures anatomical structures shared across domains, and an artifact encoder,
which captures artifacts specific to a domain. Then, the decoder combines the extracted content and artifact features from both encoders to translate images to the target domain. To ensure complete disentanglement of content and
artifact components, we propose a multi-scale content consistency (MS-CC) loss
and a content-swapping mechanism supervised by adversarial learning. We also
design a multi-scale reconstruction consistency (MS-RC) loss, including a
pixel reconstruction consistency (PRC) loss, an edge reconstruction
consistency (ERC) loss, and a structure reconstruction consistency (SRC) loss,
to avoid degradation of structural details. In addition, we propose an image
quality consistency (IQC) loss to ensure that no structural details are
removed from artifact-free images. The architecture of DUNCAN is summarized in
Figure 1 and detailed in the Methods section.
Figure 1: Overview of DUNCAN. a, Disentangled cycle translation (DCT) mapping
$\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}$ consists of two
sequential domain mappings $\mathcal{M}_{\text{c}\rightarrow\text{f}}$ and
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$. For
$\mathcal{M}_{\text{c}\rightarrow\text{f}}$, an artifact-corrupted image
$x_{\text{c}}$ is first encoded in the domain-invariant content space
$\mathcal{C}$ and the domain-specific artifact space $\mathcal{A}_{\text{c}}$
to obtain the content (CT) information $z_{\text{c}}^{\text{CT}}$ and artifact
(AF) information $z_{\text{c}}^{\text{AF}}$, respectively. Then,
$z_{\text{c}}^{\text{CT}}$ and $z_{\text{c}}^{\text{AF}}$ are decoded to
remove artifacts from $x_{\text{c}}$, and to obtain the intermediate
translated image $x_{\text{c}\rightarrow\text{f}}$ in the artifact-free domain
$\mathcal{I}_{\text{f}}$. For $\mathcal{M}_{\text{f}\rightarrow\text{c}}$,
$x_{\text{c}\rightarrow\text{f}}$ is first encoded in the content space
$\mathcal{C}$ and the artifact space $\mathcal{A}_{\text{f}}$ to obtain the
content information $z_{\text{f}}^{\text{CT}}$ and the artifact information
$z_{\text{f}}^{\text{AF}}$, respectively. Then $z_{\text{f}}^{\text{CT}}$ and
$z_{\text{f}}^{\text{AF}}$ are decoded to add artifacts to
$x_{\text{c}\rightarrow\text{f}}$ to reconstruct image $\hat{x}_{\text{c}}$.
DCT mapping $\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}$ is
hence
$\\{x_{\text{c}}\in\mathcal{I}_{\text{c}}\\}\rightarrow\\{z_{\text{c}}^{\text{CT}}\in\mathcal{C},z_{\text{c}}^{\text{AF}}\in\mathcal{A}_{\text{c}}\\}\rightarrow\\{x_{\text{c}\rightarrow\text{f}}\in\mathcal{I}_{\text{f}}\\}\rightarrow\\{z_{\text{c}\rightarrow\text{f}}^{\text{CT}}\in\mathcal{C},z_{\text{c}\rightarrow\text{f}}^{\text{AF}}\in\mathcal{A}_{\text{f}}\\}\rightarrow\\{\hat{x}_{\text{c}}\in\mathcal{I}_{\text{c}}\\}$.
Conversely, DCT mapping
$\mathcal{M}_{\text{f}\rightarrow\text{c}\rightarrow\text{f}}$ is
$\\{x_{\text{f}}\in\mathcal{I}_{\text{f}}\\}\rightarrow\\{z_{\text{f}}^{\text{CT}}\in\mathcal{C},z_{\text{f}}^{\text{AF}}\in\mathcal{A}_{\text{f}}\\}\rightarrow\\{x_{\text{f}\rightarrow\text{c}}\in\mathcal{I}_{\text{c}}\\}\rightarrow\\{z_{\text{f}\rightarrow\text{c}}^{\text{CT}}\in\mathcal{C},z_{\text{f}\rightarrow\text{c}}^{\text{AF}}\in\mathcal{A}_{\text{c}}\\}\rightarrow\\{\hat{x}_{\text{f}}\in\mathcal{I}_{\text{f}}\\}$.
b, DUNCAN takes any two unpaired images, i.e., one image $x_{\text{c}}$ from
artifact-corrupted domain $\mathcal{I}_{\text{c}}$ and one image
$x_{\text{f}}$ from artifact-free domain $\mathcal{I}_{\text{f}}$, as inputs.
The artifact-corrupted and artifact-free domain cycles incorporate the DCT
mappings $\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}$ and
$\mathcal{M}_{\text{f}\rightarrow\text{c}\rightarrow\text{f}}$, respectively.
The content-swapping translation (CST) and identity translation (IT),
respectively, give the content-swapped translated images, i.e.,
$\hat{x}_{\text{f}\leftrightarrow\text{c}}$ and
$\hat{x}_{\text{c}\leftrightarrow\text{f}}$, and identity translated images,
i.e., $\tilde{x}_{\text{c}}$ and $\tilde{x}_{\text{f}}$. c, CST in artifact-
corrupted and artifact-free domains. d, DCT and IT in artifact-corrupted and
artifact-free domains. e, Network architecture of the proposed auto-encoder
($G_{\text{c}}$ or $G_{\text{f}}$). f, Network architecture of the
discriminator. The discriminator employs a fully convolutional network (FCN)
to determine if the generate image is real or fake based on a semantic
map[22].
## Results
Figure 2: Visual comparison of corrected in vivo images. a, T1-weighted images
and, b, T2-weighted images corrected using various methods. From top to bottom
are images with heavy, moderate, and minor artifacts. In a and b, the original
artifact-corrupted images are shown in the first column and the images
corrected using CycleGAN, Pix2Pix, and DUNCAN are shown respectively in the
second to fourth columns. DUNCAN outperforms the other methods in removing
artifacts and in preserving anatomical details. Figure 3: Visual comparison of
corrected in silico images. a, T1-weighted images and, b, T2-weighted images
corrected using various methods. From top to bottom are images with heavy,
moderate, and minor artifacts. In a and b, the ground truth is shown in the
first column, the original artifact-corrupted images in the second column, and
the images corrected using U-Net, CycleGAN, Pix2Pix, and DUNCAN, respectively,
in the third to sixth columns. DUNCAN removes more artifacts and preserves
more anatomical details in agreement with the ground truth. Figure 4:
Quantitative comparison of corrected in silico T1-weighted images. Numerical
evaluation conducted with different levels of artifacts (heavy, moderate, and
minor) and various metrics (MSE, SSIM, MS-SSIM, PSNR, VIF, and UQI). The bars
show the means and the error bars show the standard errors on the means. The
sample sizes of IS_T1_HVY, IS_T1_MOD, and IS_T1_MIN are 200, 300, 300,
respectively. Compared with the other methods, DUNCAN yields lower MSE and
higher SSIM, MS-SSIM, PSNR, VIF, and UQI. Figure 5: Quantitative comparison of
corrected in silico T2-weighted images. Numerical evaluation conducted with
different levels of artifacts (heavy, moderate, and minor) and various metrics
(MSE, SSIM, MS-SSIM, PSNR, VIF, and UQI). The bars show the means and the
error bars show the standard errors on the means. The sample sizes of
IS_T2_HVY, IS_T2_MOD, and IS_T2_MIN are 200, 300, 300, respectively. Compared
with the other methods, DUNCAN yields lower MSE and higher SSIM, MS-SSIM,
PSNR, VIF, and UQI. Figure 6: Segmentation accuracy of in silico images. a,
DSC comparison of artifact-corrupted images with and without correction,
indicating artifact removal improves image usability. b, Applying DUNCAN on
the artifact-free images do not degrade image details, as indicated by the
high DSCs. The bars show the means and the error bars show the standard errors
on the means. The sample size is 10 for each case.
### 0.1 Datasets
We evaluated DUNCAN using (i) An in vivo dataset of T1- and T2-weighted
images of children scanned from one month to six years of age[23]; and (ii)
An in silico dataset of T1- and T2-weighted images with simulated artifacts.
We denote the in vivo datasets for T1- and T2-weighted MR images as IV_T1 and
IV_T2, respectively. For each modality, we selected 20 artifact-free and 20
artifact-corrupted volumes for training, and 10 artifact-corrupted volumes for
testing. We extracted 76 to 85 axial slices from each image volume, resulting
in a total of 1620, 1600, and 800 axial slices respectively from the 20
artifact-free, 20 artifact-corrupted, and 10 artifact-corrupted T1-weighted
volumes for IV_T1 and 1520, 1550, and 800 axial slices respectively from the
20 artifact-free, 20 artifact-corrupted, and 10 artifact-corrupted T2-weighted
volumes for IV_T2. Each artifact-corrupted image volume was labeled with one
of three artifact severity levels: minor, moderate, or heavy.
To generate the in silico datasets, we synthesized artifact-corrupted images
from the artifact-free images from IV_T1 and IV_T2 with three levels of
artifacts, i.e., minor, moderate, and heavy. The resulting datasets are
respectively denoted as IS_T1_MIN, IS_T1_MOD, and IS_T1_HVY for T1-weighted
images, and IS_T2_MIN, IS_T2_MOD, and IS_T2_HVY for T2-weighted images. We
simulated the motion artifacts in k-space, reflecting background noise
movements, swallowing-like movements, and random sudden movements. We
generated the background noise movement via a pseudorandomized series (Perlin
noise[24]) with a magnitude of 5, the swallowing-like movements via
multiplications with linear phase shifts in motion directions, i.e.,
translations along $z$-axis and rotations along $x$-axis, and the random
sudden movements via sudden changes in the magnitudes of motions in all
directions. For IS_T1_MIN, IS_T1_MOD, and IS_T1_HVY, 1620, 1620, 800, and 800
axial slices were extracted, respectively, from the 20 artifact-free, 20
synthesized artifact-corrupted, 10 artifact-free, and 10 synthesized artifact-
corrupted T1-weighted volumes. For IS_T2_MIN, IS_T2_MOD, and IS_T2_HVY, 1520,
1520, 800, and 800 axial slices were extracted, respectively, from the 20
artifact-free, 20 synthesized artifact-corrupted, 10 artifact-free, and 10
synthesized artifact-corrupted T2-weighted volumes.
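As an illustration of this kind of k-space corruption, the sketch below simulates random sudden in-plane translations as linear phase ramps applied to individual phase-encoding lines. All names and magnitudes are illustrative; this is not the exact simulation protocol used to build the in silico datasets.

```python
# A minimal sketch of k-space motion simulation for a 2D slice: a rigid
# translation during the acquisition of a phase-encoding line appears as a
# linear phase ramp on that line. Parameters are illustrative only.
import numpy as np

def simulate_sudden_motion(image, max_shift=5.0, p_move=0.3, seed=0):
    rng = np.random.default_rng(seed)
    k = np.fft.fftshift(np.fft.fft2(image))
    ny, _ = image.shape
    ky = np.fft.fftshift(np.fft.fftfreq(ny))  # cycles/pixel, phase-encode axis
    for row in range(ny):
        if rng.random() < p_move:             # a random sudden movement
            dy = rng.uniform(-max_shift, max_shift)
            k[row, :] *= np.exp(-2j * np.pi * ky[row] * dy)
    return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))
```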
### 0.2 Evaluation Metrics
To quantitatively evaluate the performance of DUNCAN on the in silico
datasets, several image quality metrics, including mean square error (MSE),
structural similarity index (SSIM)[25], multi-scale structural similarity
index (MS-SSIM)[26], peak signal-to-noise ratio (PSNR), visual information
fidelity (VIF)[27], and universal quality index (UQI)[28], were utilized to
gauge the quality of the artifact-corrected images. We used the default
settings for all the hyper-parameters of the evaluation metrics. For all
metrics, except MSE, higher values indicate better performance.
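For reference, the sketch below shows how such an evaluation can be assembled with NumPy and scikit-image; the function is our illustration, not the exact evaluation script. MS-SSIM, VIF, and UQI are available in third-party packages (e.g., sewar) and are omitted here.

```python
# A sketch of the image-quality evaluation, assuming 2D float images
# normalized to [0, 1]. MS-SSIM, VIF, and UQI are omitted.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def evaluate(reference: np.ndarray, corrected: np.ndarray) -> dict:
    return {
        "MSE": float(np.mean((reference - corrected) ** 2)),
        "SSIM": structural_similarity(reference, corrected, data_range=1.0),
        "PSNR": peak_signal_noise_ratio(reference, corrected, data_range=1.0),
    }
```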
### 0.3 Compared Methods
To verify the effectiveness and superiority of DUNCAN, we compared it with
three state-of-the-art methods that are closely related to our task, i.e., one
supervised method – U-Net[13] – and two unsupervised methods – CycleGAN[20]
and Pix2Pix[22] – implemented with Keras (https://github.com/eriklindernoren/Keras-GAN/). Pix2Pix differs from CycleGAN by using a least-squares adversarial loss[29] and PatchGAN[22] as the discriminator.
### 0.4 Performance Evaluation Using In Vivo Datasets
Since no ground truth is available for the in vivo images, only qualitative
comparisons were conducted. The comparison results for different levels of
artifacts are shown for the T1-weighted and T2-weighted datasets in Figures 2a
and 2b, respectively. CycleGAN and Pix2Pix are unable to remove the artifacts
completely for different levels of artifacts in the T1- and T2-weighted
images. In comparison, DUNCAN is able to remove artifacts with varying
severity without introducing new artifacts.
### 0.5 Performance Evaluation Using In Silico Datasets
Visual comparison results for the in silico T1- and T2-weighted datasets are
provided in Figures 3a and 3b, respectively. The error maps, gradient maps,
and gradient error maps for T1- and T2-weighted images are provided in
Supplementary Figures 1–3, respectively. Quantitative comparison results using
various evaluation metrics are summarized in Figures 4 and 5, respectively.
Quantitative comparison results of gradient maps using various evaluation
metrics on in silico T1- and T2-weighted images are included in Supplementary
Figure 4. CycleGAN and Pix2Pix yield similar performance for the various
evaluation metrics, but in terms of visual appearance, Pix2Pix is
significantly better than CycleGAN due to its PatchGAN discriminator and
least-squares adversarial loss. Although U-Net was trained with paired data
and performs better than CycleGAN and Pix2Pix for the various evaluation
metrics, CycleGAN and Pix2Pix generate images that are sharper than U-Net,
both qualitatively and quantitatively, as illustrated in Supplementary Figures
2–4. This is due to the use of adversarial learning in CycleGAN and Pix2Pix.
In comparison, as DUNCAN utilizes both adversarial learning and disentangled
representation learning of artifacts and contents, it yields better
performance in artifact removal and better capability in maintaining
structural information in artifact-corrupted images. Even when corrupted with
heavy artifacts, image details can still be satisfactorily recovered by
DUNCAN.
### 0.6 Tissue Segmentation
To further demonstrate that DUNCAN can improve image usability, we applied BET
[30] and FAST [31] on the testing data in IS_T1 and IS_T2 for brain extraction
and tissue segmentation, respectively. We report in Figure 6 the tissue
segmentation accuracy before and after artifact correction, as measured by the
Dice similarity coefficient (DSC). Tissue segmentation maps from the artifact-
free images were used as references. The results shown in Figure 6a indicate
that DSCs are improved remarkably by DUNCAN correction. To validate the
quality preservation property of DUNCAN for artifact-free images, we also
evaluated the tissue segmentation accuracy of artifact-free images processed
with DUNCAN. The high DSCs shown in Figure 6b indicate that the quality of
artifact-free images is preserved.
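The DSC itself is straightforward to compute from two label maps. A minimal sketch, assuming integer-labeled segmentations (e.g., FAST outputs) loaded as NumPy arrays:

```python
# Dice similarity coefficient between two segmentations for a given label.
import numpy as np

def dice(seg: np.ndarray, ref: np.ndarray, label: int) -> float:
    a, b = seg == label, ref == label
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0
```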
### 0.7 Discussion
We have demonstrated that DUNCAN can be applied to MR images for post-
acquisition artifact removal. DUNCAN is therefore useful when raw k-space data
and reconstruction algorithms are not available. DUNCAN is a flexible method
for RAC irrespective of the acquisition technique. For training, the user only
needs to label the images as artifact-corrupted or artifact-free. No
additional images need to be acquired and no knowledge of MR physics is needed
to simulate artifacts. DUNCAN can potentially allow image imperfections such
as noise, streaking, and ghosting to be removed without explicitly generating
them for supervised training. DUNCAN can be incorporated in a quality control
pipeline to improve image usability. We also note that DUNCAN can be used, via
translation of artifact-free to artifact-corrupted images, to generate natural
and realistic artifacts that can be used for supervised or unsupervised
training of machine learning algorithms.
This work was supported in part by National Institutes of Health grants
(EB006733, AG053867, MH117943, MH104324, MH110274) and the efforts of the
UNC/UMN Baby Connectome Project Consortium. The authors thank Dr. Xiaopeng
Zong of the University of North Carolina at Chapel Hill for an initial
discussion on motion artifact simulation and Dr. Yoonmi Hong of the University
of North Carolina at Chapel Hill and Dr. Yong Chen of Case Western Reserve
University for proofreading the paper.
The authors declare that they have no competing financial interests.
Correspondence and requests for materials should be addressed to Pew-Thian Yap
(email: ptyap@med.unc.edu).
The data used in this paper were provided by the investigative team of the
UNC/UMN Baby Connectome Project. The data can be obtained from the National
Institute of Mental Health Data Archive (NDA) (http://nda.nih.gov/) or by
contacting the investigative team[23].
The source code and trained models for this study are publicly available on
Zenodo (https://zenodo.org/record/3742351)[32].
SL designed the framework and network architecture, carried out the
implementation, performed the experiments, and analyzed the data. SL and PTY
wrote the manuscript. SL, KHT, and PTY revised the manuscript. LQ contributed
to the initial formulation of the method before moving to Stanford University.
WL provided the infant data for training and testing. PTY conceived the study
and were in charge of overall direction and planning. DS was involved in the
initial discussion of the problem when he was with the University of North
Carolina at Chapel Hill. All work was done at the University of North Carolina
at Chapel Hill.
In this work, we (i) consider the artifact removal problem as image
translation from an artifact-corrupted domain to an artifact-free domain;
(ii) propose an end-to-end unsupervised RAC framework based on a disentangled
unsupervised cycle-consistent adversarial network (DUNCAN, see Figure 1),
which employs two auto-encoders to learn a cycle translation mapping that
translates the images forward and backward between the artifact-corrupted and
artifact-free domains; (iii) adopt two encoders to embed the images in a
domain-invariant content space, which contains anatomical information, and a
domain-specific artifact space, which captures artifact and noise information,
and adopt a decoder to translate the images to a target domain using the
encoded content and artifact features; (iv) realize content-artifact
disentanglement, hinging on determining the domain-invariant content space
using two strategies: a multi-scale content consistency (MS-CC) loss to keep
content features consistent across domains and a content-swapping mechanism to
ensure the domain-invariance of the content space; and (v) introduce a
quality preservation mechanism to ensure that no image details are removed.
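A single-scale sketch of the consistency terms in (iv) and (v) follows, assuming plain L1 penalties; the variable names mirror Figure 1, and the multi-scale, edge, structure, and adversarial terms of the full objective are not reproduced here.

```python
# A single-scale sketch of three consistency terms, assuming L1 penalties.
# x_c: artifact-corrupted input; x_c_hat: its cycle reconstruction;
# z_ct_c / z_ct_cf: content codes before and after translation;
# x_f / x_f_tilde: artifact-free input and its identity translation.
import torch.nn.functional as F

def consistency_losses(x_c, x_c_hat, z_ct_c, z_ct_cf, x_f, x_f_tilde):
    rec = F.l1_loss(x_c_hat, x_c)         # pixel reconstruction consistency
    content = F.l1_loss(z_ct_cf, z_ct_c)  # content consistency (one scale)
    quality = F.l1_loss(x_f_tilde, x_f)   # image quality consistency
    return rec + content + quality
```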
### 0.8 Disentangled Cycle Translation Mapping
Let $\mathcal{I}_{\text{c}}$ and $\mathcal{I}_{\text{f}}$ be the domains of
artifact-corrupted and artifact-free MR images, respectively. Our method aims
to learn the nonlinear mappings between the two domains, i.e.,
$\mathcal{M}_{\text{c}\rightarrow\text{f}}:\mathcal{I}_{\text{c}}\rightarrow\mathcal{I}_{\text{f}}$
and
$\mathcal{M}_{\text{f}\rightarrow\text{c}}:\mathcal{I}_{\text{f}}\rightarrow\mathcal{I}_{\text{c}}$,
using unpaired training images. In practice, each acquired MR image, even with
good quality, may inevitably contain artifacts. Therefore, we assume that each
MR image is a nonlinear combination of content and artifact components. For
two unpaired images
$(x_{\text{c}}\in\mathcal{I}_{\text{c}},x_{\text{f}}\in\mathcal{I}_{\text{f}})$,
disentangled cycle translation (DCT) mapping
$\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}:\mathcal{I}_{\text{c}}\rightarrow\mathcal{I}_{\text{f}}\rightarrow\mathcal{I}_{\text{c}}$
is accomplished with sequential forward and backward translation mappings
$\mathcal{M}_{\text{c}\rightarrow\text{f}}$ and
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$; conversely, the DCT mapping
$\mathcal{M}_{\text{f}\rightarrow\text{c}\rightarrow\text{f}}:\mathcal{I}_{\text{f}}\rightarrow\mathcal{I}_{\text{c}}\rightarrow\mathcal{I}_{\text{f}}$
is realized with $\mathcal{M}_{\text{f}\rightarrow\text{c}}$ and
$\mathcal{M}_{\text{c}\rightarrow\text{f}}$, as illustrated in Figure 1a.
Specifically, taking DCT mapping
$\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}$ as an example,
for forward translation mapping $\mathcal{M}_{\text{c}\rightarrow\text{f}}$,
we first encode $x_{\text{c}}$ in two latent spaces, i.e., the domain-
invariant content space $\mathcal{C}$ and the domain-specific artifact space
$\mathcal{A}_{\text{c}}$, to obtain two disentangled representations, i.e.,
the artifact (AF) representation $z_{\text{c}}^{\text{AF}}$ and the content
(CT) representation $z_{\text{c}}^{\text{CT}}$, respectively. We then build a
decoder based on the disentangled representations to construct intermediate
image $x_{\text{c}\rightarrow\text{f}}$ in the artifact-free domain
$\mathcal{I}_{\text{f}}$. The forward translation mapping
$\mathcal{M}_{\text{c}\rightarrow\text{f}}$ for $x_{\text{c}}$ can be
summarized as
$\\{x_{\text{c}}\in\mathcal{I}_{\text{c}}\\}\rightarrow\\{z_{\text{c}}^{\text{CT}}\in\mathcal{C},z_{\text{c}}^{\text{AF}}\in\mathcal{A}_{\text{c}}\\}\rightarrow\\{x_{\text{c}\rightarrow\text{f}}\in\mathcal{I}_{\text{f}}\\}$.
We then further conduct a backward translation mapping
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$ on
$x_{\text{c}\rightarrow\text{f}}$. We first encode
$x_{\text{c}\rightarrow\text{f}}$ as
$z_{\text{c}\rightarrow\text{f}}^{\text{CT}}$ in content space $\mathcal{C}$
and $z_{\text{c}\rightarrow\text{f}}^{\text{AF}}$ in artifact space
$\mathcal{A}_{\text{f}}$. Note that this artifact space is specific to input
from domain $\mathcal{I}_{\text{f}}$, whereas $\mathcal{A}_{\text{c}}$ is
specific to input from domain $\mathcal{I}_{\text{c}}$, as images from both
domains have different types of artifacts manifesting in different styles.
Feeding $z_{\text{c}\rightarrow\text{f}}^{\text{CT}}$ and
$z_{\text{c}\rightarrow\text{f}}^{\text{AF}}$ as input to a decoder, we obtain
the reconstructed artifact-corrupted image $\hat{x}_{\text{c}}$. The backward
translation mapping $\mathcal{M}_{\text{f}\rightarrow\text{c}}$ for
$x_{\text{c}\rightarrow\text{f}}$ can be summarized as
$\\{x_{\text{c}\rightarrow\text{f}}\in\mathcal{I}_{\text{f}}\\}\rightarrow\\{z_{\text{c}\rightarrow\text{f}}^{\text{CT}}\in\mathcal{C},z_{\text{c}\rightarrow\text{f}}^{\text{AF}}\in\mathcal{A}_{\text{f}}\\}\rightarrow\\{\hat{x}_{\text{c}}\in\mathcal{I}_{\text{c}}\\}$.
Similarly, we also perform DCT mapping for $x_{\text{f}}$ via
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$ and
$\mathcal{M}_{\text{c}\rightarrow\text{f}}$ to obtain in sequence the
intermediate artifact-corrupted image $x_{\text{f}\rightarrow\text{c}}$ and
artifact-free image $\hat{x}_{\text{f}}$, i.e.,
$\\{x_{\text{f}}\in\mathcal{I}_{\text{f}}\\}\rightarrow\\{z_{\text{f}}^{\text{CT}}\in\mathcal{C},z_{\text{f}}^{\text{AF}}\in\mathcal{A}_{\text{f}}\\}\rightarrow\\{x_{\text{f}\rightarrow\text{c}}\in\mathcal{I}_{\text{c}}\\}\rightarrow\\{z_{\text{f}\rightarrow\text{c}}^{\text{CT}}\in\mathcal{C},z_{\text{f}\rightarrow\text{c}}^{\text{AF}}\in\mathcal{A}_{\text{c}}\\}\rightarrow\\{\hat{x}_{\text{f}}\in\mathcal{I}_{\text{f}}\\}$.
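To make the composition concrete, the following minimal Python sketch traces the DCT mapping $\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}$ using the encoder and decoder names introduced in the next sections; the callables are placeholders, not the authors' implementation.

```python
# Sketch of the DCT mapping M_{c->f->c}. E_c_ct/E_c_af and E_f_ct/E_f_af are
# the content/artifact encoders of the two domains; dec_c and dec_f are the
# decoders producing artifact-free and artifact-corrupted images, respectively.
def forward_c_to_f(x_c, E_c_ct, E_c_af, dec_c):
    z_ct = E_c_ct(x_c)            # content code in the shared space C
    z_af = E_c_af(x_c)            # artifact code in A_c
    return dec_c(z_ct, z_af)      # intermediate artifact-free image x_{c->f}

def dct_c_to_f_to_c(x_c, E_c_ct, E_c_af, dec_c, E_f_ct, E_f_af, dec_f):
    x_c2f = forward_c_to_f(x_c, E_c_ct, E_c_af, dec_c)
    z_ct = E_f_ct(x_c2f)          # re-encode content of the intermediate image
    z_af = E_f_af(x_c2f)          # artifact code, now in A_f
    x_c_hat = dec_f(z_ct, z_af)   # reconstructed artifact-corrupted image
    return x_c2f, x_c_hat
```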
### 0.9 The DUNCAN Architecture
Figure 1b shows an overview of the network architecture of DUNCAN, consisting
of two DCT mappings (artifact-corrupted and artifact-free domain cycles), two
content-swapping and identity translations (in both artifact-corrupted and
artifact-free domains), and four adversarial constraints (for the generated
artifact-corrupted and artifact-free images). As described in the previous
section, the artifact-corrupted and artifact-free domain cycles aim to perform
DCT mappings $\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}$
and $\mathcal{M}_{\text{f}\rightarrow\text{c}\rightarrow\text{f}}$,
respectively, using two domain translation mappings, i.e.,
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$ and
$\mathcal{M}_{\text{c}\rightarrow\text{f}}$. Each domain translation mapping
is realized by two encoders to disentangle an image into content and artifact
features, and one decoder to reconstruct the target-domain image using the
disentangled features, as illustrated in Figure 1c. More specifically, the
mapping $\mathcal{M}_{\text{f}\rightarrow\text{c}}$ is realized by content
encoder $E_{\text{f}}^{\text{CT}}$, artifact encoder
$E_{\text{f}}^{\text{AF}}$, and decoder $D_{\text{f}}$, whereas the mapping
$\mathcal{M}_{\text{c}\rightarrow\text{f}}$ is realized by content encoder
$E_{\text{c}}^{\text{CT}}$, artifact encoder $E_{\text{c}}^{\text{AF}}$, and
decoder $D_{\text{c}}$. With any two unpaired images
$x_{\text{c}}\in\mathcal{I}_{\text{c}}$ and
$x_{\text{f}}\in\mathcal{I}_{\text{f}}$ as inputs, the encoders and decoders
are learned to respectively reconstruct images $\hat{x}_{\text{c}}$ and
$\hat{x}_{\text{f}}$ via DCT mappings
$\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}$ and
$\mathcal{M}_{\text{f}\rightarrow\text{c}\rightarrow\text{f}}$.
Using the domain-invariant property of the content space, we propose content-
swapping translation (CST) for complete representation disentanglement, as
illustrated in Figure 1c. The idea behind this mechanism is that when the
content and artifact information are completely disentangled, swapping the
domain-invariant content information between domain translations should not
lead to changes in translation outcomes. Specifically, we replace content
information from $E_{\text{c}}^{\text{CT}}(x_{\text{c}})$ in domain
translation $\mathcal{M}_{\text{c}\rightarrow\text{f}}$ with content
information from $E_{\text{f}}^{\text{CT}}(x_{\text{f}})$ to construct
content-swapped translated image
$\hat{x}_{\text{c}\leftrightarrow\text{f}}\in\mathcal{I}_{f}$ via decoder
$D_{\text{c}}$. Similarly, we replace content information from
$E_{\text{f}}^{\text{CT}}(x_{\text{f}})$ in domain translation
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$ with content information from
$E_{\text{c}}^{\text{CT}}(x_{\text{c}})$ to generate content-swapped
translated image
$\hat{x}_{\text{f}\leftrightarrow\text{c}}\in\mathcal{I}_{\text{c}}$ via
decoder $D_{\text{f}}$. The translated images
$\hat{x}_{\text{c}\leftrightarrow\text{f}}$ and
$\hat{x}_{\text{f}\leftrightarrow\text{c}}$, respectively, are constrained by
$x_{\text{f}}$ and $x_{\text{c}}$ via discriminators
$D_{\text{c}\leftrightarrow\text{f}}^{\textrm{ADV}}$ and
$D_{\text{f}\leftrightarrow\text{c}}^{\textrm{ADV}}$.
Furthermore, we constrain the forward translation mappings with identity
translation (IT) mappings, as illustrated in Figure 1d, to maintain image
quality when no alteration is expected. Specifically, when translation
mappings $\mathcal{M}_{\text{c}\rightarrow\text{f}}$ and
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$ are applied to $x_{\text{f}}$ and
$x_{\text{c}}$, respectively, the mappings are constrained to result in
identity reconstructions $\tilde{x}_{\text{f}}\in\mathcal{I}_{\text{f}}$ and
$\tilde{x}_{\text{c}}\in\mathcal{I}_{\text{c}}$. A set of consistency losses
is used to ensure that the identity-translated images are consistent with the
corresponding input images.
### 0.10 Auto-Encoder Architecture
We devise two auto-encoders $G_{\text{c}}$ and $G_{\text{f}}$ to respectively
generate artifact-free and artifact-corrupted images. Each auto-encoder, with
architecture illustrated in Figure 1e, consists of (i) a content encoder,
$E_{\text{c}}^{\text{CT}}$ or $E_{\text{f}}^{\text{CT}}$, and an artifact
encoder, $E_{\text{c}}^{\text{AF}}$ or $E_{\text{f}}^{\text{AF}}$, that
respectively map an input image to the domain-invariant latent space
$\mathcal{C}$ and the domain-specific latent space, $\mathcal{A}_{\text{c}}$
or $\mathcal{A}_{\text{f}}$, to respectively extract the content and artifact
information of the image, and (ii) a decoder, $\mathcal{D}_{\text{c}}$ or
$\mathcal{D}_{\text{f}}$, that generates from the extracted features an image
in the target domain of the auto-encoder. We describe next the details for
each component of the auto-encoder.
Content Encoder The content encoder takes the original image as input, and
extracts content features through 4 residual blocks. Each residual block
consists of 4$\times$4 convolution, leaky ReLU, and instance normalization
(IN)[33] layers. We use IN layers instead of batch normalization layers
to accelerate model convergence and maintain independence between features.
All normalized feature maps are activated by leaky ReLU with negative slope
0.2.
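As a rough illustration, one such residual block could be written in Keras (the framework stated in the implementation details) as below. The filter count and the use of tensorflow_addons for instance normalization are assumptions beyond what the text specifies; this is a sketch, not the authors' code.

```python
# A minimal Keras sketch of one content-encoder residual block: 4x4
# convolution, instance normalization, and leaky ReLU with slope 0.2,
# plus a residual connection. Filter count is an assumption.
import tensorflow as tf
import tensorflow_addons as tfa

def content_residual_block(x, filters=64):
    y = tf.keras.layers.Conv2D(filters, kernel_size=4, padding="same")(x)
    y = tfa.layers.InstanceNormalization()(y)
    y = tf.keras.layers.LeakyReLU(alpha=0.2)(y)
    if x.shape[-1] != filters:
        # 1x1 projection so the skip connection matches channel dimensions
        x = tf.keras.layers.Conv2D(filters, kernel_size=1)(x)
    return tf.keras.layers.Add()([x, y])
```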
Artifact Encoder We first down-sample the input image using 2$\times$2
average-pooling in the artifact encoder. Then, we extract features from the
down-sampled image using 3 residual blocks without IN layers since IN removes
the feature means and variances, which contain important artifact information.
Decoder The decoder takes the extracted content and artifact features as input
and generates, using a set of up-sampling layers and residual blocks, a
content image and an artifact image, which are concatenated and then fused
through a residual block and a 1$\times$1 convolution layer to generate the
translated image.
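The fusion step might look like the following Keras sketch. The internal layer widths and the exact residual-block layout are assumptions, since the text only specifies concatenation, a residual block, and a 1$\times$1 convolution.

```python
# Sketch of the decoder's fusion stage: the generated content and artifact
# images are concatenated, passed through a residual block, and mapped to the
# translated image by a 1x1 convolution. Filter widths are assumptions.
import tensorflow as tf

def fuse_content_and_artifact(content_img, artifact_img, filters=64):
    x = tf.keras.layers.Concatenate(axis=-1)([content_img, artifact_img])
    y = tf.keras.layers.Conv2D(filters, kernel_size=3, padding="same")(x)
    y = tf.keras.layers.LeakyReLU(alpha=0.2)(y)
    x = tf.keras.layers.Conv2D(filters, kernel_size=1)(x)  # match channels
    x = tf.keras.layers.Add()([x, y])                      # residual block
    return tf.keras.layers.Conv2D(3, kernel_size=1)(x)     # 1x1 conv output
```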
### 0.11 Adversarial Learning
We employ generative adversarial networks (GANs) to better learn the
translation mappings between the artifact-free and artifact-corrupted image
domains. A GAN is comprised of a generator network and a discriminator
network. In our case, the auto-encoder acts as the generator network by
translating an input image to a target-domain image. The discriminator network
is a classifier that distinguishes between real and fake images. As training
progresses, the generator becomes better at fooling the discriminator, while
the discriminator becomes better at distinguishing real from fake images. We
employ two discriminators $D_{\text{c}}^{\textrm{ADV}}$ and
$D_{\text{f}\leftrightarrow\text{c}}^{\textrm{ADV}}$ in the artifact-corrupted
domain and another two discriminators $D_{\text{f}}^{\textrm{ADV}}$ and
$D_{\text{c}\leftrightarrow\text{f}}^{\textrm{ADV}}$ in the artifact-free
domain. We use PatchGAN[22], shown in Figure 1f, as the discriminators. The
numbers of filters are 64, 128, 256, and 512 for the convolution layers, and the
number of output channels is 1. Leaky ReLU activation is implemented with a
negative slope coefficient of 0.2.
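A minimal Keras sketch of such a PatchGAN discriminator is given below. The 4$\times$4 kernels and stride-2 down-sampling follow the original PatchGAN design[22] and are assumptions here, as is the input shape (taken from the cropped image size reported in the implementation details).

```python
# Keras sketch of a PatchGAN discriminator with filter counts 64/128/256/512,
# leaky ReLU (slope 0.2), and a single-channel output: each spatial unit of
# the output scores one image patch as real or fake.
import tensorflow as tf

def build_patchgan(input_shape=(208, 256, 3)):
    inp = tf.keras.Input(shape=input_shape)
    x = inp
    for filters in (64, 128, 256, 512):
        x = tf.keras.layers.Conv2D(filters, kernel_size=4, strides=2,
                                   padding="same")(x)
        x = tf.keras.layers.LeakyReLU(alpha=0.2)(x)
    out = tf.keras.layers.Conv2D(1, kernel_size=4, padding="same")(x)
    return tf.keras.Model(inp, out, name="patchgan")
```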
### 0.12 Disentangled Representation Learning
We took several measures to ensure that our encoders can properly disentangle
content and artifact information from an input image. First, as content space
$\mathcal{C}$ is domain-invariant, i.e., shared between the artifact-free and
artifact-corrupted domains, the content information of an image and its
generated counterpart in the target domain should be consistent. For example,
the content information of $x_{\text{c}}$ and
$x_{\text{c}\rightarrow\text{f}}$ should be consistent, and so should
$x_{\text{f}}$ and $x_{\text{f}\rightarrow\text{c}}$. To this end, we propose
a multi-scale content consistency (MS-CC) loss based on the low- and high-
level features of the content encoders to respectively encourage the
consistency of structural and semantic content information. Second,
discriminating between real and content-swapped generated images via
discriminators $D_{\text{f}\leftrightarrow\text{c}}^{\text{ADV}}$ and
$D_{\text{c}\leftrightarrow\text{f}}^{\text{ADV}}$ also ensures better
disentanglement by the encoders.
### 0.13 Image Quality Consistency (IQC)
To ensure that the image quality of an input image is similar to that of the
translated image, we propose a pixel-wise image quality consistency (IQC) loss
to encourage the auto-encoders in the translation mappings
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$ and
$\mathcal{M}_{\text{c}\rightarrow\text{f}}$ to perform as identity translation
mappings
$\mathcal{M}_{\text{c}\rightarrow\text{c}}:\mathcal{I}_{\text{c}}\rightarrow\mathcal{I}_{\text{c}}$
and
$\mathcal{M}_{\text{f}\rightarrow\text{f}}:\mathcal{I}_{\text{f}}\rightarrow\mathcal{I}_{\text{f}}$
when respectively given artifact-corrupted and artifact-free images. The IQC
loss encourages the artifact-free image generator to not remove image details
when given a good quality image. Similarly, the IQC loss encourages the
artifact-corrupted image generator to not introduce any additional artifacts
when given an artifact-corrupted image.
### 0.14 Loss Functions
We leverage two types of loss functions, i.e., consistency losses and
adversarial losses to facilitate model training, as illustrated in Figures 1c
and 1d.
Consistency Losses We utilize three consistency losses: multi-scale content
consistency (MS-CC) loss $\mathcal{L}_{\textrm{MS-CC}}$, which measures
content consistency between the input and output of the forward translation of
each DCT (i.e., $\mathcal{M}_{\text{c}\rightarrow\text{f}}$ in
$\mathcal{M}_{\text{c}\rightarrow\text{f}\rightarrow\text{c}}$ and
$\mathcal{M}_{\text{f}\rightarrow\text{c}}$ in
$\mathcal{M}_{\text{f}\rightarrow\text{c}\rightarrow\text{f}}$), multi-scale
reconstruction consistency (MS-RC) loss $\mathcal{L}_{\textrm{MS-RC}}$, which
computes the reconstruction consistency between an image and its reconstructed
counterpart in the same domain, and image quality consistency (IQC) loss,
which computes the image quality consistency between an image and its identity
translated counterpart in the same domain.
The MS-CC loss measures low- and high-level content feature differences and is
formulated as
$\mathcal{L}_{\textrm{MS-
CC}}=\sum_{i}[\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}}}\|\phi^{i}_{\text{c}}(x_{\text{c}})-\phi^{i}_{\text{f}}(x_{\text{c}\rightarrow\text{f}})\|_{1}+\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}}}\|\phi^{i}_{\text{f}}(x_{\text{f}})-\phi^{i}_{\text{c}}(x_{\text{f}\rightarrow\text{c}})\|_{1}],$
(1)
where $x_{\text{c}}$ and $x_{\text{f}}$ denote the artifact-corrupted and
artifact-free images, respectively, and
$x_{\text{c}\rightarrow\text{f}}=D_{\text{c}}(z_{\text{c}}^{\text{CT}},z_{\text{c}}^{\text{AF}})$
and
$x_{\text{f}\rightarrow\text{c}}=D_{\text{f}}(z_{\text{f}}^{\text{CT}},z_{\text{f}}^{\text{AF}})$
denote the corresponding artifact-free and artifact-corrupted images generated
by the decoders, respectively. $\phi^{i}_{\text{c}}(\cdot)$ and
$\phi^{i}_{\text{f}}(\cdot)$ denote the outputs of the $i$-th residual block
of the content encoders $E_{\text{c}}^{\text{CT}}$ and
$E_{\text{f}}^{\text{CT}}$, respectively.
$z_{\text{c}}^{\text{CT}}=E_{\text{c}}^{\text{CT}}(x_{\text{c}})\in\mathcal{C}$
and
$z_{\text{f}}^{\text{CT}}=E_{\text{f}}^{\text{CT}}(x_{\text{f}})\in\mathcal{C}$
denote the content information extracted respectively from $x_{\text{c}}$ and
$x_{\text{f}}$, whereas
$z_{\text{c}}^{\text{AF}}=E_{\text{c}}^{\text{AF}}(x_{\text{c}})\in\mathcal{A}_{\text{c}}$
and
$z_{\text{f}}^{\text{AF}}=E_{\text{f}}^{\text{AF}}(x_{\text{f}})\in\mathcal{A}_{\text{f}}$
denote the artifact information extracted respectively from $x_{\text{c}}$ and
$x_{\text{f}}$.
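For illustration, the loss could be computed as in the following sketch, assuming a hypothetical interface in which each content encoder returns its per-residual-block feature maps as a list; the mean absolute error stands in for the $L_1$ norm up to a constant.

```python
# Sketch of the MS-CC loss of Equation (1) for one image pair. feats_* are
# hypothetical lists holding the outputs phi^i(.) of each residual block of
# the content encoders.
import tensorflow as tf

def ms_cc_loss(feats_c, feats_c2f, feats_f, feats_f2c):
    loss = 0.0
    for a, b in zip(feats_c, feats_c2f):   # phi_c^i(x_c) vs. phi_f^i(x_{c->f})
        loss += tf.reduce_mean(tf.abs(a - b))
    for a, b in zip(feats_f, feats_f2c):   # phi_f^i(x_f) vs. phi_c^i(x_{f->c})
        loss += tf.reduce_mean(tf.abs(a - b))
    return loss
```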
We compute the MS-RC loss by combining three reconstruction consistency
losses, i.e., the pixel reconstruction consistency (PRC) loss
$\mathcal{L}_{\textrm{PRC}}$, the edge reconstruction consistency (ERC) loss
$\mathcal{L}_{\textrm{ERC}}$, and the structure reconstruction consistency
(SRC) loss $\mathcal{L}_{\textrm{SRC}}$, as defined below:
$\displaystyle\mathcal{L}_{\textrm{MS-RC}}$
$\displaystyle=\mathcal{L}_{\textrm{PRC}}+\mathcal{L}_{\textrm{ERC}}+\mathcal{L}_{\textrm{SRC}},$
(2) $\displaystyle\mathcal{L}_{\textrm{PRC}}$
$\displaystyle=\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}}}\|x_{\text{c}}-\hat{x}_{\text{c}}\|_{1}+\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}}}\|x_{\text{f}}-\hat{x}_{\text{f}}\|_{1},$
(3) $\displaystyle\mathcal{L}_{\textrm{ERC}}$
$\displaystyle=\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}}}\|\psi_{\text{L}}(x_{\text{c}})-\psi_{\text{L}}(\hat{x}_{\text{c}})\|_{1}+\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}}}\|\psi_{\text{L}}(x_{\text{f}})-\psi_{\text{L}}(\hat{x}_{\text{f}})\|_{1},$
(4) $\displaystyle\mathcal{L}_{\textrm{SRC}}$
$\displaystyle=\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}}}\|\psi_{\text{H}}(x_{\text{c}})-\psi_{\text{H}}(\hat{x}_{\text{c}})\|_{1}+\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}}}\|\psi_{\text{H}}(x_{\text{f}})-\psi_{\text{H}}(\hat{x}_{\text{f}})\|_{1},$
(5)
where
$\hat{x}_{\text{c}}=D_{\text{f}}(E_{\text{f}}^{\text{CT}}(x_{\text{c}\rightarrow\text{f}}),E_{\text{f}}^{\text{AF}}(x_{\text{c}\rightarrow\text{f}}))$
and
$\hat{x}_{\text{f}}=D_{\text{c}}(E_{\text{c}}^{\text{CT}}(x_{\text{f}\rightarrow\text{c}}),E_{\text{c}}^{\text{AF}}(x_{\text{f}\rightarrow\text{c}}))$
are, respectively, the reconstructed images in the artifact-corrupted and
artifact-free domains, and $\psi_{\text{L}}(\cdot)$ and
$\psi_{\text{H}}(\cdot)$ are, respectively, the low-level structural
information that reflects edges and high-level semantic information that
reflects contents measured by a pre-trained network (i.e., VGG19[34] trained
on ImageNet).
To preserve image quality after translation when no alteration is expected, an
image quality consistency (IQC) loss is devised to measure the pixel-wise
difference between the input and identity mapped images as
$\mathcal{L}_{{}_{\textrm{IQC}}}=\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}}}\|x_{\text{c}}-\tilde{x}_{\text{c}}\|_{1}+\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}}}\|x_{\text{f}}-\tilde{x}_{\text{f}}\|_{1},$
(6)
where
$\tilde{x}_{\text{c}}=D_{\text{f}}(E_{\text{f}}^{\text{CT}}(x_{\text{c}}),E_{\text{f}}^{\text{AF}}(x_{\text{c}}))$
and
$\tilde{x}_{\text{f}}=D_{\text{c}}(E_{\text{c}}^{\text{CT}}(x_{\text{f}}),E_{\text{c}}^{\text{AF}}(x_{\text{f}}))$
are the identity mapped images of $x_{\text{c}}$ and $x_{\text{f}}$,
respectively.
Adversarial Losses We apply two types of adversarial losses, i.e., single-
domain adversarial (SD-ADV) loss and cross-domain adversarial (CD-ADV) loss,
to enhance the judgment accuracy of the discriminators. All adversarial losses
are designed with the mean square error function. The SD-ADV loss is
calculated in a specific domain, i.e., $\mathcal{I}_{\text{c}}$ or
$\mathcal{I}_{\text{f}}$, as
$\begin{split}\mathcal{L}_{{}_{\textrm{SD-
ADV}}}=&\frac{1}{2}\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}}}(D^{\textrm{ADV}}_{\text{c}}(x_{\text{c}})-I)^{2}+\frac{1}{2}\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}}}(D^{\textrm{ADV}}_{\text{c}}(x_{\text{f}\rightarrow\text{c}}))^{2}\\\
+&\frac{1}{2}\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}}}(D^{\text{ADV}}_{\text{f}}(x_{\text{f}})-I)^{2}+\frac{1}{2}\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}}}(D^{\text{ADV}}_{\text{f}}(x_{\text{c}\rightarrow\text{f}}))^{2},\end{split}$
(7)
where $D^{\text{ADV}}_{\text{c}}$ and $D^{\text{ADV}}_{\text{f}}$ are the
discriminators used to distinguish between real and fake images respectively
in domains $\mathcal{I}_{\text{c}}$ and $\mathcal{I}_{\text{f}}$, and $I$ is a
matrix of ones with size $N_{1}\times N_{2}$ matching the output of the
discriminator. The cross-domain adversarial (CD-ADV) loss is defined as
$\begin{split}\mathcal{L}_{{}_{\textrm{CD-
ADV}}}=&\frac{1}{2}\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}}}(D^{\text{ADV}}_{\text{f}\leftrightarrow\text{c}}(x_{\text{c}})-I)^{2}+\frac{1}{2}\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}},x_{\text{c}}\sim\mathcal{I}_{\text{c}}}(D^{\text{ADV}}_{\text{f}\leftrightarrow\text{c}}(\hat{x}_{\text{f}\leftrightarrow\text{c}}))^{2}\\\
+&\frac{1}{2}\mathbb{E}_{x_{\text{f}}\sim\mathcal{I}_{\text{f}}}(D^{\text{ADV}}_{\text{c}\leftrightarrow\text{f}}(x_{\text{f}})-I)^{2}+\frac{1}{2}\mathbb{E}_{x_{\text{c}}\sim\mathcal{I}_{\text{c}},x_{\text{f}}\sim\mathcal{I}_{\text{f}}}(D^{\text{ADV}}_{\text{c}\leftrightarrow\text{f}}(\hat{x}_{\text{c}\leftrightarrow\text{f}}))^{2},\end{split}$
(8)
where
$\hat{x}_{\text{c}\leftrightarrow\text{f}}=D_{\text{c}}(z_{\text{f}}^{\text{CT}},z_{\text{c}}^{\text{AF}})$
and
$\hat{x}_{\text{f}\leftrightarrow\text{c}}=D_{\text{f}}(z_{\text{c}}^{\text{CT}},z_{\text{f}}^{\text{AF}})$
are the images reconstructed by cross-domain content information, i.e.,
$z_{\text{f}}^{\text{CT}}$ or $z_{\text{c}}^{\text{CT}}$, and current-domain
artifact information, i.e., $z_{\text{c}}^{\text{AF}}$ or
$z_{\text{f}}^{\text{AF}}$.
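A minimal sketch of the least-squares SD-ADV loss of Equation (7) for one batch is shown below; disc_c and disc_f stand for the two single-domain discriminators, and the translated images are assumed to be precomputed by the decoders.

```python
# Sketch of the SD-ADV loss: real images are pushed toward the all-ones
# target I, translated (fake) images toward zero, in both domains.
import tensorflow as tf

def sd_adv_loss(disc_c, disc_f, x_c, x_f, x_f2c, x_c2f):
    real_c = disc_c(x_c)
    real_f = disc_f(x_f)
    return (0.5 * tf.reduce_mean(tf.square(real_c - tf.ones_like(real_c)))
            + 0.5 * tf.reduce_mean(tf.square(disc_c(x_f2c)))
            + 0.5 * tf.reduce_mean(tf.square(real_f - tf.ones_like(real_f)))
            + 0.5 * tf.reduce_mean(tf.square(disc_f(x_c2f))))
```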
Total Loss In summary, the total loss function of DUNCAN is
$\mathcal{L}_{\textrm{total}}=\mathcal{L}_{{}_{\textrm{SD-
ADV}}}+\mathcal{L}_{{}_{\textrm{CD-ADV}}}+\lambda_{{}_{\textrm{MS-
CC}}}\mathcal{L}_{{}_{\textrm{MS-
CC}}}+\lambda_{{}_{\textrm{PRC}}}\mathcal{L}_{{}_{\textrm{PRC}}}+\lambda_{{}_{\textrm{ERC}}}\mathcal{L}_{{}_{\textrm{ERC}}}+\lambda_{{}_{\textrm{SRC}}}\mathcal{L}_{{}_{\textrm{SRC}}}+\lambda_{{}_{\textrm{IQC}}}\mathcal{L}_{{}_{\textrm{IQC}}},$
(9)
where $\lambda_{{}_{\textrm{MS-CC}}}$, $\lambda_{{}_{\textrm{PRC}}}$,
$\lambda_{{}_{\textrm{ERC}}}$, $\lambda_{{}_{\textrm{SRC}}}$, and
$\lambda_{{}_{\textrm{IQC}}}$ are the loss weights used for controlling the
contributions of the terms in Equation (9).
### 0.15 Implementation Details
DUNCAN was implemented using Keras with Tensorflow backend. Evaluation was
based on a machine with a CPU (Intel i7-8700K) and a GPU (NVIDIA GeForce GTX
1080Ti 11GB RAM). The Adam optimizer with $1\times 10^{-4}$ learning rate was
utilized for minimizing the loss function. For in vivo T1- and T2-weighted
datasets, i.e., IV_T1 and IV_T2, we used $\lambda_{{}_{\textrm{MS-CC}}}=5$,
$\lambda_{{}_{\textrm{PRC}}}=10$, $\lambda_{{}_{\textrm{ERC}}}=5$,
$\lambda_{{}_{\textrm{SRC}}}=5$, and $\lambda_{{}_{\textrm{IQC}}}=1$ for MS-
CC, PRC, ERC, SRC, and IQC losses, respectively. For in silico T1- and
T2-weighted datasets, i.e., IS_T1 and IS_T2, we used $\lambda_{{}_{\textrm{MS-
CC}}}=10$, $\lambda_{{}_{\textrm{PRC}}}=20$,
$\lambda_{{}_{\textrm{ERC}}}=10$, $\lambda_{{}_{\textrm{SRC}}}=10$, and
$\lambda_{{}_{\textrm{IQC}}}=5$ for MS-CC, PRC, ERC, SRC, and IQC losses,
respectively. For both the in vivo and in silico datasets, every three
adjacent slices in each volume were inserted into RGB channels of a color
image, which was then normalized to have a range between -1 and 1 and cropped
to 208$\times$256 from the geometric center. During training, one artifact-
corrupted image and one artifact-free image were randomly selected each time
as inputs.
## References
* [1] Budde, J., Shajan, G., Scheffler, K. & Pohmann, R. Ultra-high resolution imaging of the human brain using acquisition-weighted imaging at 9.4T. _NeuroImage_ 86, 592–598 (2014).
* [2] Zhuo, J. & Gullapalli, R. P. MR artifacts, safety, and quality control. _RadioGraphics_ 26, 275–297 (2006).
* [3] Andre, J. _et al._ Toward quantifying the prevalence, severity, and cost associated with patient motion during clinical MR examinations. _J. Am. Coll. Radiol._ 12, 689–695 (2015).
* [4] Zaitsev, M., Maclaren, J. & Herbst, M. Motion artifacts in MRI: A complex problem with many partial solutions. _Journal of Magnetic Resonance Imaging_ 42, 887–901 (2015).
* [5] Zaitsev, M., Dold, C., Sakas, G., Hennig, J. & Speck, O. Magnetic resonance imaging of freely moving objects: prospective real-time motion correction using an external optical motion tracking system. _NeuroImage_ 31, 1038–1050 (2006).
* [6] Qin, L. _et al._ Prospective head-movement correction for high-resolution MRI using an in-bore optical tracking system. _Magnetic Resonance in Medicine_ 62, 924–934 (2009).
* [7] Ooi, M. B., Krueger, S., Thomas, W. J., Swaminathan, S. V. & Brown, T. R. Prospective real-time correction for arbitrary head motion using active markers. _Magnetic Resonance in Medicine_ 62, 943–954 (2009).
* [8] Schulz, J. _et al._ An embedded optical tracking system for motion-corrected magnetic resonance imaging at 7T. _Magnetic Resonance Materials in Physics, Biology and Medicine_ 25, 443–453 (2012).
* [9] Maclaren, J. _et al._ Measurement and correction of microscopic head motion during magnetic resonance imaging of the brain. _PLoS ONE_ 7, e48088 (2012).
* [10] Maclaren, J., Herbst, M., Speck, O. & Zaitsev, M. Prospective motion correction in brain imaging: A review. _Magnetic Resonance in Medicine_ 69, 621–636 (2012).
* [11] Pipe, J. G. Motion correction with PROPELLER MRI: Application to head motion and free‐breathing cardiac imaging. _Magnetic Resonance in Medicine_ 42, 963–969 (1999).
* [12] Vertinsky, A. T. _et al._ Performance of PROPELLER relative to standard FSE T2-weighted imaging in pediatric brain MRI. _Pediatric Radiology_ 39, 1038–1047 (2009).
* [13] Jin, K. H., McCann, M. T., Froustey, E. & Unser, M. Deep convolutional neural network for inverse problems in imaging. _IEEE Transactions on Image Processing_ 26, 4509–4522 (2017).
* [14] Haskell, M. W. _et al._ Network accelerated motion estimation and reduction (NAMER): Convolutional neural network guided retrospective motion correction using a separable motion model. _Magnetic Resonance in Medicine_ 82, 1452–1461 (2019).
* [15] Johnson, P. M. & Drangova, M. Motion correction in MRI using deep learning. In _Proceedings of the 26th Annual Meeting ISMRM_ (Paris, France, 2018).
* [16] Tamada, D., Kromrey, M.-L., Ichikawa, S., Onishi, H. & Motosugi, U. Motion artifact reduction using a convolutional neural network for dynamic contrast enhanced MR imaging of the liver. _Magnetic Resonance in Medical Sciences_ (2019).
* [17] Küstner, T. _et al._ Retrospective correction of motion-affected MR images using deep learning frameworks. _Magnetic Resonance in Medicine_ 82, 1527–1540 (2019).
* [18] Johnson, P. M. & Drangova, M. Conditional generative adversarial network for 3D rigid-body motion correction in MRI. _Magnetic Resonance in Medicine_ 82, 901–910 (2019).
* [19] Liu, M.-Y., Breuel, T. & Kautz, J. Unsupervised image-to-image translation networks. In _Proceedings of Advances in Neural Information Processing Systems (NIPS)_ , 700–708 (2017).
* [20] Zhu, J.-Y., Park, T., Isola, P. & Efros, A. A. Unpaired image-to-image translation using cycle-consistent adversarial networks. In _Proceedings of IEEE International Conference on Computer Vision (ICCV)_ , 2223–2232 (2017).
* [21] Zhu, J.-Y. _et al._ Toward multimodal image-to-image translation. In _Advances in Neural Information Processing Systems (NIPS)_ , 465–476 (2017).
* [22] Isola, P., Zhu, J.-Y., Zhou, T. & Efros, A. A. Image-to-image translation with conditional adversarial networks. In _Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 1125–1134 (2017).
* [23] Howell, B. R. _et al._ The UNC/UMN baby connectome project (BCP): An overview of the study design and protocol development. _NeuroImage_ 185, 891–905 (2019).
* [24] Perlin, K. An image synthesizer. _ACM SIGGRAPH Computer Graphics_ 19, 287–296 (1985).
* [25] Wang, Z., Bovik, A., Sheikh, H. & Simoncelli, E. Image quality assessment: From error visibility to structural similarity. _IEEE Transactions on Image Processing_ 13, 600–612 (2004).
* [26] Wang, Z., Simoncelli, E. & Bovik, A. Multiscale structural similarity for image quality assessment. In _Proceedings of IEEE Asilomar Conference on Signals, Systems and Computers_ , 1398–1402 (2003).
* [27] Sheikh, H. & Bovik, A. Image information and visual quality. _IEEE Transactions on Image Processing_ 15, 430–444 (2006).
* [28] Wang, Z. & Bovik, A. A universal image quality index. _IEEE Signal Processing Letters_ 9, 81–84 (2002).
* [29] Mao, X. _et al._ Least squares generative adversarial networks. In _Proceedings of IEEE International Conference on Computer Vision (ICCV)_ , 2794–2802 (2017).
* [30] Smith, S. M. Fast robust automated brain extraction. _Human Brain Mapping_ 17, 143–155 (2002).
* [31] Zhang, Y., Brady, M. & Smith, S. Segmentation of brain MR images through a hidden Markov random field model and the expectation-maximization algorithm. _IEEE Transactions on Medical Imaging_ 20, 45–57 (2001).
* [32] Liu, S. _et al._ Code used in article “Learning MRI artifact removal with unpaired data”. (2020). DOI: 10.5281/zenodo.3742351.
* [33] Ulyanov, D., Vedaldi, A. & Lempitsky, V. Improved texture networks: maximizing quality and diversity in feed-forward stylization and texture synthesis. In _Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_ , 6924–6932 (2017).
* [34] Simonyan, K. & Zisserman, A. Very deep convolutional networks for large-scale image recognition. In _Proceedings of International Conference on Learning Representations (ICLR)_ (2015).
# Self-Supervised Learning in Event Sequences: A Comparative Study and Hybrid
Approach of Generative Modeling and Contrastive Learning
Viktor Moskvoretskii1,3,∗ Dmitry Osin1,∗ Egor Shvetsov1 Igor Udovichenko1
Maxim Zhelnin1 Andrey Dukhovny2 Anna Zhimerikina2 Albert Efimov2 Evgeny
Burnaev1
1Skolkovo Institute of Science and Technology
2Sberbank
3HSE University
{V.Moskvoretskii, d.osin, e.shvetsov, i.udovichenko, M.Zhelnin,
<EMAIL_ADDRESS>{Zhimerikina.A.Y, AADukhovny<EMAIL_ADDRESS>
###### Abstract
This study investigates self-supervised learning techniques for obtaining
representations of Event Sequences (EvS), a key modality in various
applications, including but not limited to banking, e-commerce, and
healthcare.
We perform a comprehensive study of generative and contrastive approaches in
self-supervised learning, applying each independently. We find that neither
approach is uniformly superior. Consequently, we explore the potential benefits
of combining these approaches. To achieve this goal, we introduce a novel
method that aligns generative and contrastive embeddings as distinct
modalities, drawing inspiration from contemporary multimodal research.
Generative and contrastive approaches are often treated as mutually exclusive,
leaving a gap for their combined exploration. Our results demonstrate that
this aligned model performs at least on par with, and mostly surpasses,
existing methods and is more universal across a variety of tasks. Furthermore,
we demonstrate that self-supervised methods consistently outperform the
supervised approach on our datasets.
∗ These authors contributed equally to this work.
## 1 Introduction
Motivation. Event Sequences (EvS) are widely used in various real-world
applications, including medicine Waring et al. (2020), biology Valeri et al.
(2023), social media analysis Liguori et al. (2023), fault diagnosis Shao et
al. (2019), churn prediction Jain et al. (2021); Sudharsan and Ganesh (2022),
customer segmentation Carnein and Trautmann (2019), fraud detection Xie et al.
(2022) and more Zhuzhel et al. (2023); Fursov et al. (2021). As a result,
there is a growing need to effectively model such data. Most self-supervised
methods are focused on images, text, speech, or time-series. In Computer
Vision (CV), contrastive learning methods have achieved state-of-the-art
results as a pre-training strategy before supervised fine-tuning He et al.
(2021); Chen et al. (2020); Caron et al. (2021). In Natural Language
Processing (NLP), most modern models rely on generative pre-training Radford
and Narasimhan (2018); Devlin et al. (2019). However, these approaches are
largely unexplored for EvS.
The aim of our work is (1) to study generative and contrastive self-supervised
approaches for pre-training and representation learning in the domain of EvS,
(2) and to determine if they can complement each other. We study two ways to
combine these approaches. For the first, we use a combined loss function. For
the second, drawing inspiration from contemporary multimodal studies Radford
et al. (2021); Zhai et al. (2023); Moskvoretskii and Kuznetsov (2023) we
consider differently pre-trained models as distinct modalities and employ
alignment techniques.
##### Contributions.
We are the first to our knowledge to examine the generative modeling for pre-
training in the domain of EvS. We introduce a novel method called the
Multimodal-Learning Event Model (MLEM) that aligns two pre-training
strategies. Specifically, we use a pre-trained contrastive model to align a
generative model. Our results have demonstrated that, on average, the aligned
model outperforms any single method across a diverse range of tasks and
datasets. Furthermore, our results indicate that self-supervised methods for
pre-training outperform supervised approaches on all the datasets we have
examined.
Furthermore, our study uncovers two significant insights. First, we find that
the generative approach consistently achieves superior performance on tasks
related to the Temporal Point Process (TPP), such as predicting the next event
type and event time.
Second, we find that most of the methods are robust to perturbations along the
time axis. However, they significantly degrade in performance on a synthetic
dataset where the time component plays a crucial role. This observation
suggests that some event sequences can be sufficiently classified using a “Bag
of Words” approach, where the order of events is not as important.
We provide the source code for all the experiments described in this
paper; it is publicly available at
https://github.com/VityaVitalich/MLEM.
## 2 Related work
In the following section, we will review various generative, contrastive,
hybrid, and other related methods and works.
Generative methods for pre-training and representation learning have seen
notable development in recent years in both NLP Radford and Narasimhan (2018);
Devlin et al. (2019); Raffel et al. (2023) and CV He et al. (2021); Assran et
al. (2023); Bachmann et al. (2022). In NLP, typical generative pre-training
procedures include next-token prediction Radford and Narasimhan (2018) and the
masked-token prediction used in BERT Devlin et al. (2019).
In Lin et al. (2022) authors study generative modeling of EvS for TPP tasks.
Similarly to NLP, they use next-event prediction as their main training
objective. While EvS has been studied in such a generative context, the
authors in Lin et al. (2022) did not specifically investigate generative
modeling as a representation learning or pre-training strategy. The similarities
between autoregressive modeling in TPP and the success of generative
approaches in NLP have prompted us to consider the potential for applying
generative-style self-supervised learning and pre-training to the EvS domain.
Contrastive methods are widely recognized He et al. (2020); Grill et al.
(2020). These methods typically draw closer the embeddings of variations of
the same object and distance them from those of different objects. In CoLES
Babaev et al. (2022) authors study contrastive approach for EvS. In their
work, the positive samples consist of subsequences derived from a single
user’s events, while the negative samples are obtained from events of
different users. We utilize this method to examine the efficacy of the
contrastive approach in our study.
Hybrid methods. Our work falls into the category of hybrid self-supervised
approaches Qi et al. (2023); Yang et al. (2023); Lazarow et al. (2017); Oquab
et al. (2023); Zhou et al. (2022); Kim et al. (2021); Kim and Ye (2022);
Shukla and Marlin (2021). Few studies have merged generative and contrastive
methods, typically focusing on loss combination. DINOv2 Oquab et al. (2023)
updates the traditional contrastive loss from DINO Caron et al. (2021) with
iBOT’s Zhou et al. (2022) reconstruction loss through Masked Image Modeling.
Similarly, GCRL Kim et al. (2021) combines these losses via a weighted sum,
significantly outperforming models that are purely generative or contrastive.
EBCLR Kim and Ye (2022) also adopts this approach in the context of energy-
based models, using a sum of contrastive and generative losses.
Other hybrid methods focus on combining generative and supervised learning Liu
and Abbeel (2020). One such method, mTAND Shukla and Marlin (2021), has been
applied for EvS classification.
Supervised learning is commonly used for the EvS classification task. Some
works focus on modeling the underlying dynamics of the EvS process. For
example, irregular samples can be modeled by employing dynamic modeling with
Ordinary Differential Equations Rubanova et al. (2019). In contrast, SeqNAS
Udovichenko et al. (2024) adopts Neural Architecture Search to identify
optimal architectures without strong prior assumptions about the underlying
process. A key finding is the effectiveness of Recurrent Neural Networks
(RNNs) in modeling EvS, leading us to incorporate an RNN encoder in our model.
Furthermore, the authors propose benchmark datasets for EvS classification. We
selected the largest datasets from the benchmark, as self-supervised methods
require substantial data volumes.
Figure 1: A scheme for contrastive pre-training and representation learning, illustrated using two different sequences {$S_{1},S_{2}$}, where $S\in\mathcal{R}^{l\times f}$, with $l$ the sequence length and $f$ the feature dimension. The sub-sequences {$S_{1}^{\prime},S_{2}^{\prime},S_{1}^{\prime\prime},S_{2}^{\prime\prime}$} are sampled using the subsequence sampler described in Section 4.1, and the latent representations {$h_{1}^{\prime},h_{2}^{\prime},h_{1}^{\prime\prime},h_{2}^{\prime\prime}$}, where $h\in\mathcal{R}^{m}$, are produced by the contrastive encoder model $\mathcal{E}_{c}$. In our work, we utilize the GRU (Gated Recurrent Unit) and extract the last hidden state as $h$. The term $L^{con}$ denotes the contrastive loss.

Figure 2: In our generative method, the bottleneck encoder $\mathcal{E}_{g}$ encodes the entire sequence $S\in\mathcal{R}^{l\times f}$ into $h\in\mathcal{R}^{m}$, where $l$ denotes the sequence length and $f$ and $m$ the corresponding feature and hidden sizes. In our work, we utilize the GRU (Gated Recurrent Unit) and extract the last hidden state as $h$. For generation, we employ a transformer decoder with recursive generation conditioned on $h$. The term $L^{LM}$ denotes the reconstruction loss, and $[BOS]$ represents the Beginning Of Sequence token.

Figure 3: MLEM approach for EvS, illustrated with two different sequences {$S_{1},S_{2}$}. Both the contrastive (CON) and generative (GEN) encoders map the sequences {$S_{1},S_{2}$} into the corresponding latent representations {$h_{1}^{c},h_{2}^{c},h_{1}^{g},h_{2}^{g}$}. We align the representations from the different models using SigLIP and compute the alignment loss $L^{align}$ to train GEN. Simultaneously, we train the GEN encoder with the reconstruction objective $L^{LM}$. In the end, only the GEN encoder is used as the final model for fine-tuning and obtaining representations. Throughout the procedure, we use a pre-trained CON encoder with frozen weights.
## 3 Event Sequences Preliminaries
Datasets used in this work can be described as sets of sequences:
$C=\\{(S_{1},y_{1}),(S_{2},y_{2}),\ldots\\}$, where $y_{i}$ is an attribute
corresponding to the entire sequence or some target. Each
$S_{i}=((t^{1}_{i},\Upsilon^{1}_{i}),(t^{2}_{i},\Upsilon^{2}_{i}),\ldots)$ is
a sequence of events $x^{j}_{i}=(t^{j}_{i},\Upsilon^{j}_{i})$, where
$t_{i}^{j}$ represents the time when the event $x_{i}^{j}$ occurred and
$t_{i}^{j}\leq t^{j+1}_{i}$. The set of sub-events
$\Upsilon^{j}_{i}=\\{k^{1},k^{2},\ldots\\}$ describes the event $x_{i}^{j}$.
It is important to note that in this work, the terms Event Sequences (EvS) and
Irregularly Sampled Time-series (ISTS) are used interchangeably, as the
occurrence of measurement or sampling for the latter can be seen as an event.
## 4 Methodology
### 4.1 Contrastive Learning
Contrastive Learning encodes a sequence $S$ into a compact representation
$f_{e}:S_{i}\rightarrow h_{i}\in\mathcal{R}^{m}$, by bringing positive pairs
(i.e., semantically similar objects) closer to each other in the embedding
space, while pushing negative pairs (i.e., dissimilar objects) further apart.
In the CoLES Babaev et al. (2022) framework, the authors suggest constructing
a set of positive pairs by sampling sub-sequences from a sequence
$S_{i}\rightarrow\\{S_{i}^{{}^{\prime}},S_{i}^{{}^{\prime\prime}}\\}$, where
each element in $\\{S_{i}^{{}^{\prime}},S_{i}^{{}^{\prime\prime}}\\}$ is
shorter than $S_{i}$ and elements in
$\\{S_{i}^{{}^{\prime}},S_{i}^{{}^{\prime\prime}}\\}$ may intersect; the
number of sub-sequences is not limited to two. We adopt this approach and
refer to it as the subsequence sampler in Figure 1. Further, we utilize loss
function (1) from Hadsell et al. (2006). The overall procedure is illustrated
in Figure 1.
$\begin{split}L^{con}&=\frac{1}{|C|}\sum_{i=1}^{|C|}\sum_{j=1}^{|C|}\Bigg{(}z_{ij}\frac{1}{2}||h_{i}-h_{j}||_{2}^{2}\\\
&+(1-z_{ij})\frac{1}{2}\max\\{0,\rho-||h_{i}-h_{j}||_{2}\\}^{2}\Bigg{)}\end{split}$
(1)
Here, $z_{ij}\in\\{0,1\\}$ indicates whether two objects are semantically similar, $h$ is
the sequence embedding, $\rho$ is the minimal margin between dissimilar
objects, and $C=\\{S_{1},S_{2},\ldots\\}$ is a set of sequences.
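A PyTorch-style sketch of this loss (not the authors' implementation) is given below, normalized by the number of sequences as in Equation (1).

```python
# Pairwise contrastive loss of Equation (1).
# h: (n, m) batch of sequence embeddings; z: (n, n) tensor with z[i, j] = 1
# for semantically similar pairs and 0 otherwise; rho is the margin.
import torch

def contrastive_loss(h, z, rho=1.0):
    d = torch.cdist(h, h, p=2)                                # pairwise distances
    pos = z * 0.5 * d.pow(2)                                  # pull positives together
    neg = (1 - z) * 0.5 * torch.clamp(rho - d, min=0).pow(2)  # push negatives past rho
    return (pos + neg).sum() / h.size(0)                      # 1/|C| normalization
```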
### 4.2 Generative Modeling
We employ generative modeling to train our bottleneck encoder. Although our
methodology is similar to Sequence-to-Sequence models, it differs as we use a
bottleneck encoder to derive a single vector representation for the entire
sequence.
For the encoder, we utilize RNN, which extracts the final hidden state $h$,
which is subsequently passed into a Transformer decoder Vaswani et al. (2017).
The decoder reconstructs the entire sequence, conditioned on $h$ via the
Cross-Attention mechanism, in an auto-regressive manner. This entire process
is illustrated in Figure 2.
To train the model we use the next event prediction objective, which is
similar to training language models. Nonetheless, as each event in the
sequence $x^{j}_{i}=(t^{j}_{i},\Upsilon^{j}_{i})$ contains multiple sub-events
$\Upsilon=\\{k^{1},k^{2},\ldots\\}$, we need to reconstruct all of them. To
this end, we apply the cross-entropy loss for each categorical $k$ and the
Mean Squared Error (MSE) loss for each real-valued $k$. The final loss is a
cumulative sum of the losses for all elements in
$\Upsilon=\\{k^{1},k^{2},\ldots\\}$, plus the MSE loss for time $t$. We denote
this loss as $L^{LM}$.
Intuitively, this procedure requires our encoder to develop a representation
informative enough for the decoder to accurately reconstruct the entire
sequence. Since we map all our sequences $S$ into $h\in\mathcal{R}^{m}$ we
call all encoders bottleneck encoders.
All details related to the model can be found in Section 5.3.
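The following PyTorch-style sketch outlines this encoder-decoder pair with the sizes reported in Section 5.3 (GRU hidden size 512; a 3-layer, 2-head decoder with model size 128 and feed-forward size 256). The input feature size is a placeholder, and BOS shifting and the per-feature prediction heads are omitted for brevity; this is not the authors' code.

```python
# Bottleneck encoder-decoder sketch: a GRU encodes the sequence into its last
# hidden state h; a transformer decoder reconstructs the sequence conditioned
# on h via cross-attention.
import torch
import torch.nn as nn

class BottleneckSeq2Seq(nn.Module):
    def __init__(self, f=96, m=512, d_model=128):
        super().__init__()
        self.encoder = nn.GRU(f, m, batch_first=True)
        self.to_memory = nn.Linear(m, d_model)   # h conditions the decoder
        layer = nn.TransformerDecoderLayer(d_model, nhead=2,
                                           dim_feedforward=256,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.in_proj = nn.Linear(f, d_model)

    def forward(self, s):                            # s: (batch, l, f)
        _, h = self.encoder(s)                       # last hidden state, (1, batch, m)
        memory = self.to_memory(h.transpose(0, 1))   # (batch, 1, d_model)
        tgt = self.in_proj(s)                        # teacher-forced decoder inputs
        mask = nn.Transformer.generate_square_subsequent_mask(s.size(1))
        return self.decoder(tgt, memory, tgt_mask=mask)
```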
Method | ABank | Age | PhysioNet | Pendulum | TaoBao
---|---|---|---|---|---
| ROC-AUC | Accuracy | ROC-AUC | MSE | ROC-AUC
Supervised | $0.768\pm 0.000$ | $0.602\pm 0.005$ | $0.790\pm 0.021$ | $\underline{0.33\pm 0.02}$ | $0.684\pm 0.002$
Contrastive | $0.729\pm 0.033$ | $0.638\pm 0.007$ | $\mathbf{0.815\pm 0.013}$ | $\mathbf{0.26\pm 0.02}$ | $0.679\pm 0.003$
Generative | $\underline{0.788\pm 0.003}$ | $0.639\pm 0.007$ | $0.787\pm 0.007$ | $\mathbf{0.26\pm 0.03}$ | $\mathbf{0.695\pm 0.004}$
Naïve | $0.658\pm 0.020$ | $0.638\pm 0.007$ | $0.759\pm 0.024$ | $\mathbf{0.26\pm 0.04}$ | $0.691\pm 0.002$
MLEM | $\mathbf{0.790\pm 0.000}$ | $\mathbf{0.642\pm 0.005}$ | $0.780\pm 0.004$ | $\mathbf{0.26\pm 0.05}$ | $\mathbf{0.695\pm 0.002}$
Table 1: Evaluation of self-supervised methods fine-tuned using supervised
loss. ABank, PhysioNet, and TaoBao are reported with ROC-AUC, Age with
accuracy, and Pendulum with MSE.
### 4.3 Aligning generative encoder
To align the embeddings obtained through generative modeling with those
acquired through contrastive learning, we treat them as distinct modalities
and employ a contrastive aligning procedure inspired by CLIP Radford et al.
(2021). We utilize the recent SigLIP loss Zhai et al. (2023) for this purpose.
The resulting aligned encoder model is referred to as the Multimodal-Learning
Event Model (MLEM).
Overall MLEM training procedure is exactly the same as for the generative
model described in Section 4.2, except the alignment, which incurs an
additional loss (2) resulting in the total loss (3). Training MLEM requires a
pre-trained contrastive encoder. Using this encoder we train a generative
model and align its embeddings with the embeddings produced by the contrastive
encoder. Contrastive encoder weights are not updated during training. Both
generative $\mathcal{E}_{g}$ and contrastive $\mathcal{E}_{c}$ encoders
receive a set of sequences $C=\\{S_{1},S_{2},\ldots\\}$ and map each sequence
$S$ into corresponding hidden states $h^{g}$ and $h^{c}$, then the goal of
MLEM alignment is to pull embeddings obtained from the same sequence closer to
each other and to push embeddings from different sequences further away. This
procedure is illustrated in Figure 3. Below is the alignment loss we use:
$L^{align}=\frac{1}{|C|}\sum_{i=1}^{|C|}\sum_{j=1}^{|C|}\underbrace{\log\frac{1}{1+e^{z_{ij}(-t\cdot{h^{g}_{i}}\cdot
h^{c}_{j}+b)}}}_{L^{align}_{ij}}$ (2)
Here, $z_{ij}\in\\{-1,1\\}$ indicates if a pair of embeddings originated from
the same sequence $S$, $t$ and $b$ denote the temperature and bias,
respectively, both are learnable parameters.
Total loss to train MLEM:
$L=\alpha L^{LM}(S,\hat{S})+\beta
L^{align}(\mathcal{E}_{g}(S),\mathcal{E}_{c}(S))$ (3)
Here, $S$ and $\hat{S}$ denote original and reconstructed sequences, $\alpha$
and $\beta$ are hyperparameters that adjust the strength of alignment.
All technical details can be found in Section 5.3.
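A PyTorch-style sketch of this alignment loss (not the authors' implementation) is shown below, negated relative to Equation (2) so that the quantity is minimized during training.

```python
# SigLIP-style alignment loss. h_g, h_c: (n, m) generative/contrastive
# embeddings; t and b are the learnable temperature and bias. Diagonal pairs
# come from the same sequence (z = +1); off-diagonal pairs are negatives (-1).
import torch
import torch.nn.functional as F

def siglip_align_loss(h_g, h_c, t, b):
    logits = t * (h_g @ h_c.T) - b
    z = 2 * torch.eye(h_g.size(0), device=h_g.device) - 1
    return -F.logsigmoid(z * logits).sum() / h_g.size(0)
```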
### 4.4 Naïve method
As a baseline, we also examine a straightforward, or Naïve, approach that
merges generative and contrastive methods. This is achieved by incorporating a
contrastive objective into the generative model, akin to methodologies used in
prior studies Kim et al. (2021); Oquab et al. (2023). The final loss is a
weighted sum of objectives $L=\alpha L^{LM}+\beta L^{con}$. One of the
downsides of this implementation is that we cannot utilize full-length
sequences, because sub-sequence sampling is required for the contrastive procedure.
Further, we compare the Naïve method with MLEM and show that, while it is very
close by design, it leads to inferior results.
## 5 Experiments
### 5.1 Datasets
In our study, we use five EvS datasets formally outlined in Section 3.
* •
A subset of EvS datasets from SeqNAS benchmark Udovichenko et al. (2024), more
precisely: ABank, Age, TaoBao. ABank and Age are bank transactions and TaoBao
is user activity.
* •
PhysioNet 2012 comprises highly sparse medical event data, with the goal of
predicting mortality.
* •
Pendulum is a synthetic dataset simulating pendulum motion with the objective
of predicting its length using a sequence of coordinates. This dataset was
created specifically to evaluate a model’s capability to effectively
incorporate the time component in sequence modeling.
Each dataset contains a range of features, both numerical and categorical. All
datasets, with the exception of ABank, consist of irregularly sampled time
steps. Comprehensive details for each dataset are available in the
supplementary materials.
| ABank | Age | PhysioNet | Pendulum | TaoBao
---|---|---|---|---|---
| ROC-AUC | TPP Acc. | Acc. | TPP Acc. | ROC-AUC | TPP MSE | MSE | TPP MSE | ROC-AUC | TPP Acc.
| Linear Probing
Contrastive | $0.678\pm 0.003$ | $0.37$ | $0.623\pm 0.010$ | $0.28$ | $\mathbf{0.819\pm 0.003}$ | $1.05$ | $\mathbf{0.37\pm 0.02}$ | $0.8$ | $0.680\pm 0.001$ | $0.22$
Generative | $0.753\pm 0.003$ | $0.43$ | $0.610\pm 0.004$ | $0.3$ | $0.759\pm 0.010$ | $\mathbf{0.93}$ | $0.74\pm 0.09$ | $0.8$ | $\mathbf{0.689\pm 0.005}$ | $\mathbf{0.35}$
Naïve | $0.703\pm 0.002$ | $0.37$ | $0.602\pm 0.003$ | $\mathbf{0.31}$ | $0.733\pm 0.005$ | $\mathbf{0.93}$ | $0.59\pm 0.07$ | $0.8$ | $0.680\pm 0.003$ | $0.32$
MLEM | $\mathbf{0.757\pm 0.000}$ | $\mathbf{0.43}$ | $\mathbf{0.627\pm 0.000}$ | $0.29$ | $0.762\pm 0.010$ | $\mathbf{0.93}$ | $0.41\pm 0.01$ | $\mathbf{0.79}$ | $0.683\pm 0.003$ | $\mathbf{0.35}$
| Non-linear Probing
Contrastive | $0.691\pm 0.004$ | $0.37$ | $0.629\pm 0.010$ | $0.28$ | $\mathbf{0.822\pm 0.001}$ | $1.05$ | $\mathbf{0.37\pm 0.00}$ | $0.8$ | $0.686\pm 0.001$ | $0.22$
Generative | $0.758\pm 0.003$ | $0.43$ | $0.618\pm 0.004$ | $0.3$ | $0.764\pm 0.006$ | $\mathbf{0.93}$ | $0.63\pm 0.02$ | $0.8$ | $0.684\pm 0.002$ | $\mathbf{0.35}$
Naïve | $0.704\pm 0.005$ | $0.37$ | $0.608\pm 0.002$ | $\mathbf{0.31}$ | $0.774\pm 0.008$ | $\mathbf{0.93}$ | $0.57\pm 0.05$ | $0.8$ | $\mathbf{0.690\pm 0.002}$ | $0.32$
MLEM | $\mathbf{0.759\pm 0.003}$ | $\mathbf{0.43}$ | $\mathbf{0.634\pm 0.005}$ | $0.29$ | $0.780\pm 0.001$ | $\mathbf{0.93}$ | $0.40\pm 0.01$ | $\mathbf{0.79}$ | $0.688\pm 0.002$ | $\mathbf{0.35}$
Table 2: Evaluation of self-supervised methods using linear and non-linear
probing for downstream tasks and TPP tasks. The table reports the mean and
standard deviation calculated from three different seeds for downstream task
metrics. For TPP tasks, only the mean values from three seeds are presented,
as the standard deviation was consistently $0.00$. In the table, the best-
performing values are highlighted in bold, while the second-best values are
underlined.
### 5.2 Evaluation objectives
To evaluate the effectiveness of self-supervised training strategies, our
study focuses on two key aspects: the quality of the embeddings and the
effectiveness of fine-tuning the entire network, following self-supervised
studies Dubois et al. (2024, 2021).
#### 5.2.1 Main Objectives
To assess the quality of embeddings, we primarily rely on metrics from
downstream tasks. This involves utilizing both linear and non-linear probing
methods. For non-linear probing, we employ Gradient Boosting algorithms with
LGBM Ke et al. (2017) applied directly to the embeddings.
We evaluate embeddings on several tasks, including the prediction of an
attribute or a target $y_{i}$ given the entire $S_{i}$ and the prediction of
consequent event $x^{j}$ given a set of
$S_{i}=\\{x_{i}^{1},\ldots,x^{j-1}_{i}\\}$. The second objective addresses the
TPP task, which involves predicting the type of the next event or the time of
the next event’s arrival. We have processed each dataset such that the target
is either the category of the next event (for datasets that include this
feature) or the time until the next event (for datasets without a primary
event category).
#### 5.2.2 Secondary Objectives
We incorporate additional metrics to assess the quality of embeddings.
Specifically, we utilize anisotropy and intrinsic dimension, drawing on
insights from previous research Nakada and Imaizumi (2020); Razzhigaev et al.
(2023). In Section 6.3, we compare our findings with the results presented in
the aforementioned works.
Anisotropy assesses the non-uniform distribution of embeddings in space,
providing insights into the contextualization of our embedding. Lower
anisotropy in embeddings has been linked to better model performance Ait-Saada
and Nadif (2023). In line with the approach used in Razzhigaev et al. (2023),
we compute anisotropy as the ratio of the highest singular value to the sum of
all singular values:
$Anisotropy(H)=\frac{\sigma_{1}^{2}}{\sum\limits_{i}\sigma_{i}^{2}}$
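For concreteness, a minimal numpy sketch of this measure is given below: the share of the leading squared singular value of the embedding matrix $H$.

```python
# Anisotropy of an embedding matrix H as defined above.
import numpy as np

def anisotropy(H):
    """H: (n_sequences, m) matrix of sequence embeddings."""
    s = np.linalg.svd(H, compute_uv=False)   # singular values sigma_i
    return (s[0] ** 2) / np.sum(s ** 2)
```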
The intrinsic dimension evaluates the optimal dimensionality of data, shedding
light on the core information captured by the embeddings. To determine the
intrinsic dimension, we employ the method proposed in Facco et al. (2017).
This method examines how the volume of an $n$-dimensional sphere changes with
dimension $d$. Further details are available in the original paper or in
Razzhigaev et al. (2023).
### 5.3 Models
Feature embeddings. To transform features into a numerical vector, we use an
embedding layer for categorical features and linear projection for numerical
ones. For ABank, Age and TaoBao we set the dimension of each feature to 32.
For Pendulum and PhysioNet, the dimension is set to 8. Additionally, we use the
time difference between events as a separate feature.
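A PyTorch-style sketch of this embedding step (not the authors' code) is shown below; the category cardinalities and feature counts are placeholders.

```python
# Per-event feature embedding: categorical features go through embedding
# tables, numerical features (including the inter-event time difference)
# through a linear projection; the results are concatenated per event.
import torch
import torch.nn as nn

class EventEmbedder(nn.Module):
    def __init__(self, cat_cardinalities, n_numeric, dim=32):
        super().__init__()
        self.cat_embs = nn.ModuleList(
            [nn.Embedding(card, dim) for card in cat_cardinalities])
        self.num_proj = nn.Linear(n_numeric, dim)

    def forward(self, cats, nums):
        """cats: (batch, len, n_cat) long; nums: (batch, len, n_numeric)."""
        parts = [emb(cats[..., i]) for i, emb in enumerate(self.cat_embs)]
        parts.append(self.num_proj(nums))
        return torch.cat(parts, dim=-1)
```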
For consistency in our experiments, we use the same encoder architecture
across all training strategies and datasets. We use a single-layer GRU with a
hidden size of 512, we take the last hidden state as sequence embedding. The
GRU is selected for its proven effectiveness in encoding time-ordered
sequences and its extensive application in relevant literature, providing a
reliable baseline for our study Rubanova et al. (2019); Tonekaboni et al.
(2021); Yoon et al. (2019); Udovichenko et al. (2024).
For Supervised model, we use the aforementioned encoder with a linear head.
Contrastive learning approach is described at 4.1 and uses the same encoder as
mentioned above. Furthermore, we integrate a Feed Forward Projector atop the
GRU for loss calculation, a technique commonly adopted in several studies for
enhanced performance Grill et al. (2020); Oquab et al. (2023).
For the generative modeling, we employ a vanilla transformer decoder
configured with LayerNorm, consisting of 3 layers, 2 heads, a hidden size of
128, and a Feed Forward hidden size of 256. To ensure training stability, we
also apply LayerNorm to the sequence embedding produced by the encoder.
Additionally, a projection layer is used atop the decoder to predict each
feature.
### 5.4 Training details
To maintain consistency in our experiment, we trained all models using
identical parameters. Models were trained for 100 epochs on datasets with
fewer than 100K sequences and for 40 epochs on larger datasets. The learning
rate was set at 1e-3, weight decay at 3e-3, and the batch size was fixed at
128. For the MLEM model, we set $\alpha$ at 1 and $\beta$ at 10. We averaged
the results across three different seeds to ensure reliability and
reproducibility.
## 6 Results
### 6.1 Self-supervised pre-training for fine-tuning
Here we evaluate the effectiveness of self-supervised pre-training strategies
prior to supervised fine-tuning the entire model. For fine-tuning, we employed
pre-trained encoders with randomly initialized linear layers and trained the
model for 10 epochs. The results presented in Table 1 indicate that, among all
the evaluated pre-training methods, MLEM consistently achieves the most
favorable results across all datasets, with the exception of the highly sparse
PhysioNet dataset. Furthermore, it is noteworthy that supervised learning
without self-supervised pre-training consistently produces inferior results.
### 6.2 Embeddings probing
We conducted a comprehensive analysis of the quality of embeddings using both
linear and non-linear probing techniques, results are presented in Table 2. We
evaluated performance on various tasks described in Section 5.2.1, including
EvS classification and regression tasks for the entire sequence, as well as
TPP tasks. Despite some variations in absolute values, the overall trends and
rankings of methods remain consistent across both probing strategies.
Our assessment reveals that neither the contrastive nor the generative
approach consistently outperforms the other. However, the MLEM model
consistently demonstrates superior performance across most datasets. Notably,
when MLEM is not the top performer, it often ranks second, highlighting its
versatility. By effectively integrating the advantages of both contrastive and
generative techniques, this approach consistently delivers strong overall
performance across a range of datasets.
Figure 4: Mean percentage change averaged across datasets. The X-axis
represents the dropout probability, while the Y-axis indicates the mean change
in the linear probing downstream metric, compared to the metric with no
dropout.
### 6.3 Anisotropy and Intrinsic Dimension
Method | Anisotropy $\downarrow$ | Intrinsic Dimension $\uparrow$
---|---|---
Contrastive | $0.11\pm 0.04$ | $10.15\pm 6.12$
Generative | $0.08\pm 0.03$ | $14.86\pm 10.12$
Naïve | $0.07\pm 0.04$ | $11.26\pm 6.51$
MLEM | $\mathbf{0.06\pm 0.03}$ | $\mathbf{15.86\pm 9.02}$
Table 3: Average anisotropy and intrinsic dimension across datasets
We conducted an evaluation of anisotropy and the intrinsic dimensions of
embeddings from various pre-training strategies, as detailed in Section 5.2.2. As a
result, we show that MLEM has the highest intrinsic dimension and the lowest
anisotropy, on average, across all datasets. This finding may indicate that
our pre-training strategy effectively enables the embeddings to encapsulate a
greater amount of information and ensures their uniform distribution in space.
The results are presented in Table 3.
Contrary to the findings in Dubois et al. (2024), we did not observe any
correlation between intrinsic dimension and downstream performance metrics.
Similarly, we did not find any correlation for anisotropy, which aligns with
the aforementioned work.
Figure 5: Correlation plots for anisotropy and intrinsic dimension with
normalized performance.
Figure 6: The t-SNE visualizations showcase embeddings resulting from various
pre-training strategies. The top row displays the embeddings for the Age
dataset, while the bottom row illustrates those for the TaoBao dataset. Each
point represents a sequence $S_{i}$ from a given dataset, colored according
to the corresponding attribute $y_{i}$. There are 4 classes for Age and 2
classes for TaoBao.
### 6.4 Visualization
The findings related to anisotropy and intrinsic dimension align with the
t-SNE visualization of the Age and TaoBao datasets shown in Figure 6. While
contrastive embeddings typically form distinct clusters, the generative
approaches and MLEM provide a more complex embedding space.
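Such a plot can be reproduced along the following lines (synthetic embeddings and labels stand in for the real ones):

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

Z = np.random.randn(500, 64)        # stand-in for sequence embeddings
y = np.random.randint(0, 4, 500)    # stand-in for the attributes y_i

Z2d = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(Z)
plt.scatter(Z2d[:, 0], Z2d[:, 1], c=y, s=4, cmap="tab10")
plt.title("t-SNE of sequence embeddings")
plt.show()
```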
## 7 Embedding Robustness
Our study also explores the robustness of model embeddings to specific types
of perturbations. We focus on examining the impact of dropout and the
shuffling of events within one sequence on the performance of the embeddings
on downstream tasks.
### 7.1 Sensitivity to data omission
To investigate the effects of data omission, we applied a random dropout
strategy, removing entire events from a sequence along the time axis, with
varying probabilities $p_{dropout}\in\\{0.1,0.3,0.5,0.7\\}$. This allowed us
to examine the impact of different levels of data removal on the embeddings,
as seen in Figure 4, by measuring the average percentage decrease in
downstream task metrics.
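A minimal sketch of this augmentation (synthetic `times`/`features` arrays stand in for real sequences; the actual implementation may differ in details):

```python
import numpy as np

def drop_events(times, features, p_dropout, rng):
    """Remove whole events (a time step and all its features) independently
    with probability p_dropout, mimicking the data-omission perturbation."""
    keep = rng.uniform(size=len(times)) >= p_dropout
    return times[keep], features[keep]

rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0, 1, size=100))   # synthetic event times
features = rng.normal(size=(100, 8))           # synthetic event features
for p in (0.1, 0.3, 0.5, 0.7):
    t_aug, x_aug = drop_events(times, features, p, rng)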
MLEM’s performance decreases slightly at a dropout probability of $0.1$ and
more substantially at higher levels. This indicates that MLEM is more
sensitive to dropout: it relies on the presence of all events in a sequence
rather than depending on one specific event. This property could be exploited
in sensitivity analysis or related applications.
Additionally, the generative modeling embeddings exhibited an intriguing
behavior: they not only resisted the decline but actually surpassed their
original performance at a dropout rate of 0.5. To draw reliable conclusions
based on this finding, further work is required with a larger number of
datasets and various models.
### 7.2 Sensitivity to perturbations
In subsequent tests, we examined the impact of perturbing events within a
sequence on downstream task performance. The results, presented in Table 4,
show that perturbing sequence elements had minimal to no effect on the
results. This suggests that current models may be employing a ’Bag-of-Words’
approach, treating a sequence as a set of events rather than exploiting its
sequential nature. One might assume that this discrepancy is due to unsuitable
models or training strategies. However, this hypothesis is contradicted by the
decline in performance on the pendulum dataset, where the time component is
crucial and the same models were used.
This raises the question of whether we could further enhance model performance
by explicitly considering the sequential nature of the data, or if this is
solely a property of the datasets themselves.
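As a minimal sketch of this perturbation (the exact shuffling protocol may differ), one can permute the order of events within a sequence:

```python
import numpy as np

def shuffle_events(times, features, rng):
    """Present the events of one sequence in a random order, destroying the
    sequential structure while keeping the set of events intact."""
    perm = rng.permutation(len(times))
    return times[perm], features[perm]
```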
Method | ABank | Age | PhysioNet | Pendulum | TaoBao
---|---|---|---|---|---
Contrastive | +0.47% | -1.93% | -0.37% | -367.57% | -0.15%
Generative | -0.93% | +0.16% | -6.19% | -91.89% | -0.29%
Naïve | -1.42% | +0.99% | -3.00% | -147.46% | 0.00%
MLEM | -1.45% | -0.16% | -6.56% | -285.37% | +0.29%
Table 4: Percentage change in linear probing performance after shuffling the
event order
## 8 Computational complexity
To obtain the MLEM encoder, we need to train two models, which is not
desirable. However, to demonstrate the practicality of our method, we compare
MLEM with SeqNas Udovichenko et al. (2024) in terms of performance and
computational efficiency. SeqNas employs an architecture search to identify
the best supervised architecture for the task at hand, so we posit that
SeqNas represents an upper bound for downstream metrics. Table 5 presents
results on the two largest datasets we examined, demonstrating that MLEM
achieves performance near the upper bound set by SeqNas at significantly
lower computational cost.
Method | ABank ROC-AUC | ABank GPU hours | Age Accuracy | Age GPU hours
---|---|---|---|---
MLEM | $0.790\pm 0.000$ | $25$ | $0.642\pm 0.005$ | $9$
SeqNas | $0.7963\pm 0.001$ | $288$ | $0.645\pm 0.002$ | $80$
Table 5: Comparison of downstream metrics and computational costs for training
and fine-tuning MLEM versus SeqNas.
## 9 Discussion
We observe that in our settings MLEM outperforms the Naïve method, despite
their initial similarities. One possible explanation for the success of our
method is to view MLEM as a Knowledge Distillation procedure Hinton et al.
(2015). However, further work is required to confirm this hypothesis.
We observed that data perturbation does not significantly impact model
performance, suggesting near time-invariance. This could be attributed to
the nature of the examined datasets or the absence of appropriate modeling
approaches. Both hypotheses suggest the potential for future research in time-
sensitive modeling.
We also observe consistent underperformance of methods that include a
generative component on the PhysioNet dataset. This could be attributed to the high rate
of missing values, which complicates the generation of accurate predictions
for subsequent steps. Furthermore, the relatively small size of the dataset
may be a contributing factor, given that generative modeling is generally
dependent on large amounts of data.
## 10 Conclusion
We proposed a novel approach to combine multiple self-supervised strategies
and demonstrated that this approach yields superior results compared to any
single self-supervised method. We believe this opens up a new direction for
enhancing self-supervised learning.
Our study revealed that self-supervised pre-trained models, especially those
with generative pre-training, outperform purely supervised methods, with MLEM
showing the most promise. This suggests potential benefits in exploring
different pre-training and architectural options.
In our work, we are the first, to our knowledge, to apply generative modeling
to event sequences for representation learning and pre-training. Our findings
show that neither generative nor contrastive pre-training clearly surpasses
the other in effectiveness. To address this, we developed the MLEM method,
which innovatively merges contrastive learning with generative modeling. MLEM
aligns these two approaches as separate modalities, inspired by recent
advances in multimodal research, and it demonstrates enhanced performance on
most datasets for both primary and secondary goals.
## References
* Ait-Saada and Nadif [2023] Mira Ait-Saada and Mohamed Nadif. Is anisotropy truly harmful? a case study on text clustering. In Anna Rogers, Jordan Boyd-Graber, and Naoaki Okazaki, editors, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1194–1203, Toronto, Canada, July 2023. Association for Computational Linguistics.
* Assran et al. [2023] Mahmoud Assran, Quentin Duval, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Yann LeCun, and Nicolas Ballas. Self-supervised learning from images with a joint-embedding predictive architecture, 2023.
* Babaev et al. [2022] Dmitrii Babaev, Nikita Ovsov, Ivan Kireev, Maria Ivanova, Gleb Gusev, Ivan Nazarov, and Alexander Tuzhilin. Coles: contrastive learning for event sequences with self-supervision. In Proceedings of the 2022 International Conference on Management of Data, pages 1190–1199, 2022.
* Bachmann et al. [2022] Roman Bachmann, David Mizrahi, Andrei Atanov, and Amir Zamir. Multimae: Multi-modal multi-task masked autoencoders, 2022.
* Carnein and Trautmann [2019] Matthias Carnein and Heike Trautmann. Customer segmentation based on transactional data using stream clustering. In Advances in Knowledge Discovery and Data Mining: 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14-17, 2019, Proceedings, Part I 23, pages 280–292. Springer, 2019.
* Caron et al. [2021] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers, 2021.
* Chen et al. [2020] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey Hinton. A simple framework for contrastive learning of visual representations, 2020.
* Devlin et al. [2019] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019.
* Dubois et al. [2021] Yann Dubois, Douwe Kiela, David J. Schwab, and Ramakrishna Vedantam. Learning optimal representations with the decodable information bottleneck, 2021.
* Dubois et al. [2024] Yann Dubois, Tatsunori Hashimoto, and Percy Liang. Evaluating self-supervised learning via risk decomposition, 2024.
* Facco et al. [2017] Elena Facco, Maria d’Errico, Alex Rodriguez, and Alessandro Laio. Estimating the intrinsic dimension of datasets by a minimal neighborhood information. Scientific Reports, 7(1), September 2017.
* Fursov et al. [2021] Ivan Fursov, Matvey Morozov, Nina Kaploukhaya, Elizaveta Kovtun, Rodrigo Rivera-Castro, Gleb Gusev, Dmitry Babaev, Ivan Kireev, Alexey Zaytsev, and Evgeny Burnaev. Adversarial attacks on deep models for financial transaction records, 2021.
* Grill et al. [2020] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Avila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent: A new approach to self-supervised learning, 2020.
* Hadsell et al. [2006] Raia Hadsell, Sumit Chopra, and Yann LeCun. Dimensionality reduction by learning an invariant mapping. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1735–1742, 2006.
* He et al. [2020] Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, and Ross Girshick. Momentum contrast for unsupervised visual representation learning, 2020.
* He et al. [2021] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross Girshick. Masked autoencoders are scalable vision learners, 2021.
* Hinton et al. [2015] Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531, 2015.
* Jain et al. [2021] Hemlata Jain, Ajay Khunteta, and Sumit Srivastava. Telecom churn prediction and used techniques, datasets and performance measures: a review. Telecommunication Systems, 76:613–630, 2021.
* Ke et al. [2017] Guolin Ke, Qi Meng, Thomas Finley, Taifeng Wang, Wei Chen, Weidong Ma, Qiwei Ye, and Tie-Yan Liu. Lightgbm: A highly efficient gradient boosting decision tree. In I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017.
* Kim and Ye [2022] Beomsu Kim and Jong Chul Ye. Energy-based contrastive learning of visual representations, 2022.
* Kim et al. [2021] Saehoon Kim, Sungwoong Kim, and Juho Lee. Hybrid generative-contrastive representation learning, 2021.
* Lazarow et al. [2017] Justin Lazarow, Long Jin, and Zhuowen Tu. Introspective neural networks for generative modeling. In Proceedings of the IEEE International Conference on Computer Vision, pages 2774–2783, 2017.
* Liguori et al. [2023] Angelica Liguori, Luciano Caroprese, Marco Minici, Bruno Veloso, Francesco Spinnato, Mirco Nanni, Giuseppe Manco, and Joao Gama. Modeling events and interactions through temporal processes–a survey. arXiv preprint arXiv:2303.06067, 2023.
* Lin et al. [2022] Haitao Lin, Lirong Wu, Guojiang Zhao, Pai Liu, and Stan Z Li. Exploring generative neural temporal point process. arXiv preprint arXiv:2208.01874, 2022.
* Liu and Abbeel [2020] Hao Liu and Pieter Abbeel. Hybrid discriminative-generative training via contrastive learning. arXiv preprint arXiv:2007.09070, 2020.
* Moskvoretskii and Kuznetsov [2023] Viktor Moskvoretskii and Denis Kuznetsov. Imad: Image-augmented multi-modal dialogue. arXiv preprint arXiv:2305.10512, 2023.
* Nakada and Imaizumi [2020] Ryumei Nakada and Masaaki Imaizumi. Adaptive approximation and generalization of deep neural network with intrinsic dimensionality, 2020.
* Oquab et al. [2023] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut, Armand Joulin, and Piotr Bojanowski. Dinov2: Learning robust visual features without supervision, 2023.
* Qi et al. [2023] Zekun Qi, Runpei Dong, Guofan Fan, Zheng Ge, Xiangyu Zhang, Kaisheng Ma, and Li Yi. Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining. arXiv preprint arXiv:2302.02318, 2023.
* Radford and Narasimhan [2018] Alec Radford and Karthik Narasimhan. Improving language understanding by generative pre-training. 2018.
* Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning transferable visual models from natural language supervision, 2021.
* Raffel et al. [2023] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer, 2023.
* Razzhigaev et al. [2023] Anton Razzhigaev, Matvey Mikhalchuk, Elizaveta Goncharova, Ivan Oseledets, Denis Dimitrov, and Andrey Kuznetsov. The shape of learning: Anisotropy and intrinsic dimensions in transformer-based models, 2023.
* Rubanova et al. [2019] Yulia Rubanova, Ricky T. Q. Chen, and David Duvenaud. Latent odes for irregularly-sampled time series, 2019.
* Shao et al. [2019] Siyu Shao, Ruqiang Yan, Yadong Lu, Peng Wang, and Robert X Gao. Dcnn-based multi-signal induction motor fault diagnosis. IEEE Transactions on Instrumentation and Measurement, 69(6):2658–2669, 2019.
* Shukla and Marlin [2021] Satya Narayan Shukla and Benjamin M. Marlin. Multi-time attention networks for irregularly sampled time series, 2021.
* Sudharsan and Ganesh [2022] R Sudharsan and EN Ganesh. A swish rnn based customer churn prediction for the telecom industry with a novel feature selection strategy. Connection Science, 34(1):1855–1876, 2022.
* Tonekaboni et al. [2021] Sana Tonekaboni, Danny Eytan, and Anna Goldenberg. Unsupervised representation learning for time series with temporal neighborhood coding, 2021.
* Udovichenko et al. [2024] Igor Udovichenko, Egor Shvetsov, Denis Divitsky, Dmitry Osin, Ilya Trofimov, Ivan Sukharev, Anatoliy Glushenko, Dmitry Berestnev, and Evgeny Burnaev. SeqNAS: Neural architecture search for event sequence classification. IEEE Access, 12:3898–3909, 2024.
* Valeri et al. [2023] Jacqueline A Valeri, Luis R Soenksen, Katherine M Collins, Pradeep Ramesh, George Cai, Rani Powers, Nicolaas M Angenent-Mari, Diogo M Camacho, Felix Wong, Timothy K Lu, et al. Bioautomated: An end-to-end automated machine learning tool for explanation and design of biological sequences. Cell Systems, 14(6):525–542, 2023.
* Vaswani et al. [2017] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
* Waring et al. [2020] Jonathan Waring, Charlotta Lindvall, and Renato Umeton. Automated machine learning: Review of the state-of-the-art and opportunities for healthcare. Artificial intelligence in medicine, 104:101822, 2020.
* Xie et al. [2022] Yu Xie, Guanjun Liu, Chungang Yan, Changjun Jiang, and MengChu Zhou. Time-aware attention-based gated network for credit card fraud detection by extracting transactional behaviors. IEEE Transactions on Computational Social Systems, 2022.
* Yang et al. [2023] Yonghui Yang, Zhengwei Wu, Le Wu, Kun Zhang, Richang Hong, Zhiqiang Zhang, Jun Zhou, and Meng Wang. Generative-contrastive graph learning for recommendation. 2023.
* Yoon et al. [2019] Jinsung Yoon, Daniel Jarrett, and Mihaela van der Schaar. Time-series generative adversarial networks. In H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alché-Buc, E. Fox, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc., 2019.
* Zhai et al. [2023] Xiaohua Zhai, Basil Mustafa, Alexander Kolesnikov, and Lucas Beyer. Sigmoid loss for language image pre-training, 2023.
* Zhou et al. [2022] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. ibot: Image bert pre-training with online tokenizer, 2022.
* Zhuzhel et al. [2023] Vladislav Zhuzhel, Vsevolod Grabar, Galina Boeva, Artem Zabolotnyi, Alexander Stepikin, Vladimir Zholobov, Maria Ivanova, Mikhail Orlov, Ivan Kireev, Evgeny Burnaev, et al. Continuous-time convolutions model of event sequences. arXiv preprint arXiv:2302.06247, 2023.
## Detailed Dataset Information
Detailed information regarding all the datasets is presented in Table 6.
ABank is a dataset of bank transactions with regularly sampled time-steps,
which represent transaction IDs rather than actual transaction times. The goal
is binary classification: predicting whether a user will default.
Age consists of bank transactions with irregularly sampled time-steps. The
task is to categorize users into one of four age groups.
PhysioNet features medical measurements with irregularly sampled time-steps.
The target is binary classification: predicting in-hospital mortality.
Pendulum is a synthetic dataset created from pendulum simulations with
irregularly sampled time-steps, using the Hawkes process. It includes features
like the x and y coordinates of the pendulum, normalized by its length. Each
sequence has a unique real-valued pendulum length, and the task is to
predict this length. The detailed dataset synthesis procedure is described in
section Pendulum dataset generation.
TaoBao captures user activity on the TaoBao platform with irregular time-
steps. The binary classification task is to predict whether a user will make a
payment in the next 7 days.
For all datasets, categories that appeared fewer than 500 times were
consolidated into a single category. Event time values were normalized to a
unit interval for consistency. In the case of the PhysioNet dataset, features
were aggregated at 360-second intervals. If a feature appeared at least once
within this timeframe, its average was calculated. If a feature was absent, it
was marked with a value of -1. No other manual feature generation or specific
preprocessing was performed, unless otherwise specified.
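A schematic pandas sketch of these preprocessing steps (the column name `time`, the bucket label, and the DataFrame layout are hypothetical, not the actual pipeline):

```python
import pandas as pd

def consolidate_rare(col: pd.Series, min_count: int = 500) -> pd.Series:
    """Merge categories that occur fewer than min_count times into one bucket."""
    counts = col.value_counts()
    rare = counts[counts < min_count].index
    return col.where(~col.isin(rare), other="rare")

def normalize_time(t: pd.Series) -> pd.Series:
    """Scale event times to the unit interval."""
    return (t - t.min()) / (t.max() - t.min())

def aggregate_360s(df: pd.DataFrame) -> pd.DataFrame:
    """PhysioNet-style step: average each feature over 360-second bins,
    marking features absent from a bin with -1 (assumes a numeric `time`
    column in seconds)."""
    bins = (df["time"] // 360).astype(int)
    return df.groupby(bins).mean(numeric_only=True).fillna(-1)
```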
Train–test–validation splits. For the test sets we followed the existing
literature Babaev et al. [2022]; Udovichenko et al. [2024], with the exception
of the PhysioNet dataset, for which we created the split ourselves. The train
and validation sets for each training session were drawn at random in fixed
proportions. All metrics are reported on the test sets. In cases where
sequences exceeded a specified length, we kept the $N$ most recent events,
with $N$ being the maximum sequence length defined in Table 6.
| ABank | Age | TaoBao | PhysioNet | Pendulum
---|---|---|---|---|---
# Observations | 2.7B | 26B | 7.8M | 0.5M | 8.9M
Mean observations per user | 280 | 881 | 806 | 72 | 89
Observations std. per user | 270 | 124 | 1042 | 21 | 11
Max observations per user in modeling | 200 | 1000 | 1000 | 200 | 200
# Sequences | 963M | 30K | 9K | 8K | 100K
# Classes | 2 | 4 | 2 | 2 | -
# Categorical features | 16 | 1 | 2 | 3 | 0
# Real-valued features | 3 | 1 | 0 | 38 | 2
Target | Default | Age group | Payment | Mortality | Length
Table 6: Statistics of datasets
## Pendulum dataset generation
We developed a pendulum dataset to assess models when time dependency is
crucial.
We simulated pendulum motion with different lengths and sampled coordinates
at irregular time intervals derived from a Hawkes process. Our dataset
therefore consists of sequences in which each event is represented by a time
stamp and two coordinates, and each sequence is generated with a different
pendulum length.
We opted for the Hawkes process to emphasize the critical role of accurate
event timing in successful model performance, in correspondence with
real-world applications.
To model the Hawkes process, we consider the following intensity function
$\lambda(t)$ that is given by (4).
$\lambda(t)=\mu+\sum_{t_{i}<t}\alpha e^{-\beta(t-t_{i})}$ (4)
We used the following parameters for the Hawkes process:
* •
$\mu$, the base intensity, was fixed at 10;
* •
$\alpha$, the excitation factor, was chosen to be 0.2;
* •
$\beta$, the decay factor, was set to 1;
* •
$t_{i}$ are the times of the events occurring before time $t$.
An example of event times generated with these parameters is depicted in
Figure 8.
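A minimal sketch of how such event times can be generated with these parameters, using Ogata's thinning algorithm (a sketch, not necessarily the exact generator used):

```python
import numpy as np

def simulate_hawkes(mu=10.0, alpha=0.2, beta=1.0, t_max=5.0, seed=0):
    """Simulate event times of a Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))
    via Ogata's thinning algorithm."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < t_max:
        # The kernel only decays between events, so the intensity at the
        # current time is a valid upper bound until the next event.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)        # candidate inter-arrival time
        if t >= t_max:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if rng.uniform() <= lam_t / lam_bar:       # accept with prob lambda(t)/lam_bar
            events.append(t)
    return np.array(events)

times = simulate_hawkes()
print(f"{len(times)} events in [0, 5]")
```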
Figure 7: The figure illustrates the pendulum motion at various instances,
with time steps determined by a Hawkes process. It captures the pendulum’s
trajectory using only the normalized planar coordinates at these sampled
times.
Figure 8: Example of the temporal distribution of events ranging from 0 to
5 seconds. Each event is marked with a star along the timeline. The y-axis
serves only as a technical aid to separate the events for clarity and does
not convey any additional information.
These parameter values were selected with the intention of generating
sequences that contained approximately 100 events each. Additionally, this
specific combination of $\mu$, $\alpha$, and $\beta$ was designed to create
sequences where events would be densely clustered during certain intervals and
sparsely distributed during others. This configuration allowed us to simulate
scenarios that closely mimic real-world dynamics, where event occurrences can
fluctuate between periods of high and low activity.
To model the pendulum we consider the second-order differential equation:
$\theta^{\prime\prime}+\left(\frac{b}{m}\right)\theta^{\prime}+\left(\frac{g}{L}\right)\sin(\theta)=0$
(5)
where
* •
$\theta^{\prime\prime}$ is the angular acceleration,
* •
$\theta^{\prime}$ is the angular velocity,
* •
$\theta$ is the angular displacement,
* •
$b$ is the damping factor,
* •
$g=9.81\,\text{m/s}^{2}$ is the acceleration due to gravity,
* •
$L$ is the length of the pendulum,
* •
$m$ is the mass of the bob in kg.
To convert this second-order differential equation into two first-order
differential equations, we let $\theta_{1}=\theta$ and
$\theta_{2}=\theta^{\prime}$, which gives us:
$\theta_{2}^{\prime}=\theta^{\prime\prime}=-\left(\frac{b}{m}\right)\theta_{2}-\left(\frac{g}{L}\right)\sin(\theta_{1})$ (6)
$\theta_{1}^{\prime}=\theta_{2}$ (7)
Thus, the first-order differential equations for the pendulum simulation are:
$\theta_{2}^{\prime}=-\left(\frac{b}{m}\right)\theta_{2}-\left(\frac{g}{L}\right)\sin(\theta_{1})$ (8)
$\theta_{1}^{\prime}=\theta_{2}$ (9)
In our simulations, we fixed the damping factor $b=0.5$ and the mass of the
bob $m=1$. The length $L$ of the pendulum is taken from a uniform distribution
$U(0.5,5)$, representing a range of possible lengths from 0.5 to 5 meters. The
initial angular displacement $\theta$ and the initial angular velocity
$\theta^{\prime}$ are both taken from a uniform distribution $U(1,9)$, which
provides a range of initial conditions in radians and radians per second,
respectively.
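A sketch of the corresponding simulation step (reusing `simulate_hawkes` from the sketch above; the solver tolerance is illustrative, while the sampling ranges follow the text):

```python
import numpy as np
from scipy.integrate import solve_ivp

def simulate_pendulum(L, t_events, b=0.5, m=1.0, g=9.81, theta0=2.0, omega0=2.0):
    """Integrate equations (8)-(9) for a damped pendulum of length L and
    sample the normalized coordinates at the given (irregular) event times."""
    def rhs(t, y):
        theta1, theta2 = y
        return [theta2, -(b / m) * theta2 - (g / L) * np.sin(theta1)]

    sol = solve_ivp(rhs, (0.0, t_events[-1]), [theta0, omega0],
                    t_eval=t_events, rtol=1e-8)
    theta = sol.y[0]
    # Normalized by L: the bob sits at (sin(theta), -cos(theta)).
    return np.column_stack([t_events, np.sin(theta), -np.cos(theta)])

rng = np.random.default_rng(0)
L = rng.uniform(0.5, 5.0)                    # pendulum length, the target
theta0, omega0 = rng.uniform(1, 9, size=2)   # initial conditions, as in the text
seq = simulate_pendulum(L, simulate_hawkes(t_max=5.0), theta0=theta0, omega0=omega0)
```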
Our primary objective is to predict the length of the pendulum, denoted as
$L$, using the normalized coordinates $x$ and $y$ on the plane. These
coordinates are scaled with respect to the pendulum’s length, such that the
trajectory of the pendulum is represented in a unitless fashion. This
normalization allows us to abstract the pendulum’s motion from its actual
physical dimensions and instead focus on the pattern of movement. An
illustrative example of this motion is presented in Figure 7, where the path
traced by the pendulum bob is depicted over time.
## Generated datasets
We conducted a preliminary evaluation of our decoder’s ability to reconstruct
sequences from their embeddings. We hypothesize that, while the regenerated
sequences may deviate slightly from the original data, the overall
distribution of features across the whole dataset should align. To investigate
this, we compared the distributions of features in the generated datasets; the
results for the Age dataset are visualized in Figure 9.
There is a notable resemblance between the generated sequences and the actual
data in the distribution of the numerical feature “amount”, particularly
around the mean. However, the model struggles to accurately reproduce the
timing and the key dataset feature, the user group: MLEM tends to
overrepresent the most frequent classes while underrepresenting less common
ones. Moreover, the Generative model, despite otherwise broadly similar
performance, exhibits unexpected behaviour, overproducing some of the rarer
classes.
These findings suggest directions for potential improvements, more
specifically: improving time component modeling, applying different generation
approaches, and studying different architecture designs such as enhancing
either the encoder’s or decoder’s performance.
Figure 9: Distributions of features for the real and generated Age dataset.
True denotes the distributions for the real dataset; Generative and MLEM
denote the distributions for datasets generated from embeddings obtained via
the Generative and MLEM approaches.
# Fill-ins with scalar curvature lower bounds and applications to positive
mass theorems
Stephen McCormick Institutionen för teknikvetenskap och matematik
Luleå tekniska universitet
971 87 Luleå
Sweden <EMAIL_ADDRESS>
###### Abstract.
Given a constant $C$ and a smooth closed $(n-1)$-dimensional Riemannian
manifold $(\Sigma,g)$ equipped with a positive function $H$, a natural
question to ask is whether this manifold can be realised as the boundary of a
smooth $n$-dimensional Riemannian manifold with scalar curvature bounded below
by $C$. That is, does there exist a _fill-in_ of $(\Sigma,g,H)$ with scalar
curvature bounded below by $C$?
We use variations of an argument due to Miao and the author [21] to explicitly
construct fill-ins with different scalar curvature lower bounds, where we
permit the fill-in to contain another boundary component provided it is a
minimal surface. Our main focus is to illustrate the applications of such
fill-ins to geometric inequalities in the context of general relativity. By
filling in a manifold beyond a boundary, one is able to obtain lower bounds on
the mass in terms of the boundary geometry through positive mass theorems and
Penrose inequalities. We consider fill-ins with both positive and negative
scalar curvature lower bounds, which from the perspective of general
relativity corresponds to the sign of the cosmological constant, as well as a
fill-in suitable for the inclusion of electric charge.
###### Contents
1. 1 Introduction
2. 2 Brief Background
3. 3 General Fill-In Construction
4. 4 Fill-Ins With Scalar Curvature Lower Bounds
1. 4.1 Negative scalar curvature lower bound
2. 4.2 Positive scalar curvature lower bound
5. 5 Applications
1. 5.1 The asymptotically hyperbolic positive mass theorem with boundary
2. 5.2 A Penrose-like inequality with electric charge
3. 5.3 Engelhardt–Wall Outer Entropy and Bray’s Inner Bartnik Mass
## 1\. Introduction
Given a closed $(n-1)$-dimensional Riemannian manifold $(\Sigma,g)$ equipped
with a smooth function $H$, a natural question to ask is whether or not the
triple $(\Sigma,g,H)$ admits a non-negative scalar curvature fill-in. That is,
can $(\Sigma,g)$ be realised as the boundary of an $n$-dimensional manifold
$(\Omega,\gamma)$ with non-negative scalar curvature and outward-pointing mean
curvature $H$? Recently the problem was highlighted by Gromov (Problem A in
[13]) and it has been the subject of several recent works (for example, [15,
28, 29]). Here we are predominantly interested in applications of such fill-
ins to mathematical general relativity, where the fill-in $\Omega$ is often
permitted to have another boundary component, provided that it is a minimal
surface. Indeed in many situations where the problem arises, such a minimal
surface inner boundary can itself be filled in and therefore causes no
problems.
In what follows, we will refer to a triple $(\Sigma,g,H)$ as Bartnik data
following the literature on Bartnik’s quasi-local mass, which concerns the
closely related problem of non-negative scalar curvature extensions – that is,
“filling out” to infinity – as opposed to fill-ins (see, for example, the
review articles [6] and [19]). Rather than focusing purely on “fill-ins” with
non-negative scalar curvature, it is interesting to ask if $(\Sigma,g,H)$ can
be realised as the boundary of a Riemannian manifold with any prescribed
scalar curvature lower bound.
This article serves to highlight an elementary approach to constructing fill-
ins used in [21] and demonstrate some new results using the same technique,
with an emphasis on the applications of such fill-ins. For example, these
fill-ins can lead to a kind of positive mass theorem with boundary, which we
illustrate in the case of asymptotically hyperbolic manifolds (Theorem 1.2) as
well as an example for asymptotically flat manifolds equipped with an electric
field (Theorem 5.2). The fill-ins constructed here have a minimal
surface inner boundary, which means they also give a lower bound on the inner
Bartnik mass – a notion of quasi-local mass formulated by Bray [3] in the
spirit of the Bartnik mass, replacing the aforementioned extensions with fill-
ins. We state some of the main results more precisely below. The reader is
directed to Section 2 for more background on the problems considered here.
###### Theorem 1.1 (Fill-ins with negative scalar curvature lower bound).
Let $(\Sigma,g,H)$ be $(n-1)$-dimensional Bartnik data with $H>0$, and $C<0$
be a negative constant such that
$R_{g}-2H\Delta H^{-1}>0$
and
(1.1) $\max\left\\{\max_{\Sigma}\frac{n-2}{n-1}\,\frac{H^{2}}{R_{g}-2H\Delta
H^{-1}},\max_{\Sigma}\frac{H^{2}r_{o}^{2}}{(n-1)^{2}}\right\\}<1-\frac{C}{n(n-1)}r_{o}^{2}$
where $R_{g}$ is the scalar curvature of $g$ and $r_{o}$ is the area radius of
$(\Sigma,g)$. Then there exists a compact manifold $(\Omega,\gamma)$ with
boundary $\partial\Omega=\Sigma_{o}\cup\Sigma_{H}$ where $\Sigma_{o}$ is
isometric to $(\Sigma,g)$ with outward-pointing mean curvature $H$,
$\Sigma_{H}$ is minimal, and the scalar curvature $R_{\gamma}$ of $\gamma$ is
greater than $C$.
###### Remark 1.1.
The condition (1.1) is a technical condition that arises due to the
construction itself and can be thought of as a kind of “pointwise Hawking mass
positivity”. What we mean by this is that in the case that $R_{g}$ and $H$ are
constant, then the condition reduces simply to positivity of a higher-
dimensional Hawking mass tailored to the appropriate cosmological constant.
Note that while the Hawking mass is generally considered in dimension $3$ and
there are several inequivalent formulations of the Hawking mass in higher
dimensions, when $H$ is constant they all agree with the expected value for
spheres in Schwarzschild–anti-de Sitter manifolds. Namely, the expression
$m_{H}(\Sigma,g,H;C)=\frac{r_{o}^{n-2}}{2}\left(1-\frac{r_{o}^{2}}{(n-1)^{2}}\left(H^{2}+\frac{C(n-1)}{n}\right)\right).$
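As a quick consistency check (a sketch using the model profile that reappears in Section 3): the Schwarzschild–anti-de Sitter profile satisfies $u^{\prime 2}=1+\epsilon u^{2}-\frac{2m}{u^{n-2}}$ with $\epsilon=-\frac{C}{n(n-1)}$, and a coordinate sphere of area radius $r_{o}$ has constant mean curvature $H$ satisfying
$\frac{H^{2}r_{o}^{2}}{(n-1)^{2}}=1+\epsilon r_{o}^{2}-\frac{2m}{r_{o}^{n-2}}.$
Solving this for $m$ and substituting the value of $\epsilon$ recovers exactly the expression $m_{H}(\Sigma,g,H;C)$ above.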
From this we are able to prove the following positive mass theorem for
asymptotically hyperbolic manifolds with boundary using the same method as was
used to obtain a Penrose-like inequality in [21]. Specifically, we obtain the
following.
###### Theorem 1.2 (Positive mass theorem for asymptotically hyperbolic
manifold with boundary).
Let $(M,\gamma)$ be an asymptotically hyperbolic $n$-manifold that is spin,
with scalar curvature $R_{\gamma}\geq C$ for some $C<0$, and inner boundary
$\Sigma=\partial M$. Assume that the Bartnik data $(\Sigma,g,H)$ induced on
the boundary satisfies
$R_{g}-2H\Delta H^{-1}>0$
and
$\max\left\\{\max_{\Sigma}\frac{n-2}{n-1}\,\frac{H^{2}}{R_{g}-2H\Delta
H^{-1}},\max_{\Sigma}\frac{H^{2}r_{o}^{2}}{(n-1)^{2}}\right\\}<1-\frac{C}{n(n-1)}r_{o}^{2}.$
Then $(M,\gamma)$ has positive asymptotically hyperbolic mass, in the sense of
Wang [32].
###### Remark 1.2.
The definition of mass in the asymptotically hyperbolic case is somewhat
subtle; however, the precise definition will not be required here. This theorem
follows directly from applying a known positive mass theorem to a manifold
with corners, and for the sake of simplicity we apply the work of Bonini and
Qing directly [2], who prove a positive mass theorem for such manifolds with
corners using Wang’s definition of mass [32].
It is important to remark that the known positive mass theorem for
asymptotically hyperbolic manifolds in fact already applies for manifolds with
boundaries like those considered here provided that
$H\leq\sqrt{\frac{-C(n-1)}{n}}$. So from this perspective, the result provides
only a minor extension. The positive mass theorem with boundary presented here
is more interesting when thought of as leading to a Penrose-like inequality
assuming that the Riemannian Penrose inequality is established for
asymptotically hyperbolic manifolds. Details of this Penrose-like inequality
are given in Section 5, however in order to rigorously prove it we would first
require a proof of the Riemannian Penrose inequality in the asymptotically
hyperbolic case, which remains an open problem.
The next result demonstrates the existence of fill-ins with a non-negative
scalar curvature lower bound.
###### Theorem 1.3 (Fill-ins with non-negative scalar curvature lower bound).
Let $(\Sigma,g)$ be a closed $(n-1)$-dimensional manifold and $H$ a positive
function on $\Sigma$. Suppose for some constant $C\geq 0$ we have
(1.2) $R_{g}-2H\Delta H^{-1}>\frac{n(n-2)H^{2}}{n(n-1)-Cr_{o}^{2}}$
where $R_{g}$ is the scalar curvature of $g$ and $r_{o}$ is the area radius of
$(\Sigma,g)$, which we further require to satisfy $r_{o}<\sqrt{\frac{n(n-1)}{C}}$
when $C>0$. Then there exists a compact manifold with scalar curvature bounded
below by $C$, whose boundary consists of two disconnected components, one
being a minimal surface and the other isometric to $(\Sigma,g)$ with outward-
pointing mean curvature $H$.
###### Remark 1.3.
Note that for coordinate $(n-1)$-spheres in an $n$-sphere, one would have
equality in (1.2). When $C=0$, this reduces precisely to the case established
by Miao and the author in [21].
###### Remark 1.4.
There is no positive mass theorem or Penrose-like inequality in this case, as
the counterexamples of Brendle, Marques and Neves to Min-Oo’s Conjecture [4]
demonstrate that a positive mass theorem in the usual sense does not hold
here. This can be seen explicitly in Proposition 4.1, which demonstrates that
there exist complete fill-ins of Bartnik data corresponding to the
cosmological horizon in a negative mass Schwarzschild–de Sitter manifold.
As another application of the fill-in construction, Section 5.2 establishes a
charged Penrose-like inequality. That is, a lower bound on the total mass of
an asymptotically flat manifold with boundary, equipped with an electric
field, satisfying dominant energy conditions. This is stated precisely in
Theorem 5.1.
This article is organised as follows. Section 2 provides a brief background
and overview of some related results before Section 3 gives the general fill-
in construction used here. Section 4 constructs the fill-ins required for
positive and negative scalar curvature lower bounds, and finally Section 5
provides all of the applications to positive mass-type theorems.
## 2\. Brief Background
A Riemannian manifold with scalar curvature bounded below by a constant $C$
may be interpreted as time-symmetric initial data for the Einstein equations
satisfying the dominant energy condition with cosmological constant
$\Lambda=C/3$. In the case where $C\leq 0$ the notion of total mass of an
isolated system in the context of general relativity is well-understood,
corresponding to the mass of an asymptotically hyperbolic manifold ($C<0$) or
an asymptotically flat manifold ($C=0$). It is therefore no surprise that
notions of mass in general relativity are connected with the problem of fill-
ins with scalar curvature lower bounds.
More specifically, for $C=0$ the positive mass theorem [25, 33] is a well-
known foundational result in mathematical relativity, and for $C<0$ analogous
positive mass theorems have been established for asymptotically hyperbolic
manifolds [9, 32]. That is, a complete asymptotically flat (resp.
asymptotically hyperbolic) Riemannian manifold with scalar curvature bounded
below by $0$ (resp. $C$, where $C<0$) has non-negative mass. The case where
$C>0$, corresponding to a positive cosmological constant, does not have a
standard notion of mass nor appear to have any prospect for a positive mass
theorem to hold in general (see Remark 1.4 and Proposition 4.1).
From the known positive mass theorems, it follows that if Bartnik data
$(\Sigma,g,H)$ can be realised as the boundary of an asymptotically flat or
asymptotically hyperbolic manifold with scalar curvature lower bound of $C$
and negative mass (defined appropriately for the value of $C$), then it cannot
admit a fill-in with scalar curvature bounded below by $C$. This is due to
the fact that if such a fill-in exists, then through a gluing procedure one
could obtain an asymptotically flat or asymptotically hyperbolic manifold with
minimal surface boundary (if the boundary is non-empty) and negative mass,
which would contradict the relevant positive mass theorem. Similarly, quasi-
local positive mass theorems can also provide obstructions to the existence of
fill-ins. For example, Shi and Tam’s proof of the positivity of the Brown–York
mass can be rephrased as a result on fill-ins as follows.
###### Theorem 2.1 (Shi–Tam [26]).
Let $(\Sigma,g,H)$ be $(n-1)$-dimensional Bartnik data where $3\leq n\leq 7$
and with $H>0$, and such that $g$ is isometric to a strictly convex closed
hypersurface in $\mathbb{R}^{n}$. Then if
(2.1) $\int_{\Sigma}Hd\mu_{g}>\int_{\Sigma}H_{o}d\mu_{g},$
where $H_{o}$ is the mean curvature of $(\Sigma,g)$ isometrically embedded
in $\mathbb{R}^{n}$, there exists no fill-in of $(\Sigma,g,H)$ with non-negative
scalar curvature.
###### Remark 2.1.
When $n=3$, the existence of the required isometric embedding is well-known to
be equivalent to the condition that $g$ has positive Gauss curvature [23, 24].
Furthermore, the dimensional restriction here is simply required to apply the
positive mass theorem.
Shi and Tam also proved a positivity statement for an asymptotically
hyperbolic quasi-local mass in dimension $3$, which similarly gives the non-
existence of fill-ins with negative scalar curvature lower bounds as follows.
###### Theorem 2.2 (Shi–Tam [27], see also Wang–Yau [30]).
Let $(\Sigma,g,H)$ be $2$-dimensional topologically spherical Bartnik data
with $H>0$ and Gauss curvature $K_{g}>\frac{C}{6}$ for some $C<0$ that
satisfies
$\int_{\Sigma}(H_{o}-H)\cosh\left(\sqrt{-\frac{C}{6}}\,r\right)d\mu_{g}<0$
where $H_{o}$ is the mean curvature of $(\Sigma,g)$ isometrically embedded in
$3$-dimensional hyperbolic space with constant scalar curvature equal to $C$,
and $r$ is the geodesic distance function from a fixed point in $\Sigma$. Then
$(\Sigma,g,H)$ admits no fill-in with scalar curvature bounded below by $C$.
###### Remark 2.2.
The isometric embedding into hyperbolic space required for Theorem 2.2 not
only exists, but is unique up to an isometry of hyperbolic space [7, 24]. It
seems likely that a version of Theorem 2.2 would also hold in higher
dimensions; however, some care should be taken in checking the details, with
particular attention given to the existence of the required isometric embedding.
It will be a recurring theme that the size of $H$ governs whether or not a
fill-in exists. Jauregui [15] shows this in a clear way with the following
theorem.
###### Theorem 2.3 (Jauregui [15]).
Let $(\Sigma,g,H)$ be $2$-dimensional Bartnik data with positive Gauss
curvature and $H>0$. Then there exists $\lambda_{o}>0$ such that
$(\Sigma,g,\lambda H)$ admits a fill-in with non-negative scalar curvature for
all $\lambda<\lambda_{o}$ and there exists no such fill-in for
$\lambda>\lambda_{o}$.
###### Remark 2.3.
Theorem 2.3 can likely be extended in a straightforward manner to dimensions
higher than $3$, although it relies on Theorem 2.1 so some extra hypotheses
regarding the isometric embedding are likely required.
In a similar spirit to Jauregui’s result [15], Shi, Wang, Wei and Zhu [28]
establish the following result.
###### Theorem 2.4 (Shi–Wang–Wei–Zhu [28]).
Let $g$ be a metric on the sphere $\Sigma=\mathbb{S}^{n-1}$ ($3\leq n\leq 7$)
such that there exists a continuous path in the space of smooth positive
scalar curvature metrics on $\mathbb{S}^{n-1}$ from $g$ to the standard round
metric. Then there exists a constant $h_{o}$ depending on $g$ such that
$(\Sigma,g,H)$ admits no fill-in with non-negative scalar curvature for all
$H>0$ satisfying
(2.2) $\int_{\Sigma}Hd\mu_{g}>h_{o}.$
While there are several results demonstrating that if $H$ is too large in some
sense then no fill-in can exist, it is difficult to explicitly quantify how
large $H$ may be. It would be interesting to obtain an explicit, computable
threshold on $H$ in terms of $g$ beyond which no fill-in can exist, such as an
explicit value for $h_{o}$ in (2.2).
## 3\. General Fill-In Construction
We construct fill-ins of Bartnik data $(\Sigma,g,H)$ by constructing a metric
$\gamma$ on the cylinder $\Sigma\times I$ for some interval $I$, such that one
boundary component induces the Bartnik data while the other is a minimal
surface. This construction originates with [18] and has been used successfully
in several related problems (see [6] for a survey).
We now give the general construction of these collars so we may refer to it
later. Consider a metric of the form
(3.1) $\gamma=A(x)^{2}dt^{2}+E(t)^{2}g$
where $A$ is a positive function on $\Sigma$, $E$ is a positive function of
$t$, and $g$ is a fixed metric on $\Sigma$.
Computing the scalar curvature of the metric $\gamma$ we obtain
(3.2) $E(t)^{2}R_{\gamma}=R_{g}-2A^{-1}\Delta_{\Sigma}A-(n-1)A^{-2}\left((n-2)E^{\prime}(t)^{2}+2E(t)E^{\prime\prime}(t)\right).$
We will be interested in choices of $E$ and $A$ that allow us to prescribe the
mean curvature of some constant $t$ boundary surface, and ensure a prescribed
lower bound on $R_{\gamma}$. The mean curvature of each constant $t$ slice
can be computed directly as
(3.3) $H_{t}(x)=\frac{(n-1)E^{\prime}(t)}{E(t)A(x)}.$
For convenience, we will choose the parameter $t$ such that $t=0$ is the
surface with prescribed mean curvature, and we use $t<0$ for the fill-in, so
that $t$ is increasing towards the outer boundary. We also must prescribe the
induced metric on the outer boundary surface. To this end, we will always
scale $E$ such that $E(0)=1$. Letting $H=H(x)$ be the mean curvature we wish
to prescribe, we find
(3.4) $A(x)=\frac{(n-1)E^{\prime}(0)}{H(x)}$
and then (3.2) becomes
(3.5) $E(t)^{2}R_{\gamma}=R_{g}-2H\Delta_{\Sigma}(H^{-1})-\frac{H^{2}}{(n-1)(E^{\prime}(0))^{2}}\left((n-2)E^{\prime}(t)^{2}+2E(t)E^{\prime\prime}(t)\right).$
In what follows, we will use this construction with different choices of $E$,
coming from profile functions of model spherically symmetric metrics. We now
demonstrate that the idea used to prove the main result of [21] can be used to
construct fill-ins with general scalar curvature lower bounds. Let
$\left(\Sigma,g,H\right)$ be Bartnik data where $\Sigma$ is an $(n-1)$-sphere
and $H$ is a positive function. We consider a metric of the form
(3.6) $\gamma=A(x)^{2}dt^{2}+\frac{u_{m}(t)^{2}}{r_{o}^{2}}g$
on $\Sigma\times[t_{o},0]$ where $r_{o}$ is the area radius of $g$ and
$t_{o}<0$ will be determined later such that $\Sigma\times\\{t_{o}\\}$
corresponds to a minimal surface boundary. Note that this is simply (3.1) with
$E(t)=\frac{u_{m}(t)}{r_{o}}$ and $g(t)=g$ a constant path. In [21] the
function $u_{m}$ was chosen to be a Schwarzschild profile function, however
here we would like to include Schwarzschild–de Sitter and Schwarzschild–anti-
de Sitter profiles too. Specifically $u_{m}$ is taken to be such that
$u_{m}(0)=r_{o}$ and satisfies
$u_{m}^{\prime}(t)=\sqrt{1+\epsilon u_{m}(t)^{2}-\frac{2m}{u_{m}(t)^{n-2}}}$
where $\epsilon\in\mathbb{R}$ depends on the desired scalar curvature lower
bound and $m>0$ is some parameter. Specifically, for some $C\in\mathbb{R}$, we
seek fill-ins with scalar curvature bounded below by $C$, and to do so we will
choose $\epsilon=-\frac{C}{n(n-1)}$. From (3.3), the mean curvature of each
constant $t$ slice is given by
(3.7) $H_{t}(x)=\frac{n-1}{A(x)u_{m}(t)}\sqrt{1+\epsilon
u_{m}(t)^{2}-\frac{2m}{u_{m}(t)^{n-2}}}.$
It is important to note that for $\epsilon\geq 0$, or for $\epsilon<0$ and $m$
not too large, one may find a value of $u_{m}=r_{H}$ such that $1+\epsilon
u_{m}(t)^{2}-\frac{2m}{u_{m}(t)^{n-2}}=0$, corresponding to an apparent
horizon in the model manifold. In fact, if $\epsilon<0$ there are two such
values of $u_{m}$ where this quantity vanishes, in which case $r_{H}$ is taken
to be the smaller of the two, as the larger radius corresponds to a
cosmological horizon in the Schwarzschild–de Sitter manifold. In particular,
if $r_{o}>r_{H}$ then there is always a $t_{o}<0$ such that $H_{t_{o}}=0$. We
will always ensure this is true so that the interior boundary of our collar is
a minimal surface.
Similarly, equation (3.5) for the scalar curvature gives
(3.8) $R_{\gamma}-C=\frac{r_{o}^{2}}{u_{m}^{2}}\left(R_{g}-2H\Delta H^{-1}-\frac{n-2}{n-1}H^{2}\left(1+\epsilon r_{o}^{2}-\frac{2m}{r_{o}^{n-2}}\right)^{-1}-\left(\frac{C}{r_{o}^{2}}+\frac{n}{n-1}H^{2}\epsilon\left(1+\epsilon r_{o}^{2}-\frac{2m}{r_{o}^{n-2}}\right)^{-1}\right)u_{m}^{2}\right).$
When $C=0$ this is exactly what was considered in [21]. We consider the cases
of $C<0$ and $C\geq 0$ separately, however the idea is the same in both cases.
We choose $m>0$ such that the manifold
$\left(\Sigma\times[t_{o},0],\gamma\right)$ has a minimal surface at the
surface $t=t_{o}$, scalar curvature bounded below by $C$ and mean curvature of
the surface $t=0$ prescribed as above.
## 4\. Fill-Ins With Scalar Curvature Lower Bounds
### 4.1. Negative scalar curvature lower bound
We consider the case where the scalar curvature lower bound $C$ is negative,
and first establish Theorem 1.1. Although Theorem 1.2 is essentially a
corollary of this, we reserve the proof of that until Section 5 to discuss
with other applications of the fill-ins satisfying scalar curvature bounds.
###### Proof of Theorem 1.1.
Consider the fill-in constructed above in Section 3, whose metric is given by
(3.6). In this case, we choose $\epsilon=-\frac{C}{n(n-1)}>0$ and the function
$u_{m}$ is the radial profile function for a Schwarzschild–anti-de Sitter
manifold with scalar curvature equal to $C$. By (3.7), the fill-in constructed
above has a minimal surface at some $t=t_{o}$ provided $m>0$. So we simply
seek to choose $m>0$ such that the right-hand side of (3.8) is non-negative. A
straightforward, albeit non-optimal, way to ensure this is to impose
(4.1) $R_{g}-2H\Delta H^{-1}-\frac{n-2}{n-1}H^{2}(1+\epsilon
r_{o}^{2}-\frac{2m}{r_{o}^{n-2}})^{-1}\geq 0$
and
(4.2) $\frac{C}{r_{o}^{2}}+\frac{n}{n-1}H^{2}\epsilon(1+\epsilon
r_{o}^{2}-\frac{2m}{r_{o}^{n-2}})^{-1}\leq 0.$
With our choice of $\epsilon$, (4.2) becomes
(4.3) $m\leq\frac{r_{o}^{n-2}}{2}\left(1+\epsilon
r_{o}^{2}-\frac{H^{2}r_{o}^{2}}{(n-1)^{2}}\right),$
and (4.1) can be expressed similarly as
(4.4) $m\leq\frac{r_{o}^{n-2}}{2}\left(1+\epsilon
r_{o}^{2}-\frac{n-2}{n-1}\frac{H^{2}}{R_{g}-2H\Delta H^{-1}}\right).$
In order for the fill-in to have a minimal surface inner boundary, rather than
a cusp, we require $m>0$, so we obtain a fill-in provided
(4.5) $\max\left\\{\max_{\Sigma}\frac{n-2}{n-1}\,\frac{H^{2}}{R_{g}-2H\Delta
H^{-1}},\max_{\Sigma}\frac{H^{2}r_{o}^{2}}{(n-1)^{2}}\right\\}<1+\epsilon
r_{o}^{2}.$
∎
Note that in the case where $C=0$, (4.2) is trivially satisfied and this
reduces to what was shown in [21].
### 4.2. Positive scalar curvature lower bound
We next turn to consider a positive lower bound on the scalar curvature and
prove Theorem 1.3.
###### Proof of Theorem 1.3.
We again use the fill-in metric (3.6) from Section 3, however now with
$C>0$ and $\epsilon=-\frac{C}{n(n-1)}<0$, so that the model space is the
Schwarzschild–de Sitter family of manifolds. A minimal surface is again
located where $u^{\prime}_{m}=0$; however, here we must take a little more
care with the roots of
(4.6) $1+\epsilon x^{2}-\frac{2m}{x^{n-2}}.$
When $\epsilon\geq 0$ we find that there is only one root, and therefore one
minimal surface in the model space, which represents a black hole’s horizon.
However, when $\epsilon<0$ and provided that $m$ is not too large, (4.6) has
two real positive roots, $0<r_{+}<r_{-}$. These correspond to a black hole
horizon at $r_{+}$ and a cosmological horizon at $r_{-}$ in the model
Schwarzschild–de Sitter manifold. This model is a compact manifold with two
connected minimal surface boundary components, one sphere at $r_{+}$ and
another sphere at $r_{-}$. We will choose $m$ arbitrarily small but positive
and require that the area radius $r_{o}$ of $g$ satisfy $r_{o}\in(r_{+},r_{-})$.
Since the roots $r_{+}$ and $r_{-}$ tend to $0$ and
$\frac{1}{\sqrt{-\epsilon}}$ respectively as $m\to 0$, the condition
$r_{o}^{2}<\frac{n(n-1)}{C}$ guarantees $r_{o}\in(r_{+},r_{-})$ for all
sufficiently small $m$. In particular, we have $r_{o}\geq u_{m}(t)\geq
r_{+}>2m>0$. Turning back
to the scalar curvature equation, (3.8) implies that $R_{\gamma}\geq C$ is
equivalent to
$R_{g}-2H\Delta H^{-1}-\frac{H^{2}}{n-1}\left(1+\epsilon r_{o}^{2}-\frac{2m}{r_{o}^{n-2}}\right)^{-1}\left(n-2+n\epsilon u_{m}^{2}\right)+\frac{n(n-1)\epsilon u_{m}^{2}}{r_{o}^{2}}\geq 0.$
As we do not expect to obtain anything optimal by this method, it will suffice
to make a crude estimate using the fact that $u_{m}$ is positive.
Specifically, we ask that
$R_{g}-2H\Delta H^{-1}-\frac{n-2}{n-1}H^{2}(1+\epsilon
r_{o}^{2}-\frac{2m}{r_{o}^{n-2}})^{-1}\geq 0,$
which is ensured by choosing
$0<m\leq\frac{r^{n-2}}{2}\left(1+\epsilon
r_{o}^{2}-\frac{n-2}{n-1}\frac{H^{2}}{R_{g}-2H\Delta H^{-1}}\right).$
That is, there exists a fill-in with minimal surface boundary and scalar
curvature bounded below by $C$, provided that
(4.7) $\frac{n-1}{n-2}\,\frac{R_{g}-2H\Delta H^{-1}}{H^{2}}>\left(1-\frac{C}{n(n-1)}r_{o}^{2}\right)^{-1}.$
∎
Note that equation (4.7) amounts to a strong, point-wise quasi-local positive
mass assumption, which is somewhat natural in the context of non-positive
scalar curvature lower bounds given the positive mass theorem. However, as
mentioned above, the analogous positive mass theorem does not hold for
positive scalar curvature lower bounds. That is, fill-ins with positive scalar
curvature bounds should exist under far weaker assumptions than in the case of
non-positive scalar curvature bounds.
One can see this from the counterexample to Min-Oo’s conjecture constructed by
Brendle, Marques and Neves in [4]. Therein they establish the existence of
compact manifolds with scalar curvature bounded below by $n(n-1)$, with a
neighbourhood of the boundary isometric to a neighbourhood of the boundary of
the $n$-dimensional hemisphere, and strictly positive scalar curvature
somewhere on the interior. From a perturbation of this counterexample one can
obtain a manifold with the same scalar curvature lower bound with a
neighbourhood of the boundary isometric to a neighbourhood of the cosmological
horizon (unstable minimal surface) in a negative mass Schwarzschild–de Sitter
manifold. Although this is a fairly obvious consequence of the main results of
[4], it does not appear to be recorded explicitly, so we state it here for
completeness.
###### Proposition 4.1.
There exists a compact Riemannian manifold $(M,g)$ with boundary, having
scalar curvature $R_{g}\geq 6$, with the inequality strict ($R_{g}>6$) on an
open subset, such that there exists a subset $\Omega\supset\partial M$
isometric to a neighbourhood of the boundary of a Schwarzschild–de Sitter
manifold with negative mass.
###### Proof.
By Theorem 4 of [4] there exists a metric $g_{o}$ on the hemisphere
$S_{+}^{n}$ with scalar curvature $R_{g_{o}}>n(n-1)$ everywhere, $g_{o}$ is
exactly equal to the standard round metric on $\partial S_{+}^{n}=S^{n-1}$,
and the outward-pointing mean curvature $H$ of $\partial S_{+}^{n}$ with
respect to $g_{o}$ is strictly negative. By a small rescaling, one can also
obtain a metric $g_{\varepsilon}$ that also satisfies
$R_{g_{\varepsilon}}>n(n-1)$ with negative mean curvature on the boundary,
such that $g_{\varepsilon}$ restricted to $\partial S_{+}^{n}$ is round with
area $A_{\varepsilon}>\omega_{n-1}$. Note that the cosmological horizon boundary of a
Schwarzschild–de Sitter manifold of mass $m$ has area $\widetilde{A}_{m}$
satisfying
$m=\frac{1}{2}\left(\frac{\widetilde{A}_{m}}{\omega_{n-1}}\right)^{\frac{n-2}{n-1}}\left(1-\left(\frac{\widetilde{A}_{m}}{\omega_{n-1}}\right)^{\frac{2}{n-1}}\right).$
That is, the metric $g_{\varepsilon}$ restricted to the boundary $\partial
S_{+}^{n}$ is equal to the boundary metric for a negative mass
Schwarzschild–de Sitter manifold. One can therefore apply Theorem 5 of
[4] to obtain the result. Note that although the negative mass
Schwarzschild–de Sitter manifolds are singular at a point, this does not
affect the application of Theorem 5 of [4] since it is purely a local
construction. ∎
###### Remark 4.1.
The above is simply a perturbative construction, so the manifolds obtained
correspond to a (negative) mass parameter very close to zero. It would be
interesting to ask, for Bartnik data $(S^{n-1},g_{o},H=0)$ with $g_{o}$ round,
how large $|S^{n-1}|_{g_{o}}$ can be while still admitting a fill-in with
scalar curvature bounded below by $n(n-1)$.
## 5\. Applications
### 5.1. The asymptotically hyperbolic positive mass theorem with boundary
As mentioned above, in [21], the fill-ins constructed were used to prove a
“Penrose-like” inequality. In particular, it was shown that for an
asymptotically flat manifold with boundary $\Sigma$ satisfying the hypotheses
of Theorem 1.3 with $C=0$, there exists a fill-in with minimal surface
boundary whose area $A$ satisfies
(5.1)
$\left(\frac{A}{\omega_{n-1}}\right)^{\frac{n-2}{n-1}}=\left(\frac{|\Sigma|}{\omega_{n-1}}\right)^{\frac{n-2}{n-1}}\left(1-\frac{n-2}{n-1}\frac{H^{2}}{R_{g}-2H\Delta
H^{-1}}\right),$
and then via the Riemannian Penrose inequality we obtain that the ADM mass is
bounded below by one half of the right-hand side of (5.1).
An analogous result follows by the same reasoning for asymptotically
hyperbolic manifolds with boundary using the fill-in constructed in Section
4.1 if one assumes the Riemannian Penrose inequality holds in this case. While
this version of the Riemannian Penrose inequality is only conjectured, and not
yet established, it is known that the positive mass theorem holds for
asymptotically hyperbolic manifolds with minimal surface boundary [9]. So we
obtain the following positive mass theorem for asymptotically hyperbolic
manifolds with boundary.
###### Proof of Theorem 1.2.
Let $(M,\gamma)$ satisfy the hypotheses of Theorem 1.2 with boundary Bartnik
data $(\Sigma,g,H)$, and let $\gamma_{\Omega}$ be the fill-in metric on
$\Omega=\Sigma\times[t_{o},0]$ constructed for the proof of Theorem 1.1,
filling in this Bartnik data. We can attach $(\Omega,\gamma_{\Omega})$ to
$(M,\gamma)$ along their matching Bartnik data to form a manifold with corner
à la Miao [22] and then double it along the minimal surface to form a complete
asymptotically hyperbolic manifold with corners and two ends. The conclusion
follows now from the positive mass theorem with corners, which has been
established in the asymptotically hyperbolic case by Bonini and Qing [2]. ∎
###### Remark 5.1.
If the conjectured Riemannian Penrose inequality for asymptotically hyperbolic
manifolds were established, then one could use a suitable version of it for
manifolds with corners to obtain an improved lower bound on the mass of an
asymptotically hyperbolic manifold with boundary. In particular, we have the
following.
Let $(M,\gamma)$ be an asymptotically hyperbolic $n$-manifold with scalar
curvature bounded below by $C=-\epsilon n(n-1)$ and interior boundary
$\Sigma$, and define the quantity
(5.2)
$\chi=\frac{1}{n-1}\max\left\\{\max_{\Sigma}\frac{(n-2)H^{2}}{R_{g}-2H\Delta
H^{-1}},\max_{\Sigma}\frac{H^{2}r_{o}^{2}}{(n-1)}\right\\}.$
If $\chi<1+\epsilon r_{o}^{2}$, where $r_{o}$ is the area radius of $\Sigma$,
then assuming the asymptotically hyperbolic Penrose inequality holds (on a
manifold with corners), we would conclude that the total mass of $(M,\gamma)$
is bounded below by $\frac{1}{2}r_{o}^{n-2}\left(1+\epsilon
r_{o}^{2}-\chi\right)$.
We state this as a remark and omit a formal proof of this statement, as we
require a precise statement of the appropriate Riemannian Penrose inequality,
which remains an open problem. However, a sketch is provided, as we will refer
back to it at the end of Section 5.3.
###### Sketch of proof.
Choose $m=\frac{r_{o}^{n-2}}{2}\left(1+\epsilon r_{o}^{2}-\chi\right)>0$ and
obtain a fill-in with minimal surface boundary as in Section 4.1. One can
quickly check from the form of the metric (3.6) that the area radius $r_{H}$
of this minimal surface satisfies
$1+\epsilon r_{H}^{2}-\frac{2m}{r_{H}^{n-2}}=0.$
From our choice of $m$, we therefore obtain the relationship
(5.3) $\frac{1}{2}r_{H}^{n-2}\left(1+\epsilon
r_{H}^{2}\right)=\frac{1}{2}r_{o}^{n-2}\left(1+\epsilon
r_{o}^{2}-\chi\right),$
where the left-hand side of the equation is exactly the lower bound on the
total mass of an asymptotically hyperbolic manifold conjectured by the
Riemannian Penrose inequality. This fill-in can then be glued to $(M,\gamma)$
and the conclusion would follow from the Riemannian Penrose inequality. ∎
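To make the sketch concrete, the relation (5.3) can be verified numerically. The following Python snippet is a minimal sanity check using hypothetical values of $n$, $\epsilon$, $r_{o}$ and $\chi$ (chosen for illustration only, not taken from any example in this paper): it picks $m$ as above, solves the horizon equation $1+\epsilon r^{2}-2m/r^{n-2}=0$ for $r_{H}$, and confirms that the two sides of (5.3) agree.

```python
from scipy.optimize import brentq

# Hypothetical illustration values (not from the paper).
n, eps, r_o, chi = 4, 1.0, 1.0, 0.5

# Mass parameter chosen as in the sketch of proof.
m = 0.5 * r_o**(n - 2) * (1 + eps * r_o**2 - chi)

# Horizon equation for the profile metric: v' = 0 at the minimal surface.
f = lambda r: 1 + eps * r**2 - 2 * m / r**(n - 2)
r_H = brentq(f, 1e-8, r_o)  # the minimal-surface radius lies below r_o

lhs = 0.5 * r_H**(n - 2) * (1 + eps * r_H**2)        # left side of (5.3)
rhs = 0.5 * r_o**(n - 2) * (1 + eps * r_o**2 - chi)  # right side of (5.3)
assert abs(lhs - rhs) < 1e-9
print(r_H, lhs, rhs)
```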
###### Remark 5.2.
There are two different conjectured versions of the asymptotically hyperbolic Penrose inequality, corresponding roughly to whether the asymptotically hyperbolic manifold is being viewed as time-symmetric initial data for a spacetime with negative cosmological constant, or as an asymptotically hyperbolic slice of an asymptotically flat spacetime. The former considers the horizon to be a minimal surface, whereas the latter considers a surface of constant mean curvature equal to $n-1$. Here we consider the former version, and while many seem to suspect it to hold, there is some reason to believe that perhaps only the latter is true in general. We do not wish to speculate on the conjecture here; however, it is worth noting that the construction given above could in principle be used to construct a counterexample if one exists. That is, if one can find an asymptotically hyperbolic manifold (perhaps only defined near infinity) containing a surface on which the right-hand side of (5.3) is larger than the total mass, then after gluing, a counterexample to the asymptotically hyperbolic Penrose inequality would be obtained. However, the author has attempted this with no success.
### 5.2. A Penrose-like inequality with electric charge
In the framework of general relativity, it is common to consider the Einstein equations coupled to other equations governing the matter content of the universe being modelled. Considering $(M,\gamma)$ as initial data from the
perspective of general relativity, we can add an electric field $E$ – a vector
field on $M$ – to describe initial data for the coupled Einstein–Maxwell
system, gravity coupled to the electric field. In this section we will only
consider the case of vanishing cosmological constant, which in the preceding
sections equated to manifolds with non-negative scalar curvature. This
restriction to only considering vanishing cosmological constant is not
required to construct the fill-ins, however the estimates become much messier
and these “charged” fill-ins are not of particular interest independent of a
Penrose-like inequality. However, such an inequality cannot hold in the
positive cosmological constant case and remains out of reach for the negative
cosmological constant case, for the reasons discussed above. On the other
hand, the scalar curvature lower bound considered here is not zero and in fact
depends on the electric field $E$. Specifically, we will require
(5.4) $R_{g}\geq(n-1)(n-2)|E|_{g}^{2}.$
The divergence of the electric field corresponds to the charge of any matter
source terms, and in order to later apply a charged Riemannian Penrose
inequality [14, 16, 17, 20] we will ask that this vanishes. It is not strictly required that $\nabla\cdot E=0$ for the charged Riemannian Penrose inequality to hold (see [20]); however, it will nevertheless be fruitful to
impose this for the fill-ins constructed. In the preceding sections, a fill-in
is taken as a manifold with boundary metric and mean curvature prescribed
since this is the appropriate boundary condition to glue the fill-in to an
exterior manifold while preserving the scalar curvature condition. However,
when an electric field is present we would also like to preserve the sign of
$\text{div}(E)$ in a distributional sense when performing the gluing, which
amounts to matching $\phi=E\cdot n$, the normal component of the electric field,
on $\Sigma$. Therefore the appropriate Bartnik data for including electric
charge is the triple $(\Sigma,g,H,\phi)$ (see, for example, Section 3 of [8]).
We again consider a metric of the form (3.6), except now we use a Reissner–Nordström manifold as our model, with profile curve
(5.5) $\gamma=A(x)^{2}dt^{2}+\frac{v_{m,Q}(t)^{2}}{r_{o}^{2}}g,$
where $v_{m,Q}$ satisfies $v_{m,Q}(0)=r_{o}$ and
$v^{\prime}_{m,Q}(t)=\sqrt{1+\frac{Q^{2}}{v(t)^{2(n-2)}}-\frac{2m}{v(t)^{n-2}}}.$
In what follows we will use the shorthand $v=v_{m,Q}$ for the sake of
presentation. We will also assume $H$ is constant here, as it simplifies the
computations considerably and does not change the qualitative properties of
the estimate we obtain. We again choose $A(x)$ according to (3.4) and compute
the scalar curvature of $\gamma$ similarly to (3.8) to obtain
(5.6)
$R_{\gamma}=\frac{r_{o}^{2}}{v^{2}}\left(R_{g}-\frac{n-2}{n-1}H^{2}\left(1+\frac{Q^{2}}{r_{o}^{2(n-2)}}-\frac{2m}{r_{o}^{n-2}}\right)^{-1}\left(1-\frac{Q^{2}}{v^{2(n-2)}}\right)\right).$
We then set the electric field as
$E=\frac{r_{o}^{n-1}\phi}{v^{n-1}A}\partial_{t},$
which is easily checked to be divergence-free. Then we see that the
appropriate energy condition, $R_{g}\geq(n-1)(n-2)|E|_{g}^{2}$, is equivalent
to
$R_{g}-\frac{n-2}{n-1}H^{2}\left(1+\frac{Q^{2}}{r_{o}^{2(n-2)}}-\frac{2m}{r_{o}^{n-2}}\right)^{-1}\left(1-\frac{Q^{2}}{v^{2(n-2)}}\right)-\frac{(n-1)(n-2)r_{o}^{2(n-2)}\phi^{2}}{v^{2(n-2)}}\geq
0.$
Assuming $H>(n-1)\phi>0$, we may choose $Q$ in such a way as to cancel the $v^{2(n-2)}$ terms. Namely, we set
$\displaystyle Q^{2}$
$\displaystyle=\frac{(n-1)^{2}r_{o}^{2(n-2)}\hat{\phi}^{2}}{H^{2}}\left(1+\frac{Q^{2}}{r_{o}^{2(n-2)}}-\frac{2m}{r_{o}^{n-2}}\right)$
$\displaystyle Q^{2}$
$\displaystyle=(n-1)^{2}r_{o}^{2(n-2)}\hat{\phi}^{2}\left(1-\frac{2m}{r_{o}^{n-2}}\right)\left(H^{2}-(n-1)^{2}\hat{\phi}^{2}\right)^{-1},$
where we write $\hat{\phi}=\max_{\Sigma}(\phi)$ for the sake of presentation.
This leaves $m<\frac{r_{o}^{n-2}}{2}$ as a free parameter, which we will
choose to ensure
$R_{g}-\frac{n-2}{n-1}H^{2}(1+\frac{Q^{2}}{r_{o}^{2(n-2)}}-\frac{2m}{r_{o}^{n-2}})^{-1}\geq
0.$
With our choice of $Q$, this is equivalent to
$R_{g}-\frac{n-2}{n-1}\,\frac{\left(H^{2}-(n-1)^{2}\hat{\phi}^{2}\right)}{\left(1-\frac{2m}{r_{o}^{n-2}}\right)}\geq
0,$
so we choose
$m=\frac{r_{o}^{n-2}}{2}\left(1-\frac{n-2}{n-1}\,\frac{H^{2}-(n-1)^{2}\hat{\phi}^{2}}{\min_{\Sigma}(R_{g})}\right).$
Our fill-in now satisfies the appropriate energy condition (5.4) for the charged Riemannian Penrose inequality; however, we have yet to check that the metric is non-singular. In fact, we require that $m>Q$ to ensure that $v^{\prime}$ vanishes somewhere, thereby providing us with a minimal surface boundary. Note that with our choice of $m$, we have
$\displaystyle Q^{2}$
$\displaystyle=r_{o}^{2(n-2)}\left(\frac{n-2}{n-1}\,\frac{H^{2}-(n-1)^{2}\hat{\phi}^{2}}{\min_{\Sigma}(R_{g})}\right)\left(\frac{H^{2}}{(n-1)^{2}\hat{\phi}^{2}}-1\right)^{-1}$
$\displaystyle Q^{2}$
$\displaystyle=\frac{(n-1)(n-2)\hat{\phi}^{2}r_{o}^{2(n-2)}}{\min_{\Sigma}(R_{g})}.$
In order to ensure $m>Q$, we calculate the difference $m-Q$ from the above
expressions to obtain
(5.7)
$\displaystyle\begin{split}\frac{2\min_{\Sigma}(R_{g})}{r_{o}^{n-2}}(m-Q)=&\,\min_{\Sigma}(R_{g})-\frac{n-2}{n-1}H^{2}+(n-2)(n-1)\hat{\phi}^{2}\\\
&-2\hat{\phi}\sqrt{(n-1)(n-2)\min_{\Sigma}(R_{g})}.\end{split}$
The right-hand side of (5.7) is a quadratic expression in
$\sqrt{\min_{\Sigma}(R_{g})}$, so we can directly check that this is positive
when
$\min_{\Sigma}(R_{g})>\frac{n-2}{n-1}\left((n-1)\hat{\phi}+H\right)^{2}.$
Note that when $\phi\equiv 0$ this reduces to the condition for the existence
of a fill-in in the uncharged case [21]. By construction, we have proven the
following.
###### Theorem 5.1.
Let $(\Sigma,g,H,\phi)$ be charged Bartnik data with constant $H$ satisfying $H>(n-1)\phi>0$ and
$\min_{\Sigma}(R_{g})>\frac{n-2}{n-1}\left((n-1)\max_{\Sigma}(\phi)+H\right)^{2}.$
Then there exists a metric $\gamma$ and a divergence-free vector field $E$ on $M=\Sigma\times[0,1]$ satisfying $R_{\gamma}\geq(n-1)(n-2)|E|_{\gamma}^{2}$, such that one boundary component is a minimal surface and, on the other, the induced metric is $g$, the outward-pointing mean curvature is $H$, and the outward normal component of $E$ is $\phi$.
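As a quick plausibility check of the parameter choices above, the following Python sketch computes $m$ and $Q$ as in the construction and verifies both $m>Q$ and the identity (5.7). The Bartnik data are hypothetical, chosen only so that the hypotheses of Theorem 5.1 hold.

```python
import numpy as np

# Hypothetical charged Bartnik data (illustration only).
n, r_o = 4, 1.0
H, phi_hat, Rg_min = 3.0, 0.5, 14.0   # H > (n-1)*phi_hat > 0

# Hypothesis of Theorem 5.1: min R_g > (n-2)/(n-1)*((n-1)*phi_hat + H)^2.
threshold = (n - 2) / (n - 1) * ((n - 1) * phi_hat + H) ** 2
assert Rg_min > threshold   # here 14 > 13.5

m = 0.5 * r_o**(n - 2) * (
    1 - (n - 2) / (n - 1) * (H**2 - (n - 1) ** 2 * phi_hat**2) / Rg_min)
Q = np.sqrt((n - 1) * (n - 2) * phi_hat**2 * r_o**(2 * (n - 2)) / Rg_min)
assert m > Q   # v' vanishes, so the fill-in has a minimal surface boundary

# Identity (5.7), from which the condition on min R_g is derived.
lhs = 2 * Rg_min / r_o**(n - 2) * (m - Q)
rhs = (Rg_min - (n - 2) / (n - 1) * H**2 + (n - 2) * (n - 1) * phi_hat**2
       - 2 * phi_hat * np.sqrt((n - 1) * (n - 2) * Rg_min))
assert abs(lhs - rhs) < 1e-12
print(m, Q)
```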
From this fill-in, we are able to prove an electrically charged Penrose-like
inequality (and a charged positive mass theorem for manifolds with boundary).
###### Theorem 5.2.
Let $(M,\gamma,E)$ be an asymptotically flat manifold with charge of dimension
$3\leq n\leq 7$, and boundary $\Sigma$ with charged Bartnik data
$(\Sigma,g,H,\phi)$, satisfying $\nabla\cdot E\geq 0$ and
$\min\limits_{\Sigma}R_{g}>\frac{n-2}{n-1}\left((n-1)\max\limits_{\Sigma}\phi+\max\limits_{\Sigma}H\right)^{2}.$
Then
$\mathfrak{m}_{ADM}\geq m+\frac{Q_{\Sigma}^{2}-Q^{2}}{m+\sqrt{m^{2}-Q^{2}}},$
where $Q_{\Sigma}=\frac{1}{\omega_{n-1}}\int_{S}\phi\,d\mu_{g}$ is the
electric charge on $\Sigma$ in $M$, and the parameters $m$ and $Q$ are given
by
(5.8)
$\displaystyle\begin{split}m&=\frac{r_{o}^{n-2}}{2}\left(1-\frac{n-2}{n-1}\,\frac{H^{2}-(n-1)^{2}\max_{\Sigma}\phi^{2}}{\min_{\Sigma}(R_{g})}\right)\\\
Q^{2}&=\frac{(n-1)(n-2)\max_{\Sigma}\phi^{2}r_{o}^{2(n-2)}}{\min_{\Sigma}(R_{g})}.\end{split}$
Furthermore, if $\nabla\cdot E\equiv 0$ then we have
(5.9) $\mathfrak{m}_{ADM}\geq|Q_{\Sigma}|.$
###### Remark 5.3.
The expression for $m$ given by (5.8) can be compared to the charged Hawking
mass, while the expression $Q_{\Sigma}^{2}-Q^{2}$ can be seen to vanish when
$(S,g)$ is a round sphere and $\phi$ is constant.
###### Proof.
We apply Theorem 5.1 to construct a fill-in of the Bartnik data
$(\Sigma,g,H_{o},\phi)$, where $H_{o}=\max_{\Sigma}(H)$. We then obtain a
(charged) manifold with corner by attaching the fill-in to $M$ and can apply
the charged Riemannian Penrose inequality for manifolds with corners
established in [8]. Note that the results in [8] are stated in dimension $3$ only; however, it is clear from the proof that the charged Riemannian Penrose inequality with corners holds in dimensions up to $7$ (see Remark 5.4 below). Specifically, from Theorem 1.3 of [8] we have
(5.10)
$\mathfrak{m}_{ADM}(M,\gamma)\geq\frac{r_{H}^{n-2}}{2}\left(1+\frac{Q_{\Sigma}^{2}}{r_{H}^{2(n-2)}}\right),$
where $r_{H}$ is the area radius of the minimal surface boundary.
From the definition of the profile function $v$ used in the fill-in, we have
that the minimal surface occurs when $v^{\prime}=0$, which implies
$1+\frac{Q^{2}}{r_{H}^{2(n-2)}}-\frac{2m}{r_{H}^{n-2}}=0,$
and then from (5.10), we have
(5.11) $\mathfrak{m}_{ADM}(M,\gamma)\geq
m+\frac{Q_{\Sigma}^{2}-Q^{2}}{r_{H}^{n-2}},$
with
$r_{H}^{n-2}=m+\sqrt{m^{2}-Q^{2}}.$
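For clarity, this expression for $r_{H}$ follows by viewing the horizon condition as a quadratic in $x=r_{H}^{n-2}$:
$1+\frac{Q^{2}}{x^{2}}-\frac{2m}{x}=0\iff x^{2}-2mx+Q^{2}=0\iff x=m\pm\sqrt{m^{2}-Q^{2}},$
where we take the larger root, since this is the first zero of $v^{\prime}$ reached as $v$ decreases from $r_{o}$.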
Finally note that (5.9) follows from applying the charged positive mass
theorem [12, 10] instead of the charged Riemannian Penrose inequality. ∎
###### Remark 5.4.
The charged Riemannian Penrose inequality with corners established in [8] is
presented in dimension 3, following [22]. However, as illustrated in the
appendix of [21], the argument holds up to dimension $7$ (where the standard problem concerning the regularity of minimal surfaces prevents it from being immediately generalised to higher dimensions). Naturally, there are dimensional constants in the inequality used, which we are careful to include correctly here.
###### Remark 5.5.
As with the other inequalities established by this method, it is expected that there is room to slightly improve the inequality; however, comparing it to the lower bound on the ADM mass in terms of the charged Hawking mass in dimension 3, one sees that the method is unlikely to achieve an optimal inequality.
### 5.3. Engelhardt–Wall Outer Entropy and Bray’s Inner Bartnik Mass
Recently, Wang [31] noted that the concept of outer entropy due to Engelhardt
and Wall [11] in the context of the AdS/CFT correspondence is essentially the
same concept as Bray’s inner Bartnik mass [3]. The former was formulated from
the perspective of the AdS/CFT correspondence for asymptotically hyperbolic
manifolds while the latter was formulated from a purely geometric perspective
for asymptotically flat manifolds. In particular, the outer entropy is
equivalent to an asymptotically hyperbolic analogue of the inner mass, rather
than the standard one. Nevertheless, at the heart of both is the problem of
constructing fill-ins of Bartnik data with a minimal surface boundary, and
taking the supremum of the minimal surface area over an appropriate class of
fill-ins. (Technically, the inner mass is defined using fill-ins that extend out to another asymptotic end, but this distinction is minor.)
The Penrose-like inequality obtained in [21] in fact was first motivated by
considerations of the Bartnik–Bray inner mass. Since this mass is taken as a
supremum, we immediately conclude that the inner Bartnik mass of Bartnik data
$(\Sigma,g,H)$ is bounded below by the right-hand side of (5.1).
While an asymptotically hyperbolic analogue of the standard Bartnik mass has
been recently investigated [5], to the best of the author’s knowledge an
asymptotically hyperbolic inner Bartnik mass has not been considered in the
literature. Nevertheless, there is an obvious analogue one could consider,
which is the one equivalent to the outer entropy. Namely, we define the
asymptotically hyperbolic inner Bartnik mass of given Bartnik data
$(\Sigma,g,H)$ as the supremum, taken over the set of all fill-ins with scalar
curvature bounded below by $-\epsilon n(n-1)$ with no closed minimal surfaces
except for a minimal surface boundary, of the quantity
$\frac{1}{2}r^{n-2}\left(1+\epsilon r^{2}\right)$
where $r$ is the area radius of the minimal surface boundary. It follows
from (5.3) that this asymptotically hyperbolic inner Bartnik mass of some data
$(\Sigma,g,H)$ is bounded below by
(5.12) $\frac{1}{2}r_{o}^{n-2}\left(1+\epsilon r_{o}^{2}-\chi\right)$
where $\chi$ is given by (5.2) and $r_{o}$ is the area radius of $g$. The observation of Wang then connects this to the Engelhardt–Wall outer entropy, which is simply the supremum of the area of the minimal surface over the same set of fill-ins. That is, the lower bound for the outer entropy is $\omega_{n-1}r_{H}^{n-1}$, where $r_{H}$ satisfies (5.3).
## References
* [2] Bonini, V., and Qing, J., A Positive Mass Theorem on Asymptotically Hyperbolic Manifolds with Corners along a Hypersurface, Ann. Henri Poincaré, 9, 347–371, 2008.
* [3] Bray H. L., and Chruściel P. T., The Penrose inequality, The Einstein Equations and the Large Scale Behavior of Gravitational Fields, 50 years of the Cauchy problem in General Relativity ed. H. Friedrich and P. T. Chruściel (Basel: Birkhäuser), 39–70, 2004.
* [4] Brendle, S.; Marques, F. and Neves, A., Deformations of the hemisphere that increase scalar curvature, Inventiones Mathematicae, 185(1), 2011.
* [5] Cabrera Pacheco, A. J., Cederbaum, C. and McCormick, S., Asymptotically hyperbolic extensions and an analogue of the Bartnik mass, J. Geom. Phys., 132, 2018.
* [6] Cabrera Pacheco, A. J., and Cederbaum, C., A survey on extensions of Riemannian manifolds and Bartnik mass estimates, Memorias de la reunión de Matemáticos Mexicanos en el Mundo 2018, Contemporary Mathematics series, AMS, 2019.
* [7] do Carmo, M. P. and Warner, F. W., Rigidity and convexity of hypersurfaces in spheres, J. Differ. Geom. 4, 133–144, 1970.
* [8] Chen, P.-N. and McCormick, S., Quasi-local Penrose inequalities with electric charge, Int. Math. Res. Not., rnab215, 2021.
* [9] Chruściel, P. and Herzlich, M., The mass of asymptotically hyperbolic Riemannian manifolds, Pacific J. Math. 212 (2003), no.2, 231–264.
* [10] de Lima, L. L., Girão, F., Lozório, W., and Silva, J., Penrose inequalities and a positive mass theorem for charged black holes in higher dimensions, Class. Quantum Grav., 33, Number 3, 2016.
* [11] Engelhardt, N., and Wall, A. C., Decoding the Apparent Horizon: Coarse-Grained Holographic Entropy, Phys. Rev. Lett. 121, 211301, 2018.
* [12] Gibbons, G. W., Hawking, S. W., Horowitz, G. T., and Perry, M. J., Positive mass theorems for black holes, Commun. Math. Phys., 88, 295–308, 1983.
* [13] Gromov, M., Scalar Curvature of Manifolds with Boundaries: Natural Questions and Artificial Constructions, preprint arXiv:1811.04311, 2018.
* [14] Jang, P. S., Note on cosmic censorship, Phys. Rev. D, 20(4), 1979.
* [15] Jauregui, J., Fill-ins of nonnegative scalar curvature, static metrics, and quasi-local mass, Pacific J. Math., 261(2), 2011.
* [16] Khuri, M., Weinstein, G. and Yamada, S., Extensions of the charged Riemannian Penrose inequality, Class. Quantum Grav., 32, 2015.
* [17] Khuri, M., Weinstein, G. and Yamada, S., Proof of the Riemannian Penrose inequality with charge for multiple black holes, J. Differential Geom., 106(3), 2017.
* [18] Mantoulidis, C. and Schoen, R., On the Bartnik mass of apparent horizons, Class. Quantum Grav., 32 (2015), no. 20, 205002.
* [19] McCormick, S., An Overview of Bartnik’s Quasi-Local Mass, 2024 (to appear).
* [20] McCormick, S., On the charged Riemannian Penrose inequality with charged matter, Class. Quantum Grav. 37(1), 2020.
* [21] McCormick, S. and Miao, P., On a Penrose-like inequality in dimensions less than eight, Int. Math. Res. Not., 2019(7), 2019.
* [22] Miao, P., Positive mass theorem on manifolds admitting corners along a hypersurface, Adv. Theor. Math. Phys., 6 (2002), no. 6, 1163–1182.
* [23] Nirenberg, L. The Weyl and Minkowski problems in differential geometry in the large, Comm. Pure Appl. Math. 6, 337–394, 1953.
* [24] Pogorelov, A. V., Some results on surface theory in the large, Adv. Math. 1 191–264, 1964.
* [25] Schoen, R. and Yau, S.-T., On the proof of the positive mass conjecture in general relativity, Commun. Math. Phys. 65(1), 45–76, 1979.
* [26] Shi, Y. and Tam, L.-F., Positive mass theorem and the boundary behaviors of compact manifolds with nonnegative scalar curvature, J. Differential. Geom. 62 (2002), no. 1, 79–125.
* [27] Shi, Y. and Tam, L.-F., Rigidity of compact manifolds and positivity of quasi-local mass, Classical and Quantum Gravity, 24 (2007), no. 9.
* [28] Shi, Y., Wang. W., Wei, G., and Zhu, J., On the fill-in of nonnegative scalar curvature metrics, Math. Ann. 379, 235–270, 2021.
* [29] Shi, Y., Wang. W., and Wei, G., Total mean curvature of the boundary and nonnegative scalar curvature fill-ins, J. Reine Angew. Math. 2022(784) (2022), pp. 215–250.
* [30] Wang, M.-T. and Yau, S.-T., A generalization of Liu-Yau’s quasi-local mass, Commun. Anal. Geom. 15(2), 249–282, 2007.
* [31] Wang, J., Outer entropy equals Bartnik-Bray inner mass and the gravitational ant conjecture, Phys. Rev. D 102, 066009, 2020.
* [32] Wang, X., The Mass of Asymptotically Hyperbolic Manifolds, J. Diff. Geom. 57(2), 273–299, 2001.
* [33] Witten, E., A new proof of the positive energy theorem, Commun. Math. Phys. 80(3), 381–402, 1981.
|
Optimizing Packet Reception Rates for Low Duty-Cycle BLE Relay Nodes
This work is financed by the ERDF – European Regional Development Fund through the Operational Programme for Competitiveness and Internationalisation - COMPETE 2020 Programme within project WHERE.IS (POCI-01-0247-FEDER-024191).
Nuno Paulino, Luís M. Pessoa
INESC TEC and Faculty of Engineering,
University of Porto, Porto, Portugal
{nuno.m.paulino<EMAIL_ADDRESS>André Branquinho, Rafael Tavares, Igor Ferreira
Wavecom, Aveiro, Portugal
{abranquinho, rtavares<EMAIL_ADDRESS>
In order to achieve the full potential of the Internet-of-Things, connectivity between devices should be ubiquitous and efficient. Wireless mesh networks are a critical component to achieve this ubiquitous connectivity for a wide range of services, and are composed of terminal devices (i.e., nodes), such as sensors of various types, and wall-powered gateway devices, which provide further internet connectivity (e.g., via WiFi).
When considering large indoor areas, such as hospitals or industrial scenarios, the mesh must cover a large area, which introduces concerns regarding range and the number of gateways needed and respective wall cabling infrastructure. Solutions for mesh networks implemented over different wireless protocols exist, like the recent BLE 5.1. Besides range concerns, choosing which nodes forward data through the mesh has a large impact on performance and power consumption.
We address the area coverage issue via a battery powered BLE relay device of our own design, which acts as a range extender by forwarding packets from end nodes to gateways. We present the relay's design and experimentally determine the packet forwarding efficiency for several scenarios and configurations. In the best case, up to 35% of the packets transmitted by 11 nodes can be forwarded to a gateway by a single relay under continuous operation. A battery lifetime of 1 year can be achieved with a relay duty cycle of 20%.
BLE, Bluetooth, low-energy, wireless sensor networks, mesh networks
§ INTRODUCTION
Wireless mesh networks can be the platform for many applications. A common use case is sensor networks <cit.>, but others include domotics <cit.>, automated inventory tracking or localization <cit.>. Specific scenarios include healthcare <cit.>, security <cit.>, and warehouses and industrial facilities <cit.>.
Depending on the application, mesh networks can be built with WiFi devices, for example, but WiFi end-points or routers typically require wall power. On the other hand, BLE devices benefit from a comparatively lower cost, power efficiency, and smaller device sizes. The lower power consumption alone negates the need for installation of a wired infrastructure in favor of battery power, reducing costs further. A BLE mesh network with battery powered end and relay nodes can be deployed in legacy locations without such an infrastructure. However, the range and data-rate of BLE devices are lower than those of protocols such as WiFi <cit.>, meaning that denser networks may be required, which introduces the need for efficient area coverage and packet relaying.
We address the specific case where end nodes periodically transmit data on advertising channels only. The range of the network is restricted by the range of the edge nodes to the gateways, and by the available wall power for the gateways.
To address this, intermediate nodes acting as relays are designed to extend the advertising range of the nodes. These nodes should also be battery powered, since otherwise they could be easily replaced with gateways. However, continuous operation by the relays to listen for sporadic transmissions from the nodes would result in an unsuitably short battery life.
By configuring their listening period with a low duty-cycle (i.e., configuring the network nodes to a low-power operating mode for a given listening period), the battery life can be considerably extended. Consequently, since BLE packet transmission is sporadic, especially as the network nodes operate asynchronously, idling some (or all) of the relay nodes inevitably leads to packet losses. However, some applications may not consider all data to be high priority, and some degree of data loss and/or end-to-end delay may be acceptable.
To characterize the efficiency of a system reliant on battery-powered relay nodes, we present an in-house design for such a BLE relay, and characterize the system's packet loss in different conditions. Specifically, we vary the number of client nodes, the listening time spent on each BLE channel, and apply two different forwarding policies, one of which has additional configuration parameters. Additionally, we subject the system to noise from other Bluetooth devices external to the network.
We validate the operation of our BLE relay design through manufacture and assembly, employing some of the units as beacons (so we may configure transmission periods), while another unit performs the relay function under several software configurations which implement our operating policies.
This paper is organized as follows: <Ref> reviews related work, <Ref> describes the network topology we addressed, <Ref> presents the design characteristics of the BLE relay node, and the configurable operating parameters, like the duty cycle and forwarding policy. <Ref> presents experimental evaluation of packet reception rates for different scenarios. <Ref> concludes the paper.
§ RELATED WORK
A comprehensive survey on the research efforts in BLE mesh topologies is presented by Darroudi et al. <cit.>. The survey categorizes and compares nearly 30 approaches to BLE network designs, including standardization solutions proposed by the Bluetooth SIG and the IETF, academic solutions, and proprietary solutions.
A major distinction between mesh approaches is whether data is transmitted by flooding (e.g., using the BLE advertising channels), or through end-to-end connections through specific nodes. A comparison is presented in <cit.>, where the authors compare the Trickle flooding protocol <cit.> with the FruityMesh connection based protocol <cit.>. Both are evaluated regarding their multi-hop efficiency, for a network of nine intermediate nodes placed between two source and sink nodes. The packet delivery ratio and the end-to-end delay are measured. Both approaches are comparable in this scenario, with a packet delivery rate of close to 40% when 10 packets are generated per second by the source node. FruityMesh suffers an end-to-end delay which is approximately 9x higher compared to Trickle, but in turn requires 3x less power.
Kim et al. <cit.> present BLEMesh, a packet forwarding protocol proposed to transmit batches of packets. Fewer transmissions are required in total to transport data end-to-end, through intermediate nodes, relative to naive flooding or routing based approaches. The packets include priority tables used by intermediate nodes to determine if a received packet should be re-transmitted, based on whether or not that packet was already forwarded by a node of higher priority. A downside is that the payload capability of the BLE packet diminishes as the number of nodes and batch size increases. A simulated evaluation for a mesh with 5 nodes, and assuming only one advertising channel, achieves a reduction of 54.5% in the required number of transmissions, relative to flood routing.
Brandão et al. <cit.> propose the Drypp protocol, based on the Trickle flooding protocol <cit.>. Trickle is a mesh network protocol for BLE where each node captures and attempts to re-transmit data at a later time, unless it meanwhile listens to redundant transmissions sent by other nodes. Drypp introduces a load balancing method which relies on dynamic adaptation of the protocol parameters based on each node's battery level. For three test nodes implementing the Drypp protocol, an 11% increase in network lifetime was achieved relative to Trickle, in exchange for a 7.5% decrease in throughput.
A BLE mesh network relying on a routing protocol is evaluated in <cit.>. The proposed mesh network is designed for environmental monitoring and disaster scenarios, and both the edge (sensor) nodes and the Wi-Fi capable gateway nodes are battery powered. Information is propagated based on Trickle routing <cit.>. To extend battery life, the sensor nodes are periodically shut off, and modifications to the Trickle algorithm are introduced to prevent packet loss due to these power-down periods. Given the periods for listening and transmission time, the authors estimated a lifetime of 589 days for a sensor node, and 511 days for a gateway, when equipped with 6000 mAh and 8000 mAh lithium polymer batteries, respectively.
The work in <cit.> specifically addresses the optimization of the use of Bluetooth relays in mesh networks. Connection-less mesh networks propagate data by controlled flooding between nodes, until the destination node of a particular data packet is reached. However, this leaves the network vulnerable to excessive flooding as a function of the number of nodes used as relays and/or selected to be relays. The authors apply state-of-the-art relay selection algorithms to a BLE mesh network, and evaluate the effect of six different relay selection algorithms on a Connected Dominating Set (CDS) representation of the mesh. Using an in-house simulator, different relay densities were tested with two end nodes exchanging 1000 messages one-way. The lowest packet loss can be achieved by computing the routing with the fewest hops, but the lowest power consumption is achieved by a genetic algorithm which finds the minimum CDS of the network, at the cost of suffering the highest packet loss (as high as 80%).
In <cit.>, a method for relay node management is proposed based on a tree representation of the mesh network, together with an integer linear programming formulation which minimizes the number of relay nodes required to ensure connectivity between all nodes. The algorithm requires that the number of nodes and the network topology be known in order to determine the relay routing. Using an in-house simulator, the authors evaluate the routing efficiency and energy consumption of a system composed of up to 100 nodes in an indoor configuration where line-of-sight is not possible for all pairs of nodes. A power consumption reduction of up to 12x is claimed over the conventional case where any node can be used as a relay during forwarding (i.e., flooding).
In general, the choice of protocol and network topology is application dependent. <Ref> summarizes the results (or a subset of results) from the experimental evaluations shown in this section. The values reported are our best effort at a comparison of the presented approaches, as well as our own. Depending on the respective experiments, some columns show either scalars, ranges of values, or lists (correspondence between list values is kept column to column). Node power reports the power consumption of each node of the tested mesh, taking into account the entire operating time, including any sleep periods of the nodes (i.e., the average power consumption throughout the experiment lifetime).
The experiments we conducted can be categorized as a controlled flooding mechanism, but one where we rely on details specific to a class of applications to determine forwarding behaviour. We consider end nodes with a constant packet rate, and envision a tree topology for the network where a relay is responsible for the end nodes within its range, and where relays are out of range amongst themselves. Additionally, we are not concerned with end-to-end delay, as data is non-critical and given equal importance. We also conduct experiments while introducing real-world noise due to other wireless devices external to the network, which we have not observed in the other works we have identified.
§ NETWORK TOPOLOGY
The use-case network topology for the evaluation of our relay, and respective forwarding policies, is shown in <Ref>. We target use cases where the end nodes are battery powered, and periodically transmit information about the environment (e.g., sensor data). The gateways are BLE/Wi-Fi devices which synchronize the status of the network with the centralized system. The network was designed and tested according to the features/constraints of the Bluetooth 4.1 specification <cit.>.
BLE mesh topology, with battery-powered end nodes and intermediate relay nodes, and wall-powered BLE/Wi-Fi gateways interfacing with an upstream server system
One of the characteristics of BLE is the transmission range (approximately 20 m). This means that either all nodes placed throughout the site have to be within this range of a wall-powered gateway in order for data to be retrieved from those nodes, or that data is forwarded through nodes. The former is a potentially expensive solution, and the latter is the object of study of mesh network routing protocols.
However, if the end-nodes are simple sensors and cannot move data to and from each other (or if they are physically placed in such a way that a sequence of hops from end node to gateway cannot be established), more sophisticated battery-powered intermediate nodes are required which do not gather data themselves, but serve as range extenders to the gateways.
This paper presents a design of the relay node, which functions as a packet receiver, gatherer, and re-transmitter. This makes it possible to extend the network range in situations where the indoor configuration or cost do not allow for a more ubiquitous distribution of wall-powered gateways. It also provides a cheaper solution relative to fully-fledged gateways, since it may replace them where Wi-Fi capabilities are not needed. Additionally, since the relays are battery powered, they are easy to relocate according to changes in the application requirements, or simply to tune the quality of the sensed data. The relays are compatible with any off-the-shelf end node which is BLE/Bluetooth 4.1 compliant.
§ BLE RELAY NODE
The purpose of the BLE relay device is to serve as a packet forwarder. It discards (i.e., does not forward) packets originating from devices which are not part of its own network. Currently this is done by MAC address filtering. The only payload sent is the identification of each node.
We implemented two relay designs, both based on a single Nordic Semiconductor nRF52832 micro-controller <cit.>, which performs the packet reception and re-transmission, and idles the relay by going into a low-power mode. The configuration parameters listed earlier, such as listening intervals and periodicity, are controlled by the firmware residing on the non-volatile program memory of the nRF52832 chip. All relay implementations are composed of one single-layer, dual-sided, FR-4 PCB with a 1 mm thickness.
BLE relay prototype A, powered by a 3.3 V button cell
BLE relay prototype B, with multiple power sources; the experimental evaluations in this paper consider only this relay variant
Two variants of the relay prototype; functionally identical with different power sources
The first prototype relay is shown in <Ref>. It contains the nRF52832 chip, a J-Link type programming header, and a single 3.3 V CR2032 button cell battery. The relay is considerably small, with a 23x38x10 mm profile. The antenna for reception and transmission of Bluetooth packets is a co-planar IFA, tuned for 2.4 GHz.
A second prototype designed for a longer lifespan is shown in <Ref>. A series of four 3.3 V AA batteries powers the relay when deployed in a location where wall power is unavailable, which is the primary use-case of the device. Alternatively, a mini-USB connector accepts a 5 V input. An LTC4419 chip <cit.> is used as a power selector, which prioritizes the USB power input. A TPS62125 <cit.> regulates the chosen input to 3.3 V for the nRF52832. Finally, the J-Link programming header powers the device in the absence of other power sources. The antenna design is identical to that of prototype A (albeit with a longer trace to the PCB edge, of 2.1), and the device is 74x64x25 mm.
The relay's software accepts a number of configuration parameters which will be the focus of the experimental evaluation. <Ref> shows the cyclical operation mode of the relay during scanning. The relay stays on a given channel during a scan interval, and listens on that channel for the length of the scan window. In our tests we vary the length of the scan interval and set the scan window to an equal value. We evaluate the effects of two forwarding policies and estimate the lifetime of the devices as a function of the sleep time (for the best performing scan interval and policy). Only advertising channels are used, and paired connections are not established, which is typical for one-way sensor meshes.
§ EXPERIMENTAL EVALUATION
We evaluate the relay's performance regarding packet reception and forwarding, for different scan interval lengths, policies, and sleep times. We employed the experimental setup shown in <Ref>. In addition to the elements of the system shown, additional BLE nodes were placed in the environment to act as noise, thus subjecting the system to realistic operating conditions. For all our tests, the scan window occupies the entire duration of the scan interval, in order to evaluate only the effects of the listening time, forwarding policy, and sleep time. Exploring the effects of the length of the sleep time (i.e., device duty cycle), in conjunction with non-equal scan window and interval lengths, on power savings and performance is out of the scope of this paper.
Given this, we evaluated the following characteristics:
* the rate of packets received by the relay while subject to noise, for different scan intervals (i.e., advertising channel switching periods);
* the forwarding efficiency between the relay and a gateway using an immediate forwarding policy, first with two client nodes, and then with 11 client nodes;
* forwarding efficiency for 11 nodes, under a policy which buffers received packets and forwards replicas to the gateway, to reduce the overhead of switching between radio modes;
* power consumption as a function of device duty cycle (i.e., sleep time).
Experimental setup for relay efficiency evaluation
In order to account for all transmitted and received packets, the relay and the terminal gateway communicate every packet received via serial connection. Each packet is annotated with the originating node. Since the transmission period of the nodes is known, we know the total transmitted packets for a given run time. We can then compute the packet losses in different conditions, between the nodes and the relay, and between the relay and the gateway.
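Since the node transmission period is fixed and known, this accounting reduces to simple bookkeeping; a minimal Python sketch is shown below (the function and the example packet count are ours, chosen for illustration only):

```python
def reception_rate(received, period_s, duration_s, num_nodes):
    """Fraction of the packets transmitted by the nodes that were logged
    at a given hop (relay or gateway) over one experimental run."""
    expected = num_nodes * duration_s / period_s
    return received / expected

# Example: 2 nodes at 1 packet/s over a 600 s run, 1056 packets logged
# (a hypothetical count, consistent with the ~88% average reported below):
print(reception_rate(1056, 1.0, 600.0, 2))  # -> 0.88
```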
§.§ Relay Reception Efficiency for 2 Nodes
In this test, the relay's packet reception rate under noise was tested for two client nodes, set to transmit advertising packets with a period of 1 s. The test environment contained another 15 BLE nodes, external to the network, advertising at different intervals and thus acting as noise.
We varied the relay's scan interval between 50 ms and 1150 ms. The scan window occupies the entire period.
What is measured in this case is the packet reception rate under noise, including the intrinsic loss of packets caused by the randomness of the selected transmission and reception channels. The Bluetooth specification outlines a total of 40 channels, three of which (37, 38 and 39) are used for advertising packets.
<Ref> shows the measured reception rates of the relay. Three runs were performed per configuration. Per run, each of the two nodes transmitted 600 packets. For a transmission rate of 1 packet per second, this totals an experimental time of 90 per configuration. For all experiments the average reception rate is 88% ($\sigma = 1.02\%$).
The scan interval does not affect the reception rate significantly. Even so, it is marginally more efficient for the relay to stay tuned to a single channel for as long as possible, i.e., longer scan intervals. This may slightly reduce packet loss, since less time is spent switching radio channels, which otherwise adds idle time.
Furthermore, since the Bluetooth protocol dictates that an advertising event must be sent by a node on all three channels, the likelihood of the relay capturing a packet is higher when staying on a single channel for a period of time greater than the node's transmission period.
Note that in this scenario the relay's radio never transmits, and we evaluated the best case reception rate in a noisy scenario. Since the radio is half-duplex, once the relay begins forwarding packets, its reception rate will consequently decrease, as we present next.
§.§ Relay Forwarding Efficiency for 2 Nodes and 11 Nodes
<Ref> shows the reception efficiencies between the nodes and the relay, and between the relay and gateway. <Ref> shows the case with 2 client nodes, and 15 nodes acting as noise, and <Ref> shows the case with 11 client nodes, and 6 nodes acting as noise.
In these experiments, the sleep time is zero, as we wish to evaluate the performance, for a long period of operation, only as a function of the network size, scan interval, and noise introduced by other devices. The relay has an immediate forwarding policy for every packet received.
<Ref> shows that the relay experiences a greater packet loss relative to the data in <Ref>, since it was configured to interrupt the scan interval and re-transmit immediately. This policy was intended to reduce the travel time of the packets to the gateway. However, this means that only one packet is relayed per scan interval, which explains the loss of packets from the nodes to the relay. Consequently, the number of packets forwarded to the gateway diminishes as the scan interval increases.
For scan intervals greater than 350 ms, the number of packets received by the gateway actually exceeds those forwarded. This is due to two factors. Firstly, for forwarding, the relay must be switched to advertising mode for a duration such that only one packet is sent. However, non-deterministic behaviour during channel switching and switching between reception and transmission sometimes produces duplicate packets. Secondly, the gateway may receive packets directly from the nodes, depending on transmission power. This leads to an apparent increase in system performance for lengthier scan intervals, despite the relay's losses.
<Ref> shows the same metrics when 11 nodes are introduced into the system. For the same reason as before, the reception rate (for both the relay and gateway) decreases with the relay's scan interval. However, this case shows how the relay effectively acts as an intermediate buffer to hold packets. The shorter the scan intervals, the quicker the relay echoes packets, decreasing the likelihood that packets are missed while the gateway is occupied, either by being in a non-listening state, e.g., switching between channels, or by being busy processing beacons received either directly from the nodes or by the relay.
However, even in the best case, only approximately 16% of the total packets sent arrive at the gateway, which implies significant energy expenditure by the beacons without benefit. The next section improves this with a different forwarding policy.
§.§ Relay Forwarding Efficiency for 11 Nodes & Batching Policy
We programmed the relay with a forwarding policy based on a listening period and a forwarding period. During the listening period, the relay accumulates the captured packets, e.g., 4 packets from node #1, 10 from node #2, and one from node #3. During forwarding, the relay echoes up to $N$ repetitions of a packet per node, regardless of how many packets were received per node. For instance, with $N=5$, for the 10 packets received from node #2, five echoes will be transmitted. This reduces the total traffic, and also normalizes the amount of packets sent upstream to the gateways, potentially boosting reception of packets sent by nodes under noisier conditions.
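The following Python pseudocode sketches one listen-and-forward cycle of this batching policy as we understand it; the callback names are ours, not taken from the relay firmware:

```python
def batching_cycle(listen, forward, nr_repeats=5):
    """One listen-and-forward cycle of the batching policy (sketch).

    listen() yields (node_id, packet) pairs captured during the
    listening period; forward(packet) re-advertises a single packet.
    """
    latest = {}
    for node_id, packet in listen():
        latest[node_id] = packet      # keep one packet per node heard
    for packet in latest.values():
        for _ in range(nr_repeats):   # N echoes per node, regardless of
            forward(packet)           # how many packets it produced
```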
<Ref> shows the reception rates, this time also including the rate of successful transfer between the relay and gateway. The three scenarios employ a listening period of 10 s, and a different number of packet repetitions each, e.g., five packet repetitions for <Ref>. For each case, the interval between repetitions is also varied. Once again, the sleep time is zero, and the scan interval is 50 ms for all cases.
The listening time is also shown, which represents the amount of time during each listen-and-forward cycle that the relay is listening. The relay first listens during the scan time ($S_{Time}$) (switching between channels every scan interval), and buffers the packets. Then it enters forwarding mode, where each packet is re-sent a given number of times ($Nr_{Repeats}$) at a set interval ($R_{Interval}$). Given that there are 11 nodes, the fraction of the cycle spent listening can be estimated as:
\begin{equation}
L(\%) = \frac{S_{Time}}{S_{Time} + R_{Interval} \times Nr_{Repeats} \times N_{Nodes} }
\end{equation}
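As a concrete reading of this formula (assuming the scan time is expressed in seconds and the repeat interval in milliseconds, which is our interpretation of the configuration values used here):

```python
def listening_ratio(scan_time_s, repeat_interval_s, nr_repeats, nr_nodes):
    """Fraction of each listen-and-forward cycle spent listening."""
    forwarding_time = repeat_interval_s * nr_repeats * nr_nodes
    return scan_time_s / (scan_time_s + forwarding_time)

# 10 s of listening, 5 repetitions at a 10 ms interval, 11 nodes:
print(listening_ratio(10.0, 0.010, 5, 11))  # ~0.948, i.e. ~95% listening
```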
In the best case, with 5 repetitions at a 10 ms interval, up to 35% of packets are now successfully forwarded to the gateway, which is a 2.2x increase in performance relative to immediate forwarding. Although the relay captures fewer packets directly from the nodes, due to the lengthier forwarding period, the overall forwarding efficiency is higher.
In a multi-relay scenario, a superior performance should be expected, although the best strategy regarding scan interval, repeat interval, and repeat count would have to be determined. However, a possible approach would be to have each relay in the system forward only a subset of all nodes, thus reducing its own load and preventing excessive in-system noise.
§.§ Estimated Power Consumption vs. Sleep Time
This section explores the power consumption in continued operation as a function of sleep time, given that the forwarding rates during uptime are indicated by the previous experiments. To retrieve the power consumption, we utilized a power profiler kit from Nordic Semiconductor <cit.>.
We first use the power profiler to measure the current draw during radio operation (i.e., during scan window periods). Regardless of configuration values, the relay draws 7.5 mA.
We then evaluate the power consumption for different duty cycles defined by the scan and sleep times. The scan interval and window remain equal at 50 ms, and we adopt a batching policy with 5 repetitions and a 10 ms repeat interval. The efficiency for this case was 35%.
The average current draw and efficiency as a function of the duty cycle can be calculated as the product of the duty cycle and these baseline values of 7.5 mA and 35%, respectively. The battery life is computed based on the relay's four AA batteries totaling 12000 mAh.
<Ref> shows the resulting efficiencies and battery life. The efficiency is shown based on the experimental runs with 11 nodes transmitting at a 1 s interval. In this case, a duty cycle of 100% leads to the 35% efficiency, but a battery life of only approximately 2.16 months. To attain a battery life of a year, a duty cycle of 20% is required, with an estimated efficiency of 7%. Note that the effective efficiency of forwarding remains 35%, since a duty cycle of 20% implies that, in the best case, 20% of all packets would be forwarded.
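The arithmetic behind these estimates is reproduced below; the capacity and current units (mAh, mA) follow the measurements reported above, and negligible sleep current is our simplifying assumption:

```python
HOURS_PER_MONTH = 24 * 365 / 12   # ~730 h

def battery_life_months(duty_cycle, capacity_mah=12000.0, active_ma=7.5):
    """Lifetime estimate: the radio draws active_ma for duty_cycle of the
    time; sleep current is assumed negligible."""
    average_ma = duty_cycle * active_ma
    return capacity_mah / average_ma / HOURS_PER_MONTH

def effective_efficiency(duty_cycle, uptime_efficiency=0.35):
    """Forwarding efficiency scaled by the fraction of time awake."""
    return duty_cycle * uptime_efficiency

print(battery_life_months(1.0))   # ~2.2 months, in line with the text
print(battery_life_months(0.2))   # ~11 months, i.e. close to one year
print(effective_efficiency(0.2))  # 0.07 -> the estimated 7% efficiency
```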
Additionally, note that for all experiments, the efficiency is dependent on the total amount of packets sent by the nodes. These experimental runs impose a 1 s period per node. For some applications, like sensor networks for temperature or light intensity readings with periods on the order of minutes, longer update periods would be tolerable, especially since a long battery life is also desired for the nodes.
We can extrapolate that for a node transmission period of 2.5 s, the relay could forward 87% of the packets, given the same up-time and fewer packets, a behaviour similar to the one observed for <cit.> (see <Ref>). For a duty cycle of 20% to ensure close to a year of battery life, the estimated efficiency would increase by 10 percentage points.
The efficiency and power consumption are still subject to additional parameters, such as multiple relays, tweaks to the batching policy, different values for scan and sleep times which result in the same duty cycle, node transmission period, and number of nodes. This exploration is out of the scope of this paper and left as future work.
§ CONCLUSION
We have presented an evaluation of a Bluetooth device and packet forwarding policies in mesh networks. The objective of the relay device is to extend the range of transmission between end devices, such as Bluetooth nodes, and the gateway devices, which are wall-powered and communicate with a central server. The relays allow for more area coverage without additional gateways, which are more costly, and without the necessary additional wall-power infrastructure.
We first evaluated the relay's packet reception with 2 nodes, under noise generated by 17 nodes which were not part of the system, for values of the scan window between 50 ms and 1150 ms, and found that the relay can receive up to 90% of the node transmissions, for a node transmission period of 1 s.
We then evaluated the forwarding efficiency, measured as the number of packets received by the gateway versus the total number of packets sent by the nodes. For a policy where the relay immediately forwards a received packet, only 16% of packets sent by 11 nodes are received by the gateway. By employing a policy of deferred forwarding, with multiple packet repetitions per listened node, this increases to 35%.
Finally, we measured the power draw of the device using a power analyzer, and estimated the lifetime of the four AA batteries (12000 mAh) for different duty cycles and node transmission periods.
|
# Diagnostics of non-Maxwellian electron distributions in solar active regions
from Fe XII lines observed by Hinode/EIS and IRIS
G. Del Zanna DAMTP, Center for Mathematical Sciences, University of Cambridge,
Wilberforce Road, Cambridge, CB3 0WA, UK V. Polito Bay Area Environmental
Research Institute, NASA Research Park, Moffett Field, CA 94035, USA Lockheed
Martin Solar and Astrophysics Laboratory, Building 252, 3251 Hanover Street,
Palo Alto, CA 94304, USA J. Dudík Astronomical Institute, Academy of Sciences
of the Czech Republic, 25165 Ondřejov, Czech Republic P. Testa Harvard-
Smithsonian Center for Astrophysics, 60 Garden St, Cambridge, MA 02193, USA
H.E. Mason DAMTP, Center for Mathematical Sciences, University of Cambridge,
Wilberforce Road, Cambridge, CB3 0WA, UK E. Dzifčáková Astronomical
Institute, Academy of Sciences of the Czech Republic, 25165 Ondřejov, Czech
Republic
###### Abstract
We present joint Hinode/EIS and IRIS observations of Fe XII lines in active
regions, both on-disk and off-limb. We use an improved calibration for the EIS
data, and find that the 192.4 Å / 1349 Å observed ratio is consistent with the
values predicted by CHIANTI and the coronal approximation in quiescent areas,
but not in all active region observations, where the ratio is often lower than
expected by up to a factor of about two. We investigate a number of physical
mechanisms that could affect this ratio, such as opacity and absorption from
cooler material. We find significant opacity in the EIS Fe XII 193 and 195 Å
lines, but not in the 192.4 Å line, in agreement with previous findings. As we
cannot rule out possible EUV absorption by H, He and He II in the on-disk
observations, we focus on an off-limb observation where such absorption is
minimal. After considering these, as well as possible non-equilibrium effects,
we suggest that the most likely explanation for the observed low Fe XII 192.4
Å / 1349 Å ratio is the presence of non-Maxwellian electron distributions in
the active regions. This is in agreement with previous findings based on EIS
and IRIS observations independently.
atomic processes — atomic data — Sun: UV radiation — Ultraviolet: general
††journal: ApJ
## 1 Introduction
Spectral lines from Fe XII provide a wide range of plasma diagnostics for the
solar corona, as this ion produces strong lines and is highly abundant. The
strongest transitions are in the extreme ultraviolet (EUV), and have been
routinely observed by the Hinode Extreme Ultraviolet Imaging Spectrometer
(EIS) (Culhane et al., 2007). Among them, the most intense lines are three
decays to the ground state, from 4P states of Fe XII, at 192.4, 193.5, and
195.1 Å. These Fe XII EIS lines have been widely used for a range of
diagnostic applications, especially in active regions.
Fe XII also produces several weaker forbidden lines in the UV from transitions
within its ground configuration. These include the 1242 Å line which has been
observed by e.g., SoHO SUMER (Wilhelm et al., 1997), and the 1349 Å line,
observed by the Interface Region Imaging Spectrograph (IRIS) (De Pontieu et
al., 2014). Because of the difference in the excitation energies between the
ground configuration states and the upper levels emitting the EUV lines, the
ratios of the UV forbidden lines to any of the 192.4, 193.5 and 195.1 Å lines
observed by EIS provide a direct and important diagnostic of the electron
temperature, largely independent of any assumption of ionisation equilibrium,
although the ratios also have a density dependence as shown in Fig. 1. For the
same reason, these ratios are also excellent, unexplored diagnostics for the
presence of non-Maxwellian electron distributions (NMED), see e.g., Dudík et
al. (2014). In this case, independent measurements of the electron temperature
are necessary.
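The temperature sensitivity can be seen from a toy two-level estimate: the upper levels of the EUV lines lie roughly 64 eV above the ground state, while the upper level of the 1349 Å forbidden line lies only about 9 eV up, so the EUV/UV ratio scales approximately with the Boltzmann factor of the energy difference. The Python snippet below is only a schematic illustration of this scaling; it ignores collision strengths, branching ratios and density effects, all of which the full CHIANTI model includes.

```python
import numpy as np

K_B_EV = 8.617e-5         # Boltzmann constant [eV/K]
HC_EV_ANGSTROM = 12398.4  # photon energy: E[eV] = 12398.4 / lambda[A]

dE_euv = HC_EV_ANGSTROM / 192.4    # ~64 eV: upper level of the EUV line
dE_uv = HC_EV_ANGSTROM / 1349.4    # ~9 eV: upper level of the UV line

for T in (1.5e6, 2.0e6, 2.5e6):    # electron temperatures [K]
    scaling = np.exp(-(dE_euv - dE_uv) / (K_B_EV * T))
    print(f"T = {T:.1e} K: relative EUV/UV Boltzmann factor = {scaling:.2f}")
```

A non-Maxwellian high-energy tail enhances the excitation of the high-lying EUV levels relative to the forbidden line, which is why this ratio also probes NMED.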
We have recently obtained strong evidence that NMED effects are present in
active regions (Lörinčík et al., 2020; Dudík et al., 2017), especially in the
active region coronal loops and also the so-called moss, a thin layer at the
footpoints of the 3 MK loops (Fletcher & De Pontieu, 1999; Testa et al.,
2013), where Fe XII emission is brightest (see, e.g., Tripathi et al., 2008;
Testa et al., 2016).
The ratios of the Fe XII UV forbidden lines vs. the EUV lines is also
sensitive to any EUV absorption due to the presence of cool material such as
filaments and spicular material, which could significantly affect many
diagnostic applications of EUV lines. Most of the EUV absorption is due to
photoionisation of the ground state of neutral hydrogen, with a threshold at
912 Å, but significant absorption can also be due to photoionisation of the
ground states of neutral helium (threshold at 504 Å) and ionised helium
(threshold at 228 Å). Such absorption is widespread in the solar corona, and
is easily visible in active regions filaments. However, any absorption due to
low-lying emission such as spicules is more difficult to measure, as it is
inter-mingled with the moss emission. De Pontieu et al. (2009) carried out a
comparison between the Fe XII forbidden line observed by SoHO SUMER at 1242 Å
and the 195.1 Å line observed by Hinode/EIS in an active region. They found
that the 195.1 Å / 1242 Å ratio in moss regions was a factor of about 1.5
lower than expected and concluded that a likely explanation for the
discrepancy was absorption in the EUV due to cool plasma. They used an early
version of CHIANTI, the one available at the time. Since then, a
large-scale scattering calculation for Fe XII (Del Zanna et al., 2012)
significantly changed (by 30–50%) the populations of the ground configuration
states. The new calculations consequently increased the intensities of the forbidden lines significantly. The improved Fe XII atomic data were made
available in version 8 of CHIANTI, and are also those in the current CHIANTI
v.10 (Del Zanna et al., 2021). With the improved atomic data, the 195.1/1242 Å
ratio decreases by about a factor of 1.5, bringing it into better agreement
with the ratios observed by De Pontieu et al. (2009) in the moss regions,
although not with those in the loop regions.
Figure 1: Theoretical intensity ratio (ergs) between the EUV 192.4 Å EIS line
and the UV 1349.4 Å IRIS forbidden line, calculated with CHIANTI v.10 and a
range of electron densities and temperatures.
As IRIS is capable of measuring the Fe XII 1349 Å line with a faster cadence
than that of SUMER for the 1242 Å line (about one hour), we devised a Hinode
campaign (HOP 246) of simultaneous EIS/IRIS active region observations. The
campaign started on 2014 February 14 and was mostly run in March 2014 on a few
active regions, in particular following the disk passage of AR 12014 during
the second half of the month. In spite of relatively long IRIS exposures
(30s), the signal for the Fe XII 1349 Å line, a weak forbidden transition, was
consistently low, except for a few observations when the active region was
near the limb. An analysis of two of those observations was presented by Testa
et al. (2016). Their results focused on Doppler flows and widths, but also
indicated a significant discrepancy (up to nearly a factor of two) between the
observed and predicted 195.1 Å / 1349.4 Å ratios, with the observed ones being
systematically higher. The discrepancy increased with the new atomic data in
CHIANTI version 8 (Del Zanna et al., 2015), relative to version 7 and seemed
to indicate a problem with the atomic data. This was surprising since the
benchmarking of Fe XII with observations generally showed good agreement (see
a summary in Del Zanna & Mason, 2018). After further investigation, this
discrepancy was found, for the most part, to be explained by the erroneous
inclusion of an obsolete keyword in eis_prep, and by the adopted EIS
calibration, which differs from the updated version used here.
To test the Fe XII 192.4 Å / 1349.4 Å diagnostics, we analysed the HOP 246
observations, but also searched the entire IRIS and EIS databases for any
other suitable observations where the Fe XII lines were observed by both
instruments. We analysed several of these datasets and in the process
identified a series of problems associated with the EIS Fe XII observations,
as discussed below.
Section 2 outlines the data analysis and describes some of the main issues we
encountered which affected the selection of the observations. Section 3
describes the observations analysed here, while Section 4 summarizes our
conclusions. An Appendix provides supplementary information.
## 2 Data analysis and selection
### 2.1 EIS
The EIS data were processed with custom-written software (see, e.g., Del Zanna
et al., 2011). EIS saturation occurs at 16,000 DN; however, we found
indications of some non-linear effects at lower values approaching this
threshold (Del Zanna et al., 2019). The strongest EIS 195.1 Å line was sometimes saturated (or
close to saturation) in the AR moss regions. For this reason (and for other
reasons discussed below) observations of the weaker 192.4 Å line were used
instead.
An analysis of a large number of EIS observations of different features
(on-disk and off-limb, with different exposures and slit combinations),
summarised in Del Zanna et al. (2019), revealed several anomalies in the
192.4, 193.5, and 195.1 Å lines. The main ones affect the instrumental widths
of the 193.5 and 195.1 Å lines and their reduced intensities (compared to the
weaker 192.4 Å line), in all active region observations and in many off-limb
observations. The only explanation
found for the anomalous ratios and widths of these lines was the ubiquitous
presence of opacity effects. In fact, these three lines are decays to the
ground state, so the ratios of these lines are insensitive to density and
temperature variations. Their theoretical ratios show agreement with well-
calibrated observations of the quiet Sun within 1% (Storey et al., 2005; Del
Zanna & Mason, 2005). Opacity effects were found to decrease the intensity of
the stronger 195.1 Å line by about 40%. Note that in active region
observations the relative intensity of the 195.1 Å line should actually
increase (compared to the quiet Sun) due to the presence of a weak Fe XII
density-sensitive transition (Del Zanna & Mason, 2005).
To diagnose the presence of NMED, the temperature needs to be estimated
independently. The Fe XI lines, identified by Del Zanna (2010) and used in
Lörinčík et al. (2020), offer such a diagnostic, but are generally not
telemetered, and in one of the observations discussed here they are very weak, so
we had to resort to standard emission measure analyses.
To measure electron densities and for a meaningful comparison with IRIS, we
need to convert the EIS DN values to physical units using a radiometric
calibration. Del Zanna (2013a) presented a significant revision of the ground
calibration, with an additional time-dependent decrease in the sensitivity of
the long-wavelength channel. That calibration substantially affected the
ratios of the 192.4, 193.5, and 195.1 Å lines, which in quiet Sun on-disk
observations were forced to agree with theory and previous observations. As
further wavelength-dependent corrections as a function of time were clearly
needed, and the calibration only considered data until 2012, a long-term
program was started by GDZ and H. P. Warren, both of the EIS team, to provide an
improved radiometric calibration. Here we adopt these new calibration results,
as discussed in the Appendix.
### 2.2 IRIS
The IRIS and EIS observations are generally carried out simultaneously, but
are not necessarily co-spatial. In fact, the EIS slit is moved from west to
east to ‘raster’ a solar region, while the IRIS slit is moved in the opposite
direction (see e.g., Testa et al., 2016). Several IRIS observations were
carried out with a roll angle of 90 degrees, so that some co-spatial and co-
temporal EIS/IRIS observations were guaranteed. In some instances, several
EIS/IRIS rasters were repeated, so it was possible to check the solar
variability.
In addition to the available IRIS and EIS datasets, we also analysed context
images from the Atmospheric Imaging Assembly (AIA; Lemen et al., 2012)
telescope on board the Solar Dynamics Observatory (SDO; Pesnell et al.,
2012) in the 193 Å broad band filter, to select observations with small solar
variability (we note that the AIA 193 Å band is typically dominated by the
three Fe XII 192.4, 193.5, and 195.1 Å lines in the moss regions; e.g.,
Martínez-Sykora et al. 2011).
IRIS level 2 data were used. The data were spatially binned as described
below, to improve the signal. The Fe XII line was fitted with a Gaussian in
each pixel and the conversion to physical units was performed afterwards. The
radiometric calibration of the IRIS instrument is discussed in Wülser et al.
(2018). The uncertainty in the IRIS calibration for the data analysed here is
of the order of 20–30%.
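For illustration, a minimal sketch of this per-pixel fitting step (not the
actual IRIS pipeline code) is given below: a Gaussian plus a constant
background is fitted to a hypothetical spectral window around the Fe XII
1349.4 Å line, and the integrated intensity in DN is recovered from the fitted
parameters.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_bg(wl, peak, centre, sigma, bg):
    """Gaussian line profile on a flat background."""
    return peak * np.exp(-0.5 * ((wl - centre) / sigma) ** 2) + bg

# Hypothetical spectral window (DN) for one spatially rebinned pixel.
wl = np.linspace(1348.9, 1349.9, 40)                        # wavelengths [A]
spec = gauss_bg(wl, 25.0, 1349.4, 0.05, 3.0)
spec += np.random.default_rng(1).normal(0.0, 1.0, wl.size)  # noise

# Initial guesses: peak above background, rest wavelength, narrow width.
p0 = [spec.max() - spec.min(), 1349.4, 0.05, spec.min()]
popt, pcov = curve_fit(gauss_bg, wl, spec, p0=p0)

# Integrated line intensity in DN; conversion to physical units (ergs)
# via the IRIS radiometric calibration would follow.
total_dn = popt[0] * popt[2] * np.sqrt(2.0 * np.pi)
print(f"centre = {popt[1]:.3f} A, integrated intensity = {total_dn:.1f} DN")
```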
We also note that Testa et al. (2016) showed that in cases with low signal-to-
noise (especially when the peak DN in the line is less than 10), the intensity
of the line is likely to be under-estimated by up to $\sim 15$ percent. For
the comparisons with EIS, we typically only consider the regions where the
IRIS line has averaged peak DN above 20. Hence, we have not applied any such
corrections to the IRIS intensities.
During the analysis of the on-disk observations, we noticed the presence of an
unidentified photospheric (very narrow) line at exactly the rest wavelength of
the 1349.4 Å Fe XII line (see the Appendix). Using the C I 1354.28 Å line as a
reference, we have estimated that the contribution of this line to on-disk
moss regions is minimal, of the order of 5% in some locations.
In addition, we estimate that the theoretical emissivity of the 1349.4 Å Fe
XII line is accurate to within 10%. As mentioned, the new atomic model
(CHIANTI, v8) increased the intensities of the forbidden lines by 50 percent
or more. A benchmark of the v.8 atomic data against quiet Sun off-limb SoHO
SUMER observations indicated excellent agreement (Del Zanna & DeLuca, 2018).
The population of the upper state of the 1349.4 Å line is mainly driven by
cascading effects. Improvements with future atomic calculations cannot be
ruled out. However, it is unlikely that larger calculations would affect the
line by more than a few percent. In the main scattering calculations, all
states up to $n=4$ were included by Del Zanna et al. (2012). Cascading effects
from higher states, up to $n=6$, were included with an approximate (distorted
wave) calculation, showing an increase in the forbidden lines by
about 3%. These cascading effects were not included in CHIANTI v8 as the size
of the model ion would have been very large, and as a 3% increase was deemed
negligible.
### 2.3 On-disk observation - 2014 March 29
Following the above-mentioned data selection constraints, we analysed several
observations. A large scatter in the 192.4 Å / 1349.4 Å ratios was found,
although the various measurements gave mutually consistent results. We
provide results for one of the on-disk observations, that obtained on 2014
March 29 by IRIS at 23:27-02:14 UT, i.e., the same observing sequence analyzed
by Testa et al. (2016).
The EIS raster we focus on here was obtained between 23:24 and 23:50 UT. Note
that Testa et al. (2016) analyzed a raster obtained a couple of hours later
(2014 March 30, 01:36–02:02 UT). In the brightest moss regions, the
EIS 195.1 Å line reached 15,000 DN, i.e., was very close to saturation. Fig. 2
(top) shows an image of the integrated intensity of the Fe XII 192.4 Å line
and its ratio (ergs) with the 195.1 Å line. The expected ratio is 0.31, which
is generally observed on-disk, but not in the brightest moss regions, where
the ratio increases to values around 0.4, an indication of some opacity
effects. We have assumed that the opacity effects in the 192.4 Å line are
negligible (see discussion below), and used this line for the comparison with
IRIS.
Figure 2: Top: intensity image in the Fe XII 192.4 Å line and ratio with the
195.1 Å line (ergs) for the 2014-03-29 observation. Note that the ratio should
have a value of 0.31. Bottom: intensity image of the IRIS 1349.4 Å line and
the 192.4 Å / 1349 Å ratio (ergs).
The IRIS raster was obtained with a $-90^{\circ}$ roll angle, 30s exposures and
stepping the 0.33′′ slit by 1′′, with 64 steps. The IRIS data were rotated and
rebinned by a factor of 12 along the slit, to obtain a spatial resolution in
the EW direction comparable to the EIS one, as EIS rastered with about 2′′
steps using the 2′′ slit. In the other direction, the IRIS data were first
interpolated linearly over a 0.33′′ grid, and then rebinned by a factor of 3,
to achieve a spatial resolution of 1′′, equivalent to the EIS pixel size along
the slit.
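The rebinning scheme can be summarised with the following sketch (array shapes
are hypothetical; the binning factors are those quoted above):

```python
import numpy as np

rng = np.random.default_rng(0)
iris = rng.random((768, 64))   # hypothetical (pixels along slit, raster steps)

# Along the slit: bin by a factor of 12 (~0.17'' pixels -> ~2'').
nbin = 12
ny = iris.shape[0] // nbin * nbin
binned = iris[:ny].reshape(ny // nbin, nbin, -1).mean(axis=1)

# Across the slit: interpolate the 1'' raster steps onto a 0.33'' grid,
# then rebin by a factor of 3 to reach ~1'' sampling.
x_old = np.arange(iris.shape[1]) * 1.0                # 1'' steps
x_new = np.arange(0.0, x_old[-1], 0.33)               # 0.33'' grid
interp = np.array([np.interp(x_new, x_old, row) for row in binned])
nx = interp.shape[1] // 3 * 3
rebinned = interp[:, :nx].reshape(interp.shape[0], nx // 3, 3).mean(axis=2)
print(rebinned.shape)
```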
The contribution of the unidentified cool line blending the IRIS Fe XII line
was estimated by removing 4% of the C I line at 1357.13 Å. This resulted in a
small correction, of the order of 5 percent in a few places, i.e., not
affecting the main results.
As the effective spatial resolution of EIS is about 3–4′′ (partly due to the
jitter during the long exposures), for a direct pixel-to-pixel comparison, the
IRIS data were convolved to obtain an effective spatial resolution close to
the EIS one. Such smoothing was not carried out in the analysis by Testa et
al. (2016), which may explain why a broader scatter in the ratios was found in
their analysis, compared to what is shown here. Finally, the EIS and IRIS
images were co-aligned by cross-correlation. The resulting IRIS image is shown
in Fig. 2 (bottom), together with the calibrated ratio of the 192.4 Å / 1349.4
Å lines (in ergs). It is clear that a pixel-to-pixel comparison has some
limitations, as in some places the morphology in the EIS and IRIS lines is not
quite the same. That is partly due to the non-simultaneity, and partly due to
the EIS effective resolution, which is very difficult to model. However, overall
the comparison is satisfactory. Figure 2 shows that the 192.4 Å / 1349.4 Å
ratio varies significantly, from values close to 30 in some regions to around
15 in the brightest regions.
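The smoothing and co-alignment steps described above can be sketched as
follows (illustrative arrays and an assumed ~3.5′′ effective EIS resolution;
the actual analysis operates on the calibrated rasters):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift as nd_shift
from scipy.signal import correlate

rng = np.random.default_rng(2)
eis = rng.random((128, 60))                         # hypothetical EIS raster
iris = nd_shift(eis, (3, -2), order=1) + 0.05 * rng.random((128, 60))

# Degrade IRIS to an effective EIS resolution of ~3.5'' FWHM on a 1''
# grid: sigma = FWHM / 2.355 in pixel units.
iris_sm = gaussian_filter(iris, sigma=3.5 / 2.355)

# Cross-correlate the mean-subtracted images; the peak gives the offset.
cc = correlate(eis - eis.mean(), iris_sm - iris_sm.mean(), mode="full")
dy, dx = np.unravel_index(np.argmax(cc), cc.shape)
offset = (dy - (eis.shape[0] - 1), dx - (eis.shape[1] - 1))
iris_aligned = nd_shift(iris_sm, offset, order=1)
print("shift applied to IRIS:", offset)
```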
The 192.4 Å / 195.1 Å ratio (shown in Fig. 2) is indicative of some opacity
effects, which would be significant in the 195.1 Å line, but relatively small
(about 10%) in the weaker 192.4 Å line (see a discussion below on opacity
issues).
Figure 3: The 192.4 Å / 1349.4 Å ratio for the observation on 2014-03-29, as a
function of the calibrated intensity in the IRIS line.
A scatter plot of the 192.4 Å / 1349.4 Å ratio as a function of the calibrated
intensity in the IRIS line is shown in Fig. 3. It shows a large variation of
about a factor of two, with lower values where Fe XII is brightest, in the
moss regions. We selected three moss regions, indicated as B1, B2, and B3 in
Fig. 2 and measured the averaged density. The averaged intensities (obtained
from the pixel-by-pixel measurements) in the lines, their ratios and the
averaged densities are shown in Table 1. The averaged density is about
$3\times 10^{9}$ cm$^{-3}$ using the Fe XIII lines. The densities from Fe XII are
higher, partly because of opacity effects. We then measured the temperature
distribution with both an EM loci and a DEM method, using coronal abundances.
For the DEM analysis we used a modified version of the CHIANTI v.10 programs,
where the DEM is modelled as a spline function and the routine MPFIT is used.
The DEM results for the region B1 are shown in Fig. 4 as an example. The
temperature distribution is multi-thermal, but the Fe XII and Fe XIII lines
can also be reasonably modelled with an isothermal plasma around 2 MK. In the
moss region B1, the averaged ratio is about 19, lower than 25.1, the expected
value calculated with the measured density and the DEM.
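The inversion itself was done in IDL (modified CHIANTI v.10 programs with
MPFIT); a schematic Python analogue, with hypothetical Gaussian kernels
standing in for the CHIANTI contribution functions and scipy's least_squares
in place of MPFIT, is shown below.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.optimize import least_squares

logT = np.linspace(5.8, 6.8, 101)
T = 10.0 ** logT
dT = np.gradient(T)                      # integration weights [K]

def G(logT0, width=0.12):
    """Toy contribution function peaking at logT0 (CHIANTI stand-in)."""
    return 1e-24 * np.exp(-0.5 * ((logT - logT0) / width) ** 2)

kernels = np.array([G(6.0), G(6.1), G(6.2), G(6.3), G(6.45)])
knots = np.linspace(5.9, 6.6, 4)         # spline knots in log T

def predict(log_dem_at_knots):
    """Predicted intensities for a DEM modelled as a spline in log T."""
    dem = 10.0 ** CubicSpline(knots, log_dem_at_knots)(logT)  # [cm^-5 K^-1]
    return kernels @ (dem * dT)

# Hypothetical 'observed' intensities from a known DEM, with 5% noise.
rng = np.random.default_rng(3)
obs = predict([20.5, 21.5, 21.8, 20.5]) * (1 + 0.05 * rng.normal(size=5))

fit = least_squares(lambda p: predict(p) / obs - 1.0, x0=np.full(4, 21.0))
print("fitted log DEM at the knots:", np.round(fit.x, 2))
```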
Note that this is the same AR observed in Testa et al. (2016) (although not
observed at the same time; see Sect. 2.3), where, we recall, the 195.1 Å /
1349.4 Å ratios were found to be higher than predicted (up to nearly a factor
of two). We
tracked down the reason for this large difference, which was mostly due to an
obsolete keyword (correct_sensitivity) in the EIS standard eis_prep software,
and the different EIS calibration used.
The large variations in the ratio and the very low values (about 15) need to
be explained. They could be due to strong EUV absorption by neutral hydrogen
and helium, or to other non-equilibrium effects discussed below. Cool
filamentary material is always present in active regions, but its absorption
is difficult to quantify, unless it is higher up in the corona and the
underlying emission can reliably be estimated. In this observation, and the
other ones we have analysed, we did not find obvious evidence that the lower
ratios were due to cool filaments. However, we cannot rule out the possibility
that neutral hydrogen and helium are intermixed with the moss emission.
Figure 4: Top: DEM for the B1 region, indicated in Fig. 2. The points are plotted at the effective temperature, and at the theoretical vs. the observed intensity ratio multiplied by the DEM value. The wavelengths (Å) and main ion are indicated. Bottom: emissivity ratio of the main Fe XIII lines in the B1 region.

Table 1: Intensities I (ergs) and ratios R (ergs) in the moss regions observed on 2014-03-29. Values in parentheses are DN. The last column shows the densities from the Fe XIII 204 Å / 202 Å ratio.

Region | I (192 Å) | I (195 Å) | I (1349 Å) | R (192/195 Å) | R (192/1349 Å) | log $N_{\mathrm{e}}$ [cm$^{-3}$]
---|---|---|---|---|---|---
B1 | 1480 (11905) | 3970 (45533) | 78 (1100) | 0.37 | 19.0 | 9.5
B2 | 1850 (14881) | 4630 (53015) | 94 (1330) | 0.40 | 19.7 | 9.45
B3 | 1280 (10274) | 3180 (36400) | 41 (581) | 0.40 | 31.2 | 9.5
We have analysed other on-disk observations of active regions, and found
similar results to those shown above. Aside from other observations of the
same AR at the end of March 2014, we have analysed in detail an observation on
2013 Oct 7 and one on 2013 Nov 30.
### 2.4 Off-limb observation of 2013-10-22
To reduce the possible effects of absorption by cool material, we have
searched for off-limb observations with minimal filament material.
Unfortunately, only one suitable observation was found. This was obtained on
2013-10-22. The EIS study designed by us (cam_ar_limb_lite_v2) was rich in
diagnostic lines and had a good signal, as the exposure was 45 s. One problem
with this observation was the presence of a storm of high-energy particles, so
each exposure had to be inspected to remove the particle hits, as standard
cosmic ray removal procedures did not work. In spite of this, some anomalous
intensities are still present due to residual particle hits/warm pixels in
some weaker lines. EIS rastered an off-limb region where a small active region
was present, during 06:45–08:51 UT with the 2′′ slit. Most of the AR was
located well behind the east limb, as we could judge from AIA observations of
the following days.
We checked for the presence of cool filaments or spicular material using AIA
observations in 304 Å, but also 193 Å and 211 Å, together with H$\alpha$
observations by the Kanzelhöhe Observatory. The co-alignment of AIA with EIS
was achieved using a slicing method for the AIA 193 Å data to produce a
pseudo-raster corresponding to the EIS Fe XII 192.4 Å raster. The best
co-alignment is obtained when the AIA data are rotated with respect to EIS by
about 0.5∘ and shifted by a few arcseconds in both axes. The Kanzelhöhe H$\alpha$
data traditionally have excellent pointing, which we verified by comparison
with AIA 193 Å, focusing on filaments off-limb. Thus, the Kanzelhöhe H$\alpha$
data were coaligned with EIS analogously to AIA data.
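For reference, the 'slicing' used to build the AIA pseudo-raster amounts to
the following (all arrays hypothetical): for each EIS slit position and
exposure time, the column of the nearest-in-time AIA 193 Å image at the slit
location is extracted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_img, ny, nx = 40, 512, 512
aia_cube = rng.random((n_img, ny, nx))    # time series of AIA 193 A images
aia_times = np.arange(n_img) * 12.0       # AIA cadence [s]

n_steps = 60
eis_times = np.arange(n_steps) * 45.0     # one 45 s exposure per slit step
eis_x = 400 - np.arange(n_steps)          # slit steps from west to east

pseudo = np.empty((ny, n_steps))
for i, (t, x) in enumerate(zip(eis_times, eis_x)):
    j = np.argmin(np.abs(aia_times - t))  # nearest-in-time AIA image
    pseudo[:, i] = aia_cube[j, :, x]      # column at the slit position
print(pseudo.shape)
```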
The context AIA and H$\alpha$ data are shown in Figure 5 alongside the EIS
raster. We have selected two regions for further analysis, which are labelled
‘AR’ and ‘QR’.
The H$\alpha$ data and the AIA coronal images do not show any indications of
absorption by cool material off-limb in these regions. The main absorption
would be due to neutral hydrogen and neutral helium, with a minor contribution
from ionized helium.
The AIA 304 Å images show some emission above the limb in the ‘AR’ region, but
the amount of ionized helium is difficult to quantify, for multiple reasons,
including uncertainties in the chemical abundances, instrument calibration,
and coronal contribution to the band. We estimate that in the ‘AR’ region the
Si XI 303.33 Å line alone accounts for about a quarter of the AIA count rates
(which are about 40 DN s$^{-1}$). In fact, with the DEM distribution we
obtained, the resulting intensity of the Si XI line is 5280 erg s$^{-1}$
cm$^{-2}$ sr$^{-1}$. Using the estimated effective area of the AIA channel
(for this observation and normalised to EVE), that is equivalent to an average
of 11 DN s$^{-1}$ per AIA pixel due to Si XI. We note that the resonance He II
lines at 303.8 Å are
formed at higher temperatures and have much larger optical thickness than
H$\alpha$, which in turn has similar optical thickness to the H and He
continua around 195 Å (e.g. Wang et al., 1998; Anzer & Heinzel, 2005). Thus,
the presence of weak-signal structures in He II, but not in H$\alpha$ along
the LOS is still consistent with negligible absorption of EUV radiation by
chromospheric or transition-region material.
IRIS scanned the same region from east to west with the 0.33′′ slit, 30s
exposure times and ‘sparse rastering’, i.e., the slit location was stepped by
1′′. The interesting area above the limb, where some IRIS signal from Fe XII
was present, was observed almost simultaneously by IRIS and EIS. We performed
the IRIS and EIS data reduction and calibration in a similar way to that
described in the previous section.
Figure 5: Context observations of the 2013-10-22 off-limb active region. The
EIS Fe XII 192.4 Å line is shown in the panel (a), while the AIA 193 Å pseudo-
raster is shown in panel (b). Panels (c)–(f) show snapshots from AIA and
Kanzelhöhe H$\alpha$ observations, all coaligned to match the EIS and IRIS
observations.
Figure 6: Summary of the EIS/IRIS comparison on the off-limb AR observation on
2013-10-22.
Figure 6 shows a summary of the EIS/IRIS comparison. As for the on-disk cases,
the 192.4 Å / 195.1 Å ratio is higher in the brightest regions, indicating some opacity
effects. The width of the 195.1 Å line is also larger in the same regions. As
in the previous cases, the 192.4Å / 1349.4 Å ratio varies significantly from
values around 30, north of the AR, to values around 15 closest to the core of
the AR. Figure 7 shows the scatter plot of this ratio. Averaged intensities
and ratios in those regions are shown in Table 2.
Figure 7: Scatter plot for the off-limb AR observation on 2013-10-22.

Table 2: Intensities I (ergs) and ratios R (ergs) in the two off-limb regions observed on 2013-10-22. Values in parentheses are intensities in DN (the exposure times for EIS and IRIS were 45 and 30 seconds, respectively).

Region | I (192 Å) | I (195 Å) | I (1349 Å) | R (192/195 Å) | R (192/1349 Å)
---|---|---|---|---|---
AR | 1545 (19412) | 2920 (51907) | 73 (3343) | 0.53 | 21
QR | 880 (11049) | 1770 (31521) | 28.1 (1288) | 0.50 | 31
Figure 8 shows the emissivity ratios of the EIS Fe XII and Fe XIII lines, in
the quiet off-limb region (above) and active region (below). It is clear that
both regions are affected by opacity, which reduces the intensities of the Fe
XII 193.5 and 195.1 Å lines, compared to the 192.4 Å one. The densities
obtained from the Fe XII lines (using the 192.4 Å line) are close to those
obtained from the Fe XIII lines, especially considering that the 192.4 Å line
is likely underestimated because of opacity effects (see discussion below). We
adopt the Fe XIII densities as they are more reliable. The QR and AR regions
have densities of around $4\times 10^{8}$ and $1\times 10^{9}$ cm$^{-3}$, respectively.
Note that the Fe XIII line calculations include photoexcitation effects, which
affect the population of the ground state and the density diagnostics by up to
10%, as discussed in Dudík et al. (2021). These effects are caused by the large flux of
photons emitted by the disk around 1 $\mu$m, and resonantly absorbed by the
two near-infrared Fe XIII lines within the ground configuration. We have also
explored the effects due to photoexcitation in the Fe XII model ion,
considering that several transitions within the ground configuration fall in
the visible and far UV, but we did not find significant changes. We used
observations of the solar irradiance in the far UV and visible.
We have looked at the spatial distribution of various line ratios sensitive to
temperature and found that the temperature so obtained is relatively constant
in the off-limb regions. We produced EM loci plots for the quiet Sun and AR
regions, finding that observations are consistent with an almost isothermal
plasma around log $T$ [K]=6.2–6.25, which is the typical formation temperature
of Fe XII. We have then performed a DEM analysis using a set of strong lines
from Iron, not density-sensitive. The results are shown in Fig. 9 and confirm
the near isothermality of the plasma emission, with a marked higher
temperature component in the AR. The DEM analysis also indicated that the S/Fe
relative abundance is close to photospheric around 1.2 MK (using a S X line).
Figure 8: Emissivity ratios of the EIS Fe XII and Fe XIII lines, in the quiet
off-limb region (above) and active region (below).
Figure 9: DEMs for the quiet off-limb region (above) and active region (below)
for the 22-Oct-2013 observation. The points are plotted at the temperature
$T_{\rm max}$ of the maximum in the emissivity, and at the theoretical vs. the
observed intensity ratio multiplied by the DEM value. The wavelengths (Å) and
main ion are indicated.
We regard the spatial variation in the 192.4 Å / 1349.4 Å ratio as important,
since this is independent of any calibration issues, and largely independent
of the small variation in the density and temperature in the off-limb regions.
The averaged ratio in the QR region (31) is close to the expected value, 34.1,
obtained by folding the emissivities with the DEM distribution. On the other
hand, the AR value (21) is significantly lower than the expected value (30.9,
with the DEM shown above). The lowest values near the limb (around 15) are
even more difficult to explain.
As there is no clear indication for absorption by filament material, and as
opacity effects would decrease the 192.4 Å line by only a small amount (see
Sect. 3.1), we speculate that the main effect that could be responsible for
changing the ratio is NMED. The fact that the ratio has values close to the
expected ones in the northern part of the off-limb region suggests that the
EIS vs. IRIS radiometric calibration is reasonably accurate.
## 3 Possible effects on the Fe XII line ratio and the temperatures
### 3.1 Opacity effects
Following Del Zanna et al. (2019), the optical thickness at line centre can be
written as
$\tau_{0}=8.3\times 10^{-21}\,f_{lu}\,\frac{\lambda^{2}}{\Delta\lambda_{\mathrm{FWHM}}}\;N_{l}\,\Delta S$ (1)
where $f_{lu}$ is the absorption oscillator strength, $N_{l}$ is the number
density of the lower level, $\Delta S$ the path length,
$\Delta\lambda_{\mathrm{FWHM}}$ is the FWHM of the line profile in Å, and
$\lambda$ is the wavelength in Å. For the 195 Å line, $f_{lu}=2.97/4$,
neglecting the weaker line.
The population of the lower level can be written as
$N_{l}={N_{l}\over N({\rm Fe\,XII})}\,{N({\rm Fe\,XII})\over N({\rm Fe})}\,Ab({\rm Fe})\,\frac{N_{\mathrm{H}}}{N_{\mathrm{e}}}\,N_{\mathrm{e}}\;,$ (2)
where $N_{l}/N({\rm Fe\,XII})$ is the relative population of the ground state,
${N({\rm Fe\,XII})/N({\rm Fe})}$ is the peak relative population of the ion,
$Ab({\rm Fe})$ is the Fe abundance, $N_{\mathrm{H}}/N_{\mathrm{e}}=0.83$, and
$N_{\mathrm{e}}$ is the averaged electron number density.
Considering the box above the active region, and assuming photospheric
abundances, we have $Ab({\rm Fe})=3.16\times 10^{-5}$. From the EM loci / DEM
analysis, we have $EM=10^{28.3}$ cm$^{-5}$ and log $T$ [K] = 6.25,
approximately. With this temperature, ${N({\rm Fe\,XII})/N({\rm Fe})}=0.21$
using the CHIANTI ionisation equilibrium. Assuming the density from the Fe
XIII line ratio ($1\times 10^{9}$ cm$^{-3}$), we have $N_{l}/N({\rm
Fe\,XII})=0.75$ for these values of $T$ and $N_{\mathrm{e}}$. From the $EM$
and $N_{\mathrm{e}}$ values, assuming a filling factor of 1, we obtain a path
length of $2\times 10^{10}$ cm, from which we obtain $\tau_{0}=0.96$ for the
195.1 Å line and $\tau_{0}=0.32$ for the 192.4 Å line, as the latter
transition has an oscillator strength one third that of the 195.1 Å line.
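These estimates can be checked numerically; in the sketch below the only
quantity not quoted above is the line FWHM, for which we assume a purely
thermal Doppler width (about 0.025 Å at log $T$ [K] = 6.25), giving
$\tau_{0}$ of order unity, consistent with the 0.96 quoted above (the exact
value depends on the adopted width).

```python
import numpy as np

f_lu   = 2.97 / 4   # 195.1 A absorption oscillator strength (from the text)
lam    = 195.1      # wavelength [A]
fwhm   = 0.025      # ASSUMED thermal FWHM [A] at log T = 6.25
ab_fe  = 3.16e-5    # photospheric Fe abundance
ion_fr = 0.21       # N(Fe XII)/N(Fe) at log T = 6.25
pop_gs = 0.75       # N_l/N(Fe XII), ground-state relative population
n_e    = 1e9        # electron density [cm^-3]
ds     = 2e10       # path length [cm] from EM = 10^28.3 cm^-5

n_l = pop_gs * ion_fr * ab_fe * 0.83 * n_e            # Eq. (2) [cm^-3]
tau0 = 8.3e-21 * f_lu * lam**2 / fwhm * n_l * ds      # Eq. (1)
print(f"tau0(195.1) ~ {tau0:.2f}, tau0(192.4) ~ {tau0 / 3:.2f}")
```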
Assuming that the source function $S_{\nu}(\tau_{\nu})$ does not vary along
the line of sight, the peak intensity of each line is
$I_{\nu}=S_{\nu}\,\left(1-e^{-\tau_{0}}\right)\;.$ (3)
Recalling that the line source function $S_{\nu}$ is:
$S_{\nu}={2\,h\,\nu^{3}\over c^{2}}\;\left({g_{u}N_{l}\over g_{l}N_{u}}-1\right)^{-1}\;,$ (4)
with standard notation, we find that $S_{195}/S_{192}=1.04$ using the
statistical weights $g$ and the level populations calculated with the model
ion.
For $\tau_{0}(195)=0.96$, the ratio of the intensities is then
$I_{192}/I_{195}=0.43$, which is higher than the optically thin value of 0.31
and closer to the observed value of 0.53 for the region.
To estimate how much the weaker 192.4 Å line is suppressed at an optical depth
of 0.32, we note that our simple assumption is equivalent to the average
escape factor formalism. Considering the homogeneous case discussed by Kastner
& Kastner (1990), we obtain an escape factor of about 0.89, i.e., the 192.4 Å
line is suppressed by about 10%. Indeed, if we increase the 192.4 Å line
intensity by this amount, the emissivity ratio curves yield a slightly lower
electron density, in better agreement with the values obtained from the Fe
XIII ratio.
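The arithmetic of Eqs. (3)–(4) and of the escape-factor estimate is easily
reproduced; the simple homogeneous-slab expression
$(1-e^{-\tau_{0}})/\tau_{0}$ used below gives about 0.86, close to the 0.89
obtained from Kastner & Kastner (1990).

```python
import numpy as np

tau_195, tau_192 = 0.96, 0.32
s_ratio = 1.04                                 # S_195 / S_192 (from the text)

ratio = (1 - np.exp(-tau_192)) / (s_ratio * (1 - np.exp(-tau_195)))
print(f"I(192.4)/I(195.1) = {ratio:.2f}")      # ~0.43, vs 0.31 optically thin

escape = (1 - np.exp(-tau_192)) / tau_192      # simple homogeneous estimate
print(f"192.4 A escape factor ~ {escape:.2f}") # ~0.86
```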
Finally, for the quiet off-limb ‘QR’ region, if we repeat the above estimates,
considering the lower $EM$ and lower density, we obtain
$\tau_{0}(192.4)=0.33$, i.e., a similar optical depth, in agreement with the
fact that the observed ratio is very similar.
Figure 10: Non-Maxwellian $\kappa$-distributions (top row) and their influence
on the Fe XII 192.4 Å and 1349.4 Å lines, whose contribution functions are
shown in the middle panels. The energy excitation thresholds for these two
lines are denoted by dashed lines in the top panel. The bottom panel shows the
behaviour of the 192.4 Å / 1349 Å ratio with $\kappa$, assuming peak formation
temperatures.
Figure 11: Diagnostics of the NMED represented by $\kappa$-distributions using
the ratio-ratio technique. Individual colors represent the value of $\kappa$,
while crosses of different sizes represent the observed line ratios in the
QR and AR boxes. The photon noise uncertainty $\sigma_{\mathrm{phot}}$ (light
blue), as well as the added 20% and 30% calibration uncertainties
$\sigma_{20,30}$ (violet and black, respectively), are shown. Colored
asterisks in the right panel denote the DEM$_{\kappa}$-predicted line
intensity ratios (see Section 3.3 for details). Note that both axes are scaled
logarithmically.
Figure 12: Observed ratios in each individual pixel corresponding to Figure 7
are overplotted on the theoretical diagnostic curves. Two sets of curves are
shown, for log($N_{\mathrm{e}}$ [cm$^{-3}$]) = 8.6 and 9.4, representing the lowest
and highest densities detected. The points are color-coded either according to
the electron density (left panel) or according to the Fe XII 1349 Å intensity
(right panel).
### 3.2 Non-Maxwellian electron distributions (NMED)
#### 3.2.1 NMED effects on the Fe XII ratio
To evaluate the effects of NMED, we considered the $\kappa$-distributions, a
well-known class of non-Maxwellian distributions characterized by a
near-Maxwellian core and a power-law high-energy electron tail (see, e.g.,
Livadiotis, 2017; Lazar & Fichtner, in press). We use the standard expression
for $\kappa$-distributions of the second kind (see the discussion in
Dzifčáková et al., 2021), namely
$f_{\kappa}(E)dE=A_{\kappa}\frac{2}{\sqrt{\pi}\left(k_{\mathrm{B}}T\right)^{3/2}}\frac{E^{1/2}dE}{\left(1+\frac{E}{(\kappa-3/2)k_{\mathrm{B}}T}\right)^{\kappa+1}}\,,$ (5)
where $E$ is the electron kinetic energy, $T$ is the temperature,
$k_{\mathrm{B}}$ is the Boltzmann constant, and $A_{\kappa}$ is a constant for
normalization to unity. From the expression above it follows that the
power-law slope of the high-energy tail of a $\kappa$-distribution is
$\kappa+1/2$. The shape of the $\kappa$-distributions as a function of $E$ is
depicted in the top row of Figure 10.
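A numerical sketch of Equation (5) is given below; the distribution is
normalized by direct integration rather than through the analytic constant
$A_{\kappa}$, and the printed tail fractions illustrate the excess of
high-energy electrons at low $\kappa$ that drives the behaviour discussed
next.

```python
import numpy as np

def kappa_shape(e, kappa):
    """Un-normalized f_kappa(E) of the second kind, E in units of k_B*T."""
    return np.sqrt(e) / (1.0 + e / (kappa - 1.5)) ** (kappa + 1.0)

e = np.logspace(-4, 4, 20001)            # log grid resolves core and tail
w = np.gradient(e)                        # trapezoid-like weights

for kappa in (2.0, 3.0, 5.0, 50.0):       # kappa -> infinity ~ Maxwellian
    f = kappa_shape(e, kappa)
    f /= np.sum(f * w)                    # normalize to unit electron number
    tail = np.sum((f * w)[e > 10.0])
    print(f"kappa = {kappa:4.1f}: fraction with E > 10 k_B*T = {tail:.1e}")
```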
The synthetic spectra for Fe XII and Fe XIII were obtained using the KAPPA
database (Dzifčáková et al., 2015, 2021), which allows for calculation of
spectra for $\kappa$-distributions using the same atomic data as CHIANTI
version 10 (Dere et al., 1997; Del Zanna et al., 2021). We calculated the Fe
XII and Fe XIII line intensities for a range of temperatures $T$ and $\kappa$
values, and found that the EIS/IRIS ratio of the Fe XII 192.4 Å / 1349 Å line
intensities offers unprecedented sensitivity to NMED, with the difference
between the Maxwellian and $\kappa$ = 2 cases being about a factor of two,
depending on temperature. This sensitivity to NMED comes from the widely
different wavelengths, and thus excitation energy thresholds, of the two lines
at 192.4 and 1349 Å (cf., Dudík et al., 2014).
The line contribution functions $G(T,\kappa)$ of the two lines, equivalent to
intensities normalized to unity emission measure, are shown in Figure 10. For
low $\kappa$ = 2, the peak formation of the Fe XII 192.4 Å line occurs at
higher $T$, and its intensity decreases. The shift in the temperature of the
peak, as well as about half of the decrease of the peak, are due to the
behaviour of the ionization equilibrium with $\kappa$ (Dzifčáková & Dudík,
2013; Dzifčáková et al., 2021). The decrease in excitation due to the
relatively lower number of electrons in the $\kappa$ = 2 distribution at a few
hundred eV (top panel of Figure 10) also contributes to the decrease of the
peak of the Fe XII 192.4 Å line. Compared to that, the forbidden 1349.4 Å line intensity
increases for low $\kappa$ (bottom row of Figure 10) despite the decrease of
the relative ion abundance. The reason is chiefly that the forbidden line,
whose excitation cross-section decreases with $E$, and which is excited by
electrons at energies of $E$ $\geq$ 9.2 eV, experiences excess excitation by
the relatively higher peak of the $\kappa$ = 2 distribution (top row of Figure
10). The overall result is that for decreasing $\kappa$, the Fe XII 192.4 Å /
1349 Å line intensity ratio decreases (bottom panel of Figure 10).
However, one line ratio sensitive to $\kappa$ is not enough to determine the
$\kappa$ from observations. This is because the distribution function has two
independent parameters, namely $T$ and $\kappa$ (Equation 5), which thus need
to be determined simultaneously (e.g., Dzifčáková & Kulinová, 2010; Mackovjak
et al., 2013; Dudík et al., 2014, 2015; Lörinčík et al., 2020; Dzifčáková et
al., 2021). Therefore, it is advantageous to combine this ratio with a
primarily temperature-sensitive Fe XII / Fe XIII ratio, which allows for de-
coupling of the sensitivities to $\kappa$ and to $T$ (see Figure 11) provided
the plasma is in ionization equilibrium. For the latter ratio, we chose the Fe
XII 192.4 Å line together with the unblended and well-observed Fe XIII 202.0 Å
line, thus minimizing the photon noise uncertainties. The “ratio-ratio”
diagnostic diagram for $T$ and $\kappa$ is then constructed by plotting one
line ratio against the other (see Figure 11). There, the
colored curves denote individual values of $\kappa$, with black being
Maxwellian and red corresponding to $\kappa$ = 2. Individual values of log($T$
[K]) are denoted by gray isotherms intersecting the curves for different
$\kappa$.
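To make the use of such a diagram concrete, the sketch below inverts an
observed pair of ratios on a (log $T$, $\kappa$) grid; the grid values are
invented, smooth placeholders for illustration only, while the real curves are
computed with the KAPPA database.

```python
import numpy as np

logT = np.linspace(6.0, 6.5, 11)
kappa = np.array([2.0, 3.0, 5.0, 10.0, np.inf])   # inf = Maxwellian
kap = kappa[:, None]

# Placeholder grids: R1 = Fe XII 192.4/1349 A, R2 = Fe XII 192.4 A /
# Fe XIII 202.0 A. Invented shapes; NOT the KAPPA/CHIANTI curves.
R1 = 34.0 * (1.0 - 1.0 / kap) - 8.0 * (logT - 6.25)
R2 = 0.5 * 10.0 ** (-2.0 * (logT - 6.25)) * (1.0 + 1.0 / kap)

def invert(r1_obs, r2_obs, s1, s2):
    """Grid point (kappa, log T) minimizing the two-ratio chi-square."""
    chi2 = ((R1 - r1_obs) / s1) ** 2 + ((R2 - r2_obs) / s2) ** 2
    i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
    return kappa[i], logT[j]

# Values representative of the AR box, with assumed uncertainties.
print(invert(r1_obs=20.0, r2_obs=0.5, s1=4.0, s2=0.1))
```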
#### 3.2.2 NMED measurements
The line intensity ratios of Fe XII 192.4 Å / 1349 Å together with the Fe XII
192.4 Å / Fe XIII 202.0 Å observed in the AR and QR boxes are shown in Figure
11 together with their uncertainties, consisting of photon noise uncertainty
$\sigma_{\mathrm{phot}}$ (light blue) as well as the added 20–30% calibration
uncertainty, denoted as $\sigma_{20}$–$\sigma_{30}$ (violet and black crosses,
respectively). This uncertainty is conservative, but is shown nevertheless
because the instruments were not cross-calibrated independently. We note
however that the differences in the observed Fe XII 192.4 Å / 1349 Å ratio are
systematic between the quiet Sun and active region (see Figure 6. That means
the differences between AR and QR shown in the diagnostic diagram in Figure 11
are not a result of purely calibration uncertainty, since the calibration is
the same for both the QR and AR. Note also that we have corrected the Fe XII
192.4 Å line intensity for the optical depth effects, as discussed in Section
3.1.
In the QR box, where the observed ratio is higher, at about 30, the plasma is
consistent with a Maxwellian or only weakly non-Maxwellian distribution within
the uncertainties (left panel of Figure 11). However, in the AR box, the observed
ratio (of about 20) corresponds to NMED with the value of $\kappa$ $<$ 5–10
even considering the calibration uncertainties. The value of $\kappa$ is
possibly even lower, $\kappa$ = 2–3, as indicated by the photon noise
uncertainty (Figure 11).
We note that the theoretical diagnostic diagram consisting of the Fe XII 192.4
Å / 1349 Å and the Fe XII 192.4 Å / Fe XIII 202.0 Å line intensity ratios also
shows some dependence on electron density. However, this dependence
on density is much weaker than that of the Fe XI line ratios previously
on density is much weaker than those of the Fe XI line ratios previously
employed for diagnostics of $\kappa$ by Lörinčík et al. (2020). Given that
electron density can be determined nearly independently of $\kappa$ (see,
e.g., Dudík et al., 2014, and references therein), we are confident that the
current determination of NMED effects is not influenced by uncertainties in
the determination of $N_{\mathrm{e}}$.
The estimated uncertainty in electron density of $\approx$0.1 dex in
log($N_{\mathrm{e}}$ [cm$^{-3}$]) (see Figure 8) leads only to small changes
in the theoretical diagnostic curves in Figure 11 (see Appendix C), meaning
that the result of $\kappa$ $\lesssim$ 5–10 in the AR box holds even when this
uncertainty in the electron density is taken into account.
To illustrate the spatial variations in the NMED, we overplotted the ratios in
all the pixels in the off-limb observation of 2013 October 22, corresponding
to Figure 7, on the NMED ratio-ratio diagrams (see Figure 12). We color-coded
the individual points either by the electron density $N_{\mathrm{e}}$ (left
panel) or the observed Fe XII 1349 Å intensity (right panel). The electron
densities were measured using the Fe XII 186.9 / 192.4 Å density-sensitive
ratio, and were found to range from log($N_{\mathrm{e}}$ [cm$^{-3}$]) = 8.6 to
9.4. We note that the highest values are found in the active region where the
Fe XII 1349 Å line is brightest.
Figure 12 shows that the spread in the location of the observed Fe XII 192.4 Å
/ 1349 Å ratio is well matched by the theoretical curves. In agreement with
Figure 7, the larger Fe XII 1349 Å intensities correspond to locations that
are more non-Maxwellian. Finally, the Fe XII 192.4 Å / Fe XIII 202.0 Å ratio,
plotted on the horizontal axis, which is dominantly sensitive to $T$,
indicates that the plasma is nearly isothermal, with all the points being
clustered close to the log($T$ [K]) = 6.25 isotherm.
We therefore conclude that the NMED effects provide a possible explanation for
the observed anomalously low Fe XII 192.4 Å/1349 Å line intensity ratios.
### 3.3 Plasma multithermality
The diagnostics of $\kappa$ in the previous section assumed that the plasma
can be described by two parameters, $\kappa$ and $T$. However, as we have seen
earlier in Section 3.1, if interpreted as Maxwellian, the observations
indicate the presence of some degree of multi-thermality (see Figure 9).
Generally, if the plasma is multi-thermal, the differential emission measure
(DEM) can be a function of $\kappa$ (Mackovjak et al., 2014; Dudík et al.,
2015; Lörinčík et al., 2020). This has consequences for the diagnostics of
$\kappa$, as these DEM${}_{\kappa}(T)$ could affect the predicted line
intensities and their ratios that need to be compared with the observed ones.
In fact, once the synthetic line intensities and their ratios are obtained for
the respective DEM${}_{\kappa}(T)$, each of the ratio-ratio diagnostic curves
in Figure 11 collapses to a single point representing the two synthetic line
intensity ratios predicted by the respective DEM$_{\kappa}$.
In order to take the possible plasma multithermality into account, we
performed the DEM${}_{\kappa}(T)$ inversions in the AR box for each $\kappa$
using the same method as in Section 3.1. In doing so, we used the respective
line contribution functions $G(T,\kappa)$ as inputs. We note that this DEM
analysis for variable $\kappa$ was done only for the AR box, as the quiet-Sun
region (QR) intensities are already consistent with Maxwellian.
The DEM$_{\kappa}$-predicted points for each $\kappa$ are shown in the right
panel of Figure 11 as a series of colored asterisks, where the color represents the value
of $\kappa$. It is seen that each point is close to the respective curve for
the same $\kappa$, as expected. This analysis confirms that the Fe XII
intensities in the active region can be explained by non-Maxwellian
$\kappa$-distributions, as the points for $\kappa$ = 2–5 are a relatively
close match to the observed intensities, while the Maxwellian point is still
outside the error bars even if the calibration uncertainty is conservatively
assumed to be 30%.
### 3.4 Time-dependent ionization (TDI)
In the presence of heating and cooling events occurring on short timescales,
the possible effects of time-dependent ionization (TDI) on our diagnostics
should also be considered. A full treatment of TDI requires detailed modelling
of dynamic heating events in ARs, including its effect on both ion charge
state distribution and the relative level population. As such, it is outside
the scope of this work. Nevertheless we refer the reader to existing
literature on this subject as well as theoretical arguments which indicate (to
demonstrate) that TDI effects are likely not significant enough to explain the
observed discrepancies in the intensity ratio of the two Fe XII lines. For
instance, a relevant recent work is that of Olluri et al. (2015), who
presented simulations of a quiet solar region from the three-dimensional
magnetohydrodynamic (MHD) code Bifrost (Gudiksen et al., 2011) including
non-equilibrium ionization, showing that the Fe XII ion was found to be close
to its ionization equilibrium. Although a quiet Sun case might not be entirely
applicable to our observations (the Fe XII emission in a quiet region will be
primarily emitted in the corona, whereas in a bright AR it will mostly be
confined to the TR), we note that in the same simulation the TR ions were
significantly out of equilibrium (see Figure 15 in Olluri et al., 2015).
Another example comes from the simulations of nanoflare-heated coronal loops
by Bradshaw & Klimchuk (2011), where the “warm” emission, which includes Fe
XII and Fe XIII, was mostly close to equilibrium, even if the hotter emission
was significantly out of equilibrium.
In the following paragraphs we also discuss the possible effects of TDI on
both (1) the Fe XII relative level population and (2) the ion charge state
distribution.
(1) TDI effects could lead to changes in the relative level population of Fe
XII, and thus changes in the 192.4 Å / 1349 Å line intensity ratio. The EIS
192.4 Å line is an allowed transition with a very short decay time, of the
order of picoseconds. On the other hand, the IRIS 1349 Å forbidden line is a
decay from the 2P1/2 state, one of the metastable levels in the ground
configuration, which have typical decay times that are much longer. The
lifetime of the 2P1/2 level is only 4 milliseconds, so timescales this short
would be needed to alter significantly the intensity of the IRIS line,
compared to the equilibrium calculations. However, unlike the upper state of
the EIS 192.4 Å line, which is solely populated from the ground state, the
population of the $^2$P$_{1/2}$ is more complex. To assess it, we have looked
at the dominant processes, calculated in equilibrium at the temperatures and
densities of the active regions we have observed. We find that about half of
the population of the $^2$P$_{1/2}$ is due to cascading from higher states,
most of which are connected to the ground state, $^4$S$_{3/2}$. Nearly 30% of
its population comes from the ground state, and nearly 20% from the
$^2$D$_{5/2}$ state, which has a longer lifetime of 0.4 s. In turn, about 90%
of the $^2$D$_{5/2}$ population comes from cascading from high-lying states,
which again are mostly connected to the ground state.
Therefore, non-equilibrium effects with timescales shorter than 0.4 s would
affect the population of the $^2$D$_{5/2}$ state, but would in turn change the
intensity of the IRIS line only by a small amount. Overall, the ratio of the IRIS and EIS
lines would be affected by at most 20% if the timescales are shorter than 0.4
s.
(2) TDI effects could affect our observed ratios through the ion charge
distributions. The timescales for ion charge distributions to reach
equilibrium are considerably longer in the solar corona. For example, at
coronal densities, Fe XII has an ionization equilibration timescale of the
order of $10^{2}$ s (Smith & Hughes, 2010), which is apt to be prolonged if there
are flows in the plasma that lead to mixing of plasma from regions of
different temperatures. Therefore, the TDI effects could affect the ionisation
temperatures we have estimated. We recall that we estimated the temperature
(via DEM analysis or line ratios) using lines from successive ionization
stages of iron. In particular, we used the Fe XII / Fe XIII line intensity
ratio for simultaneous diagnostics of $T$ and $\kappa$ (see Sect. 3.2). For
the measured Fe XII 192.4 Å / 1349 Å ratio in the AR box to be consistent with
Maxwellian, the complementary Fe XII 192.4 Å / Fe XIII 202.0 Å ratio would
need to be different by about a factor of 10 (see the right panel of Figure
11). This means that for the plasma to be Maxwellian, the Fe XII / Fe XIII
ratio should be at least 5 instead of the measured value of 0.5. Therefore, to
explain the observations, the TDI effects would have to lead to departures of
the Fe XII / Fe XIII ratio by at least about an order of magnitude (cf.
Figure 11), which we deem unlikely, as the two ions are typically formed at
similar temperatures and regions even in cases where the heating is transient
and strong (see, e.g., Figures 2–3 of Reale & Orlando, 2008).
Based on the considerations above, we suggest that TDI alone cannot easily
explain the observed Fe XII ratios in our AR observations, although future
numerical investigation will be necessary to rule it out completely.
## 4 Discussion
As described in Section 3, assuming that NMED are present offers by itself a
satisfactory explanation for the departures in the Fe XII 192.4 Å / 1349 Å
line intensity ratio in the observed active regions. We now discuss the
implications this finding entails, with emphasis on the timescales involved.
These include:
* the timescale for equilibration of free electrons to a Maxwellian fluid,
* timescales for spontaneous emission,
* timescales for TDI effects,
* typical timescales for evolution of the AR emission,
* spectrometer exposure times,
* the possible coronal heating frequency.
As the timescales for spontaneous emission and TDI effects were already
discussed in Section 3.4, we now examine the remaining ones, as well as their
possible interplay.
### 4.1 Timescales for maintaining NMED
Our analysis of the NMED effects was based on the $\kappa$-distributions
(Section 3.2), which have only one extra parameter, $\kappa$, and are assumed
to be time-independent. However, once accelerated and non-Maxwellian, the bulk
of the free electrons tends to thermalize due to collisions. Meanwhile, the
same free electrons drive ionization, recombination, and excitation processes
necessary for creation of the observed spectra. The timescale
$\tau_{\mathrm{e}}$ for equilibration of the free electrons to a Maxwellian
electron fluid due to both electron–electron and electron–ion collisions is
given by Equation (3.50) of Goedbloed & Poedts (2004), which in cgs units is:
$\tau_{\mathrm{e}}=\frac{1.09\times 10^{10}}{\mathrm{ln}\Lambda}\frac{\tilde{T}^{3/2}}{Z\,N_{\mathrm{e}}}\,,$ (6)
where ln$\Lambda$ is the Coulomb logarithm, $\tilde{T}$ is the electron
temperature in keV units, and $Z$ is the proton number. Taking $Z$ = 1
(considering that most of the ions in the solar corona are hydrogen ions),
ln$\Lambda$ $\approx$ 10, and using the measured values of
log($N_{\mathrm{e}}$ [cm$^{-3}$]) = 9.1 and log($T$ [K]) = 6.25 (corresponding to
$\tilde{T}$ of about 0.22 keV), we obtain $\tau_{\mathrm{e}}$ $\approx$ 0.1 s.
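Evaluating Equation (6) directly with these values:

```python
ln_lambda = 10.0        # Coulomb logarithm
Z         = 1.0         # proton number (hydrogen plasma)
T_keV     = 0.22        # k_B*T for log T [K] = 6.25
n_e       = 10 ** 9.1   # electron density [cm^-3]

tau_e = 1.09e10 / ln_lambda * T_keV ** 1.5 / (Z * n_e)
print(f"tau_e ~ {tau_e:.2f} s")   # ~0.1 s, as quoted above
```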
We note that the above classical formula holds for the bulk of the electron
distribution function, as the electrons in the high-energy tail are
progressively less collisional, with the collision frequency decreasing with
kinetic energy $E$ as $E^{-3/2}$. In addition, the acceleration of
progressively higher-$E$ electrons can also take longer (see Bian et al.,
2014), although the details will depend on the acceleration mechanism itself,
which, if indeed operating in the solar corona, is as yet unknown. If the
acceleration occurs due to turbulence, as derived by Bian et al. (2014), then
the parameter $\kappa^{*}$ = $\kappa+1$ describes the competing timescales of
electron acceleration and collisional timescales, $\kappa^{*}$ =
$\tau_{\mathrm{acc}}/2\tau_{\mathrm{coll}}$ (see Equation (14) of Bian et al.,
2014). It follows that if the measured $\kappa$ values as low as 2–3 in active
regions are correct, the electrons must be continuously accelerated.
Otherwise, we would not be able to see changes in the measured Fe XII 192.4 Å
/ 1349 Å ratio due to NMED effects, as the electrons would return to
equilibrium Maxwellian distribution within a fraction of the exposure times
required for our remote-sensing spectroscopic measurements.
In addition, it should be noted that the timescales for spontaneous emission
in Fe XII (discussed in Section 3.4) are much shorter, by orders of magnitude,
than the electron equilibration timescale $\tau_{\mathrm{e}}$ derived above.
Therefore, the level population of Fe XII reflects changes in the electron
distribution much faster, and is likely in equilibrium even if the electron
distribution undergoes evolution.
### 4.2 Implications for coronal heating
It is interesting to consider the implication of continuous re-acceleration of
non-Maxwellian electrons (Section 4.1) in terms of coronal heating. We
speculate that if continuous re-acceleration is connected to the frequency of
the “nanoflare” heating of the solar corona, our observations may suggest
novel constraints on nanoflare heating models. We note that the current
leading nanoflare or nanoflare train models (see, for example, Cargill, 2014;
Barnes et al., 2016; Viall & Klimchuk, 2017; Reva et al., 2018; Warren et al.,
2020, and references therein) typically consider heating durations of the
order of tens of seconds, with separations between individual heating events
as large as of the order of $10^{2}$–$10^{3}$ s. In addition, recent observations of moss
variability in ARs with IRIS suggest that heating durations of the order of
tens of seconds are common (Testa et al., 2013, 2014, 2020).
Our implication that the re-acceleration occurs continuously can be reconciled
with these works if the heating occurs in short individual bursts (so that
electrons are re-accelerated), while the duration of the envelope of the
heating can be as long as $10^{1}$–$10^{2}$ s. One mechanism that behaves this way is
slipping reconnection, which is the general mode of reconnection in three
dimensions (see, e.g. Janvier et al., 2013; Dudík et al., 2014). During
slipping reconnection, individual field lines reconnect many times,
sequentially with different field lines, while their footpoints slip across
the solar surface. The slipping reconnection in many small-scale quasi-
separatrix layers has been shown to be a viable coronal heating mechanism
(Yang et al., 2018) and is indeed sometimes observed to occur in moss regions
(Testa et al., 2013). However, other mechanisms can also lead to many
individual heating events occurring under longer-duration conditions of energy
release in a coronal loop. One can imagine that, for example, wave-particle
resonance interactions would behave much the same way for as long as the
larger-scale wave lasts. Such speculations are, however, outside the scope of
the present work, and we do not engage in them further. Nevertheless, we do note
that if the scenario of frequent re-acceleration events occurring within a
longer-duration heating envelope is correct, the behavior of emission within
individual emitting strands (as well as their collective emission) should be
modeled in detail, as there are many timescales involved, as mentioned at the
beginning of this section, including the timescale for equilibration of the
relative level population, TDI effects, and the NMED effects.
## 5 Summary
We have investigated coordinated Hinode/EIS and IRIS observations of Fe XII
lines. While EIS observes the allowed lines in the EUV part of the spectrum,
IRIS observes the forbidden line at 1349 Å. We find that the ratio of these
two lines decreases strongly with increasing intensity of the forbidden
1349 Å line in active regions. In the quiet Sun, the Fe XII 192.4 Å / 1349 Å
ratio is about 30–40, while in active regions the ratio decreases to values
below 20, even reaching values as low as 10 in some cases. These measurements
were accompanied by determination of the
temperature and emission measure using lines of Fe IX–Fe XVI, as well as
electron densities using density-sensitive Fe XII and Fe XIII lines from EIS.
Using synthetic spectra obtained from CHIANTI version 10, we investigated
whether the behaviour of the Fe XII 192.4 Å / 1349 Å ratio could be due to its
dependence on electron temperature and density. Especially in active regions,
we found significant and systematic discrepancies between the observed 192.4 /
1349 Å ratio and the predictions based on the synthetic spectra obtained with
CHIANTI. In the AR box that we selected for detailed analysis, we
measured values of log($T$ [K]) = 6.25 and log($N_{\mathrm{e}}$ [cm$^{-3}$]) = 9.1,
resulting in a predicted Fe XII 192.4 / 1349 Å ratio of about 30, while the
observed value is about 20.
We reviewed the potential causes of this discrepancy, including:
1. opacity effects on the Fe XII EUV lines,
2. the presence of cool plasma along the line of sight,
3. plasma multithermality,
4. the dependence of the observed ratio on non-Maxwellian electron distributions (NMED),
5. effects due to time-dependent ionization (TDI).
Opacity in the Fe XII lines was detected as an increase in the width of the
EUV lines, especially the 193.5 and 195.1 Å lines (see Del Zanna et al.,
2019). Being the weakest of the three transitions, the 192.4 Å line is the
least affected.
Based upon the measured temperatures and emission measures, we estimated that
the optical depth in the 192.4 Å line is about 0.32 (and 0.96 for the 195.1 Å
line), leading to suppression of the Fe XII 192.4 Å line by about 10%. This
effect was therefore deemed insufficient to explain the discrepancies in the
192.4 Å / 1349 Å ratio. We subsequently corrected the observed 192.4 Å line
intensity accordingly to account for self-absorption.
The relative absence of cool material along the line of sight was checked
using the AIA 193 Å observations and the H$\alpha$ observations from the
Kanzelhöhe Solar Observatory. We note that the two wavelengths have similar
optical thickness (Anzer & Heinzel, 2005) and that the absorption near 195 Å
occurs due to the H
I, He I, and He II continua. Our QR and AR boxes were also chosen to be above
the H$\alpha$ spicules, and in regions devoid of prominence material, so that
the absorption by the H and He continua was deemed negligible.
We used the $\kappa$-distributions to study the influence of the NMED on
the line ratio. Using the updated KAPPA database (Dzifčáková et al., 2021)
corresponding to CHIANTI version 10, it was found that the Fe XII 192.4 Å /
1349 Å ratio decreases with increasing number of high-energy electrons (i.e.,
lower $\kappa$). The observed Fe XII ratio of about 20 in the AR can be
explained by NMED with $\kappa$ as low as 2–3, although calibration
uncertainties are significant. In addition, the spatial distribution of the
ratio matches well the theoretical diagnostic curves for NMED, where the
lowest observed ratios correspond to strongly NMED plasmas. These theoretical
curves for NMED are only weakly dependent on electron density and show strong
sensitivity to $\kappa$, making the Fe XII ratio one of the best diagnostic
options for the NMED. In addition, plasma multithermality was ruled out as the
cause of the departure of the Fe XII ratio in active regions, since any DEM
effects would only exacerbate the discrepancy.
Finally, based on theoretical arguments as well as existing literature, we
concluded that TDI effects alone are likely insufficient to explain the
observed discrepancies in the Fe XII ratio, although they cannot be ruled out.
Our measurements employed a new EIS calibration, which will be described in
detail in a separate publication. The uncertainty inherent in the calibration
limits the determination of $\kappa$ from our measurements. Nevertheless, the
off-limb quiet Sun and active region are observed simultaneously, and the new
calibration shows that the ratio in the quiet Sun is consistent with
Maxwellian electrons, in accordance with independent previous measurements
from EIS (Lörinčík et al., 2020), but also from X-ray instruments (Kuhar et
al., 2018), which do not show the presence of accelerated particles in quiet
Sun regions. This indicates that the relative EIS/IRIS calibration is likely
correct.
For the reasons listed above, we are left with NMED as the most likely,
simplest cause of the anomalously low Fe XII 192.4 Å / 1349.4 Å ratio in the
observed active regions.
Using Equation (3.50) of Goedbloed & Poedts (2004), we calculated the
timescale $\tau_{\mathrm{e}}$ for equilibration of the free electrons to a
Maxwellian electron fluid to be $\tau_{\mathrm{e}}\approx 0.1$ s for the core
of the distribution, using the measured values of temperature and density.
Given that the Fe XII lines were observed with exposure times of
tens of seconds, this suggests that the electrons must be continuously
accelerated or re-accelerated over these timescales; otherwise, they would
return to an equilibrium Maxwellian distribution within a fraction of a
second. Our
observations could thus provide interesting new constraints on the nanoflare-
based coronal heating models.
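A back-of-the-envelope version of this estimate can be obtained from the standard Spitzer electron self-collision time (NRL Plasma Formulary), which should agree with Eq. (3.50) of Goedbloed & Poedts (2004) up to factors of order unity; the plasma parameters in the sketch below are illustrative coronal values, not the measured ones.

```python
# Order-of-magnitude sketch of the electron equilibration time, using the
# Spitzer electron self-collision frequency nu_e = 2.91e-6 n_e lnLambda
# T_e^(-3/2) s^-1 (T_e in eV, n_e in cm^-3).  n_e and T below are assumed.

K_PER_EV = 11604.5  # Kelvin per electron-volt

def tau_e(n_e_cm3, T_K, ln_lambda=20.0):
    """Electron-electron collision (equilibration) time in seconds."""
    T_eV = T_K / K_PER_EV
    nu = 2.91e-6 * n_e_cm3 * ln_lambda * T_eV**-1.5  # collisions per second
    return 1.0 / nu

print(f"tau_e = {tau_e(n_e_cm3=3e8, T_K=2.0e6):.2f} s")  # ~0.1 s
```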
Observations with well-calibrated instruments in the future could use these or
similar allowed-to-forbidden coronal line ratios to diagnose the presence of
NMED. One attractive option is EUVST, as it will observe the same lines as the
EIS SW channel, and UV lines with a high sensitivity, hopefully measuring the
diagnostic ratios with a cadence of a fraction of a second.
GDZ and HEM acknowledge support from STFC (UK) via the consolidated grants to
the atomic astrophysics group (AAG) at DAMTP, University of Cambridge
(ST/P000665/1 and ST/T000481/1). VP was supported by NASA under contract
NNG09FA40C (IRIS). PT was supported by contract 8100002705 from Lockheed-
Martin to SAO, NASA contract NNM07AB07C to the Smithsonian Astrophysical
Observatory, and NASA grant 80NSSC20K1272. J.D. and E.Dz. acknowledge support
from Grants No. 20-07908S and 22-07155S of the Grant Agency of the Czech
Republic, as well as institutional support RVO:67985815 from the Czech Academy
of Sciences. GDZ, JD, and HEM also acknowledge support from the Royal Society
via the Newton International Alumni programme. We thank the anonymous referee
for careful reading and useful comments. IRIS is a NASA small explorer mission
developed and operated by LMSAL with mission operations executed at the NASA
Ames Research Center and major contributions to downlink communications funded by
ESA and the Norwegian Space Centre. Hinode is a Japanese mission developed and
launched by ISAS/JAXA, with NAOJ as a domestic partner and NASA and STFC (UK)
as international partners. It is operated by these agencies in cooperation
with the ESA and NSC (Norway). SDO data were obtained courtesy of NASA/SDO and
the AIA and HMI science teams. H$\alpha$ data were provided by the Kanzelhöhe
Observatory, University of Graz, Austria.
## References
* Anzer & Heinzel (2005) Anzer, U., & Heinzel, P. 2005, ApJ, 622, 714, doi: 10.1086/427817
* Barnes et al. (2016) Barnes, W. T., Cargill, P. J., & Bradshaw, S. J. 2016, ApJ, 833, 217, doi: 10.3847/1538-4357/833/2/217
* Bian et al. (2014) Bian, N. H., Emslie, A. G., Stackhouse, D. J., & Kontar, E. P. 2014, ApJ, 796, 142, doi: 10.1088/0004-637X/796/2/142
* Bradshaw & Klimchuk (2011) Bradshaw, S. J., & Klimchuk, J. A. 2011, ApJS, 194, 26, doi: 10.1088/0067-0049/194/2/26
* Cargill (2014) Cargill, P. J. 2014, ApJ, 784, 49, doi: 10.1088/0004-637X/784/1/49
* Culhane et al. (2007) Culhane, J. L., Harra, L. K., James, A. M., et al. 2007, Sol. Phys., 243, 19, doi: 10.1007/s01007-007-0293-1
* De Pontieu et al. (2009) De Pontieu, B., Hansteen, V. H., McIntosh, S. W., & Patsourakos, S. 2009, ApJ, 702, 1016, doi: 10.1088/0004-637X/702/2/1016
* De Pontieu et al. (2014) De Pontieu, B., Title, A. M., Lemen, J. R., et al. 2014, Sol. Phys., 289, 2733, doi: 10.1007/s11207-014-0485-y
* Del Zanna (2010) Del Zanna, G. 2010, A&A, 514, A41, doi: 10.1051/0004-6361/201014063
* Del Zanna (2013a) —. 2013a, A&A, 555, A47, doi: 10.1051/0004-6361/201220810
* Del Zanna (2013b) —. 2013b, A&A, 558, A73, doi: 10.1051/0004-6361/201321653
* Del Zanna (2019) Del Zanna, G. 2019, A&A, 624, A36, doi: 10.1051/0004-6361/201834842
* Del Zanna & Andretta (2015) Del Zanna, G., & Andretta, V. 2015, A&A, 584, A29, doi: 10.1051/0004-6361/201526804
* Del Zanna & DeLuca (2018) Del Zanna, G., & DeLuca, E. E. 2018, ApJ, 852, 52, doi: 10.3847/1538-4357/aa9edf
* Del Zanna et al. (2021) Del Zanna, G., Dere, K. P., Young, P. R., & Landi, E. 2021, ApJ, 909, 38, doi: 10.3847/1538-4357/abd8ce
* Del Zanna et al. (2015) Del Zanna, G., Dere, K. P., Young, P. R., Landi, E., & Mason, H. E. 2015, A&A, 582, A56, doi: 10.1051/0004-6361/201526827
* Del Zanna et al. (2019) Del Zanna, G., Gupta, G. R., & Mason, H. E. 2019, A&A, 631, A163, doi: 10.1051/0004-6361/201834625
* Del Zanna & Mason (2005) Del Zanna, G., & Mason, H. E. 2005, A&A, 433, 731, doi: 10.1051/0004-6361:20041848
* Del Zanna & Mason (2018) Del Zanna, G., & Mason, H. E. 2018, Living Reviews in Solar Physics, 15, 5
* Del Zanna et al. (2011) Del Zanna, G., O’Dwyer, B., & Mason, H. E. 2011, A&A, 535, A46, doi: 10.1051/0004-6361/201117470
* Del Zanna et al. (2012) Del Zanna, G., Storey, P. J., Badnell, N. R., & Mason, H. E. 2012, A&A, 543, A139, doi: 10.1051/0004-6361/201219023
* Dere et al. (1997) Dere, K. P., Landi, E., Mason, H. E., Monsignori Fossi, B. C., & Young, P. R. 1997, A&AS, 125, 149, doi: 10.1051/aas:1997368
* Dudík et al. (2014) Dudík, J., Del Zanna, G., Mason, H. E., & Dzifčáková, E. 2014, A&A, 570, A124, doi: 10.1051/0004-6361/201424124
* Dudík et al. (2021) Dudík, J., Del Zanna, G., Rybák, J., et al. 2021, ApJ, 906, 118, doi: 10.3847/1538-4357/abcd91
* Dudík et al. (2014) Dudík, J., Janvier, M., Aulanier, G., et al. 2014, ApJ, 784, 144, doi: 10.1088/0004-637X/784/2/144
* Dudík et al. (2017) Dudík, J., Polito, V., Dzifčáková, E., Del Zanna, G., & Testa, P. 2017, ApJ, 842, 19, doi: 10.3847/1538-4357/aa71a8
* Dudík et al. (2015) Dudík, J., Mackovjak, Š., Dzifčáková, E., et al. 2015, ApJ, 807, 123, doi: 10.1088/0004-637X/807/2/123
* Dzifčáková & Dudík (2013) Dzifčáková, E., & Dudík, J. 2013, ApJS, 206, 6, doi: 10.1088/0067-0049/206/1/6
* Dzifčáková et al. (2015) Dzifčáková, E., Dudík, J., Kotrč, P., Fárník, F., & Zemanová, A. 2015, ApJS, 217, 14, doi: 10.1088/0067-0049/217/1/14
* Dzifčáková et al. (2021) Dzifčáková, E., Dudík, J., Zemanová, A., Lörinčík, J., & Karlický, M. 2021, ApJS, submitted
* Dzifčáková & Kulinová (2010) Dzifčáková, E., & Kulinová, A. 2010, Sol. Phys., 263, 25, doi: 10.1007/s11207-010-9539-y
* Fletcher & De Pontieu (1999) Fletcher, L., & De Pontieu, B. 1999, ApJ, 520, L135, doi: 10.1086/312157
* Goedbloed & Poedts (2004) Goedbloed, J. P. H., & Poedts, S. 2004, Principles of Magnetohydrodynamics
* Gudiksen et al. (2011) Gudiksen, B. V., Carlsson, M., Hansteen, V. H., et al. 2011, A&A, 531, A154, doi: 10.1051/0004-6361/201116520
* Janvier et al. (2013) Janvier, M., Aulanier, G., Pariat, E., & Démoulin, P. 2013, A&A, 555, A77, doi: 10.1051/0004-6361/201321164
* Kastner & Kastner (1990) Kastner, S. O., & Kastner, R. E. 1990, J. Quant. Spec. Radiat. Transf., 44, 275, doi: 10.1016/0022-4073(90)90033-3
* Kuhar et al. (2018) Kuhar, M., Krucker, S., Glesener, L., et al. 2018, ApJ, 856, L32, doi: 10.3847/2041-8213/aab889
* Lazar & Fichtner (in press) Lazar, M., & Fichtner, H. in press, Kappa Distributions: From Observational Evidences via Controversial Predictions to a Consistent Theory of Suprathermal Space Plasmas (Springer Nature Switzerland AG)
* Lemen et al. (2012) Lemen, J. R., Title, A. M., Akin, D. J., et al. 2012, Sol. Phys., 275, 17, doi: 10.1007/s11207-011-9776-8
* Livadiotis (2017) Livadiotis, G. 2017, Kappa Distributions: Theory and Applications in Plasmas, 1st Edition (Elsevier)
* Lörinčík et al. (2020) Lörinčík, J., Dudík, J., Del Zanna, G., Dzifčáková, E., & Mason, H. E. 2020, ApJ, 893, 34, doi: 10.3847/1538-4357/ab8010
* Mackovjak et al. (2013) Mackovjak, Š., Dzifčáková, E., & Dudík, J. 2013, Sol. Phys., 282, 263, doi: 10.1007/s11207-012-0136-0
* Mackovjak et al. (2014) —. 2014, A&A, 564, A130, doi: 10.1051/0004-6361/201323054
* Martínez-Sykora et al. (2011) Martínez-Sykora, J., De Pontieu, B., Testa, P., & Hansteen, V. 2011, ApJ, 743, 23, doi: 10.1088/0004-637X/743/1/23
* Olluri et al. (2015) Olluri, K., Gudiksen, B. V., Hansteen, V. H., & De Pontieu, B. 2015, ApJ, 802, 5, doi: 10.1088/0004-637X/802/1/5
* Pesnell et al. (2012) Pesnell, W. D., Thompson, B. J., & Chamberlin, P. C. 2012, Sol. Phys., 275, 3, doi: 10.1007/s11207-011-9841-3
* Reale & Orlando (2008) Reale, F., & Orlando, S. 2008, ApJ, 684, 715, doi: 10.1086/590338
* Reva et al. (2018) Reva, A., Ulyanov, A., Kirichenko, A., Bogachev, S., & Kuzin, S. 2018, Sol. Phys., 293, 140, doi: 10.1007/s11207-018-1363-9
* Smith & Hughes (2010) Smith, R. K., & Hughes, J. P. 2010, ApJ, 718, 583, doi: 10.1088/0004-637X/718/1/583
* Storey et al. (2005) Storey, P. J., Del Zanna, G., Mason, H. E., & Zeippen, C. J. 2005, A&A, 433, 717, doi: 10.1051/0004-6361:20041771
* Testa et al. (2016) Testa, P., De Pontieu, B., & Hansteen, V. 2016, ApJ, 827, 99, doi: 10.3847/0004-637X/827/2/99
* Testa et al. (2020) Testa, P., Polito, V., & De Pontieu, B. 2020, ApJ, 889, 124, doi: 10.3847/1538-4357/ab63cf
* Testa et al. (2013) Testa, P., De Pontieu, B., Martínez-Sykora, J., et al. 2013, ApJ, 770, L1, doi: 10.1088/2041-8205/770/1/L1
* Testa et al. (2014) Testa, P., De Pontieu, B., Allred, J., et al. 2014, Science, 346, 1255724, doi: 10.1126/science.1255724
* Tripathi et al. (2008) Tripathi, D., Mason, H. E., Young, P. R., & Del Zanna, G. 2008, A&A, 481, L53, doi: 10.1051/0004-6361:20079034
* Viall & Klimchuk (2017) Viall, N. M., & Klimchuk, J. A. 2017, ApJ, 842, 108, doi: 10.3847/1538-4357/aa7137
* Wang et al. (1998) Wang, H., Chae, J., Gurman, J. B., & Kucera, T. A. 1998, Sol. Phys., 183, 91, doi: 10.1023/A:1005010504873
* Warren et al. (2014) Warren, H. P., Ugarte-Urra, I., & Landi, E. 2014, ApJS, 213, 11, doi: 10.1088/0067-0049/213/1/11
* Warren et al. (2020) Warren, H. P., Reep, J. W., Crump, N. A., et al. 2020, ApJ, 896, 51, doi: 10.3847/1538-4357/ab917c
* Wilhelm et al. (1997) Wilhelm, K., Lemaire, P., Curdt, W., et al. 1997, Sol. Phys., 170, 75, doi: 10.1023/A:1004923511980
* Wülser et al. (2018) Wülser, J. P., Jaeggli, S., De Pontieu, B., et al. 2018, Sol. Phys., 293, 149, doi: 10.1007/s11207-018-1364-8
* Yang et al. (2018) Yang, K. E., Longcope, D. W., Ding, M. D., & Guo, Y. 2018, Nature Communications, 9, 692, doi: 10.1038/s41467-018-03056-8
## Appendix A Blending of the IRIS Fe XII line
As the IRIS spectral range is rich in narrow, often unidentified lines of
photospheric/chromospheric origin, as well as molecular lines, we analysed one
full spectral IRIS atlas taken during a flare and measured the intensities of
all the known and unknown cool lines in the spectra. The observation was
obtained on 2013-10-11 with a long dense IRIS raster that started at 23:55 UT.
Fig. 13 shows superimposed two spectra, one obtained on a moss region (pixel
values 240:260 in solar X and 130:160 along the slit), and the other one at
pixel coordinates 73 (solar X) and 192 (solar Y) on a flare ribbon, reduced by
a factor of 20. It is clear that in the moss region the 1349.4 Å line is due
to Fe XII, as it has the expected width. The spectrum in the ribbon is instead
solely due to an unidentified narrow cool line, with a wavelength coincident
with that of the Fe XII line. The spatial distribution of this unidentified
line is quite different from that of most known lines, such as the Cl I
1351.66 Å line. It is similar to that of the strong C I lines at 1357.13 and
1354.28 Å, but is
actually closest in morphology to another unidentified line at 1350.69 Å. The
ratios of the known C I lines are relatively constant, so a possible way to
estimate the contribution of the unidentified line at 1349.4 Å is to consider
the observed ratios with the C I lines in the ribbons. For example, the ratio
(in data numbers) with the C I 1354.28 Å line ranges between 0.02 and 0.07.
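As a simple illustration of this bound (hypothetical intensities; only the ratio range is taken from the text):

```python
# Sketch: bound the cool-line contamination of the 1349.4 A feature from
# the observed ribbon ratio with the C I 1354.28 A line.  I_CI is a
# hypothetical measured intensity; the ratio range 0.02-0.07 is quoted above.
I_CI = 250.0  # C I 1354.28 A intensity in data numbers (assumed)
for r in (0.02, 0.07):
    print(f"estimated cool-line contribution at 1349.4 A: {r * I_CI:.1f} DN")
```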
Figure 13: IRIS FUV1 spectra around the Fe XII 1349.4 Å line.
## Appendix B EIS radiometric calibration
Briefly, a DEM analysis was applied to off-limb quiet Sun observations close
in time to the observations discussed here, to obtain the relative EIS
calibration using the strongest coronal lines. The advantage is that in these
cases the plasma is nearly isothermal and isodense, and possible issues
related to the presence of NMED are avoided. Also, this removes
blending with cool lines from a few coronal lines. This is an extension of the
method used by Warren et al. (2014), where strict isothermality was assumed.
The established relative calibration for the short-wavelength (SW) channel was
then used to calibrate the EIS spectra, for a direct cross-calibration with
simultaneous SDO AIA 193 Å data, taking into account the different spatio-
temporal resolutions, basically following the methods described in Del Zanna
et al. (2011); Del Zanna (2013b). Good agreement (to within a few percent)
between the AIA DN/s predicted from EIS and those observed by AIA (re-scaled
to the lower spatio-temporal resolution of EIS) is imposed, noting that a
typical spatial scatter around 10% is normally found. We used the modelled AIA
degradation as available in SolarSoft, with the option of the normalisation
with SDO EVE. We also checked this AIA calibration against simultaneous SDO
EVE observations, using the latest EVE calibration, which in itself relies on
a comparison with a few sounding rocket flights and adjustments using line
ratios, following the methods adopted for EIS (Del Zanna, 2013a). In turn, the
prototype EVE flown on the sounding rocket flights is regularly calibrated on
the ground. The absolute calibration of the EVE prototype is deemed accurate
to within 20%, although detailed comparisons carried out on the first flight
showed larger discrepancies (40%) for some of the strongest lines (Del Zanna &
Andretta, 2015; Del Zanna, 2019). The overall accuracy of the EIS absolute
calibration adopted here, considering all the comparisons, could be estimated
to be in the range 20–30%. Such a reliable calibration in the EUV (a
notoriously difficult problem) could only be established for EIS data in 2013
and 2014, as the failure of EVE MEGS-A in 2014 meant that no direct AIA/EVE
cross-calibrations could subsequently be carried out. After 2014, the only useful cross-
calibration EVE sounding rocket was flown in 2018. The results of the EIS
improved calibration will be published in Del Zanna and Warren (2022, in
preparation).
Figure 14: Dependence of the NMED diagnostic diagrams on log($N_{\mathrm{e}}$
[cm-3]). The diagnostic curves shown by full lines and the observed ratios
correspond to the right panel of Figure 11. Dashed and dot-dashed lines denote
changes in electron density equal to 0.1 dex.
## Appendix C Electron density uncertainties and the diagnostics of $\kappa$
The measurements of $\kappa$ performed in Section 3.2.2 required prior
determination of the electron density. However, the electron density is
itself subject to the uncertainties of the measured line intensities,
especially the photon noise. Note that we do not consider the calibration
uncertainty, since all
lines used for measurements of $N_{\mathrm{e}}$ are observed by the same
channel of EIS.
The photon noise uncertainties are shown by gray stripes in the emissivity
ratio plots in Figure 8. It is seen that the photon noise introduces
uncertainty into the measurements of log($N_{\mathrm{e}}$ [cm-3]) of about 0.1
dex for Fe XIII, and slightly larger, $\approx$0.15 dex, for the Fe XII 186.9
Å and 192.4 Å pair of lines.
In Figure 14, we show the changes in the diagnostic diagram for the box AR
(see also the right panel of Figure 11) that occur due to the 0.1 dex
uncertainty in the measurements of log($N_{\mathrm{e}}$ [cm-3]). This
uncertainty in electron density is shown by different linestyles. It is seen
that for the smallest
considered value of $\kappa$ = 2, the difference is negligible, while
uncertainty in $\kappa$ slightly increases with increasing $\kappa$ (i.e.,
approaching Maxwellian). However, even then, the curves for $\kappa$ = 10 and
Maxwellian still do not overlap. Therefore, our conclusion that NMED represent
a viable explanation of the Fe XII 192.4 Å / 1349 Å ratio observed in the AR
is not influenced by the uncertainties in the measurements of the electron
density.
# Scale invariance in early embryonic development
Miloš Nikolić,$^{a,b}$ Victoria Antonetti,$^{a,c}$ Feng Liu,$^{b,c}$ Gentian
Muhaxheri,$^{a,d}$ Mariela D. Petkova,$^{e}$ Martin Scheeler,$^{a}$ Eric M.
Smith,$^{a}$ William Bialek,$^{a,b,f}$ and Thomas Gregor$^{a,b,g}$
$^{a}$Joseph Henry Laboratories of Physics and $^{b}$Lewis–Sigler Institute
for Integrative Genomics, Princeton University, Princeton NJ 08544 USA
$^{c}$Center for Quantitative Biology and School of Physics, Peking
University, Beijing 100871 China
$^{d}$Department of Physics, Lehman College, City University of New York,
Bronx, NY 10468 USA
$^{e}$Program in Biophysics, Harvard University, Cambridge MA 02138 USA
$^{f}$Initiative for the Theoretical Sciences, The Graduate Center, City
University of New York, 365 Fifth Ave., New York, NY 10016 USA
$^{g}$Department of Developmental and Stem Cell Biology UMR3738, Institut
Pasteur, 75015 Paris, France
###### Abstract
The body plan of the fruit fly is determined by the expression of just a
handful of genes. We show that the spatial patterns of expression for several
of these genes scale precisely with the size of the embryo. Concretely,
discrete positional markers such as the peaks in striped patterns have
absolute positions along the anterior–posterior axis that are proportional to
embryo length, with better than $1\%$ accuracy. Further, the information (in
bits) that graded patterns of expression provide about position can be
decomposed into information about fractional or scaled position and
information about absolute position or embryo length; all of the available
information is about scaled position, again with $\sim 1\%$ accuracy. These
observations suggest that the underlying genetic network exhibits scale
invariance in a deeper mathematical sense. Taking this mathematical statement
seriously requires that the network dynamics have a zero mode, which connects
to many other observations on this system.
## I Introduction
Closely related organisms can vary widely in size, but variations in their
proportions are much smaller [1, 2, 3]. There is a considerable gap between
this qualitative observation and some precise mathematical statement of
scaling, e.g., that the linear dimensions of all elements in the body plan are
in direct proportion to the linear dimensions of the organism. If correct this
scale invariance would be visible not only in the fully developed organism but
already at some earlier stages in its development.
There are many examples of “allometric scaling,” power-law relationships among
different quantities across a well-defined class of organisms [4, 5, 6]. In
some cases, these relations connect the linear dimensions of different body
parts. Nonetheless, truly precise spatial scaling in embryonic development
would be quite surprising.
We understand the mechanisms of pattern formation in a wide range of
non–biological systems, from fluid flows to crystal growth (snowflakes) and
more [7, 8, 9, 10, 11, 12], but none of these examples exhibit scale
invariance. Instead, the elements of the pattern have linear dimensions set by
microscopic parameters, and larger systems exhibit more repetitions of the
same pattern rather than expansion or contraction of pattern elements to match
the size of the system as a whole [13]. Going back to the pioneering work of
Turing [14], the mathematical structure of the equations governing these
systems is not so different from the structure of models for genetic or
biochemical networks. If we take these analogies literally, we would predict
that taller people should have more vertebrae, which is obviously wrong. Is
there a real problem here, or are we just oversimplifying?
Here we try to make the notion of scale invariance in development more
precise. We use the first hours of development in the fruit fly as an example,
following spatial patterns of morphogen concentration as they flow through
three layers of a genetic network, from maternal inputs to the gap genes to
the pair rule genes [15, 16, 17]. In the spirit of earlier work [18, 19, 20,
21, 22] we analyze discrete positional markers, such as the stripes in pair
rule-gene expression, and find that positions of these markers vary in
proportion to the length of the embryo with better than $1\%$ accuracy [23].
We then go beyond discrete markers, decomposing the information carried by
graded patterns of gap gene expression into information about fractional or
scaled position vs. information about the absolute position; we find that all
the available information is about fractional position along the
anterior–posterior axis. Information that would signal a deviation from scale
invariance is less than $1\%$ of the total.
These results provide strong evidence for scaling in a precise mathematical
sense, for both the gap genes and the pair rule genes. But at least one of the
maternal inputs, Bicoid [24, 25], does not show any sign of scale invariance:
as in well-understood non-biological pattern-forming systems, there is a
length scale that presumably is set by underlying molecular parameters and
does not adjust in response to the linear dimensions of the whole embryo. This
suggests that scale invariance is an emergent property of the gap gene
network.
We argue that true scale invariance places very specific requirements on the
dynamics of this network, independent of molecular details: it must have a
“zero mode.” This has connections to other observations on gap gene dynamics
[26, 27] and to more detailed models [28, 29].
## II Testing for scaling
To make the notion of scaling more precise we take seriously the idea that
cell fates are determined by the concentration of particular molecules called
morphogens [30]. Since the cell fates are tied to their positions, the
concentrations of morphogens must also carry information about position along
the body axes. These ideas are especially crisp in the early fly embryo, where
we know the identities of all the relevant morphogens and rich spatial
patterns in the concentrations of these molecules are established before cells
make large-scale movements [31].
We focus on pattern formation along a single axis, which will be the
anterior–posterior axis in the analysis of fly embryos below. Then we can
measure position along this axis by a single variable $0<x<L$, where $x=0$ is
the anterior end of the embryo, $x=L$ is the posterior end, and hence $L$ is
the length of the embryo. There are multiple morphogen species, indexed by
$\rm i$, and if we neglect the discreteness of cells then their concentration
profiles are described by continuous functions $g_{\rm i}(x;L)$. The notation
emphasizes that concentration profiles may be different in embryos of
different size $L$.
True scale invariance is the statement that the concentration of morphogens
depends only on position relative to the length, that is
$g_{\rm i}(x;L)=\Phi_{\rm i}(x/L).$ (1)
If there is a fixed map from morphogen concentrations to cell fates, this
scaling behavior would guarantee that cells adopt a fate that depends on their
relative position $x/L$, and not separately on $x$ and $L$.
How do we test for scale invariance? If the concentration of the morphogen has
a single peak as a function of $x$, we can write
$g_{\rm i}(x;L)=g(x-x_{\rm p};L),$ (2)
then scale invariance as in Eq. (1) requires that all the $L$ dependence is
contained in the position of the peak:
$x_{\rm p}=\langle f_{\rm p}\rangle\cdot L+{\rm noise};$ (3)
where $f_{p}$ is the fractional or scaled peak position,
$\langle\cdots\rangle$ is the average over many embryos with different
lengths, and $\rm noise$ allows that positions jitter from embryo to embryo.
We emphasize that Eq. (3) is not just the statement that positional markers
adjust (in absolute distance) to the length of the embryo; scale invariance as
we have defined it in Eq. (1) requires that this adjustment is exactly linear
with zero intercept. There is a natural generalization to concentration
profiles that have multiple peaks, as with the pair rule genes (Fig. 1A, B).
It has been known for some time that the morphogens in the early fly embryo
carry enough information to specify scaled positions with $\sim 1\%$ precision
all along the anterior–posterior axis [32, 33]. At the same time, embryos from
the same mother, in an inbred laboratory stock, fluctuate in length with a
standard deviation of $\sigma_{L}/\langle L\rangle\sim 4\%$ (Appendix A and
[34, 35]). It would seem that to make these numbers consistent with one
another, positional signals must scale with embryo length, but this is a bit
subtle.
Imagine a hypothetical embryo in which, e.g., the peak of the morphogen
profile is perfectly anchored in absolute position relative to the anterior
pole of the embryo, with no scaling and no noise, such that $x_{\rm p}=\langle
x_{\rm p}\rangle$. Then the relative or fractional positions $f_{\rm p}=x_{\rm
p}/L$ fluctuate only because the lengths of the embryos vary,
$\sigma_{f_{\rm p}}^{2}(A)\equiv\langle(\delta f_{\rm p})^{2}\rangle=\langle x_{\rm p}\rangle^{2}\left[\left\langle\left(\frac{1}{L}\right)^{2}\right\rangle-\left\langle\frac{1}{L}\right\rangle^{2}\right]$ (4)
$\sim\left(\frac{\langle x_{\rm p}\rangle}{\langle L\rangle}\cdot\frac{\sigma_{L}}{\langle L\rangle}\right)^{2}.$ (5)
Thus for a marker that on average is a quarter of the way from the anterior to
posterior, $\langle x_{\rm p}\rangle=0.25\langle L\rangle$, fluctuations will
be $\sigma_{f_{\rm p}}(A)\sim 0.01$ even without scaling. Similarly, if we
have a marker anchored at some fixed absolute position relative to the
posterior then the variance in fractional position will be
$\sigma_{f_{\rm p}}^{2}(P)=\left(1-{{\langle x_{\rm p}\rangle}\over{\langle
L\rangle}}\right)^{2}\cdot\left({{\sigma_{L}}\over{\langle
L\rangle}}\right)^{2}.$ (6)
We can imagine cells combining anterior and posterior signals to reduce the
error,
${1\over{\sigma_{f_{\rm p}}^{2}(A,P)}}={1\over{\sigma_{f_{\rm
p}}^{2}(A)}}+{1\over{\sigma_{f_{\rm p}}^{2}(P)}}.$ (7)
With $\sigma_{L}/\langle L\rangle\sim 0.04$, fluctuations in fractional
position thus could be less than $\sim 1.4\%$ everywhere along the
anterior–posterior axis, even in the absence of any scaling mechanism.
Convincing ourselves that pattern formation is truly scale invariant requires
a very precise measurement and depends on the system itself being very
precise.
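The bound of Eqs. (5)–(7) is straightforward to evaluate; the minimal sketch below (using the $\sim 4\%$ length variation quoted above) reproduces the $\sim 1.4\%$ worst case at the midpoint:

```python
import numpy as np

# No-scaling precision bound, Eqs. (5)-(7): markers anchored in absolute
# distance to the anterior (A) or posterior (P) pole, with the two
# estimates combined optimally.  sigma_L_rel is the ~4% relative length
# variation quoted in the text.

sigma_L_rel = 0.04
f = np.linspace(0.05, 0.95, 19)        # mean fractional position <x_p>/<L>

sigma_A = f * sigma_L_rel              # Eq. (5): anchored to the anterior
sigma_P = (1.0 - f) * sigma_L_rel      # Eq. (6): anchored to the posterior
sigma_AP = 1.0 / np.sqrt(sigma_A**-2 + sigma_P**-2)  # Eq. (7)

print(f"worst case (midpoint): {sigma_AP.max():.4f}")  # ~0.014, i.e. ~1.4%
```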
It is intuitive to think about scaling as the proportionality of positions to
embryo length, as in Eq. (3), but it should be possible to test the scaling of
the entire morphogen profile, as in Eq. (1), more directly. There are two
related observations. First, to compare morphogen profiles across embryos of
different lengths, we need a metric. Second, since morphogen profiles are
noisy, it is unrealistic to expect the exact equality of two functions across
all values of $x$. Fortunately, the noise level itself provides a metric for
comparison, which is made precise in the language of information theory.
The statement that morphogen profiles depend on $x$ and $L$ means that the
concentrations of these molecules provide information about the underlying
positional variables. This is quantified, uniquely, by the Shannon information
[36, 37, 38],
$I({\mathbf{g}}\rightarrow\\{x,L\\})=\int d{\mathbf{g}}\int dx\int
dL\,P\left({\mathbf{g}}|\\{x;L\\}\right)P(x,L)\log_{2}\left[{{P\left({\mathbf{g}}|\\{x,L\\}\right)}\over{P\left({\mathbf{g}}\right)}}\right]\,{\rm
bits},$ (8)
where for more compact notation we write ${\mathbf{g}}=\\{g_{\rm i}\\}$ and
$d{\mathbf{g}}=\prod_{\rm i}dg_{\rm i}$. Here
$P\left({\mathbf{g}}|\\{x,L\\}\right)$ is the probability of finding the set
of morphogen concentrations $\\{g_{\rm i}\\}$ at position $x$ in an embryo of
length $L$; $P\left({\mathbf{g}}\right)$ is the probability of finding these
concentrations averaged over all values of $x$ and $L$; and $P(x,L)$ is the
distribution of positions and lengths. It is useful to recall that this
information is mutual: the concentrations of morphogens provide cells with
information about position, and specifying position allows us to predict the
concentrations, so we write $I({\mathbf{g}};\\{x,L\\})$. Information depends
on both the mean spatial profiles of the morphogens and their noise levels.
True scale invariance would mean that all of the information conveyed by the
morphogens is about the fractional position $x/L$:
$I({\mathbf{g}};\\{x,L\\})=I({\mathbf{g}};x/L)\,\,\,\,{\rm(perfect\
scaling)}.$ (9)
Equivalently, if we want to predict the morphogen concentration, it is enough
to specify the fractional position, and no extra information is gained by
knowing $x$ and $L$ separately. We can think of the total information as
having a component about the relative position and an extra increment that
describes the deviation from scaling,
$I({\mathbf{g}};\\{x,L\\})=I({\mathbf{g}};x/L)+\Delta I,$ (10)
and we will see that with samples from a sufficiently large number of embryos,
we can make a reliable estimate of $\Delta I$. The smaller the fraction
$\Delta I/I({\mathbf{g}};x/L)$ the closer the system is to a mathematical
ideal of scaling. More explicit expressions for $\Delta I$ are developed in
Appendix B and applied to experiments in §IV.
We emphasize that true scale invariance, corresponding to $\Delta I=0$, is a
very strong condition. Different levels of evidence for scaling in embryonic
development have inspired models in which competing mechanisms can provide
some cancellation of the intrinsic length scales determined by diffusion
constants and reaction rates [39, 40, 41, 42]. These models typically allow
for scaling in the position of a single discrete positional marker (e.g., the
middle of the embryo), or for approximate scaling across a larger segment of
the relevant axes. True scale invariance would require new dynamical
mechanisms.
## III Stripes and boundaries
In the early fly embryo, information about position along the
anterior–posterior axis flows from maternal inputs through the network of gap
genes to the pair-rule genes [17]. The pair-rule genes are expressed in
interdigitating striped patterns that provide a preview of the segmented body
plan in the fully developed organism; these stripes are visible within three
hours after the egg is laid (Fig. 1A–C). The positions of pair-rule stripes
are a clear example of the positional markers discussed above.
Here we analyze the spatial profiles of gene expression for three of the pair-
rule genes—eve, prd, and run—measured using fluorescent antibody staining of
the corresponding proteins in more than one hundred embryos that were fixed
during nuclear cycle 14 (nc14), i.e. between 2 and $3\,$h of development [33].
Our results recapitulate earlier work [23] on a larger ensemble of embryos.
As soon as the stripes are visible it is straightforward to measure their
positions $x_{\rm i}$ [33]. The time during nc14 can be measured with $\sim
1\,{\rm min}$ precision by following the progression of the invaginating
cellularization membrane [32]. The stripe positions vary systematically in
time [43, 44, 45, 19, 46] and are well described by
$\frac{x_{\rm i}(t)}{L}=\frac{x_{\rm i}(t_{0})}{L}+s_{\rm i}(t-t_{0}),$ (11)
as shown for the Eve stripes in Fig 1D. Combining data from all time points,
we shift each embryo to the reference time $t_{0}=45\,{\rm min}$,
$\frac{x_{\rm i}(t)}{L}\rightarrow\frac{x_{\rm i}(t)}{L}-s_{\rm i}(t-t_{0}).$
(12)
We use this same procedure for the Prd and Run stripes, although these become
clear only at slightly later times.
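The following sketch runs these steps on synthetic data (all numbers are hypothetical stand-ins for the measurements): fit the linear drift of Eq. (11), refer positions to $t_{0}=45\,{\rm min}$ via Eq. (12), and then regress absolute position against embryo length to test Eq. (3):

```python
import numpy as np

# Synthetic single-stripe example of the pipeline in Eqs. (11), (12), (3).
rng = np.random.default_rng(0)
n_em, t0 = 108, 45.0
t = rng.uniform(30, 60, n_em)                  # minutes into nc14
L = rng.normal(455.0, 0.04 * 455.0, n_em)      # embryo lengths in um (assumed)
f_true, s_true = 0.33, -2e-4                   # scaled position and drift (assumed)
f_obs = f_true + s_true * (t - t0) + rng.normal(0, 0.005, n_em)

# Eq. (11): fit the temporal drift; Eq. (12): shift to the reference time.
s_fit, _ = np.polyfit(t - t0, f_obs, 1)
f_corr = f_obs - s_fit * (t - t0)

# Eq. (3): absolute position vs. length; scaling predicts a zero intercept.
slope, intercept = np.polyfit(L, f_corr * L, 1)
print(f"slope = {slope:.3f} (mean fractional position), intercept = {intercept:.1f} um")
```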
Figure 1: Precise scaling of pair-rule stripes in the Drosophila embryo. (A)
Bright-field image overlaid with fluorescent antibody staining for Eve protein
(fuchsia), focusing on the mid-sagittal plane with the dorsal side up;
scalebar is $100\,\mu{\rm m}$. (B) Expression of Eve in the second half of
nuclear cycle fourteen (nc14). Solid line is the mean, and shaded region is
the standard deviation across $N_{\rm em}=108$ embryos in a time window
between 30 and $60\,$min from the start of nc14. Inset shows a single nucleus
with a white square (width $0.01L$) used to average intensities. (C) Eve
expression profiles as a function of relative position along the body axis for
12 time bins during nc14, as indicated by color. (D) Linear dynamics of Eve
peak positions during nc14, fit to Eq. (11). (E) Absolute positions of Eve
peaks measured from the anterior pole referred to $t_{0}=45\,{\rm min}$, as in
Eq. (12), plotted vs. embryo length. (F) Standard deviation of scaled stripe
positions as a function of mean position for three pair-rule genes, and for
the cephalic furrow (CF, see Appendix C). Error bars are standard deviations
from bootstrapping. Black curves with red shading (bootstrapped errors) are
estimates of precision based on anchoring in Eqs. (5–7), and $d$ is the
spacing between neighboring cells.
Figure 1E shows that the stripe positions $x_{\rm i}$ measured from the
anterior pole are proportional to the length of the embryo $L$. More
precisely, if we fit these linear relations then intercepts are zero and
slopes are equal to the mean fractional positions, as in Eq. (3), both results
with error bars of less than $1\%$ (Appendix C). This provides _prima facie_
evidence for scaling of the pair-rule stripes, reinforcing the conclusions of
earlier work [18, 19, 20, 21].
We can go beyond the mean behaviors to look at fluctuations around these
means. For each stripe $\rm i$ in each embryo $\alpha$, we can write
${{x_{\rm i}^{\alpha}}\over{L^{\alpha}}}=\langle f_{\rm i}\rangle+\delta
f_{\rm i}^{\alpha},$ (13)
where $\langle\cdots\rangle$ now is an average over all the embryos in our
sample. The variance of the relative position is $\sigma_{f_{\rm
i}}^{2}=\langle(\delta f_{\rm i})^{2}\rangle$, and Fig. 1F shows that
$\sigma_{f_{\rm i}}\leq 0.01$ for all 21 pair rule stripes that we measure.
This is consistent with previous measurements, and with the information
content of the gap gene expression patterns that feed into the generation of
pair-rule stripes [47, 33], but earlier work did not address scaling
explicitly.
As a caution, we note that the observation of scaling in fixed embryos would
be trivial if variations in embryo length were dominated by shrinkage during
fixation. Across $N_{\rm em}=609$ fixed embryos used for the analysis of gap
genes (below) we find a mean length $\langle L\rangle_{\rm fix}=455\,\mu{\rm
m}$, while across $N_{\rm em}=610$ live embryos (§V) we find $\langle
L\rangle_{\rm live}=490\,\mu{\rm m}$. Hence, shrinkage with fixation is a bit
less than $10\%$ across many different experiments. But the variations in
length are almost the same, $(\sigma_{L}/\langle L\rangle)_{\rm fix}=0.038$,
while $(\sigma_{L}/\langle L\rangle)_{\rm live}=0.037$. The small extra
variance in the length of fixed embryos cannot explain the scaling behavior
that we observe.
Figure 1F also shows that the fluctuations in fractional position are smaller
than the bound on mechanisms that have no explicit scaling, from Eq. (7). This
bound is very tight, because of the small variance in embryo lengths, and thus
requires extreme precision in the measurement and biological reproducibility
of the fractional positions to demonstrate scaling. To emphasize the
importance of precision, we note that the position of the cephalic furrow is
directly regulated by pair rule gene expression [48], but it has a slightly
higher relative positional variance, due to the experimental difficulty of
defining morphological features to less than the width of a single cell [49].
Here we show explicitly that the furrow position scales with embryo length
(Appendix C). Even though the precision of the CF position approaches $\sim
1\%$ in the scaled coordinates [49], this alone is not sufficient to reject
the hypothesis that positions are defined in absolute rather than relative
coordinates, as can be seen from Fig. 1F.
The pair rule stripes are shaped by input from the gap genes [50], and it is
natural to ask whether the scaling behavior that we observe is inherited from
these inputs. The gap genes were long discussed in terms of “expression
domains,” as if they were on/off switches [51, 52, 53, 54]. We now know that
this misses a substantial fraction of the positional information encoded by
these genes [47, 55, 33], but defining the boundaries of the expression
domains as positional markers (Fig. 2A–D) allows us to give a preliminary
analysis of scaling by following the same ideas as for the positions of the
pair-rule stripes.
Previous experiments have measured the expression profiles of the gap genes
[33], staining $N_{\rm em}=609$ fixed embryos in nc14 with fluorescent
antibodies directed at the proteins encoded by the gap genes (Fig. 2A–D). We
define expression boundaries as the positions where the concentrations are
half their maximum mean value, and we correct their relative positions to
$t_{0}=45$ min as above. Figure 2E shows that all thirteen of the gap gene
boundaries defined in this way have absolute positions that scale precisely
with embryo length, as with the positions of the pair rule stripes. The
accuracy of this scaling again is better than $\sim 1\%$, and this precision
is better than the limiting performance of mechanisms that do not have some
explicit sensitivity to embryo length (Fig. 2F). For the gap genes, this
procedure allows us to span almost the entire range of the anterior–posterior
axis.
In summary, stripes and boundaries of gene expression in the early fly embryo
provide discrete positional markers, and the absolute positions of these
markers are in all cases proportional to the length of the embryo. This is
consistent with previous observations [18, 19, 20, 21], but the precision of
the scaling that we observe here is surprising. This suggests that the
underlying genetic network exhibits true scale invariance, which we now test
using the information decomposition [Eq. (10)].
Figure 2: Precise scaling of gap gene expression boundaries. Expression
profiles of (A) Hunchback (Hb), (B) Giant (Gt), (C) Knirps (Kni), and (D)
Krüppel (Kr), based on immunofluorescent staining (Appendix D). Means (solid
lines) and standard deviations (shading) across embryos aligned by scaled
position $x_{s}$. Vertical lines indicate the mean positions of expression
boundaries as well as a small peak in Kni. (E) Absolute position of all gap
gene boundaries as a function of the embryo length. Dashed black line
indicates the position of the posterior of the embryo. Boundary positions are
time-corrected to $t_{0}=45\,{\rm min}$, as with the stripe positions in Fig.
1D. (F) Standard deviation of scaled boundary positions as a function of mean
position for all 13 markers. Error bars are standard deviations from
bootstrapping. Black curves with red shading (bootstrapped errors) are
estimates of precision based on anchoring in Eqs. (5–7), and $d$ is the
spacing between neighboring cells. Horizontal dashed lines denote the distance
$d$ and the half-distance $d/2$ between neighboring nuclei. Dotted gray line
indicates 1% precision.
## IV Absolute vs. scaled positional information
The concentrations of morphogens provide cells with information about their
position in the embryo. This “positional information” [30] can be measured in
bits if we have access to data on the mean and variability of spatial profiles
for the concentration of the relevant molecules [47, 55]. The local expression
levels of individual gaps genes convey roughly two bits of information about
position, twice what is possible in a model of on/off expression domains.
Taken together all four gap genes provide $\sim 4.2\,{\rm bits}$, sufficient
to specify positions with $\sim 1\%$ accuracy along the anterior–posterior
axis, as seen above. However, these earlier analyses assumed, implicitly, that
information is about the fractional or scaled position. Is this correct?
Figure 3: Expression of Hb in scaled coordinates. (A) Mean concentration of
Hb, $\langle g_{\rm Hb}(x_{s})\rangle$, vs scaled position (solid line, as in
Fig. 2A) and the conditional distribution $P(g_{\rm Hb}|x_{s})$ around this
mean (shading). Intensity bin size is 0.05 of the maximum $\langle g_{\rm
Hb}\rangle$. (B) A slice through the conditional distribution at
$x_{s}=0.47$ (dashed black lines) compared with distributions estimated from embryos
in narrow bins of length, $P_{L}(g_{\rm Hb}|x_{s})$. Lengths were binned in 5
bins with an equal number of embryos in each, such that each bin contains
about 60 embryos with variations in $L$ of less than $1\%$. Mean lengths in
each bin are indicated at the upper right of each panel. Probability
distributions of $g_{\rm Hb}$ are estimated using a kernel density estimator
with a Gaussian kernel that has width $\delta g=0.07\times\max_{x_{s}}\langle
g_{\rm Hb}(x_{s})\rangle$.
Figure 4: Near zero deviation from perfect scaling, in bits. (A) The extra
information $\Delta I$ that Hb expression levels carry about absolute rather
than scaled position, defined by Eq. (10) and evaluated from Eq. (14).
Estimates are based on random choices of $N$ embryos out of the full
experimental ensemble (points; circles show means with standard deviations),
and the extrapolation $N_{\rm em}\rightarrow\infty$ follows the methods of
Appendix E (red line). The result is $\Delta I=0.00\pm 0.008\,{\rm bits}$ (red
circle with error bar). (B) The extra information $\Delta I$ conveyed by all
four gap genes together, defined as in (A) by Eq. (10) but now evaluated using
Eq. (15). Symbols as in (A); the result is $\Delta I=0.038\pm 0.039\,{\rm
bits}$. Error bars are larger because we are analyzing a multidimensional
code, but there still is no significant difference from $\Delta I=0$.
The key to separating information about scaled vs. absolute position is to
compare the variance in morphogen concentrations at a scaled position $x_{s}$
depending on whether we constrain the length of the embryo (Appendix B).
Qualitatively, if there is perfect scaling then knowing the length would not
add any information with which to predict the morphogen concentration. Since
information is mutual this would mean that all the available information is
about the scaled position. To test this quantitatively in the context of the
gap genes, we have assembled data on $N_{\rm em}=301$ embryos, in each of
which we have reliable simultaneous measurements on the spatial profiles of
expression in all four gap genes, as described in Appendix D.
Figure 3A shows the spatial profile of Hb as a function of scaled position
along the anterior–posterior axis. At each scaled position $x_{s}=x/L$ we can
visualize the distribution of expression levels, which is well approximated as
a Gaussian (Appendix D and [47]). We can then ask if this distribution changes
when we look only at embryos in a narrow range of lengths $L$, and the answer
is no (qualitatively; Fig. 3B). Quantitatively we want to estimate the
difference in entropy between these two distributions, averaged over $x_{s}$
and $L$, which will give us the deviation from scaling $\Delta I$ in Eq. (10),
as explained in Appendix B. The calculation of the entropy simplifies in the
Gaussian approximation, depending just on the variances as in Eq. (48),
$\Delta I={1\over
2}\langle\log_{2}[\sigma_{g}^{2}(x_{s})]\rangle_{x_{s}}-{1\over
2}\langle\log_{2}[\sigma_{g}^{2}(x_{s}|L)]\rangle_{x_{s},L},$ (14)
where $\sigma_{g}^{2}(x_{s}|L)$ is the variance in concentration at scaled
position $x_{s}$ across embryos of length $L$ and $\sigma_{g}^{2}(x_{s})$ is
the same variance computed across all embryos.
Applying Eq. (14) requires estimating the relevant variances and also making
bins along the $x_{s}$ and $L$ axes. For the scaled position we choose bins of
size $\Delta x_{s}=0.01$, consistent with the precision that we see in Figs. 1
and 2. To sample the range of embryo lengths we use $N_{\rm
bins}=5,\,10,\,15,$ or $20$ adaptive bins, and find the same results in all
cases (Appendix E). As is well known, estimates of entropy or information are
subject to systematic errors [56, 38]. In the present case, if we substitute
estimates of the variances into Eq. (14), we find a nonzero result for $\Delta
I$. If, however, we include different numbers of embryos in the analysis, we
see that our estimate of $\Delta I$ depends on $1/N_{\rm em}$ as
expected theoretically [56, 38], and having seen this predicted dependence we
can extrapolate $N_{\rm em}\rightarrow\infty$. In particular, if we shuffle
the data so that the true $\Delta I=0$, then our estimation procedure returns
a random number with zero mean and standard deviation equal to our quoted
error bar, demonstrating that we have control over the systematic errors.
These now standard analysis methods are explained more fully in Appendix E.
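As an illustration of the estimator (a sketch of Eq. (14), not the authors' code; array names and binning choices are ours), suppose the profiles are sampled on a common grid of scaled positions:

```python
import numpy as np

# Plug-in Gaussian estimator of Eq. (14) for a single gene.  g is an
# (N_em, Nx) array of expression profiles on a grid of scaled positions,
# and L is the (N_em,) array of embryo lengths.  Adaptive length bins
# hold equal numbers of embryos.  The generalization of Eq. (15)
# replaces the variances by log-determinants of the 4x4 covariance
# matrices.  The 1/N_em bias extrapolation described in the text is
# omitted here; shuffling L (true Delta I = 0) calibrates that bias.

def delta_I(g, L, n_bins=5):
    var_all = g.var(axis=0)                   # sigma_g^2(x_s), all embryos
    edges = np.quantile(L, np.linspace(0, 1, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, L, side="right") - 1, 0, n_bins - 1)
    log_var_L = [np.log2(g[bins == b].var(axis=0)) for b in range(n_bins)]
    return 0.5 * np.mean(np.log2(var_all)) - 0.5 * np.mean(log_var_L)
```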
Results of this analysis for Hb are shown in Fig. 4A. Using all $N_{\rm
em}=301$ embryos in our data set produces a very small estimate of $\Delta I$,
but even this is exaggerated by systematic errors as we see by changing
$N_{\rm em}$. Our best estimate extrapolates to zero as $N_{\rm
em}\rightarrow\infty$, with an error bar smaller than $0.01\,{\rm bits}$. When
we repeat the same analyses for each of the other gap genes (i.e., Gt, Kni,
and Kr), we get the same result (Appendix E).
Figure 5: The maternal input Bicoid does not scale. (A) Measurements of Bcd
concentration in $N_{\rm em}=582$ live embryos are grouped into eight classes
by embryo length $L$ and averaged. There is only one global normalization, so
this shows that absolute concentrations have the same dependence on absolute
position $x$ across all classes. (B) The same data plotted vs. scaled position
$x_{s}=x/L$. Profiles separate, providing evidence against scaling. (C) Extra
information $\Delta I$ that Bcd concentration provides about absolute vs.
scaled position, defined by Eq. (10) and evaluated from Eq. (14). Symbols as
in Fig. 4, but the extrapolation now leads to a significantly nonzero value of
$\Delta I=0.1\pm 0.02\,{\rm bits}$. Data from [49].
We can generalize this analysis to consider all four gap genes simultaneously.
Now the role of the variance in Eq. (14) is played by the covariance matrix of
the fluctuations, as in Eq. (52):
$\Delta I={1\over
2}\langle\log_{2}\left[||\Sigma(x_{s})||\right]\rangle_{x_{s}}-{1\over
2}\langle\log_{2}\left[||\Sigma(x_{s}|L)||\right]\rangle_{x_{s},L}.$ (15)
Here $||\Sigma(x_{s}|L)||$ is the determinant of the covariance matrix
describing fluctuations in the expression levels of all four genes at scaled
position $x_{s}$ across embryos of length $L$, and $||\Sigma(x_{s})||$ is the
same determinant computed across all embryos. Because we are looking at higher
dimensional variations the impact of the finiteness of our data set is larger,
but again we see the predicted dependence on $1/N_{\rm em}$ and can
extrapolate to give $\Delta I=0.038\pm 0.039\,{\rm bits}$ (Fig. 4B). Once
again this is consistent with $\Delta I=0$: there is no significant evidence
for encoding of information about absolute, as opposed to scaled position.
Although the number of bits has meaning, it is useful to express the deviation
from perfect scaling as a fraction of the information available about scaled
position [47, 55],
${{I({\mathbf{g}}\rightarrow\\{x,L\\})-I({\mathbf{g}}\rightarrow
x/L)}\over{I({\mathbf{g}}\rightarrow x/L)}}=0.009\pm 0.009.$ (16)
Thus the patterns of gap gene expression scale with $1\%$ accuracy, not just
at discrete positional markers but across the entire range of graded spatial
variations.
## V Maternal inputs do not scale
Having observed scaling in the pair rule stripe positions and followed this
back to the gap genes, it is natural to ask if we can go one step further and
trace the scaling behavior of the patterning system to the maternal inputs. Of
the three maternal inputs that drive patterning along the anterior–posterior
axis of the fly embryo, much attention has been given to Bicoid (Bcd) [24,
25]. The protein is present at high concentrations in the anterior, and there
is a nearly exponential decay of concentration with distance toward the
posterior; one can monitor the dynamics of Bicoid protein concentrations
quantitatively in live embryos using GFP-fusions [34].
Comparison across closely related species of flies shows that the length scale
of this exponential decay varies in proportion to the mean length of the
embryo [57]. Insertion of bicoid genes from other species into Drosophila
melanogaster produces protein concentration profiles with length scales
appropriate to the host, but these are not sufficient to rescue the embryo
from deletion of the native Bcd [58]. These results emphasize the subtlety of
comparison across species and the impact of genetic variations, leading us to
re-examine the behavior of Bcd profiles across a large number of live embryos
drawn from the same inbred laboratory strain used in the analysis of gap and
pair rule genes.
Figure 5 analyzes Bcd profiles from $N_{\rm em}=582$ live embryos [49].
Measurements are taken during a small temporal window in nuclear cycle
fourteen [34], and the only normalization (as with the gap genes) is to
subtract a common background level from all the embryos and set the highest
mean concentration to one. When we group the embryos into eight classes based
on their length $L$, we see that the average concentration profiles in all
groups are the same when plotted vs. absolute position, except for small
effects at the posterior pole (Fig. 5A). If we plot vs. scaled position the
different groups of embryos separate significantly (Fig. 5B), providing direct
evidence against scaling. We make this precise using the same information
theoretic approach as above and now find a significant nonzero value of
$\Delta I=0.1\pm 0.02\,{\rm bits}$ (Fig. 5C). This may seem like a small
number, but it should be compared with the $\sim 4\%$ scale of variations in embryo
length. We conclude that the maternal inputs do not scale, in agreement with
earlier suggestions [18].
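Reusing the `delta_I` sketch from §IV, a toy ensemble of unscaled exponential profiles (all parameters hypothetical) illustrates how a fixed absolute decay length produces $\Delta I>0$:

```python
# Unscaled toy profiles: g(x_s; L) = exp(-x_s L / lambda) with a fixed
# absolute decay length lambda, plus noise.  All values are assumptions.
rng = np.random.default_rng(1)
N, Nx = 582, 100
L = rng.normal(490.0, 0.04 * 490.0, N)       # live-embryo lengths in um
xs = np.linspace(0.05, 0.95, Nx)             # scaled positions
g = np.exp(-np.outer(L, xs) / 100.0)         # decay length 100 um (assumed)
g += rng.normal(0, 0.02, g.shape)            # measurement noise (assumed)
print(f"Delta I = {delta_I(g, L):.3f} bits") # significantly > 0
```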
We emphasize that the absence of scaling in the maternal inputs should not be
interpreted as a form of noise. Indeed, absolute concentrations of Bcd protein
are highly reproducible across embryos and this can be traced to highly
reproducible numbers of mRNA molecules [59, 49, 60]. Instead, we should think
of the maternal inputs as a nearly deterministic response to the boundary
conditions in the embryo, which also have a direct impact on the gap genes;
see Eqs. (19, 20) below.
## VI Scaling and zero modes
The results here strongly support the view that patterns of gap gene
expression are genuinely scale invariant and that this is an emergent property
of the gap gene network. Here we take the precise mathematical notion of scale
invariance literally and explore its implications. While we do not pretend to
have a detailed model, it is useful to have in mind a class of models for how
patterns might form. As a caution we recall Turing’s introductory remarks
[14]: “This model will be a simplification and an idealization, and
consequently a falsification.”
If we focus on variations just along the anterior–posterior axis $x$, and
ignore the discreteness of nuclei, then the concentration $g_{\rm i}$ of
protein encoded by gene $\rm i$ plausibly obeys an equation of the form
${{\partial g_{\rm i}}\over{\partial t}}=D_{\rm i}{{\partial^{2}g_{\rm
i}}\over{\partial x^{2}}}+R_{\rm i}F_{\rm i}({\mathbf{g}})-{1\over{\tau_{\rm
i}}}g_{\rm i}.$ (17)
Here $D_{\rm i}$ is the diffusion constant of species $\rm i$, $R_{\rm i}$ is
the maximum rate at which these proteins can be synthesized, $\tau_{\rm i}$ is
their lifetime before being degraded, and $F_{\rm i}({\mathbf{g}})$ describes
all the potentially complex interactions by which all the proteins regulate
the expression of gene $\rm i$. We assume that the mRNA and protein dynamics
have separate time scales so that one is effectively slaved to the other and
we can write only one variable for each gene. Further, we neglect time scales
that might arise in the process of regulation itself, such as switching
between different regulatory states, so that $F_{\rm i}({\mathbf{g}})$ is an
instantaneous function of the relevant concentrations. These assumptions are
quite conventional, and other than this what we have written is very general.
For example, the function $F_{\rm i}({\mathbf{g}})$ could describe both
activating and repressive interactions, and these interactions could be
combinatorial. These equations include as special cases Turing’s original
models [14] and their intellectual descendants [61, 62].
The maximum steady state concentration of each protein is $R_{\rm i}\tau_{\rm
i}$, and we can choose units in which this is equal to one, as with the
normalized profiles of gap gene expression in Fig. 2A–D. For simplicity we
will assume that all the decay times are the same, $\tau_{\rm i}=\tau$,
although this is not essential for what follows; finally, we choose units of
time such that $\tau=1$. Then we have
${{\partial g_{\rm i}}\over{\partial t}}=\lambda_{\rm
i}^{2}{{\partial^{2}g_{\rm i}}\over{\partial x^{2}}}+F_{\rm
i}({\mathbf{g}})-g_{\rm i},$ (18)
where the length scale $\lambda_{\rm i}=\sqrt{D_{\rm i}\tau}$. This describes
an autonomous network, which is not quite realistic for the gap genes—which
are driven by maternal inputs—but should be sufficient to draw qualitative
conclusions about the implications of scale invariance.
The length of the embryo appears not in the dynamical equations but in the
boundary conditions. For most proteins, there can be no diffusive flux of
molecules into or out of the ends of the embryo, so that
$-D_{\rm i}{{\partial g_{\rm i}}\over{\partial x}}{\bigg{|}}_{x=0}=D_{\rm
i}{{\partial g_{\rm i}}\over{\partial x}}{\bigg{|}}_{x=L}=0.$ (19)
The situation for maternal inputs is different; as an example, in making the
egg the mother places mRNA for the protein Bicoid (Bcd) at the anterior end
($x=0$), and this is translated continuously, so that
$-D_{\rm Bcd}{{\partial g_{\rm Bcd}}\over{\partial x}}{\bigg{|}}_{x=0}=T_{\rm
Bcd},$ (20)
where $T_{\rm Bcd}$ is the rate of translation in appropriate units.
Let us imagine that the final pattern we observe is in steady state, so that
$0=\lambda_{\rm i}^{2}{{\partial^{2}g_{\rm i}(x;L)}\over{\partial
x^{2}}}+F_{\rm i}({\mathbf{g}})-g_{\rm i}(x;L),$ (21)
where the notation reminds us that length dependence can arise once we impose
the boundary conditions. If we have true scale invariance as in Eq. (1) then
if we make a small change in the length of the embryo, so that $L\rightarrow
L+\delta L$, the expression levels should change as
$g_{\rm i}(x;L)\rightarrow g_{\rm i}(x;L)+\frac{\delta L}{L}\psi_{\rm i}(x/L),$ (22)
$\psi_{\rm i}(x_{s})=-x_{s}\Phi_{\rm i}^{\prime}(x_{s}),$ (23)
but Eq. (21) still must be true. This requires that
$\sum_{\rm j}\left[\left(\lambda_{\rm i}^{2}{{\partial^{2}}\over{\partial
x^{2}}}-1\right)\delta_{\rm ij}+{{\partial F_{\rm i}}\over{\partial g_{\rm
j}}}{\bigg{|}}_{{\mathbf{g}}={\mathbf{\Phi}}}\right]\psi_{\rm j}(x/L)=0.$ (24)
This seemingly abstract condition has a direct implication for the dynamics of
the network.
Suppose that the system is close to its steady state so that we can write
$g_{\rm i}(x;L;t)=\Phi_{\rm i}(x/L)+\delta g_{\rm i}(x;t)$ (25)
and $\delta{\mathbf{g}}$ is small. Then we can linearize the dynamics in Eq.
(18) to yield
${{\partial(\delta g_{\rm i})}\over{\partial t}}=\sum_{\rm
j}\left[\left(\lambda_{\rm i}^{2}{{\partial^{2}}\over{\partial
x^{2}}}-1\right)\delta_{\rm ij}+{{\partial F_{\rm i}}\over{\partial g_{\rm
j}}}{\bigg{|}}_{{\mathbf{g}}={\mathbf{\Phi}}}\right]\delta g_{\rm j}.$ (26)
We recognize the term in brackets as the same one that appears in Eq. (24). To
understand this connection it is useful to think of all possible spatial
patterns of gene expression as points in a high dimensional space.
Concretely we can write
$\delta g_{\rm i}(x;t)=\sum_{\mu}a_{\mu}(t)\phi_{\rm i}^{\mu}(x)$ (27)
where the functions $\\{\phi_{\rm i}^{\mu}(x)\\}$ are the spatial “modes” of
expression and the set $\\{a_{\mu}\\}$ provides the coordinates of one
expression profile in this multidimensional space. The number of modes is the
number of genes times the number of independent points along the $x$ axis,
e.g. the number of rows of cells; for the gap genes the result is that the
space has a dimensionality $d>300$. We can choose these modes as
eigenfunctions of the operator that appears in both Eqs. (24) and (26),
$\sum_{\rm j}\left[\left(\lambda_{\rm i}^{2}{{\partial^{2}}\over{\partial
x^{2}}}-1\right)\delta_{\rm ij}+{{\partial F_{\rm i}}\over{\partial g_{\rm
j}}}{\bigg{|}}_{{\mathbf{g}}={\mathbf{\Phi}}}\right]\phi_{\rm
j}^{\mu}(x)=-\lambda_{\mu}\phi_{\rm i}^{\mu}(x),$ (28)
where $\lambda_{\mu}\geq 0$ means that the steady state is stable. Then so
long as the deviations from the steady state are small, the dynamics of the
network are simple in this coordinate system,
${{da_{\mu}(t)}\over{dt}}=-\lambda_{\mu}a_{\mu}(t).$ (29)
Through Eq. (24) we see that perfect scale invariance implies a “zero mode,” a
particular mode of gene expression associated with eigenvalue
$\lambda_{\mu}=0$. Importantly this is not the steady state pattern itself,
but an additional mode.
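As a concrete illustration (ours, not part of the original analysis), one can discretize the operator of Eqs. (24, 26, 28) and inspect its spectrum numerically; the decay lengths and the regulatory matrix J below are hypothetical placeholders for $\lambda_{\rm i}$ and $\partial F_{\rm i}/\partial g_{\rm j}$:

```python
# Toy sketch: discretize the linearized operator of Eq. (26) for two
# hypothetical genes on a lattice with no-flux boundaries, Eq. (19), and
# inspect its eigenvalues; they are the -lambda_mu of Eq. (28). Perfect
# scale invariance would put one eigenvalue exactly at zero.
import numpy as np

N = 100                       # lattice points along the AP axis, x in [0, 1]
dx = 1.0 / N
lam = np.array([0.1, 0.15])   # assumed decay lengths lambda_i

D2 = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / dx**2
D2[0, 1] = D2[-1, -2] = 2.0 / dx**2        # Neumann (no-flux) boundaries

J = np.array([[0.0, -0.8],                 # placeholder for dF_i/dg_j
              [0.6, 0.0]])                 # evaluated on the steady state

I_N = np.eye(N)
L_op = np.block([[(lam[i]**2 * D2 - I_N) * (i == j) + J[i, j] * I_N
                  for j in range(2)] for i in range(2)])

evals = np.linalg.eigvals(L_op).real
print("least negative eigenvalues (-lambda_mu):", np.sort(evals)[-3:])
# each mode then relaxes as a_mu(t) = a_mu(0) * exp(-lambda_mu * t), Eq. (29)
```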
The existence of a zero mode has several implications:
* •
Most literally, one component in the spatial pattern of gene expression will
relax very slowly to its steady state, much more slowly than other components.
Formally the relaxation should be as a power of time rather than exponential.
* •
The dynamics describe a “restoring force” that pulls the patterns of gene
expression toward their steady state values; the eigenvalues are the spring
constants associated with these restoring forces. Along the zero mode, there
is no (linear) restoring force, and in the presence of any finite noise, the
fluctuations along this mode will be very large compared with other modes.
* •
Along directions with nonzero $\lambda_{\mu}$ the fluctuations in $\mathbf{g}$
will be approximately Gaussian so long as they remain small, as we see for the
gap genes. But along the zero mode, there should be some deviation from
Gaussian behavior.
There is evidence that the spatial patterns of gap gene expression can be
compressed into a lower dimensional space, consistent with the idea that a
zero mode dominates the dynamics [63]. The ($4\times 4$) covariance matrix of
fluctuations in gap gene expression is dominated by a single mode at almost
all locations along the anterior–posterior axis; this large variance mode
relaxes $\sim 10\times$ more slowly than the lower variance modes, and one can
even see hints of non–Gaussian behavior [26].
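A minimal sketch of this kind of test (with toy data standing in for the measured profiles) diagonalizes the $4\times 4$ covariance at one position and asks what fraction of the variance the top mode carries:

```python
# Sketch, not the authors' analysis code: g holds expression levels of the
# four gap genes across embryos at one scaled position (toy data below).
import numpy as np

def mode_spectrum(g):
    """Eigenvalues of the 4x4 covariance of fluctuations, largest first."""
    w = np.linalg.eigvalsh(np.cov(g, rowvar=False))[::-1]
    return w, w[0] / w.sum()

rng = np.random.default_rng(1)
g = rng.normal(size=(300, 4)) @ np.diag([1.0, 0.3, 0.2, 0.1])  # toy embryos
w, frac = mode_spectrum(g)
print(f"largest mode carries {frac:.0%} of the variance")
```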
The existence of a zero mode is a statement about the linearized dynamics. If
the absence of a linear restoring force continues for finite deviations from
the steady state then there is a line of attracting spatial patterns rather
than a single stable pattern. Different points along this line are the
patterns appropriate to embryos of different lengths, and the final pattern is
selected by boundary conditions. Line attractors have long been discussed for
neural networks [64]. It has been noted that models of the gap gene network
might support such line attractors [28], and there are also suggestions that
internal dynamics of the network can generate approximate scaling [29]. The
observation of nearly perfect scale invariance in the real network leads us to
a much sharper version of these ideas.
## VII Discussion
Scale invariance is an appealing concept. It quantifies the intuition that
organisms are built from parts that are in proportion to one another,
independent of an individual organism’s overall size. There is a long history
of searching for such scaling not just in adult organisms but at early stages
of development, and the fruit fly Drosophila melanogaster has been a
particular target for these studies [40, 20, 21, 19, 29]. If we compare
related species of flies we can see spatial patterns of gene expression that
scale, on average, across $10\times$ changes in embryo length [57, 58], and
similar results are obtained within a single species but with artificial
selection for length variation [22]. It has always been less clear whether
scaling occurs without such large genetic variations, across the natural
length variations in a single species.
We have explored scaling across many embryos from a quasi-inbred laboratory
stock, minimizing genetic variation. Across this ensemble, we see length
fluctuations with a standard deviation of $\pm 4\%$ but embryos in the tails
of the distribution have lengths $\pm 10\%$ from the mean (Fig. A1). Following
previous work, we measured the positions of discrete markers—such as the CF
position, the peaks of pair-rule stripes, and the boundaries of gap gene
domains—and found precise scaling of the absolute positions with embryo
length. This is consistent with previous results, but what is new is the
precision that we observe: markers are at positions that are scaled relative
to the embryo length with an accuracy of $\sim 1\%$ across the full extent of
the anterior–posterior axis. This observed precision excludes a broad class of
models that combine information from both ends of the embryo without explicit
scaling [39, 40, 41, 42].
There remains a gap between the positioning of discrete markers and the fuller
notion of scale invariance. The gap gets smaller as we track more markers
across a wider range of positions, but it would be attractive to address scale
invariance directly. We have introduced an information theoretic approach that
analyzes the full, graded spatial profiles of gene expression and measures
similarity in the natural units provided by the intrinsic noise levels of
these profiles. Concretely, we introduce a decomposition of the information
that morphogen concentrations provide about position into a component about
scaled position and a deviation from scaling. Applied to the gap genes in the
early fly embryo, the result is clear: the deviation from scaling is less than
one percent of the total positional information. It is perhaps surprising that
we can make such a precise statement about the functional output of a complex
network.
In contrast to the results for the gap genes and the pair-rule genes, at least
one of the maternal inputs, Bicoid, does not exhibit scaling. We can see this
“by eye,” simply plotting profiles vs. absolute or scaled position, and these
impressions are quantified by the same information theoretic approaches used
to demonstrate scaling in the gap genes. Error bars again are in the range of
$\sim 0.01\,{\rm bits}$, but the deviation from scaling now is $\sim 10\times$
as large. The conclusion is that near-perfect scale invariance is an emergent
property of the gap gene network.
If we take scale invariance as a precise mathematical statement then the
dynamics of the underlying genetic network must have a zero mode. This is
equivalent to saying that the dynamics do not have a single attractor, but
rather a line of attractors as in models for short-term memory in neural
networks [64]. Then position along this line is chosen by the boundary
conditions and hence the length of the embryo. A zero mode would provide
connections among several otherwise disparate observations on the gap genes.
Finally, recent experiments on mammalian pseudo-embryos suggest that scale
invariance may be a more universal feature of genetic networks underlying
developmental pattern formation [65]. In these self-organizing cell aggregates
derived from stem cells, scale invariance emerges without fixed boundary
conditions, but rather with boundaries that move as the aggregate grows. The
existence of a zero mode in the regulatory network becomes even more
attractive as a general mechanism for scaling.
###### Acknowledgements.
We are grateful to E. F. Wieschaus for his advice and for many inspiring
discussions. We thank M. Biggin and N. Patel for sharing the antibodies used
in these measurements. This work was supported in part by US National Science
Foundation Grant PHY–1734030 (Center for the Physics of Biological Function);
by National Institutes of Health Grants R01GM077599 and R01GM097275; by the
Simons Foundation; by the John Simon Guggenheim Memorial Foundation.
## Appendix A Natural length variations of embryos in a laboratory strain of
flies
As described in the main text, much previous work on scaling has exploited the
natural variation in embryo lengths across the evolutionary tree or the
variations that can be selected artificially over reasonable times. Here we
use variations in length that occur within a single laboratory strain, OreR,
minimizing genetic variations. Measurements on large numbers of live embryos
are taken from Refs. [49, 35] and on fixed embryos from Ref. [33].
As an example, Fig. A1 shows the probability distribution of embryo lengths
$L$ estimated from $N_{\rm em}=610$ living dechorionated embryos (Bcd-GFP 2XA
strain in [49]). The mean length of the embryos is $\langle L\rangle=490\pm
0.76\,\mu{\rm m}$, and the standard deviation is $\sigma_{L}=18\pm
1.06\,\mu{\rm m}$. This corresponds to a fractional variation
$\sigma_{L}/\langle L\rangle=0.037$, and as noted in the main text our sample
is sufficiently large that it includes embryos $\pm 10\%$ from the mean. This
is true also in the case of fixed OreR embryos where we find
$\sigma_{L}/\langle L\rangle=0.038$ and $\sigma_{L}/\langle L\rangle=0.039$ in
the experimental ensembles used for the analysis of the gap and pair-rule
genes, respectively.
Figure A1: Distribution of live embryo lengths. Data from $N_{\rm em}=610$
embryos [49] analyzed in bins of size $\Delta L/\langle L\rangle=0.02$. The
mean embryo length $\langle L\rangle=490\pm 0.76\,\mu{\rm m}$. Overlaid is the
error bar indicating the standard deviation of $\sigma_{L}=0.0371\langle
L\rangle$, and each dot indicates the length of one of the embryos in our
sample.
## Appendix B Decomposing information
We want to give explicit expressions that allow us to decompose positional
information, as in Eq. (10), based on estimates from real data. We give more
detail than usual in hopes of making the analysis accessible to a broader
audience.
Concentrations can depend both on the absolute position $x$ and the length of
the embryo $L$. We can rewrite this as a dependence on the scaled position
$x_{s}=x/L$ and the length $L$. Thus we have
$P\left({\mathbf{g}}|\\{x,L\\}\right)=P\left({\mathbf{g}}|\\{x/L,L\\}\right)=P_{L}(\\{g_{\rm
i}\\}|x_{s}),$ (30)
where $P_{L}$ is a distribution formed only across embryos of length $L$.
Similarly, we expect that cells are uniformly distributed along the $x$ axis
up to the length $L$, so that
$P(x,L)={1\over L}\Theta(1-x_{s})P(L),$ (31)
where $\Theta$ is the unit step function. Then we can substitute into Eq. (8):
$\displaystyle I(\\{g_{\rm i}\\}\rightarrow\\{x,L\\})$ $\displaystyle=$
$\displaystyle\int d{\mathbf{g}}\int dx\int
dL\,P\left({\mathbf{g}}|\\{x,L\\}\right)P(x,L)\log_{2}\left[{{P\left({\mathbf{g}}|\\{x,L\\}\right)}\over{P\left({\mathbf{g}}\right)}}\right]$
(32) $\displaystyle=$ $\displaystyle\int
dL\,P(L)\int_{0}^{1}dx_{s}\int
d{\mathbf{g}}\,P_{L}({\mathbf{g}}|x_{s})\log_{2}\left[{{P_{L}({\mathbf{g}}|x_{s})}\over{P\left({\mathbf{g}}\right)}}\right].$
(33)
Now we insert a factor of unity:
$\displaystyle\log_{2}\left[{{P_{L}({\mathbf{g}}|x_{s})}\over{P\left({\mathbf{g}}\right)}}\right]$
$\displaystyle=$
$\displaystyle\log_{2}\left[{{P_{L}({\mathbf{g}}|x_{s})}\over{P\left({\mathbf{g}}\right)}}{{P({\mathbf{g}}|x_{s})}\over{P({\mathbf{g}}|x_{s})}}\right]$
(34) $\displaystyle=$
$\displaystyle\log_{2}\left[{{P({\mathbf{g}}|x_{s})}\over{P\left({\mathbf{g}}\right)}}\right]-\log_{2}\left[P({\mathbf{g}}|x_{s})\right]+\log_{2}\left[P_{L}({\mathbf{g}}|x_{s})\right].$
(35)
Substituting, we can write
$I({\mathbf{g}}\rightarrow\\{x,L\\})=I_{1}+I_{2}+I_{3},$ (36)
where the three components are
$\displaystyle I_{1}$ $\displaystyle=$ $\displaystyle\int_{0}^{1}dx_{s}\int
d{\mathbf{g}}\,P({\mathbf{g}}|x_{s})\log_{2}\left[{{P({\mathbf{g}}|x_{s})}\over{P\left({\mathbf{g}}\right)}}\right]$
(37) $\displaystyle I_{2}$ $\displaystyle=$
$\displaystyle-\int_{0}^{1}dx_{s}\int
d{\mathbf{g}}\,P({\mathbf{g}}|x_{s})\log_{2}[P({\mathbf{g}}|x_{s})]$ (38)
$\displaystyle I_{3}$ $\displaystyle=$ $\displaystyle\int
dL\,P(L)\int_{0}^{1}dx_{s}\int
d{\mathbf{g}}\,P_{L}({\mathbf{g}}|x_{s})\log_{2}[P_{L}({\mathbf{g}}|x_{s})].$
(39)
We can identify the three terms: First, $I_{1}$ is the information that the
concentrations of morphogens provide about the scaled position,
$I_{1}=I({\mathbf{g}}\rightarrow x/L).$ (40)
Second, $I_{2}$ is the entropy of the distribution of concentrations at a
particular value of scaled position $x_{s}$, averaged over this position,
$I_{2}=\langle S[P({\mathbf{g}}|x_{s})]\rangle_{x_{s}},$ (41)
where $S[Q]$ denotes the entropy of the distribution $Q$. Finally, $I_{3}$ is
the negative of the entropy of the same distribution but restricted to embryos
of length $L$, and then averaged also over $L$,
$I_{3}=-\langle S[P_{L}({\mathbf{g}}|x_{s})]\rangle_{x_{s},L}.$ (42)
Comparing with Eq. (10) we see that the deviation from scaling can be written
as the difference between two entropies, suitably averaged:
$\Delta I=\langle S[P({\mathbf{g}}|x_{s})]\rangle_{x_{s}}-\langle
S[P_{L}({\mathbf{g}}|x_{s})]\rangle_{x_{s},L}.$ (43)
This has a very simple interpretation: There is a deviation from scaling if
specifying the length of the embryo reduces the entropy of fluctuations in
morphogen concentration at a given scaled position.
These general expressions simplify enormously in the case where we have only a
single morphogen and the conditional distributions are Gaussian. In this case
$\displaystyle P(g|x_{s})$ $\displaystyle=$
$\displaystyle{1\over{Z(x_{s})}}\exp\left[-{1\over 2}\chi^{2}(g;x_{s})\right]$
(44) $\displaystyle\chi^{2}(g;x_{s})$ $\displaystyle=$
$\displaystyle{{[g-\langle g(x_{s})\rangle]^{2}}\over{\sigma_{g}^{2}(x_{s})}}$
(45) $\displaystyle Z(x_{s})$ $\displaystyle=$
$\displaystyle\sqrt{2\pi\sigma_{g}^{2}(x_{s})},$ (46)
where $\langle g(x_{s})\rangle$ is the mean and $\sigma_{g}^{2}(x_{s})$ is the
variance of $g$ at scaled positions $x_{s}$. Importantly the entropy of a
Gaussian distribution does not depend on the mean, and we have [37, 38]
$S\left[P(g|x_{s})\right]={1\over 2}\log_{2}\left[2\pi
e\sigma_{g}^{2}(x_{s})\right].$ (47)
Thus we find the deviation from scaling is
$\Delta I={1\over
2}\langle\log_{2}[\sigma_{g}^{2}(x_{s})]\rangle_{x_{s}}-{1\over
2}\langle\log_{2}[\sigma_{g}^{2}(x_{s}|L)]\rangle_{x_{s},L},$ (48)
where $\sigma_{g}^{2}(x_{s}|L)$ is the variance in concentration at scaled
position $x_{s}$ across embryos of length $L$. In other words, there is a
deviation from scaling if the variance in morphogen concentration is reduced
by knowing the length of the embryo.
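For concreteness, here is a minimal sketch of Eq. (48), with assumed array shapes and binning choices; it is our own illustration, not the code used for the paper:

```python
# Estimate Delta I in the Gaussian approximation, Eq. (48). Assumed inputs:
# g has shape (n_embryos, n_positions) of normalized expression levels,
# L has shape (n_embryos,) of embryo lengths.
import numpy as np

def delta_I_gaussian(g, L, n_L_bins=5):
    var_all = g.var(axis=0)                    # sigma_g^2(x_s), all embryos
    edges = np.quantile(L, np.linspace(0.0, 1.0, n_L_bins + 1))
    log_var_L = []
    for k in range(n_L_bins):
        hi = (L <= edges[k + 1]) if k == n_L_bins - 1 else (L < edges[k + 1])
        sel = (L >= edges[k]) & hi
        log_var_L.append(np.log2(g[sel].var(axis=0)))   # sigma_g^2(x_s | L)
    # equal-population bins along L, so plain means implement the averages
    return 0.5 * np.log2(var_all).mean() - 0.5 * np.mean(log_var_L)
```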
This result can be generalized to multiple morphogens if we stay in the
Gaussian approximation. Now with $d$ genes, we have
$\displaystyle P({\mathbf{g}}|x_{s})$ $\displaystyle=$
$\displaystyle{1\over{Z(x_{s})}}\exp\left[-{1\over 2}\chi^{2}(\\{g_{\rm
i}\\};x_{s})\right]$ (49) $\displaystyle\chi^{2}(\\{g_{\rm i}\\};x_{s})$
$\displaystyle=$ $\displaystyle\sum_{{\rm i}=1}^{d}\sum_{{\rm j}=1}^{d}[g_{\rm
i}-\langle g_{\rm i}(x_{s})\rangle][\Sigma^{-1}(x_{s})]_{\rm ij}[g_{\rm
j}-\langle g_{\rm j}(x_{s})\rangle]$ (50) $\displaystyle Z(x_{s})$
$\displaystyle=$
$\displaystyle\left[(2\pi)^{d}||\Sigma(x_{s})||\right]^{1/2},$ (51)
where $\Sigma(x_{s})$ is the covariance matrix of fluctuations in
concentration at scaled position $x_{s}$ and $||\Sigma(x_{s})||$ is the
determinant of this matrix. Following the same logic as in the case of one
gene we have
$\Delta I={1\over
2}\langle\log_{2}\left[||\Sigma(x_{s})||\right]\rangle_{x_{s}}-{1\over
2}\langle\log_{2}\left[||\Sigma(x_{s}|L)||\right]\rangle_{x_{s},L}.$ (52)
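In a numerical implementation the determinants are best handled through their logarithms; a hedged sketch extending the single-gene code above:

```python
# Multi-gene version of Eq. (52): replace log variances by log determinants
# of the gene-gene covariance, computed stably with np.linalg.slogdet.
# g_slice is assumed to have shape (n_embryos, n_genes) at one position.
import numpy as np

def log2_det_cov(g_slice):
    sign, logdet = np.linalg.slogdet(np.cov(g_slice, rowvar=False))
    return logdet / np.log(2.0)    # log2 ||Sigma(x_s)||
```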
Even if $P_{L}({\mathbf{g}}|x_{s})$ is perfectly Gaussian, averaging over $L$
could generate non-Gaussian behavior in $P({\mathbf{g}}|x_{s})$. We are
neglecting this here, but since we find that $\Delta I$ is very small, and for
the gap genes consistent with $\Delta I=0$, both the conditional and averaged
distributions are very nearly Gaussian, as seen in Fig. 4B.
## Appendix C Cephalic furrow and scale invariance
Upon the onset of gastrulation (i.e., three hours after fertilization), the
cephalic furrow (CF) emerges as the first macroscopic morphological feature
along the anterior–posterior axis in the developing fly embryo. It results
from collective cell movement that can be seen using bright-field microscopy.
There are hints in early experiments that this marker is positioned very
precisely [25]. Modern experiments show that, as a fraction of the embryo
length $L$, CF position $x_{\rm CF}$ is reproducible to nearly $1\%$ accuracy
[49].
When we plot CF position in absolute units as a function of embryo length we
observe a linear relationship with zero intercept (Fig. A2A). The slope of
this fit is well within 1% of the mean scaled position $\langle f_{\rm
CF}\rangle$. More generally, all the discrete positional markers that we track
(CF, pair-rule stripes, gap boundaries) have absolute positions that vary
linearly with embryo length; the intercepts of the best-fit linear relations
are zero; the slopes match the mean scaled positions of the markers (Fig. A2B)
as predicted by scale invariance [Eq. (3)]; and the precision of this match is
better than $1\%$.
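A sketch of this fit, with toy arrays standing in for the measured lengths and CF positions:

```python
# Test of proportionality (Fig. A2): scale invariance predicts that the
# best-fit line of absolute CF position vs. length has zero intercept and
# slope equal to the mean scaled position, Eq. (3). Toy data below.
import numpy as np

rng = np.random.default_rng(4)
L = rng.normal(490.0, 18.0, size=600)                # toy lengths (microns)
x_CF = 0.34 * L + rng.normal(0.0, 1.5, size=600)     # toy CF positions

slope, intercept = np.polyfit(L, x_CF, 1)
print(f"slope = {slope:.4f}, intercept = {intercept:.2f} um, "
      f"<x_CF/L> = {np.mean(x_CF / L):.4f}")
```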
Figure A2: Cephalic furrow and the proportionality of scaling. (A) The
absolute position of the cephalic furrow measured in live embryos [49] is
proportional to the embryo length. Red line is the best fit with 95%
confidence intervals shown as dashed lines. The entire fit is shown in the
inset, emphasizing that the intercept is zero. (B) Slopes of absolute position
vs. embryo length for multiple positional markers, each plotted vs. its mean
scaled position. (C) Replotting of data in (B) to show that slopes and scaled
positions are equal within $1\%$, as predicted for perfect scaling [Eq. (3)].
## Appendix D Aspects of the gene expression data
We analyze gene expression patterns for the pair-rule genes, the gap genes,
and the maternal input Bicoid. In each case, the concentration of the protein
is inferred from the intensity of a fluorescence signal. In each case images
are collected by focusing on the midsagittal plane, the extent of the embryo
is defined by thresholding the fluorescence intensity, and to avoid geometric
distortions we avoid the $5\%$ of the embryo at both the anterior and
posterior poles. Fluorescence intensities are averaged over a small area, as
shown in the inset to Fig. 1B, sliding along the dorsal rim of the embryo. In
live embryos, we can keep track of time during nc14 directly, while in fixed
embryos we use the progress of the cellularization as a clock with precision
$\sim 1\,{\rm min}$.
For each gene $\rm i$ we measure an intensity $I_{\rm i}^{\alpha}(x_{s})$ as a
function of scaled position in embryo $\alpha$. In each experiment, we
normalize by assuming that the minimum mean concentration is zero and we
choose units such that the maximum mean concentration is one. This defines
$g_{\rm i}^{\alpha}(x_{s})={1\over{S_{\rm i}}}\left[I_{\rm
i}^{\alpha}(x_{s})-B_{\rm i}\right],$ (53)
where the background is
$B_{\rm i}=\min_{x_{s}}\langle I_{\rm i}^{\alpha}(x_{s})\rangle$ (54)
and the scale factor is
$S_{\rm i}=\max_{x_{s}}\langle I_{\rm
i}^{\alpha}(x_{s})\rangle-\min_{x_{s}}\langle I_{\rm
i}^{\alpha}(x_{s})\rangle.$ (55)
Importantly there is no freedom to normalize the profiles measured in
individual embryos, which would distort our estimates of noise and variability
[59].
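A direct transcription of Eqs. (53)-(55), assuming intensities arranged as an (embryos x positions) array, makes the shared normalization explicit:

```python
# One background B_i and one scale S_i per gene, Eqs. (54)-(55), computed
# from the mean profile and applied to every embryo, Eq. (53); there is no
# per-embryo freedom. I is assumed to have shape (n_embryos, n_positions).
import numpy as np

def normalize(I):
    mean_profile = I.mean(axis=0)        # <I_i(x_s)> across embryos
    B = mean_profile.min()               # background, Eq. (54)
    S = mean_profile.max() - B           # scale factor, Eq. (55)
    return (I - B) / S                   # g_i^alpha(x_s), Eq. (53)
```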
Data on three pair-rule genes—Eve, Prd, and Run—are taken from Ref. [33]. We
fit the sum of seven Gaussians to each profile, identifying stripe positions
with the centers of the Gaussians. We have also used the average peak profiles
as templates [23] and made more restricted fits to small segments of the peaks
[27]; results are the same with all three methods. Corrections for the drift
of peak position vs. time in nc14 were made as described in the main text.
Figure A3: Apparent variance of gap gene expression across multiple
experimental sessions. Results from eight sessions are largely reproducible
for each of the four genes. In regions where mean expression levels are near
zero, variances typically are $\sigma_{g}^{2}\ll 10^{-3}$, except in a handful
of sessions with highly variable backgrounds; these are excluded from further
analysis.
Simultaneous measurements on all four gap genes also were drawn from
experiments described in Ref. [33]. Because the analyses done here are so
demanding of data, we tried to merge data from as many independent
experimental sessions as possible. Most quantities are extremely reproducible
from session to session, but in a handful of sessions, we found anomalously
large variations in background fluorescence across the individual embryos.
Concretely, if we measure the variance of expression levels for the individual
genes, we typically find that $\sigma_{g}^{2}(x_{s})\ll 10^{-3}$ in regions of
$x_{s}$ with near zero mean (Fig. A3). In a few sessions, these fluctuations
in background are much larger, and these sessions are excluded; more
precisely, since all genes are measured simultaneously, excess background
variance in one gene is sufficient to exclude those data. This leaves five
independent sessions with a total of $N_{\rm em}=301$ embryos which we pool
into one data set for all further analyses.
For the analysis of gap gene expression boundaries, we mark the points that
are half-maximal along the sharp slopes, as indicated in Fig. 2. For the weak
peak of Kni expression near $x_{s}=0.33$ we fitted a Gaussian profile and took
the marker as the center of the Gaussian.
Gap gene profiles vary slowly but significantly throughout nc14. If we don’t
treat this variation explicitly it can be confused for noise, resulting in a
substantial overestimate of the variances and entropies. To separate the
temporal dynamics of the gap genes from their noise level, we follow Ref. [32]
and detrend the variations at each position, generalizing the treatment of the
stripe positions in Fig. 1D. The alternative is to focus only on a small
window of time [33], but this limits the size of the data set we can use.
Another systematic source of variation is the dependence of gap gene profiles
on the dorsal-ventral coordinate [32]. Previous work thus has been very strict
in analyzing embryos with narrowly defined orientations. To expand our data
set we are less strict, but this is problematic for the Kni profiles in the
range $0.15<x_{s}<0.45$, which contains a small peak. When analyzing Kni
alone, or all four gap genes together, we exclude this region. The alternative
is to analyze the other three genes together across the full length of the
anterior–posterior axis; results for $\Delta I$ are the same.
Figure A4: Fluctuations in gap gene expression are approximately Gaussian.
Distributions of z-scored fluctuations, as in Eq. (56), are estimated for each
individual gap gene, pooled across positions and embryos; error bars are
standard deviations. Black curves are Gaussians with zero mean and unit
variance.
An important assumption for our analysis is that the distribution of gene
expression at a given anterior–posterior position is Gaussian, as shown
previously [47]. For completeness, we reproduce this result for our larger
data set. In a single embryo $\alpha$ we observe a gene expression level
$g^{\alpha}(x_{s})$ at scaled position $x_{s}$. We compute the mean and
standard deviation across all the embryos in our ensemble and define
normalized deviations or z-scores
$\Delta^{\alpha}(x_{s})={{g^{\alpha}(x_{s})-\langle
g(x_{s})\rangle}\over{\sigma_{g}(x_{s})}}.$ (56)
We pool across all $\alpha=1,\,2,\,\cdots,\,N_{\rm em}$ embryos and across all
positions $x_{s}$ to estimate the probability density $P(\Delta)$. Results are
in Fig. A4 for each of the four gap genes.
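A minimal sketch of this test, with a toy array in place of the measured profiles:

```python
# z-scores of Eq. (56), pooled over embryos and positions; the histogram of
# the pooled values is then compared with a unit Gaussian (Fig. A4).
import numpy as np

rng = np.random.default_rng(2)
g = rng.normal(1.0, 0.1, size=(300, 80))     # toy (embryos x positions)

z = ((g - g.mean(axis=0)) / g.std(axis=0, ddof=1)).ravel()
hist, edges = np.histogram(z, bins=np.arange(-4.0, 4.1, 0.25), density=True)
```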
Finally, measurements of Bicoid concentration are taken from fluorescent
imaging of live embryos expressing a Bicoid-GFP fusion [49]; we consider only
strain 2XA, which has the Bcd dosage closest to that of wild-type flies. With
live measurements we can choose a time window, in this case $t=16\pm
2\,\rm{min}$ after the start of nc14, avoiding any temporal detrending while
still including $N_{\rm em}=582$ embryos. Some measurements with missing data
points along the length of the embryo were excluded from this set.
## Appendix E Estimating $\Delta I$ from limited data
Entropy and information depend on probability distributions, not just on their
moments, and thus are especially difficult to estimate from a finite sample of
data. Further, the entropy is a nonlinear function of the probabilities, and
so random errors in probability become systematic errors in our estimates of
information. This problem was appreciated in the very first attempts to use
information theory in the analysis of experiments on biological systems [56].
In the subsequent decades, many approaches have been developed, driven
especially by the analysis of neural coding. The approach we take here follows
the discussion in Appendix A.8 of Ref. [38]. Rather than just saying that we
follow established methods, we repeat some details that can be found in other
contexts in hopes that our presentation thus will be more accessible.
If we estimate an information-theoretic quantity such as $\Delta I$ in Eq.
(43) based on data from measurements in $N_{\rm em}$ independent embryos, then
with any simple estimation procedure our estimates will be biased:
$\Delta I=\Delta I_{\infty}+{{A(N_{\rm bins})}\over{N_{\rm em}}}+{{B(N_{\rm
bins})}\over{N_{\rm em}^{2}}}+\cdots.$ (57)
Here $\Delta I_{\infty}$ is the true value of $\Delta I$ which we would
observe if we could collect an infinite number of samples. The notation
reminds us that if we make bins along some continuous axis, then the size of
the corrections at finite $N_{\rm em}$ depends on the number of bins $N_{\rm
bins}$. With more bins the corrections are larger, which means that a naive
estimate with a fixed number of embryos will depend on the bin size. The hope
is that we can find a regime in which the extrapolated $\Delta I_{\infty}$ is
independent of $N_{\rm bins}$.
It is important that Eq. (57) is not just a guess, but a prediction that can
be derived theoretically. Theory also gives values for the coefficients $A$
and $B$, but these depend on details such as the independence of samples; the
form is more general. This suggests a strategy in which we vary the number of
embryos that we include in our analysis and look for the predicted systematic
dependence on $N_{\rm em}$. If we can see this behavior then we can feel
confident in fitting to Eq. (57) and extracting an estimate $\Delta
I_{\infty}$ [38].
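The fit itself is elementary; a sketch with hypothetical values of $\Delta I$ at several subsample sizes:

```python
# Extrapolation of Eq. (57): fit Delta I against 1/N_em with a quadratic;
# the intercept estimates Delta I_infinity. The numbers are placeholders.
import numpy as np

N_em = np.array([50, 100, 150, 200, 250, 300])
dI = np.array([0.052, 0.027, 0.018, 0.014, 0.011, 0.009])  # hypothetical

B, A, dI_inf = np.polyfit(1.0 / N_em, dI, deg=2)
print(f"extrapolated Delta I_infinity = {dI_inf:.4f} bits")
```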
This estimation procedure is illustrated by Fig. 4 in the main text and by
Fig. A5. When we vary the number of embryos that we include in our analysis,
we can choose at random from the total number available, so we have a path
also to estimating error bars (below). In Fig. 4A we analyze $\Delta I$ for
the spatial profiles of Hb expression using the Gaussian approximation of Eq.
(48). We have to make estimates of the variance as a function of the scaled
coordinate $x_{s}$ and the embryo length $L$. As explained in the main text,
we choose bins of $\Delta x_{s}=0.01$, consistent with the observed precision
of the pair-rule stripes and with earlier work [47, 33]. Along the $L$ axis we
use adaptive bins, that is bins with boundaries chosen so that the number of
embryos in each bin is as nearly equal as possible; these bins are chosen
based on the full experimental ensembles, and not readjusted as we choose
smaller samples at random.
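Adaptive binning of this kind is a one-liner with quantiles; a minimal sketch:

```python
# Equal-population ("adaptive") bins along L, with edges fixed once from
# the full ensemble and reused for every random subsample.
import numpy as np

def adaptive_edges(L, n_bins):
    return np.quantile(L, np.linspace(0.0, 1.0, n_bins + 1))

def bin_index(L, edges):
    idx = np.searchsorted(edges, L, side="right") - 1
    return np.clip(idx, 0, len(edges) - 2)   # maximum L goes in the last bin
```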
In Figure 4A we have chosen $N_{\rm bins}=5$ adaptive bins along the $L$ axis,
and we see a clean dependence of $\Delta I$ on $N_{\rm em}$ as predicted in Eq.
(57). The dependence on $N_{\rm em}$ is dominated by the linear term $A$,
although some curvature is detectable. The quality of the fit to Eq. (57) is
very good, and the extrapolation is to $\Delta I_{\infty}=0$ with a precision
of better than $0.01\,{\rm bits}$.
In Figure A5 we see how the plot of $\Delta I$ vs. $N_{\rm em}$ changes as we
vary $N_{\rm bins}=5,\,10,\,15,\,20$. With more bins, there are fewer embryos
in each bin, which means that the minimum number of embryos that we need for a
meaningful analysis is larger. Increasing the number of bins might reveal
otherwise hidden information, but also increases the size of systematic
errors. Comparing across the panels in Fig. A5 we see that at fixed $N_{\rm
em}$ the apparent $\Delta I$ increases with $N_{\rm bins}$, and if we didn’t
explore the full dependence on $N_{\rm em}$ we might be tempted to conclude
that we are indeed revealing extra information. But this is not the case,
since the plots at all values of $N_{\rm bins}$ extrapolate to zero within
error bars.
Figure A5: Consistent estimates of $\Delta I$ with varying $N_{\rm bins}$. (A)
Repeats the results of Fig. 4A on $\Delta I$ vs. $N_{\rm em}$ for Hb, analyzed
with $N_{\rm bins}=5$ adaptive bins along the $L$ axis. (B) As in (A) with
$N_{\rm bins}=10$; (C) with $N_{\rm bins}=15$; and (D) with $N_{\rm bins}=20$.
Systematic errors are large at fixed $N_{\rm em}$ and increasing $N_{\rm
bins}$, but the extrapolation $N_{\rm em}\rightarrow\infty$ is within error
bars of $\Delta I=0$ in each case.
One useful test of these extrapolation methods is to be sure that we arrive at
zero information in those cases where we know that the answer must be zero. As
an example, if we randomly permute or shuffle the data we can break
correlations that lead to nonzero information. In this case, if we randomly
reassign lengths $L$ to the embryos, then we must have $\Delta I=0$. This is
illustrated in Fig. A6, using the example of Bcd. Here the real data
extrapolate to a nonzero value of $\Delta I$ (Fig. 5), and when we shuffle we
still see significantly nonzero values at $N_{\rm em}\sim 100$. But using the
whole $N_{\rm em}$ dependence we can see this extrapolates smoothly to zero,
as it should.
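The shuffle itself is a single permutation; a sketch, reusing the delta_I_gaussian function from the earlier code block in this appendix:

```python
# Null control: permuting the length labels destroys any real dependence of
# the profiles on L, so the extrapolated Delta I must vanish. Toy data here;
# delta_I_gaussian is the sketch defined earlier in this appendix.
import numpy as np

rng = np.random.default_rng(3)
g = rng.normal(1.0, 0.1, size=(300, 80))   # toy profiles
L = rng.normal(490.0, 18.0, size=300)      # toy lengths (microns)

dI_null = delta_I_gaussian(g, rng.permutation(L))   # should scatter about 0
```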
Figure A6: Recovering $\Delta I=0$ in shuffled data. We permute the lengths
$L$ of the embryos at random and repeat the analysis of the Bcd profiles.
While the real data extrapolate to nonzero $\Delta I$ (Fig. 5C), here we
recover $\Delta I=0$ as expected.
An essential part of this analysis is the estimation of error bars. For
reasonably large $N_{\rm em}$ the systematic and random errors are additive,
and the variance of random errors scales $\propto 1/N_{\rm em}$ as usual. This
means that if we compute the variance of $\Delta I$ across random halves of
the data, and divide by two, we should have a good estimate of the variance in
$\Delta I$ based on all of the data. If this error bar $\sigma_{\Delta I}$ is
correct, and our extrapolation is consistent, then when the true $\Delta I$ is
zero, as with shuffled data, we can form a z-score,
$z=\Delta I_{\infty}/\sigma_{\Delta I}$, and $z$ should be Gaussian with zero
mean and unit variance. We can test this because the extrapolation procedure
involves taking random subsamples of the data, each of which generates a
slightly different value of $\Delta I_{\infty}$. Figure A7 shows the
distribution $P(z)$ obtained from a shuffled version of the Hb data,
illustrating a good match to the expected Gaussian distribution; the deviation
is a bias toward smaller $z$, suggesting that our estimates of the error bars
may be a bit conservative.
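A sketch of the half-sample procedure, with estimate() standing in for the full pipeline (binning, Eq. (48), extrapolation):

```python
# If random errors scale as 1/N_em, the variance of Delta I across random
# halves of the data, divided by two, estimates the variance of the
# full-sample Delta I; estimate(g, L) is a placeholder for the pipeline.
import numpy as np

def half_sample_sigma(g, L, estimate, n_repeats=100, seed=0):
    rng = np.random.default_rng(seed)
    n = len(L)
    halves = [estimate(g[sel], L[sel])
              for sel in (rng.choice(n, size=n // 2, replace=False)
                          for _ in range(n_repeats))]
    return np.sqrt(np.var(halves, ddof=1) / 2.0)
# z-score for the null test: z = Delta I_infinity / sigma_{Delta I}
```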
Figure A7: Distribution of errors in $\Delta I$. The distribution of errors of
$\Delta I$ was estimated by repeating 5000 times the entire procedure leading
to Fig. 3A, on shuffled versions of the Hb data, where we know the true value
of $\Delta I$ is 0 bits. We calculate the z-score based on our estimates of
$\Delta I$ and error bars $\sigma_{\Delta I}$. Red histogram shows the
probability distribution $P(z)$ in bins of size $\Delta z=0.1$. Shaded area is
the uncertainty estimated through bootstrapping; black line is a Gaussian
distribution with $\langle z\rangle=0$ and $\langle z^{2}\rangle=1$.
Now that we have control over both the systematic and random errors in
estimating $\Delta I$, we summarize the results. We have done separate
analyses for each of the gap genes, for all four gap genes together, and for
the maternal input Bicoid. As we see in Fig. A8A, all results for the gap
genes are consistent with $\Delta I=0\,{\rm bits}$ within error bars, while
for Bicoid we find a significantly nonzero value. These quantities are perhaps
best expressed as fractions of the information conveyed about scaled position,
as shown in Fig. A8B; estimates of $I({\mathbf{g}}\rightarrow x_{s})$ are from
Refs. [47, 55].
Figure A8: Summary of deviations from scale invariance. (A) Extrapolated
$\Delta I$ with error bars for Bicoid, each gap gene individually, and the
combination of all gap genes. (B) Deviation from scale invariance as a
percentage of the information about scaled position [47, 55].
## References
* Ishimatsu _et al._ [2018] K. Ishimatsu, T. W. Hiscock, Z. M. Collins, D. W. K. Sari, K. Lischer, D. L. Richmond, Y. Bessho, T. Matsui, and S. G. Megason, Size-reduced embryos reveal a gradient scaling-based mechanism for zebrafish somite formation, Development 145, dev161257 (2018).
* Almuedo-Castillo _et al._ [2018] M. Almuedo-Castillo, A. Bläßle, D. Mörsdorf, L. Marcon, G. H. Soh, K. W. Rogers, A. F. Schier, and P. Müller, Scale-invariant patterning by size-dependent inhibition of Nodal signalling, Nature Cell Biology 20, 1032 (2018).
* Leibovich _et al._ [2020] A. Leibovich, T. Edri, S. L. Klein, S. A. Moody, and A. Fainsod, Natural size variation among embryos leads to the corresponding scaling in gene expression, Developmental Biology 462, 165 (2020).
* Huxley [1932] J. S. Huxley, _Problems of Relative Growth_ (Meuthen, London, 1932).
* McMahon and Bonner [1983] T. A. McMahon and J. T. Bonner, _On Size and Life_ (Scientific American Library, New York, 1983).
* West _et al._ [1997] G. B. West, J. H. Brown, and B. J. Enquist, A general model for the origin of allometric scaling laws in biology, Science 276, 122 (1997).
* Kepler [2010] J. Kepler, _The Six–Cornered Snowflake: A New Year’s Gift_ (Paul Dry Books, Philadelphia, PA, 2010) translated from the 1611 Edition by J Bromberg.
* Bénard [1900] H. Bénard, Les tourbillons cellulaires dans une nappe liquide transportent de la chaleur par convection en regime permanent, Ann Chim Phys 7, 62 (1900).
* Lord Rayleigh [1916] Lord Rayleigh, On convection currents in a horizontal layer of fluid, when the higher temperature is on the under side, Phil Mag 32, 529 (1916).
* Flesselles _et al._ [1991] J.-M. Flesselles, A. J. Simon, and A. J. Libchaber, Dynamics of one-dimensional interfaces: An experimentalist’s view, Advances in Physics 40, 1 (1991).
* Cross and Hohenberg [1993] M. C. Cross and P. C. Hohenberg, Pattern formation outside of equilibrium, Reviews of Modern Physics 65, 851 (1993).
* Lappa [2009] M. Lappa, _Thermal Convection: Patterns, Evolution and Stability_ (John Wiley and Sons, New York, 2009).
* Langer [1989] J. S. Langer, Dendrites, viscous fingers, and the theory of pattern formation, Science 243, 1150 (1989).
* Turing [1952] A. M. Turing, The chemical basis of morphogenesis, Philos Trans R Soc Lond B 237, 37 (1952).
* Scott and Carroll [1987] M. P. Scott and S. B. Carroll, The segmentation and homeotic gene network in early Drosophila development, Cell 51, P689 (1987).
* Jaeger and Verd [2020] J. Jaeger and B. Verd, Dynamic positional information: Patterning mechanism versus precision in gradient-driven systems, Current Topics in Developmental Biology 137, 219 (2020).
* Tkačik and Gregor [2021] G. Tkačik and T. Gregor, The many bits of positional information, Development 148, dev176065 (2021).
* Houchmandzadeh _et al._ [2002] B. Houchmandzadeh, E. Wieschaus, and S. Leibler, Establishment of developmental precision and proportions in the early Drosophila embryo, Nature 415, 798 (2002).
* Surkova _et al._ [2008] S. Surkova, D. Kosman, K. Kozlov, E. Myasnikova, A. A. Samsonova, A. Spirov, C. E. Vanario-Alonso, M. Samsonova, J. Reinitz, _et al._ , Characterization of the Drosophila segment determination morphome, Developmental Biology 313, 844 (2008).
* Holloway _et al._ [2006] D. M. Holloway, L. G. Harrison, D. Kosman, C. E. Vanario-Alonso, and A. V. Spirov, Analysis of pattern precision shows that drosophila segmentation develops substantial independence from gradients of maternal gene products, Developmental Dynamics 235, 2949 (2006).
* Lott _et al._ [2007] S. E. Lott, M. Kreitman, A. Palsson, E. Alekseeva, and M. Z. Ludwig, Canalization of segmentation and its evolution in Drosophila, Proceedings of the National Academy of Sciences (USA) 104, 10926 (2007).
* Miles _et al._ [2011] C. M. Miles, S. E. Lott, C. L. Luengo Hendriks, M. Z. Ludwig, Manu, C. L. Williams, and M. Kreitman, Artificial selection on egg size perturbs early pattern formation in Drosophila melanogaster, Evolution 65, 33 (2011).
* Antonetti _et al._ [2018] V. Antonetti, W. Bialek, T. Gregor, G. Muhaxheri, M. Petkova, and M. Scheeler, Precise spatial scaling in the early fly embryo, arXiv:1812.11384 (2018).
* Driever and Nüsslein-Volhard [1988a] W. Driever and C. Nüsslein-Volhard, A gradient of bicoid protein in Drosophila embryos, Cell 54, 83 (1988a).
* Driever and Nüsslein-Volhard [1988b] W. Driever and C. Nüsslein-Volhard, The bicoid protein determines position in the Drosophila embryo in a concentration-dependent manner, Cell 54, 95 (1988b).
* Krotov _et al._ [2014] D. Krotov, J. O. Dubuis, T. Gregor, and W. Bialek, Morphogenesis at criticality, Proceedings of the National Academy of Sciences (USA) 111, 3683 (2014).
* McGough _et al._ [2023] L. McGough, H. Casademunt, M. Nikolić, M. Petkova, T. Gregor, and W. Bialek, Finding the last bits of positional information, arXiv:2312.05963 (2023).
* Manu _et al._ [2009] Manu, S. Surkova, A. V. Spirov, V. V. Gursky, H. Janssens, A.-R. Kim, O. Radulescu, C. E. Vanario-Alonso, D. H. Sharp, M. Samsonova, and J. Reinitz, Canalization of gene expression and domain shifts in the Drosophila blastoderm by dynamical attractors, PLoS Computational Biology 5, e1000303 (2009).
* Vakulenko _et al._ [2009] S. Vakulenko, Manu, J. Reinitz, and O. Radulescu, Size regulation in the segmentation of Drosophila: Interacting interfaces between localized domains of gene expression ensure robust spatial patterning, Physical Review Letters 103, 168102 (2009).
* Wolpert [1969] L. Wolpert, Positional information and the spatial pattern of cellular differentiation, Journal of Theoretical Biology 25, 1 (1969).
* Lawrence [1992] P. A. Lawrence, _The Making of a Fly: The Genetics of Animal Design_ (Blackwell Scientific Publications Ltd, Oxford, 1992).
* Dubuis _et al._ [2013a] J. O. Dubuis, R. Samanta, and T. Gregor, Accurate measurements of dynamics and reproducibility in small genetic networks, Molecular Systems Biology 9, 639 (2013a).
* Petkova _et al._ [2019] M. D. Petkova, G. Tkačik, W. Bialek, E. F. Wieschaus, and T. Gregor, Optimal decoding of cellular identities in a genetic network, Cell 176, 844 (2019).
* Gregor _et al._ [2007a] T. Gregor, E. F. Wieschaus, A. P. McGregor, W. Bialek, and D. W. Tank, Stability and nuclear dynamics of the bicoid morphogen gradient, Cell 130, 141 (2007a).
* Smith [2015] E. M. Smith, _Scaling and Regulation of Gene Expression in the Developing Fly Embryo_ , Ph.D. thesis, Princeton University (2015).
* Shannon [1948] C. E. Shannon, A mathematical theory of communication, Bell System Technical Journal 27, 379 (1948).
* Cover and Thomas [1991] T. M. Cover and J. A. Thomas, _Elements of Information Theory_ (Wiley and Sons, New York, 1991).
* Bialek [2012] W. Bialek, _Biophysics: Searching for Principles_ (Princeton University Press, 2012).
* Howard and ten Wolde [2005] M. Howard and P. R. ten Wolde, Finding the center reliably: Robust patterns of developmental gene expression, Physical Review Letters 95, 208103 (2005).
* Houchmandzadeh _et al._ [2005] B. Houchmandzadeh, E. Wieschaus, and S. Leibler, Precise domain specification in the developing Drosophila embryo, Physical Review E 72, 061920 (2005).
* McHale _et al._ [2006] P. McHale, W.-J. Rappel, and H. Levine, Embryonic pattern scaling achieved by oppositely directed morphogen gradients, Physical Biology 3, 107 (2006).
* Čapek and Müller [2019] D. Čapek and P. Müller, Positional information and tissue scaling during development and regeneration, Development 146, dev177709 (2019).
* Frasch _et al._ [1988] M. Frasch, R. Warrior, J. Tugwood, and M. Levine, Molecular analysis of even-skipped mutants in Drosophila development, Genes and Development 2, 1824 (1988).
* Jaeger _et al._ [2004a] J. Jaeger, S. Surkova, M. Blagov, H. Janssens, D. Kosman, K. N. Kozlov, E. Myasnikova, C. E. Vanario-Alonso, M. Samsonova, D. H. Sharp, and J. Reinitz, Dynamic control of positional information in the early Drosophila embryo, Nature 430, 368 (2004a).
* Jaeger _et al._ [2004b] J. Jaeger, M. Blagov, D. Kosman, K. N. Kozlov, Manu, E. Myasnikova, S. Surkova, C. E. Vanario-Alonso, M. Samsonova, D. H. Sharp, and J. Reinitz, Dynamical analysis of regulatory interactions in the gap gene system of Drosophila melanogaster, Genetics 167, 1721 (2004b).
* Bothma _et al._ [2014] J. P. Bothma, H. G. Garcia, E. Esposito, G. Schlissel, T. Gregor, and M. Levine, Dynamic regulation of eve stripe 2 expression reveals transcriptional bursts in living Drosophila embryos, Proceedings of the National Academy of Sciences (USA) 111, 10598 (2014).
* Dubuis _et al._ [2013b] J. O. Dubuis, G. Tkačik, E. F. Wieschaus, T. Gregor, and W. Bialek, Positional information, in bits, Proceedings of the National Academy of Sciences (USA) 110, 16301 (2013b).
* Vincent _et al._ [1997] A. Vincent, J. T. Blankenship, and E. Wieschaus, Integration of the head and trunk segmentation systems controls cephalic furrow formation in Drosophila, Development 124, 3747 (1997).
* Liu _et al._ [2013] F. Liu, A. H. Morrison, and T. Gregor, Dynamic interpretation of maternal inputs by the Drosophila segmentation gene network, Proceedings of the National Academy of Sciences (USA) 110, 6724 (2013).
* Jaeger [2011] J. Jaeger, The gap gene network, Cellular and Molecular Life Sciences 68, 243 (2011).
* Kauffman _et al._ [1978] S. A. Kauffman, R. M. Shymko, and K. Trabert, Control of sequential compartment formation in Drosophila: A uniform mechanism may control the locations of successive binary developmental commitments, Science 199, 259 (1978).
* Meinhardt [1986] H. Meinhardt, Hierarchical inductions of cell states: A model for segmentation in Drosophila, Journal of Cell Science Supplement 4, 357 (1986).
* Albert and Othmer [2003] R. Albert and H. G. Othmer, The topology of the regulatory interactions predicts the expression pattern of the segment polarity genes in Drosophila melanogaster, Journal of Theoretical Biology 223, 1 (2003).
* Spirov and Holloway [2003] A. V. Spirov and D. M. Holloway, Making the body plan: Precision in the genetic hierarchy of Drosophila embryo segmentation, In Silico Biology 3, 89 (2003).
* Tkačik _et al._ [2015] G. Tkačik, J. O. Dubuis, M. D. Petkova, and T. Gregor, Positional information, positional error, and readout precision in morphogenesis: A mathematical framework, Genetics 199, 39 (2015).
* Miller [1955] G. Miller, Note on the bias of information estimates, in _Information Theory in Psychology II–B: Problems and Methods_ , edited by H. Quastler (Free Press, Glencoe IL, 1955) pp. 95–100.
* Gregor _et al._ [2005] T. Gregor, W. Bialek, R. R. de Ruyter van Steveninck, D. W. Tank, and E. F. Wieschaus, Diffusion and scaling during early embryonic pattern formation, Proceedings of the National Academy of Sciences (USA) 102, 18403 (2005).
* Gregor _et al._ [2008] T. Gregor, A. P. McGregor, and E. F. Wieschaus, Shape and function of the bicoid morphogen gradient in dipteran species with different sized embryos, Developmental Biology 316, 350 (2008).
* Gregor _et al._ [2007b] T. Gregor, D. W. Tank, E. F. Wieschaus, and W. Bialek, Probing the limits to positional information, Cell 130, 153 (2007b).
* Petkova _et al._ [2014] M. D. Petkova, S. C. Little, F. Liu, and T. Gregor, Maternal origins of developmental reproducibility, Current Biology 24, 1283 (2014).
* Gierer and Meinhardt [1972] A. Gierer and H. Meinhardt, A theory of biological pattern formation, Kybernetik 12, 30 (1972).
* Meinhardt [2008] H. Meinhardt, Models of biological pattern formation: From elementary steps to the organization of embryonic axes, Current Topics in Developmental Biology 81, 1 (2008).
* Seyboldt _et al._ [2022] R. Seyboldt, J. Lavoie, A. Henry, and P. François, Latent space of a small genetic network: Geometry of dynamics and information, Proceedings of the National Academy of Sciences (USA) 119, e2113651119 (2022).
* Seung [1996] H. S. Seung, How the brain keeps the eyes still, Proceedings of the National Academy of Sciences (USA) 93, 13339 (1996).
* Merle _et al._ [2023] M. Merle, L. Friedman, C. Chureau, A. Shoushtarizadeh, and T. Gregor, Precise and scalable self-organization in mammalian pseudo-embryos, arXiv:2303.17522 (2023).
# Software Architecture Challenges in Integrating Hybrid Classical-Quantum
Systems
Vlad Stirbu University of Jyväskylä
Jyväskylä, Finland
<EMAIL_ADDRESS> Tommi Mikkonen University of Jyväskylä
Jyväskylä, Finland
<EMAIL_ADDRESS>
###### Abstract
The emergence of quantum computing proposes a revolutionary paradigm that can
radically transform numerous scientific and industrial application domains.
The ability of quantum computers to scale computations exponentially implies
better performance and efficiency for certain algorithmic tasks than current
computers provide. However, to gain benefit from such improvement, quantum
computers must be integrated with existing software systems, a process that is
not straightforward. In this paper, we investigate challenges that emerge from
building larger hybrid classical-quantum computers, and discuss some
approaches that could be employed to overcome these challenges.
###### Index Terms:
Quantum software, software architecture, classic-quantum systems
## I Introduction
Quantum computers have demonstrated the potential to revolutionize various
fields, including cryptography, drug discovery, materials science, and machine
learning, by leveraging the principles of quantum mechanics. However, the
current generation of quantum computers, known as noisy intermediate-scale
quantum (NISQ) computers, suffer from noise and errors, making them
challenging to operate. Additionally, the development of quantum algorithms
requires specialized knowledge not readily available to the majority of
software professionals. These factors pose a significant entry barrier for
leveraging the unique capabilities of quantum systems.
For the existing base of business applications, classical computing has
already proven its capabilities across a diverse range of solutions. However,
some of the computations they must perform can be accelerated with quantum
computing, much like GPUs are used today. Therefore, quantum systems should
not function in isolation, but they must coexist and interoperate with
classical systems. To this end, software architects play a crucial role in
achieving seamless integration, while simultaneously designing systems that
effectively meet the unique requirements of businesses.
To address the challenges associated with this integration, this paper focuses
on designing hybrid systems that integrate quantum and classical computing,
aiming to overcome architectural, design, and operational hurdles. In doing
so, we look at the software development lifecycle, the technology stack of
hybrid classic-quantum systems, and deployment techniques used today.
## II Background
The software development lifecycle (SDLC) of hybrid classic-quantum
applications consists of a multi-faceted approach, as depicted in Fig. 1. At
the top level, the classical software development process starts by
identifying user needs and deriving them into system requirements. These
requirements are transformed into a design and implemented. The result is
verified against the requirements and validated against user needs. Once the
software system enters the operational phase, any detected anomalies are used
to identify potential new system requirements, if necessary. A dedicated track
for quantum components is followed within the SDLC [1], specific to the
implementation of quantum technology. The requirements for these components
are converted into a design, which is subsequently implemented on classic
computers, verified on simulators or real quantum hardware, and integrated
into the larger software system. During the operational phase, the quantum
software components are executed on real hardware. Scheduling ensures
efficient utilization of scarce quantum hardware, while monitoring
capabilities enable the detection of anomalies throughout the process.
Figure 1: Software development lifecycle of a hybrid classical-quantum system
A typical hybrid classic-quantum software system is understood as a classical
program that has one or more software components that are implemented using
quantum technology, as depicted in Fig. 2. A quantum component utilises
quantum algorithms [2] that are transformed into quantum circuits using a
toolkit like Cirq111https://quantumai.google/cirq or
Qiskit222https://qiskit.org. The quantum circuit describes quantum
computations in a machine-independent language using quantum assembly (QASM)
[3]. This circuit is translated, by a computer that controls the quantum
computer, into a machine-specific circuit and a sequence of pulses that control
the operation of individual hardware qubits [4]. Due to the scarcity of
quantum hardware and the process of preparing the individual runs, the quantum
task execution process is lengthy, having the characteristics of batch
processing in classical computing. In fact, techniques used in batch
processing, such as Slurm [5], can be used to implement this step, which adds
indirection to the underlying software architecture.
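To make this concrete, here is a hedged sketch of the execution path written against Qiskit's generic interface; `backend` is a placeholder for whatever simulator or hardware provider is available and is not defined here:

```python
# Machine-independent circuit -> machine-specific circuit -> queued job.
# `backend` must be supplied by a provider; one possibility (Qiskit >= 1.0)
# is BasicSimulator from qiskit.providers.basic_provider.
from qiskit import QuantumCircuit, transpile

qc = QuantumCircuit(2, 2)            # general-purpose circuit (cf. QASM [3])
qc.h(0)
qc.cx(0, 1)
qc.measure([0, 1], [0, 1])

tqc = transpile(qc, backend)         # machine-specific translation
job = backend.run(tqc, shots=1024)   # queued, batch-style execution
# the caller proceeds asynchronously and collects results later:
counts = job.result().get_counts()
```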
Figure 2: Quantum computing model: components and interfaces
## III Architectural concerns
### III-A Design – Algorithms, data structures and APIs
Quantum algorithms are designed specifically to take advantage of quantum
mechanics properties such as quantum superposition and entanglement. They
provide advantages over classical equivalents for specific areas, such as
factoring or linear search. Software architects should evaluate the
feasibility to achieve quantum advantage during the component requirements
phase of the SDLC. They must ensure that the needed computational resources
are available and that data can be mapped from the classic to quantum domains.
For example, TensorFlow Quantum333https://www.tensorflow.org/quantum is a
library for rapid prototyping of hybrid quantum-classical ML models that
focuses on quantum data and hybrid quantum-classical machine learning models.
The batch nature of the quantum task execution has a profound impact on the
software architecture of a hybrid classic-quantum system. The jobs submitted
for execution are queued and scheduled using fair-share policies. As the task
execution results are not available immediately, the software system should
favour asynchronous communication. Further, the system designers must consider
the security and privacy aspects of executing tasks on quantum hardware
infrastructure shared by several organizations.
### III-B Operations – Implementation leak and resource allocation
Quantum applications written using popular libraries like Cirq and Qiskit have
a monolithic nature. They combine into a single imperative program the
application logic components, the general purpose quantum circuit design, the
quantum hardware selection (e.g. backend configuration), and the
transformation into the machine-specific circuit that is actually executed. To
make the software architecture modular, the general purpose part needs to be
separated from the quantum backend. Essential backend information, such as the
quantum volume (the qubit connectedness), needs to be accessed at runtime so
that the actual hardware selection can be done dynamically based on dynamic
factors, like hardware availability (if there are multiple providers) and cost
estimates. For example, Kubernetes serves as an extensible orchestration
platform that enables efficient scheduling of classic computing jobs. The
capabilities of quantum computers can be exposed in this computing
environment, while the scheduler can be enhanced to efficiently handle quantum
jobs.
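A hedged sketch of what such runtime selection might look like; the metadata fields are hypothetical (real providers expose similar information under various names):

```python
# Dynamic backend selection: hardware choice moves out of the program and
# into a runtime decision based on availability, queue length, and cost.
def pick_backend(backends, min_qubits):
    eligible = [b for b in backends
                if b["online"] and b["qubits"] >= min_qubits]
    # prefer short queues, break ties on estimated cost per shot
    return min(eligible, key=lambda b: (b["queue_length"], b["cost_per_shot"]))
```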
## IV Conclusions and future steps
The fundamental differences in programming models and the varying levels of
maturity in tools and practices between the classical and quantum domains
makes their seamless integration difficult. To gain insights and firsthand
experience, we intend to collaborate with the users of
HELMI444https://docs.csc.fi/computing/quantum-computing/overview/ quantum
computer, in an effort to overcome the integration barriers between classical
and quantum computing.
## Acknowledgement
This work has been supported by the Academy of Finland (project DEQSE 349945)
and Business Finland (project TORQS 8582/31/2022).
## References
* [1] B. Weder, J. Barzen, F. Leymann, and D. Vietz, Quantum Software Development Lifecycle, pp. 61–83. Cham: Springer International Publishing, 2022.
* [2] A. Montanaro, “Quantum algorithms: an overview,” npj Quantum Information, vol. 2, p. 15023, Jan 2016.
* [3] A. Cross, A. Javadi-Abhari, T. Alexander, N. De Beaudrap, L. S. Bishop, S. Heidel, C. A. Ryan, P. Sivarajah, J. Smolin, J. M. Gambetta, and B. R. Johnson, “Openqasm 3: A broader and deeper quantum assembly language,” ACM Transactions on Quantum Computing, vol. 3, sep 2022.
* [4] T. Alexander, N. Kanazawa, D. J. Egger, L. Capelluto, C. J. Wood, A. Javadi-Abhari, and D. C. McKay, “Qiskit pulse: programming quantum computers through the cloud with pulses,” Quantum Science and Technology, vol. 5, p. 044006, aug 2020.
* [5] A. B. Yoo, M. A. Jette, and M. Grondona, “Slurm: Simple Linux utility for resource management,” in Job Scheduling Strategies for Parallel Processing: 9th International Workshop, JSSPP 2003, Seattle, WA, USA, June 24, 2003. Revised Paper 9, pp. 44–60, Springer, 2003.
# On a field tensor for gravity and electromagnetism
M<EMAIL_ADDRESS>
Faculty of Technology, Natural Sciences and Maritime Sciences
University of South-Eastern Norway
Porsgrunn, Norway
###### Abstract
We show that a rank-three Lanczos-type tensor field is an appropriate choice
to describe relativistic electromagnetic and gravitational effects. More
precisely, we identify the irreducible field-decompositions of this tensor as
gravitational and electromagnetic fields. A set of divergence equations is
proposed as field equations for the unified field.
## 1 Introduction
In the early-to-mid 1900s, a number of articles were published on the
unification of electromagnetism and gravitation. This program of unification
has been put under the umbrella term Unified Field Theories (UFTs) — see [4]
for a comprehensive review. But due to the remarkable achievement of Quantum
Field theory in unifying the nuclear and electromagnetic forces, the UFT
program has been replaced by the pursuit of a theory of Quantum Gravity. Since
spinors are needed in the description of fermions [13], it is essential for a
unified field theory to admit spinor structure in order to be a viable theory
for the description of e.g. electrons. Geroch has shown in [2] that it is a
necessary and sufficient condition for a non-compact space time to admit a
spinor structure if it carries a global field of orthonormal tetrads. The
frame formalism also reflect the role of observers in physics, and is thus a
natural formalism both in classical relativity and quantum field theory [12].
Furthermore, due to the nonlinearity of the Einstein equations, a metric
distributional solution describing a point particle is not possible in general
relativity [3]. We refer to [10] for a review of the use of distributions in
general relativity. On the other hand, the Maxwell equations do admit a
solution representing a charged point particle. In the present work we explore
the possibility of a theory which both admits a spinor structure — by
employing a global tetrad field — and whose field equations are linear with
respect to the sources and field tensor, in striking similarity with the
Maxwell equations. We remark that we do not make use of the spinor structure
in the present article. A proper investigation of the spinorial equations and
detailed analysis of the spinor fields will be published elsewhere.
## 2 Geometric considerations
Let $(\mathcal{M},{\bm{g}})$ denote a spacetime represented by a 4-dimensional
manifold, $\mathcal{M}$, with a Lorentzian metric ${\bm{g}}$. The motion of
particles of some matter filling spacetime give rise to a natural splitting by
constructing frames comoving with the flow lines of the particles. This has
the further advantage that it does not require a foliation of $\mathcal{M}$.
We shall denote the tangent vector to the flow lines as ${\bm{u}}$ satisfying
${\bm{g}}({\bm{u}},{\bm{u}})=-1.$
At each point $p\in\mathcal{M}$ the frame field $\\{{\bm{e}}_{a}\\}$ is such
that
${\bm{g}}({\bm{e}}_{a},{\bm{e}}_{b})=\eta_{ab},$
where $\eta_{ab}$ are the frame components of the Minkowski metric. The frames
$\\{{\bm{e}}_{a}\\}$ give rise to a co-frame, $\\{\mathbf{\omega}^{a}\\}$
satisfying
$\langle{{\bm{e}}_{a},\mathbf{{\bm{\omega}}}^{b}\rangle}=\delta_{a}{}^{b}.$
In the following all indices will be given in terms of the frame and co-frame
unless otherwise stated. The metric tensor gives rise to a natural connection
$\mathbf{\nabla}$ such that $\mathbf{\nabla}{\bm{g}}=0$, which is the metric
compatibility condition. In terms of the frames, this condition takes the form
$\Gamma_{a}{}^{b}{}_{c}\eta_{bd}+\Gamma_{a}{}^{b}{}_{d}\eta_{bc}=0,$ (1)
where the frame connection coefficients are defined by the directional
derivative along the direction of the frame indices
$\nabla_{a}{\bm{e}}_{b}=\Gamma_{a}{}^{c}{}_{b}{\bm{e}}_{c},\qquad\nabla_{a}=\langle{{\bm{e}}_{a},\mathbf{\nabla}\rangle}.$
Thus, for a rank-two tensor $\bm{\Omega}$ we have that the frame components of
its derivative are given by
$\nabla_{a}\Omega_{bc}=e_{a}[\Omega_{bc}]-\Gamma_{a}{}^{d}{}_{b}\Omega_{dc}-\Gamma_{a}{}^{d}{}_{c}\Omega_{bd}.$
Furthermore, if the connection $\mathbf{\nabla}$ is torsion-free, we have
that
$\Sigma_{a}{}^{c}{}_{b}=0,$ (2)
where the frame components of the torsion tensor are defined by
$\Sigma_{a}{}^{c}{}_{b}{\bm{e}}_{c}=\left[{\bm{e}}_{a},{\bm{e}}_{b}\right]+\left(\Gamma_{a}{}^{c}{}_{b}-\Gamma_{b}{}^{c}{}_{a}\right){\bm{e}}_{c}.$
The commutator of two covariant derivatives may be expressed in terms of the
Riemann curvature tensor and the torsion tensor
$\displaystyle\nabla_{[a}\nabla_{b]}v^{c}=R^{c}{}_{dab}v^{d}+\Sigma_{a}{}^{d}{}_{b}\nabla_{d}v^{c},$
$\displaystyle\nabla_{[a}\nabla_{b]}w_{c}=-R^{d}{}_{cab}w_{d}+\Sigma_{a}{}^{d}{}_{b}\nabla_{d}w_{c}.$
The frame components of the Riemann curvature tensor are given by
$R^{c}{}_{dab}=\partial_{a}\Gamma_{b}{}^{c}{}_{d}-\partial_{b}\Gamma_{a}{}^{c}{}_{d}+\Gamma_{f}{}^{c}{}_{d}(\Gamma_{b}{}^{f}{}_{a}-\Gamma_{a}{}^{f}{}_{b})+\Gamma_{b}{}^{f}{}_{d}\Gamma_{a}{}^{c}{}_{f}-\Gamma_{a}{}^{f}{}_{d}\Gamma_{b}{}^{c}{}_{f}-\Sigma_{a}{}^{f}{}_{b}\Gamma_{f}{}^{c}{}_{d}$
(3)
—see [11] for details. The Riemann tensor has all the usual symmetries, and it
satisfies the Bianchi identity for a general connection
$\displaystyle
R^{d}{}_{[cab]}+\nabla_{[a}\Sigma_{b}{}^{d}{}_{c]}+\Sigma_{[a}{}^{e}{}_{b}\Sigma_{c]}{}^{d}{}_{e}=0,$
(4)
$\displaystyle\nabla_{[a}R^{d}{}_{|e|bc]}+\Sigma_{[a}{}^{f}{}_{b}R^{d}{}_{|e|c]f}=0.$
(5)
Furthermore, we recall that the Riemann tensor admits the _irreducible
decomposition_
$\displaystyle
R^{c}{}_{dab}=C^{c}{}_{dab}+2(\delta^{c}{}_{[a}L_{b]d}-\eta_{d[a}L_{b]}{}^{c}),$
(6)
with $C^{c}{}_{dab}$ the components of the _Weyl tensor_ and
$L_{ab}=\tfrac{1}{2}S_{ab}$, where
$S_{ab}\equiv R_{ab}-\frac{1}{6}R\eta_{ab}$ (7)
denotes the components of the _Schouten tensor_ in the normalization used in
the equations below. The connection
$\mathbf{\nabla}$ is called the Levi-Civita connection of ${\bm{g}}$ if it
satisfies (1) and (2). In what follows we will assume the connection to be
Levi-Civita.
### A projection formalism
At each point in the spacetime manifold $\mathcal{M}$ the flow lines give rise
to a tangent space which can be split into parts in the direction of
${\bm{u}}$ and those orthogonal. This means that without implying a foliation,
we may decompose every tensor defined at each point $p\in\mathcal{M}$ into its
orthogonal and timelike part. This may be done by contracting with
$\mathbf{u}$ and the projector defined as
$h_{a}{}^{b}\equiv\eta_{a}{}^{b}+u_{a}u^{b},\qquad{\bm{u}}=u^{a}\mathbf{e}_{a}.$
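The defining properties of the projector — idempotency, annihilation of ${\bm{u}}$, and trace 3 — are straightforward to confirm numerically. The following sketch (our own illustration, not part of the paper, assuming frame components with $\eta=\operatorname{diag}(-1,1,1,1)$) checks them for a boosted observer:

```python
import numpy as np

# Check h_a^b = eta_a^b + u_a u^b for a sample unit-timelike u.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
b = 0.3                                              # arbitrary boost parameter
u_up = np.array([np.cosh(b), np.sinh(b), 0.0, 0.0])  # u^a
u_dn = eta @ u_up                                    # u_a = eta_ab u^b

h = np.eye(4) + np.outer(u_dn, u_up)                 # h[a, b] ~ h_a{}^b

assert np.isclose(u_dn @ u_up, -1.0)                 # g(u, u) = -1
assert np.allclose(h @ h, h)                         # h_a^b h_b^c = h_a^c
assert np.allclose(h @ u_dn, 0.0)                    # h_a^b u_b = 0
assert np.isclose(np.trace(h), 3.0)                  # three spatial directions
```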
Thus, a tensor $T_{ab}$ may be split into its time-like, mixed and space-like
parts given, respectively, by
$T_{00}=u^{a}u^{b}T_{ab},\qquad T_{0c^{\prime}}=u^{a}h^{b}{}_{c}T_{ab},\qquad
T_{c^{\prime}d^{\prime}}=h^{a}{}_{c}h^{b}{}_{d}T_{ab},$
where ${}^{\prime}$ denotes that the remaining free indices are spatial — e.g.
$T_{a^{\prime}0}u^{a}=0$. Decomposing $\mathbf{\nabla u}$ we obtain
$\nabla_{a}u^{b}=\chi_{a}{}^{b}-u_{a}a^{b},$ (8)
where $\chi_{a}{}^{b}$ and $a^{b}$ are the components of the Weingarten tensor
and 4-acceleration, respectively, defined by
$\chi_{a}{}^{b}\equiv h_{a}{}^{c}\nabla_{c}u^{b},\qquad a^{b}\equiv
u^{c}\nabla_{c}u^{b}.$
We split $\chi_{ab}$ into its symmetric trace-free part and antisymmetric part
— i.e. we have,
$\chi_{(ab)}-\frac{1}{3}h_{ab}\chi\equiv\sigma_{ab},\qquad\chi_{[ab]}\equiv\omega_{ab}.$
In the literature (e.g. see [12] p. 217) $\chi\equiv\chi^{a}{}_{a}$, $\sigma_{ab}$ and
$\omega_{ab}$ are called, respectively, the expansion, shear and twist of
the congruence with four velocity ${\bm{u}}$. The decomposition (8) now takes
the form,
$\nabla_{a}u^{b}=\sigma_{a}{}^{b}+\frac{1}{3}h_{a}{}^{b}\chi+\omega_{a}{}^{b}-u_{a}a^{b}.$
(9)
The decomposition of the four-volume form is
$\epsilon_{abcd}=-2\left(u_{[a}\epsilon_{b]cd}-\epsilon_{ab[c}u_{d]}\right),\qquad\epsilon_{bcd}=\epsilon_{abcd}u^{a}.$
Given a tensor $T_{abc}$ which is antisymmetric in its two last indices, we
may construct the electric and magnetic parts with respect to $\mathbf{u}$. In
frame indices this is, respectively, defined by
$E_{cd}\equiv T_{abe}h_{c}{}^{a}h_{d}{}^{b}u^{e},\qquad B_{cd}\equiv
T^{\ast}{}_{abe}h_{c}{}^{a}h_{d}{}^{b}u^{e},$
where the Hodge dual operator, denoted by ∗, is defined by
$T^{\ast}{}_{abe}\equiv-\frac{1}{2}\epsilon^{mn}{}_{be}T_{amn},$
and has the property that
$T^{\ast\ast}{}_{abc}=-T_{abc}.$
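This property can be verified directly; the sketch below (our own check, with the assumed conventions $\eta=\operatorname{diag}(-1,1,1,1)$ and $\epsilon_{0123}=1$) confirms $T^{\ast\ast}{}_{abc}=-T_{abc}$ for a random tensor antisymmetric in its last two indices:

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1.0, 1.0, 1.0])

# Levi-Civita symbol with eps[0, 1, 2, 3] = +1
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    P = np.zeros((4, 4))
    P[range(4), p] = 1.0
    eps[p] = np.linalg.det(P)            # sign of the permutation

# eps^{mn}_{be}: raise the first two indices (eta is its own inverse)
eps_up = np.einsum('mp,nq,pqbe->mnbe', eta, eta, eps)

rng = np.random.default_rng(1)
T = rng.normal(size=(4, 4, 4))
T = T - T.transpose(0, 2, 1)             # enforce T_abc = T_a[bc]

dual = lambda S: -0.5 * np.einsum('mnbe,amn->abe', eps_up, S)
assert np.allclose(dual(dual(T)), -T)    # T** = -T in Lorentzian signature
```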
Depending on the symmetries and rank of the tensor, the above definition for
the electric and magnetic decomposition may vary slightly. Central to our
discussion is that $E_{ab}$ and $B_{ab}$ are spatial and symmetric.
## 3 The field tensor
We consider the rank three tensor ${\bm{Z}}$ (hereafter called the Z-tensor)
with the following symmetries,
$Z_{[abc]}=0,\qquad Z_{abc}=Z_{a[bc]}.$
It can be readily shown that the first symmetry property implies that
$Z_{cab}=2Z_{[ba]c}.$ (10)
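A quick way to convince oneself of (10) is a numerical spot-check (again our own sketch): take a random rank-3 array antisymmetric in its last two indices, project out its totally antisymmetric part so that $Z_{[abc]}=0$, and compare both sides index-wise:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4, 4))
W = W - W.transpose(0, 2, 1)           # W_abc = W_a[bc]

# totally antisymmetric part W_[abc]
W_tot = (W + W.transpose(2, 0, 1) + W.transpose(1, 2, 0)
         - W.transpose(0, 2, 1) - W.transpose(2, 1, 0) - W.transpose(1, 0, 2)) / 6

Z = W - W_tot                           # Z_abc = Z_a[bc] and Z_[abc] = 0

# identity (10): Z_cab = 2 Z_[ba]c = Z_bac - Z_abc
lhs = Z.transpose(1, 2, 0)              # lhs[a, b, c] = Z_cab
rhs = Z.transpose(1, 0, 2) - Z          # rhs[a, b, c] = Z_bac - Z_abc
assert np.allclose(lhs, rhs)
```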
The Hodge dual of the Z-tensor ${\bm{Z}}^{\ast}$ is defined in the customary
way by,
$Z^{\ast}{}_{abc}\equiv-\frac{1}{2}\epsilon_{bc}{}^{de}Z_{ade}.$
The frame fields ${\bm{e}}_{a}$ provide a natural 1+3 decomposition of
${\bm{Z}}$ and ${\bm{Z}}^{\ast}$ into parts in the direction of and orthogonal
to the flow ${\bm{u}}$. This is obtained by using the projector ${\bm{h}}$ as
described in Section 2. The decompositions read,
$\displaystyle
Z_{abc}=-2\eta_{a[b}P_{c]}+\epsilon_{bc}{}^{d}\Phi_{ad}+2u_{[b}\Psi_{c]a}-\epsilon^{d}{}_{bc}u_{a}Q_{d}+2\epsilon^{d}{}_{a[c}u_{b]}Q_{d},$
(11a) $\displaystyle
Z^{\ast}{}_{amn}=\epsilon_{mnb}u_{a}P^{b}-2\epsilon_{ab[m}u_{n]}P^{b}+2\Phi_{a[m}u_{n]}+\epsilon_{mnb}\Psi_{a}{}^{b}+2\eta_{a[n}Q_{m]},$
(11b)
where we have defined,
$\Psi_{ab}\equiv Z_{(a^{\prime}b^{\prime})0},\qquad\Phi_{ab}\equiv
Z^{\ast}_{(a^{\prime}b^{\prime})0},\qquad P_{a}\equiv Z_{a00},\qquad
Q_{a}\equiv Z^{\ast}_{a00}.$
The tensors $\Psi_{ab}$ and $\Phi_{ab}$ are by definition symmetric tensors
defined on the orthogonal space of ${\bm{u}}$ —i.e. one has that
$\Psi_{ac}u^{a}=0,\qquad\Phi_{ac}u^{a}=0.$
Furthermore, since $\epsilon_{abc}$, $\Psi_{ab}$ and $\Phi_{ab}$ are spatial
fields, it is readily shown that
$P_{0}=Q_{0}=0.$
The traces of the Z-tensor and its dual are,
$\displaystyle Z^{a}{}_{ba}=3P_{b}+\Psi u_{b},\qquad Z^{a}{}_{b}{}^{b}=0,$
(12) $\displaystyle Z^{\ast}{}^{a}{}_{ba}=3Q_{b}-\Phi u_{b},\qquad
Z^{\ast}{}^{a}{}_{b}{}^{b}=0,$ (13)
where,
$\Psi\equiv\Psi^{a}{}_{a},\qquad\Phi\equiv\Phi^{a}{}_{a}.$
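The first trace formula can be spot-checked numerically from the decomposition (11a); the sketch below (our own check, for a rest-frame observer with the assumed conventions $\eta=\operatorname{diag}(-1,1,1,1)$, $\epsilon_{0123}=1$) builds ${\bm{Z}}$ from random spatial fields and verifies $Z^{a}{}_{ba}=3P_{b}+\Psi u_{b}$:

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    M = np.zeros((4, 4))
    M[range(4), p] = 1.0
    eps4[p] = np.linalg.det(M)

u_up = np.array([1.0, 0.0, 0.0, 0.0])                # rest-frame u^a
u_dn = eta @ u_up
eps3 = np.einsum('a,abcd->bcd', u_up, eps4)          # eps_bcd = eps_abcd u^a

rng = np.random.default_rng(2)
S = rng.normal(size=(3, 3))
Psi = np.zeros((4, 4)); Psi[1:, 1:] = S + S.T        # spatial, symmetric
F = rng.normal(size=(3, 3))
Phi = np.zeros((4, 4)); Phi[1:, 1:] = F + F.T
P = np.array([0.0, *rng.normal(size=3)])             # spatial P_a
Q = np.array([0.0, *rng.normal(size=3)])             # spatial Q_a

# decomposition (11a); spatial indices are raised with +1, so eps3 enters directly
Z = (-(np.einsum('ab,c->abc', eta, P) - np.einsum('ac,b->abc', eta, P))
     + np.einsum('bcd,ad->abc', eps3, Phi)
     + np.einsum('b,ca->abc', u_dn, Psi) - np.einsum('c,ba->abc', u_dn, Psi)
     - np.einsum('dbc,a,d->abc', eps3, u_dn, Q)
     + np.einsum('dac,b,d->abc', eps3, u_dn, Q)
     - np.einsum('dab,c,d->abc', eps3, u_dn, Q))

trace = np.einsum('ad,dba->b', eta, Z)               # Z^a_{ba}
assert np.allclose(trace, 3*P + np.trace(Psi)*u_dn)  # eq. (12)
```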
The first trace in (12) implies that
$Z^{a}{}_{0a}=-\Psi,$ (14)
and the first trace in (13) together with the first symmetry property implies
that
$Z^{\ast}{}^{a}{}_{0a}=\Phi=0.$ (15)
###### Lemma 1.
Let ${\bm{Z}}$ be a tensor of rank 3 which is antisymmetric in its two last
indices and has the symmetry property $Z_{[abc]}=0$. Then the dual
field $Z^{\ast}_{(a^{\prime}b^{\prime})0}$ has vanishing trace.
We make the further assumption that $\Psi=0$ — i.e. we have that
$Z^{a}{}_{0a}=Z^{\ast}{}^{a}{}_{0a}=0.$
###### Remark 1.
The assumption that $\Psi=0$ is motivated by the fact that we want to relate
the fields $\bm{\Psi}$ and $\bm{\Phi}$ to the electric and magnetic part of
the Weyl tensor. Observe that our assumption is a weaker constraint than the
Lanczos algebraic gauge — e.g see [9], [5],
$Z^{a}{}_{ba}=0.$
In fact, the Lanczos gauge violates our assumption that the fields ${\bm{P}}$
and $\bm{\Psi}$ represent pure electric and gravitational fields,
respectively, and they can thus not be related in the way this gauge implies
— see equation (12).
###### Remark 2.
Observe that the absence of electric and magnetic fields is a necessary
condition for the Z-tensor to be a Cotton tensor.
## 4 Finding the field equations for the Z-tensor
In the theory we propose, both gravity and electromagnetism are represented in
terms of a field on spacetime. The geometry of $\mathcal{M}$ will be given by
the frame components, rather than the metric, and the connection coefficients
as outlined in the introduction. Equations for the frame and the connection are
given by the choice of propagation — e.g. Fermi propagation — and the
definition of the Riemann and the torsion tensor. For more details on the
geometric equations, the reader is referred to [7], [1] and [8]. In what
follows we shall focus the discussion on the fields presented in the previous
section — i.e. $\bm{\Psi}$, $\bm{\Phi}$, ${\bm{P}}$ and ${\bm{Q}}$. These will
be taken as the fundamental fields, from which we may construct the unified
field tensor ${\bm{Z}}$. We thus seek a set of equations for ${\bm{Z}}$ which
will reduce to the relativistic Maxwell equations in the limit of no
gravitational field, and the Bianchi equations in the limit of no
electromagnetic fields. We begin with the Maxwell equations.
We observe that due to the symmetry of ${\bm{Z}}$ and ${\bm{Z}}^{\ast}$, it is
natural to define the rank-two antisymmetric tensors ${\bm{F}}$ and
${\bm{F}}^{\ast}$ as follows,
$F_{bc}\equiv u^{a}Z_{abc},\qquad
F^{\ast}{}_{bc}\equiv-\dfrac{1}{2}\epsilon_{bc}{}^{mn}F_{mn}=u^{a}Z^{\ast}{}_{abc}.$
Using the decomposition of the Z-tensor, it is readily shown that
$F_{ab}=u_{b}P_{a}-u_{a}P_{b}+\epsilon_{abc}Q^{c},$
which is the right form of the Faraday tensor with $P_{a}$ and $Q_{a}$ as the
electric and magnetic fields, respectively. The Maxwell equations are then
given by
$\displaystyle\nabla^{b}F_{ab}$ $\displaystyle=J_{a},$ (16a)
$\displaystyle\nabla^{b}F^{\ast}{}_{ab}$ $\displaystyle=0,$ (16b)
which may be formulated as evolution and constraint equations for the electric
and magnetic fields — i.e.
$\displaystyle u^{a}h_{mb}\nabla_{a}E^{b}-\epsilon_{mab}\nabla^{b}B^{a}$
$\displaystyle=-a^{a}\epsilon_{mab}B^{b}+J^{a}h_{ma}+E^{a}\chi_{am}-E_{m}\chi^{a}{}_{a}.$
(17a) $\displaystyle\nabla_{a}E^{a}$
$\displaystyle=a^{a}E_{a}+u^{a}J_{a}-\epsilon_{abc}B^{a}\chi^{bc},$ (17b)
$\displaystyle u^{b}h^{d}{}_{a}\nabla_{b}B^{a}$
$\displaystyle=a^{b}E^{a}\epsilon^{d}{}_{ba}+B^{b}\chi_{b}{}^{d}-B^{d}\chi^{b}{}_{b}-\epsilon^{d}{}_{ba}\nabla^{a}E^{b},$
(17c) $\displaystyle\nabla_{b}B^{b}$
$\displaystyle=a^{b}B_{b}+E^{b}\epsilon_{bac}\chi^{ac}.$ (17d)
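As a consistency check of the Faraday form above (our own sketch, with the same assumed conventions), one can verify that for a rest-frame observer the decomposition places ${\bm{P}}$ and ${\bm{Q}}$ in the usual electric and magnetic slots of an antisymmetric field-strength tensor:

```python
import numpy as np
from itertools import permutations

eta = np.diag([-1.0, 1.0, 1.0, 1.0])
eps4 = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    M = np.zeros((4, 4))
    M[range(4), p] = 1.0
    eps4[p] = np.linalg.det(M)

u_up = np.array([1.0, 0.0, 0.0, 0.0])        # rest-frame observer u^a
u_dn = eta @ u_up                            # u_a = (-1, 0, 0, 0)
eps3 = np.einsum('a,abcd->bcd', u_up, eps4)  # eps_bcd = eps_abcd u^a

P_dn = np.array([0.0, 1.0, 2.0, 3.0])        # spatial P_a (P_a u^a = 0)
Q_up = np.array([0.0, 4.0, 5.0, 6.0])        # spatial Q^a

# F_ab = u_b P_a - u_a P_b + eps_abc Q^c
F = np.outer(P_dn, u_dn) - np.outer(u_dn, P_dn) + np.einsum('abc,c->ab', eps3, Q_up)

assert np.allclose(F, -F.T)                  # antisymmetric
assert np.allclose(F @ u_up, -P_dn)          # F_ab u^b = -P_a: electric part
assert np.isclose(F[1, 2], Q_up[3])          # F_12 = eps_123 Q^3: magnetic part
```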
We now turn to consider equations for the gravitational field. It is customary
here to study solutions of the Einstein field equations — i.e.
$R_{ab}-\dfrac{1}{2}Rg_{ab}=\tau_{ab}.$ (18)
But as we are seeking a theory where the geometry is given by the frame
components and the gravitational field is represented by the irreducible
components of the Weyl tensor, we will use the Bianchi identity (5) as field
equations. In this formalism the Einstein equations takes on the form of
constraint equations — see equation (20). Thus, the unknowns for the
gravitational field will be the electric $E_{ab}$ and magnetic $B_{ab}$ parts
of the Weyl tensor — i.e. we consider the equations
$\displaystyle
u^{a}h_{m}{}^{c}h_{n}{}^{d}\nabla_{a}E_{cd}+\epsilon_{mdc}h_{n}{}^{a}\nabla^{d}B_{a}{}^{c}$
$\displaystyle=-2a^{a}B_{(m}{}^{c}\epsilon_{n)ac}-2E_{mn}\chi^{a}{}_{a}-E_{ac}h_{mn}\chi^{ac}$
$\displaystyle+2E_{na}\chi^{a}{}_{m}+E_{ma}\chi_{n}{}^{a}-\tfrac{1}{2}u^{a}h_{m}{}^{c}h_{n}{}^{d}\nabla_{a}S_{cd}$
$\displaystyle+\tfrac{1}{2}u^{a}h_{m}{}^{c}h_{n}{}^{d}\nabla_{d}S_{ac}$ (19a)
$\displaystyle\nabla_{a}E_{d}{}^{a}$
$\displaystyle=a^{a}E_{da}+E_{ac}u_{d}\chi^{ac}-\epsilon_{dcf}B_{a}{}^{f}\chi^{ac}-\epsilon_{acf}B_{d}{}^{f}$
$\displaystyle\chi^{ac}-\tfrac{1}{2}u^{a}u^{c}\nabla_{c}S_{da}+\tfrac{1}{2}u^{a}u^{c}\nabla_{d}S_{ac},$
(19b) $\displaystyle
u^{a}h_{l}{}^{c}h_{n}{}^{d}\nabla_{a}B_{cd}-\epsilon_{dc(n}h_{l)}{}^{a}\nabla^{d}E_{a}{}^{c}$
$\displaystyle=2a^{a}E_{(n}{}^{c}\epsilon_{l)ac}-2B_{ln}\chi^{a}{}_{a}-B_{ac}h_{ln}\chi^{ac}$
$\displaystyle+2\chi^{a}{}_{(l}B_{n)a}+B_{a(n}\chi_{l)}{}^{a}+\tfrac{1}{2}\epsilon_{cd(n}h_{l)}{}^{a}\nabla^{d}S_{a}{}^{c},$
(19c) $\displaystyle h_{n}{}^{a}\nabla_{c}B_{a}{}^{c}$
$\displaystyle=a^{a}B_{na}-E_{c}{}^{d}\epsilon_{nad}\chi^{ac}+2E_{a}{}^{d}\epsilon_{ncd}\chi^{ac}$
$\displaystyle+\tfrac{1}{2}\epsilon_{ncd}u^{a}\nabla^{d}S_{a}{}^{c}$ (19d)
where $S_{ab}$ is the Schouten tensor, defined in the customary way — see
equation (7). If the Einstein equations are assumed, then the Schouten tensor
is related to the energy-momentum tensor $\tau_{ab}$ according to
$S_{ab}=\tau_{ab}-\tfrac{1}{3}\tau^{c}{}_{c}\ g_{ab}.$ (20)
Thus, a solution $(E_{ab},B_{ab})$ of the evolution equations (19a) and (19c),
satisfying the constraint equations (19b) and (19d), together with equation
(20) is equivalent to a metric solution of the Einstein equations (18) for a
given energy momentum tensor $\tau_{ab}$ — again the reader is referred to [6]
for more details.
Observe that $Z_{abc}$ contains all the fields necessary for a description of
both gravity and electromagnetism. That is, the spatial fields
($P_{a},Q_{a},\Psi_{ab},\Phi_{ab}$) have the correct rank, trace and symmetry
to represent ($E_{a},B_{a},E_{ab},B_{ab}$), respectively. The strategy for finding
the correct field equations for the unified field tensor $Z_{abc}$ is to
constrain the proposed equations so that they reduce to the Maxwell equations
and Bianchi equations in the case of no gravity and no electromagnetism,
respectively. That is, we must construct the equations such that $\bm{\Psi}$
and $\bm{\Phi}$ will be a solution of equations (19a) - (19d) when
$P_{a}=Q_{a}=0$. Similarly, $P_{a},Q_{a}$ are required to be a solution of
equations (17a) - (17d) in the limit of $\Psi_{ab}=\Phi_{ab}=0$.
Due to the form of the decomposition of ${\bm{Z}}$, we propose field equations
of the form,
$\displaystyle\nabla^{b}Z_{abc}$ $\displaystyle=T_{ac},$ (21a)
$\displaystyle\nabla^{b}Z^{\ast}{}_{abc}$ $\displaystyle=A_{ac}.$ (21b)
Note that as a consequence of the antisymmetry in the Z-tensor and the
symmetry of the Ricci tensor, it follows that
$\nabla^{c}T_{ac}=\nabla^{c}A_{ac}=0.$ (22)
###### Remark 3.
For generality we shall not impose symmetry in the indices $\\{a,c\\}$ — that
is, we do not require $\bm{T}$ and $\bm{A}$ to be symmetric tensors. Strictly
speaking, such an assumption should be made in order to study the equations in
the form that most resembles the Bianchi equations and the relativistic
Maxwell equations; furthermore, it would make the tensors $\bm{A}$ and
$\bm{T}$ divergence free.
In what follows, we will show that there exist tensors $A_{ab}$ and $T_{ab}$
such that the proposed field equations encompass the relativistic Maxwell
equations as well as the Bianchi equations. Recall that any tensor ${\bm{T}}$
may be decomposed in parts orthogonal and parallel to the four velocity
${\bm{u}}$ according to,
$T_{ab}=T_{a^{\prime}b^{\prime}}+T_{a^{\prime}0}u_{b}+T_{0b^{\prime}}u_{a}+T_{00}u_{a}u_{b}.$
We consider first the spatial components of the field equations — i.e.
$\displaystyle h_{m}{}^{a}h_{n}{}^{c}\nabla^{b}Z_{abc}$
$\displaystyle=T_{m^{\prime}n^{\prime}}$ (23a) $\displaystyle
h_{m}{}^{a}h_{n}{}^{c}\nabla^{b}Z{}^{\ast}_{abc}$
$\displaystyle=A_{m^{\prime}n^{\prime}}.$ (23b)
Using the decomposition of $Z_{abc}$ and $Z^{\ast}{}_{abc}$, equations (23a)
and (23b) are equivalent to,
$\displaystyle
u^{a}h_{m}{}^{b}h_{n}{}^{c}\nabla_{a}\Psi_{bc}+\epsilon_{mbc}h_{n}{}^{a}\nabla^{c}\Phi_{a}{}^{b}$
$\displaystyle=-a_{n}P_{m}+a^{a}\epsilon_{nab}\Phi_{m}{}^{b}+a^{a}\epsilon_{mab}\Phi_{n}{}^{b}-2\Psi_{mn}\chi^{a}{}_{a}$
(24a) $\displaystyle-
h_{mn}\Psi_{ab}\chi^{ab}+2\Psi_{na}\chi^{a}{}_{m}+\epsilon_{mna}Q^{a}\chi^{b}{}_{b}-\epsilon_{nab}Q^{a}\chi^{b}{}_{m}$
$\displaystyle+\epsilon_{mab}Q^{a}\chi^{b}{}_{n}+\Psi_{ma}\chi_{n}{}^{a}-h_{m}{}^{b}h_{n}{}^{c}P^{a}\nabla_{a}h_{bc}-h_{mn}\nabla_{a}P^{a}$
$\displaystyle+\epsilon_{mnb}u^{a}\nabla_{a}Q^{b}-\tfrac{1}{2}u^{a}h_{m}{}^{b}h_{n}{}^{c}\nabla_{a}S_{bc}+h_{n}{}^{a}P_{m}\nabla_{b}h_{a}{}^{b}$
$\displaystyle+h_{ma}h_{nb}\nabla^{b}P^{a}+\tfrac{1}{2}u^{a}h_{m}{}^{b}h_{n}{}^{c}\nabla_{c}S_{ab},$
$\displaystyle
u^{a}h_{m}{}^{b}h_{n}{}^{c}\nabla_{a}\Phi_{bc}-\epsilon_{mbc}h_{n}{}^{a}\nabla^{c}\Psi_{a}{}^{b}$
$\displaystyle=-2a^{a}\epsilon_{mab}\Psi_{n}{}^{b}+a_{n}Q_{m}-2\Phi_{mn}\chi^{a}{}_{a}-h_{mn}\Phi_{ab}\chi^{ab}$
(24b)
$\displaystyle+2\Phi_{ma}\chi^{a}{}_{n}+\epsilon_{mna}P^{a}\chi^{b}{}_{b}-\epsilon_{nab}P^{a}\chi^{b}{}_{m}+\epsilon_{mab}P^{a}\chi^{b}{}_{n}$
$\displaystyle+\Phi_{na}\chi_{m}{}^{a}+h_{m}{}^{b}h_{n}{}^{c}Q^{a}\nabla_{a}h_{bc}+\epsilon_{mnb}u^{a}\nabla_{a}P^{b}+\epsilon_{mnb}\nabla_{a}\Psi^{ab}$
$\displaystyle+h_{mn}\nabla_{a}Q^{a}-h_{n}{}^{a}Q_{m}\nabla_{b}h_{a}{}^{b}-h_{ma}h_{nb}\nabla^{b}Q^{a}-\tfrac{1}{2}\epsilon_{mbc}h_{n}{}^{a}\nabla^{c}S_{a}{}^{b}$
where we have defined,
$\displaystyle h_{mc}h_{na}T^{ac}$ $\displaystyle\equiv
a^{a}\epsilon_{nac}\Phi_{m}{}^{c}-\Psi_{mn}\chi^{a}{}_{a}-h_{mn}\Psi_{ac}\chi^{ac}+\Psi_{na}\chi^{a}{}_{m}$
(25a)
$\displaystyle+\Psi_{ma}\chi_{n}{}^{a}-\tfrac{1}{2}u^{a}h_{m}{}^{c}h_{n}{}^{d}\nabla_{a}S_{cd}+\tfrac{1}{2}u^{a}h_{m}{}^{c}h_{n}{}^{d}\nabla_{d}S_{ac}$
$\displaystyle A^{ac}h_{mc}h_{na}$ $\displaystyle\equiv
a^{a}\epsilon_{mac}\Psi_{n}{}^{c}+\Phi_{mn}\chi^{a}{}_{a}+h_{mn}\Phi_{ac}\chi^{ac}+\Phi_{na}\chi^{a}{}_{m}-2\Phi_{ma}\chi^{a}{}_{n}$
(25b)
$\displaystyle-\Phi_{na}\chi_{m}{}^{a}-\epsilon_{mnc}\nabla_{a}\Psi^{ac}+\tfrac{1}{2}\epsilon_{mcd}h_{n}{}^{a}\nabla^{d}S_{a}{}^{c}$
Thus the spatial components of ${\bm{T}}$ and ${\bm{A}}$ are determined by the
assumption that in the absence of electromagnetic fields, equations (23a) and
(23b) reduce to equations (19a) and (19d), respectively, under the
identifications $\Psi_{ab}=E_{ab}$ and $\Phi_{ab}=-B_{ab}$. Next we consider
mixed components. $T_{a^{\prime}0}$ and $A_{a^{\prime}0}$ are obtained by
comparing with the Bianchi constraint equations. We consider the equations
$\displaystyle h^{a}{}_{d}u^{c}\nabla^{b}Z_{abc}$
$\displaystyle=h^{a}{}_{d}u^{c}T_{ac},$ (26a) $\displaystyle
h^{a}{}_{d}u^{c}\nabla^{b}Z{}^{\ast}_{abc}$
$\displaystyle=h^{a}{}_{d}u^{c}A_{ac}.$ (26b)
Again, using the decomposition of the Z-tensor and its dual, (26a) and (26b)
are equivalent to
$\displaystyle h_{n}{}^{a}\nabla_{b}\Psi_{a}{}^{b}$
$\displaystyle=a^{a}\Psi_{na}+\epsilon_{nbc}\Phi_{a}{}^{c}\chi^{ab}+\epsilon_{abc}\Phi_{n}{}^{c}\chi^{ab}-P^{a}\chi_{na}$
(27a)
$\displaystyle-\tfrac{1}{2}u^{a}u^{b}h_{n}{}^{c}\nabla_{b}S_{ac}+\epsilon_{nab}\nabla^{b}Q^{a}+\tfrac{1}{2}u^{a}u^{b}h_{n}{}^{c}\nabla_{c}S_{ab},$
$\displaystyle h_{n}{}^{a}\nabla_{b}\Phi_{a}{}^{b}$
$\displaystyle=a^{a}\Phi_{na}-2\epsilon_{nbc}\Psi_{a}{}^{c}\chi^{ab}+\epsilon_{nac}\Psi_{b}{}^{c}\chi^{ab}+Q^{a}\chi_{na}$
(27b)
$\displaystyle+\epsilon_{nab}\nabla^{b}P^{a}-\tfrac{1}{2}\epsilon_{nbc}u^{a}\nabla^{c}S_{a}{}^{b},$
(27c)
where we have defined
$\displaystyle h^{b}{}_{d}u^{a}T_{ba}$
$\displaystyle\equiv\epsilon_{dcf}\Phi_{a}{}^{f}\chi^{ac}-\tfrac{1}{2}u^{a}u^{c}\nabla_{c}S_{da}+\tfrac{1}{2}u^{a}u^{c}\nabla_{d}S_{ac},$
(28a) $\displaystyle h^{b}{}_{d}u_{a}A_{n}{}^{a}$ $\displaystyle\equiv
2\epsilon_{ncd}\Psi_{a}{}^{d}\chi^{ac}-\epsilon_{nad}\Psi_{c}{}^{d}\chi^{ac}-\epsilon_{acd}\Psi_{n}{}^{d}\chi^{ac}$
(28b) $\displaystyle+\tfrac{1}{2}\epsilon_{ncd}u^{a}\nabla^{d}S_{a}{}^{c}.$
The other mixed components $T_{0b^{\prime}}$ and $A_{0b^{\prime}}$ are
determined by comparing with the relativistic Maxwell equations in the limit
of no gravitational fields:
$\displaystyle h^{c}{}_{d}u^{a}\nabla^{b}Z_{abc}$
$\displaystyle=h^{c}{}_{d}u^{a}T_{ac},$ (29a) $\displaystyle
h^{c}{}_{d}u^{a}\nabla^{b}Z{}^{\ast}_{abc}$
$\displaystyle=h^{c}{}_{d}u^{a}A_{ac}.$ (29b)
The decomposed equations are given by
$\displaystyle u^{a}h_{mb}\nabla_{a}P^{b}-\epsilon_{mab}\nabla^{b}Q^{a}$
$\displaystyle=J^{a}h_{ma}-a^{a}\Psi_{ma}-a^{a}\epsilon_{mab}Q^{b}+P^{a}\chi_{am}$
(30a) $\displaystyle-
P_{m}\chi^{a}{}_{a}+\epsilon_{mac}\Phi_{b}{}^{c}\chi^{ab},$ $\displaystyle
u^{a}h_{mb}\nabla_{a}Q^{b}+\epsilon_{mab}\nabla^{b}P^{a}$
$\displaystyle=a^{a}\epsilon_{mab}P^{b}+a^{a}\Phi_{ma}+Q^{a}\chi_{am}-Q_{m}\chi^{a}{}_{a}$
$\displaystyle+\epsilon_{mac}\Psi_{b}{}^{c}\chi^{ab},$ (30b)
where,
$\displaystyle u^{a}h_{mb}T_{a}{}^{b}$
$\displaystyle=-J^{a}h_{ma}+a^{a}\epsilon_{mab}Q^{b}-P^{a}\chi_{am}+P_{m}\chi^{a}{}_{a},$
(31a) $\displaystyle A^{ba}u_{b}h^{d}{}_{a}$
$\displaystyle=-a^{b}\epsilon^{d}{}_{ba}P^{a}-Q^{b}\
\chi_{b}{}^{d}+Q^{d}\chi^{b}{}_{b}.$ (31b)
Finally, we find $T_{00}$ and $A_{00}$ by using the electromagnetic divergence
equations. Thus, we consider the equations,
$\displaystyle u^{c}u^{a}\nabla^{b}Z_{abc}$ $\displaystyle=u^{c}u^{a}T_{ac},$
(32a) $\displaystyle u^{c}u^{a}\nabla^{b}Z{}^{\ast}_{abc}$
$\displaystyle=u^{c}u^{a}A_{ac}.$ (32b)
Again, by the decomposition of the Z-tensor and its dual, these are equivalent
to the divergence equations
$\displaystyle\nabla_{a}P^{a}$
$\displaystyle=u^{a}J_{a}+a^{a}P_{a}-\Psi_{ab}\chi^{ab}-\epsilon_{abc}Q^{a}\chi^{bc},$
(33a) $\displaystyle\nabla_{a}Q^{a}$
$\displaystyle=a^{a}Q_{a}+\Phi_{ab}\chi^{ab}+\epsilon_{abc}P^{a}\chi^{bc}.$
(33b)
As before, we have in the above equations defined,
$\displaystyle u^{a}u^{b}T_{ab}$
$\displaystyle=-u^{a}J_{a}+\epsilon_{abc}Q^{a}\chi^{bc}$ (34a) $\displaystyle
A^{ba}u_{a}u_{b}$ $\displaystyle=-\epsilon_{bac}P^{b}\chi^{ac}.$ (34b)
We have thereby shown that if $\bm{A}$ and ${\bm{T}}$ are given by,
$\displaystyle T_{ab}$
$\displaystyle=-u_{a}u_{b}u^{m}J_{m}-u_{a}J^{m}h_{bm}+a^{m}\epsilon_{amn}\Phi_{b}{}^{n}+a^{m}\epsilon_{bmn}u_{a}Q^{n}$
$\displaystyle\quad+\Psi_{bm}\chi_{a}{}^{m}-u_{a}P^{m}\chi_{mb}+\Psi_{am}\chi^{m}{}_{b}+u_{a}P_{b}\chi^{m}{}_{m}-\Psi_{ab}\chi^{m}{}_{m}$
$\displaystyle\quad+\epsilon_{anc}u_{b}\Phi_{m}{}^{c}\chi^{mn}-h_{ab}\Psi_{mn}\chi^{mn}+\epsilon_{mnc}u_{a}u_{b}Q^{m}\chi^{nc}+\tfrac{1}{2}u_{b}u^{m}u^{n}h_{a}{}^{c}\nabla_{c}S_{mn}$
$\displaystyle\quad-\tfrac{1}{2}u^{m}h_{a}{}^{n}h_{b}{}^{c}\nabla_{m}S_{nc}-\tfrac{1}{2}u_{b}u^{m}u^{n}h_{a}{}^{c}\nabla_{n}S_{mc}+\tfrac{1}{2}u^{m}h_{a}{}^{n}h_{b}{}^{c}\nabla_{n}S_{mc},$
(35a) $\displaystyle A_{ab}$
$\displaystyle=-a^{m}\epsilon_{bmn}u_{a}P^{n}+a^{m}\epsilon_{bmn}\Psi_{a}{}^{n}-\Phi_{am}\chi_{b}{}^{m}-u_{a}Q^{m}\chi_{mb}$
$\displaystyle\quad-2\Phi_{bm}\chi^{m}{}_{a}+\Phi_{am}\chi^{m}{}_{b}+\Phi_{ab}\chi^{m}{}_{m}+u_{a}Q_{b}\chi^{m}{}_{m}$
$\displaystyle\quad+h_{ab}\Phi_{mn}\chi^{mn}-\epsilon_{mnc}u_{b}\Psi_{a}{}^{c}\chi^{mn}+2\epsilon_{anc}u_{b}\Psi_{m}{}^{c}\chi^{mn}-\epsilon_{amc}u_{b}\Psi_{n}{}^{c}\chi^{mn}$
$\displaystyle\quad-\epsilon_{mnc}u_{a}u_{b}P^{m}\chi^{nc}+\tfrac{1}{2}\epsilon_{anc}u_{b}u^{m}\nabla^{c}S_{m}{}^{n}+\tfrac{1}{2}\epsilon_{bnc}h_{a}{}^{m}\nabla^{c}S_{m}{}^{n}+\epsilon_{abn}\nabla_{m}\Psi^{mn},$
(35b)
then there exist solutions of the field equations (21a) and (21b) which are
also solutions of the Bianchi equations and the relativistic Maxwell equations
in the appropriate limits. They will also be solutions of the Einstein
equations if the constraint equation (20) is imposed. Observe that the
divergence of $\bm{\Psi}$ in equation (35b) will vanish if symmetry of
${\bm{A}}$ and ${\bm{T}}$ is assumed. Since the tensors ${\bm{T}}$ and
${\bm{A}}$ act as sources for the field tensor, it is worth mentioning that it
is perturbations of the four velocity and the Schouten tensor which are
responsible for a non-vanishing source. That is, a solution
$(\bm{e}_{a},\bm{\Gamma})$ to the geometric equations determines a solution
to the field equations (21a) and (21b). We see in this formalism that the
Einstein equation is only a particular solution for a specific choice of
geometry — i.e. the Ricci tensor and scalar take a specific form according to
the matter distribution. In the theory proposed here, perturbations of the
Schouten tensor and frame components create a matter distribution in
spacetime which in turn produces gravitational and electromagnetic fields.
## 5 Discussion
It has been shown that it is possible to interpret $\bm{\Psi}$, $\bm{\Phi}$ and
${\bm{P}}$, ${\bm{Q}}$ as the gravitational and electromagnetic fields,
respectively. Although there remains work to be done on the interpretations of
these equations as well as the relation to the Einstein-Maxwell equations, we
have shown that the tensor ${\bm{Z}}$ can be considered a viable candidate for
a unified field theory where the tensors $\bm{T}$ and $\bm{A}$ are the sources
— see equations (21a) and (21b) — and the field equations are first order
divergence equations, in striking similarity to the Maxwell equations. Due to
the existence of a global tetrad field it is natural to consider the spinorial
formulation of the equations. This would be of interest for a possible quantum
description as well as a more lucid interpretation of the equations. Another
interesting further study would be the existence of solutions representing a
charged point particle. The similarity of the equations with the Maxwell
equations may suggest that such a solution exists and makes sense. But observe
that although the field equations resemble the form of the Maxwell equations,
there are derivatives in the sources which may create complications.
## References
* [1] H. Friedrich. Evolution equations for gravitating ideal fluid bodies in general relativity. Physical Review D, 57(4):2317–2322, Feb. 1998.
* [2] R. Geroch. Spinor Structure of Space-Times in General Relativity. I. Journal of Mathematical Physics, 9(11):1739–1744, 1968.
* [3] R. Geroch and J. Traschen. Strings and other distributional sources in general relativity. Phys. Rev. D, 36:1017–1031, Aug 1987.
* [4] H. Goenner. On the history of unified field theories. Living Reviews in Relativity, 7, 2004.
* [5] C. Lanczos. Lagrangian multiplier and Riemannian spaces. Rev. Mod. Phys., 21:497–502, Jul 1949.
* [6] B. D. Normann and I. Brevik. General Bulk-Viscous Solutions and Estimates of Bulk Viscosity in the Cosmic Fluid. arXiv e-prints, page arXiv:1601.04519, Jan. 2016.
* [7] M. Normann and J. Valiente Kroon. Evolution equations for a wide range of Einstein-matter systems. arXiv e-prints, page arXiv:2005.14678, May 2020.
* [8] D. Pugliese and J. Valiente-Kroon. On the evolution equations for ideal magnetohydrodynamics in curved spacetime. General Relativity and Gravitation, 44, 2012.
* [9] M. D. Roberts. The physical interpretation of the lanczos tensor. Il Nuovo Cimento B Series 11, 110(10):1165–1176, Oct 1995.
* [10] R. Steinbauer and J. A. Vickers. The use of generalized functions and distributions in general relativity. Classical and Quantum Gravity, 23(10):R91, apr 2006.
* [11] J. A. Valiente Kroon. Conformal Methods in General Relativity. Cambridge University Press, 2016.
* [12] R. M. Wald. General Relativity. Chicago Univ. Pr., Chicago, USA, 1984.
* [13] R. S. Ward and R. O. Wells, Jr. Linear field theories, page 241–262. Cambridge Monographs on Mathematical Physics. Cambridge University Press, 1990.
Janus solutions in three-dimensional ${\cal N}=8$ gauged supergravity
Kevin Chen and Michael Gutperle
Mani L. Bhaumik Institute for Theoretical Physics
Department of Physics and Astronomy
University of California, Los Angeles, CA 90095, USA
###### Abstract
Janus solutions are constructed in $d=3$, ${\cal N}=8$ gauged supergravity. We
find explicit half-BPS solutions where two scalars in the
$\operatorname{SO}(8,1)/\operatorname{SO}(8)$ coset have a nontrivial profile.
These solutions correspond on the CFT side to an interface with a position-
dependent expectation value for a relevant operator and a source which jumps
across the interface for a marginal operator.
## 1 Introduction
Janus configurations are solutions of supergravity theories which are dual to
interface CFTs. The original solution [1] was obtained by considering a
deformation of $\operatorname{AdS}_{5}\times S^{5}$ in type IIB supergravity
where the dilaton has a nontrivial profile with respect to the slicing
coordinate of an $\operatorname{AdS}_{4}$ slicing of $\operatorname{AdS}_{5}$.
Subsequently, many more Janus solutions have been found in many different
settings. One may distinguish two kinds of solutions: First, there are top-
down constructions of Janus solutions in ten-dimensional type IIB or eleven-
dimensional M-theory which preserve half of the supersymmetry. Such solutions
are generically constructed by considering a warped product of
$\operatorname{AdS}$ and sphere factors over a two-dimensional Riemann surface
with boundary (see e.g. [2, 3, 4, 5]). Second, there are solutions of gauged
supergravities in lower dimensions with various amounts of broken and unbroken
supersymmetries (see e.g. [6, 7, 8, 9, 10, 11, 12, 13, 14]). Solutions of the
second kind are useful since holographic calculations of quantities such as
the entanglement entropy, sources and expectation values of operators, and
correlation functions in the Janus background are easier to perform in the
lower-dimensional supergravity. In many cases, such solutions can be
constructed as consistent truncations, which can be lifted to solutions of
ten- or eleven-dimensional supergravity.
In the present paper, we consider a particular example of the second approach.
We construct Janus solutions in three-dimensional $\mathcal{N}=8$ gauged
supergravity. Such theories are naturally related to
$\operatorname{AdS}_{3}\times S^{3}\times M_{4}$ compactifications of type
IIB, where $M_{4}$ is either $T^{4}$ or $K3$. We consider one of the simplest
nontrivial settings where we find solutions which preserve eight of the
sixteen supersymmetries of the $\operatorname{AdS}_{3}$ vacuum, where only two
scalars in the coset have a nontrivial profile. One interesting feature of
these solutions is that one scalar is dual to a marginal operator with
dimension $\Delta=2$ where the source terms have different values on the two
sides of the interface. This behavior is the main feature of the original
Janus solution [1, 15]. On the other hand, the second scalar is dual to a
relevant operator with dimension $\Delta=1$ with a vanishing source term and a
position-dependent expectation value. This behavior is a feature of the Janus
solution in M-theory [5].
The structure of the paper is as follows: in section 2 we review
$\mathcal{N}=8$ gauged supergravity in three dimensions, and in section 3 we
construct the half-BPS Janus solutions and investigate some of their
properties using the AdS/CFT dictionary, including the calculation of the
holographic entanglement entropy. We discuss some generalizations and
directions for future research in section 4. Some technical details are
relegated to appendix A.
## 2 $d=3$, $\mathcal{N}=8$ gauged supergravity
In the following, we will use the notation and conventions of [16]. The scalar
fields of $d=3$, $\mathcal{N}=8$ gauged supergravity are parameterized by a
$G/H=\operatorname{SO}(8,n)/\quantity\big(\operatorname{SO}(8)\times\operatorname{SO}(n))$
coset, which has $8n$ independent scalar degrees of freedom. This theory can
be obtained by a truncation of six-dimensional $\mathcal{N}=(2,0)$
supergravity on $\operatorname{AdS}_{3}\times S^{3}$ coupled to $n_{T}\geq 1$
tensor multiplets, where $n_{T}=n-3$. The cases $n_{T}=5$ and $21$ correspond
to compactifications of ten-dimensional type IIB on $T^{4}$ and $K3$,
respectively. See [17] for a discussion of consistent truncations of six-
dimensional $\mathcal{N}=(1,1)$ and $\mathcal{N}=(2,0)$ using exceptional
field theory.
For future reference, we use the following index conventions:
* –
$I,J,\dotsc=1,2,\dotsc,8$ for $\operatorname{SO}(8)$.
* –
$r,s,\dotsc=9,10,\dotsc,n+8$ for $\operatorname{SO}(n)$.
* –
$\bar{I},\bar{J},\dotsc=1,2,\dotsc,n+8$ for $\operatorname{SO}(8,n)$.
* –
$\mathcal{M},\mathcal{N},\dotsc$ for generators of $\operatorname{SO}(8,n)$.
Let the generators of $G$ be
$\\{t^{\mathcal{M}}\\}=\\{t^{\bar{I}\bar{J}}\\}=\\{X^{IJ},X^{rs},Y^{Ir}\\}$,
where $Y^{Ir}$ are the non-compact generators. Explicitly, the generators of
the vector representation are given by
$\displaystyle\tensor{(t^{\bar{I}\bar{J}})}{{}^{\bar{K}}_{\bar{L}}}=\eta^{\bar{I}\bar{K}}\delta^{\bar{J}}_{\bar{L}}-\eta^{\bar{J}\bar{K}}\delta^{\bar{I}}_{\bar{L}}$
(2.1)
where $\eta^{\bar{I}\bar{J}}=\operatorname{diag}(++++++++-\cdots)$ is the
$\operatorname{SO}(8,n)$-invariant tensor. These generators satisfy the
following commutation relations,
$\displaystyle[t^{\bar{I}\bar{J}},t^{\bar{K}\bar{L}}]=2\quantity(\eta^{\bar{I}[\bar{K}}t^{\bar{L}]\bar{J}}-\eta^{\bar{J}[\bar{K}}t^{\bar{L}]\bar{I}})$
(2.2)
The scalar fields can be parametrized by a $G$-valued matrix $L(x)$ in the
vector representation, which transforms under $H$ and the gauge group
$G_{0}\subseteq G$ by
$\displaystyle L(x)\longrightarrow g_{0}(x)L(x)h^{-1}(x)$ (2.3)
for $g_{0}\in G_{0}$ and $h\in H$. The Lagrangian is invariant under such
transformations. We can pick a
$\operatorname{SO}(8)\times\operatorname{SO}(n)$ gauge to put the coset
representative into symmetric gauge,
$\displaystyle L=\exp(\phi_{Ir}Y^{Ir})$ (2.4)
for scalar fields $\phi_{Ir}$. The
$\tensor{\mathcal{V}}{{}^{\mathcal{M}}_{\mathcal{A}}}$ tensors are defined by
$\displaystyle
L^{-1}t^{\mathcal{M}}L=\tensor{\mathcal{V}}{{}^{\mathcal{M}}_{\mathcal{A}}}t^{\mathcal{A}}=\frac{1}{2}\tensor{\mathcal{V}}{{}^{\mathcal{M}}_{IJ}}X^{IJ}+\frac{1}{2}\tensor{\mathcal{V}}{{}^{\mathcal{M}}_{rs}}X^{rs}+\tensor{\mathcal{V}}{{}^{\mathcal{M}}_{Ir}}Y^{Ir}$
(2.5)
The gauging of the supergravity is accomplished by introducing Chern-Simons
gauge fields $B^{\mathcal{M}}_{\mu}$ and choosing an embedding tensor
$\Theta_{\mathcal{M}\mathcal{N}}$ (which has to satisfy various identities
[18]) that determines which isometries are gauged, the coupling to the Chern-
Simons fields, and additional terms in the supersymmetry transformations and
action depending on the gauge couplings. In the following, we will make one of
the simplest choices and gauge a $G_{0}=\operatorname{SO}(4)$ subset of
$\operatorname{SO}(8)$. Explicitly, we further divide the $I,J$ indices into
* –
$i,j,\dotsc=1,2,3,4$ for $G_{0}=\operatorname{SO}(4)$.
* –
$\bar{\imath},\bar{\jmath},\dotsc=5,6,7,8$ for the remaining ungauged
$\operatorname{SO}(4)\subset\operatorname{SO}(8)$.
The embedding tensor we will employ in the following has the non-zero entries
$\displaystyle\Theta_{ij,k\ell}=\varepsilon_{ijk\ell}$ (2.6)
As this is totally antisymmetric, the trace is $\theta=0$. As discussed in
[16], this choice of embedding tensor produces a supersymmetric
$\operatorname{AdS}_{3}$ ground state with
$\displaystyle\operatorname{SU}(2|1,1)_{L}\times\operatorname{SU}(2|1,1)_{R}$
(2.7)
super-algebra of isometries. From the embedding tensor, the $G_{0}$-covariant
currents can be obtained,
$\displaystyle
L^{-1}(\partial_{\mu}+g\Theta_{\mathcal{M}\mathcal{N}}B_{\mu}^{\mathcal{M}}t^{\mathcal{N}})L=\frac{1}{2}\mathcal{Q}^{IJ}_{\mu}X^{IJ}+\frac{1}{2}\mathcal{Q}^{rs}_{\mu}X^{rs}+\mathcal{P}^{Ir}_{\mu}Y^{Ir}$
(2.8)
It is convenient to define the $T$-tensor,
$\displaystyle
T_{\mathcal{A}|\mathcal{B}}=\Theta_{\mathcal{M}\mathcal{N}}\tensor{\mathcal{V}}{{}^{\mathcal{M}}_{\mathcal{A}}}\tensor{\mathcal{V}}{{}^{\mathcal{N}}_{\mathcal{B}}}$
(2.9)
as well as the tensors $A_{1,2,3}$ which will appear in the potential and the
supersymmetry transformations.
$\displaystyle A_{1}^{AB}$
$\displaystyle=-\frac{1}{48}\Gamma^{IJKL}_{AB}T_{IJ|KL}$ $\displaystyle
A_{2}^{A\dot{A}r}$
$\displaystyle=-\frac{1}{12}\Gamma^{IJK}_{A\dot{A}}T_{IJ|Kr}$ $\displaystyle
A_{3}^{\dot{A}r\dot{B}s}$
$\displaystyle=\frac{1}{48}\delta^{rs}\Gamma^{IJKL}_{\dot{A}\dot{B}}T_{IJ|KL}+\frac{1}{2}\Gamma^{IJ}_{\dot{A}\dot{B}}T_{IJ|rs}$
(2.10)
$A,B$ and $\dot{A},\dot{B}$ are $\operatorname{SO}(8)$-spinor indices and our
conventions for the $\operatorname{SO}(8)$ Gamma matrices are presented in
appendix A.1.
We take the spacetime signature $\eta^{ab}=\operatorname{diag}(+--)$ to be
mostly negative. The bosonic Lagrangian is
$\displaystyle e^{-1}\mathcal{L}$
$\displaystyle=-\frac{1}{4}R+\frac{1}{4}\mathcal{P}_{\mu}^{Ir}\mathcal{P}^{\mu\,Ir}+W-\frac{1}{4}e^{-1}\varepsilon^{\mu\nu\rho}g\Theta_{\mathcal{M}\mathcal{N}}B_{\mu}^{\mathcal{M}}\quantity(\partial_{\nu}B_{\rho}^{\mathcal{N}}+\frac{1}{3}g\Theta_{\mathcal{K}\mathcal{L}}\tensor{f}{{}^{\mathcal{N}\mathcal{K}}_{\mathcal{P}}}B_{\nu}^{\mathcal{L}}B_{\rho}^{\mathcal{P}})$
$\displaystyle W$
$\displaystyle=\frac{1}{4}g^{2}\quantity(A^{AB}_{1}A^{AB}_{1}-\frac{1}{2}A^{A\dot{A}r}_{2}A^{A\dot{A}r}_{2})$
(2.11)
The SUSY transformations are
$\displaystyle\delta\chi^{\dot{A}r}$
$\displaystyle=\frac{1}{2}i\Gamma^{I}_{A\dot{A}}\gamma^{\mu}\varepsilon^{A}\mathcal{P}^{Ir}_{\mu}+gA^{A\dot{A}r}_{2}\varepsilon^{A}$
$\displaystyle\delta\psi^{A}_{\mu}$
$\displaystyle=\quantity(\partial_{\mu}\varepsilon^{A}+\frac{1}{4}\omega_{\mu}^{ab}\gamma_{ab}\varepsilon^{A}+\frac{1}{4}\mathcal{Q}^{IJ}_{\mu}\Gamma^{IJ}_{AB}\varepsilon^{B})+igA^{AB}_{1}\gamma_{\mu}\varepsilon^{B}$
(2.12)
### 2.1 The $n=1$ case
In this section we will consider the $n=1$ theory, i.e. the scalar fields lie
in a $\operatorname{SO}(8,1)/\operatorname{SO}(8)$ coset. The reason for this
is that the resulting expressions for the supersymmetry transformations and
BPS conditions are compact and everything can be worked out in detail.
Furthermore, we believe that this case illustrates the important features of
more general solutions.
As the index $r=9$ takes only one value in this case, the scalar fields in the
coset representative (2.4) are denoted by $\phi_{I}\equiv\phi_{I9}$ for
$I=1,2,\dotsc,8$. We define the following quantities for notational
convenience,
$\displaystyle\Phi^{2}$
$\displaystyle\equiv\phi_{I}\phi_{I}=\phi_{1}^{2}+\phi_{2}^{2}+\phi_{3}^{2}+\phi_{4}^{2}+\phi_{5}^{2}+\phi_{6}^{2}+\phi_{7}^{2}+\phi_{8}^{2}$
$\displaystyle\phi^{2}$
$\displaystyle\equiv\phi_{i}\phi_{i}=\phi_{1}^{2}+\phi_{2}^{2}+\phi_{3}^{2}+\phi_{4}^{2}$
$\displaystyle\bar{\phi}^{2}$
$\displaystyle\equiv\phi_{\bar{\imath}}\phi_{\bar{\imath}}=\phi_{5}^{2}+\phi_{6}^{2}+\phi_{7}^{2}+\phi_{8}^{2}$
(2.13)
The components of the $\tensor{\mathcal{V}}{{}^{\mathcal{M}}_{\mathcal{A}}}$
tensor are, with no summation over repeated indices and $I,J,K,L$ being unique
indices,
$\displaystyle\tensor{\mathcal{V}}{{}^{IJ}_{IJ}}$
$\displaystyle=1+(\phi_{I}^{2}+\phi_{J}^{2})\frac{\cosh\Phi-1}{\Phi^{2}}$
$\displaystyle\tensor{\mathcal{V}}{{}^{IJ}_{IK}}$
$\displaystyle=\phi_{J}\phi_{K}\frac{\cosh\Phi-1}{\Phi^{2}}$
$\displaystyle\tensor{\mathcal{V}}{{}^{IJ}_{KL}}$ $\displaystyle=0$
$\displaystyle\tensor{\mathcal{V}}{{}^{I9}_{I9}}$
$\displaystyle=\cosh\Phi-\phi_{I}^{2}\frac{\cosh\Phi-1}{\Phi^{2}}$
$\displaystyle\tensor{\mathcal{V}}{{}^{I9}_{J9}}$
$\displaystyle=-\phi_{I}\phi_{J}\frac{\cosh\Phi-1}{\Phi^{2}}$
$\displaystyle\tensor{\mathcal{V}}{{}^{IJ}_{I9}}$
$\displaystyle=\tensor{\mathcal{V}}{{}^{I9}_{IJ}}=\phi_{J}\frac{\sinh\Phi}{\Phi}$
$\displaystyle\tensor{\mathcal{V}}{{}^{IJ}_{K9}}$
$\displaystyle=\tensor{\mathcal{V}}{{}^{K9}_{IJ}}=0$ (2.14)
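The hyperbolic structure of these components reflects the fact that $X\equiv\phi_{I}Y^{I9}$ satisfies $X^{3}=\Phi^{2}X$ in the vector representation, so the exponential series for $L$ collapses. A short numerical sketch of this (our own check, assuming NumPy and SciPy are available):

```python
import numpy as np
from scipy.linalg import expm

eta = np.diag([1.0]*8 + [-1.0])              # SO(8,1)-invariant tensor (n = 1)

rng = np.random.default_rng(3)
phi = rng.normal(size=8)
X = np.zeros((9, 9))
X[:8, 8] = phi                               # phi_I Y^{I9} in the vector rep
X[8, :8] = phi
L = expm(X)                                  # coset representative (2.4)

# group property: L preserves eta, i.e. L is an SO(8,1) matrix
assert np.allclose(L.T @ eta @ L, eta)

# X^3 = Phi^2 X, so exp(X) = 1 + sinh(Phi)/Phi X + (cosh(Phi)-1)/Phi^2 X^2,
# which is the origin of the cosh/sinh factors in (2.14)
Phi = np.linalg.norm(phi)
L_closed = (np.eye(9) + np.sinh(Phi)/Phi * X
            + (np.cosh(Phi) - 1)/Phi**2 * (X @ X))
assert np.allclose(L, L_closed)
```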
The $u$-components of the $\mathcal{Q}^{IJ}_{\mu}$ and $\mathcal{P}^{I}_{\mu}$
tensors are
$\displaystyle\mathcal{Q}_{u}^{IJ}$
$\displaystyle=(\phi_{I}^{\prime}\phi_{J}-\phi_{I}\phi_{J}^{\prime})\frac{\cosh\Phi-1}{\Phi^{2}}+g\Theta_{\mathcal{M}\mathcal{N}}B^{\mathcal{M}}_{u}\mathcal{V}^{\mathcal{N}}_{IJ}$
$\displaystyle\mathcal{P}_{u}^{I}$
$\displaystyle=\phi_{I}^{\prime}\frac{\sinh\Phi}{\Phi}-\phi_{I}\Phi^{\prime}\frac{\sinh\Phi-\Phi}{\Phi^{2}}+g\Theta_{\mathcal{M}\mathcal{N}}B^{\mathcal{M}}_{u}\mathcal{V}^{\mathcal{N}}_{I9}$
(2.15)
where the prime ${}^{\prime}$ denotes the
derivative with respect to $u$. The terms involving the gauge field have
different forms depending on whether $I,J$ are in $i$ or $\bar{\imath}$.
$\displaystyle\Theta_{\mathcal{M}\mathcal{N}}B^{\mathcal{M}}_{u}\mathcal{V}^{\mathcal{N}}_{ij}$
$\displaystyle=\varepsilon_{ijk\ell}\quantity[\frac{1}{2}B^{k\ell}_{u}\quantity(1+(\phi_{i}^{2}+\phi_{j}^{2})\frac{\cosh\Phi-1}{\Phi^{2}})+\quantity(\phi_{i}B^{ik}_{u}\phi_{\ell}+\phi_{j}B^{jk}_{u}\phi_{\ell})\frac{\cosh\Phi-1}{\Phi^{2}}]$
$\displaystyle\Theta_{\mathcal{M}\mathcal{N}}B^{\mathcal{M}}_{u}\mathcal{V}^{\mathcal{N}}_{i\bar{\imath}}$
$\displaystyle=\frac{1}{2}\varepsilon_{ijk\ell}\phi_{\bar{\imath}}\phi_{j}B^{k\ell}_{u}\frac{\cosh\Phi-1}{\Phi^{2}}$
$\displaystyle\Theta_{\mathcal{M}\mathcal{N}}B^{\mathcal{M}}_{u}\mathcal{V}^{\mathcal{N}}_{\bar{\imath}\bar{\jmath}}$
$\displaystyle=0$
$\displaystyle\Theta_{\mathcal{M}\mathcal{N}}B^{\mathcal{M}}_{u}\mathcal{V}^{\mathcal{N}}_{i9}$
$\displaystyle=\frac{1}{2}\varepsilon_{ijk\ell}\phi_{j}B^{k\ell}_{u}\frac{\sinh\Phi}{\Phi}$
$\displaystyle\Theta_{\mathcal{M}\mathcal{N}}B^{\mathcal{M}}_{u}\mathcal{V}^{\mathcal{N}}_{\bar{\imath}9}$
$\displaystyle=0$ (2.16)
The $T$-tensor has non-zero components
$\displaystyle T_{ij|k\ell}$
$\displaystyle=\varepsilon_{ijk\ell}\quantity(\phi^{2}\frac{\cosh\Phi-1}{\Phi^{2}}+1)$
$\displaystyle T_{ij|k\bar{\imath}}$
$\displaystyle=\varepsilon_{ijk\ell}\phi_{\ell}\phi_{\bar{\imath}}\frac{\cosh\Phi-1}{\Phi^{2}}$
$\displaystyle T_{ij|k9}$
$\displaystyle=\varepsilon_{ijk\ell}\phi_{\ell}\frac{\sinh\Phi}{\Phi}$ (2.17)
Taking $\varepsilon_{1234}=1$, we can use the $T$-tensor to compute
$\displaystyle A_{1}^{AB}$
$\displaystyle=-\frac{1}{2}\Gamma^{1234}_{AC}\Bigg{[}\quantity(\phi^{2}\frac{\cosh\Phi-1}{\Phi^{2}}+1)\delta_{CB}+(\Gamma^{i}_{C\dot{A}}\phi_{i})(\Gamma^{\bar{\imath}}_{\dot{A}B}\phi_{\bar{\imath}})\frac{\cosh\Phi-1}{\Phi^{2}}\Bigg{]}$
$\displaystyle A_{2}^{A\dot{A}}$
$\displaystyle=-\frac{1}{2}\Gamma^{1234}_{AB}(\Gamma^{i}_{B\dot{A}}\phi_{i})\frac{\sinh\Phi}{\Phi}$
$\displaystyle A_{3}^{\dot{A}\dot{B}}$
$\displaystyle=-A_{1}^{AB}\delta_{A\dot{A}}\delta_{B\dot{B}}$ (2.18)
Note that $A_{1}^{AB}=A_{1}^{BA}$ and
$\displaystyle A_{1}^{AC}A_{1}^{BC}$
$\displaystyle=\frac{1}{4}\delta_{AB}\quantity(\frac{\phi^{2}\sinh^{2}\Phi}{\Phi^{2}}+1)$
$\displaystyle A_{2}^{A\dot{A}}A_{2}^{B\dot{A}}$
$\displaystyle=\frac{1}{4}\delta_{AB}\frac{\phi^{2}\sinh^{2}\Phi}{\Phi^{2}}$
(2.19)
so the scalar potential (2.11) becomes
$\displaystyle
W=\frac{g^{2}}{4}\quantity(\frac{\phi^{2}\sinh^{2}\Phi}{\Phi^{2}}+2)$ (2.20)
## 3 Half-BPS Janus solutions
In this section, we construct Janus solutions which preserve eight of the
sixteen supersymmetries of the $\operatorname{AdS}_{3}$ vacuum. Our strategy
is to use an $\operatorname{AdS}_{2}$ slicing of $\operatorname{AdS}_{3}$ and
make the scalar fields as well as the metric functions only dependent on the
slicing coordinate. One complication is given by the presence of the gauge
fields; due to the Chern-Simons action, the only consistent Janus solution
will have vanishing field strength. We show that the gauge fields can be
consistently set to zero for our solutions.
### 3.1 Janus ansatz
We take the Janus ansatz for the metric, scalar fields and Chern-Simons gauge
fields,
$\displaystyle\differential{s^{2}}$
$\displaystyle=e^{2B(u)}\quantity(\frac{\differential{t^{2}}-\differential{z^{2}}}{z^{2}})-e^{2D(u)}\differential{u}^{2}$
$\displaystyle\phi_{I}$ $\displaystyle=\phi_{I}(u)$ $\displaystyle
B^{\mathcal{M}}$ $\displaystyle=B^{\mathcal{M}}(u)\differential{u}$ (3.1)
The $\operatorname{AdS}_{3}$ vacuum solution given by $\phi_{I}\equiv 0$ and
$e^{B}=e^{D}=L\sec u$ has a curvature radius related to the coupling constant
by $L^{-1}=g$. The spin connection 1-forms are
$\displaystyle\omega^{01}$ $\displaystyle=\frac{\differential{t}}{z}$
$\displaystyle\omega^{02}$
$\displaystyle=-\frac{B^{\prime}e^{B-D}}{z}\differential{t}$
$\displaystyle\omega^{12}$
$\displaystyle=-\frac{B^{\prime}e^{B-D}}{z}\differential{z}$ (3.2)
so the gravitino supersymmetry variation $\delta\psi^{A}_{\mu}=0$ is
$\displaystyle 0$
$\displaystyle=\partial_{t}\varepsilon+\frac{1}{2z}\gamma_{0}\quantity(\gamma_{1}-B^{\prime}e^{B-D}\gamma_{2}+2ige^{B}A_{1})\varepsilon$
$\displaystyle 0$
$\displaystyle=\partial_{z}\varepsilon+\frac{1}{2z}\gamma_{1}\quantity(-B^{\prime}e^{B-D}\gamma_{2}+2ige^{B}A_{1})\varepsilon$
$\displaystyle 0$
$\displaystyle=\partial_{u}\varepsilon+\frac{1}{4}\mathcal{Q}_{u}^{IJ}\Gamma^{IJ}\varepsilon+ige^{D}\gamma_{2}A_{1}\varepsilon$
(3.3)
where we have suppressed the $\operatorname{SO}(8)$-spinor indices. As shown
in appendix A.2, the integrability conditions are
$\displaystyle 0$
$\displaystyle=\quantity(1-(2ge^{B}A_{1})^{2}+(B^{\prime}e^{B-D})^{2})\varepsilon$
$\displaystyle 0$
$\displaystyle=2ige^{B}\quantity(A_{1}^{\prime}-\frac{1}{4}[A_{1},\mathcal{Q}_{u}^{IJ}\Gamma^{IJ}])\varepsilon+\quantity(-\derivative{u}(B^{\prime}e^{B-D})+(2ge^{B}A_{1})^{2}e^{D-B})\gamma_{2}\varepsilon$
(3.4)
The first integrability condition gives a first-order equation which must be
true for all $\varepsilon$, using the replacement for $A_{1}^{2}$ in (2.19),
$\displaystyle
0=1-g^{2}e^{2B}\quantity(\frac{\phi^{2}\sinh^{2}\Phi}{\Phi^{2}}+1)+(B^{\prime}e^{B-D})^{2}$
(3.5)
The derivative of this simplifies the second integrability condition to
$\displaystyle
0=\quantity(A_{1}^{\prime}-\frac{1}{4}[A_{1},\mathcal{Q}_{u}^{IJ}\Gamma^{IJ}])\varepsilon+\frac{ige^{D}}{4B^{\prime}}\derivative{u}\quantity(\frac{\phi^{2}\sinh^{2}\Phi}{\Phi^{2}})\gamma_{2}\varepsilon$
(3.6)
The BPS equation $\smash{\delta\chi^{\dot{A}}=0}$ is
$\displaystyle\quantity(-\frac{i}{2}e^{-D}\Gamma^{I}\mathcal{P}_{u}^{I}\gamma_{2}+gA_{2})_{A\dot{A}}\varepsilon^{A}=0$
(3.7)
When $gA_{2}^{2}\neq 0$, this equation can be rearranged into the form of a
projector
$\displaystyle 0$
$\displaystyle=\quantity(iM_{AB}\gamma_{2}+\delta_{AB})\varepsilon^{A}$ (3.8)
where $M_{AB}$ is given by
$\displaystyle M_{AB}$
$\displaystyle=\frac{e^{-D}}{g}\frac{\Phi}{\phi^{2}\sinh\Phi}(\Gamma^{I}_{A\dot{A}}\mathcal{P}_{u}^{I})(\Gamma^{i}_{\dot{A}C}\phi_{i})\Gamma^{1234}_{CB}$
(3.9)
For consistency of the projector, we must have
$\displaystyle M_{AB}M_{BC}=\delta_{AC}$ (3.10)
As $M^{2}=1$, every generalized eigenvector of rank $\geq 2$ is automatically
an eigenvector, so $M$ is diagonalizable and has eight eigenvectors with
eigenvalues $\pm 1$. $M$ is traceless as it is a sum of products of 2 or 4
Gamma matrices, so it has an equal number of $+1$ and $-1$ eigenvectors. The
operator $iM_{AB}\gamma_{2}$ in the projector (3.8) squares to one and is
traceless, and projects onto an eight-dimensional space of unbroken
supersymmetry generators. If this is the only projection imposed on the
solution, it will be half-BPS and hence preserve eight of the sixteen
supersymmetries of the vacuum.
The condition $M^{2}=1$ gives an equation first-order in derivatives of
scalars.
$\displaystyle
M^{2}=\quantity(\frac{e^{-D}\Phi}{g\phi^{2}\sinh\Phi})^{2}\Big{(}$
$\displaystyle\phi^{2}(-\mathcal{P}_{u}^{i}\mathcal{P}_{u}^{i}+\mathcal{P}_{u}^{\bar{\imath}}\mathcal{P}_{u}^{\bar{\imath}})-2\phi^{2}(\Gamma^{\bar{\imath}}\mathcal{P}_{u}^{\bar{\imath}})(\Gamma^{i}\mathcal{P}_{u}^{i})$
$\displaystyle+2(\mathcal{P}_{u}^{j}\phi_{j})(\Gamma^{\bar{\imath}}\mathcal{P}_{u}^{\bar{\imath}}+\Gamma^{i}\mathcal{P}_{u}^{i})(\Gamma^{k}\phi_{k})\big{)}$
(3.11)
For this to be proportional to the identity, we need all
$\Gamma^{\bar{\imath}}\Gamma^{i}$ and $\Gamma^{i}\Gamma^{j}$ terms to vanish.
Vanishing of the latter requires us to impose the condition
$\displaystyle\mathcal{P}^{i}_{u}\phi_{j}=\mathcal{P}^{j}_{u}\phi_{i}$ (3.12)
As the ratio $\mathcal{P}_{u}^{i}/\phi_{i}$ is the same for all $i$, this
implies
$\displaystyle\sum_{i}\mathcal{P}_{u}^{i}\phi_{i}=\sum_{i}\frac{\mathcal{P}_{u}^{i}}{\phi_{i}}\phi_{i}^{2}=\frac{\mathcal{P}_{u}^{1}}{\phi_{1}}\phi^{2}\qquad\implies\qquad-\phi^{2}\mathcal{P}_{u}^{i}+\phi_{i}\sum_{j}\mathcal{P}_{u}^{j}\phi_{j}=0$
(3.13)
This means that imposing Eq. (3.12) also ensures that the
$\Gamma^{\bar{\imath}}\Gamma^{i}$ terms vanish. Note that
$\displaystyle\sum_{i}\mathcal{P}_{u}^{i}\mathcal{P}_{u}^{i}=\sum_{i}\frac{\mathcal{P}_{u}^{i}}{\phi_{i}}\frac{\mathcal{P}_{u}^{i}}{\phi_{i}}\phi_{i}^{2}=\quantity(\frac{\mathcal{P}_{u}^{1}}{\phi_{1}})^{2}\phi^{2}$
(3.14)
so the $M^{2}=1$ condition becomes
$\displaystyle
M^{2}=\quantity(\frac{e^{-D}\Phi}{g\phi^{2}\sinh\Phi})^{2}\phi^{2}(\mathcal{P}_{u}^{i}\mathcal{P}_{u}^{i}+\mathcal{P}_{u}^{\bar{\imath}}\mathcal{P}_{u}^{\bar{\imath}})=1$
(3.15)
We now give the argument why the Chern-Simons gauge fields can be set to zero.
Since we demand that the $B^{\mathcal{M}}_{\mu}$ only has a component along
the $u$ direction and only depends on $u$, the field strength vanishes,
consistent with the equation of motion coming from the variation of the Chern-
Simons term in the action (2.11) with respect to the gauge field. However,
there is another term which contains the gauge field, namely the kinetic term
of the scalars via (2.15). For the gauge field to be consistently set to zero,
we have to impose
$\displaystyle\left.{\delta\mathcal{L}\over\delta
B^{k\ell}_{u}}\right|_{B^{\mathcal{M}}_{u}=0}=0$ (3.16)
For the Janus ansatz, we find
$\displaystyle\left.{\delta\mathcal{L}\over\delta
B^{k\ell}_{u}}\right|_{B^{\mathcal{M}}_{u}=0}=eg\varepsilon_{ijk\ell}\mathcal{P}^{i\,u}\phi_{j}{\sinh\Phi\over\Phi}$
(3.17)
which indeed vanishes due to Eq. (3.12) imposed by the half-BPS condition.
For a half-BPS solution, the second integrability condition (3.6) should be
identical to the projector (3.8). Indeed, we have the simplification
$\displaystyle A_{1}^{\prime}-\frac{1}{4}$
$\displaystyle[A_{1},\mathcal{Q}_{u}^{IJ}\Gamma^{IJ}]=-\frac{1}{2}\frac{\phi^{2}\sinh^{2}\Phi}{\Phi^{2}}M^{\top}$
(3.18)
so the Gamma matrix structures of the two equations match. Equating the
remaining scalar magnitude gives us an equation for the metric factor $e^{B}$,
$\displaystyle-B^{\prime}=\derivative{u}\ln\frac{\phi\sinh\Phi}{\Phi}$ (3.19)
We can now solve for the metric. Let us define
$\displaystyle\alpha(u)\equiv\frac{\phi\sinh\Phi}{\Phi}$ (3.20)
and set the integration constant for $B$ to be
$\displaystyle e^{B}=\frac{|C|}{g\alpha}$ (3.21)
Plugging this into the first integrability condition (3.5) and picking the
gauge $e^{-D}\equiv g$, we have a first-order equation for $\alpha$,
$\displaystyle 0=\alpha^{2}-C^{2}(\alpha^{2}+1-\alpha^{\prime 2}/\alpha^{2})$
(3.22)
The solution depends on the value of $C\in[0,1]$ and up to translations in $u$
is
$\displaystyle\alpha$ $\displaystyle=e^{\pm u}$ $\displaystyle\text{if }C=1$
$\displaystyle\alpha$ $\displaystyle=\frac{|C|}{\sqrt{1-C^{2}}}\sech u$
$\displaystyle\text{if }0\leq C<1$ (3.23)
We will take the case $0\leq C<1$. This implies that the metric is
$\displaystyle\differential{s^{2}}=g^{-2}\quantity[(1-C^{2})\cosh^{2}u\quantity(\frac{\differential{t^{2}}-\differential{z^{2}}}{z^{2}})-\differential{u^{2}}]$
(3.24)
The choice $C=0$ corresponds to the $\operatorname{AdS}_{3}$ vacuum.
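The claimed profile can be verified symbolically; the following sketch (our own check, for the branch $0<C<1$) substitutes (3.23) into the first-order equation (3.22):

```python
import sympy as sp

u = sp.symbols('u', real=True)
C = sp.symbols('C', positive=True)           # branch 0 < C < 1
alpha = C / (sp.sqrt(1 - C**2) * sp.cosh(u))

residual = alpha**2 - C**2*(alpha**2 + 1 - sp.diff(alpha, u)**2/alpha**2)
print(sp.simplify(residual))                 # prints 0
```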
### 3.2 $\phi_{4},\phi_{5}$ truncation
We have yet to fully solve the half-BPS conditions (3.12) and (3.15). For
simplicity, let us consider the case where only $\phi_{4},\phi_{5}$ are non-
zero and the other scalars are identically zero, which trivially satisfies Eq.
(3.12). It turns out that the important features of the Janus solution are
captured by this truncation.
We introduce the following abbreviations
$\displaystyle\Phi^{2}$ $\displaystyle=\phi_{4}^{2}+\phi_{5}^{2}$
$\displaystyle\phi$ $\displaystyle=|\phi_{4}|$ $\displaystyle\bar{\phi}$
$\displaystyle=|\phi_{5}|$ (3.25)
Let us define
$\displaystyle\beta(u)$ $\displaystyle\equiv\frac{\phi_{5}\sinh\Phi}{\Phi}$
(3.26)
so that
$\displaystyle\alpha^{2}+\beta^{2}$ $\displaystyle=\sinh^{2}\Phi$
$\displaystyle\mathcal{P}^{4}_{u}$
$\displaystyle=\alpha^{\prime}+\alpha\Phi^{\prime}\frac{1-\cosh\Phi}{\sinh\Phi}$
$\displaystyle\mathcal{P}^{5}_{u}$
$\displaystyle=\beta^{\prime}+\beta\Phi^{\prime}\frac{1-\cosh\Phi}{\sinh\Phi}$
(3.27)
Plugging these in, Eq. (3.15) simplifies to
$\displaystyle\alpha^{\prime 2}+\beta^{\prime
2}-\frac{(\alpha^{\prime}\alpha+\beta^{\prime}\beta)^{2}}{1+\alpha^{2}+\beta^{2}}$
$\displaystyle=\alpha^{2}$ (3.28)
This can be rearranged into a first-order equation in
$f\equiv\beta/\sqrt{1+\alpha^{2}}$,
$\displaystyle f^{\prime}=\frac{\alpha^{2}/C}{1+\alpha^{2}}\sqrt{1+f^{2}}$
(3.29)
where a sign ambiguity from taking a square-root has been absorbed into $C$,
which is now extended to $C\in(-1,1)$. Using the explicit solution (3.23) for
$\alpha$, by noting that
$\displaystyle\derivative{u}\tanh^{-1}(C\tanh
u)=\frac{C\sech^{2}u}{1-C^{2}\tanh^{2}u}=\frac{\alpha^{2}/C}{1+\alpha^{2}}$
(3.30)
the general solution is
$\displaystyle f(u)$ $\displaystyle=\frac{\sinh p+C\cosh p\tanh
u}{\sqrt{1-C^{2}\tanh^{2}u}}$ $\displaystyle\beta(u)$
$\displaystyle=\frac{1}{\sqrt{1-C^{2}}}(\sinh p+C\cosh p\tanh u)$ (3.31)
for some constant $p\in\mathbb{R}$. For later convenience, we also redefine
$C=\tanh q$ for $q\in\mathbb{R}$.
In summary, we have solved for the scalars $\phi_{4},\phi_{5}$ implicitly
through the functions $\alpha,\beta$,
$\displaystyle\frac{|\phi_{4}|\sinh\Phi}{\Phi}$ $\displaystyle=|\sinh q|\sech
u$ $\displaystyle\frac{\phi_{5}\sinh\Phi}{\Phi}$ $\displaystyle=\sinh p\cosh
q+\cosh p\sinh q\tanh u$ (3.32)
for real constants $p,q$. Note that the reflection $\phi_{4}\to-\phi_{4}$ also
gives a valid solution. We have explicitly checked that the Einstein equation
and scalar equations of motion are satisfied.
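Independently of the symbolic manipulations, the profile (3.31) can be spot-checked against the first-order equation (3.29); the sketch below (our own, for sample values of $p$ and $C$) compares a central-difference derivative of $f$ with the right-hand side:

```python
import numpy as np

p, C = 0.7, np.tanh(1.0)                     # sample values with |C| < 1

def f(u):
    return (np.sinh(p) + C*np.cosh(p)*np.tanh(u)) / np.sqrt(1 - C**2*np.tanh(u)**2)

def alpha2(u):                               # alpha^2 from eq. (3.23)
    return C**2/(1 - C**2)/np.cosh(u)**2

us = np.linspace(-3.0, 3.0, 13)
h = 1e-6
lhs = (f(us + h) - f(us - h))/(2*h)          # numerical f'
rhs = (alpha2(us)/C)/(1 + alpha2(us)) * np.sqrt(1 + f(us)**2)
assert np.allclose(lhs, rhs, atol=1e-6)
```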
The $\phi_{4}$ scalar goes to zero at $u=\pm\infty$ as it is a massive scalar
degree of freedom, and has a sech-like profile near the defect. The $\phi_{5}$
scalar interpolates between two boundary values at $u=\pm\infty$, and has a
tanh-like profile. The constant $p$ is related to the boundary values of the
$\phi_{5}$ scalar, as we can note that
$\displaystyle\phi_{5}(\pm\infty)=p\pm q$ (3.33)
The constant $q$ is then related to the jump value of the $\phi_{5}$ scalar.
The defect location $u=0$ can also be freely translated to any point along the
axis. Below is a plot of the solution for the choice $(p,q)=(0,1)$.
Figure 1: Plot of $\phi_{4}$ and $\phi_{5}$ for $(p,q)=(0,1)$
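Figure 1 can be reproduced from (3.32); in the sketch below (our own reconstruction, assuming Matplotlib) we solve algebraically for $\Phi$ using $\alpha^{2}+\beta^{2}=\sinh^{2}\Phi$ and then undo the common factor $\sinh\Phi/\Phi$:

```python
import numpy as np
import matplotlib.pyplot as plt

p, q = 0.0, 1.0
u = np.linspace(-5, 5, 400)

alpha = np.abs(np.sinh(q))/np.cosh(u)                            # |phi_4| sinh(Phi)/Phi
beta = np.sinh(p)*np.cosh(q) + np.cosh(p)*np.sinh(q)*np.tanh(u)  # phi_5 sinh(Phi)/Phi

Phi = np.arcsinh(np.sqrt(alpha**2 + beta**2))   # alpha^2 + beta^2 = sinh^2(Phi)
scale = Phi/np.sinh(Phi)                        # nonzero here since (p, q) != (0, 0)

plt.plot(u, alpha*scale, label=r'$|\phi_4|$')
plt.plot(u, beta*scale, label=r'$\phi_5$')
plt.xlabel(r'$u$')
plt.legend()
plt.show()
```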
### 3.3 Holography
In our AdS-sliced coordinates, the boundary is given by the two
$\operatorname{AdS}_{2}$ components at $u=\pm\infty$, which are joined
together at the $z=0$ interface. Using $C=\tanh q$, the metric (3.24) becomes
$\displaystyle\differential{s^{2}}=g^{-2}\quantity[\sech^{2}q\cosh^{2}u\quantity(\frac{\differential{t^{2}}-\differential{z^{2}}}{z^{2}})-\differential{u^{2}}]$
(3.34)
Note that this is not $\operatorname{AdS}_{3}$ unless $q=0$, which corresponds
to the vacuum solution with all scalars vanishing. The spacetime is, however,
asymptotically $\operatorname{AdS}_{3}$. In the limit of $u\to\pm\infty$, the
$\sech^{2}q$ can be eliminated from the leading $e^{\pm 2u}$ term in the
metric (3.34) by a coordinate shift. We will present the asymptotic mapping to
a Fefferman-Graham (FG) coordinate system below. In the following, we will set
the $\operatorname{AdS}$ length scale to unity for notational simplicity, i.e.
$g\equiv 1$.
According to the AdS/CFT correspondence, the mass $m^{2}$ of a supergravity
scalar field in $d=3$ is related to the scaling dimension $\Delta$ of the dual
CFT operator by
$\displaystyle m^{2}=\Delta(\Delta-2)$ (3.35)
This relation comes from the linearized equations of motion for the scalar
field near the asymptotic $\operatorname{AdS}_{3}$ boundary. Expanding the
supergravity action (2.11) to quadratic order around the
$\operatorname{AdS}_{3}$ vacuum shows that the $\phi_{4}$ field has mass
$m^{2}=-1$, so the dual operator is relevant with $\Delta=1$ and saturates the
Breitenlohner-Freedman (BF) bound [19]. Note that we choose the standard
quantization [20], which is the correct one for a supersymmetric solution. The
$\phi_{5}$ field is massless, so the dual CFT operator is marginal with
scaling dimension $\Delta=2$.
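In other words, the dictionary used here is just the larger root of (3.35), $\Delta=1+\sqrt{1+m^{2}}$; a one-line check (our own) for the two masses appearing above:

```python
import numpy as np

Delta = lambda m2: 1 + np.sqrt(1 + m2)   # larger root of m^2 = Delta*(Delta - 2)
print(Delta(-1.0), Delta(0.0))           # 1.0 (relevant, saturates BF), 2.0 (marginal)
```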
In FG coordinates, the general expansion for a scalar field near the
asymptotic $\operatorname{AdS}_{3}$ boundary at $\rho=0$ is
$\displaystyle\phi_{\Delta=1}$
$\displaystyle\sim\psi_{0}\,\rho\ln\rho+\phi_{0}\,\rho+\cdots$
$\displaystyle\phi_{\Delta\neq 1}$
$\displaystyle\sim\tilde{\phi}_{0}\,\rho^{2-\Delta}+\tilde{\phi}_{2}\,\rho^{\Delta}+\cdots$
(3.36)
Here we use that the $\operatorname{AdS}_{3}$ metric in Poincaré coordinates,
$\differential{s^{2}}=\frac{-\differential{\rho}^{2}+\differential{t}^{2}-\differential{x}^{2}}{\rho^{2}},$
is related to the AdS-sliced metric by the coordinate change
$z=\sqrt{x^{2}+\rho^{2}}$, $\sinh u=x/\rho$.
Since $\phi_{\Delta=1}$ saturates the BF bound, holographic renormalization
and the holographic dictionary are subtle due to the presence of the logarithm
[21]. As we show below for the solution (3.32), there is no logarithmic term
present and $\phi_{0}$ can be identified with the expectation value of the
dual operator [21, 22]. For the massless field $\phi_{\Delta=2}$, we can
identify $\tilde{\phi}_{0}$ with the source and $\tilde{\phi}_{2}$ with the
expectation value of the dual operator.
It is difficult to find a global map which puts the metric (3.34) in FG form.
Here, we limit our discussion to the coordinate region away from the defect,
where we take $u\to\pm\infty$ and keep $z$ finite [23, 24]. This limit probes
the region away from the interface on the boundary. The coordinate change
suitable for the $u\to\infty$ limit can be expressed as a power series,
$\displaystyle z$ $\displaystyle=x+\frac{\rho^{2}}{2x}+\mathcal{O}(\rho^{4})$
$\displaystyle e^{u}$ $\displaystyle=\cosh
q\quantity(\frac{2x}{\rho}+\frac{\rho}{2x}+\mathcal{O}(\rho^{3}))$ (3.37)
The metric becomes
$\displaystyle\differential{s^{2}}$
$\displaystyle=\frac{1}{\rho^{2}}\quantity[-\differential{\rho}^{2}+\quantity(1-\frac{\rho^{2}\tanh^{2}q}{2x^{2}})(\differential{t}^{2}-\differential{x^{2}})+\mathcal{O}(\rho^{3})]$
(3.38)
In the $u\to-\infty$ limit, the asymptotic form of the metric is the same and
the coordinate change is (3.37) with the replacements $e^{u}\to e^{-u}$ and
$x\to-x$.
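This asymptotic statement can be checked symbolically. The following sympy sketch (our own cross-check, not part of the paper) substitutes the truncated coordinate change (3.37) into the metric (3.34) with $g=1$, treating the differentials as formal commuting symbols, and reads off the coefficients of (3.38) through $\mathcal{O}(\rho^{2})$:

```python
# Symbolic cross-check (a sketch, not from the paper) that the coordinate
# change (3.37) brings the metric (3.34) to the FG form (3.38).
import sympy as sp

rho, x, q = sp.symbols('rho x q', positive=True)
dt, dx, drho = sp.symbols('dt dx drho')   # formal differentials

z = x + rho**2/(2*x)                      # (3.37), truncated
eu = sp.cosh(q)*(2*x/rho + rho/(2*x))     # e^u, truncated

dz = sp.diff(z, x)*dx + sp.diff(z, rho)*drho
du = (sp.diff(eu, x)*dx + sp.diff(eu, rho)*drho)/eu   # du = d(e^u)/e^u
cosh_u = (eu + 1/eu)/2

# metric (3.34) with g = 1 and C = tanh q
ds2 = cosh_u**2/sp.cosh(q)**2*(dt**2 - dz**2)/z**2 - du**2

g2 = sp.expand(sp.series(sp.expand(rho**2*ds2), rho, 0, 3).removeO())
for mono in (drho**2, dt**2, dx**2, drho*dx):
    print(mono, sp.simplify(g2.coeff(mono)))
# expected (up to equivalent forms): -1, 1 - rho**2*tanh(q)**2/(2*x**2),
#   -(1 - rho**2*tanh(q)**2/(2*x**2)), 0
```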
Using this coordinate change, the expansions of the scalar fields near the
boundary are
$\displaystyle|\phi_{4}|$ $\displaystyle=|\tanh
q|\frac{p+\tilde{q}}{\sinh(p+\tilde{q})}\cdot\frac{\rho}{|x|}+\mathcal{O}(\rho^{3})$
$\displaystyle\phi_{5}$
$\displaystyle=(p+\tilde{q})-\frac{1}{2\sinh(p+\tilde{q})}\quantity(\frac{p+\tilde{q}}{\sinh(p+\tilde{q})}\tanh^{2}q+\frac{\sinh
p\tanh{\tilde{q}}}{\cosh q})\cdot\frac{\rho^{2}}{x^{2}}+\mathcal{O}(\rho^{4})$
(3.39)
where $\tilde{q}\equiv qx/|x|$ (see appendix A.3 for details). The defect is
located on the boundary at $x=0$. We can see that the relevant operator
corresponding to $\phi_{4}$ has no term proportional to $\rho\ln\rho$ in the
expansion. This implies that the source is zero and the dual operator has a
position-dependent expectation value. The marginal operator corresponding to
$\phi_{5}$ has a source term which takes different values on the two sides of
the defect, corresponding to a Janus interface where the modulus associated
with the marginal operator jumps across the interface.
Another quantity which can be calculated holographically is the entanglement
entropy for an interval $A$ using the Ryu-Takayanagi prescription [25],
$\displaystyle S_{\rm EE}={{\rm Length}(\Gamma_{A})\over 4G_{N}^{(3)}}$ (3.40)
where $\Gamma_{A}$ is the minimal curve in the bulk which ends on $\partial
A$.
There are two qualitatively different choices for the location of the interval in
an interface CFT, as shown in figure 2. First, the interval can be chosen
symmetrically around the defect [26, 27]. The minimal surface for such a
symmetric interval is particularly simple in the $\operatorname{AdS}$-sliced
coordinates (3.34), and is given by $z=z_{0}$ and $u\in(-\infty,\infty)$. The
regularized length is given by
$\displaystyle{\rm
Length}(\Gamma_{A})=\int\differential{u}=u_{\infty}-u_{-\infty}$ (3.41)
We can use (3.37) to relate the FG cutoff $\rho=\varepsilon$, which furnishes
the UV cutoff on the CFT side, to the cutoff $u_{\pm\infty}$ in the
$\operatorname{AdS}$-sliced metric,
$\displaystyle
u_{\pm\infty}=\pm\quantity\big(-\log\varepsilon+\log(2z_{0})+\log(\cosh q))$
(3.42)
Putting this together and using the expression for the central charge in terms
of $G_{N}^{(3)}$ gives
$\displaystyle S_{\rm EE}={c\over 3}\log{2z_{0}\over\varepsilon}+{c\over
3}\log(\cosh q)$ (3.43)
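The last step uses the Brown-Henneaux central charge $c=3/(2G_{N}^{(3)})$ (with the $\operatorname{AdS}$ length set to one), which is a standard result; explicitly,
$\displaystyle S_{\rm EE}=\frac{u_{\infty}-u_{-\infty}}{4G_{N}^{(3)}}=\frac{c}{6}\cdot 2\quantity\big(\log(2z_{0})-\log\varepsilon+\log(\cosh q))={c\over 3}\log{2z_{0}\over\varepsilon}+{c\over 3}\log(\cosh q).$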
Figure 2: (a) The entangling surface $A$ is symmetric around the interface
${\cal I}$; (b) the entangling surface $A$ ends at the interface ${\cal I}$.
Note that the first logarithmically divergent term is the standard expression
for the entanglement entropy for a CFT without an interface present [28],
since $2z_{0}$ is the length of the interval. The constant term is universal
in the presence of an interface and can be interpreted as the defect entropy
(sometimes called g-factor [29]) associated with the interface.
Second, we can consider an interval which lies on one side of the interface
and borders the interface [30, 31]. As shown in [32], the entangling surface
is located at $u=0$ and the entanglement entropy for an interval of length $l$
bordering the interface is given by
$\displaystyle S^{\prime}_{\rm EE}={c\over 6}\sech{q}\log{l\over\varepsilon}$
(3.44)
### 3.4 All scalars
For completeness, we also present the general solution with all $\phi_{I}$
scalars turned on. Let us define
$\displaystyle\alpha_{i}(u)$
$\displaystyle\equiv\frac{\phi_{i}\sinh\Phi}{\Phi}$ $\displaystyle i$
$\displaystyle=1,2,3,4$ $\displaystyle\beta_{\bar{\imath}}(u)$
$\displaystyle\equiv\frac{\phi_{\bar{\imath}}\sinh\Phi}{\Phi}$
$\displaystyle\bar{\imath}$ $\displaystyle=5,6,7,8$ (3.45)
As a consequence of Eq. (3.12), the ratio $\phi_{i}^{\prime}/\phi_{i}$ is the
same for all $i$ so all the $\phi_{i}$ scalars are proportional to each other.
In other words, we have $\alpha_{i}=n_{i}\alpha$ for constants $n_{i}$
satisfying $n_{i}n_{i}=1$, where $\alpha$ is given in Eq. (3.23). Then Eq.
(3.15) becomes
$\displaystyle\alpha^{\prime
2}+\beta_{\bar{\imath}}^{\prime}\beta_{\bar{\imath}}^{\prime}-\frac{(\alpha^{\prime}\alpha+\beta_{\bar{\imath}}^{\prime}\beta_{\bar{\imath}})^{2}}{1+\alpha^{2}+\beta_{\bar{\imath}}\beta_{\bar{\imath}}}=\alpha^{2}$
(3.46)
We can note that there exists a family of solutions where all
$\beta_{\bar{\imath}}$ functions satisfy
$\displaystyle\beta_{\bar{\imath}}=n_{\bar{\imath}}\beta$ (3.47)
for some function $\beta$ and constants $n_{\bar{\imath}}$ satisfying
$n_{\bar{\imath}}n_{\bar{\imath}}=1$. When this is the case, Eq. (3.46) then
further simplifies to
$\displaystyle\alpha^{\prime 2}+\beta^{\prime
2}-\frac{(\alpha^{\prime}\alpha+\beta^{\prime}\beta)^{2}}{1+\alpha^{2}+\beta^{2}}=\alpha^{2}$
(3.48)
which has already been solved in the previous section. We can prove that these
are the only solutions to Eq. (3.46) which satisfy the equations of motion.
The scalar dependence of the Lagrangian is
$\displaystyle e^{-1}\mathcal{L}$
$\displaystyle\supset-\frac{g^{2}}{4}\mathcal{P}^{I}_{u}\mathcal{P}^{I}_{u}+W$
$\displaystyle=-\frac{g^{2}}{4}\quantity(\alpha^{\prime
2}+\beta_{\bar{\imath}}^{\prime}\beta_{\bar{\imath}}^{\prime}-\frac{(\alpha^{\prime}\alpha+\beta_{\bar{\imath}}^{\prime}\beta_{\bar{\imath}})^{2}}{1+\alpha^{2}+\beta_{\bar{\imath}}\beta_{\bar{\imath}}}-(\alpha^{2}+2))$
(3.49)
If we write the $\beta_{\bar{\imath}}$ in spherical coordinates, where we call
the radius $\beta$, this becomes
$\displaystyle=-\frac{g^{2}}{4}\quantity(\alpha^{\prime 2}+\beta^{\prime
2}+\beta^{2}K^{2}-\frac{(\alpha^{\prime}\alpha+\beta^{\prime}\beta)^{2}}{1+\alpha^{2}+\beta^{2}}-(\alpha^{2}+2))$
(3.50)
where $K^{2}$ is the kinetic energy of the angular coordinates; explicitly,
$K^{2}=\theta^{\prime 2}+\sin^{2}\theta\,\phi^{\prime
2}+\sin^{2}\theta\sin^{2}\phi\,\psi^{\prime 2}$. We can treat $\alpha,\beta$,
and the three angles as the coordinates of this Lagrangian. The equation of
motion obtained by varying the Lagrangian with respect to $\alpha$ involves
only $\alpha$ and $\beta$ and their derivatives. Plugging in (3.23) for
$\alpha$, satisfying this equation of motion fixes the form of $\beta$ to be
the one found previously in Eq. (3.31). This means that Eq. (3.46) simplifies
to $\beta^{2}K^{2}=0$, so the three angles must be constant.
Therefore, the general solution is
$\displaystyle\frac{\phi\sinh\Phi}{\Phi}$ $\displaystyle=|\sinh q|\sech u$
$\displaystyle\beta$ $\displaystyle=\sinh p\cosh q+\cosh p\sinh q\tanh u$
$\displaystyle\phi_{i}$ $\displaystyle=n_{i}\phi\qquad,\quad n_{i}n_{i}=1$
$\displaystyle\frac{\phi_{\bar{\imath}}\sinh\Phi}{\Phi}$
$\displaystyle=n_{\bar{\imath}}\beta\qquad,\quad
n_{\bar{\imath}}n_{\bar{\imath}}=1$ (3.51)
## 4 Discussion
In this paper, we have presented Janus solutions for $d=3$, ${\cal N}=8$
gauged supergravity. We constructed the simplest solutions with the smallest
number of scalars, namely the $\operatorname{SO}(8,1)/\operatorname{SO}(8)$
coset. The solutions we found have only two scalars displaying a nontrivial
profile. One scalar is dual to a marginal operator $O_{2}$ with scaling
dimension $\Delta=2$ and the other scalar is dual to a relevant operator
$O_{1}$ with scaling dimension $\Delta=1$. We used the holographic
correspondence to find the dual CFT interpretation of these solutions. It is
given by a superconformal interface, with a constant source of the operator
$O_{2}$ which jumps across the interface. For the operator $O_{1}$, the source
vanishes but there is an expectation value which depends on the distance from
the interface. It would be interesting to study whether half-BPS Janus
interfaces which display these characteristics can be constructed in the two-
dimensional $\mathcal{N}=(4,4)$ SCFTs.
We considered solutions for the $\operatorname{SO}(8,1)/\operatorname{SO}(8)$
coset, but these solutions can be trivially embedded into the
$\operatorname{SO}(8,n)/\quantity\big(\operatorname{SO}(8)\times\operatorname{SO}(n))$
cosets with $n>1$. Constructing solutions with more scalars with nontrivial
profiles is in principle possible, but the explicit expressions for the
quantities involved in the BPS equations are becoming very complicated. We
also believe that the $n=1$ case already illustrates the important features of
the more general $n>1$ cosets. Another possible generalization is given by
considering more general gaugings. One important example is given by replacing
the embedding tensor (2.6) with
$\displaystyle\Theta_{IJ,KL}=\varepsilon_{ijk\ell}+\alpha\varepsilon_{\bar{\imath}\bar{\jmath}\bar{k}\bar{\ell}}$
(4.1)
This deformation produces an $\operatorname{AdS}_{3}$ vacuum which is dual to
an SCFT with a large $D^{1}(2,1;\alpha)\times D^{1}(2,1;\alpha)$
superconformal algebra. As discussed in [16], this gauging is believed to be a
truncation of type II supergravity compactified on $\operatorname{AdS}_{3}\times
S^{3}\times S^{3}\times S^{1}$ [33, 34]. It should be straightforward to adapt
the methods for finding solutions developed in the present paper to this case.
We calculated the holographic defect entropy for our solution. It would be
interesting to investigate whether this quantity can be related to the Calabi
diastasis function following [35, 36]. For this identification to work we
would have to consider the case $n=2$ for which the scalar coset is a Kähler
manifold.
We leave these interesting questions for future work.
## Acknowledgements
We would like to thank Matteo Vicino for collaboration at the initial stages
of this work and Per Kraus for useful conversations. The work of M. G. was
supported, in part, by the National Science Foundation under grant
PHY-19-14412. K. C. and M. G. are grateful to the Mani L. Bhaumik Institute
for Theoretical Physics for support.
## Appendix A Technical details
In this appendix, we present various technical details which are used in the
main part of the paper.
### A.1 $\operatorname{SO}(8)$ Gamma matrices
We are working with $8\times 8$ Gamma matrices $\Gamma^{I}_{A\dot{A}}$ and
their transposes $\Gamma^{I}_{\dot{A}A}$, which satisfy the Clifford algebra,
$\displaystyle\Gamma^{I}_{A\dot{A}}\Gamma^{J}_{\dot{A}B}+\Gamma^{J}_{A\dot{A}}\Gamma^{I}_{\dot{A}B}=2\delta^{IJ}\delta_{AB}$
(A.1)
Explicitly, we use the basis in [37],
$\displaystyle\Gamma^{8}_{A\dot{A}}$ $\displaystyle=1\otimes 1\otimes 1$
$\displaystyle\Gamma^{1}_{A\dot{A}}$ $\displaystyle=i\sigma_{2}\otimes
i\sigma_{2}\otimes i\sigma_{2}$ $\displaystyle\Gamma^{2}_{A\dot{A}}$
$\displaystyle=1\otimes\sigma_{1}\otimes i\sigma_{2}$
$\displaystyle\Gamma^{3}_{A\dot{A}}$ $\displaystyle=1\otimes\sigma_{3}\otimes
i\sigma_{2}$ $\displaystyle\Gamma^{4}_{A\dot{A}}$
$\displaystyle=\sigma_{1}\otimes i\sigma_{2}\otimes 1$
$\displaystyle\Gamma^{5}_{A\dot{A}}$ $\displaystyle=\sigma_{3}\otimes
i\sigma_{2}\otimes 1$ $\displaystyle\Gamma^{6}_{A\dot{A}}$
$\displaystyle=i\sigma_{2}\otimes 1\otimes\sigma_{1}$
$\displaystyle\Gamma^{7}_{A\dot{A}}$ $\displaystyle=i\sigma_{2}\otimes
1\otimes\sigma_{3}$ (A.2)
The matrices $\Gamma^{IJ}_{AB}$, $\Gamma^{IJ}_{\dot{A}\dot{B}}$ and similar
are defined as unit-weight antisymmetrized products of Gamma matrices with the
appropriate indices contracted. For instance,
$\displaystyle\Gamma^{IJ}_{AB}\equiv\frac{1}{2}(\Gamma^{I}_{A\dot{A}}\Gamma^{J}_{\dot{A}B}-\Gamma^{J}_{A\dot{A}}\Gamma^{I}_{\dot{A}B})$
(A.3)
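Because the basis (A.2) is built from Kronecker products of $2\times 2$ blocks, the Clifford algebra (A.1) is easy to verify numerically. The following numpy sketch (our own check, not part of the paper) constructs the eight matrices and tests (A.1):

```python
# Numerical check (a sketch) of the Clifford algebra (A.1),
# Gamma^I (Gamma^J)^T + Gamma^J (Gamma^I)^T = 2 delta^{IJ} 1_8,
# for the basis (A.2). Note i*sigma_2 is real, so all matrices are real.
import numpy as np

I2 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s3 = np.array([[1, 0], [0, -1]])
is2 = np.array([[0, 1], [-1, 0]])        # i * sigma_2

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

Gamma = [
    kron3(is2, is2, is2),   # Gamma^1
    kron3(I2, s1, is2),     # Gamma^2
    kron3(I2, s3, is2),     # Gamma^3
    kron3(s1, is2, I2),     # Gamma^4
    kron3(s3, is2, I2),     # Gamma^5
    kron3(is2, I2, s1),     # Gamma^6
    kron3(is2, I2, s3),     # Gamma^7
    kron3(I2, I2, I2),      # Gamma^8
]

for i, Gi in enumerate(Gamma):
    for j, Gj in enumerate(Gamma):
        lhs = Gi @ Gj.T + Gj @ Gi.T
        assert np.allclose(lhs, 2*(i == j)*np.eye(8))
print("Clifford algebra (A.1) holds for the basis (A.2)")
```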
### A.2 Integrability conditions
For BPS equations of the form
$\displaystyle\partial_{t}\varepsilon$
$\displaystyle=-\frac{1}{2z}\gamma_{0}\quantity\big(\gamma_{1}+f(u)+g(u)\gamma_{2})\varepsilon$
$\displaystyle\partial_{z}\varepsilon$
$\displaystyle=-\frac{1}{2z}\gamma_{1}\quantity\big(f(u)+g(u)\gamma_{2})\varepsilon$
$\displaystyle\partial_{u}\varepsilon$
$\displaystyle=\quantity\big(F(u)+G(u)\gamma_{2})\varepsilon$
where $f,g,F,G$ are matrices acting on $\varepsilon$ that commute with
$\gamma_{a}$, the integrability conditions are
$\displaystyle t,z:\qquad 0$
$\displaystyle=(1+f^{2}+g^{2})\varepsilon+[f,g]\gamma_{2}\varepsilon$
$\displaystyle t,u:\qquad 0$
$\displaystyle=(f^{\prime}+[f,F]-\\{g,G\\})\varepsilon+(g^{\prime}+[g,F]+\\{f,G\\})\gamma_{2}\varepsilon$
$\displaystyle z,u:\qquad\phantom{0}$ $\displaystyle\qquad\text{same as for
}t,u$
### A.3 Scalar asymptotics
The asymptotic expansions of the $\phi_{4}$ and $\phi_{5}$ scalar fields, as
given in (3.32), in the limits $u\to\pm\infty$ are
$\displaystyle|\phi_{4}|=$ $\displaystyle\;2|\sinh q|\frac{p\pm q}{\sinh(p\pm
q)}e^{\mp u}$ $\displaystyle-\frac{2|\sinh q|}{\sinh^{2}(p\pm
q)}\quantity(\frac{p\pm q}{\sinh(p\pm q)}(\sinh^{2}p+\sinh^{2}q)\pm 2\sinh
p\sinh q)e^{\mp 3u}+\mathcal{O}(e^{\mp 5u})$ $\displaystyle\phi_{5}=$
$\displaystyle\;(p\pm q)-\frac{2}{\sinh(p\pm q)}\quantity(\frac{p\pm
q}{\sinh(p\pm q)}\sinh^{2}q\pm\sinh p\sinh q)e^{\mp 2u}+\mathcal{O}(e^{\mp
4u})$ (A.4)
## References
* [1] D. Bak, M. Gutperle and S. Hirano, _A Dilatonic deformation of AdS(5) and its field theory dual_ , _JHEP_ 05 (2003) 072, [hep-th/0304129].
* [2] E. D’Hoker, J. Estes and M. Gutperle, _Exact half-BPS Type IIB interface solutions. I. Local solution and supersymmetric Janus_ , _JHEP_ 06 (2007) 021, [0705.0022].
* [3] E. D’Hoker, J. Estes and M. Gutperle, _Interface Yang-Mills, supersymmetry, and Janus_ , _Nucl. Phys. B_ 753 (2006) 16–41, [hep-th/0603013].
* [4] E. D’Hoker, J. Estes, M. Gutperle and D. Krym, _Exact Half-BPS Flux Solutions in M-theory. I: Local Solutions_ , _JHEP_ 08 (2008) 028, [0806.0605].
* [5] E. D’Hoker, J. Estes, M. Gutperle and D. Krym, _Janus solutions in M-theory_ , _JHEP_ 06 (2009) 018, [0904.3313].
* [6] A. Clark and A. Karch, _Super Janus_ , _JHEP_ 10 (2005) 094, [hep-th/0506265].
* [7] N. Bobev, K. Pilch and N. P. Warner, _Supersymmetric Janus Solutions in Four Dimensions_ , _JHEP_ 06 (2014) 058, [1311.4883].
* [8] M. Suh, _Supersymmetric Janus solutions in five and ten dimensions_ , _JHEP_ 09 (2011) 064, [1107.2796].
* [9] M. Chiodaroli, E. D’Hoker, Y. Guo and M. Gutperle, _Exact half-BPS string-junction solutions in six-dimensional supergravity_ , _JHEP_ 12 (2011) 086, [1107.1722].
* [10] M. Gutperle, J. Kaidi and H. Raj, _Janus solutions in six-dimensional gauged supergravity_ , _JHEP_ 12 (2017) 018, [1709.09204].
* [11] K. Pilch, A. Tyukov and N. P. Warner, _$\mathcal{N}=2$ Supersymmetric Janus Solutions and Flows: From Gauged Supergravity to M Theory_, _JHEP_ 05 (2016) 005, [1510.08090].
* [12] P. Karndumri, _Supersymmetric Janus solutions in four-dimensional N=3 gauged supergravity_ , _Phys. Rev. D_ 93 (2016) 125012, [1604.06007].
* [13] N. Bobev, F. F. Gautason, K. Pilch, M. Suh and J. Van Muiden, _Janus and J-fold Solutions from Sasaki-Einstein Manifolds_ , _Phys. Rev. D_ 100 (2019) 081901, [1907.11132].
* [14] N. Bobev, F. F. Gautason, K. Pilch, M. Suh and J. van Muiden, _Holographic Interfaces in N=4 SYM: Janus and J-folds_ , 2003.09154.
* [15] A. Clark, D. Freedman, A. Karch and M. Schnabl, _Dual of the Janus solution: An interface conformal field theory_ , _Phys. Rev. D_ 71 (2005) 066003, [hep-th/0407073].
* [16] H. Nicolai and H. Samtleben, _N=8 matter coupled AdS(3) supergravities_ , _Phys. Lett. B_ 514 (2001) 165–172, [hep-th/0106153].
* [17] H. Samtleben and O. Sarioglu, _Consistent $S^{3}$ reductions of six-dimensional supergravity_, _Phys. Rev. D_ 100 (2019) 086002, [1907.08413].
* [18] B. de Wit, I. Herger and H. Samtleben, _Gauged locally supersymmetric D = 3 nonlinear sigma models_ , _Nucl. Phys. B_ 671 (2003) 175–216, [hep-th/0307006].
* [19] P. Breitenlohner and D. Z. Freedman, _Positive Energy in anti-De Sitter Backgrounds and Gauged Extended Supergravity_ , _Phys. Lett. B_ 115 (1982) 197–201.
* [20] I. R. Klebanov and E. Witten, _AdS / CFT correspondence and symmetry breaking_ , _Nucl. Phys. B_ 556 (1999) 89–114, [hep-th/9905104].
* [21] E. Witten, _Multitrace operators, boundary conditions, and AdS / CFT correspondence_ , hep-th/0112258.
* [22] D. Marolf and S. F. Ross, _Boundary Conditions and New Dualities: Vector Fields in AdS/CFT_ , _JHEP_ 11 (2006) 085, [hep-th/0606113].
* [23] I. Papadimitriou and K. Skenderis, _Correlation functions in holographic RG flows_ , _JHEP_ 10 (2004) 075, [hep-th/0407071].
* [24] K. Jensen and A. O’Bannon, _Holography, Entanglement Entropy, and Conformal Field Theories with Boundaries or Defects_ , _Phys. Rev. D_ 88 (2013) 106006, [1309.4523].
* [25] S. Ryu and T. Takayanagi, _Holographic derivation of entanglement entropy from AdS/CFT_ , _Phys. Rev. Lett._ 96 (2006) 181602, [hep-th/0603001].
* [26] T. Azeyanagi, A. Karch, T. Takayanagi and E. G. Thompson, _Holographic calculation of boundary entropy_ , _JHEP_ 03 (2008) 054, [0712.1850].
* [27] M. Chiodaroli, M. Gutperle and L.-Y. Hung, _Boundary entropy of supersymmetric Janus solutions_ , _JHEP_ 09 (2010) 082, [1005.4433].
* [28] P. Calabrese and J. L. Cardy, _Entanglement entropy and quantum field theory_ , _J. Stat. Mech._ 0406 (2004) P06002, [hep-th/0405152].
* [29] I. Affleck and A. W. Ludwig, _Universal noninteger ’ground state degeneracy’ in critical quantum systems_ , _Phys. Rev. Lett._ 67 (1991) 161–164.
* [30] K. Sakai and Y. Satoh, _Entanglement through conformal interfaces_ , _JHEP_ 12 (2008) 001, [0809.4548].
* [31] E. M. Brehm and I. Brunner, _Entanglement entropy through conformal interfaces in the 2D Ising model_ , _JHEP_ 09 (2015) 080, [1505.02647].
* [32] M. Gutperle and J. D. Miller, _Entanglement entropy at holographic interfaces_ , _Phys. Rev. D_ 93 (2016) 026006, [1511.08955].
* [33] J. de Boer, A. Pasquinucci and K. Skenderis, _AdS / CFT dualities involving large 2-D N=4 superconformal symmetry_ , _Adv. Theor. Math. Phys._ 3 (1999) 577–614, [hep-th/9904073].
* [34] S. Gukov, E. Martinec, G. W. Moore and A. Strominger, _The Search for a holographic dual to AdS(3) x S**3 x S**3 x S**1_ , _Adv. Theor. Math. Phys._ 9 (2005) 435–525, [hep-th/0403090].
* [35] C. P. Bachas, I. Brunner, M. R. Douglas and L. Rastelli, _Calabi’s diastasis as interface entropy_ , _Phys. Rev. D_ 90 (2014) 045004, [1311.2202].
* [36] E. D’Hoker and M. Gutperle, _Holographic entropy and Calabi’s diastasis_ , _JHEP_ 10 (2014) 093, [1406.5124].
* [37] M. B. Green, J. Schwarz and E. Witten, _SUPERSTRING THEORY. VOL. 1: INTRODUCTION_. Cambridge Monographs on Mathematical Physics. 7, 1988.
# Opportunistic Qualitative Planning in Stochastic Systems with Preferences
over Temporal Logic Objectives
Abhishek Ninad Kulkarni∗ and Jie Fu
A. N. Kulkarni and J. Fu are with the Dept. of Electrical and Computer
Engineering, University of Florida, Gainesville, FL 32603 USA. <EMAIL_ADDRESS>
###### Abstract
Preferences play a key role in determining what goals/constraints to satisfy
when not all constraints can be satisfied simultaneously. In this work, we
study preference-based planning in a stochastic system modeled as a Markov
decision process, subject to a possible incomplete preference over temporally
extended goals. Our contributions are three folds: First, we introduce a
preference language to specify preferences over temporally extended goals.
Second, we define a novel automata-theoretic model to represent the preorder
induced by given preference relation. The automata representation of
preferences enables us to develop a preference-based planning algorithm for
stochastic systems. Finally, we show how to synthesize opportunistic
strategies that achieves an outcome that improves upon the current satisfiable
outcome, with positive probability or with probability one, in a stochastic
system. We illustrate our solution approaches using a robot motion planning
example.
## I Introduction
Preference-based planning decides what constraints to satisfy when not all
constraints can be achieved [1]. In this paper, we study a class of
qualitative, preference-based probabilistic planning problems in which the
agent aims to strategically exploit the _opportunities_ that arise due to
stochasticity in its environment to achieve a more preferred outcome than what
may be achieved from its initial state. Such problems are encountered in many
applications of autonomous systems.
In existing methods for probabilistic planning with temporal goals, the
desired behavior of the system is specified by a temporal logic formula [2],
and the goal is to compute a policy that either maximizes the probability of
satisfying the formula [3, 4], or enforces the satisfaction as a constraint
[5, 6]. In recent work, preference-based planning with temporal logic
objectives has been studied: minimum violation planning in a deterministic
system [7] decides which low-priority constraints to violate. Automated
specification-revision is proposed in [8] where the formula can be revised
with a cost and the planning problem is formulated into a multi-objective
Markov Decision Process (MDP) that trades off minimizing the cost of revision
and maximizing the probability of satisfying the revised formula. [9]
introduced weights with Boolean and temporal operators in signal temporal
logic to specify the importance of satisfying each subformula and the priority
in the timing of satisfaction. They developed a gradient-based optimization
method to maximize the weighted satisfaction in deterministic dynamical
systems. Robust and recovery specifications are introduced by [10] and pre-
specify what behaviors are expected when the part of the system specification
(i.e., the environment assumption) fails to be satisfied. Existing preference-
based planning methods with temporal goals assume the preference relation to
be _complete_.
Unfortunately, in many applications, the completeness assumption does not
always hold. For instance, it can be impractical to elicit a user’s preference
between every pair of outcomes when the set of outcomes is large; or in some
situations, such as the trolley problem [11], the outcomes (sacrificing
passengers or pedestrians) are incomparable. Preference languages have been
proposed to represent both the complete and incomplete preferences over
propositional formulas [12] and temporal logic formulas [13]. For planning,
CP-net and its variants [14, 15] have been proposed as a computational model.
But they are defined over propositional preferences. To the best of our
knowledge, there is no computational model that can express incomplete
preferences over temporal goals. Such a model is needed to facilitate planning
in stochastic environments.
In this paper, we propose a novel automata-theoretic approach to qualitative
planning in MDPs with incomplete preferences over temporal logic objectives.
Our approach consists of three steps. First, we express (incomplete)
preferences over the satisfaction of temporal goals specified using a fragment
of Linear Temporal Logic (LTL). Unlike propositional preferences that are
interpreted over states, preferences over temporal goals are interpreted over
infinite words. Second, we define an _automata-theoretic model_ to represent
the preorder induced by the preference relation and describe a procedure to
construct the automata-theoretic model given a preference formula. Thirdly, we
present an algorithm to solve preference-based strategies in a stochastic
system modeled as a labeled MDP. We presented _safe and positively improving_
and _safe and almost-surely improving_ strategies, that identify and exploit
opportunities for improvements with positive probability and probability one,
respectively. A running example is employed to illustrate the notions and
solution approaches.
## II Preliminaries
Notation. Given a finite set $X$, let $\mathcal{D}(X)$ be the set of
probability distributions over $X$. Let $\Sigma$ be an alphabet (a finite set
of symbols). We denote the set of finite (resp., infinite) words that can be
generated using $\Sigma$ by $\Sigma^{\ast}$ (resp., $\Sigma^{\omega}$). Given
a word $w\in\Sigma^{\omega}$, a prefix of $w$ is a word $u\in\Sigma^{\ast}$
such that there exists $v\in\Sigma^{\omega}$, $w=uv$. We denote the set of all
finite prefixes of $w$ by $\mathsf{Pref}(w)$.
We consider a class of decision-making problems in stochastic systems modeled
as a labeled MDP [16].
###### Definition 1 (Labeled MDP).
A labeled MDP is a tuple $M=\langle S,A,P,\mathcal{AP},L\rangle,$ where $S$
and $A$ are finite state and action sets, $P:S\times
A\rightarrow\mathcal{D}(S)$ is the transition probability function such that
$P(s^{\prime}\mid s,a)$ is the probability of reaching $s^{\prime}\in S$ given
that action $a\in A$ is chosen at state $s\in S$, $\mathcal{AP}$ is a finite
set of atomic propositions, and $L:S\rightarrow 2^{\mathcal{AP}}$ is a
labeling function that maps each state to a set of atomic propositions which
are true in that state.
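For concreteness, a labeled MDP can be encoded directly; the following Python sketch (ours, with illustrative names, not from the paper) mirrors Definition 1 and is reused in the sketches below:

```python
# A minimal encoding of a labeled MDP (Definition 1); names are illustrative.
from dataclasses import dataclass
from typing import Dict, FrozenSet, Hashable, Tuple

State = Hashable
Action = Hashable

@dataclass
class LabeledMDP:
    states: set
    actions: set
    # P[(s, a)] maps each successor state to its probability (summing to 1)
    P: Dict[Tuple[State, Action], Dict[State, float]]
    # L[s] is the set of atomic propositions true at s
    L: Dict[State, FrozenSet[str]]

    def successors(self, s: State, a: Action) -> set:
        """States reachable from s under a with positive probability."""
        return {t for t, pr in self.P.get((s, a), {}).items() if pr > 0}
```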
A finite-memory, randomized strategy in the MDP is a function
$\pi:S^{\ast}\rightarrow\mathcal{D}(A)$. A Markovian, randomized strategy in
the MDP is a function $\pi:S\rightarrow\mathcal{D}(A)$. Given an MDP $M$ and
an initial distribution $\nu_{0}$, a strategy $\pi$ induces a stochastic
process $M_{\pi}=\\{S_{t}\mid t\geq 0\\}$, where $S_{k}$ is the random variable
for the $k$-th state in the stochastic process $M_{\pi}$ and it holds that
$S_{0}\sim\nu_{0}$, $S_{i+1}\sim P(\cdot\mid S_{i},a_{i})$, and
$a_{i}\sim\pi(\cdot\mid S_{0}\ldots S_{i})$ for $i\geq 0$.
We express the objective of the planning agent as preferences over a set of
outcomes, each of which is expressed by a syntactically co-safe LTL (scLTL)
formula [17].
###### Definition 2.
Given a set of atomic propositions $\mathcal{AP}$, an scLTL formula is defined
inductively as follows:
$\varphi\coloneqq p\mid\neg
p\mid\varphi\land\varphi\mid\bigcirc\,\varphi\mid\varphi\mbox{$\,{\sf
U}\,$}\varphi,$
where $p\in\mathcal{AP}$ is an atomic proposition. The operators $\neg$
(negation) and $\land$ (and) are propositional logic operators. The operators
$\bigcirc\,$ (next) and $\,{\sf U}\,$ (until) are temporal operators [17]. The
operator $\Diamond\,$ (eventually) is derived using $\,{\sf U}\,$ as follows:
$\Diamond\,\varphi=\top\mbox{$\,{\sf U}\,$}\varphi$ where $\top$ is
unconditionally true. The formula $\Diamond\,\varphi$ is true if $\varphi$
holds in some future time.
The scLTL formulas are a subclass of LTL formulas with a special property that
an infinite word satisfying an scLTL formula only needs to have a ‘good’ prefix
(formalized after Definition 3). The set of good prefixes can be compactly
represented as the language accepted by a Deterministic Finite Automaton
(DFA).
###### Definition 3.
A deterministic finite automaton (DFA) is a tuple $\mathcal{A}=\langle
Q,\Sigma,\delta,{q_{0}},F\rangle,$ where $Q$ is a finite set of states;
$\Sigma=2^{\mathcal{AP}}$ is a finite set of symbols called the alphabet;
$\delta:Q\times\Sigma\rightarrow Q$ is a deterministic transition function
that maps a state and a symbol to a next state. The transition function is
extended recursively over words as follows: $\delta(q,\sigma
u)=\delta(\delta(q,\sigma),u)$ given $\sigma\in\Sigma$ and
$u\in\Sigma^{\ast}$; ${q_{0}}\in Q$ is the initial state; $F\subseteq Q$ is a
set of accepting states. A word $u$ is accepted by $\mathcal{A}$ if
$\delta({q_{0}},u)\in F$.
Given an scLTL formula $\varphi$ and an infinite word $w\in\Sigma^{\omega}$, a
‘good’ prefix is a finite word $u\in\Sigma^{\ast}$ such that
$u\in\mathsf{Pref}(w)$ and $u$ is accepted by the DFA, $\mathcal{A}$. A word
$w\in\Sigma^{\omega}$ satisfies an scLTL formula $\varphi$, denoted by
$w\models\varphi$, if $w$ has a good prefix. The set of words satisfying an
scLTL formula $\varphi$ is denoted by
$\mathsf{Mod}(\varphi)=\\{w\in\Sigma^{\omega}\mid w\models\varphi\\}$. For an
scLTL formula, all accepting states of its corresponding DFA are absorbing,
i.e., $\delta(q,\sigma)=q$ for any $q\in F$ and $\sigma\in\Sigma$. We assume
the transition function of DFA to be _complete_. That is, $\delta(q,\sigma)$
is defined for any pair $(q,\sigma)\in Q\times\Sigma$. An incomplete
transition function can be made complete by introducing a sink state and
redirecting all undefined transitions to that sink state.
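A DFA for good prefixes, with the completion just described, can be sketched as follows (our own illustration; names are ours):

```python
# A complete DFA over symbols that are sets of atomic propositions
# (Definition 3); undefined moves are redirected to a sink state.
from dataclasses import dataclass
from typing import Dict, FrozenSet, Hashable, Iterable, Tuple

Symbol = FrozenSet[str]

@dataclass
class DFA:
    states: set
    delta: Dict[Tuple[Hashable, Symbol], Hashable]
    q0: Hashable
    final: set
    sink: Hashable = "sink"          # completion target

    def run(self, word: Iterable[Symbol]) -> Hashable:
        """Extended transition function over finite words."""
        q = self.q0
        for sigma in word:
            q = self.delta.get((q, sigma), self.sink)
        return q

    def accepts(self, word: Iterable[Symbol]) -> bool:
        """True iff the finite word is a good prefix."""
        return self.run(word) in self.final
```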
An infinite path $\rho=s_{0}s_{1}\ldots$ in a labeled MDP induces a word
$w=L(s_{0})L(s_{1})\ldots$ in the DFA. We say the path $\rho$ satisfies an
scLTL formula $\varphi$ if and only if the induced word $w$ satisfies the
formula, i.e., $w\models\varphi$.
###### Definition 4 (Almost-Sure/Positive Winning Strategy).
Given an MDP $M$ and an scLTL formula $\varphi$, a strategy
$\pi:S^{\ast}\rightarrow\mathcal{D}(A)$ is said to be almost-sure (resp.,
positive) winning if, in the stochastic process $M_{\pi}$ induced by $\pi$,
the formula $\varphi$ can be satisfied with probability one (resp., with
positive probability). Formally, in the stochastic process
$M_{\pi}=\\{S_{t}\mid t\geq 0\\}$,
$\mathbf{Pr}(S_{0}S_{1}\ldots\models\varphi)=1$ (resp. $>0$).
The set of _states_ in the MDP $M$, starting from which the agent has an
almost-sure (resp. positive) winning strategy to satisfy an scLTL formula
$\varphi$ is called the _almost-sure (resp., positive) winning region_. Given
an MDP and an scLTL formula, the product operation [18] reduces the problem of
computing almost-sure (resp., positive) winning region to that of computing
the set of states from which a subset of final states in the product MDP can
be reached with almost-surely (resp., positive probability). It is known that
there exists a _memoryless, almost-sure winning strategy_ $\pi$ to ensure the
subset of final states is reached with probability one from a state in the
almost-sure winning region. Polynomial (resp., linear) time algorithms to
compute almost-sure (resp., positive) winning strategies in MDPs with
reachability objectives can be found in the book by [16, Chap. 10].
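The positive winning region is particularly simple to compute, since a state is positive winning for reachability iff some target state is reachable in the underlying graph of the MDP. A sketch (ours, with illustrative names, building on the LabeledMDP class above; the almost-sure region instead requires the iterative algorithm of [16, Chap. 10]):

```python
# Positive winning region for reachability in an MDP (a sketch): backward
# graph reachability from the target set over positive-probability edges.
def positive_winning(mdp: "LabeledMDP", target: set) -> set:
    win = set(target)
    changed = True
    while changed:
        changed = False
        for s in mdp.states - win:
            # s is positive winning if some action can reach win
            if any(mdp.successors(s, a) & win for a in mdp.actions):
                win.add(s)
                changed = True
    return win
```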
### II-A Running Example
We use a motion planning problem for a cleaning robot to illustrate the
concepts discussed in this paper. The robot is to operate in a $5\times 5$
stochastic gridworld as shown in Figure 1. At every step, the robot must
choose to move in one of the North, East, South, West directions. If the
action would move the robot into an obstacle cell (shown in black), the robot returns to the
cell it started from. If the robot enters a cell marked with green arrows, it
may either stay in that cell or move into an adjacent cell along a direction
indicated by the arrows each with a positive probability. If the robot moves
into any cell with no arrows, it remains in that cell with probability one.
The robot has a limited battery capacity measured in units. Every action costs
$1$ unit of battery. We consider two preference objectives for the robot.
Figure 1: A gridworld MDP with $6$ regions of interest $A$-$F$.
1. (PO1)
The robot must visit $A,B$ and/or $E$, given the preference that: visiting $B$
is strictly preferred to visiting $A$, and visiting $E$ is strictly preferred
to visiting $A$.
2. (PO2)
The robot must visit exactly one of $A,B,C,D$ or $F$, given the preference
that: visiting $B$ is strictly preferred to visiting $A$, visiting $D$ is
strictly preferred to visiting $B$, visiting $F$ is strictly preferred to
visiting $C$, and visiting $B$ is indifferent to visiting $C$.
The preference relations expressed by both the objectives are incomplete. In
the first objective, neither the relation between $B$ and $E$ is given nor can
it be deduced using the properties (e.g., transitivity) of preferences. Hence,
visiting $B$ and visiting $E$ are incomparable outcomes due to incompletely
known preferences.
In the second objective, since $B$ and $C$ are indifferent, it follows by
transitivity that visiting $C$ is strictly preferred to visiting $A$, and
visiting $D$ is strictly preferred to visiting $C$. However, visiting $D$ is
incomparable to visiting $F$ since no relation is either given or can be
deduced between them.
## III Preference Modeling
In this section, we propose a language to compactly represent incomplete
preferences over temporal goals.
Let $\Phi=\\{\varphi_{1},\ldots,\varphi_{n}\\}$ be an indexed set of outcomes,
i.e., temporal goals expressed by scLTL formulas.
###### Definition 5.
A preference on $\Phi$ is a reflexive binary relation $\trianglerighteq$ on
$\Phi$. For any $1\leq i,j\leq n$, a pair of outcomes
$(\varphi_{i},\varphi_{j})\in~{}\trianglerighteq$ means that satisfying
$\varphi_{i}$ is considered “at least as good as” satisfying $\varphi_{j}$.
We also denote $(\varphi_{i},\varphi_{j})\in~{}\trianglerighteq$ by
$\varphi_{i}\trianglerighteq\varphi_{j}$. Given any pair of outcomes,
$\varphi_{i},\varphi_{j}\in\Phi$, exactly one of the following four relations
holds:
1. 1.
$\varphi_{i}$ is _indifferent_ to $\varphi_{j}$:
$\varphi_{i}\trianglerighteq\varphi_{j}$ and
$\varphi_{j}\trianglerighteq\varphi_{i}$,
2. 2.
$\varphi_{i}$ is _strictly preferred_ to $\varphi_{j}$:
$\varphi_{i}\trianglerighteq\varphi_{j}$ and
$\varphi_{j}\not\trianglerighteq\varphi_{i}$,
3. 3.
$\varphi_{j}$ is _strictly preferred_ to $\varphi_{i}$:
$\varphi_{j}\trianglerighteq\varphi_{i}$ and
$\varphi_{i}\not\trianglerighteq\varphi_{j}$,
4. 4.
$\varphi_{i}$ is _incomparable_ to $\varphi_{j}$:
$\varphi_{i}\not\trianglerighteq\varphi_{j}$ and
$\varphi_{j}\not\trianglerighteq\varphi_{i}$.
When the agent is indifferent to two outcomes $\varphi_{i},\varphi_{j}$, it
may choose to satisfy either one of them. This can equivalently be expressed
in scLTL by the disjunction of the two formulas. Based on this observation, we
hereby assume that for any two outcomes $\varphi_{i},\varphi_{j}\in\Phi$,
$\varphi_{i}\trianglerighteq\varphi_{j}$ and
$\varphi_{j}\trianglerighteq\varphi_{i}$ do not hold simultaneously, i.e., no
two outcomes in $\Phi$ are indifferent to each other. As a result, the binary
relation $\trianglerighteq$ on $\Phi$ can equivalently be expressed using the
two sets $P,J\subseteq\Phi\times\Phi$ constructed as follows: given a pair of
outcomes $\varphi_{i},\varphi_{j}\in\Phi$, $1\leq i,j\leq n$,
1. 1.
$(\varphi_{i},\varphi_{j})\in P$ iff $\varphi_{i}$ is strictly preferred to
$\varphi_{j}$,
2. 2.
$(\varphi_{i},\varphi_{j})\in J$ iff $\varphi_{i}$ is incomparable to
$\varphi_{j}$.
###### Remark 1.
We closely follow the notation in [19, Ch. 2]. In contrast, we use the
properties of scLTL formulas to simplify the notation to avoid expressing
indifference explicitly.
Notice that the sets $P,J$ induce a mutually exclusive and exhaustive
partition of $\Phi\times\Phi$. Let
$P^{-}=\\{(\varphi_{j},\varphi_{i})\in\Phi\times\Phi\mid(\varphi_{i},\varphi_{j})\in
P\\}$. Then, $P\cup P^{-}\cup J=\Phi\times\Phi$ and $P\cap P^{-}=P^{-}\cap
J=J\cap P=\emptyset$.
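In code, the preference structure is just the pair of sets $P$ and $J$ over outcome indices, and the partition property can be checked directly over the off-diagonal pairs (each outcome is trivially indifferent to itself). A sketch with illustrative names:

```python
# Encoding of the preference structure <P, J> over outcome indices 0..n-1,
# with a check of the partition property for the off-diagonal pairs.
from itertools import product

def check_partition(n: int, P: set, J: set) -> bool:
    Pinv = {(j, i) for (i, j) in P}
    off_diag = {(i, j) for i, j in product(range(n), repeat=2) if i != j}
    disjoint = not (P & Pinv) and not (Pinv & J) and not (J & P)
    return disjoint and (P | Pinv | J) == off_diag

# (PO1) with indices A=0, B=1, E=2:
P1 = {(1, 0), (2, 0)}
J1 = {(1, 2), (2, 1)}
assert check_partition(3, P1, J1)
```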
###### Example 1.
Consider the running example introduced in Sect. II-A. In preference objective
(PO1), since there is no constraint on visiting multiple regions of interest,
each outcome can be represented using the “eventually” operator. Hence, the set of
outcomes is given by $\Phi=\\{\Diamond\,A,\Diamond\,B,\Diamond\,E\\}$. The
components of preference structure $\langle P,J\rangle$ are given as follows:
$P=\\{(\Diamond\,B,\Diamond\,A),(\Diamond\,E,\Diamond\,A)\\}$, and
$J=\\{(\Diamond\,B,\Diamond\,E),(\Diamond\,E,\Diamond\,B)\\}$.
In preference objective (PO2), since exactly one region is to be visited, the
outcomes can be represented as scLTL formulas: $\varphi_{A}=\neg(B\lor C\lor
D\lor F)\mbox{$\,{\sf U}\,$}A$, $\varphi_{B}=\neg(A\lor C\lor D\lor
F)\mbox{$\,{\sf U}\,$}B$, and so on. Because of the indifference, we replace
$\varphi_{B}$ and $\varphi_{C}$ by their disjunction,
$\varphi_{B}\lor\varphi_{C}$. Hence, the set of outcomes is
$\Phi=\\{\varphi_{A},\varphi_{B}\lor\varphi_{C},\varphi_{D},\varphi_{F}\\}$.
And, the components of preference structure are given by:
$P=\\{(\varphi_{B}\lor\varphi_{C},\varphi_{A}),(\varphi_{D},\varphi_{B}\lor\varphi_{C}),(\varphi_{F},\varphi_{B}\lor\varphi_{C}),(\varphi_{D},\varphi_{A}),(\varphi_{F},\varphi_{A})\\}$,
and $J=\\{(\varphi_{F},\varphi_{D}),(\varphi_{D},\varphi_{F})\\}$.
Because an scLTL formula is interpreted over infinite words, the preference
structure $\trianglerighteq$ induces a preference structure $\succeq$ on the
set of infinite words in $\Sigma^{\omega}$. Therefore, we can define a
preorder $\succeq\subseteq\Sigma^{\omega}\times\Sigma^{\omega}$ based on the
preference structure $\trianglerighteq$ (equivalently, on the tuple $\langle
P,J\rangle$). This is a non-trivial task because any word in $\Sigma^{\omega}$
could satisfy more than one of the scLTL formulas in $\Phi$. Thus, to
determine whether a word is strictly preferred over another, we need a way to
compare two arbitrary subsets of $\Phi$ that contain outcomes satisfied by
these two words.
###### Definition 6 (Most-Preferred Satisfied Outcomes).
Given a word $w\in\Sigma^{\omega}$, let
$\mathsf{Outcomes}(w)=\\{\varphi\in\Phi\mid w\models\varphi\\}$ be the set of
outcomes satisfied by $w$. Given a subset $\Psi\subseteq\Phi$, let
$\mathsf{MP}(\Psi)=\\{\varphi\in\Psi\mid\nexists\varphi^{\prime}\in\Psi:(\varphi^{\prime},\varphi)\in
P\\}$ and let $\mathsf{MP}(w)=\mathsf{MP}(\mathsf{Outcomes}(w))$ be the set of
_most-preferred outcomes_ satisfied by the word $w$.
###### Lemma 1.
Given a word $w\in\Sigma^{\omega}$, any pair
$\varphi,\varphi^{\prime}\in\mathsf{MP}(w)$ are incomparable to each other.
The proof follows from the definition.
###### Definition 7 (Semantics).
Given two words $w_{1},w_{2}\in\Sigma^{\omega}$, $w_{1}$ is strictly preferred
to $w_{2}$, denoted $w_{1}\succ w_{2}$, if and only if the following
conditions hold: 1. there exist $\varphi_{i}\in\mathsf{MP}(w_{1})$ and
$\varphi_{j}\in\mathsf{MP}(w_{2})$ such that $(\varphi_{i},\varphi_{j})\in P$,
and 2. for every pair $\varphi_{i}\in\mathsf{MP}(w_{1})$ and
$\varphi_{j}\in\mathsf{MP}(w_{2})$, $(\varphi_{j},\varphi_{i})\notin P$. Word
$w_{1}$ is indifferent to $w_{2}$, denoted $w_{1}\sim w_{2}$, if and only if
$\mathsf{MP}(w_{1})=\mathsf{MP}(w_{2})$. Two words $w_{1}$ and $w_{2}$ are
incomparable, denoted $w_{1}\|w_{2}$, if neither $w_{1}\succ w_{2}$, nor
$w_{2}\succ w_{1}$, nor $w_{1}\sim w_{2}$ holds.
In words, $w_{1}$ is strictly preferred to $w_{2}$ iff: first, $w_{1}$
satisfies at least one scLTL formula that is strictly preferred to some scLTL
formula satisfied by $w_{2}$. Second, every scLTL formula satisfied by $w_{1}$
is either strictly preferred to, or incomparable to any scLTL formula
satisfied by $w_{2}$.
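These semantics translate directly into code. The following sketch (ours, with illustrative names) computes $\mathsf{MP}$ per Definition 6 and the strict-preference test of Definition 7, with outcomes represented by indices and $P$ as above:

```python
# MP (Definition 6) and the strict-preference test (Definition 7).
def MP(outcomes: set, P: set) -> set:
    """Outcomes in the set not strictly dominated by another member."""
    return {i for i in outcomes if not any((j, i) in P for j in outcomes)}

def strictly_preferred(out1: set, out2: set, P: set) -> bool:
    """True iff w1 is strictly preferred to w2, where out_k = Outcomes(w_k)."""
    mp1, mp2 = MP(out1, P), MP(out2, P)
    cond1 = any((i, j) in P for i in mp1 for j in mp2)
    cond2 = all((j, i) not in P for i in mp1 for j in mp2)
    return cond1 and cond2

# Example 2, (PO2) with indices A=0, (B or C)=1, D=2, F=3:
P2 = {(1, 0), (2, 1), (3, 1), (2, 0), (3, 0)}
assert strictly_preferred({0, 2, 3}, {0, 1}, P2)   # w1 strictly preferred
```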
###### Example 2.
Consider preference objective (PO2). Consider two paths $\rho_{1},\rho_{2}$ in
Fig. 1 that sequentially visit $A,F,D$ and $A,C$, respectively. Let
$w_{1}=L(\rho_{1})$, $w_{2}=L(\rho_{2})$ be the words induced by
$\rho_{1},\rho_{2}$, respectively. For the word $w_{1}$, we have
$\mathsf{Outcomes}(w_{1})=\\{\varphi_{A},\varphi_{D},\varphi_{F}\\}$ and
$\mathsf{MP}(w_{1})=\\{\varphi_{D},\varphi_{F}\\}$ since visiting $D$ and
visiting $F$ are each strictly preferred to visiting $A$, while visiting $D$
and visiting $F$ are incomparable. Similarly,
$\mathsf{MP}(w_{2})=\\{\varphi_{C}\lor\varphi_{B}\\}$. Therefore, we have
$w_{1}\succ w_{2}$: condition (1) of the strict preference semantics holds
for the pair $(\varphi_{D},\varphi_{C}\lor\varphi_{B})$, and condition (2) is
also satisfied because $\varphi_{F}$ is incomparable to
$\varphi_{C}\lor\varphi_{B}$.
## IV Automata-Theoretic Computational Model for Incomplete Preferences
We now introduce a novel automata-theoretic computational model called a
preference DFA.
###### Definition 8 (Preference DFA).
A preference DFA is the tuple
$\mathcal{B}=\langle Q,\Sigma,\delta,{q_{0}},F,G\rangle,$
where $Q,\Sigma,\delta,{q_{0}}$ are the (finite) set of states, the alphabet,
the deterministic transition function, and an initial state, similar to these
components in a DFA. $F\subseteq Q$ is a set of final states. The last
component $G=(\mathcal{X},E)$ is a preference graph, where each node
$X\in\mathcal{X}$ represents a subset of the final states $F$ such that $X_{i}\cap
X_{j}=\emptyset$ for every pair of distinct $X_{i},X_{j}\in\mathcal{X}$. The edge set
$E\subseteq\mathcal{X}\times\mathcal{X}$ is a set of directed edges.
Intuitively, a preference DFA $\mathcal{B}$ encodes the preference relation
$\succeq$ over classes of words (languages in $\Sigma^{\omega}$) by defining a
preorder over the acceptance conditions (sets of final states). Next, we
describe the construction of a preference DFA from a preference structure.
Given a preference structure $\trianglerighteq=\langle P,J\rangle$, the
preference DFA is constructed in two steps. First, the underlying DFA $\langle
Q,\Sigma,\delta,{q_{0}},F\rangle$ is constructed as a cross product of DFAs
representing the union of languages of all scLTL formulas in $\Phi$. Letting
$\mathcal{A}_{i}=\langle Q_{i},\Sigma,\delta_{i},{q_{0}}_{i},F_{i}\rangle$ to
be the DFA corresponding to $\varphi_{i}$ for all $1\leq i\leq n$, we have
$Q=Q_{1}\times\ldots\times Q_{n}$,
$\delta(q,\sigma)=(\delta_{1}(q_{1},\sigma),\ldots,\delta_{n}(q_{n},\sigma))$,
${q_{0}}=({q_{0}}_{1},\ldots,{q_{0}}_{n})$ and $F=(F_{1}\times
Q_{2}\times\ldots\times Q_{n})\cup(Q_{1}\times F_{2}\times\ldots\times
Q_{n})\cup\ldots\cup(Q_{1}\times Q_{2}\times\ldots\times F_{n})$. By
definition, any word that induces a visit to a final state in the preference
automaton achieves at least one outcome in $\Phi$.
In the second step, we construct the preference graph $G=(\mathcal{X},E)$.
Intuitively, every node of the preference graph represents an equivalence
class of final states such that any two words that visit any final state
represented by the same node are indifferent under $\succeq$. To define the
nodes, we first associate each final state with a set of _tags_ :
1. 1.
A tag $x_{ij}$ is associated with a final state $q=(q_{1},\ldots,q_{n})\in F$
to denote that a word reaching $q$ satisfies a more preferred outcome among
$\varphi_{i}$ and $\varphi_{j}$. Hence, $x_{ij}$ is assigned to $q$ iff the
following conditions hold: (a) $q_{i}\in F_{i}$, (b)
$(\varphi_{i},\varphi_{j})\in P$, (c)
$\varphi_{i}\in\mathsf{MP}(\\{\varphi_{k}\in\Phi\mid q_{k}\in F_{k}\\})$.
2. 2.
A tag $y_{ij}$ is associated with a final state $q=(q_{1},\ldots,q_{n})\in F$ to
denote that a word reaching $q$ satisfies the less preferred outcome among
$\varphi_{i}$ and $\varphi_{j}$. Hence, $y_{ij}$ is assigned to $q$ iff: (a)
$q_{i}\in Q_{i}\setminus F_{i}$, (b) $q_{j}\in F_{j}$, (c)
$(\varphi_{i},\varphi_{j})\in P$, (d)
$\varphi_{j}\in\mathsf{MP}(\\{\varphi_{k}\in\Phi\mid q_{k}\in F_{k}\\})$.
We denote the set of tags associated to a final state $q\in F$ by
$\lambda(q)$. A node $X\in\mathcal{X}$ represents a set of final states that
have the same set of tags. That is, for any $q,q^{\prime}\in X$,
$\lambda(q)=\lambda(q^{\prime})$. We write $\lambda(X):=\lambda(q)$ to denote
the set of tags associated with any final state represented by $X$. An edge
$(X_{2},X_{1})$ in $E$ represents that any final state in $X_{1}$ is strictly
preferred to any final state in $X_{2}$. Thus, $(X_{2},X_{1})$ is included in
$E$ if and only if (1) there exists $1\leq i,j\leq n$ such that
$x_{ij}\in\lambda(X_{1})$ and $y_{ij}\in\lambda(X_{2})$; (2) for all $1\leq
i,j\leq n$ such that $x_{ij}\in\lambda(X_{1})$ and $y_{ij}\in\lambda(X_{2})$
does not hold, $y_{ij}\in\lambda(X_{1})$ and $x_{ij}\in\lambda(X_{2})$ also
does not hold.
An edge $(X_{1},X_{2})\in E$ is intuitively understood as follows. Condition
(1) states that there must exist a pair of scLTL formulas
$\varphi_{i},\varphi_{j}\in\Phi$ such that $(\varphi_{i},\varphi_{j})\in P$,
and any word that visits $X_{2}$ must satisfy $\varphi_{i}$ and any word that
visits $X_{1}$ must satisfy $\neg\varphi_{i}\land\varphi_{j}$. Condition (2)
asserts that the opposite of condition (1) should never hold. That is, there
must not exist a pair of scLTL formulas $\varphi_{i},\varphi_{j}\in\Phi$ such
that $(\varphi_{i},\varphi_{j})\in P$, and any word that visits $X_{1}$
satisfies $\varphi_{i}$ and any word that visits $X_{2}$ satisfies
$\neg\varphi_{i}\land\varphi_{j}$.
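The tag assignment can be written down directly from the two rules above; a sketch (ours, with illustrative names, reusing the MP function from the sketch in Sec. III):

```python
# Tags for a final state q = (q_1, ..., q_n) of the product (rules 1 and 2).
# finals[i] is the accepting set F_i of the DFA for phi_i; P is as before.
def tags(q: tuple, finals: list, P: set) -> frozenset:
    n = len(q)
    sat = {i for i in range(n) if q[i] in finals[i]}   # outcomes achieved
    mp = MP(sat, P)                                    # most-preferred ones
    result = set()
    for (i, j) in P:
        # x_{ij}: q satisfies the more preferred of phi_i, phi_j
        if q[i] in finals[i] and i in mp:
            result.add(('x', i, j))
        # y_{ij}: q satisfies only the less preferred of the two
        if q[i] not in finals[i] and q[j] in finals[j] and j in mp:
            result.add(('y', i, j))
    return frozenset(result)
```

The nodes $\mathcal{X}$ are then the groups of final states sharing the same tag set, and the edges follow conditions (1) and (2).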
###### Example 3.
We describe the construction of the preference DFA for the first preference
objective (PO1). The underlying DFA of the preference DFA for (PO1) is
constructed as the union of the DFAs corresponding to
$\Diamond\,A,\Diamond\,B,\Diamond\,E$, and is shown in Fig. 2. Every state in
the preference DFA is annotated as a tuple $(a_{i},b_{j},e_{k})$ where
$i,j,k=0,1$. A subscript value of $0$ means that the corresponding region has
been visited. Therefore, all states except $(a_{1},b_{1},e_{1})$ are final
states, since at least one of the formulas is satisfied in every state but
$(a_{1},b_{1},e_{1})$.
Figure 2: Preference DFA representing preference objective (PO1).
Figure 3: Preference graph corresponding to the preference DFA in Fig. 2.
Each final state is assigned a set of tags (we use $A,B,E$ in place of
numerical indices). For instance, $\lambda((a_{1},b_{0},e_{0}))=\\{x_{BA},x_{EA}\\}$
since any word that visits this state satisfies $\Diamond\,B$ and
$\Diamond\,E$. This results in $6$ unique tag sets, each corresponding to a
different class of equivalent words in $\Sigma^{\omega}$. These classes form
the nodes $X_{k}$, $k=1,\ldots,6$, of the preference graph shown in Fig. 3. An
edge $(X_{2},X_{5})$ expresses that any word that visits $X_{5}$ is strictly
preferred to any word that visits $X_{2}$ but not $X_{5}$. Similarly, any word
that visits only $X_{4}$ is incomparable to any word that visits only $X_{6}$.
## V Opportunistic Qualitative Planning with Incomplete Preferences
In this section, we define two types of strategies that exploit the
_opportunities_ that arise due to stochasticity with a positive probability or
with probability one, respectively.
###### Definition 9 (Product of an MDP with a Preference DFA).
Given an MDP $M=\langle S,A,P,\mathcal{AP},L\rangle$ and the preference DFA
$\mathcal{B}=\langle Q,\Sigma,\delta,{q_{0}},F,G=(\mathcal{X},E)\rangle$, the
product of MDP with preference DFA is defined as the tuple,
$\mathcal{M}:=\langle V,A,\Delta,\mathcal{F},\mathcal{G}\rangle,$
where $V\coloneqq S\times Q$ is the finite set of states. $A$ is the same set
of actions as $M$. The transition function $\Delta:V\times
A\rightarrow\mathcal{D}(V)$ is defined as follows: for any states
$(s,q),(s^{\prime},q^{\prime})\in V$ and any action $a\in A$,
$\Delta((s^{\prime},q^{\prime})\mid(s,q),a)=P(s^{\prime}\mid s,a)$ if
$q^{\prime}=\delta(q,L(s^{\prime}))$ and $0$ otherwise.
$\mathcal{F}\subseteq V$ is the set of final states by reaching which at least
some outcome is achieved. The component
$\mathcal{G}=(\mathcal{W},\mathcal{E})$ is a graph where
$\mathcal{W}:=\\{S\times X\mid X\in\mathcal{X}\\}$ is the set of nodes and
$\mathcal{E}$ is a set of edges such that, for any $W_{i}=S\times X_{i}$ and
$W_{j}=S\times X_{j}$, $(W_{i},W_{j})\in\mathcal{E}$ if and only if
$(X_{i},X_{j})\in E$.
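The construction is mechanical; a sketch (ours, reusing the LabeledMDP and DFA classes above, with illustrative names):

```python
# Product of a labeled MDP with the underlying DFA of a preference DFA
# (Definition 9). Returns states, transition function, and final states.
def product_mdp(mdp: "LabeledMDP", dfa: "DFA"):
    V = {(s, q) for s in mdp.states for q in dfa.states}
    Delta = {}
    for (s, q) in V:
        for a in mdp.actions:
            succ = {}
            for t, pr in mdp.P.get((s, a), {}).items():
                qp = dfa.delta.get((q, mdp.L[t]), dfa.sink)
                succ[(t, qp)] = succ.get((t, qp), 0.0) + pr
            if succ:
                Delta[((s, q), a)] = succ
    F = {(s, q) for (s, q) in V if q in dfa.final}
    return V, Delta, F
```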
In the product construction, an edge $(W_{i},W_{j})\in\mathcal{E}$ denotes
that any path $\rho\in V^{\ast}$ that reaches $W_{j}$ is strictly preferred to
any path $\rho^{\prime}\in V^{\ast}$ that reaches $W_{i}$ but not $W_{j}$
under the given preference. Thus, we transform the preference over words given
by the preference DFA into a preference over outcomes, each of which reaches a
subset of states in $\mathcal{W}$. For each node $W\in\mathcal{W}$, we can
compute a set of states, denoted $\mathsf{ASWin}(W)$, from which the agent has
a strategy to reach $W$ with probability one, using the solution of almost-
sure winning in MDPs with reachability objective [16]. It is possible that
$v\in\mathsf{ASWin}(W)$ and $v\in\mathsf{ASWin}(W^{\prime})$ where $W\neq
W^{\prime}$ and $(W,W^{\prime})\in\mathcal{E}$. In this case, a
preference-satisfying strategy must visit the preferred node $W^{\prime}$. To
generalize, let $Z\subseteq\mathcal{W}$ be a subset of nodes in the product;
we overload the notation $\mathsf{MP}$ such that $\mathsf{MP}(Z)=\\{W\in
Z\mid\nexists W^{\prime}\in Z,(W,W^{\prime})\in\mathcal{E}\\}$. A
preference-satisfying strategy from $v$ must visit a node in
$\mathsf{MP}(Z_{v})$, where
$Z_{v}=\\{W\in\mathcal{W}\mid v\in\mathsf{ASWin}(W)\\}$.
However, at some states, the uncertainty may create opportunities to
transition from the state $v$ to $v^{\prime}\in V$ such that a more preferred
node can be reached almost-surely from $v^{\prime}$. We call such a transition
an _improvement_.
###### Definition 10 (Improvement).
Given any states $v_{1},v_{2}\in V$, $v_{2}$ is said to be an _improvement_
over $v_{1}$ if and only if there exists a pair of preference nodes
$W_{1}\in\mathsf{MP}(\\{W\in\mathcal{W}\mid v_{1}\in\mathsf{ASWin}(W)\\})$ and
$W_{2}\in\mathsf{MP}(\\{W\in\mathcal{W}\mid v_{2}\in\mathsf{ASWin}(W)\\})$
such that $(W_{1},W_{2})\in\mathcal{E}$.
A transition from state $v\in V$ to $v^{\prime}\in V$ is said to be
_improving_ if $v^{\prime}$ is an improvement over $v$.
###### Definition 11.
A strategy $\pi:V\rightarrow 2^{A}\cup\\{\uparrow\\}$, where $\pi(v)=\uparrow$
means that $\pi$ is undefined at $v$, is said to be _safe and
positively improving (SPI) (resp., safe and almost-surely improving (SASI))_
if the following conditions hold for any state $v\in V$ such that
$\pi(v)\neq\uparrow$: (a) there exists a path (resp., for every path) $\rho$
in $\mathcal{M}_{\pi}$ with $\rho[0]=v$ such that, for some $i\geq 0$,
$\rho[i+1]$ is an improvement over $\rho[i]$; (b) there does not exist a path
$\rho$ in $\mathcal{M}_{\pi}$ with $\rho[0]=v$ such that, for some $i\geq 0$,
$\rho[i]$ is an improvement over $\rho[i+1]$.
Intuitively, the SPI and SASI strategies exploit opportunities by inducing an
improving transition with a positive probability and with probability one,
respectively.
We now define a new model called _an improvement MDP_ that differentiates the
states reached by improving transitions.
###### Definition 12 (Improvement MDP).
Given a product MDP $\mathcal{M}$, an _improvement MDP_ is the tuple,
$\tilde{M}=\langle\tilde{V},A,\tilde{\Delta},\tilde{\mathcal{F}}\rangle,$
where $\tilde{V}=\\{(v,\top),(v,\bot)\mid v\in V\\}$ is the set of states,
$\tilde{\mathcal{F}}=\\{(v,\top)\mid v\in V\\}$ is the set of final states. An
action $a\in A$ is enabled at a state $(v,\cdot)\in\tilde{V}$ if and only if,
for any $v^{\prime}$ such that $\Delta(v,a,v^{\prime})>0$, $v$ is _not_ an
improvement over $v^{\prime}$. The transition function
$\tilde{\Delta}:\tilde{V}\times A\rightarrow\mathcal{D}(\tilde{V})$ is defined
as follows: For any $v\in V$ and any action $a$ _enabled at_ $v$, if
$\Delta(v,a,v^{\prime})>0$ and $v^{\prime}$ is an improvement over $v$, then
let $\tilde{\Delta}((v,\bot),a,(v^{\prime},\top))=\Delta(v,a,v^{\prime})$.
Else, if $\Delta(v,a,v^{\prime})>0$ and $v^{\prime}$ is not an improvement
over $v$, then let
$\tilde{\Delta}((v,\bot),a,(v^{\prime},\bot))=\Delta(v,a,v^{\prime})$ and
$\tilde{\Delta}((v,\top),a,(v^{\prime},\bot))=\Delta(v,a,v^{\prime})$.
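The improvement test of Definition 10 and the construction of Definition 12 can be sketched as follows (ours, with illustrative names; aswin maps each node $W$ to $\mathsf{ASWin}(W)$, and Edges is $\mathcal{E}$):

```python
# Improvement test (Definition 10) and improvement MDP (Definition 12).
# Product states are paired with a flag: True for top, False for bottom.
def mp_nodes(v, nodes, aswin, Edges) -> set:
    Z = {W for W in nodes if v in aswin[W]}
    return {W for W in Z if not any((W, Wp) in Edges for Wp in Z)}

def is_improvement(v1, v2, nodes, aswin, Edges) -> bool:
    """True iff v2 is an improvement over v1."""
    return any((W1, W2) in Edges
               for W1 in mp_nodes(v1, nodes, aswin, Edges)
               for W2 in mp_nodes(v2, nodes, aswin, Edges))

def improvement_mdp(V, Delta, nodes, aswin, Edges):
    tDelta = {}
    for (v, a), succ in Delta.items():
        # disable a if some successor could strictly worsen the outcome
        if any(is_improvement(vp, v, nodes, aswin, Edges) for vp in succ):
            continue
        for vp, pr in succ.items():
            if is_improvement(v, vp, nodes, aswin, Edges):
                tDelta.setdefault(((v, False), a), {})[(vp, True)] = pr
            else:
                tDelta.setdefault(((v, False), a), {})[(vp, False)] = pr
                tDelta.setdefault(((v, True), a), {})[(vp, False)] = pr
    tF = {(v, True) for v in V}
    return tDelta, tF
```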
###### Theorem 1.
The following statements hold for any state $v\in V$ in product MDP.
1. 1.
An SPI strategy at $v$ is a positive winning strategy in improvement MDP at
the state $(v,\bot)$ to visit $\tilde{\mathcal{F}}$.
2. 2.
An SASI strategy at $v$ is an almost-sure winning strategy in improvement MDP
at the state $(v,\bot)$ to visit $\tilde{\mathcal{F}}$.
###### Proof (Sketch).
Statement (1). By construction, any action which induces a transition that
violates condition (b) in Def. 11 with positive probability is disabled in the
improvement MDP. Also, by construction, any final state in $\tilde{\mathcal{F}}$
can only be reached by making an improvement. Hence, a positive winning
strategy in the improvement MDP which visits $\tilde{\mathcal{F}}$ satisfies condition (a)
in Def. 11. The proof of statement (2) is similar to that of statement (1). ∎
The SPI and SASI strategies may exploit multiple opportunities by inducing
sequential improvements: Whenever the agent reaches a state
$(v,\top)\in\tilde{V}$, it checks whether an SPI (or SASI) strategy exists for
$(v,\bot)$. If so, the agent carries out the SPI (or SASI) strategy.
Otherwise, the agent carries out the almost-sure winning strategy for one of
the most-preferred objectives achievable at $v$.
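The resulting execution scheme interleaves improvement strategies with almost-sure winning play. In outline (a sketch, ours: the callables are assumed to be provided by the synthesis steps above, and strategies are represented simply as state-to-action functions):

```python
# Sequential-improvement execution loop (a sketch). spi_or_sasi(v) returns an
# SPI/SASI strategy for (v, False) in the improvement MDP, or None if none
# exists; as_winning(v) returns an action of an almost-sure winning strategy
# for a most-preferred objective achievable at v; step samples the MDP and
# reports whether the sampled transition was improving.
def execute(v0, step, spi_or_sasi, as_winning, horizon=100):
    v = v0
    pi = spi_or_sasi(v)
    for _ in range(horizon):
        action = pi(v) if pi is not None else as_winning(v)
        v, improved = step(v, action)
        if improved:                 # reached some (v, True): re-plan
            pi = spi_or_sasi(v)
    return v
```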
###### Example 4.
Consider the case when robot is at $(2,1)$ with $4$ units of battery and is to
satisfy (PO1). Although the robot cannot almost-surely visit either $B$ or $E$
individually, it can almost-surely visit one of $B,E$ by moving West. Since
visiting both $B$ and $E$ is strictly preferred to visiting $A$, moving West
is a safe and almost-surely improving strategy at $(2,1)$. If the robot
instead starts with $2$ units of battery, it can reach none of $A,B$ or $E$
almost-surely. In this case, the SASI strategy is undefined. The SPI strategy
is to choose West because, with positive probability, it leads to cells
$(1,2),(1,0)$ with $1$ unit of battery remaining. From these states, one of
$B,E$ can be reached almost-surely.
Consider the robot whose objective is (PO2) starting at the cell $(2,1)$ with
$4$ units of battery. From this state, only $A$ can be visited almost-surely.
The SASI strategy at this initial state is to move North because, with positive
probability, the robot would reach one of the cells—$(1,2),(2,2),(2,3)$—with
$3$ units of battery remaining. Since from each of these states at least one
of $B,C,D,F$ can almost-surely be achieved, the robot almost-surely makes an
improvement. Suppose the robot reaches $(2,3)$ with $3$ units of battery. From
this state, only visiting $C$ is almost-surely winning. However, the SASI
strategy is to move North and then East, thereby ensuring a visit to either
$F$ or $D$ with probability one. Hence, we see that a SASI strategy not only
plans for a single improvement but may also induce multiple sequential
improvements.
## VI Conclusion
In this work, we propose a language to specify incomplete preferences as a
pre-order over temporal objectives. This allows us to synthesize qualitative
plans even when some outcomes are incomparable. We define two types of
opportunistic strategies that, whenever possible, sequentially improve the
outcome they can achieve. Our work provides a method for
stochastic planning with incomplete preferences over a subclass of temporal
logic objectives. Building on this work, we consider a number of future
directions: 1) we will consider a preference over temporal objectives that
encompass more general LTL properties such as safety, recurrence, and
liveness; 2) we will build on the qualitative reasoning to study quantitative
planning with such preference specifications. The latter requires us to
jointly consider how well (i.e., with what probability) an objective is
satisfied and how preferred the objective is.
*[MDP]: Markov Decision Process
*[LTL]: Linear Temporal Logic
*[scLTL]: syntactically co-safe LTL
*[DFA]: Deterministic Finite Automaton
# $B\to K_{1}\pi(K)$ decays in the perturbative QCD approach
Zhi-Qing Zhang1, Zhi-Wei Hou1, Yueling Yang2, Junfeng Sun2
1 Department of Physics, Henan University of Technology,
Zhengzhou, Henan 450001, China;
2 College of Physics and Information Engineering,
Henan Normal University, Xinxiang 453007, China
###### Abstract
Within the framework of the perturbative QCD approach, we study the two-body
charmless decays $B\to K_{1}(1270)(K_{1}(1400))\pi(K)$. We find the following
results: (i) The decays $\bar{B}^{0}\to
K_{1}(1270)^{+}\pi^{-},K_{1}(1400)^{+}\pi^{-}$ are incompatible with the
present experimental data. A similar situation exists for the decays
$\bar{B}^{0}\to a_{1}(1260)^{+}K^{-},b_{1}(1235)^{+}K^{-}$, for which it is
usually considered that nonperturbative contributions are needed to explain
the data. But the difference is that the nonperturbative contributions seem to
play opposite roles in these two groups of decays. (ii) The pure annihilation
type decays $\bar{B}^{0}\to K_{1}^{\pm}(1270)K^{\mp},K_{1}^{\pm}(1400)K^{\mp}$
are good channels for testing whether an approach can correctly calculate the
strength of the penguin-annihilation amplitudes. Their branching ratios are
predicted to be of order $10^{-7}$, larger than the QCDF results. (iii) The
dependence of the direct CP-violating asymmetries of these decays on the
mixing angle $\theta_{K_{1}}$ is also considered.
###### pacs:
13.25.Hw, 12.38.Bx, 14.40.Nd
## I Introduction
In general, the mesons are classified in $J^{PC}$ multiplets. There are two
types of orbitally excited axial-vector mesons, namely $1^{++}$ and $1^{+-}$.
The former includes $a_{1}(1260),f_{1}(1285),f_{1}(1420)$ and $K_{1A}$, which
compose the ${}^{3}P_{1}$-nonet, and the latter includes
$b_{1}(1235),h_{1}(1170),h_{1}(1380)$ and $K_{1B}$, which compose the
${}^{1}P_{1}$-nonet. Except for $a_{1}(1260)$ and $b_{1}(1235)$, the other
axial-vector mesons suffer from a mixing problem, which makes their inner
structure more ambiguous; for example, $K_{1A}$ and $K_{1B}$ can mix with each
other and form the two physical mass eigenstates $K_{1}(1270),K_{1}(1400)$.
Various values of the mixing angle $\theta_{K_{1}}$ can be found in the
literature,
which will be examined in more detail in Sec.III. For the mixings of the
SU(3)-singlet and SU(3)-octet mesons, specifically, the
$f_{1}(1285)-f_{1}(1420)$ mixing angle $\theta_{{}^{3}P_{1}}$ and the
$h_{1}(1170)-h_{1}(1380)$ mixing angle $\theta_{{}^{1}P_{1}}$, several values
also exist in phenomenological analyses. Certainly, these two angles can be
related to $\theta_{K_{1}}$ through the Gell-Mann-Okubo mass formula. Owing to
the lack of sufficient experimental data, none of them has been accurately
determined up to now. So the decays involving these mesons are more ambiguous
than the decays involving the $a_{1}(1260)$ and/or $b_{1}(1235)$ meson(s),
which have been discussed in the previous works wwang
; zqzhang1 ; zqzhang2 ; cmv ; vnp ; cy .
In this paper, we would like to discuss the decays $B\to
K_{1}(1270)\pi(K),K_{1}(1400)\pi(K)$. On the theoretical side, many approaches
have been used to study these decays, such as the naive factorization cmv ,
the generalized factorization vnp , and the QCD factorization approach cy .
From the predictions of these approaches, one can find that the branching
ratios of the decays $B\to K_{1}(1270)\pi,K_{1}(1400)\pi$ are of order
$10^{-6}$, for example, $Br(B^{0}\to K_{1}(1270)^{+}\pi^{-})=(3\sim 8)\times
10^{-6}$ and $Br(B^{0}\to K_{1}(1400)^{+}\pi^{-})=(2\sim 5)\times 10^{-6}$,
while those of almost all the decays $B\to K_{1}(1270)K,K_{1}(1400)K$ are of
order $10^{-8}\sim 10^{-7}$. On the experimental side, large upper limits
are given for the decays $B^{0}\to K_{1}(1400)^{+}\pi^{-}$ and $B^{+}\to
K_{1}(1400)^{0}\pi^{+}$ at the $90\%$ confidence level (C.L.) of $1.1\times
10^{-3}$ and $2.6\times 10^{-3}$, respectively argus , and the Heavy Flavor
Averaging Group (HFAG) gives the following results hfag :
$\displaystyle Br(B^{+}\to K_{1}(1270)^{0}\pi^{+})$ $\displaystyle<$
$\displaystyle 40\times 10^{-6},Br(B^{+}\to K_{1}(1400)^{0}\pi^{+})<39\times
10^{-6},$ (1) $\displaystyle Br(B^{0}\to K_{1}(1270)^{+}\pi^{-})$
$\displaystyle=$ $\displaystyle(17^{+8}_{-11})\times 10^{-6},Br(B^{0}\to
K_{1}(1400)^{+}\pi^{-})=(17^{+7}_{-9})\times 10^{-6}.\;\;\;\;\;$ (2)
The preliminary data are given by BABAR barbar1 ,
$\displaystyle BR(B^{0}\to K_{1}^{+}(1270)\pi^{-})$ $\displaystyle=$
$\displaystyle(12.0\pm 3.1^{+9.3}_{-4.5})\times 10^{-6},$ (3) $\displaystyle
BR(B^{0}\to K_{1}^{+}(1400)\pi^{-})$ $\displaystyle=$ $\displaystyle(16.7\pm
2.6^{+3.5}_{-5.0})\times 10^{-6}.$ (4)
Furthermore, BABAR has also measured the branching ratios $Br(B^{0}\to
K_{1}(1270)^{+}\pi^{-}+K_{1}(1400)^{+}\pi^{-})=3.1^{+0.8}_{-0.7}\times
10^{-5}$ and $Br(B^{+}\to
K_{1}(1270)^{0}\pi^{+}+K_{1}(1400)^{0}\pi^{+})=2.9^{+2.9}_{-1.7}\times
10^{-5}$ with $7.5\sigma$ and $3.2\sigma$ significance, respectively. In the
paper barbar2 , the two-sided intervals for some of the decays $B\to
K_{1}(1270)\pi,K_{1}(1400)\pi$ are evaluated at $68\%$ probability ($\times
10^{-5}$):
$\displaystyle BR(B^{-}\to\bar{K}_{1}(1270)^{0}\pi^{-})$ $\displaystyle\in$
$\displaystyle[0.0,2.1],BR(B^{-}\to\bar{K}_{1}(1400)^{0}\pi^{-})\in[0.0,2.5],$
(5) $\displaystyle BR(B^{0}\to K_{1}(1270)^{+}\pi^{-})$ $\displaystyle\in$
$\displaystyle[0.6,2.5],BR(B^{0}\to K_{1}(1400)^{+}\pi^{-})\in[0.8,2.4].$ (6)
In view of the differences between the theories and experiments, we are going
to use the PQCD approach to explore these decays and analyze whether the
nonperturbative contributions are necessary to explain the experimental data.
In the following, $K_{1}(1270)$ and $K_{1}(1400)$ are denoted as $K_{1}$ in
some places for convenience. The layout of this paper is as follows. In
Sec.II, the decay constants and the light-cone distribution amplitudes of the
relevant mesons are introduced. In Sec.III, we then analyze these decay
channels by using the PQCD approach. The numerical results and the discussions
are given in Sec. IV. The conclusions are presented in the final part.
## II Decay constants and distribution amplitudes
For the wave function of the heavy $B$ meson, we take
$\displaystyle\Phi_{B}(x,b)=\frac{1}{\sqrt{2N_{c}}}(\not{P}_{B}+m_{B})\gamma_{5}\phi_{B}(x,b).$
(7)
Here only the contribution of Lorentz structure $\phi_{B}(x,b)$ is taken into
account, since the contribution of the second Lorentz structure
$\bar{\phi}_{B}$ is numerically small cdlu and has been neglected. For the
distribution amplitude $\phi_{B}(x,b)$ in Eq.(7), we adopt the following
model:
$\displaystyle\phi_{B}(x,b)=N_{B}x^{2}(1-x)^{2}\exp[-\frac{M^{2}_{B}x^{2}}{2\omega^{2}_{b}}-\frac{1}{2}(\omega_{b}b)^{2}],$
(8)
where $\omega_{b}$ is a free parameter; we take $\omega_{b}=0.4\pm 0.04$ GeV
in the numerical calculations, and $N_{B}=101.4$ is the normalization factor
for $\omega_{b}=0.4$ GeV.
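As a quick cross-check of Eq.(8), the short script below (an illustration, not the authors' code) determines $N_{B}$ from the normalization condition $\int_{0}^{1}\phi_{B}(x,b=0)\,dx=f_{B}/(2\sqrt{2N_{c}})$, a convention commonly used in PQCD and assumed here; with $f_{B}=0.21$ GeV (see Sec.IV) and $\omega_{b}=0.4$ GeV it indeed returns $N_{B}\approx 101.4$:

```python
import numpy as np
from scipy.integrate import quad

M_B, f_B, omega_b, N_c = 5.28, 0.21, 0.40, 3   # GeV, GeV, GeV, colors

def phi_B_shape(x):
    # x-dependence of the model Eq.(8) at impact parameter b = 0
    return x**2 * (1 - x)**2 * np.exp(-M_B**2 * x**2 / (2 * omega_b**2))

integral, _ = quad(phi_B_shape, 0.0, 1.0)
N_B = f_B / (2 * np.sqrt(2 * N_c)) / integral
print(f"N_B = {N_B:.1f}")   # -> about 101.4
```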
The distribution amplitudes of the axial-vector $K_{1}$ are written as:
$\displaystyle\langle K_{1}(P,\epsilon^{*}_{L})|\bar{q}_{2\beta}(z)q_{1\alpha}(0)|0\rangle$ $\displaystyle=$ $\displaystyle\frac{i\gamma_{5}}{\sqrt{2N_{c}}}\int^{1}_{0}dx\;e^{ixp\cdot z}[m_{K_{1}}\not{\epsilon}^{*}_{L}\phi_{K_{1}}(x)+\not{\epsilon}^{*}_{L}\not{P}\phi_{K_{1}}^{t}(x)+m_{K_{1}}\phi^{s}_{K_{1}}(x)]_{\alpha\beta},$
$\displaystyle\langle K_{1}(P,\epsilon^{*}_{T})|\bar{q}_{2\beta}(z)q_{1\alpha}(0)|0\rangle$ $\displaystyle=$ $\displaystyle\frac{i\gamma_{5}}{\sqrt{2N_{c}}}\int^{1}_{0}dx\;e^{ixp\cdot z}\left[m_{K_{1}}\not{\epsilon}^{*}_{T}\phi^{v}_{K_{1}}(x)+\not{\epsilon}^{*}_{T}\not{P}\phi_{K_{1}}(x)\right.$ (9)
$\displaystyle\left.+m_{K_{1}}i\epsilon_{\mu\nu\rho\sigma}\gamma_{5}\gamma^{\mu}\epsilon^{*\nu}_{T}n^{\rho}v^{\sigma}\phi^{a}_{K_{1}}(x)\right]_{\alpha\beta},$
where $K_{1}$ refers to the two flavor states $K_{1A}$ and $K_{1B}$, and the
corresponding distribution functions can be calculated by using light-cone QCD
sum rules and are listed as follows:
$\displaystyle\begin{cases}\phi_{K_{1}}(x)=\frac{f_{K_{1}}}{2\sqrt{2N_{c}}}\phi_{\parallel}(x),\phi^{T}_{K_{1}}(x)=\frac{f_{K_{1}}}{2\sqrt{2N_{c}}}\phi_{\perp}(x),\\\
\phi^{t}_{K_{1}}(x)=\frac{f_{K_{1}}}{2\sqrt{2N_{c}}}h^{(t)}_{\parallel}(x),\phi^{s}_{K_{1}}(x)=\frac{f_{K_{1}}}{2\sqrt{4N_{c}}}\frac{d}{dx}h^{(s)}_{\parallel}(x),\\\
\phi^{v}_{K_{1}}(x)=\frac{f_{K_{1}}}{2\sqrt{2N_{c}}}g^{(v)}_{\perp}(x),\phi^{a}_{K_{1}}(x)=\frac{f_{K_{1}}}{8\sqrt{2N_{c}}}\frac{d}{dx}g^{(a)}_{\perp}(x).\end{cases}$
(10)
Here we use $f_{K_{1}}$ to represent both the longitudinally and transversely
polarized states $K_{1A}(K_{1B})$ by assuming
$f^{T}_{K_{1A}}=f_{K_{1A}}=f_{K_{1}}$ for $K_{1A}$ and
$f_{K_{1B}}=f^{T}_{K_{1B}}=f_{K_{1}}$ for $K_{1B}$, respectively. It is
similar for the case of $a_{1}(b_{1})$ states, and the difference is that here
$K_{1A}$ and $K_{1B}$ are not the mass eigenstates. In Eq.(10), the twist-2
distribution functions are in the first line and can be expanded as:
$\displaystyle\phi_{\parallel,\perp}$ $\displaystyle=$ $\displaystyle
6x(1-x)\left[a^{\parallel,\perp}_{0}+3a^{\parallel,\perp}_{1}t+a^{\parallel,\perp}_{2}\frac{3}{2}(5t^{2}-1)\right],$
(11)
and the twist-3 light-cone distribution amplitudes (LCDAs) take the following
forms for the $K_{1A}$ and $K_{1B}$ states:
$\displaystyle h^{(t)}_{\parallel}(x)$ $\displaystyle=$ $\displaystyle
3a^{\perp}_{0}t^{2}+\frac{3}{2}a^{\perp}_{1}t(3t^{2}-1),h^{(s)}_{\parallel}(x)=6x(1-x)(a^{\perp}_{0}+a^{\perp}_{1}t),$
$\displaystyle g^{(a)}_{\perp}(x)$ $\displaystyle=$ $\displaystyle
6x(1-x)(a^{\parallel}_{0}+a^{\parallel}_{1}t),g^{(v)}_{\perp}(x)=\frac{3}{4}a^{\parallel}_{0}(1+t^{2})+\frac{3}{2}a^{\parallel}_{1}t^{3},$
(12)
where $t=2x-1$ and the Gegenbauer moments cheng
$a^{\perp}_{0}(K_{1A})=0.26^{+0.03}_{-0.22},a^{\parallel}_{0}(K_{1B})=-0.15\pm
0.15,a^{\parallel}_{0}(K_{1A})=a^{\perp}_{0}(K_{1B})=1$,
$a^{\perp}_{1}(K_{1A})=-1.08\pm
0.48,a^{\perp}_{1}(K_{1B})=0.30^{+0.00}_{-0.31}$,
$a^{\parallel}_{1}(K_{1A})=-0.30^{+0.26}_{-0.00}$ ,
$a^{\parallel}_{1}(K_{1B})=-1.95\pm 0.45$, $a^{\parallel}_{2}(K_{1A})=-0.05\pm
0.03,a^{\parallel}_{2}(K_{1B})=0.09^{+0.16}_{-0.18}$.
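For concreteness, the twist-2 and twist-3 shapes of Eqs.(11) and (12) can be tabulated directly from the central values of the Gegenbauer moments quoted above; a minimal sketch (the dictionary layout is an illustrative choice):

```python
import numpy as np

# central values of the Gegenbauer moments quoted above
a0_perp = {"K1A": 0.26,  "K1B": 1.0}
a0_par  = {"K1A": 1.0,   "K1B": -0.15}
a1_perp = {"K1A": -1.08, "K1B": 0.30}
a1_par  = {"K1A": -0.30, "K1B": -1.95}
a2_par  = {"K1A": -0.05, "K1B": 0.09}

def phi_parallel(x, m):                       # twist-2, Eq.(11)
    t = 2 * x - 1
    return 6 * x * (1 - x) * (a0_par[m] + 3 * a1_par[m] * t
                              + a2_par[m] * 1.5 * (5 * t**2 - 1))

def h_parallel_s(x, m):                       # twist-3, Eq.(12)
    t = 2 * x - 1
    return 6 * x * (1 - x) * (a0_perp[m] + a1_perp[m] * t)

xs = np.linspace(0.0, 1.0, 5)
print(phi_parallel(xs, "K1A"), h_parallel_s(xs, "K1B"))
```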
The wave functions for the pseudoscalar (P) mesons $K,\pi$ are given as:
$\displaystyle\Phi_{K(\pi)}(P,x,\zeta)\equiv\frac{1}{\sqrt{2N_{C}}}\gamma_{5}\left[\not{P}\phi^{A}_{K(\pi)}(x)+m_{0}\phi^{P}_{K(\pi)}(x)+\zeta m_{0}(\not{v}\not{n}-v\cdot n)\phi^{T}_{K(\pi)}(x)\right],$ (13)
where the parameter $\zeta$ is either $+1$ or $-1$ depending on the assignment
of the momentum fraction $x$. The chiral scale parameter $m_{0}$ is defined as
$m_{0}=\frac{m^{2}_{\pi}}{m_{u}+m_{d}}$ for $\pi$ meson and
$m_{0}=\frac{m^{2}_{K}}{m_{u}+m_{s}}$ for $K$ meson. The distribution
amplitudes are expanded as:
$\displaystyle\phi^{A}_{K(\pi)}(x)$ $\displaystyle=$
$\displaystyle\frac{3f_{K(\pi)}}{\sqrt{6}}x(1-x)\left[1+a_{1K(\pi)}C^{3/2}_{1}(t)+a_{2K(\pi)}C^{3/2}_{2}(t)+a_{4K(\pi)}C^{3/2}_{4}(t)\right],$
(14) $\displaystyle\phi^{P}_{K(\pi)}(x)$ $\displaystyle=$
$\displaystyle\frac{3f_{K(\pi)}}{2\sqrt{6}}\left[1+\left(30\eta_{3}-\frac{5\rho^{2}_{K(\pi)}}{2}\right)C^{1/2}_{2}(t)\right.$
(15)
$\displaystyle\left.-3\left(\eta_{3}\omega_{3}+\frac{9\rho^{2}_{K(\pi)}}{20}(1+6a_{2K(\pi)})\right)C^{1/2}_{4}(t)\right],$
$\displaystyle\phi^{T}_{K(\pi)}(x)$ $\displaystyle=$
$\displaystyle\frac{-f_{K(\pi)}t}{2\sqrt{6}}\left[1+6(5\eta_{3}-\frac{\eta_{3}\omega_{3}}{2}-\frac{7\rho^{2}_{K(\pi)}}{20}-\frac{3\rho^{2}_{K(\pi)}a_{2K(\pi)}}{5})(1-10x+10x^{2})\right],\;\;\;\;\;$
(16)
where the decay constants $f_{K}=0.16$ GeV, $f_{\pi}=0.13$ GeV and the
Gegenbauer moments, Gegenbauer polynomials are defined as:
$\displaystyle a_{1K}$ $\displaystyle=$ $\displaystyle 0.17\pm
0.17,a_{1\pi}=0,a_{2K}=a_{2\pi}=0.115\pm 0.115,a_{4K}=a_{4\pi}=-0.015,$
$\displaystyle C^{3/2}_{1}(t)$ $\displaystyle=$ $\displaystyle
3t,C^{3/2}_{2}(t)=\frac{3}{2}(5t^{2}-1),C^{3/2}_{4}(t)=\frac{15}{8}(1-14t^{2}+21t^{4}),$
$\displaystyle C^{1/2}_{2}(t)$ $\displaystyle=$
$\displaystyle\frac{1}{2}(3t^{2}-1),C^{1/2}_{4}(t)=\frac{1}{8}(3-30t^{2}+35t^{4}),$
(17)
and the constants $\eta_{3}=0.015,\omega_{3}=-3$, the mass ratio
$\rho_{K(\pi)}=m_{K(\pi)}/m_{0K(\pi)}$ with $m_{K}=0.49$ GeV, $m_{0K}=1.7$
GeV, $m_{\pi}=0.135$ GeV, $m_{0\pi}=1.4$ GeV.
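The twist-2 amplitude of Eq.(14) is likewise straightforward to evaluate numerically; the sketch below (illustrative) uses SciPy's eval_gegenbauer for the $C^{3/2}_{n}(t)$ polynomials of Eq.(17) together with the central moments quoted above:

```python
import numpy as np
from scipy.special import eval_gegenbauer   # C_n^{alpha}(t)

f_K, a1K, a2K, a4K = 0.16, 0.17, 0.115, -0.015   # GeV; central values above

def phi_A(x, f=f_K, a1=a1K, a2=a2K, a4=a4K):
    t = 2 * x - 1
    series = (1 + a1 * eval_gegenbauer(1, 1.5, t)
                + a2 * eval_gegenbauer(2, 1.5, t)
                + a4 * eval_gegenbauer(4, 1.5, t))
    return 3 * f / np.sqrt(6) * x * (1 - x) * series

print(phi_A(0.3))            # kaon; pass f=0.13, a1=0 for the pion
```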
## III The perturbative QCD calculation
The PQCD approach is an effective theory to handle hadronic $B$ decays cdlu2 ;
keum ; mishima . Because it takes into account the transverse momentum of the
valence quarks in the hadrons, one encounters double logarithmic divergences
when the soft and the collinear momenta overlap. Fortunately, these large
double logarithms can be resummed into the Sudakov factor hnli0 . There also
exists another type of double logarithm, which arises from the loop
corrections to the weak decay vertex. These double logarithms can also be
resummed, resulting in the threshold factor hnli00 . This factor decreases
faster than any power of the momentum fraction in the threshold region, which
removes the endpoint singularity. It is often parameterized in a simple form
that is independent of channels, twists, and flavors hnli . Certainly, when
the higher order diagrams suffer only from soft or collinear infrared
divergences, these are easy to cure by using the eikonal approximation hnli2 .
Controlling these kinds of divergences in a reasonable way makes the PQCD
approach more self-consistent.
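The channel-independent parametrization mentioned above is commonly written in the PQCD literature as $S_{t}(x)=\frac{2^{1+2c}\Gamma(3/2+c)}{\sqrt{\pi}\Gamma(1+c)}[x(1-x)]^{c}$; a small sketch, with $c=0.3$ taken as a typical choice since the text does not quote the value:

```python
import math

def S_t(x, c=0.3):
    """Threshold (jet) factor: vanishes at the endpoints x -> 0, 1 and
    integrates to one over [0, 1] by construction of the prefactor."""
    norm = (2**(1 + 2 * c) * math.gamma(1.5 + c)
            / (math.sqrt(math.pi) * math.gamma(1 + c)))
    return norm * (x * (1 - x))**c

print(S_t(0.5), S_t(1e-4))   # enhanced at the center, damped at the endpoint
```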
Figure 1: Diagrams contributing to the decay
$\bar{B}^{0}\to\bar{K}^{0}_{1A}\pi^{0}$.
For these two axial-vector mesons, the mass eigenstates and flavor eigenstates
are not the same, and the former can be obtained from the latter through a
mixing angle $\theta_{K_{1}}$:
$\displaystyle
K_{1}(1270)=K_{1A}\sin\theta_{K_{1}}+K_{1B}\cos\theta_{K_{1}},K_{1}(1400)=K_{1A}\cos\theta_{K_{1}}-K_{1B}\sin\theta_{K_{1}}.$
(18)
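For orientation, Eq.(18) is an orthogonal rotation that is its own inverse, so the flavor states follow from the physical ones with the same angle; assuming the physical masses diagonalize the mass-squared matrix, one also obtains the relation shown on the right (an immediate consequence of the rotation, written out here for illustration):
$\displaystyle\begin{pmatrix}K_{1}(1270)\\ K_{1}(1400)\end{pmatrix}=\begin{pmatrix}\sin\theta_{K_{1}}&\cos\theta_{K_{1}}\\ \cos\theta_{K_{1}}&-\sin\theta_{K_{1}}\end{pmatrix}\begin{pmatrix}K_{1A}\\ K_{1B}\end{pmatrix},\qquad m^{2}_{K_{1A}}=\sin^{2}\theta_{K_{1}}\,m^{2}_{K_{1}(1270)}+\cos^{2}\theta_{K_{1}}\,m^{2}_{K_{1}(1400)}.$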
Unfortunately, there are many uncertainties about this mixing angle. Various
phenomenological analyses and experimental data on the masses of these two
physical states indicate that this mixing angle is around either $33^{\circ}$
or $58^{\circ}$ rkc ; iw ; dma ; su ; bg ; pvc ; gi ; vfv ; tky ;
div . Certainly, the author of cheng1 stresses that the sign of
$\theta_{K_{1}}$ depends on the relative sign of flavor states $K_{1A}$ and
$K_{1B}$, which can be determined by fixing the relative sign of the decay
constants of $K_{1A}$ and $K_{1B}$. If the decay constants $f_{1A},f_{1B}$
have the same sign (which means that the transitions $B\to K_{1A}$ and $B\to
K_{1B}$ have opposite signs), then the mixing angle $\theta_{K_{1}}$ defined
in (18) is positive. It is noticed that the mixing angle for the antiparticle
states $\bar{K}_{1}(1270),\bar{K}_{1}(1400)$, which is denoted as
$\theta_{\bar{K}_{1}}$, is of opposite sign to that for the particle states
$K_{1}(1270),K_{1}(1400)$. But even so, whether $\theta_{K_{1}}$ is larger or
smaller than $45^{\circ}$ has not been confirmed up to now. Different
approaches and models are used, and different values of the mixing angle are
obtained. In order to pin it down, Cheng cheng1 advocates determining the
mixing angles $\theta_{{}^{3}P_{1}}$ and $\theta_{{}^{1}P_{1}}$ between
$f_{1}(1285)-f_{1}(1420)$ and $h_{1}(1170)-h_{1}(1380)$, respectively, which
in turn depend on the $K_{1A}-K_{1B}$ mixing angle $\theta_{K_{1}}$ through
the mass relation. Through analyzing the present data of the $h_{1},f_{1}$
mesons’ strong/radiative decay modes, the author prefers $\theta_{K_{1}}\sim
33^{\circ}$ over $58^{\circ}$. In view of the present limited data, we will
still include the mixing angle $\theta_{K_{1}}\sim 58^{\circ}$ in our
calculations.
It is precisely this ambiguous mixing angle that makes the study very
difficult. Here we take the decay $\bar{B}^{0}\to\bar{K}_{1}(1270)^{0}\pi^{0}$
as an example, which is contributed by the decays
$\bar{B}^{0}\to\bar{K}^{0}_{1A}\pi^{0}$ and
$\bar{B}^{0}\to\bar{K}^{0}_{1B}\pi^{0}$. Figure 1 shows the Feynman diagrams
of the decay $\bar{B}^{0}\to\bar{K}^{0}_{1A}\pi^{0}$ (the case of the decay
$\bar{B}^{0}\to\bar{K}^{0}_{1B}\pi^{0}$ is similar), through which the amplitudes
can be calculated directly, and the total amplitudes of the decay
$\bar{B}^{0}\to\bar{K}_{1}(1270)^{0}\pi^{0}$ can be obtained by combining the
two sets of flavor state amplitudes according to Eq.(18):
$\displaystyle\sqrt{2}A(\bar{K}_{1}(1270)^{0}\pi^{0})$ $\displaystyle=$
$\displaystyle-\xi_{t}(f_{K_{1A}}\sin\theta_{K_{1}}+f_{K_{1B}}\cos\theta_{K_{1}})F^{LL}_{e\pi}(a_{4}-\frac{1}{2}a_{10})$
(19)
$\displaystyle-\xi_{t}(M^{LL;K_{1A}}_{e\pi}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{e\pi}\cos\theta_{K_{1}})(C_{3}-\frac{1}{2}C_{9})$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{e\pi}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{e\pi}\cos\theta_{K_{1}})(C_{5}-\frac{1}{2}C_{7})$
$\displaystyle-\xi_{t}(M^{LL;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(C_{3}-\frac{1}{2}C_{9})$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(C_{5}-\frac{1}{2}C_{7})$
$\displaystyle-\xi_{t}f_{B}(F^{LL;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(a_{4}-\frac{1}{2}a_{10})$
$\displaystyle-\xi_{t}f_{B}(F^{SP;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+F^{SP;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(a_{6}-\frac{1}{2}a_{8})$
$\displaystyle+f_{\pi}(F^{LL}_{eK_{1A}}\sin\theta_{K_{1}}+F^{LL}_{eK_{1B}}\cos\theta_{K_{1}})\left[\xi_{u}a_{1}-\xi_{t}\left(\frac{3C_{9}}{2}+\frac{C_{10}}{2}\right.\right.$
$\displaystyle\left.\left.-\frac{3C_{7}}{2}-\frac{C_{8}}{2}\right)\right]+(M^{LL;\pi}_{eK_{1A}}\sin\theta_{K_{1}}+M^{LL;\pi}_{eK_{1B}}\cos\theta_{K_{1}})\left[\xi_{u}C_{2}\right.$
$\displaystyle\left.-\xi_{t}\frac{3C_{10}}{2}\right]-\xi_{t}(M^{SP;\pi}_{eK_{1A}}\sin\theta_{K_{1}}+M^{SP;\pi}_{eK_{1B}}\cos\theta_{K_{1}})\frac{3C_{8}}{2},$
where $\xi_{u}=V_{ub}V^{*}_{us},\xi_{t}=V_{tb}V^{*}_{ts}$, and
$F^{M_{2}}_{e(a)M_{1}}$ and $M^{M_{2}}_{e(a)M_{1}}$ denote the amplitudes of
the factorizable and nonfactorizable emission (annihilation) diagrams, in
which the subscript meson $M_{1}$ is involved in the $\bar{B}^{0}$ meson
transition and the superscript meson $M_{2}$ is the emitted particle. The
other superscript in
each amplitude denotes different current operators, $(V-A)(V-A),(V-A)(V+A)$
and $(S-P)(S+P)$ corresponding to $LL,LR$ and $SP$, respectively. If we
exchange the positions of $K_{1A}$ and $\pi^{0}$ in Fig.1(a), 1(b), 1(c) and
1(d), we obtain new Feynman diagrams, which also contribute to the decay
$\bar{B}^{0}\to\bar{K}^{0}_{1A}\pi^{0}$; the corresponding amplitudes are
given in the last three lines of Eq.(19). The amplitudes for the decay
$\bar{B}^{0}\to\bar{K}^{0}_{1A}(\bar{K}^{0}_{1B})\pi^{0}$ can be obtained from
those for the decay $B\to K\pi$, which can be found in ali , by only replacing
the variables of the $K$ meson with those of the $K^{0}_{1A}(K^{0}_{1B})$
meson. So we do not list the analytic expressions for these amplitudes.
Certainly, it is noticed that if the axial-vector meson $K_{1A}(K_{1B})$ is in
the emitted position in the factorizable emission diagrams, there is no scalar
or pseudoscalar current contribution. The total amplitudes for the other three
$B\to K_{1}(1270)\pi$ decay modes can also be written out similarly:
$\displaystyle A(K_{1}(1270)^{-}\pi^{+})$ $\displaystyle=$
$\displaystyle(f_{K_{1A}}\sin\theta_{K_{1}}+f_{K_{1B}}\cos\theta_{K_{1}})F^{LL}_{e\pi}(\xi_{u}a_{1}-\xi_{t}(a_{4}+a_{10}))$
(20)
$\displaystyle+(M^{LL;K_{1A}}_{e\pi}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{e\pi}\cos\theta_{K_{1}})(\xi_{u}C_{1}-\xi_{t}(C_{3}+C_{9}))$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{e\pi}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{e\pi}\cos\theta_{K_{1}})(C_{5}+C_{7})$
$\displaystyle-\xi_{t}(M^{LL;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(C_{3}-\frac{1}{2}C_{9})$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(C_{5}-\frac{1}{2}C_{7})$
$\displaystyle-\xi_{t}f_{B}(F^{LL;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(a_{4}-\frac{1}{2}a_{10})$
$\displaystyle-\xi_{t}f_{B}(F^{SP;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+F^{SP;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(a_{6}-\frac{1}{2}a_{8}),$
$\displaystyle\sqrt{2}A(K_{1}(1270)^{-}\pi^{0})$ $\displaystyle=$
$\displaystyle(f_{K_{1A}}\sin\theta_{K_{1}}+f_{K_{1B}}\cos\theta_{K_{1}})F^{LL}_{e\pi}\left[\xi_{u}a_{1}-\xi_{t}(a_{4}+a_{10})\right]$
(21)
$\displaystyle+(M^{LL;K_{1A}}_{e\pi}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{e\pi}\cos\theta_{K_{1}})\left[\xi_{u}C_{1}-\xi_{t}(C_{3}+C_{9})\right]$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{e\pi}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{e\pi}\cos\theta_{K_{1}})(C_{5}+C_{7})$
$\displaystyle+(M^{LL;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{a\pi}\cos\theta_{K_{1}})\left[\xi_{u}C_{1}-\xi_{t}(C_{3}+C_{9})\right]$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(C_{5}+C_{7})$
$\displaystyle+f_{B}(F^{LL;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{a\pi}\cos\theta_{K_{1}})\left[\xi_{u}a_{2}-\xi_{t}(a_{4}+a_{10})\right]$
$\displaystyle-
f_{B}(F^{SP;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+F^{SP;K_{1B}}_{a\pi}\cos\theta_{K_{1}})\xi_{t}(a_{6}+a_{8})$
$\displaystyle+f_{\pi}(F^{LL}_{eK_{1A}}\sin\theta_{K_{1}}+F^{LL}_{eK_{1B}}\cos\theta_{K_{1}})\left[\xi_{u}a_{1}-\xi_{t}\left(\frac{3C_{9}}{2}+\frac{C_{10}}{2}\right.\right.$
$\displaystyle\left.\left.-\frac{3C_{7}}{2}-\frac{C_{8}}{2}\right)\right]+(M^{LL;\pi}_{eK_{1A}}\sin\theta_{K_{1}}+M^{LL;\pi}_{eK_{1B}}\cos\theta_{K_{1}})\left[\xi_{u}C_{2}\right.$
$\displaystyle\left.-\xi_{t}\frac{3C_{10}}{2}\right]-\xi_{t}(M^{SP;\pi}_{eK_{1A}}\sin\theta_{K_{1}}+M^{SP;\pi}_{eK_{1B}}\cos\theta_{K_{1}})\frac{3C_{8}}{2},$
$\displaystyle A(\bar{K}_{1}(1270)^{0}\pi^{-})$ $\displaystyle=$
$\displaystyle-\xi_{t}(f_{K_{1A}}\sin\theta_{K_{1}}+f_{K_{1B}}\cos\theta_{K_{1}})F^{LL}_{e\pi}(a_{4}-\frac{1}{2}a_{10})$
(22)
$\displaystyle-\xi_{t}(M^{LL;K_{1A}}_{e\pi}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{e\pi}\cos\theta_{K_{1}})(C_{3}-\frac{1}{2}C_{9})$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{e\pi}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{e\pi}\cos\theta_{K_{1}})(C_{5}-\frac{1}{2}C_{7})$
$\displaystyle+(M^{LL;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{a\pi}\cos\theta_{K_{1}})\left[\xi_{u}C_{1}-\xi_{t}(C_{3}+C_{9})\right]$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(C_{5}+C_{7})$
$\displaystyle+f_{B}(F^{LL;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{a\pi}\cos\theta_{K_{1}})\left[\xi_{u}a_{2}-\xi_{t}(a_{4}+a_{10})\right]$
$\displaystyle-\xi_{t}f_{B}(F^{SP;K_{1A}}_{a\pi}\sin\theta_{K_{1}}+F^{SP;K_{1B}}_{a\pi}\cos\theta_{K_{1}})(a_{6}+a_{8}).$
It is easy to get the total amplitudes for the decay modes including
$\bar{K}_{1}(1400)^{0}$/$K_{1}(1400)^{-}$ by making the replacements with
$\sin\theta_{K_{1}}\to\cos\theta_{K_{1}},\cos\theta_{K_{1}}\to-\sin\theta_{K_{1}}$
in Eqs.(19-22), respectively. The total amplitudes for each $B\to
K_{1}(1270)K,K_{1}(1400)K$ decay are given in Appendix A.
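Numerically, building the physical amplitudes from the flavor-state ones, including the $\sin\theta_{K_{1}}\to\cos\theta_{K_{1}},\cos\theta_{K_{1}}\to-\sin\theta_{K_{1}}$ replacement for the $K_{1}(1400)$ modes, amounts to a single rotation; a toy sketch with made-up complex numbers:

```python
import numpy as np

def physical_amplitudes(A_K1A, A_K1B, theta):
    """Rotate flavor-state amplitudes into the physical ones, cf. Eq.(18);
    theta is theta_{K_1} in radians, and complex inputs are fine."""
    s, c = np.sin(theta), np.cos(theta)
    A_1270 = s * A_K1A + c * A_K1B
    A_1400 = c * A_K1A - s * A_K1B   # the sin->cos, cos->-sin replacement
    return A_1270, A_1400

# toy numbers, for illustration only
A_1270, A_1400 = physical_amplitudes(1.0 + 0.2j, -0.4 + 0.1j, np.radians(33))
```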
## IV Numerical results and discussions
The input parameters in the numerical calculations pdg12 ; ckmfit are listed
as follows:
$\displaystyle f_{B}$ $\displaystyle=$ $\displaystyle 210\;{\rm MeV},\;f_{K_{1A}}=250\;{\rm MeV},\;f^{\perp}_{K_{1B}}=190\;{\rm MeV},$ (23)
$\displaystyle\tau_{B^{\pm}}$ $\displaystyle=$ $\displaystyle 1.638\times 10^{-12}\;{\rm s},\;\tau_{B^{0}}=1.525\times 10^{-12}\;{\rm s},$ (24)
$\displaystyle|V_{ud}|$ $\displaystyle=$ $\displaystyle 0.974,\;|V_{td}|=8.67\times 10^{-3},\;|V_{ub}|=3.51\times 10^{-3},$ (25)
$\displaystyle|V_{ts}|$ $\displaystyle=$ $\displaystyle 0.0404,\;|V_{us}|=0.22534,\;|V_{tb}|=0.999.$ (26)
Using the input parameters and the wave functions as specified in this section
and Sec.II, it is easy to get the branching ratios for the considered decays
which are listed in Table 1, where the first error comes from the uncertainty
in the $B$ meson shape parameter $\omega_{b}=0.40\pm 0.04$ GeV, the second
error is from the hard scale $t$, which we vary from $0.8t$ to $1.2t$, and the
third error is from the combined uncertainties of the Gegenbauer moments
$a^{\perp}_{1}(K_{1A})=-1.08\pm 0.48$ and $a^{\parallel}_{1}(K_{1B})=-1.95\pm
0.45$.
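To turn an amplitude into a branching ratio, the standard two-body formula can be used; the sketch below assumes a normalization in which $\Gamma=|\mathbf{p}_{c}||{\cal A}|^{2}/(8\pi m_{B}^{2})$, so it is illustrative only, since conventions for the amplitude differ:

```python
import numpy as np

HBAR = 6.582e-25   # GeV s

def branching_ratio(amp, m_B, m1, m2, tau_B):
    """Two-body width Gamma = |p_c| |A|^2 / (8 pi m_B^2), times tau_B/hbar."""
    p_c = np.sqrt((m_B**2 - (m1 + m2)**2)
                  * (m_B**2 - (m1 - m2)**2)) / (2 * m_B)
    gamma = p_c * abs(amp)**2 / (8 * np.pi * m_B**2)   # width in GeV
    return gamma * tau_B / HBAR

# e.g. a B0 -> K_1(1270)+ pi- amplitude A (in GeV) would enter as
# branching_ratio(A, 5.28, 1.27, 0.14, 1.525e-12)
```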
Table 1: Branching ratios (in units of $10^{-6}$) for the decays $B\to K_{1}(1270)\pi,K_{1}(1400)\pi$ and $B\to K_{1}(1270)K,K_{1}(1400)K$ for mixing angle $\theta_{\bar{K}_{1}}=-33^{\circ}$. Other model predictions are also presented here for comparison. It is noticed that the results of cmv and vnp are obtained for mixing angle $32^{\circ}$, while those in cy are obtained for mixing angle $-37^{\circ}$. | cmv | vnp | cy | this work
---|---|---|---|---
$\bar{B}^{0}\to K^{-}_{1}(1270)\pi^{+}$ | $4.3$ | $7.6$ | $3.0^{+0.8+1.5+4.2}_{-0.6-0.9-1.4}$ | $4.6^{+0.3+0.9+1.5}_{-0.1-0.8-1.2}$
$\bar{B}^{0}\to\bar{K}^{0}_{1}(1270)\pi^{0}$ | $2.3$ | $0.4$ | $1.0^{+0.0+0.6+1.7}_{-0.0-0.3-0.6}$ | $1.4^{+0.1+0.7+0.6}_{-0.1-0.5-0.5}$
$B^{-}\to\bar{K}^{0}_{1}(1270)\pi^{-}$ | $4.7$ | $5.8$ | $3.5^{+0.1+1.8+5.1}_{-0.1-1.1-1.9}$ | $3.5^{+0.4+1.9+1.6}_{-0.2-1.1-1.2}$
$B^{-}\to K^{-}_{1}(1270)\pi^{0}$ | $2.5$ | $4.9$ | $2.7^{+0.1+1.1+3.1}_{-0.1-0.7-1.0}$ | $3.9^{+0.9+1.0+1.1}_{-0.5-0.7-1.0}$
$\bar{B}^{0}\to K^{-}_{1}(1400)\pi^{+}$ | $2.3$ | $4.0$ | $5.4^{+1.1+1.7+9.9}_{-1.0-1.3-2.8}$ | $3.0^{+0.5+0.1+0.9}_{-0.3-0.1-0.7}$
$\bar{B}^{0}\to K^{0}_{1}(1400)\pi^{0}$ | $1.7$ | $3.0$ | $2.9^{+0.3+0.7+5.5}_{-0.3-0.6-1.7}$ | $3.3^{+0.9+0.1+1.0}_{-0.7-0.0-0.8}$
$B^{-}\to\bar{K}^{0}_{1}(1400)\pi^{-}$ | $2.5$ | $3.0$ | $6.5^{+1.0+2.0+11.6}_{-0.9-1.6-3.6}$ | $5.0^{+1.3+1.0+1.4}_{-0.7-0.8-1.1}$
$B^{-}\to K^{-}_{1}(1400)\pi^{0}$ | $0.7$ | $1.0$ | $3.0^{+0.4+1.1+5.2}_{-0.4-0.7-1.3}$ | $1.8^{+0.3+0.1+0.4}_{-0.2-0.2-0.3}$
$\bar{B}^{0}\to K^{-}_{1}(1270)K^{+}$ | | | $0.01^{+0.01+0.00+0.02}_{-0.00-0.00-0.01}$ | $0.13^{+0.01+0.00+0.23}_{-0.01-0.01-0.08}$
$\bar{B}^{0}\to K^{+}_{1}(1270)K^{-}$ | | | $0.06^{+0.01+0.00+0.46}_{-0.01-0.00-0.06}$ | $0.26^{+0.02+0.05+0.19}_{-0.02-0.04-0.12}$
$B^{-}\to K^{0}_{1}(1270)K^{-}$ | $0.22$ | | $0.25^{+0.01+0.15+0.39}_{-0.01-0.08-0.09}$ | $1.11^{+0.01+0.19+0.43}_{-0.01-0.03-0.35}$
$B^{-}\to K^{-}_{1}(1270)K^{0}$ | $0.02$ | | $0.05^{+0.02+0.07+0.10}_{-0.02-0.03-0.04}$ | $1.84^{+0.37+0.29+0.65}_{-0.28-0.25-0.42}$
$\bar{B}^{0}\to\bar{K}^{0}_{1}(1270)K^{0}$ | $0.02$ | | $2.30^{+0.16+1.13+1.43}_{-0.15-0.61-0.61}$ | $1.71^{+0.34+0.27+0.51}_{-0.26-0.23-0.43}$
$\bar{B}^{0}\to K^{0}_{1}(1270)\bar{K}^{0}$ | $0.20$ | | $0.24^{+0.01+0.11+0.33}_{-0.01-0.07-0.13}$ | $0.26^{+0.03+0.17+0.14}_{-0.06-0.01-0.08}$
$\bar{B}^{0}\to K^{-}_{1}(1400)K^{+}$ | | | $0.09^{+0.01+0.00+0.23}_{-0.01-0.00-0.09}$ | $0.64^{+0.14+0.00+0.13}_{-0.06-0.01-0.08}$
$\bar{B}^{0}\to K^{+}_{1}(1400)K^{-}$ | | | $0.02^{+0.00+0.00+0.04}_{-0.00-0.00-0.00}$ | $0.31^{+0.02+0.11+0.12}_{-0.00-0.01-0.09}$
$B^{-}\to K^{0}_{1}(1400)K^{-}$ | $0.12$ | | $0.48^{+0.08+0.15+0.81}_{-0.08-0.12-0.26}$ | $0.90^{+0.13+0.11+1.21}_{-0.08-0.09-0.16}$
$B^{-}\to K^{-}_{1}(1400)K^{0}$ | $4.4$ | | $0.01^{+0.00+0.01+0.14}_{-0.00-0.00-0.01}$ | $1.33^{+0.14+0.31+0.33}_{-0.10-0.22-0.22}$
$\bar{B}^{0}\to\bar{K}^{0}_{1}(1400)K^{0}$ | $4.1$ | | $0.08^{+0.01+0.17+0.59}_{-0.01-0.06-0.08}$ | $1.46^{+0.16+0.31+0.33}_{-0.13-0.25-0.28}$
$\bar{B}^{0}\to K^{0}_{1}(1400)\bar{K}^{0}$ | $0.11$ | | $0.50^{+0.08+0.13+0.92}_{-0.07-0.11-0.32}$ | $0.14^{+0.04+0.04+0.07}_{-0.03-0.03-0.02}$
From Table 1, one can find that the branching ratios of the $B\to
K_{1}(1270)\pi,K_{1}(1400)\pi$ decays are of order $10^{-6}$. The
experimental data for the branching ratios of the decays $\bar{B}^{0}\to
K_{1}(1270)^{-}\pi^{+},K_{1}(1400)^{-}\pi^{+}$, which are given as $(12.0\pm
3.1^{+9.3}_{-4.5})\times 10^{-6}$ and $(16.7\pm 2.6^{+3.5}_{-5.0})\times
10^{-6}$, respectively, are large and incompatible with all the present
theoretical predictions. Even the two-sided intervals $Br(\bar{B}^{0}\to
K_{1}(1270)^{-}\pi^{+})\in[0.6,2.5]\times 10^{-5}$ and $Br(\bar{B}^{0}\to
K_{1}(1400)^{-}\pi^{+})\in[0.8,2.4]\times 10^{-5}$ can barely accommodate the
various theoretical results. The branching ratios of the charged $B$ decays,
on the other hand, can be explained by the theories because of the large
uncertainties of the intervals
$Br(B^{-}\to\bar{K}_{1}(1270)^{0}\pi^{-})\in[0.0,2.1]\times
10^{-5},Br(B^{-}\to\bar{K}_{1}(1400)^{0}\pi^{-})\in[0.0,2.5]\times 10^{-5}$.
The large differences between theory and experiment do not occur for
the decays $\bar{B}^{0}\to a_{1}(1260)^{\pm}\pi^{\mp}$, which are tree-
dominated. If the decay constants $f_{a_{1}},f_{\pi}$ and the form factors
$V^{B\to a_{1}}_{0},F^{B\to\pi}_{0}$ can be well determined, it is not
difficult for us to predict the branching ratios of the decays $\bar{B}^{0}\to
a_{1}(1260)^{\pm}\pi^{\mp}$ accurately, because the penguin contributions can
be neglected and there are fewer uncertainties. For the considered decays
$\bar{B}^{0}\to K_{1}^{\pm}\pi^{\mp}$, the tree operators are suppressed by
the CKM matrix elements $V_{ub}V^{*}_{us}/(V_{cb}V^{*}_{cs})\sim 0.02$, and
the penguin operators will play a significant role. For the decays considered
here, the authors of cy claimed that if the future data are really larger than
the present predictions, there are two possible reasons: one is larger
corrections from the weak annihilation and the hard spectator contributions;
the other is the charming penguin contributions. In our calculations, the hard
spectator contributions, which correspond to the nonfactorizable emission
diagrams, are very small. Although the factorizable annihilation contributions
are more important, they cannot enhance the branching ratios very much. So we
consider that the charming penguins are more likely to explain the large data.
Unfortunately, the charming penguins are nonperturbative in nature and remain
untouched by many theoretical approaches. It is helpful to consider these
decays by using the soft-collinear effective theory (SCET) bauer , where the
charming penguin contributions from
loop diagrams are included. Certainly, these contributions can also be
incorporated in the final-state interactions hycheng1 . A similar situation
exists for the decays $\bar{B}^{0}\to
a_{1}(1260)^{+}K^{-},b_{1}(1235)^{+}K^{-}$ wwang , where the PQCD predictions
are larger than the data. The nonperturbative contributions, such as the final
state interactions or the charming penguins, are suggested to explain the
data. The penguin contributions from the factorizable annihilation diagrams
in the $K_{1B}\pi$ modes are much larger than those in the $K_{1A}\pi$ modes.
So we can find that the branching ratios of $B\to K_{1B}\pi$ decays are always
larger than those of $B\to K_{1A}\pi$ decays, which is shown in Table 2.
Table 2: Branching ratios (in units of $10^{-6}$) for the decays $B\to K_{1A}\pi,K_{1B}\pi$ and $B\to K_{1A}K,K_{1B}K$. The errors for these entries correspond to the uncertainties from $\omega_{B}=0.4\pm 0.04GeV$, the hard scale $t$ varying from $0.8t$ to $1.2t$, and the Gegenbauer moments $a_{1}^{\perp}(K_{1A})=-1.08\pm 0.48$ for $K_{1A}$ meson, $a_{1}^{\parallel}(K_{1B})=-1.95\pm 0.45$ for $K_{1B}$ meson, respectively. $\bar{B}^{0}\to K^{-}_{1A}\pi^{+}$ | $2.1^{+1.0+0.1+0.0}_{-0.6-0.1-0.3}$ | $\bar{B}^{0}\to K^{-}_{1B}\pi^{+}$ | $5.6^{+0.1+0.8+2.1}_{-0.2-0.9-1.9}$
---|---|---|---
$\bar{B}^{0}\to\bar{K}^{0}_{1A}\pi^{0}$ | $1.3^{+0.7+0.2+0.9}_{-0.5-0.2-0.6}$ | $\bar{B}^{0}\to\bar{K}^{0}_{1B}\pi^{0}$ | $3.4^{+0.1+1.0+1.1}_{-0.1-0.7-0.9}$
$B^{-}\to\bar{K}^{0}_{1A}\pi^{-}$ | $3.9^{+1.9+0.6+1.7}_{-1.3-0.5-1.5}$ | $B^{-}\to\bar{K}^{0}_{1B}\pi^{-}$ | $4.7^{+0.2+2.2+1.8}_{-0.3-1.5-1.6}$
$B^{-}\to K^{-}_{1A}\pi^{0}$ | $2.1^{+0.9+0.2+0.6}_{-0.7-0.2-0.8}$ | $B^{-}\to K^{-}_{1B}\pi^{0}$ | $3.7^{+0.1+0.7+1.2}_{-0.2-0.8-1.1}$
$\bar{B}^{0}\to K^{-}_{1A}K^{+}$ | $0.47^{+0.03+0.00+0.28}_{-0.04-0.00-0.04}$ | $\bar{B}^{0}\to K^{-}_{1B}K^{+}$ | $0.34^{+0.04+0.01+0.14}_{-0.03-0.01-0.07}$
$\bar{B}^{0}\to K^{+}_{1A}K^{-}$ | $0.14^{+0.01+0.01+0.11}_{-0.00-0.01-0.13}$ | $\bar{B}^{0}\to K^{+}_{1B}K^{-}$ | $0.38^{+0.03+0.03+0.26}_{-0.03-0.02-0.19}$
$B^{-}\to K^{0}_{1A}K^{-}$ | $1.24^{+0.13+0.08+1.74}_{-0.12-0.07-0.65}$ | $B^{-}\to K^{0}_{1B}K^{-}$ | $0.60^{+0.04+0.19+0.13}_{-0.04-0.12-0.08}$
$B^{-}\to K^{-}_{1A}K^{0}$ | $0.29^{+0.02+0.05+1.26}_{-0.01-0.03-0.03}$ | $B^{-}\to K^{-}_{1B}K^{0}$ | $2.65^{+0.53+0.48+0.67}_{-0.34-0.41-0.57}$
$\bar{B}^{0}\to\bar{K}^{0}_{1A}K^{0}$ | $0.10^{+0.00+0.05+0.10}_{-0.00-0.03-0.04}$ | $\bar{B}^{0}\to\bar{K}^{0}_{1B}K^{0}$ | $2.71^{+0.30+0.52+0.66}_{-0.30-0.43-0.58}$
$\bar{B}^{0}\to K^{0}_{1A}\bar{K}^{0}$ | $0.16^{+0.12+0.06+0.18}_{-0.06-0.03-0.10}$ | $\bar{B}^{0}\to K^{0}_{1B}\bar{K}^{0}$ | $0.17^{+0.01+0.08+0.09}_{-0.01-0.05-0.06}$
For the decays $B\to K_{1}(1270)K,K_{1}(1400)K$, there are no experimental
data or upper limits up to now. Although the decays $\bar{B}^{0}\to
K_{1}^{\pm}K^{\mp}$ can occur only via annihilation type diagrams, their
branching ratios might not be as small as those predicted by the QCDF
approach. If our predictions can be confirmed by the future LHCb or super-B
experiments, one can say that the PQCD approach is one of the few methods that
can be used to quantitatively calculate the annihilation-type contributions.
In previous years both experimenters and theorists considered that the
branching ratio of $B^{0}\to K^{+}K^{-}$ was of order $10^{-8}$, but two years
ago the CDF and LHCb collaborations gave their first measurements of this
decay, $(2.3\pm 1.0\pm 1.0)\times 10^{-7}$ cdf and
$(1.3^{+0.6}_{-0.5}\pm 0.7)\times 10^{-7}$ lhcb , respectively. Later, these
results were confirmed by the recalculated PQCD result $1.56\times 10^{-7}$
xiao without introducing too many uncertainties. It shows that the PQCD
approach can determine correctly the strength of penguin-annihilation
amplitudes. Whether the PQCD approach can give reasonable predictions for the
pure annihilation decays $\bar{B}^{0}\to
K_{1}(1270)^{\pm}K^{\mp},K_{1}(1400)^{\pm}K^{\mp}$ also deserves our attention
and research. The decay $\bar{B}^{0}\to K^{0}_{1B}\bar{K}^{0}$ cannot receive
a large factorizable emission amplitude, because of the small decay constant
$f_{K_{1B}}$ compared with $f_{K_{1A}}$, while it has a large factorizable
annihilation amplitude, which makes its branching ratio slightly larger than
that of $\bar{B}^{0}\to K^{0}_{1A}\bar{K}^{0}$. The branching ratios of these
two decays are of order $10^{-7}$. But the decay
$\bar{B}^{0}\to\bar{K}^{0}_{1B}K^{0}$ is very different: besides having a
large factorizable annihilation amplitude, it also obtains a large
factorizable emission amplitude at the same time, because here the emitted
meson is $K^{0}$ with the larger decay constant $f_{K}=0.16$ GeV. So this
decay gets a large branching ratio, which amounts to $2.71\times 10^{-6}$.
Even though the decay $\bar{B}^{0}\to\bar{K}^{0}_{1A}K^{0}$ has a small
branching ratio, the physical final states
$\bar{K}_{1}(1270)^{0}K^{0},\bar{K}_{1}(1400)^{0}K^{0}$, which are mixtures of
the two flavor states, still might get large branching ratios.
This has been verified by the different theoretical approaches shown in
Table 1. But the branching ratio of the decay
$\bar{B}^{0}\to\bar{K}_{1}(1400)^{0}K^{0}$ predicted by the QCDF approach
seems too small compared with the results given by the PQCD and the naive
factorization approaches, which can be clarified by future experiments. A
similar situation exists for the decay $B^{-}\to K_{1}(1400)^{-}K^{0}$.
Another decay channel where a large divergence exists between the predictions
is $B^{-}\to K_{1}(1270)^{-}K^{0}$. The Feynman diagrams of this decay can be
obtained from those of the decay $\bar{B}^{0}\to\bar{K}_{1}(1270)^{0}K^{0}$ by
replacing the spectator quark $d$ with $u$, so the difference between the
branching ratios of these two decays should not be so large. In a word, the
branching ratios of the charged $B$ decays are at or near the order of
$10^{-6}$, and those of the pure annihilation decays are of order $10^{-7}$,
taking the mixing angle $\theta_{K_{1}}=33^{\circ}$.
In order to compare with other theoretical predictions, we also list the
branching ratios with the mixing angle $\theta_{\bar{K}_{1}}=-58^{\circ}$
shown in Table 3. One can find that the branching ratios of the decays
$B^{-}\to K_{1}^{-}(1270)K^{0},\bar{B}^{0}\to\bar{K}_{1}^{0}(1270)K^{0}$
decrease remarkably as the mixing angle changes from $-33^{\circ}$ to
$-58^{\circ}$, while those of the decays $B^{-}\to
K_{1}^{-}(1400)K^{0},\bar{B}^{0}\to\bar{K}_{1}^{0}(1400)K^{0}$ increase
remarkably.
Table 3: Same as Table 1 except for the mixing angle $\theta_{\bar{K}_{1}}=-58^{\circ}$. | cmv | vnp | cy | this work
---|---|---|---|---
$\bar{B}^{0}\to K^{-}_{1}(1270)\pi^{+}$ | $4.3$ | $7.6$ | $2.7^{+0.6+1.3+4.4}_{-0.5-0.8-1.5}$ | $3.2^{+0.7+0.5+0.8}_{-0.5-0.5-0.8}$
$\bar{B}^{0}\to\bar{K}^{0}_{1}(1270)\pi^{0}$ | $2.1$ | $0.4$ | $0.8^{+0.1+0.5+1.7}_{-0.1-0.3-0.6}$ | $0.5^{+0.2+0.0+0.4}_{-0.0-0.2-0.2}$
$B^{-}\to\bar{K}^{0}_{1}(1270)\pi^{-}$ | $4.7$ | $5.8$ | $3.0^{+0.2+0.1+2.7}_{-0.2-0.2-2.2}$ | $3.2^{+1.3+1.2+1.3}_{-0.9-0.8-1.2}$
$B^{-}\to K^{-}_{1}(1270)\pi^{0}$ | $1.6$ | $4.9$ | $2.5^{+0.1+1.0+3.2}_{-0.1-0.7-1.0}$ | $3.3^{+1.1+0.7+0.8}_{-0.8-0.6-1.1}$
$\bar{B}^{0}\to K^{-}_{1}(1400)\pi^{+}$ | $2.3$ | $4.0$ | $2.2^{+1.1+0.7+2.6}_{-0.8-0.6-1.3}$ | $4.5^{+0.0+0.3+1.5}_{-0.0-0.5-1.3}$
$\bar{B}^{0}\to K^{0}_{1}(1400)\pi^{0}$ | $1.6$ | $1.7$ | $1.5^{+0.4+0.3+1.7}_{-0.3-0.3-0.9}$ | $4.1^{+0.8+0.7+1.2}_{-0.4-0.4-0.8}$
$B^{-}\to\bar{K}^{0}_{1}(1400)\pi^{-}$ | $2.5$ | $3.0$ | $2.8^{+1.0+0.9+3.0}_{-0.8-0.9-1.7}$ | $5.4^{+0.3+1.6+1.5}_{-0.2-1.2-1.4}$
$B^{-}\to K^{-}_{1}(1400)\pi^{0}$ | $0.6$ | $1.4$ | $1.0^{+0.4+0.4+1.2}_{-0.3-0.4-0.5}$ | $2.5^{+0.0+0.3+0.8}_{-0.0-0.4-0.7}$
$\bar{B}^{0}\to K^{-}_{1}(1270)K^{+}$ | | | $0.01^{+0.00+0.00+0.02}_{-0.00-0.00-0.01}$ | $0.19^{+0.01+0.00+0.37}_{-0.01-0.00-0.09}$
$\bar{B}^{0}\to K^{+}_{1}(1270)K^{-}$ | | | $0.04^{+0.01+0.00+0.27}_{-0.01-0.00-0.04}$ | $0.16^{+0.00+0.02+0.12}_{-0.02-0.03-0.06}$
$B^{-}\to K^{0}_{1}(1270)K^{-}$ | $0.22$ | | $0.22^{+0.01+0.12+0.39}_{-0.01-0.07-0.12}$ | $1.47^{+0.10+0.16+1.59}_{-0.06-0.10-0.58}$
$B^{-}\to K^{-}_{1}(1270)K^{0}$ | $0.75$ | | $0.05^{+0.02+0.09+0.10}_{-0.01-0.03-0.04}$ | $0.78^{+0.17+0.09+0.97}_{-0.13-0.08-0.19}$
$\bar{B}^{0}\to\bar{K}^{0}_{1}(1270)K^{0}$ | $0.70$ | | $2.10^{+0.13+1.23+1.31}_{-0.13-0.65-0.57}$ | $0.46^{+0.13+0.07+0.17}_{-0.09-0.05-0.13}$
$\bar{B}^{0}\to K^{0}_{1}(1270)\bar{K}^{0}$ | $0.20$ | | $0.26^{+0.10+0.12+0.47}_{-0.01-0.08-0.17}$ | $0.23^{+0.09+0.13+0.18}_{-0.06-0.08-0.16}$
$\bar{B}^{0}\to K^{-}_{1}(1400)K^{+}$ | | | $0.07^{+0.02+0.00+0.16}_{-0.02-0.00-0.06}$ | $0.58^{+0.06+0.01+0.15}_{-0.06-0.01-0.13}$
$\bar{B}^{0}\to K^{+}_{1}(1400)K^{-}$ | | | $0.01^{+0.00+0.00+0.16}_{-0.02-0.00-0.06}$ | $0.42^{+0.03+0.01+0.22}_{-0.02-0.00-0.16}$
$B^{-}\to K^{0}_{1}(1400)K^{-}$ | $0.12$ | | $0.22^{+0.07+0.07+0.24}_{-0.07-0.07-0.13}$ | $0.54^{+0.04+0.14+0.76}_{-0.02-0.11-0.13}$
$B^{-}\to K^{-}_{1}(1400)K^{0}$ | $3.9$ | | $0.01^{+0+0.02+0.04}_{-0-0.00-0.00}$ | $2.39^{+0.34+0.50+0.48}_{-0.25-0.39-0.48}$
$\bar{B}^{0}\to\bar{K}^{0}_{1}(1400)K^{0}$ | $3.6$ | | $0.10^{+0.02+0.21+0.15}_{-0.02-0.08-0.10}$ | $2.24^{+0.36+0.40+0.59}_{-0.28-0.34-0.51}$
$\bar{B}^{0}\to K^{0}_{1}(1400)\bar{K}^{0}$ | $0.11$ | | $0.25^{+0.07+0.08+0.31}_{-0.07-0.07-0.15}$ | $0.21^{+0.02+0.13+0.09}_{-0.01-0.07-0.07}$
Figure 2: The dependence of the direct CP-violating asymmetries on the mixing
angle $\theta_{\bar{K}_{1}}$: the solid lines represent the decays
$\bar{B}^{0}\to K_{1}(1270)^{0}\pi^{0}$ (left), $\bar{B}^{0}\to
K_{1}(1270)^{-}\pi^{+}$ (right), and the dashed lines are for the decays
$\bar{B}^{0}\to K_{1}(1400)^{0}\pi^{0}$ (left), $\bar{B}^{0}\to
K_{1}(1400)^{-}\pi^{+}$ (right), respectively.
Figure 3: The dependence of the direct CP-violating asymmetries on the mixing
angle $\theta_{\bar{K}_{1}}$: the solid lines represent the decays $B^{-}\to
K_{1}(1270)^{0}\pi^{-}$ (left), $B^{-}\to K_{1}(1270)^{-}\pi^{0}$ (right), and
the dashed lines are for the decays $B^{-}\to K_{1}(1400)^{0}\pi^{-}$ (left),
$B^{-}\to K_{1}(1400)^{-}\pi^{0}$ (right), respectively.
Figure 4: The dependence of the direct CP-violating asymmetries on the mixing angle $\theta_{\bar{K}_{1}}$: the solid lines represent the decays $B^{-}\to K_{1}(1270)^{-}K^{0}$ (left), $\bar{B}^{0}\to K_{1}(1270)^{-}K^{+}$ (right), the dashed lines are for the decays $B^{-}\to K_{1}(1270)^{0}K^{-}$ (left), $\bar{B}^{0}\to K_{1}(1270)^{+}K^{-}$ (right), the dotted lines are for the decays $B^{-}\to K_{1}(1400)^{-}K^{0}$ (left), $\bar{B}^{0}\to K_{1}(1400)^{-}K^{+}$ (right), and the dash-dot lines represent the decays $B^{-}\to K_{1}(1400)^{0}K^{-}$ (left), $\bar{B}^{0}\to K_{1}(1400)^{+}K^{-}$ (right), respectively. Table 4: Direct CP violation (in units of $\%$) for the decays $B\to K_{1A}\pi,K_{1B}\pi$ and $B\to K_{1A}K,K_{1B}K$. The errors for these entries correspond to the uncertainties from $\omega_{B}=0.4\pm 0.04$ GeV, the hard scale $t$ varying from $0.8t$ to $1.2t$, and the Gegenbauer moments $a_{1}^{\perp}(K_{1A})=-1.08\pm 0.48$ for the $K_{1A}$ meson and $a_{1}^{\parallel}(K_{1B})=-1.95\pm 0.45$ for the $K_{1B}$ meson, respectively. $\bar{B}^{0}\to K^{-}_{1A}\pi^{+}$ | $9.1^{+2.4+0.8+3.0}_{-2.0-0.8-3.4}$ | $\bar{B}^{0}\to K^{-}_{1B}\pi^{+}$ | $-14.7^{+1.2+0.0+1.1}_{-1.4-0.2-1.6}$
---|---|---|---
$\bar{B}^{0}\to\bar{K}^{0}_{1A}\pi^{0}$ | $-6.6^{+1.3+0.9+2.8}_{-1.4-1.0-8.4}$ | $\bar{B}^{0}\to\bar{K}^{0}_{1B}\pi^{0}$ | $-9.2^{+1.0+3.3+1.6}_{-0.7-3.5-1.9}$
$B^{-}\to\bar{K}^{0}_{1A}\pi^{-}$ | $-2.3^{+0.8+0.8+1.5}_{-1.2-0.6-6.8}$ | $B^{-}\to\bar{K}^{0}_{1B}\pi^{-}$ | $3.3^{+0.1+0.6+1.9}_{-0.1-0.6-1.3}$
$B^{-}\to K^{-}_{1A}\pi^{0}$ | $17.7^{+4.1+3.0+17.1}_{-3.5-3.1-7.4}$ | $B^{-}\to K^{-}_{1B}\pi^{0}$ | $3.4^{+1.2+0.0+0.0}_{-1.4-4.6-6.8}$
$\bar{B}^{0}\to K^{-}_{1A}K^{+}$ | $43.9^{+1.7+0.5+0.0}_{-1.3-3.1-35.6}$ | $\bar{B}^{0}\to K^{-}_{1B}K^{+}$ | $-13.9^{+2.5+1.8+0.4}_{-2.6-2.0-0.4}$
$\bar{B}^{0}\to K^{+}_{1A}K^{-}$ | $46.5^{+0.5+4.4+40.3}_{-1.3-3.3-29.5}$ | $\bar{B}^{0}\to K^{+}_{1B}K^{-}$ | $-3.3^{+1.1+6.8+1.6}_{-0.7-4.1-1.7}$
$B^{-}\to K^{0}_{1A}K^{-}$ | $6.6^{+1.6+3.1+4.9}_{-1.7-3.8-1.8}$ | $B^{-}\to K^{0}_{1B}K^{-}$ | $-80.7^{+1.3+4.4+11.1}_{-1.7-3.5-2.9}$
$B^{-}\to K^{-}_{1A}K^{0}$ | $-29.4^{+7.6+2.6+86.7}_{-6.3-1.8-0.0}$ | $B^{-}\to K^{-}_{1B}K^{0}$ | $0.8^{+2.7+0.4+4.0}_{-3.6-0.5-2.9}$
Now we turn to the evaluations of the CP-violating asymmetries in the PQCD
approach. For the neutral $\bar{B}^{0}$ (the charged $B^{-}$) decays the
direct CP-violating asymmetries can be defined as
$\displaystyle{\cal A}_{CP}^{dir}$ $\displaystyle=$
$\displaystyle\frac{\Gamma(\bar{B}^{0}(B^{-})\to
f)-\Gamma(B^{0}(B^{+})\to\bar{f})}{\Gamma(\bar{B}^{0}(B^{-})\to
f)+\Gamma(B^{0}(B^{+})\to\bar{f})}=\frac{2z\sin\theta\sin\delta}{(1+2z\cos\theta\cos\delta+z^{2})}\;,$
(27)
where $z$ is the ratio of the penguin to tree amplitudes, $\delta$ is the
relative strong phase between the tree and penguin amplitudes, and $\theta$ is
the CKM weak phase: $\theta=\alpha$ for the $b\to d$ transition and
$\theta=\gamma$ for the $b\to s$ transition. Certainly, if the final
states are the same for $B^{0}$ and $\bar{B}^{0}$, that is $f=\bar{f}$, the
CP-asymmetries may be time-dependent, including not only the direct CP
violation but also the mixing-induced CP violation. Using the input parameters
and the wave functions as specified in this section and Sec.II, it is easy to
get the PQCD predictions (in units of $10^{-2}$) for the direct CP-violating
asymmetries of $B$ decaying to each flavor final state, which are listed in
Table 4. For the real physical final states, which are mixtures of the
corresponding flavor states, the direct CP-violating asymmetries depend on the
mixing angle $\theta_{\bar{K}_{1}}$. As has been emphasized before,
$\theta_{\bar{K}_{1}}$ for the antiparticle states
$\bar{K}_{1}(1270),\bar{K}_{1}(1400)$ is of opposite sign to that for the
particle states $K_{1}(1270),K_{1}(1400)$. With the convention for the decay
constant $f_{K_{1B}}$ adopted in this work, $\theta_{K_{1}}$ is positive and
$\theta_{\bar{K}_{1}}$ is negative. In Figs.2-4, we give the dependence of
the direct CP-violating asymmetries on the mixing angle $\theta_{\bar{K}_{1}}$
for each decay. Taking $\theta_{\bar{K}_{1}}=-33^{\circ}$ or
$\theta_{\bar{K}_{1}}=-58^{\circ}$, we can read off each direct CP-violating
asymmetry from these figures. It is noticed that for the decays
$\bar{B}^{0}\to K_{1}(1270)^{+}K^{-},K_{1}(1400)^{+}K^{-}$, $B^{-}\to
K_{1}(1270)^{0}K^{-},K_{1}(1400)^{0}K^{-}$, which include the particle states,
their direct CP-violating asymmetry values are still read off at $-33^{\circ}$
or $-58^{\circ}$; since $\theta_{K_{1}}=-\theta_{\bar{K}_{1}}$, the
corresponding mixing angle is positive. The signs of the direct CP-violating
asymmetries of $B\to K_{1}(1270)K(\pi)$ and $B\to K_{1}(1400)K(\pi)$ are
opposite at the mixing angle $\theta_{\bar{K}_{1}}=-33^{\circ}$ for most of
these decays, except for two groups, whose direct CP-violating asymmetries are
predicted as ${\cal
A}_{CP}^{dir}(\bar{B}^{0}\to\bar{K}_{1}(1270)^{0}\pi^{0})=-12.6\%,{\cal
A}_{CP}^{dir}(\bar{B}^{0}\to\bar{K}_{1}(1400)^{0}\pi^{0})=-6.7\%$ and ${\cal
A}_{CP}^{dir}(\bar{B}^{0}\to K_{1}(1270)^{+}K^{-})=12.2\%,{\cal
A}_{CP}^{dir}(\bar{B}^{0}\to K_{1}(1400)^{+}K^{-})=9.6\%$, respectively. From
Table 4, one can find that the direct CP-violating asymmetries of each decay
$B\to K_{1A}\pi,K_{1B}\pi$ are not large, while those for some real physical
final states become very large. For example, the direct CP-violating
asymmetries of the decays $\bar{B}^{0}\to
K_{1}(1270)^{-}\pi^{+},K_{1}(1400)^{-}\pi^{+}$ are about $-58.1\%$ and
$68.4\%$ at the mixing angle $-33^{\circ}$, respectively. Certainly, our
present knowledge of the mixing angle $\theta_{K_{1}}$ is only
phenomenological, and there are no accurate calculations or measurements.
Furthermore, the direct CP-violating asymmetries are sensitive to the mixing
angle. The situation is much more complex for some of the considered decays,
where the nonperturbative contributions, such as charming penguins, give large
corrections, and the corresponding direct CP-violating asymmetries may also
change. So we cannot confirm that these decays must have such large direct
CP-violating asymmetries. As for the decays
$\bar{B}^{0}\to\bar{K}_{1}(1270)^{0}K^{0},\bar{K}_{1}(1400)^{0}K^{0}$, there
is no tree contribution at the leading order, so the direct CP-violating
asymmetry is naturally zero.
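As a closing numerical note, Eq.(27) is easy to explore; the small function below (illustrative, with $z$ interpreted as the penguin-to-tree amplitude ratio) makes explicit how sensitive ${\cal A}_{CP}^{dir}$ is to the strong phase $\delta$, and hence why the nonperturbative strong phases discussed above can shift the predictions:

```python
import numpy as np

def a_cp_dir(z, theta, delta):
    """Direct CP asymmetry of Eq.(27); all angles in radians."""
    return (2 * z * np.sin(theta) * np.sin(delta)
            / (1 + 2 * z * np.cos(theta) * np.cos(delta) + z**2))

deltas = np.linspace(0.0, np.pi, 7)        # scan the strong phase
print(a_cp_dir(0.3, np.radians(70), deltas))
```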
## V Conclusion
In this paper, by using the decay constants and the light-cone distribution
amplitudes derived from the QCD sum-rule method, we study the decays $B\to
K_{1}(1270)\pi(K),K_{1}(1400)\pi(K)$ in the PQCD approach and find that
* •
All the theoretical predictions for the branching ratios of the decays
$\bar{B}^{0}\to K_{1}(1270)^{+}\pi^{-},K_{1}(1400)^{+}\pi^{-}$ are
incompatible with the present experimental data. A similar situation exists
for the decays $\bar{B}^{0}\to
a_{1}(1260)^{+}K^{-},b_{1}(1235)^{+}K^{-}$, where the nonperturbative
contributions, such as the final state interactions or the charming penguins,
are needed to explain the data. But the difference is that the nonperturbative
contributions seem to play opposite roles in these two groups of decays. If
the future data are really larger than the present predictions for some
considered decays, it might indicate that the nonperturbative contributions
have pronounced corrections for some decay channels which include the higher
resonances in the final states.
* •
The pure annihilation type decays $\bar{B}^{0}\to
K_{1}^{\pm}(1270)K^{\mp},K_{1}^{\pm}(1400)K^{\mp}$ are good channels to test
whether an approach can be used to calculate correctly the strength of the
penguin-annihilation amplitudes. Their branching ratios are predicted at
$10^{-7}$ order.
* •
Among the four neutral flavor final states
$K^{0}_{1A}\bar{K}^{0},K^{0}_{1B}\bar{K}^{0},\bar{K}^{0}_{1A}K^{0},\bar{K}^{0}_{1B}K^{0}$,
the decay $\bar{B}^{0}\to\bar{K}^{0}_{1B}K^{0}$ has the largest branching
ratio, which is of order $10^{-6}$, while the other decays have branching
ratios of order $10^{-7}$. So the decays
$\bar{B}^{0}\to\bar{K}_{1}(1270)^{0}K^{0},\bar{K}_{1}(1400)^{0}K^{0}$, which
include the real physical states, can have large branching ratios at the
mixing angle $\theta_{\bar{K}_{1}}=-33^{\circ}$ compared with the decays
$\bar{B}^{0}\to K_{1}(1270)^{0}\bar{K}^{0},K_{1}(1400)^{0}\bar{K}^{0}$.
* •
The signs of the direct CP-violating asymmetries are opposite between $B\to
K_{1}(1270)K(\pi)$ and $B\to K_{1}(1400)K(\pi)$ for almost all of the decays
at the mixing angle $\theta_{\bar{K}_{1}}=-33^{\circ}$, except for two groups,
whose direct CP-
violating asymmetries are predicted as ${\cal
A}_{CP}^{dir}(\bar{B}^{0}\to\bar{K}_{1}(1270)^{0}\pi^{0})=-12.6\%,{\cal
A}_{CP}^{dir}(\bar{B}^{0}\to\bar{K}_{1}(1400)^{0}\pi^{0})=-6.7\%$ and ${\cal
A}_{CP}^{dir}(\bar{B}^{0}\to K_{1}(1270)^{+}K^{-})=12.2\%,{\cal
A}_{CP}^{dir}(\bar{B}^{0}\to K_{1}(1400)^{+}K^{-})=9.6\%$, respectively.
* •
The strong phase introduced by the nonperturbative contributions might produce
dramatic effects on some of the considered decays, such as $\bar{B}^{0}\to
K_{1}(1270)^{-}\pi^{+},K_{1}(1400)^{-}\pi^{+}$ and $B^{-}\to
K_{1}(1270)^{-}\pi^{0},K_{1}(1400)^{-}\pi^{0}$,
and these effects could exceed those from the parametric uncertainties in the
case of the CP asymmetries.
## Acknowledgment
This work is partly supported by the National Natural Science Foundation of
China under Grants No. 11147004, No. 11147008, and No. 11347030, by the Program of
the Youthful Key Teachers in Universities of Henan Province under Grants No.
001166, and by the Program for Science and Technology Innovation Talents in
Universities of Henan Province 14HASTIT037. The authors would like to thank
Prof. Hai-Yang Cheng and Prof. Cai-Dian Lu for helpful discussions.
## Appendix A Analytic formulas for the decay amplitudes
$\displaystyle A(K_{1}(1270)^{0}\bar{K}^{0})$ $\displaystyle=$
$\displaystyle-\xi_{t}(f_{K_{1A}}\sin\theta_{K_{1}}+f_{K_{1B}}\cos\theta_{K_{1}})F^{LL}_{eK}(a_{4}-\frac{1}{2}a_{10})$
(28)
$\displaystyle-\xi_{t}(M^{LL;K_{1A}}_{eK}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{eK}\cos\theta_{K_{1}})(C_{3}-\frac{1}{2}C_{9})$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{eK}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{eK}\cos\theta_{K_{1}})(C_{5}-\frac{1}{2}C_{7})$
$\displaystyle-\xi_{t}(M^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(C_{3}-\frac{1}{2}C_{9})$
$\displaystyle-\xi_{t}(M^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(C_{4}-\frac{1}{2}C_{10})$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{aK}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{aK}\cos\theta_{K_{1}})(C_{5}-\frac{1}{2}C_{7})$
$\displaystyle-\xi_{t}(M^{SP;K_{1A}}_{aK}\sin\theta_{K_{1}}+M^{SP;K_{1B}}_{aK}\cos\theta_{K_{1}})(C_{6}-\frac{1}{2}C_{8})$
$\displaystyle-\xi_{t}f_{B}(F^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(a_{3}-\frac{1}{2}a_{9})$
$\displaystyle-\xi_{t}f_{B}(F^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(a_{4}-\frac{1}{2}a_{10})$
$\displaystyle-\xi_{t}f_{B}(F^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(a_{5}-\frac{1}{2}a_{7})$
$\displaystyle-\xi_{t}f_{B}(F^{SP;K_{1A}}_{aK}\sin\theta_{K_{1}}+F^{SP;K_{1B}}_{aK}\cos\theta_{K_{1}})(a_{6}-\frac{1}{2}a_{8})$
$\displaystyle-\xi_{t}(M^{LL;K}_{aK_{1A}}\sin\theta_{K_{1}}+M^{LL;K}_{aK_{1B}}\cos\theta_{K_{1}})(C_{4}-\frac{1}{2}C_{10})$
$\displaystyle-\xi_{t}(M^{SP;K}_{aK_{1A}}\sin\theta_{K_{1}}+M^{SP;K}_{aK_{1B}}\cos\theta_{K_{1}})(C_{6}-\frac{1}{2}C_{8})$
$\displaystyle-\xi_{t}f_{B}(F^{LL;K}_{aK_{1A}}\sin\theta_{K_{1}}+F^{LL;K}_{aK_{1B}}\cos\theta_{K_{1}})(a_{3}-\frac{1}{2}a_{9})$
$\displaystyle-\xi_{t}f_{B}(F^{LL;K}_{aK_{1A}}\sin\theta_{K_{1}}+F^{LL;K}_{aK_{1B}}\cos\theta_{K_{1}})(a_{5}-\frac{1}{2}a_{7}),$
$\displaystyle A(K_{1}(1270)^{0}K^{-})$ $\displaystyle=$
$\displaystyle-\xi_{t}(f_{K_{1A}}\sin\theta_{K_{1}}+f_{K_{1B}}\cos\theta_{K_{1}})F^{LL}_{eK}(a_{4}-\frac{1}{2}a_{10})$
(29)
$\displaystyle-\xi_{t}(M^{LL;K_{1A}}_{eK}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{eK}\cos\theta_{K_{1}})(C_{3}-\frac{1}{2}C_{9})$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{eK}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{eK}\cos\theta_{K_{1}})(C_{5}-\frac{1}{2}C_{7})$
$\displaystyle+(M^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(\xi_{u}C_{1}-\xi_{t}(C_{3}+C_{9}))$
$\displaystyle-\xi_{t}(M^{LR;K_{1A}}_{aK}\sin\theta_{K_{1}}+M^{LR;K_{1B}}_{aK}\cos\theta_{K_{1}})(C_{5}+C_{7})$
$\displaystyle+f_{B}(F^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(\xi_{u}a_{2}-\xi_{t}(a_{4}+a_{10}))$
$\displaystyle-\xi_{t}f_{B}(F^{SP;K_{1A}}_{aK}\sin\theta_{K_{1}}+F^{SP;K_{1B}}_{aK}\cos\theta_{K_{1}})(a_{6}+a_{8}).$
In the above two formulae, if we change the first term to
$-\xi_{t}f_{K}(F^{LL}_{eK_{1A}}\sin\theta_{K_{1}}+F^{LL}_{eK_{1B}}\cos\theta_{K_{1}})(a_{4}-\frac{1}{2}a_{10})-\xi_{t}f_{K}(F^{SP}_{eK_{1A}}\sin\theta_{K_{1}}+F^{SP}_{eK_{1B}}\cos\theta_{K_{1}})(a_{6}-\frac{1}{2}a_{8})$,
and at the same time exchange the positions of $K_{1A}(K_{1B})$ and $K$ in
the other terms, we obtain the decay amplitudes of
$\bar{B}^{0}\to\bar{K}_{1}(1270)^{0}K^{0}$ and $B^{-}\to
K_{1}(1270)^{-}K^{0}$, respectively.
$\displaystyle A(K_{1}(1270)^{+}K^{-})$ $\displaystyle=$
$\displaystyle(M^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+M^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(\xi_{u}C_{2}-\xi_{t}(C_{4}+C_{10}))$
(30)
$\displaystyle-\xi_{t}(M^{SP;K_{1A}}_{aK}\sin\theta_{K_{1}}+M^{SP;K_{1B}}_{aK}\cos\theta_{K_{1}})(C_{6}+C_{8})$
$\displaystyle+f_{B}(F^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(\xi_{u}a_{1}-\xi_{t}(a_{3}+a_{5}+a_{7}+a_{9}))$
$\displaystyle-\xi_{t}f_{B}(F^{LL;K_{1A}}_{aK}\sin\theta_{K_{1}}+F^{LL;K_{1B}}_{aK}\cos\theta_{K_{1}})(a_{3}+a_{5}-\frac{1}{2}a_{7}-\frac{1}{2}a_{9})$
$\displaystyle-\xi_{t}(M^{LL;K}_{aK_{1A}}\sin\theta_{K_{1}}+M^{LL;K}_{aK_{1B}}\cos\theta_{K_{1}})(C_{4}-\frac{1}{2}C_{10})$
$\displaystyle-\xi_{t}(M^{SP;K}_{aK_{1A}}\sin\theta_{K_{1}}+M^{SP;K}_{aK_{1B}}\cos\theta_{K_{1}})(C_{6}-\frac{1}{2}C_{8}).$
In Eq.(30), if we exchange the positions of $K_{1A}(K_{1B})$ and $K$, we
obtain the total amplitude of the decay $\bar{B}^{0}\to K_{1}(1270)^{-}K^{+}$.
The total amplitudes of the decays $B\to K_{1}(1400)K$ can be obtained by
making the replacements with
$\sin\theta_{K_{1}}\to\cos\theta_{K_{1}},\cos\theta_{K_{1}}\to-\sin\theta_{K_{1}}$
in Eqs.(28-30), respectively.
11institutetext: Bonn-Rhein-Sieg University of Applied Sciences, Sankt Augustin, Germany <EMAIL_ADDRESS>
22institutetext: Leiden Institute of Advanced Computer Science, Leiden University, Leiden, The Netherlands <EMAIL_ADDRESS>
# An Analysis of Phenotypic Diversity in Multi-Solution Optimization
(This work received funding from the German Federal Ministry of Education and Research (BMBF), grant agreement no. 03FH012PX5.)
Alexander Hagg(✉) 1,2 (0000-0002-8668-1796), Mike Preuss 2 (0000-0003-4681-1346), Alexander Asteroth 1 (0000-0003-1133-9424), Thomas Bäck 2 (0000-0001-6768-1478)
###### Abstract
More and more, optimization methods are used to find diverse solution sets. We
compare solution diversity in multi-objective optimization, multimodal
optimization, and quality diversity in a simple domain. We show that
multi-objective optimization does not always produce much diversity, that
multimodal optimization produces higher-fitness solutions, and that quality
diversity is not sensitive to genetic neutrality and creates the most diverse
set of solutions.
An autoencoder is used to discover phenotypic features automatically,
producing an even more diverse solution set with quality diversity. Finally,
we make recommendations about when to use which approach.
###### Keywords:
Evolutionary computation · Multimodal optimization · Multi-objective optimization · Quality diversity · Autoencoder.
## 1 Introduction
With the advent of 3D printing and generative design, a new goal in
optimization is emerging. Having the option of choosing from different
solutions that are good enough to fulfill a task can be more effective than
being guided by single-solution algorithms. The optimization field should aim
to understand how to solve a problem in different ways.
Three major paradigms for multi-solution optimization exist. The major
difference between multi-objective optimization (MOO), multimodal optimization
(MMO) and quality diversity (QD) is the context in which solution diversity is
maintained. In MOO the goal is to find the Pareto set, which represents the
trade-offs between multiple criteria. MMO finds solutions that cover the
search space as well as possible. QD finds combinations of phenotypic features
to maximize the variation in solutions’ expressed shape or behavior - a new
focus in evolutionary optimization [17].
We analyze the diversity of solution sets in the three paradigms and introduce
a new niching method that allows comparing genetic and phenotypic diversity
(Section 2). State-of-the-art diversity metrics (Section 3) are used in a new
problem domain (Section 4) to evaluate all paradigms (Section 5), after which
we make recommendations about when to use which approach (Section 6).
## 2 Diversity in Optimization
The intuitive understanding of diversity assumes that there are more ways to
“do” or to “be” something, and it involves the concepts of dissimilarity and
distance. Evidence can be found in the large number of approaches and metrics,
and in the lack of agreement on when to use which one. This section gives an
overview of the three paradigms that have arisen over the last decades.
Finding solutions that are diverse with respect to objective space has been a
paradigm since the 1970s. Multi-objective optimization tries to discover the
Pareto set of trade-off solutions with respect to two or more objectives. The
method has no control over the diversity of genomes or their expression other
than the expectation that trade-offs require different solutions. The most
successful method is the Non-dominated Sorting Genetic Algorithm (NSGA-II)
[5].
The first ideas to use genetic diversity in optimization were aimed not at
finding different solutions, but at dealing with premature convergence to
local optima. The concept of niching was integrated into evolutionary
optimization by introducing sharing and crowding [8, 6]. In the 1990s, multi-local or
multimodal optimization came into focus. This paradigm has the explicit goal
to find a diverse set of high quality locations in the search space, based on
a single criterion. Various algorithms have been introduced, like basin
hopping [26], topographical selection [23], nearest-better clustering [16] and
restarted local search (RLS) [15].
The introduction of novelty search [11] led to studying the search for novel,
non-optimal solutions. QD, reintroducing objectives [3, 12], finds a diverse
set of high quality optimizers by performing niching in phenotypic space. In
applications for developing artificial creatures and robot controller
morphologies [3, 12], QD only allows solutions that belong to the same
phenotypic niche to compete. To this end it keeps track of an archive of niches.
Solutions are added to the archive if their phenotype is novel enough or
better than that of a similar solution.
This work does not aim to give an exhaustive overview of all methods; for that
we refer to some of the many survey papers [4, 1, 15, 21, 22, 27]. We
consciously choose not to discuss methods that combine ideas from the three
paradigms, but rather compare the three paradigms in their “purest” form.
### 2.1 Niching with Voronoi Tessellation
To remove variations in the search dynamics when comparing different
algorithms, we introduce a niching variant using ideas from Novelty Search
with Local Competition (NSLC) [12] and CVT-Elites [25]. Voronoi-Elites (VE)
accepts all new solutions until the maximum number of archive bins is
surpassed (Alg. 1). Then the pair of elites that are phenotypically closest to
each other is compared, rejecting the worse-performing one. An example archive
is shown in Fig. 6 (step 5). By placing selection pressure on the closest
solutions, VE tries to equalize the distances between individuals. The
generators of the Voronoi cells do not have to coincide with the centroids,
like in CVT-Elites, and the boundaries of the archive are not fixed. VE can be
used to compare archive spaces of different dimensionality. When the genetic
parameters are used as archive dimensions, VE behaves like an MMO algorithm by
performing niching in genetic space. When we use phenotypic descriptors, VE
behaves like a QD algorithm.
Algorithm 1 Voronoi-Elites
Initialize population
for iter 1 to n do
Select parents $\mathcal{P}$ randomly
Mutate $\mathcal{P}$ using normal distribution to create offspring
$\mathcal{O}$
Evaluate performance and descriptors of $\mathcal{O}$
Add $\mathcal{O}$ to archive $\mathcal{A}$
while $|\mathcal{A}|>maxSize$ do
Find pair in $\mathcal{A}$ with smallest distance
Remove individual (in pair) with lowest fitness
end while
end for
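As a concrete illustration of the archive-truncation step in Alg. 1, the following is a minimal Python sketch of one VE generation. The function names, the Gaussian mutation scale, and the `evaluate` interface (returning fitness values and phenotypic descriptors for a batch of genomes) are our own illustration, not the authors' code:

```python
import numpy as np

def ve_generation(archive, evaluate, max_size, sigma=0.1, rng=None):
    """One Voronoi-Elites generation (Alg. 1): mutate randomly chosen
    parents, add all offspring to the archive, then shrink the archive
    back to max_size by repeatedly dropping the less fit member of the
    phenotypically closest pair of elites."""
    rng = rng or np.random.default_rng()
    parents = archive[rng.integers(len(archive), size=len(archive))]
    offspring = parents + rng.normal(0.0, sigma, parents.shape)  # Gaussian mutation
    genomes = np.vstack([archive, offspring])
    fit, desc = evaluate(genomes)  # fitness and phenotypic descriptors per genome
    while len(genomes) > max_size:
        d = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        i, j = np.unravel_index(np.argmin(d), d.shape)
        drop = i if fit[i] < fit[j] else j  # remove the worse of the closest pair
        genomes = np.delete(genomes, drop, axis=0)
        fit = np.delete(fit, drop)
        desc = np.delete(desc, drop, axis=0)
    return genomes
```

Because selection pressure is applied only to the closest pair, the generators spread out over the descriptor space without requiring fixed archive boundaries.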
### 2.2 Related Work
A number of survey and analysis articles have appeared in the last decade. In
[1] a taxonomy for diversity in optimization was introduced. [28] investigates
how genetically diverse solution sets in MOO are found and shows that quality
indicators used in MOO can be applied to MMO. [24] compares two algorithms
from MMO to two QD algorithms in a robotics task, showing that clearing’s
performance can be comparable to that of QD. Finally, [13] discusses 100
solution set quality indicators in MOO and [22] discusses diversity indicators
for MOO.
## 3 Metrics
From the large number of diversity metrics available we only consider metrics
that do not depend on precise domain knowledge, because no knowledge about
actual local optima is available in real world applications. Three commonly
used distance-based metrics are selected to evaluate the experiments in this
work. The Sum of Distances to Nearest Neighbor (SDNN) measures the size of a
solution set as well as the dispersion between members of that set. Solow-
Polasky Diversity (SPD) measures the effective number of species by using
pairwise distances between the species in the set [20]. If the solutions are
similar with respect to each other, SPD tends to 1, otherwise to $N$. The
sensitive parameter $\theta$, which determines how fast a population tends to
$N$ with increasing distance, needs to be parameterized for every domain. It
is set to 1 for genetic distances and to 100 for phenotypic distances in this
work. Pure Diversity (PD) is used in high-dimensional many-objective
optimization [21, 27]. It does not have parameters, which makes it more
robust, and depends on a dissimilarity measure ($L_{0.1}$-norm).
Publications in the field of QD have focused on a small number of metrics. The
total fitness is used directly or through the QD-score [18], which calculates
the total fitness of all filled niches in a phenotypic archive. To achieve
this, the solutions from a non-QD algorithm are projected into a fixed
phenotypic niching space. This score is domain-dependent and does not allow
comparing QD algorithms that have different archiving methods. A comparison
between archives created from different features introduces a bias towards one
of the archives. The collection size indicates the proportion of the niching
space that is covered by the collection, but again can only be used on a
reference archive [4]. Archive-dependent metrics do not generalize well and
introduce biases. We therefore only use distance-based diversity metrics. The
high dimensionality of phenotypic spaces is taken into account by using
appropriate distance norms.
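A minimal Python sketch of the two simpler metrics follows, assuming phenotypes are flattened binary bitmaps compared with the Hamming distance. For SPD we use the construction of Solow and Polasky [20] as we read it, with diversity $\mathbf{1}^{\top}M^{-1}\mathbf{1}$ for the similarity matrix $M_{ij}=e^{-\theta d_{ij}}$; PD is omitted for brevity, as it requires the recursive dissimilarity definition used in [21, 27]:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def sdnn(X, metric="hamming"):
    """Sum of Distances to Nearest Neighbor over a solution set X."""
    D = squareform(pdist(X, metric=metric))
    np.fill_diagonal(D, np.inf)
    return D.min(axis=1).sum()

def solow_polasky(X, theta=1.0, metric="hamming"):
    """Solow-Polasky diversity: effective number of species, tending to 1
    when all members are similar and to N when they are distinct.
    Assumes no duplicate rows in X, otherwise M becomes singular."""
    D = squareform(pdist(X, metric=metric))
    M = np.exp(-theta * D)  # pairwise similarity matrix
    return np.linalg.solve(M, np.ones(len(X))).sum()  # 1^T M^{-1} 1
```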
## 4 Polygon Domain
We construct a domain of free form deformed, eight-sided polygons. The genome
(Fig. 1a) consists of 16 parameters controlling the polar coordinate deviation
of the polygon control points. The first eight genes determine the deviation
of the radius of the polygon’s control points, the second eight genes
determine their angular deviation. Since the phenotypes can be expressed as
binary bitmap images (Figs. 1b and 1c, resolution of 64x64 pixels) we use the
Hamming distance in the diversity metrics to circumvent the problem of high
dimensionality [7].
Figure 1: Free form encoding of polygons. The genome (a) consists of 16
parameters that define axial and radial deformations (b). The phenotype is
considered to be the pixel representation of the polygon (c). Shown is a 20x20
phenotype, although we use 64x64 pixels. Features/criteria are shown in (d).
Three aspects describing the polygons are defined that can be used either as
criteria or as features (Fig. 1d): the area of the polygon $A$, its
circumference $l$ and point symmetry $P$ through the center. The polygon is
sampled at $n=1000$ equidistant locations on the polygon circumference. The
symmetry error $E_{s}$ is calculated as the sum of distances of all $n/2$
opposing sampling locations. The symmetry metric is calculated as shown in Eq.
1.
$f_{P}(x_{i})={1\over{1+E_{s}(x_{i})}},E_{s}(x_{i})=\sum_{j=1}^{n/2}||x_{i}^{j},x_{i}^{j+n/2}||$
(1)
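To make the encoding concrete, here is a minimal Python sketch of the genome-to-phenotype expression and the symmetry fitness of Eq. (1). The base radius (0.5), the deviation scalings (angular genes spanning $\pm\pi/8$), and the reading of $||x_{i}^{j},x_{i}^{j+n/2}||$ as the distance of a sample from the negated opposing sample (so a perfectly point-symmetric outline gives $E_{s}=0$) are our assumptions:

```python
import numpy as np
from matplotlib.path import Path

def express(genome, res=64):
    """Express a 16-parameter genome (8 radial + 8 angular deviations,
    Fig. 1) as a res x res binary bitmap phenotype."""
    dr, dth = genome[:8], genome[8:]
    r = 0.5 + 0.4 * dr                      # radial deviation of control points
    th = np.arange(8) * 2 * np.pi / 8 + 0.125 * np.pi * dth  # angular deviation
    pts = np.c_[r * np.cos(th), r * np.sin(th)]
    g = np.linspace(-1, 1, res)
    xx, yy = np.meshgrid(g, g)
    inside = Path(pts).contains_points(np.c_[xx.ravel(), yy.ravel()])
    return inside.reshape(res, res).astype(np.uint8)

def symmetry_fitness(genome, n=1000):
    """Point-symmetry fitness f_P of Eq. (1): sample n equidistant points
    on the polygon outline and sum the point-symmetry errors."""
    dr, dth = genome[:8], genome[8:]
    r = 0.5 + 0.4 * dr
    th = np.arange(8) * 2 * np.pi / 8 + 0.125 * np.pi * dth
    v = np.c_[r * np.cos(th), r * np.sin(th)]
    v = np.vstack([v, v[:1]])                       # close the polygon
    cum = np.r_[0, np.cumsum(np.linalg.norm(np.diff(v, axis=0), axis=1))]
    s = np.linspace(0, cum[-1], n, endpoint=False)  # equidistant arc lengths
    x, y = np.interp(s, cum, v[:, 0]), np.interp(s, cum, v[:, 1])
    # perfect point symmetry: the opposing sample equals the negated sample
    E = np.hypot(x[:n // 2] + x[n // 2:], y[:n // 2] + y[n // 2:]).sum()
    return 1.0 / (1.0 + E)
```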
## 5 Evaluation
We ask which paradigm (objective space, search space or phenotype space)
provides the highest phenotypic diversity of shapes. We compare VE, RLS and
NSGA-II in multiple experiments. Throughout these experiments we fix the
number of function evaluations and solutions and use five replicates per
configuration. In NSGA-II the features are used as optimization criteria,
maximizing $A$ and minimizing $l$. The true Pareto set consists of circles
with varying sizes. The number of generations is set to 1024 and mutation
strength to 10% of the parameter range. The probability of crossover for
NSGA-II is 90% and the probability of mutation is $1/dof=0.0625$, with
$dof=16$ degrees of freedom. VE’s archive size is varied throughout the experiments.
The number of children and population size is set to the same value. RLS uses
as many restarts as the size of the VE archive, the step size is set to
$\rho=0.065$ (after a small parameter sweep) and L-BFGS-B is used as a local
search method (within the bounds of the domain). The initial solution set for
VE and NSGA-II is created with a Sobol sequence - the initial RLS solution is
in the center of the parameter range but RLS’ space filling character assures
a good search space coverage.
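For reference, the shared settings above can be collected in a single configuration; a sketch (names are our own):

```python
# Experimental settings as stated in the text; key names are illustrative.
CONFIG = {
    "replicates": 5,
    "generations": 1024,
    "mutation_strength": 0.10,  # 10% of the parameter range
    "nsga2": {"p_crossover": 0.90, "p_mutation": 1 / 16},  # 1/dof, dof = 16
    "rls": {"step_size": 0.065, "local_search": "L-BFGS-B"},
    "init": "Sobol sequence",   # VE and NSGA-II; RLS starts at the centre
}
```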
### 5.1 Genetic or Phenotypic Diversity
Biology has inspired evolutionary optimization to compose a solution of a
genome, its encoding, and a phenotype, its expression. The phenotype often is
a very high-dimensional object, for example a high-resolution 2D image, and
can involve the interaction with the environment. Since the phenotypic space
is usually too large, a low-dimensional representation, the genome, is used as
search space. An expression function is constructed that turns a genome into
its phenotype. Although the expression function should ideally be a bijective
mapping, it often does not prevent multiple genomes from being mapped to the
same phenotype. The phenomenon of such a surjective mapping is called genetic
neutrality, which is not the same as, but akin to, genetic neutrality in
biology. In biology, a neutral mutation is understood to be a mutation that
has no effect on the survivability of a life form. In evolutionary
computation, genetic neutrality refers to genetic variants that have the same
phenotype [9].
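Assuming the illustrative `express` helper from the Section 4 sketch (angular genes spanning $\pm\pi/8$, as in case B of Table 1), the rotation neutrality of Fig. 2 (left) can be demonstrated directly:

```python
import numpy as np

# Moving every control point from the -pi/8 angular bound to the +pi/8
# bound while cyclically shifting the radial genes reproduces the same
# vertex set: two different genomes, one phenotype.
rng = np.random.default_rng(0)
r = rng.uniform(-1, 1, 8)
gA = np.r_[r, np.ones(8)]               # all angular genes at +pi/8
gB = np.r_[np.roll(r, 1), -np.ones(8)]  # shifted radii, angles at -pi/8
diff = np.mean(express(gA) != express(gB))
print(diff)  # ~0.0 (identical up to boundary-pixel rasterisation jitter)
```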
Figure 2: Genetic neutrality. The same phenotype is expressed when rotating
the control points by a $\pi\over 8$ angle (left) or by translating the
control points as shown (right).
Figure 2(a) shows an example polygon. If the angle $\theta$ equals 0° or 45°,
the expressed shapes are phenotypically the same; in this case, eight
genomes all point to the same phenotype. Similarly, Figure 2(b) shows how,
through translations of the keypoints, a similar shape can appear based on
different genomes. We postulate the first hypothesis: diversity maintenance in
a neutral, surjective genetic space leads to lower phenotypic diversity than
when using phenotypic niching.
While diversity is often thought about in terms of the distribution of points
in the search space, we make a case to measure diversity in phenotypic space,
which is independent of the encoding and does not suffer from the effects of
genetic neutrality. Phenotypes may also include other factors that are not
embodied within the solution’s shape itself, but emerge through interaction
with the environment. This is taken advantage of in several publications on
neuroevolution [11, 12]. In this work we only analyse the narrow
interpretation of phenotypes, which does not include behavior.
Figure 3: Voronoi-Elites (VE) performed in 16D genetic and 2D phenotypic
space. Top: genetic diversity (SDNN = Sum of Distances to Nearest Neighbor,
SPD = Solow-Polasky Diversity, and PD = Pure Diversity) and median fitness,
bottom: phenotypic diversity. The number of bins/solutions is increased
(x-axis).
The Voronoi tessellation used in VE makes it easy to compare archives of
different dimensionality by fixing the number of niches. We apply VE as an MMO
algorithm, performing niching in 16-dimensional genetic space, and as a QD
algorithm with a two-dimensional phenotypic space. The number of bins is
increased to evaluate when differences between genetic and phenotypic VE
appear (Fig. 3). At 25 solutions, the approaches produce about the same
diversity, but genetic VE finds higher quality solutions. As the number of
bins is increased, based on where niching is performed (genetic or phenotypic
space), the diversity in that space becomes higher. Phenotypic VE beats
genetic VE in terms of phenotypic diversity, which gives us evidence that the
first hypothesis is valid. At the same time, the average fitness values of
genetic VE are higher than that of phenotypic VE, although the difference gets
lower towards 400 solutions.
Table 1: Parameter settings in order of increasing genetic neutrality.

case | axial min. | axial max. | radial min. | radial max. | neutrality
---|---|---|---|---|---
A | 0 | 1 | -0.05 | 0.05 | -
B | 0 | 1 | -0.125 | 0.125 | +
C | -0.25 | 1 | -0.25 | 0.25 | ++
D | -0.5 | 1 | -0.5 | 0.5 | +++
E | -1 | 1 | -1 | 1 | ++++
We compare phenotypic VE to NSGA-II and RLS. When we bound $dr$ between $0$
and $1$ and $d\theta$ between $\pm 0.125\pi$, we can minimize genetic
neutrality. Neutrality is increased by expanding those bounds (Table 1). In
contrast to VE, the phenotypic diversity of RLS' solutions is expected to
decrease as genetic neutrality increases: since there is no mechanism to
distinguish between similar shapes with different genomes, there is an
increasing probability that RLS finds similar solutions. We expect the
solution set produced by RLS to be more diverse than that of NSGA-II, owing
to RLS' space-filling character.
Figure 4: Genetic (top) and phenotypic (bottom) diversity, and median fitness.
Right of red marker: neutrality increases, using parameter bounds shown in
Table 1.
Finally, it can make more sense to treat objectives as features and, instead
of searching for the Pareto set, allow all combinations of features, thereby
increasing the diversity of the solution set. We expect NSGA-II to easily find
the Pareto set, which consists of circles of various scales, maximizing the
area while minimizing the length of the circumference, while QD should find a
variety of shapes that can be any combination of large and small $A$ and $l$.
We postulate the second hypothesis: allowing all criteria combinations,
instead of using a Pareto approach, leads to higher diversity, while still
approximating the Pareto set.
The number of solutions is set to 400. A result similar to Fig. 3 appears for
the standard algorithms in Fig. 4. Phenotypic diversity is highest for VE,
especially after the genetic neutrality threshold is crossed (at B). Diversity
of NSGA-II is lowest, as is expected for this setup. Although diversity of VE
is higher than that of RLS, the latter’s solutions are all maximally symmetric
(see fitness plots), making RLS much more appropriate when quality is more
important than diversity. These results confirm the first part of the second
hypothesis.
The Pareto set can be calculated a priori, as we know that circular shapes
maximize area while minimizing circumference. The members of the Pareto set
adhere to the following genome:
$(r_{1},\dots,r_{8},\theta_{1},\dots,\theta_{8})$, where $r_{i}$ and
$\theta_{i}$ have the same respective value. To create 100 shapes from the
Pareto set we take ten equidistant values for $r$ and $\theta$ and combine
them.
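A minimal sketch of this construction (the value ranges are illustrative, matching case B of Table 1):

```python
import numpy as np

# Ten equidistant radial and ten angular values, combined into 100
# Pareto-set genomes: all r_i equal and all theta_i equal, i.e. regular
# octagons, the encoding's closest approximation of circles.
rs = np.linspace(0.0, 1.0, 10)
ths = np.linspace(-1.0, 1.0, 10)
pareto = np.array([np.r_[np.full(8, r), np.full(8, t)] for r in rs for t in ths])
print(pareto.shape)  # (100, 16)
```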
Figure 5: The ground truth Pareto set is shown over the entire parameter
range, with negative as well as positive values for the radial deformation.
Bottom left: closeness to Pareto set, measured as pixel errors. The six
figures on the right show example solution sets for low and high neutrality.
Part of the resulting Pareto set is shown in Fig. 5. The distance to the
Pareto set is measured in phenotypic space, by measuring the smallest pixel
error, the sum of pixel-wise differences, between a solution and the Pareto
set. We see that a number of solutions in VE and RLS are close to the
Pareto set (Fig. 5 bottom left). Example results with low and high neutrality
are shown on the right. Solutions that are close to the Pareto set are shown
in the brightest green color. This is evidence for the second half of the
second hypothesis. VE again seems to be more robust w.r.t. genetic neutrality,
as it finds more solutions close to the Pareto set in high-neutrality domains
(bottom row) than RLS.
### 5.2 Phenotypic Diversity without Domain Knowledge
Up to this point we have used domain knowledge to construct a phenotypic
niching space with VE. Intuitively, the area and circumference seem like good
indicators for phenotypic differences. But this comparison between QD and MMO
is not completely fair, as the latter does not get any domain information. On
the other hand, the features used in QD might not be the most diversifying.
Figure 6: AutoVE. Generating phenotypic features with an autoencoder. A random
set of genomes is created (0), their phenotypes are calculated (1) and used as
a training set for an autoencoder (2). The autoencoder can now be used to
predict phenotypic features of new solutions (3), which is used to fill the
archive (4), after which the elite solutions are extracted from the archive
(5) and used to retrain the autoencoder.
We remove the domain knowledge from QD and construct a phenotypic niching
space by using a well known dimensionality reduction technique to map the
phenotypes to a latent space, as was done in [14, 2]. To our best knowledge,
this data driven phenotypic niching approach, which we name Auto-Voronoi-
Elites (AutoVE), has never been applied to shape optimization. An initial set
of genomes, drawn from a quasi-random, space-filling Sobol sequence [19] and
expressed into their phenotypes, is used to train a convolutional autoencoder
(cAE) (see Fig. 6). The bottleneck in the cAE is a compressed, latent space
that assigns every phenotype to a coordinate tuple. The encoder predicts these
coordinates for new shapes in the latent space, which are used as phenotypic
features. QD then searches for phenotypes that expand and improve the archive. The
cAE is retrained with the new samples. The cAE consists of two convolutional
layers in the encoder and four transposed convolutional layers in the decoder.
We set the filter size to three pixels, the stride to two pixels, and the
number of filters to eight. The cAE is trained using ADAM [10] with a learning
rate of 0.001, 350 training epochs, and a mean-squared-error loss function.
Latent coordinates are normalized between 0 and 1. The number of generations
(1024) is divided over two iterations for AutoVE and the number of latent
dimensions is set to two (to compare with manual VE), five or ten.
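A minimal PyTorch sketch of such a cAE follows (two strided convolutional layers in the encoder, four transposed-convolution layers in the decoder, eight 3x3 filters with stride 2, ADAM with learning rate 0.001, MSE loss, as stated above). The linear bottleneck mapping, the decoder's 4x4 starting resolution, and the ReLU/sigmoid activations are our guesses where the text is silent:

```python
import torch
import torch.nn as nn

class ConvAE(nn.Module):
    """Convolutional autoencoder for 64x64 binary phenotypes; the
    bottleneck activations serve as phenotypic features for AutoVE."""
    def __init__(self, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(8, 8, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
            nn.Linear(8 * 16 * 16, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 8 * 4 * 4),
            nn.Unflatten(1, (8, 4, 4)),
            nn.ConvTranspose2d(8, 8, 3, 2, 1, output_padding=1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose2d(8, 8, 3, 2, 1, output_padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(8, 8, 3, 2, 1, output_padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(8, 1, 3, 2, 1, output_padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        z = self.encoder(x)          # latent coordinates = phenotypic features
        return self.decoder(z), z

model = ConvAE(latent_dim=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # ADAM, lr 0.001 as in the text
loss_fn = nn.MSELoss()
# training loop (350 epochs): recon, z = model(batch); loss = loss_fn(recon, batch)
```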
Figure 7: Phenotypic diversity and fitness of manually crafted features (VE)
compared to using an autoencoder (AutoVE) with 2, 5 or 10 latent dimensions.
Fig. 7 shows that the two-dimensional manual and autoencoded phenotypic space
(AutoVE 2D) produce similar diversity, whereby the quality of solutions from
AutoVE 2D is higher. The higher-dimensional latent spaces increase the
solution set diversity at the cost of fitness. This is to be expected, as
lower-fitness optima are protected in their own niches. Finally, the diversity
of higher-dimensional AutoVE is around 50% higher than any of the other tested
methods.
## 6 Conclusion
The main contributions of this work are as follows: a domain was introduced
that allows comparing three different diversity paradigms; a case was made to
measure diversity in phenotypic rather than genetic space; the hypothesis that
QD is less sensitive to genetic neutrality than MMO was confirmed; the
hypothesis that while the diversity of solutions sets of QD and RLS is higher
than that of MOO, they also find some solutions close to the ground truth
Pareto set, was confirmed; we showed that phenotypic diversity in QD is higher
than MMO and MOO. Furthermore, we introduced VE, a simpler and self-expanding
version of QD. We also used an autoencoder to discover phenotypic features in
a shape optimization problem, showing that we do not need to manually
predefine features to get a highly diverse solution set, allowing us to fairly
compare QD to MOO and MMO. Using an autoencoder produces higher diversity than
manually defined features, making AutoVE a strong choice for high diversity
multi-solution optimization.
Since all paradigms have their strengths and weaknesses, we propose a guide
for when to use which approach. MOO should be used when you want to optimize
all the criteria and want to know the trade-off solutions between those
criteria. MMO is appropriate when you have a non-neutral, bijective encoding,
when you have a single criterion you want to optimize for, or when you want to
perform a gradient-based, quasi-Newton, or (direct) evolutionary local search
to refine local optima. We cannot easily do this in QD due to the effect of
neutrality that allows a search to “jump out of” a phenotypic niche. QD should
be used if you have some criteria where you are less determined about whether
to optimize for them, for example during the first phase of a design process.
Some representatives from the Pareto set will still be discovered. When you
are interested in the largest diversity of solutions and are more willing to
get some solutions with lower fitness than when using MMO, QD is the better
alternative. One of the biggest strengths of QD is the possibility to
understand relationships between features or even to discover features
automatically.
Some research effort should be focused on hybridization. MOO and QD are
connected, as the boundary of valid solutions in the phenotypic archive is
close to the Pareto front, yet there is room for improvement. Connecting MMO
and QD means to use a local search method in QD, which needs to overcome the
genetic neutrality problem. We cannot search close to a solution in genetic
space and expect newly created solutions to be close in phenotypic space.
We gave insights about different variations of diversity and when and where to
apply them, depending on whether one is most interested in trade-offs between
criteria, increasing diversity while maximizing fitness, or maximizing
diversity while finding high-performing solutions in a manually defined or
automatically extracted phenotypic space. It is often easy to manually define
two or three phenotypic descriptors, but human imagination can run out of
options quickly. Automatic discovery of phenotypic features is a more
attractive option for increasing solution diversity. Real world multi-solution
optimization and understanding solution diversity are important steps towards
increasing the efficacy and efficiency at which engineers solve problems.
## References
* [1] Basto-Fernandes, V., Yevseyeva, I., Emmerich, M.: A survey of diversity-oriented optimization. EVOLVE 2013-A Bridge between Probability, Set Oriented Numerics, and Evolutionary Computing pp. 101–109 (2013)
* [2] Cully, A.: Autonomous skill discovery with quality-diversity and unsupervised descriptors. In: Proceedings of the Genetic and Evolutionary Computation Conference. pp. 81–89 (2019)
* [3] Cully, A., Clune, J., Tarapore, D., Mouret, J.B.: Robots that can adapt like animals. Nature 521(7553), 503–507 (2015)
* [4] Cully, A., Demiris, Y.: Quality and diversity optimization: A unifying modular framework. IEEE Transactions on Evolutionary Computation 22(2), 245–259 (2017)
* (5) Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2), 182–197 (2002)
* [6] DeJong, K.: Analysis of the behavior of a class of genetic adaptive systems. Dept. Computer and Communication Sciences, University of Michigan, Ann Arbor (1975)
* [7] Hamming, R.W.: Error detecting and error correcting codes. The Bell system technical journal 29(2), 147–160 (1950)
* [8] Holland, J.H.: Adaptation in natural and artificial systems. MIT press (1975)
* [9] Hu, T., Payne, J.L., Banzhaf, W., Moore, J.H.: Robustness, evolvability, and accessibility in linear genetic programming. In: European Conference on Genetic Programming. pp. 13–24. Springer (2011)
* [10] Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)
* [11] Lehman, J., Stanley, K.O.: Abandoning objectives: Evolution through the search for novelty alone. Evolutionary computation 19(2), 189–223 (2011)
* [12] Lehman, J., Stanley, K.O.: Evolving a diversity of virtual creatures through novelty search and local competition. In: Proceedings of the 13th annual conference on Genetic and evolutionary computation. pp. 211–218 (2011)
* [13] Li, M., Yao, X.: Quality evaluation of solution sets in multiobjective optimisation: A survey. ACM Computing Surveys (CSUR) 52(2), 1–38 (2019)
* [14] Meyerson, E., Lehman, J., Miikkulainen, R.: Learning behavior characterizations for novelty search. GECCO 2016 - Proceedings of the 2016 Genetic and Evolutionary Computation Conference pp. 149–156 (2016)
* [15] Pošík, P., Huyer, W.: Restarted local search algorithms for continuous black box optimization. Evolutionary Computation 20(4), 575–607 (2012)
* [16] Preuss, M.: Improved topological niching for real-valued global optimization. In: European Conference on the Applications of Evolutionary Computation. pp. 386–395. Springer (2012)
* [17] Pugh, J.K., Soros, L.B., Stanley, K.O.: Quality diversity: A new frontier for evolutionary computation. Frontiers in Robotics and AI 3, 40 (2016)
* [18] Pugh, J.K., Soros, L.B., Szerlip, P.A., Stanley, K.O.: Confronting the challenge of quality diversity. In: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation. pp. 967–974 (2015)
* [19] Sobol’, I.M.: On the distribution of points in a cube and the approximate evaluation of integrals. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki 7(4), 784–802 (1967)
* [20] Solow, A.R., Polasky, S.: Measuring biological diversity. Environmental and Ecological Statistics 1(2), 95–103 (1994)
* [21] Tian, Y., Cheng, R., Zhang, X., Jin, Y.: Platemo: A matlab platform for evolutionary multi-objective optimization. IEEE Computational Intelligence Magazine 12(4), 73–87 (2017)
* [22] Tian, Y., Cheng, R., Zhang, X., Li, M., Jin, Y.: Diversity assessment of multi-objective evolutionary algorithms: Performance metric and benchmark problems [research frontier]. IEEE Computational Intelligence Magazine 14(3), 61–74 (2019)
* [23] Törn, A., Viitanen, S.: Topographical global optimization. Recent advances in global optimization pp. 384–398 (1992)
* [24] Vassiliades, V., Chatzilygeroudis, K., Mouret, J.B.: Comparing multimodal optimization and illumination. In: Proceedings of the Genetic and Evolutionary Computation Conference Companion. pp. 97–98 (2017)
* [25] Vassiliades, V., Chatzilygeroudis, K., Mouret, J.B.: Using centroidal voronoi tessellations to scale up the multidimensional archive of phenotypic elites algorithm. IEEE Transactions on Evolutionary Computation 22(4), 623–630 (2017)
* [26] Wales, D.J., Doye, J.P.: Global optimization by basin-hopping and the lowest energy structures of lennard-jones clusters containing up to 110 atoms. The Journal of Physical Chemistry A 101(28), 5111–5116 (1997)
* [27] Wang, H., Jin, Y., Yao, X.: Diversity assessment in many-objective optimization. IEEE transactions on cybernetics 47(6), 1510–1522 (2016)
* [28] Wessing, S., Preuss, M.: On multiobjective selection for multimodal optimization. Computational Optimization and Applications 63(3), 875–902 (2016)
fixing the form of the soft anomalous dimension at this order. As was
expected, three loop corrections to the dipole formula (7.12) depend
exclusively on CICRs. The structure of the result is of direct relevance to
the functions we shall encounter at four loops, and therefore we shall review
it below in the context of the general form the anomalous dimension takes at
four loops.
Taking into account the complete set of connected colour structures complying
with the non-Abelian exponentiation theorem Gatheral1983ExponentiationOE ;
Frenkel1984NonabelianEE ; Gardi:2013ita , Becher and Neubert wrote down
Becher:2019avh a general parametrisation – with unknown kinematic functions –
which satisfied the aforementioned collinear anomaly constraints of eq. (7.3)
along with Bose symmetry (previous work along these lines has been done, e.g.,
in refs. Gardi:2009qi ; Becher:2009qa ; Dixon:2009ur ; Ahrens:2012qz ;
Almelid:2017qju ). The contributions appearing through four loops can be
classified as follows:
$\displaystyle\begin{split}\mathbf{\Gamma}_{n}\left(\{s_{ij}\},\lambda,\alpha_{s}(\lambda^{2})\right)&=\mathbf{\Gamma}^{\rm dip.}_{n}\left(\{s_{ij}\},\lambda,\alpha_{s}\right)+\mathbf{\Gamma}_{n,\rm 4T-3L}\left(\alpha_{s}\right)+\mathbf{\Gamma}_{n,\rm 4T-4L}\left(\{\beta_{ijkl}\},\alpha_{s}\right)\\ &+\mathbf{\Gamma}_{n,\rm Q4T-2,3L}\left(\{s_{ij}\},\lambda,\alpha_{s}\right)+\mathbf{\Gamma}_{n,\rm Q4T-4L}\left(\{\beta_{ijkl}\},\alpha_{s}\right)\\ &+\mathbf{\Gamma}_{n,\rm 5T-4L}\left(\{\beta_{ijkl}\},\alpha_{s}\right)+\mathbf{\Gamma}_{n,\rm 5T-5L}\left(\{\beta_{ijkl}\},\alpha_{s}\right)+{\cal O}(\alpha_{s}^{5})\,,\end{split}$ (7.13)
where the subscript for each term includes, in addition to the number of
partons $n$, the following attributes of each (connected) colour factor:
${\mathrm{Q}}$ for a quartic-Casimir-related contribution; a number followed
by ${\mathrm{T}}$ to indicate the number of generators; and a number followed
by ${\mathrm{L}}$ to indicate the number of distinct lines that interact.
The first term in eq. (7.13) is the sum-over-dipoles formula, which is the
complete result for $\mathbf{\Gamma}_{n}$ to two loops; the second and third
terms start contributing at three loops, and all others at four loops.
Notice that this formula includes terms with explicit dependence on
the scale (the first and the fourth) as well as terms that depend exclusively
on CICRs (all others). The former can be identified with those that constitute
a particular solution satisfying eq. (7.3) (through four loops) and are
strictly linear in $l_{ij}$, while the latter involve higher transcendental
functions of the CICRs. The last two terms, consisting of five generators, are
excluded based on the argument of ref. Vladimirov:2017ksc . We retain them
here to see, independently of the latter argument, what constraints emerge
from the Regge-limit analysis. Let us now introduce explicitly each term of
eq. (7.13) in turn, where we adopt much of the notation from ref.
Becher:2019avh .
The second and third terms in (7.13) start at three loops, where they were
explicitly computed Almelid:2015jia . These terms involve the colour and
kinematic degrees of freedom of subsets of three or four partons, they are
non-planar and depend exclusively on CICRs. They read:
$\mathbf{\Gamma}_{n,\rm 4T-3L}\left(\alpha_{s}\right)=f(\alpha_{s})\sum_{(i,j,k)}\bm{{\cal T}}_{iijk}\,,$ (7.14)
$\mathbf{\Gamma}_{n,\rm 4T-4L}\left(\{\beta_{ijkl}\},\alpha_{s}\right)=\sum_{(i,j,k,l)}\bm{{\cal T}}_{ijkl}\,{\cal F}(\beta_{ijlk},\beta_{iklj};\alpha_{s})\,,$ (7.15)
where the summation is over tuples (with no restriction on the relative order
of indices). The colour structure involves four generators:
$\bm{{\cal T}}_{ijkl}\equiv f^{ade}f^{bce}\{{\bf T}_{i}^{a},{\bf T}_{j}^{b},{\bf T}_{k}^{c},{\bf T}_{l}^{d}\}_{+}\,.$ (7.16)
Notice that the curly brackets represent symmetrisation, defined as
$\{{\bf T}_{i}^{a_{1}},{\bf T}_{j}^{a_{2}},\dots,{\bf T}_{l}^{a_{n}}\}_{+}\equiv\frac{1}{n!}\sum_{\pi}{\bf T}_{i}^{a_{\pi(1)}}{\bf T}_{j}^{a_{\pi(2)}}\cdots{\bf T}_{l}^{a_{\pi(n)}}\,,$ (7.17)
where the sum is over all permutations of the indices. The symmetrisation only
acts on generators attached to the same line, as those attached to distinct
lines commute. For example,
$\bm{{\cal T}}_{iijk}=f^{ade}f^{bce}\{{\bf T}_{i}^{a},{\bf T}_{i}^{b}\}_{+}{\bf T}_{j}^{c}{\bf T}_{k}^{d}=\frac{1}{2}f^{ade}f^{bce}\left({\bf T}_{i}^{a}{\bf T}_{i}^{b}+{\bf T}_{i}^{b}{\bf T}_{i}^{a}\right){\bf T}_{j}^{c}{\bf T}_{k}^{d}\,.$
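Note also that $\bm{{\cal T}}_{ijkl}$ is antisymmetric under the exchange $i\leftrightarrow l$ (and likewise $j\leftrightarrow k$), a property used in the Regge-limit analysis of section 7.2.1 below. A one-line check, relabelling the contracted adjoint indices $a\leftrightarrow d$ and using the symmetry of the symmetrised product in its arguments:
$\bm{{\cal T}}_{ljki}=f^{ade}f^{bce}\{{\bf T}_{l}^{a},{\bf T}_{j}^{b},{\bf T}_{k}^{c},{\bf T}_{i}^{d}\}_{+}=f^{dae}f^{bce}\{{\bf T}_{i}^{a},{\bf T}_{j}^{b},{\bf T}_{k}^{c},{\bf T}_{l}^{d}\}_{+}=-\bm{{\cal T}}_{ijkl}\,.$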
The functions $f(\alpha_{s})$ and ${\cal
F}(\beta_{ijlk},\beta_{iklj};\alpha_{s})$ have a perturbative expansion
$f(\alpha_{s})=\left(\frac{\alpha_{s}}{\pi}\right)^{3}f^{(3)}+\left(\frac{\alpha_{s}}{\pi}\right)^{4}\sum_{R}f^{(4)}_{R}+O(\alpha_{s}^{5}),$
(7.18)
and
${\cal
F}(\beta_{ijlk},\beta_{iklj};\alpha_{s})=\left(\frac{\alpha_{s}}{\pi}\right)^{3}{\cal
F}^{(3)}(\beta_{ijlk},\beta_{iklj})+\left(\frac{\alpha_{s}}{\pi}\right)^{4}\sum_{R}{\cal
F}^{(4)}_{R}(\beta_{ijlk},\beta_{iklj})+O(\alpha_{s}^{5})\,,$ (7.19)
where $f^{(\ell)}$ are transcendental constants while ${\cal F}^{(\ell)}$ are
transcendental functions of the CICRs defined in eq. (7.6). At four loops,
these two functions involve a sum over the gauge group representations $R$,
which we write explicitly in eqs. (7.18) and (7.19). This is a general feature
of the colour structures appearing in the anomalous dimension: there is an
implicit sum over the representations, once they are considered at a loop
order higher than when they first appear. This is a manifestation of the fact
that any structure first appears owing to a purely gluonic diagram, and, as
such, it has a universal nature, being entirely independent of the matter
contents of the theory.
The functions $f(\alpha_{s})$ and ${\cal
F}(\beta_{ijlk},\beta_{iklj};\alpha_{s})$ have been calculated at three loops
Almelid:2015jia . Expressing the CICRs in terms of variables $z_{ijkl}$ and
$\bar{z}_{ijkl}$:
$\rho_{ijkl}=z_{ijkl}\bar{z}_{ijkl}\,,\qquad\rho_{ilkj}=(1-z_{ijkl})(1-\bar{z}_{ijkl})\,,$ (7.20)
the function ${\cal F}(\alpha_{s})$ reads
${\cal F}^{(3)}(\beta_{ijlk},\beta_{iklj})=\frac{1}{32}\Big{(}F(1-z_{ijlk})-F(z_{ijlk})\Big{)}\,,$ (7.21)
where in turn $F(z)$ is a function of single-valued harmonic polylogarithms
Brown:2004ugm ; Dixon:2012yy ; Brown:2013gia ; Schnetz:2013hqa :
$F(z)\equiv{\cal L}_{10101}(z)+2\zeta_{2}\Big{[}{\cal L}_{001}(z)+{\cal L}_{100}(z)\Big{]}\,,$ (7.22)
while
$f^{(3)}=\frac{1}{4}\left(\zeta_{5}+2\zeta_{2}\zeta_{3}\right)\,.$ (7.23)
The other terms in eq. (7.13) start at four loops. The quartic term involving
four generators with attachments to two and three legs can be expressed as
Becher:2019avh
$\mathbf{\Gamma}_{n,\rm
Q4T-2,3L}\left(\\{s_{ij}\\},\lambda,\alpha_{s}\right)=-\frac{1}{2}\sum_{R}g_{R}(\alpha_{s})\bigg{[}\sum_{(i,j)}\,\big{(}\bm{{\cal
D}}_{iijj}^{R}+2\bm{{\cal D}}_{iiij}^{R}\big{)}l_{ij}+\sum_{(i,j,k)}\bm{{\cal
D}}_{ijkk}^{R}\,l_{ij}\bigg{]},$ (7.24)
where the colour operator is defined as
$\bm{{\cal D}}_{ijkl}^{R}\equiv\frac{1}{4!}\sum_{\sigma\in
S_{4}}\operatorname{Tr}_{R}\left(T^{\sigma(a)}T^{\sigma(b)}T^{\sigma(c)}T^{\sigma(d)}\right)\,{\bf
T}_{i}^{a}{\bf T}_{j}^{b}{\bf T}_{k}^{c}{\bf T}_{l}^{d}\,.$ (7.25)
Similarly to the dipole term, $\mathbf{\Gamma}_{n,\rm Q4T-2,3L}$ is part of
the inhomogeneous solution of eq. (7.3): upon substituting eq. (7.24) into the
left-hand side of eq. (7.3) and using colour conservation, one obtains the
quartic Casimir component of $\Gamma^{\rm{cusp}}_{i}(\alpha_{s})$, where
$\bm{\mathcal{D}}^{R}_{iiii}=\frac{d_{RR_{i}}}{N_{R_{i}}}=\frac{1}{4!}\sum_{\sigma\in\mathcal{S}_{4}}\mathrm{Tr}_{R}\left[T^{\sigma(a)}T^{\sigma(b)}T^{\sigma(c)}T^{\sigma(d)}\right]\mathbf{T}^{a}_{i}\mathbf{T}^{b}_{i}\mathbf{T}^{c}_{i}\mathbf{T}^{d}_{i}\,,$
(7.26)
and where $g_{R}(\alpha_{s})$ is the function multiplying
$\frac{d_{RR_{i}}}{N_{R_{i}}}$ in eq. (7.5). As discussed in section 2.3, the
function $g_{R}(\alpha_{s})$ begins at four loops and is known at this order
Henn:2019swt ; Huber:2019fxe ; vonManteuffel:2020vjv . The result is quoted in
eq. (B.3). We provide a more detailed discussion concerning the relation
between $\mathbf{\Gamma}_{n}(\alpha_{s})$ and
$\Gamma^{\rm{cusp}}_{i}(\alpha_{s})$ in appendix F.
The remaining terms in (7.13) are part of the solution to the homogeneous
equation associated to eq. (7.3); therefore, the functions appearing in these
terms depend exclusively on CICRs. The term ${\mathrm{Q4T-4L}}$ involves the
quartic Casimir operator as well, and reads (owing to the complete permutation
symmetry of the colour factor $\bm{{\cal D}}_{ijkl}^{R}$ with respect to
$i,j,k$ and $l$, the kinematic function ${\cal G}_{R}$ admits a similar
symmetry; consequently, ${\cal G}_{R}$ may be factored out of the sum over the
permutations of a given subset of indices)
$\mathbf{\Gamma}_{n,\rm
Q4T-4L}\left(\\{\beta_{ijkl}\\},\alpha_{s}\right)=\sum_{R}\sum_{(i,j,k,l)}\\!\bm{{\cal
D}}_{ijkl}^{R}\,{\cal G}_{R}(\beta_{ijlk},\beta_{iklj};\alpha_{s}).$ (7.27)
Finally, there are then two terms involving five colour generators: they are
given by
$\mathbf{\Gamma}_{n,\rm 5T-4L}\left(\{\beta_{ijkl}\},\alpha_{s}\right)=\sum_{(i,j,k,l)}\bm{{\cal T}}_{ijkli}\,{\cal H}_{1}(\beta_{ijlk},\beta_{iklj};\alpha_{s})\,,$ (7.28a)
$\mathbf{\Gamma}_{n,\rm 5T-5L}\left(\{\beta_{ijkl}\},\alpha_{s}\right)=\sum_{(i,j,k,l,m)}\bm{{\cal T}}_{ijklm}\,{\cal H}_{2}(\beta_{ijkl},\beta_{ijmk},\beta_{ikmj},\beta_{jiml},\beta_{jlmi};\alpha_{s})\,,$ (7.28b)
where the colour structure is defined as
$\bm{{\cal T}}_{ijklm}=if^{adf}f^{bcg}f^{efg}\{{\bf T}_{i}^{a},{\bf T}_{j}^{b},{\bf T}_{k}^{c},{\bf T}_{l}^{d},{\bf T}_{m}^{e}\}_{+}\,.$ (7.29)
The functions ${\cal G}_{R}(\alpha_{s})$, ${\cal H}_{1}(\alpha_{s})$ and
${\cal H}_{2}(\alpha_{s})$ start at four loops, and have not yet been
computed. Similarly, the four-loop contributions to the functions
$f(\alpha_{s})$ and ${\cal F}(\alpha_{s})$ are to date unknown. In all these
cases, the structure of these functions can be constrained by analysing
amplitudes in specific kinematic limits, where additional information can be
obtained. The collinear limit offers one such instance Becher:2009qa ;
Dixon:2009ur ; Almelid:2017qju ; Becher:2019avh , and we briefly summarise
below what constraints it provides based on ref. Becher:2019avh , before
returning to the Regge limit.
It is well known that when two particles in either the initial or final state
become collinear, the amplitude $\mathcal{M}_{n}$ factorises into a splitting
amplitude Sp and the parent amplitude $\mathcal{M}_{n-1}$ Berends:1988zn ;
Mangano_1991 ; Bern_1995 ; Kosower_1999 ; Catani:2011st , with
$\lim_{p_{1}||p_{2}}\mathcal{M}_{n}\left(\\{p_{1},\dots,p_{n}\\},\lambda,\alpha_{s}\right)=\textbf{Sp}\left(\\{p_{1},p_{2}\\},\lambda,\alpha_{s}\right)\mathcal{M}_{n-1}\left(\\{p_{1}+p_{2},p_{3},\dots
p_{n}\\},\lambda,\alpha_{s}\right).$ (7.30)
The splitting amplitude has an anomalous dimension defined as
$\frac{d}{d\log\lambda}\textbf{Sp}\left(\\{p_{1},p_{2}\\},\lambda,\alpha_{s}\right)={\bf\Gamma}_{\text{Sp}}\left(\\{p_{1},p_{2}\\},\lambda,\alpha_{s}\right)\textbf{Sp}\left(\\{p_{1},p_{2}\\},\lambda,\alpha_{s}\right)$
(7.31)
which, just like the function Sp itself, is independent of the momenta and
colour generators of the particles that are not collinear. Performing infrared
factorisation of each of the ingredients in eq. (7.30), one obtains
Becher:2009qa ; Dixon:2009ur
${\bf\Gamma}_{\bf
Sp}\left(\\{p_{1},p_{2}\\},\lambda,\alpha_{s}\right)=\underset{p_{1}||p_{2}}{\text{lim}}{\bf\Gamma}_{n}\left(\\{p_{1},\dots,p_{n}\\},\lambda,\alpha_{s}\right)-{\bf\Gamma}_{n-1}\left(\\{p_{1}+p_{2},p_{3},\dots,p_{n}\\},\lambda,\alpha_{s}\right).$
(7.32)
This provides a non-trivial constraint on ${\bf\Gamma}_{n}$ itself: the
splitting amplitude anomalous dimension on the left-hand side only depends on
the two particles that become collinear, hence so must the right-hand side of
eq. (7.32). This translates into concrete constraints for the functions in eq.
(7.13). The functions $f(\alpha_{s})$ and ${\cal F}(\alpha_{s})$ are related
by the condition Becher:2009qa ; Dixon:2009ur ; Almelid:2017qju ;
Becher:2019avh
$\underset{\beta_{12ij}\to-\infty}{\text{lim}}{\cal
F}(\beta_{12ij},0;\alpha_{s})=\frac{f(\alpha_{s})}{2},$ (7.33)
which, in particular, provide a constraint for the coefficients $f^{(4)}_{R}$
and ${\cal F}^{(4)}_{R}$. Similarly, the functions ${\cal G}_{R}(\alpha_{s})$
and $g_{R}(\alpha_{s})$ are related by Becher:2019avh
$\underset{\beta_{12ij}\to-\infty}{\text{lim}}{\cal
G}_{R}(\beta_{12ij},0;\alpha_{s})=-\frac{g_{R}(\alpha_{s})}{12}\,\beta_{12ij}.$
(7.34)
Furthermore, one has Becher:2019avh
$\underset{\beta_{12ij}\to-\infty}{\text{lim}}{\cal
H}_{1}(\beta_{12ij},0;\alpha_{s})=0\,.$ (7.35)
Last, we have constraints from the high-energy limit, which is of course the
topic of this section. Given our explicit calculation of $2\to 2$ parton
scattering in this limit, we are able to determine the four-loop contribution
to the functions appearing in eq. (7.13) in this limit. In order to proceed we
need first to specialise eq. (7.13) to the case of two-parton scattering, and
then take the high-energy limit. In this kinematic configuration no
constraints can be obtained for ${\cal H}^{(4)}_{2}$, which involves at least
five partons. However, we are able to obtain constraints for ${\cal F}^{(4)}$
and ${\cal G}_{R}^{(4)}$, as well as ${\cal H}^{(4)}_{1}$.
### 7.2 The soft anomalous dimension in the high-energy limit
We now take the general form of the soft anomalous dimension as written in eq.
(7.13), and specialise it to the case of $2\to 2$ particle scattering in the
high-energy limit. In short, the procedure is as follows.
* In eq. (7.13) we drop the contributions which only appear for more than four external partons, i.e., we do not consider ${\cal H}_{2}$.
* We express the colour operators of eq. (7.13) in what we call _a Regge-limit basis_, i.e., in terms of a minimal set of colour operators made out of ${\bf T}_{t}^{2}$, ${\bf T}_{s-u}^{2}$, their commutators and quartic Casimir operators, as discussed in section 4.2. (By using the matrix expressions of $\mathbf{T}_{t}^{2}$ and $\mathbf{T}_{s-u}^{2}$, obtained by specialising to projectile and target states either in the fundamental or in the adjoint representation Caron-Huot:2017fxr , we verified that there are no linear relations among the colour structures appearing in the reduced amplitude of eq. (5.3).) Notice that, in particular, this will naturally split the terms in eq. (7.13) into even and odd signature contributions.
* We specialise the kinematic functions appearing in eq. (7.13) to the high-energy limit. First, owing to Bose symmetry, each kinematic function will acquire a definite signature symmetry, matching the symmetry of the corresponding colour operator it multiplies. Furthermore, each function will be implicitly understood as an expansion in the high-energy logarithm $L$ defined in eq. (2.9).
We expand the soft anomalous dimension in powers of the strong coupling,
according to
$\mathbf{\Gamma}_{n}(\\{s_{ij}\\},\lambda,\alpha_{s})=\sum_{\ell}\left(\frac{\alpha_{s}}{\pi}\right)^{\ell}\mathbf{\Gamma}^{(\ell)}_{n}(\\{s_{ij}\\},\lambda)\,,$
(7.36)
where $\ell$ is the loop order. In what follows we are interested in obtaining
constraints on the coefficient functions appearing in
$\mathbf{\Gamma}^{(4)}_{4}$ by using the results for the NLL, ${\cal
O}(\alpha_{s}^{4}L^{3})$, and the NNLL, ${\cal O}(\alpha_{s}^{4}L^{2})$, in
the anomalous dimension, summarised in section 6.5. The four-loop order
coefficients $\gamma_{K,R}^{(4)}$, $g_{R}^{(4)}$, $f^{(4)}_{R}$ are associated
exclusively with linear and kinematically-independent contributions, ${\cal
O}(L^{1})$ and ${\cal O}(L^{0})$, and we will not consider them in this
section. Their high-energy limit is considered instead in appendix G, and we
will return to analyse the resulting ${\cal O}(L^{1})$ terms in section 7.4.
This leaves the terms proportional to ${\cal F}$, ${\cal G}_{R}$ and ${\cal
H}_{1}$, i.e. we consider
$\mathbf{\Gamma}_{4}^{(4)}\left(\{s_{ij}\},\lambda\right)=\mathbf{\Gamma}_{\rm 4T-4L}^{(4)}\left(\{\beta_{ijkl}\}\right)+\mathbf{\Gamma}_{\rm Q4T-4L}^{(4)}\left(\{\beta_{ijkl}\}\right)+\mathbf{\Gamma}_{\rm 5T-4L}^{(4)}\left(\{\beta_{ijkl}\}\right)+\mathcal{O}(L)\,,$ (7.37)
where for individual terms we drop the subscript indicating the number of
partons, since we focus exclusively on the $n=4$ case below. The remaining
subscripts are the defining characteristics of the colour operator as in eq.
(7.13).
#### 7.2.1 Four-generator four-line term ($\mathbf{4T}$$-$$\mathbf{4L}$)
We start by considering the first term in eq. (7.37), i.e.
$\mathbf{\Gamma}_{\rm{4T-4L}}(\{\beta_{ijkl}\},\alpha_{s})\equiv\sum_{(i,j,k,l)}f^{ade}f^{bce}{\bf T}_{i}^{a}{\bf T}_{j}^{b}{\bf T}_{k}^{c}{\bf T}_{l}^{d}\,{\cal F}(\beta_{ijlk},\beta_{iklj};\alpha_{s})\,.$ (7.38)
An expression for this term that is specialised to two-parton scattering in
the high-energy limit has been discussed already in refs. Caron-Huot:2017fxr ;
Almelid:2017qju , however, we provide here a short derivation for pedagogical
purposes, in order to introduce useful notation for the elaboration of the
other terms in eq. (7.37).
The colour structure in eq. (7.38) is antisymmetric under the exchange of
$i\leftrightarrow l$ or $j\leftrightarrow k$. Due to Bose symmetry, the
function ${\cal F}(\alpha_{s})$ must be antisymmetric under the exchange of
the same indices. Under this exchange one has ${\cal
F}(\beta_{ijlk},\beta_{iklj};\alpha_{s})=-{\cal
F}(\beta_{iklj},\beta_{ijlk};\alpha_{s})$. Using this property, we write eq.
(7.38) for the case of two-parton scattering as
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm 4T-4L}(\{\beta_{ijkl}\},\alpha_{s})=\,&8{\bf T}_{1}^{a}{\bf T}_{2}^{b}{\bf T}_{3}^{c}{\bf T}_{4}^{d}\bigg{[}f^{abe}f^{cde}{\cal F}(\beta_{1324},\beta_{1423};\alpha_{s})\\ &+f^{ace}f^{bde}{\cal F}(\beta_{1234},\beta_{1432};\alpha_{s})\\ &+f^{ade}f^{bce}{\cal F}(\beta_{1243},\beta_{1342};\alpha_{s})\bigg{]}\,.\end{split}$ (7.39)
As we have seen, in the high-energy limit, signature symmetry plays a major
role. In eq. (7.39) it can be implemented by considering symmetric and
antisymmetric combinations under the exchange $2\leftrightarrow 3$. This leads
us to introduce the following two functions:
$\displaystyle\begin{split}{\cal F}^{(+)}(\{\beta_{ijkl}\},\alpha_{s})&\equiv\frac{1}{2}\Big{\{}{\cal F}(\beta_{1324},\beta_{1423};\alpha_{s})+{\cal F}(\beta_{1234},\beta_{1432};\alpha_{s})\Big{\}},\\ {\cal F}^{(-)}(\{\beta_{ijkl}\},\alpha_{s})&\equiv\frac{1}{2}\Big{\{}{\cal F}(\beta_{1234},\beta_{1432};\alpha_{s})-{\cal F}(\beta_{1324},\beta_{1423};\alpha_{s})\Big{\}}+{\cal F}(\beta_{1243},\beta_{1342};\alpha_{s}),\end{split}$ (7.40)
such that eq. (7.39) becomes
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm 4T-4L}(\{\beta_{ijkl}\},\alpha_{s})=\,&8{\bf T}_{1}^{a}{\bf T}_{2}^{b}{\bf T}_{3}^{c}{\bf T}_{4}^{d}\bigg{[}\left(f^{abe}f^{cde}+f^{ace}f^{bde}\right){\cal F}^{(+)}(\{\beta_{ijkl}\},\alpha_{s})\\ &+f^{ade}f^{bce}{\cal F}^{(-)}(\{\beta_{ijkl}\},\alpha_{s})\bigg{]}\,.\end{split}$ (7.41)
Due to Bose symmetry, the symmetry of ${\cal F}^{(\pm)}$ must be mirrored into
the colour structure. This becomes evident when expressing the colour
operators in eq. (7.41) in our Regge-limit basis. Using the colour algebra
identity of eq. (4.34), i.e. $f^{abc}{\bf T}^{c}=-i[{\bf T}^{a},{\bf T}^{b}]$,
we have for instance
$\displaystyle\begin{split}f^{abe}f^{cde}{\bf T}_{1}^{a}{\bf T}_{2}^{b}{\bf T}_{3}^{c}{\bf T}_{4}^{d}&=-\Big{[}{\bf T}_{1}\cdot{\bf T}_{2},[{\bf T}_{3}\cdot{\bf T}_{4},{\bf T}_{1}\cdot{\bf T}_{3}]\Big{]}\\ &=-\frac{1}{8}\Big{[}{\bf T}_{s}^{2},[{\bf T}_{s}^{2},{\bf T}_{u}^{2}]\Big{]}\\ &=\frac{1}{16}\left(\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}+2\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\right),\end{split}$ (7.42)
where we used the definitions in eqs. (2.17) and (2.19), and any Casimirs
arising vanish in the commutators. With similar steps, the two other colour
operators in eq. (7.41) are written as
$f^{ace}f^{bde}{\bf T}_{1}^{a}{\bf T}_{2}^{b}{\bf T}_{3}^{c}{\bf T}_{4}^{d}=\frac{1}{16}\left(-\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}+2\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\right)\,,$ (7.43a)
$f^{ade}f^{bce}{\bf T}_{1}^{a}{\bf T}_{2}^{b}{\bf T}_{3}^{c}{\bf T}_{4}^{d}=-\frac{1}{8}\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}\,.$ (7.43b)
Inserting the expressions in eqs. (7.42) and (7.43) into eq. (7.41) we get
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm 4T-4L}(\{\beta_{ijkl}\},\alpha_{s})&=2\,{\cal F}^{(+)}(\{\beta_{ijkl}\},\alpha_{s})\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\\ &\quad-{\cal F}^{(-)}(\{\beta_{ijkl}\},\alpha_{s})\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}\,.\end{split}$ (7.44)
It is easy to see that the symmetry properties of ${\cal F}^{(\pm)}$ are
nicely mirrored into the colour structure: the first nested commutator is
signature-even, containing two $\mathbf{T}_{s-u}^{2}$ operators, while the
second is odd, having a single $\mathbf{T}_{s-u}^{2}$.
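This mirroring can be made explicit. The signature exchange $2\leftrightarrow 3$ swaps $\mathbf{T}_{s}^{2}\leftrightarrow\mathbf{T}_{u}^{2}$ while leaving $\mathbf{T}_{t}^{2}$ invariant, so that, writing $\mathbf{T}_{s-u}^{2}=\frac{1}{2}(\mathbf{T}_{s}^{2}-\mathbf{T}_{u}^{2})$ (the convention we assume for eq. (2.19)),
$\mathbf{T}_{t}^{2}\to\mathbf{T}_{t}^{2}\,,\qquad\mathbf{T}_{s-u}^{2}\to-\mathbf{T}_{s-u}^{2}\,;$
each factor of $\mathbf{T}_{s-u}^{2}$ thus contributes one sign flip, rendering the first nested commutator in eq. (7.44) even under signature and the second odd.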
At three loops, using the properties of the variables $z_{ijkl}$ introduced in
(7.20):
$z_{ijkl}=\frac{1}{z_{ikjl}}=1-z_{ilkj}=\frac{z_{ijlk}}{z_{ijlk}-1},$ (7.45)
one can write the functions ${\cal F}^{(\pm)}$ as
$\displaystyle\begin{split}{\cal F}^{(+,3)}\left(\{\beta_{ijkl}\}\right)&=\frac{1}{64}F_{1}(z_{1234}),\\ {\cal F}^{(-,3)}\left(\{\beta_{ijkl}\}\right)&=\frac{1}{64}\Big{(}F_{2}(z_{1234})-F_{3}(z_{1234})\Big{)},\end{split}$ (7.46)
where the functions $F_{1}$, $F_{2}$ and $F_{3}$ have been introduced in ref.
Almelid:2017qju , and read
$\displaystyle\begin{split}F_{1}(z)&\equiv F(1-1/z)-F(1/z)+F(1-z)-F(z),\\ F_{2}(z)&\equiv F(1/z)-F(1-1/z)+F(1/(1-z))-F(z/(z-1)),\\ F_{3}(z)&\equiv F(z)-F(1-z)+F(z/(z-1))-F(1/(1-z))=-F_{1}(z)-F_{2}(z).\end{split}$ (7.47)
Here we consider the four-loop contribution to eq. (7.44). Taking into account
the perturbative expansion introduced in eq. (7.19) one has
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm{4T-4L}}^{(4)}(\{\beta_{ijkl}\})&=2\left(\sum_{R}{\cal F}^{(+,4)}_{R}(\{\beta_{ijkl}\})\right)\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\\ &\quad-\left(\sum_{R}{\cal F}^{(-,4)}_{R}(\{\beta_{ijkl}\})\right)\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]},\end{split}$ (7.48)
where we recall that the sum over representations starts at this order due to
an additional internal loop, which gives rise to either a factor of $C_{A}$ or
$n_{f}T_{F}$, depending on the particles propagating in the loop (here we have
only considered QCD particle types, namely adjoint gluons and fundamental
quarks; the factor will change depending on the gauge theory considered). The
first term in eq. (7.48) is signature even, and the second signature odd. It
is worthwhile recalling that, upon expansion, the soft anomalous dimension in
eq. (7.48) will be multiplied by the odd tree-level amplitude in eq. (2.5):
hence, odd signature in the amplitude corresponds to even signature in the
soft anomalous dimension. Taking this into account, we can already make a few
observations. At NLL accuracy there are only gluonic contributions in the even
amplitude, as calculated in ref. Caron-Huot:2017zfo . Therefore, only ${\cal
F}^{(-,4)}_{A}|_{\rm NLL}$ will be non-zero in eq. (7.48), while ${\cal
F}^{(-,4)}_{F}|_{\rm NLL}=0$. Similarly, the NNLL contribution to the odd
amplitude, first presented in ref. Falcioni:2020lvv and discussed in detail
in section 5 above, is also given in terms of gluonic contributions only.
Following the reasoning above, we expect that ${\cal F}^{(+,4)}_{A}|_{\rm
NNLL}$ may be non-zero and ${\cal F}^{(+,4)}_{F}|_{\rm NNLL}=0$. No
predictions for ${\cal F}^{(-,4)}_{R}|_{\rm NNLL}$ can be made at this stage,
however, given that the even amplitude is still unknown at this logarithmic
accuracy.
#### 7.2.2 Quartic Casimir four-generator four-line term
($\mathbf{Q4T}$$-$$\mathbf{4L}$)
The quartic Casimir term only appears starting at four loops. Restricting to
the case of $2\to 2$ scattering and writing explicitly the colour structure,
eq. (7.27) becomes
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm
Q4T-4L}(\\{\beta_{1234}\\},\alpha_{s})=&\sum_{R}\,{\cal
G}_{R}(\beta_{1243},\beta_{1342};\alpha_{s})\times\\\ &\hskip
56.9055pt\sum_{\sigma\in
S_{4}}\operatorname{Tr}_{R}\left(T^{\sigma(a)}T^{\sigma(b)}T^{\sigma(c)}T^{\sigma(d)}\right){\bf
T}_{1}^{a}{\bf T}_{2}^{b}{\bf T}_{3}^{c}{\bf T}_{4}^{d}\,,\end{split}$ (7.49)
where again there is a sum over the representations $R$ propagating in the
loop. Here we extracted the function ${\cal G}_{R}$ out of the sum over
permutations of the legs $(i,j,k,l)$ using its symmetry: the colour structure
is symmetric under the exchange of any pair of indices due to the symmetrised
trace. Having done that, we performed the sum over permutations $(i,j,k,l)$ on
the colour structure. Because ${\cal
G}_{R}(\beta_{ijlk},\beta_{iklj};\alpha_{s})$ is a completely symmetric
function under permutations, ${\cal G}^{(-)}_{R}=0$, and we can identify
${\cal G}^{(+)}_{R}={\cal G}_{R}$.
In order to conveniently express eq. (7.49), we first introduce some new
colour notation for terms involving a symmetrised trace over four generators,
contracted with the colour charges of numbered partons. The colour structures can be
expressed using the colour-flow channels defined in eq. (2.17), with
$\displaystyle\begin{split}\bm{\mathcal{D}}^{R}_{pppp}\equiv&\,\frac{1}{4!}\sum_{\sigma\in
S_{4}}\operatorname{Tr}_{R}\left(T^{\sigma(a)}T^{\sigma(b)}T^{\sigma(c)}T^{\sigma(d)}\right){\bf
T}_{p}^{a}{\bf T}_{p}^{b}{\bf T}_{p}^{c}{\bf T}_{p}^{d},\end{split}$ (7.50)
where $p\in\\{s,t,u\\}$, for example
$\displaystyle\begin{split}\bm{\mathcal{D}}^{R}_{ssss}=&\,\frac{1}{4!}\sum_{\sigma\in
S_{4}}\operatorname{Tr}_{R}\left(T^{\sigma(a)}T^{\sigma(b)}T^{\sigma(c)}T^{\sigma(d)}\right){\bf
T}_{s}^{a}{\bf T}_{s}^{b}{\bf T}_{s}^{c}{\bf T}_{s}^{d}\\\
=\,&\bm{\mathcal{D}}^{R}_{1111}+4\bm{\mathcal{D}}^{R}_{1112}+6\bm{\mathcal{D}}^{R}_{1122}+4\bm{\mathcal{D}}^{R}_{1222}+\bm{\mathcal{D}}^{R}_{2222}.\end{split}$
(7.51)
A general formula is
$\bm{\mathcal{D}}^{R}_{pppp}=\,\bm{\mathcal{D}}^{R}_{kkkk}+4\bm{\mathcal{D}}^{R}_{kkkl}+6\bm{\mathcal{D}}^{R}_{kkll}+4\bm{\mathcal{D}}^{R}_{klll}+\bm{\mathcal{D}}^{R}_{llll},$
(7.52)
where
$\bm{\mathcal{D}}^{R}_{pppp}=\begin{cases}\bm{\mathcal{D}}^{R}_{ssss}&(k,l)\in\\{(1,2),(3,4)\\}\\\
\bm{\mathcal{D}}^{R}_{uuuu}&(k,l)\in\\{(1,3),(2,4)\\}\\\
\bm{\mathcal{D}}^{R}_{tttt}&(k,l)\in\\{(1,4),(2,3)\\}.\end{cases}$ (7.53)
The expression in eq. (7.52) is symmetric under $k\leftrightarrow l$. For each
of the channels, corresponding to the respective Mandelstam invariants
$p\in\\{s,t,u\\}$, the indices $(k,l)$ can be assigned to be either of the two
combinations shown in eq. (7.53).
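The binomial pattern $1,4,6,4,1$ in eqs. (7.51) and (7.52) arises because the symmetrised trace is blind to the ordering of the attachments, while charge operators on different lines commute. A minimal numerical sketch for SU(2), modelling only the two lines involved, taking $R$ to be the fundamental representation, and assuming the natural extension of eq. (7.50) to mixed line assignments (e.g. $\bm{\mathcal{D}}^{R}_{1112}$ with three attachments to parton 1 and one to parton 2):

```python
import itertools
import numpy as np

# Fundamental SU(2) generators t^a = sigma^a/2 on each parton line
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]
t = [m / 2 for m in sigma]
I2 = np.eye(2)

# Colour charges of partons 1 and 2 acting on the two-parton space C^2 (x) C^2
T1 = [np.kron(a, I2) for a in t]
T2 = [np.kron(I2, a) for a in t]
Ts = [a + b for a, b in zip(T1, T2)]   # s-channel charge T_s = T_1 + T_2

def dsym(a, b, c, d):
    """Fully symmetrised trace (1/4!) sum_sigma Tr_R(t t t t), R = fundamental."""
    g = (t[a], t[b], t[c], t[d])
    return sum(np.trace(g[p] @ g[q] @ g[r] @ g[s])
               for p, q, r, s in itertools.permutations(range(4))) / 24

def D(ops):
    """Contract dsym with four charge operators, in the spirit of eq. (7.50)."""
    out = np.zeros_like(ops[0][0])
    for a, b, c, d in itertools.product(range(3), repeat=4):
        out = out + dsym(a, b, c, d) * (ops[0][a] @ ops[1][b] @ ops[2][c] @ ops[3][d])
    return out

lhs = D([Ts, Ts, Ts, Ts])                                   # D^R_ssss
rhs = (D([T1, T1, T1, T1]) + 4*D([T1, T1, T1, T2])          # 1, 4,
       + 6*D([T1, T1, T2, T2]) + 4*D([T1, T2, T2, T2])      # 6, 4,
       + D([T2, T2, T2, T2]))                               # 1
assert np.allclose(lhs, rhs)  # binomial pattern of eq. (7.51)
```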
Using colour conservation, we can write the colour structure of eq. (7.49) as
$\displaystyle\sum_{\sigma\in
S_{4}}\operatorname{Tr}_{R}\left(T^{\sigma(a)}T^{\sigma(b)}T^{\sigma(c)}T^{\sigma(d)}\right){\bf
T}_{1}^{a}{\bf T}_{2}^{b}{\bf T}_{3}^{c}{\bf
T}_{4}^{d}\mathcal{M}_{\text{tree}}$
$\displaystyle=\bigg{[}2\Big{(}\bm{\mathcal{D}}^{R}_{ssss}+\bm{\mathcal{D}}^{R}_{uuuu}+\bm{\mathcal{D}}^{R}_{tttt}\Big{)}-4\left(\frac{d_{RR_{i}}}{N_{R_{i}}}+\frac{d_{RR_{j}}}{N_{R_{j}}}\right)\bigg{]}\mathcal{M}_{\text{tree}},$
(7.54)
where the quartic Casimirs correspond to the projectile $i$ (partons 1 and 4)
and the target $j$ (partons 2 and 3). The whole expression is signature-even
as expected. This expression is useful as it holds for any representation. We
will see in eqs. (7.85) and (7.86) and in appendix G that the colour
structures multiplying the quartic component of the cusp anomalous dimension
$g_{R}$ can be expressed in a similar way.
##### Adjoint representation.
In the following we restrict our attention to the four-loop coefficient ${\cal
G}^{(+,4)}_{R}$ in the adjoint representation, $R=A$. The reason we focus
specifically on this representation was already explained at the end of the
previous section considering ${\cal F}^{(+,4)}_{R}$, that is: the result for
the signature-odd amplitude at NNLL accuracy, presented in section 5, only
receives a contribution from purely gluonic diagrams. Thus, only ${\cal
G}^{(+,4)}_{A}$ contributes to the sum over $R$ in eq. (7.49) at NNLL
accuracy. It is then possible to use the identity Becher:2019avh
$\displaystyle\begin{split}\sum_{\sigma\in
S_{4}}\operatorname{Tr}\left(F^{\sigma(a)}F^{\sigma(b)}F^{\sigma(c)}F^{\sigma(d)}\right)&=12\left[\mbox{Tr}\big{(}F^{a}F^{b}F^{c}F^{d}\big{)}+\mbox{Tr}\big{(}F^{d}F^{c}F^{b}F^{a}\big{)}\right]\\\
&\hskip
56.9055pt+4C_{A}\left(f^{abe}f^{cde}-f^{ade}f^{bce}\right),\end{split}$ (7.55)
to write
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm
Q4T-4L,A}^{(4)}(\\{\beta_{ijkl}\\})&=\big{(}{\bf Q}_{1}^{(4),A}+{\bf
Q}_{2}^{(4),A}\big{)}\,{\cal
G}_{A}^{(+,4)}(\beta_{1234},\beta_{1432}),\end{split}$ (7.56)
where we have defined
$\displaystyle{\bf Q}_{1}^{(4),A}$ $\displaystyle=$ $\displaystyle 12\,{\bf
T}^{a}_{1}{\bf T}^{b}_{2}{\bf T}^{c}_{3}{\bf
T}^{d}_{4}\,\left[\mbox{Tr}\big{(}F^{a}F^{b}F^{c}F^{d}\big{)}+\mbox{Tr}\big{(}F^{d}F^{c}F^{b}F^{a}\big{)}\right],$
(7.57a) $\displaystyle{\bf Q}_{2}^{(4),A}$ $\displaystyle=$ $\displaystyle
4C_{A}\,{\bf T}^{a}_{1}{\bf T}^{b}_{2}{\bf T}^{c}_{3}{\bf
T}^{d}_{4}\left(f^{abe}f^{cde}-f^{ade}f^{bce}\right),$ (7.57b)
and $(F^{x})^{ab}\equiv if^{axb}$. The second term, i.e. ${\bf
Q}_{2}^{(4),A}$, can readily be written in a Regge-limit basis by using the
identities in eqs. (7.42) and (7.43b). We get
${\bf
Q}_{2}^{(4),A}=\frac{C_{A}}{4}\left(3\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}+2\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\right).$
(7.58)
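The identity in eq. (7.55) can be verified numerically for a concrete group. A minimal sketch for SU(3), building the fundamental generators explicitly, extracting the structure constants via $f^{abc}=-2i\,{\rm Tr}\left([t^{a},t^{b}]\,t^{c}\right)$, and scanning all adjoint index assignments:

```python
import itertools
import numpy as np

def su_n_fundamental(n):
    """Traceless hermitian generators normalised to Tr(t^a t^b) = delta^{ab}/2."""
    gens = []
    for j in range(n):
        for k in range(j + 1, n):
            m = np.zeros((n, n), dtype=complex); m[j, k] = m[k, j] = 0.5
            gens.append(m)
            m = np.zeros((n, n), dtype=complex); m[j, k] = -0.5j; m[k, j] = 0.5j
            gens.append(m)
    for l in range(1, n):
        m = np.diag([1.0]*l + [-float(l)] + [0.0]*(n - l - 1)).astype(complex)
        gens.append(m / np.sqrt(2*l*(l + 1)))
    return gens

N = 3
t = su_n_fundamental(N)
na = N*N - 1
# structure constants from [t^a, t^b] = i f^{abc} t^c
f = np.zeros((na, na, na))
for a, b, c in itertools.product(range(na), repeat=3):
    f[a, b, c] = (-2j*np.trace((t[a] @ t[b] - t[b] @ t[a]) @ t[c])).real

F = [1j*f[:, x, :] for x in range(na)]   # adjoint generators, (F^x)^{ab} = i f^{axb}
CA = float(N)

for a, b, c, d in itertools.product(range(na), repeat=4):
    lhs = sum(np.trace(F[i] @ F[j] @ F[k] @ F[l])
              for i, j, k, l in itertools.permutations((a, b, c, d)))
    rhs = (12*(np.trace(F[a] @ F[b] @ F[c] @ F[d])
               + np.trace(F[d] @ F[c] @ F[b] @ F[a]))
           + 4*CA*(f[a, b] @ f[c, d] - f[a, d] @ f[b, c]))
    assert np.isclose(lhs, rhs)  # eq. (7.55) holds for every index assignment
```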
Concerning ${\bf Q}_{1}^{(4),A}$, repeated use of the commutator relation eq.
(4.34) allows us to write it as
${\bf Q}_{1}^{(4),A}=12\bigg{\\{}\Big{[}\big{[}{\bf T}^{b}_{1},{\bf
T}^{e}_{1}\big{]},{\bf T}^{f}_{1}\Big{]}{\bf T}_{2}^{b}\Big{[}\big{[}{\bf
T}^{d}_{3},{\bf T}^{f}_{3}\big{]},{\bf T}^{e}_{3}\Big{]}{\bf T}_{4}^{d}+\,{\bf
T}_{1}^{a}\Big{[}{\bf T}^{e}_{2},\big{[}{\bf T}^{f}_{2},{\bf
T}^{a}_{2}\big{]}\Big{]}{\bf T}_{3}^{c}\Big{[}{\bf T}^{f}_{4},\big{[}{\bf
T}^{e}_{4},{\bf T}^{c}_{4}\big{]}\Big{]}\bigg{\\}}.$ (7.59)
Notice that we apply the commutator relation so as to obtain an expression
with manifest target-projectile symmetry (where, as usual, partons $1$ and $4$
represent the projectile while partons $2$ and $3$ the target). At this point
we recall that the colour operator in the soft anomalous dimension acts on the
tree-level amplitude, to give the part of the four-loop amplitude from which
the single-pole singularities are extracted. Therefore, it is sufficient to
obtain a representation for the colour operator ${\bf Q}_{1}^{(4),A}$ when
acting on the tree-level colour structure ${\bf T}_{i}\cdot{\bf T}_{j}$, as
defined in eq. (2.5). The commutators in eq. (7.59) can then be expressed as
attachments to the projectile ($i$) or target ($j$), as in section 4.2.3, so
eq. (7.59) becomes
${\bf Q}_{1}^{(4),A}({\bf T}_{i}\cdot{\bf T}_{j})=12\left({\bf
T}^{([[b,e],f],x,d)}\right)_{i}\left({\bf
T}^{(b,x,[[d,f],e])}\right)_{j}\,\,+\,\,i\leftrightarrow j.$ (7.60)
It is now in a suitable form to apply the identities in section 4.2 and
appendix C, converting the operator to the Regge-limit basis:
$\displaystyle\begin{split}{\bf Q}_{1}^{(4),A}({\bf T}_{i}\cdot{\bf
T}_{j})&=\bigg{\\{}2\left(\frac{d_{AA}}{N_{A}}-\frac{C_{A}^{4}}{24}\right)-\frac{3C_{A}}{4}\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}\\\
&\hskip
14.22636pt-\,\frac{1}{2}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]+\frac{3}{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}\bigg{\\}}\,({\bf
T}_{i}\cdot{\bf T}_{j}).\end{split}$ (7.61)
Inserting eqs. (7.58) and (7.61) into eq. (7.56) we have
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm
Q4T-4L,A}^{(4)}(\\{\beta_{ijkl}\\})\mathcal{M}_{\text{tree}}&={\cal
G}_{A}^{(+,4)}(\beta_{1234},\beta_{1432})\bigg{\\{}2\left(\frac{d_{AA}}{N_{A}}-\frac{C_{A}^{4}}{24}\right)-\frac{1}{2}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]\\\
&\quad\mbox{}+\frac{3}{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}+\frac{C_{A}}{2}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\bigg{\\}}\,\mathcal{M}_{\text{tree}}.\end{split}$
(7.62)
Notice that after a cancellation of the signature-odd commutator term between
${\bf Q}_{1}^{(4),A}$ and ${\bf Q}_{2}^{(4),A}$, the resulting colour operator
in eq. (7.62) is manifestly signature-even, as anticipated at the beginning of
the section. Importantly, we observe that the quartic four-generator four-leg
term $\mathbf{\Gamma}_{\rm Q4T-4L,A}^{(4)}$ is entirely non-planar, given that
the commutators in eq. (7.62) and the combination $d_{AA}/N_{A}-C_{A}^{4}/24$
are separately non-planar (see eq. (5.35)). $\mathbf{\Gamma}_{\rm
Q4T-4L,A}^{(4)}$ is now expressed in the Regge-limit basis, and eq. (7.62)
will be used in section 7.3, along with the other terms, to derive constraints
based on the explicit NNLL results of section 5.
Finally, we can also equate eq. (7.54) to eq. (7.62) in the adjoint
representation to express the previously unknown signature-even combination of
quartic $s$- and $u$-channel operators acting on the tree amplitude, in terms
of nested commutators:
$\displaystyle\begin{split}\Big{(}\bm{\mathcal{D}}^{A}_{ssss}+\bm{\mathcal{D}}^{A}_{uuuu}\Big{)}\mathcal{M}_{\text{tree}}&=\bigg{(}2\left(\frac{d_{AR_{i}}}{N_{R_{i}}}+\frac{d_{AR_{j}}}{N_{R_{j}}}\right)-\frac{C_{A}^{4}}{24}-\frac{1}{4}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]\\\
&\quad+\frac{3}{4}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}+\frac{C_{A}}{4}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\bigg{)}\mathcal{M}_{\text{tree}},\end{split}$
(7.63)
while the quartic $t$-channel operator acting on the tree amplitude simply
gives
$\bm{\mathcal{D}}^{A}_{tttt}\mathcal{M}_{\text{tree}}=\frac{d_{AA}}{N_{A}}\mathcal{M}_{\text{tree}}.$
(7.64)
These results will be useful in appendix G, where we analyse the colour
structures multiplying the quartic component of the cusp anomalous dimension
$g_{R}$.
#### 7.2.3 Five-generator four-line term ($\mathbf{5T}$$-$$\mathbf{4L}$)
The third term in eq. (7.37) reads
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm
5T-4L}\left(\\{\beta_{ijkl}\\},\alpha_{s}\right)&=\sum_{(i,j,k,l)}\,\bm{\mathcal{T}}_{ijkli}\,{\cal
H}_{1}(\beta_{ijlk},\beta_{iklj};\alpha_{s})\\\
&=\sum_{(i,j,k,l)}if^{adg}f^{bch}f^{egh}\,\\{{\bf T}_{i}^{a},{\bf
T}_{i}^{e}\\}_{+}{\bf T}_{j}^{b}{\bf T}_{k}^{c}{\bf T}_{l}^{d}\,{\cal
H}_{1}(\beta_{ijlk},\beta_{iklj};\alpha_{s}).\end{split}$ (7.65)
The colour structure is antisymmetric under $j\leftrightarrow k$ so
$\bm{\mathcal{T}}_{ijkli}=-\bm{\mathcal{T}}_{ikjli}$ and therefore ${\cal
H}_{1}$ is antisymmetric under a swap of its arguments, due to Bose symmetry.
We want to write an expression with manifest symmetry under $s\leftrightarrow
u$, which can be achieved by exploiting the symmetries under swaps of
$2\leftrightarrow 3$ or $1\leftrightarrow 4$ of $\bm{\mathcal{T}}_{ijkli}$ and
${\cal H}_{1}$. Similarly to ${\cal F}(\beta_{ijlk},\beta_{iklj},\alpha_{s})$,
let us introduce symmetric and antisymmetric combinations under
$2\leftrightarrow 3$ of ${\cal H}_{1}$:
$\displaystyle{\cal H}_{1}^{(+)}(\\{\beta_{ijkl}\\},\alpha_{s})$
$\displaystyle\equiv\frac{1}{2}\Big{\\{}{\cal
H}_{1}(\beta_{1324},\beta_{1423};\alpha_{s})+{\cal
H}_{1}(\beta_{1234},\beta_{1432};\alpha_{s})\Big{\\}},$ (7.66a)
$\displaystyle{\cal H}_{1}^{(-)}(\\{\beta_{ijkl}\\},\alpha_{s})$
$\displaystyle\equiv\frac{1}{2}\Big{\\{}{\cal
H}_{1}(\beta_{1324},\beta_{1423};\alpha_{s})-{\cal
H}_{1}(\beta_{1234},\beta_{1432};\alpha_{s})\Big{\\}},$ (7.66b)
$\displaystyle\tilde{{\cal H}}_{1}^{(-)}(\\{\beta_{ijkl}\\},\alpha_{s})$
$\displaystyle\equiv{\cal H}_{1}(\beta_{1243},\beta_{1342};\alpha_{s}).$
(7.66c)
As shown in appendix G, with these definitions we can write eq. (7.65) as
$\displaystyle\begin{split}\mathbf{\Gamma}_{\rm
5T-4L}^{(4)}(\\{\beta_{ijkl}\\})\mathcal{M}_{\text{tree}}&=\Bigg{[}{\cal
H}_{1}^{(+,4)}(\\{\beta_{ijkl}\\})\bigg{(}-\frac{C_{A}}{2}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}+C_{A}\mathbf{T}_{s-u}^{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\\\
&-\frac{1}{6}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]\bigg{)}+\frac{1}{4}\tilde{{\cal
H}}_{1}^{(-,4)}(\\{\beta_{ijkl}\\})\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\\\
&+{\cal
H}_{1}^{(-,4)}(\\{\beta_{ijkl}\\})\bigg{(}-\frac{1}{2}\bigg{[}\mathbf{T}_{s-u}^{2},\Big{[}\mathbf{T}_{s-u}^{2},\big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\big{]}\Big{]}\bigg{]}+\frac{1}{8}\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\bigg{)}\Bigg{]}\mathcal{M}_{\text{tree}},\end{split}$
(7.67)
with all colour operators expressed in terms of nested commutators. Thus, as
expected, all three terms in eq. (7.37), given in eqs. (7.48), (7.62) and
(7.67), are strictly non-planar.
#### 7.2.4 The Regge limit of the soft anomalous dimension
We have now specialised eq. (7.37) to the case of two-parton scattering, and
expressed the colour operators involved in these terms in a Regge-limit basis
in eqs. (7.48), (7.62) and (7.67). In order to compare the resulting
expression for the soft anomalous dimension with the high-energy limit
calculation summarised in section 6.5 we formally consider each of the
kinematic functions as an expansion in the high-energy logarithm $L$, for
instance:
${\cal F}^{(-,4)}_{A}(L)={\cal F}^{(-,4,3)}_{A}L^{3}+{\cal
F}^{(-,4,2)}_{A}L^{2}+{\cal F}^{(-,4,1)}_{A}L+{\cal F}^{(-,4,0)}_{A},$ (7.68)
with unknown coefficients ${\cal F}^{(-,\ell,n)}_{A}$ for $\ell=4$ and
$n=3,2,1,0$, which we expect on general grounds to be transcendental numbers
of weight $2\ell-1-n=7-n$ or lower.
Separating the four-loop soft anomalous dimension $\bm{\Gamma}^{(4)}$ into
components with definite signature symmetry, we write
$\bm{\Gamma}^{(4)}_{\text{Regge}}=\bm{\Gamma}^{(+,4)}_{\text{Regge}}+\bm{\Gamma}^{(-,4)}_{\text{Regge}}\,,$
(7.69)
where we added the subscript Regge to indicate that the Regge limit has been
taken. Explicitly, using the results in eqs. (7.48), (7.62) and (7.67) we
obtain
$\displaystyle\mathbf{\Gamma}^{(+,4)}_{\text{Regge}}(L)\mathcal{M}_{\text{tree}}=$
$\displaystyle\Bigg{\\{}2{\cal
F}_{A}^{(+,4)}(L)\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}$
(7.70) $\displaystyle\quad\mbox{}+{\cal
G}_{A}^{(+,4)}(L)\bigg{(}2\left(\frac{d_{AA}}{N_{A}}-\frac{C_{A}^{4}}{24}\right)-\frac{1}{2}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]$
$\displaystyle\hskip
71.13188pt+\frac{3}{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}+\frac{C_{A}}{2}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\bigg{)}$
$\displaystyle\quad\mbox{}+{\cal
H}_{1}^{(+,4)}(L)\bigg{(}-\frac{1}{2}C_{A}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}+C_{A}\mathbf{T}_{s-u}^{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]$
$\displaystyle\hskip
56.9055pt-\frac{1}{6}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]\bigg{)}\Bigg{\\}}\mathcal{M}_{\text{tree}}+{\cal
O}(L),$
for the signature-even part, while the odd component reads
$\displaystyle\mathbf{\Gamma}^{(-,4)}_{\text{Regge}}(L)\mathcal{M}_{\text{tree}}$
$\displaystyle=\Bigg{\\{}-\left(\sum_{R}{\cal
F}_{R}^{(-,4)}(L)\right)\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}$
(7.71) $\displaystyle\hskip-4.0pt+{\cal
H}_{1}^{(-,4)}(L)\bigg{(}-\frac{1}{2}\bigg{[}\mathbf{T}_{s-u}^{2}\Big{[}\mathbf{T}_{s-u}^{2},\big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\big{]}\Big{]}\bigg{]}+\frac{1}{8}\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\bigg{)}$
$\displaystyle\hskip-4.0pt+\frac{1}{4}\tilde{{\cal
H}}_{1}^{(-,4)}(L)\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\Bigg{\\}}\mathcal{M}_{\text{tree}}+{\cal
O}(L).$
Functions that do not contribute through NNLL, i.e. those that only generate ${\cal
O}(L)$ and ${\cal O}(L^{0})$ contributions, are not shown in eqs. (7.70) and
(7.71). We discuss these in section 7.4 and appendix G.
### 7.3 Constraints on the kinematic functions in the soft anomalous
dimension
We are now ready to compare the general parametrisation of the four-loop soft
anomalous dimension to the explicit results of our calculation in the high-
energy limit. Before considering the four-loop case, where the kinematic
functions are unknown, it is useful to conduct a similar exercise at three
loops, where the functions are known Almelid:2015jia and their high-energy
limit has been previously obtained Caron-Huot:2017fxr ; Almelid:2017qju .
Using eqs. (7.44) and (G.41) we have
$\displaystyle\mathbf{\Delta}^{(+,3)}_{\text{Regge}}(L)\mathcal{M}_{\text{tree}}$
$\displaystyle=\bigg{\\{}2\Big{(}{\cal F}^{(+,3,2)}L^{2}+{\cal
F}^{(+,3,1)}L+{\cal
F}^{(+,3,0)}\Big{)}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}$
(7.72a)
$\displaystyle+f^{(3)}\bigg{(}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}+\frac{C_{A}^{3}}{2}-6\frac{d_{AR_{i}}}{N_{R_{i}}C_{i}}-6\frac{d_{AR_{j}}}{N_{R_{j}}C_{j}}\bigg{)}\bigg{\\}}\mathcal{M}_{\text{tree}}$
$\displaystyle\mathbf{\Delta}^{(-,3)}_{\text{Regge}}(L)\mathcal{M}_{\text{tree}}$
$\displaystyle=-\Big{(}{\cal F}^{(-,3,2)}L^{2}+{\cal F}^{(-,3,1)}L+{\cal
F}^{(-,3,0)}\Big{)}\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}\mathcal{M}_{\text{tree}}$
(7.72b)
Matching these expressions to $\mathbf{\Delta}$ in appendix E we obtain the
following expansion coefficients:
$\displaystyle\begin{array}[]{lll}{\displaystyle{\cal
F}^{(+,3,2)}=0},&{\displaystyle{\cal F}^{(+,3,1)}=0},&{\displaystyle{\cal
F}^{(+,3,0)}=\frac{1}{8}\left(4\zeta_{3}\zeta_{2}-\zeta_{5}\right)}\\\
{\displaystyle{\cal F}^{(-,3,2)}=0},&{\displaystyle{\cal
F}^{(-,3,1)}=-\frac{i\pi}{4}\zeta_{3}},&{\displaystyle{\cal
F}^{(-,3,0)}=-\frac{11i\pi}{4}\zeta_{4}}\,,\end{array}$ (7.75)
consistently with refs. Caron-Huot:2017fxr ; Almelid:2017qju ; Almelid:2015jia
; Henn:2016jdu . A similar procedure will be followed below at four loops,
where there are more functions contributing, all of which are as yet unknown. To
this end we consider the expressions in eqs. (7.70) and (7.71) in the Regge-
limit basis, which we compare with the explicit results we obtained through
NNLL accuracy in eq. (6.48).
##### Constraints at four loops NLL for signature-odd functions.
We start by considering the signature-odd contribution to the soft anomalous
dimension at NLL accuracy. We expand the functions in eq. (7.71) as in eq.
(7.68), and match the result to the explicit computation order by order in the
high-energy logarithm $L$. At $\mathcal{O}(L^{3})$, equating eq. (7.71) to eq. (6.46), we have
$\displaystyle\begin{split}-i\pi\frac{\zeta_{3}}{24}\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}\mathbf{T}_{t}^{2}\,\,\overset{!}{=}\,\,&-\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}\,{\cal
F}^{(-,4,3)}_{A}\\\
&-\frac{1}{2}\bigg{[}\mathbf{T}_{s-u}^{2},\Big{[}\mathbf{T}_{s-u}^{2},\big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\big{]}\Big{]}\bigg{]}{\cal
H}^{(-,4,3)}_{1}\\\
&+\frac{1}{8}\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\left(2\tilde{{\cal
H}}^{(-,4,3)}_{1}+{\cal H}^{(-,4,3)}_{1}\right).\end{split}$ (7.76)
Firstly, ${\cal H}^{(-,4,3)}_{1}$ is the only term involving a colour operator
$\propto(\mathbf{T}_{s-u}^{2})^{3}$, which does not appear on the left-hand
side, so we conclude that ${\cal H}^{(-,4,3)}_{1}=0$. Next, $\tilde{{\cal
H}}^{(-,4,3)}_{1}$ multiplies a fully nested commutator, which also cannot be
matched to the colour operators on the left-hand side, so it must vanish as
well. In order to match the single term that remains, we recall that at four
loops the soft anomalous dimension acts directly on the tree-level amplitude,
so we can use
$\mathbf{T}_{t}^{2}\mathcal{M}_{\text{tree}}=C_{A}\mathcal{M}_{\text{tree}}$.
This is consistent with the expectation that ${\cal F}^{(-,4,3)}_{A}$ should
contain a factor of $C_{A}$, while ${\cal F}^{(-,4,3)}_{F}$ does not
contribute at NLL accuracy (see the discussion following eq. (7.48)). We
deduce
$\displaystyle{\cal F}^{(-,4,3)}_{A}$ $\displaystyle=i\pi
C_{A}\frac{\zeta_{3}}{24}\hskip 56.9055pt{\cal F}^{(-,4,3)}_{F}=0$ (7.77a)
$\displaystyle{\cal H}^{(-,4,3)}_{1}$ $\displaystyle=0\hskip
89.626pt\tilde{{\cal H}}^{(-,4,3)}_{1}=0\,.$ (7.77b)
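The matching that leads to eq. (7.77) amounts to elementary linear algebra once the nested commutators are treated as linearly independent, as assumed in the text. A minimal sympy sketch, where the hypothetical symbols B1, B2 and B3 stand for the three colour structures appearing in eq. (7.76), and we use $\mathbf{T}_{t}^{2}\mathcal{M}_{\text{tree}}=C_{A}\mathcal{M}_{\text{tree}}$:

```python
import sympy as sp

# Formal basis: B1 = [Tt^2,[Tt^2,Tsu^2]],  B2 = [Tsu^2,[Tsu^2,[Tsu^2,Tt^2]]],
# B3 = [Tt^2,[Tt^2,[Tt^2,Tsu^2]]], treated as linearly independent operators.
B1, B2, B3, CA, zeta3 = sp.symbols('B1 B2 B3 C_A zeta3')
F, H, Ht = sp.symbols('F H Ht')   # the O(L^3) coefficients to be determined

# left-hand side of eq. (7.76), using Tt^2 M_tree = C_A M_tree
lhs = -sp.I * sp.pi * zeta3 / 24 * B1 * CA
# right-hand side of eq. (7.76)
rhs = -B1*F - sp.Rational(1, 2)*B2*H + sp.Rational(1, 8)*B3*(2*Ht + H)

eqs = [sp.expand(lhs - rhs).coeff(B) for B in (B1, B2, B3)]
print(sp.solve(eqs, [F, H, Ht]))
# -> {F: I*pi*C_A*zeta3/24, H: 0, Ht: 0}, reproducing eq. (7.77)
```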
We note that the even amplitude at four loops is still unknown at NNLL (and
beyond). As a consequence,
$\mathbf{\Gamma}^{(-,4,m)}=\text{Im}[\mathbf{\Gamma}^{(4,m)}]$, with
$m=\\{0,1,2\\}$, remain unconstrained.
##### Constraints at four loops NLL for signature-even functions.
Consider now kinematic functions multiplying signature-even colour structures,
which must be real and symmetric under $s\leftrightarrow u$.
We express eq. (7.70) at $L^{3}$ order and equate it to the relevant $L^{3}$
coefficient in eq. (6.47), which vanishes identically, getting
$\displaystyle 0$
$\displaystyle\,\,\overset{!}{=}\,\,\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\bigg{\\{}2{\cal
F}^{(+,4,3)}_{A}+\frac{C_{A}}{2}\left({\cal G}^{(+,4,3)}_{A}-{\cal
H}^{(+,4,3)}_{1}\right)\bigg{\\}}$ (7.78)
$\displaystyle\quad\mbox{}+\left(2\frac{d_{AA}}{N_{A}}-\frac{C_{A}^{4}}{12}-\frac{1}{2}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]+\frac{3}{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}\right){\cal
G}^{(+,4,3)}_{A}$
$\displaystyle\quad\mbox{}+\bigg{(}\mathbf{T}_{s-u}^{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{t}^{2}-\frac{1}{6}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]\bigg{)}{\cal
H}^{(+,4,3)}_{1}.$
The only function that multiplies quartic Casimir terms is ${\cal
G}^{(+,4,3)}_{A}$, so it must be zero. In the last line, ${\cal
H}^{(+,4,3)}_{1}$ multiplies linearly independent colour structures, so it must
vanish as well. While ${\cal F}^{(+,4,3)}_{A}$ appears only in combination with
other functions, those vanish, hence so does ${\cal F}^{(+,4,3)}_{A}$. At
$L^{3}$ order we thus obtain the following constraints:
$\displaystyle{\cal F}^{(+,4,3)}_{A}$ $\displaystyle=0\hskip 71.13188pt{\cal
F}^{(+,4,3)}_{F}=0$ (7.79a) $\displaystyle{\cal G}^{(+,4,3)}_{A}$
$\displaystyle=0\hskip 73.97733pt{\cal G}^{(+,4,3)}_{F}=0$ (7.79b)
$\displaystyle{\cal H}^{(+,4,3)}_{1}$ $\displaystyle=0.$ (7.79c)
These results are of course in line with the fact that the signature-even NLL
anomalous dimension is two-loop exact.
##### Constraints at four loops NNLL for signature-even functions.
At $L^{2}$ order, upon equating the relevant terms of eq. (7.70) to eq. (6.39)
we have
$\displaystyle\zeta_{2}\zeta_{3}C_{\mathbf{\Delta}}^{(+,4,2)}$
$\displaystyle\,\,\overset{!}{=}\,\,2C_{\mathbf{\Delta}}^{(+,4,2)}{\cal
G}^{(+,4,2)}_{A}+\bigg{(}\mathbf{T}_{s-u}^{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{t}^{2}-\frac{1}{6}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]\bigg{)}{\cal
H}^{(+,4,2)}_{1}$ (7.80) $\displaystyle\hskip
8.5359pt+\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\bigg{\\{}2{\cal
F}^{(+,4,2)}_{A}+\frac{C_{A}}{2}\left({\cal G}^{(+,4,2)}_{A}-{\cal
H}^{(+,4,2)}_{1}\right)\bigg{\\}}.$
We immediately see that the same colour structure $C_{\mathbf{\Delta}}^{(+,4,2)}$,
defined in eq. (6.39), appears on the left-hand side and multiplies ${\cal
G}^{(+,4,2)}_{A}$ on the right-hand side. This fixes ${\cal G}^{(+,4,2)}_{A}$
and implies that the combination of the other terms must be zero. ${\cal
H}^{(+,4,2)}_{1}$ multiplies colour operators that are linearly independent of
the others, and must therefore vanish. Finally, the combination of functions in the
second line of eq. (7.80) multiplying the fully-nested commutator must vanish;
since ${\cal H}^{(+,4,2)}_{1}=0$, this implies a simple relation between
${\cal F}^{(+,4,2)}_{A}$ and ${\cal G}^{(+,4,2)}_{A}$. The constraints at
$L^{2}$ order for even-signature functions are then
$\displaystyle{\cal F}^{(+,4,2)}_{A}$
$\displaystyle=-C_{A}\frac{\zeta_{2}\zeta_{3}}{8}\hskip 71.13188pt{\cal
F}^{(+,4,2)}_{F}=0$ (7.81a) $\displaystyle{\cal G}^{(+,4,2)}_{A}$
$\displaystyle=\frac{\zeta_{2}\zeta_{3}}{2}\hskip 96.73918pt{\cal
G}^{(+,4,2)}_{F}=0$ (7.81b) $\displaystyle{\cal H}^{(+,4,2)}_{1}$
$\displaystyle=0.$ (7.81c)
The expressions for
$\mathbf{\Delta}^{(+,4,m)}=\text{Re}[\mathbf{\Delta}^{(4,m)}]$, $m=\\{0,1\\}$
are currently not known, so our firm constraints for the even signature part
of the soft anomalous dimension at four loops end at NNLL accuracy. The $m=1$
term, however, has a rather special status due to its connection with the cusp
anomalous dimension, which we discuss in the next section before summarising
the complete set of constraints.
### 7.4 The soft anomalous dimension at four loops
In this section, we present expressions parametrising the four-loop soft
anomalous dimension in the high-energy limit through all powers of the high-
energy logarithm $L$. Although this goes beyond the logarithmic accuracy of
any explicit calculation of the amplitude, we also discuss here the
generalisation of the relation in eq. (2.40) between the cusp anomalous
dimension and the singularities of the gluon Regge trajectory to four loops.
We show that this generalisation is natural despite the presence of quartic
Casimir contributions, and that it leads to an extra set of constraints on the soft
anomalous dimension.
##### The soft anomalous dimension at four loops.
To begin, it is useful to define an operator representation of the cusp
anomalous dimension
$\bm{\Gamma}_{p}^{\rm{cusp}}\equiv\frac{1}{2}\gamma_{K}(\alpha_{s}){\bf
T}_{p}^{2}+\sum_{R}g_{R}(\alpha_{s})\bm{\mathcal{D}}^{R}_{pppp}+{\cal
O}(\alpha_{s}^{5}),$ (7.82)
associated with a channel $p\in\\{s,t,u\\}$, where we suppress corrections
containing sextic and higher Casimir operators. The quartic operator
$\bm{\mathcal{D}}^{R}_{pppp}$ is defined in eq. (7.52).
When the $t$-channel operators act on the tree amplitude, they reproduce it,
multiplied by the respective adjoint Casimir, namely
$\mathbf{T}_{t}^{2}\mathcal{M}_{\text{tree}}=C_{A}\mathcal{M}_{\text{tree}},\hskip
42.67912pt\bm{\mathcal{D}}^{R}_{tttt}\mathcal{M}_{\text{tree}}=\frac{d_{RA}}{N_{A}}\mathcal{M}_{\text{tree}},$
(7.83)
which yields
$\bm{\Gamma}_{t}^{\rm{cusp}}\mathcal{M}_{\text{tree}}={\Gamma}_{A}^{\rm{cusp}}\mathcal{M}_{\text{tree}}\,,$
(7.84)
where ${\Gamma}_{A}^{\rm{cusp}}$ on the right-hand side is simply the cusp
anomalous dimension defined by an adjoint Wilson line. In contrast, when
$\bm{\mathcal{D}}^{R}_{ssss}$ and $\bm{\mathcal{D}}^{R}_{uuuu}$ act on the
tree amplitude they generate mixing into other colour states, similarly to
their quadratic counterparts ${\bf T}_{s}^{2}$ and ${\bf T}_{u}^{2}$. In
particular, their adjoint signature-even combination is given in eq. (7.63)
and signature-odd combination is given in eq. (G.35).
With these definitions and properties in place we are ready to present the
general form of the soft anomalous dimension for $2\to 2$ scattering in the
high-energy limit. The signature-even part reads
$\displaystyle\bm{\Gamma}^{(+,4)}_{ij\to
ij}\left(L,\frac{-t}{\lambda^{2}}\right)\mathcal{M}_{\text{tree}}$
$\displaystyle=\Bigg{\\{}L\,\bm{\Gamma}_{t}^{\rm{cusp},(4)}+\log\frac{-t}{\lambda^{2}}\left(\Gamma_{i}^{\rm{cusp},(4)}+\Gamma_{j}^{\rm{cusp},(4)}\right)+2\gamma_{i}^{(4)}+2\gamma_{j}^{(4)}$
(7.85)
$\displaystyle+\left(\sum_{R}f^{(4,R)}\right)\bigg{(}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}+\frac{C_{A}^{3}}{2}-6\frac{d_{AR_{i}}}{N_{R_{i}}C_{i}}-6\frac{d_{AR_{j}}}{N_{R_{j}}C_{j}}\bigg{)}$
$\displaystyle+2\sum_{R}\left({\cal
G}^{(4)}_{R}(L)-\frac{g^{(4)}_{R}}{6}L\right)\left(\bm{\mathcal{D}}^{R}_{tttt}+\bm{\mathcal{D}}^{R}_{ssss}+\bm{\mathcal{D}}^{R}_{uuuu}-2\left(\frac{d_{RR_{i}}}{N_{R_{i}}}+\frac{d_{RR_{j}}}{N_{R_{j}}}\right)\right)$
$\displaystyle+2\left(\sum_{R}{\cal
F}^{(+,4)}_{R}(L)\right)\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}+{\cal
H}_{1}^{(+,4)}(L)\bigg{(}C_{A}\mathbf{T}_{s-u}^{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]$
$\displaystyle-\frac{1}{2}C_{A}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}-\frac{1}{6}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]\bigg{)}\Bigg{\\}}\mathcal{M}_{\text{tree}},$
while the signature-odd part is
$\displaystyle\bm{\Gamma}^{(-,4)}_{ij\to ij}(L)\mathcal{M}_{\text{tree}}$
$\displaystyle=\Bigg{\\{}\frac{i\pi}{2}\bigg{[}\bm{\Gamma}_{s}^{\rm{cusp},(4)}-\bm{\Gamma}_{u}^{\rm{cusp},(4)}\bigg{]}-\left(\sum_{R}{\cal
F}^{(-,4)}_{R}(L)\right)\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}$
(7.86) $\displaystyle+{\cal
H}^{(-,4)}_{1}(L)\bigg{(}-\frac{1}{2}\bigg{[}\mathbf{T}_{s-u}^{2},\Big{[}\mathbf{T}_{s-u}^{2},\big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\big{]}\Big{]}\bigg{]}+\frac{1}{8}\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\bigg{)}$
$\displaystyle+\tilde{{\cal
H}}^{(-,4)}_{1}(L)\,\frac{1}{4}\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\Bigg{\\}}\mathcal{M}_{\text{tree}}.$
These two expressions generalise eqs. (7.70) and (7.71), respectively, to
include ${\cal O}(L^{1})$ and ${\cal O}(L^{0})$ terms. The derivation of these
contributions is presented in appendices F and G.
In both the signature-even expression of eq. (7.85) and the odd one in eq.
(7.86) we have restored the full $p$-channel ($p\in\\{s,t,u\\}$) cusp
anomalous dimension $\bm{\Gamma}_{p}^{\rm{cusp}}$ of eq. (7.82), consisting of
both the quadratic and quartic components. Specifically, the function
$\gamma_{K}$, which was originally used to express the dipole term in eq.
(7.13), only appears now as part of the full cusp anomalous dimension
$\bm{\Gamma}_{p}^{\rm{cusp}}$, along with its quartic counterpart $g_{R}$. The
way the full $\bm{\Gamma}_{p}^{\rm{cusp}}$ gets restored is explained in
appendix F.
Note that, in line with the general expectation, all the contributions that
survive in the planar limit – the terms in the first line of eq. (7.85) and
the first term in the first line of eq. (7.86) – involve just one or two
partons, while all those involving three or four partons are non-planar. For
most terms the behaviour in the large-$N_{c}$ limit is already manifest in the
above equations owing to the (nested) commutator structure, which is
inherently non-planar, or the behaviour of the quartic Casimir contributions
given in eq. (5.10). There are a couple of terms for which a closer
examination is required: the first of these is the third line in eq. (7.85),
where in the adjoint representation one may use eq. (7.62) to obtain a
manifestly non-planar expression (while the fundamental representation
contribution is automatically subleading in the large-$N_{c}$ limit). The
second is the first term in the first line of eq. (7.86), which contains
planar as well as non-planar contributions, as one may verify using eq.
(F.15b) to express the linear terms.
For pure Yang-Mills, or SYM, where only the adjoint representation is
relevant, one may substitute eqs. (7.62) and (F.15b) into the above equations
to obtain more explicit expressions for the soft anomalous dimension in the high-
energy limit, separated by signature, including all powers of $L$. The
signature-even part is
$\displaystyle\bm{\Gamma}^{(+,4)}_{ij\rightarrow
ij,\,{\rm(S)YM}}(L)\mathcal{M}_{\text{tree}}$
$\displaystyle=\Bigg{\\{}L\,\bm{\Gamma}_{t}^{\rm{cusp},(4)}+\log\frac{-t}{\lambda^{2}}\left(\Gamma_{i}^{\rm{cusp},(4)}+\Gamma_{j}^{\rm{cusp},(4)}\right)+2\gamma_{i}^{(4)}+2\gamma_{j}^{(4)}$
(7.87)
$\displaystyle+f^{(4,A)}\bigg{(}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}+\frac{C_{A}^{3}}{2}-6\frac{d_{AR_{i}}}{N_{R_{i}}C_{i}}-6\frac{d_{AR_{j}}}{N_{R_{j}}C_{j}}\bigg{)}$
$\displaystyle+\left({\cal
G}^{(4)}_{A}(L)-\frac{g^{(4)}_{A}}{6}L\right)\bigg{(}2\left(\frac{d_{AA}}{N_{A}}-\frac{C_{A}^{4}}{24}\right)-\frac{1}{2}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]$
$\displaystyle+\frac{3}{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}+\frac{C_{A}}{2}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\bigg{)}$
$\displaystyle+2{\cal
F}^{(+,4)}_{A}(L)\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}+{\cal
H}^{(+,4)}(L)\bigg{(}-\frac{1}{2}C_{A}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}$
$\displaystyle+C_{A}\mathbf{T}_{s-u}^{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]-\frac{1}{6}\mathbf{T}_{t}^{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\mathbf{T}_{s-u}^{2}\bigg{)}\Bigg{\\}}\mathcal{M}_{\text{tree}},$
and the signature-odd part is
$\displaystyle\bm{\Gamma}^{(-,4)}_{ij\rightarrow
ij,\,{\rm(S)YM}}(L)\mathcal{M}_{\text{tree}}$
$\displaystyle=\Bigg{\\{}\frac{i\pi}{C_{A}}\Gamma_{A}^{\rm{cusp},(4)}\mathbf{T}_{s-u}^{2}-{\cal
F}^{(-,4)}_{A}(L)\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}$
(7.88)
$\displaystyle+2\,{i\pi}g_{A}^{(4)}\left(\frac{d_{AR_{i}}}{C_{i}N_{R_{i}}}+\frac{d_{AR_{j}}}{C_{j}N_{R_{j}}}-\frac{2d_{AA}}{C_{A}N_{A}}-\frac{C_{A}^{3}}{16}\right)\mathbf{T}_{s-u}^{2}$
$\displaystyle-\frac{i\pi
g_{A}^{(4)}}{16}\bigg{(}3\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\big{]}\Big{]}+\Big{[}\mathbf{T}_{t}^{2},[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\Big{]}\mathbf{T}_{t}^{2}-3\mathbf{T}_{t}^{2}[\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}]\mathbf{T}_{t}^{2}\bigg{)}$
$\displaystyle+{\cal
H}^{(-,4)}_{1}(L)\bigg{(}-\frac{1}{2}\bigg{[}\mathbf{T}_{s-u}^{2},\Big{[}\mathbf{T}_{s-u}^{2},\big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\big{]}\Big{]}\bigg{]}+\frac{1}{8}\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\bigg{)}$
$\displaystyle+\frac{1}{4}\tilde{{\cal
H}}^{(-,4)}_{1}(L)\bigg{[}\mathbf{T}_{t}^{2},\Big{[}\mathbf{T}_{t}^{2},\big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\big{]}\Big{]}\bigg{]}\Bigg{\\}}\mathcal{M}_{\text{tree}}.$
These expressions make manifest the fact that the planar contributions are all
captured by the $\Gamma^{\rm{cusp}}$ and collinear anomalous dimension terms.
##### The singularities of the Regge trajectory and the cusp anomalous
dimension.
We have seen that the connection Korchemskaya:1994qp ; Korchemskaya:1996je
between the infrared singularities of the gluon Regge trajectory and the
integral of the cusp anomalous dimension (2.40), namely
$\tilde{\alpha}_{g}(t)=K+\mathcal{O}(\epsilon^{0})\,,$ (7.89)
where
$\displaystyle K(\alpha_{s}(\mu^{2}))$
$\displaystyle\equiv-\frac{1}{4}\int_{0}^{\mu^{2}}\frac{d\lambda^{2}}{\lambda^{2}}\gamma_{K}(\alpha_{s}(\lambda^{2}))=\sum_{n=0}^{\infty}\left(\frac{\alpha_{s}(\mu^{2})}{\pi}\right)^{n}K^{(n)}=\frac{1}{2\epsilon}\frac{\alpha_{s}(\mu^{2})}{\pi}+\ldots,$
(7.90)
extends to three loops, despite the presence of a Regge cut contribution at
this order, i.e. ${\cal O}(\alpha_{s}^{3}L^{1})$. For clarity we recall that
this is contingent on defining the trajectory $\tilde{\alpha}_{g}$ in the
Regge-cut scheme, which we defined by absorbing all planar contributions
generated at two and three loops into the Regge-pole term. Specifically, the
trajectory $\tilde{\alpha}_{g}$ was related to the one in the MRS scheme by
eq. (5.60), and the resulting cut-scheme subtracted trajectory is finite (see
eq. (A.10)):
$\displaystyle\hat{\tilde{\alpha}}_{g}(t)\equiv\tilde{\alpha}_{g}(t)-K={\cal
O}(\epsilon^{0})\,.\,$ (7.91)
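For orientation, the leading term quoted in eq. (7.90) follows directly from the $d$-dimensional integral. Assuming the normalisation in which $\gamma_{K}=2\,\alpha_{s}/\pi+{\cal O}(\alpha_{s}^{2})$, so that $\frac{1}{2}\gamma_{K}C_{i}$ reproduces $\Gamma^{\rm cusp}_{i}=C_{i}\,\alpha_{s}/\pi+\ldots$, and the lowest-order $d$-dimensional running $\alpha_{s}(\lambda^{2})=\alpha_{s}(\mu^{2})\left(\lambda^{2}/\mu^{2}\right)^{-\epsilon}$, one finds
$-\frac{1}{4}\int_{0}^{\mu^{2}}\frac{d\lambda^{2}}{\lambda^{2}}\,\gamma_{K}\left(\alpha_{s}(\lambda^{2})\right)=-\frac{1}{2}\,\frac{\alpha_{s}(\mu^{2})}{\pi}\int_{0}^{\mu^{2}}\frac{d\lambda^{2}}{\lambda^{2}}\left(\frac{\lambda^{2}}{\mu^{2}}\right)^{-\epsilon}+\ldots=\frac{1}{2\epsilon}\,\frac{\alpha_{s}(\mu^{2})}{\pi}+\ldots,$
where the integral converges at the lower limit for $\epsilon<0$ and is defined elsewhere by analytic continuation, reproducing the leading term of eq. (7.90).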
From the perspective of the soft anomalous dimension this entails a remarkably
simple structure through three loops, namely its even-signature part takes the
form
$\displaystyle\begin{split}\mathbf{\Gamma}^{(+)}_{ij\to
ij}\left(\alpha_{s},L,\frac{-t}{\lambda^{2}}\right)=\frac{1}{2}\gamma_{K}(\alpha_{s})L\mathbf{T}_{t}^{2}&\,+\,\Gamma_{i}\left(\alpha_{s},\frac{-t}{\lambda^{2}}\right)\,+\,\Gamma_{j}\left(\alpha_{s},\frac{-t}{\lambda^{2}}\right)\\\
&\,\,\,+\,\mathbf{\Delta}^{(+,3,0)}\left(\frac{\alpha_{s}}{\pi}\right)^{3}+{\cal
O}(\alpha_{s}^{4})\,,\end{split}$ (7.92)
where, crucially, we used the fact that $\mathbf{\Delta}^{(+,3,1)}=0$
Almelid:2015jia ; Caron-Huot:2017fxr (note that its signature-odd counterpart,
$\mathbf{\Delta}^{(-,3,1)}$, is non-vanishing; see appendix E for details).
We thus see that the only term in the (signature-even) soft anomalous
dimension which is linear in the high-energy logarithms $L$ is the one
proportional to the cusp anomalous dimension. Consequently, the exponentiation
of the singularities via eq. (6.3) directly determines the singularities of
the exponent of $(s/(-t))$, which is precisely the singular part of the gluon
Regge trajectory.
This naturally generalises to four loops, where quartic Casimir contributions
become relevant, as displayed in the definition of the cusp anomalous
dimension eq. (7.5). In order to write an equation such as eq. (7.89) valid
through four loops (and beyond), let us define
$K_{\rm
cusp}(\alpha_{s}(\mu^{2}))\equiv-\frac{1}{2}\int_{0}^{\mu^{2}}\frac{d\lambda^{2}}{\lambda^{2}}\Gamma^{\rm{cusp}}_{A}(\alpha_{s}(\lambda^{2}))\,.$
(7.93)
Generalising eq. (7.89), we have
$C_{A}\tilde{\alpha}_{g}(t)=K_{\rm cusp}+{\cal O}(\epsilon^{0})$ (7.94)
through four loops, now including the quartic Casimir contributions. The
Regge-pole exponential in eq. (2.39) can then be expressed as $\exp(K_{\rm
cusp}L)$.
##### Linear terms in the soft anomalous dimension at four loops.
Let us now analyse the implications of this relation from the perspective of
the soft anomalous dimension. We do that by directly comparing the two
exponentiation pictures, that of the singularities via eq. (6.3) on the one
hand and that of the high-energy logarithms as a Regge pole, on the other. The
exponents in the two pictures take the form:
$-\frac{1}{2}\int_{0}^{\mu^{2}}\frac{d\lambda^{2}}{\lambda^{2}}\mathbf{\Gamma}^{(+)}_{ij\to
ij}\left(\alpha_{s},L,\frac{-t}{\lambda^{2}}\right)\quad\longleftrightarrow\quad
C_{A}\tilde{\alpha}_{g}(t)L\,.$ (7.95)
For the two to agree for the terms that are simultaneously ${\cal
O}(1/\epsilon)$ and ${\cal O}(L^{1})$, one requires, just as in eq. (7.92) at
three loops, that the linear term in $L$ within $\mathbf{\Gamma}^{(+)}_{ij\to
ij}$ be given precisely by $L\bm{\Gamma}_{t}^{\rm{cusp}}$, where
$\bm{\Gamma}_{t}^{\rm{cusp}}$ is defined in eq. (7.82). Upon acting on the
tree amplitude, the $t$-channel operators produce Casimirs in the adjoint
representation according to eq. (7.83), and one recovers the cusp anomalous
dimension in the adjoint representation as in eq. (7.84). In this way the
singularities of the gluon Regge trajectory satisfy eq. (7.94).
We thus conclude that the natural generalisation of the relation of eq. (7.94)
between the gluon Regge trajectory and the cusp anomalous dimension amounts to
the requirement that the terms linear in $L$ within the signature-even part of
the soft anomalous dimension be simply $L\bm{\Gamma}_{t}^{\rm{cusp}}$.
This conjecture can also be formulated as
$\displaystyle\left.\frac{d}{dL}\mathbf{\Gamma}^{(+)}_{ij\to
ij}\left(\alpha_{s}(\mu^{2}),\frac{-t}{\mu^{2}}\right)\right|_{L=0}{\cal
M}^{\rm tree}_{ij\to ij}\,=\,$
$\displaystyle\,\Gamma^{\rm{cusp}}_{A}(\alpha_{s}(-t))\,{\cal M}^{\rm
tree}_{ij\to ij}\,,$ (7.96)
which of course holds through three loops using eqs. (7.92) and (7.82), where
only the quadratic Casimir term ${\bf T}_{t}^{2}$ is present in
$\bm{\Gamma}_{t}^{\rm{cusp}}$. In contrast, at four loops the quartic operator
$\bm{\mathcal{D}}^{R}_{tttt}$ also becomes relevant.
With this in mind, let us examine the general structure of the signature-even
soft anomalous dimension at four loops in eq. (7.85). The expected
$L\bm{\Gamma}_{t}^{\rm{cusp},(4)}$ term is indeed there. So, for our
conjecture to hold, all other terms which depend on $L$ must not contain any
further linear contribution. Differentiating eq. (7.85) and suppressing higher
logarithms one finds:
$\displaystyle\begin{split}\frac{d\bm{\Gamma}^{(+,4)}_{ij\to
ij,\text{Regge}}}{dL}\bigg{|}_{L=0}\\!\\!\mathcal{M}_{\text{tree}}&=\Bigg{\\{}\bm{\Gamma}_{t}^{\rm{cusp},(4)}\,+\,2\left(\sum_{R}{\cal
F}^{(+,4,1)}_{R}\right)\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}\\\
&\hskip-30.0pt+2\sum_{R}\left({\cal
G}^{(4,1)}_{R}-\frac{g^{(4)}_{R}}{6}\right)\bigg{(}\bm{\mathcal{D}}^{R}_{tttt}+\bm{\mathcal{D}}^{R}_{ssss}+\bm{\mathcal{D}}^{R}_{uuuu}-2\left(\frac{d_{RR_{i}}}{N_{R_{i}}}+\frac{d_{RR_{j}}}{N_{R_{j}}}\right)\bigg{)}\\\
&\hskip-30.0pt+{\cal
H}^{(+,4,1)}\bigg{(}-\frac{1}{2}C_{A}\Big{[}\mathbf{T}_{s-u}^{2},[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\Big{]}+C_{A}\mathbf{T}_{s-u}^{2}[\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}]\\\
&\quad\mbox{}\hskip
142.26378pt-\frac{1}{6}\mathbf{T}_{t}^{2}[(\mathbf{T}_{s-u}^{2})^{2},\mathbf{T}_{t}^{2}]\bigg{)}\Bigg{\\}}\mathcal{M}_{\text{tree}}\,,\end{split}$
(7.97)
which satisfies the conjectured relation in eq. (7.96) subject to the
following constraints
$\displaystyle{\cal F}^{(+,4,1)}_{R}=0,\hskip 28.45274pt{\cal G}^{(4,1)}_{R}$
$\displaystyle=\frac{g^{(4)}_{R}}{6},\hskip 28.45274pt{\cal
H}^{(+,4,1)}_{1}=0\,.$ (7.98)
The coefficients $g_{R}^{(4)}$ are known in QCD Boels:2017ftb ; Boels:2017skl
; Moch:2017uml ; Grozin:2017css ; Henn:2019swt ; Huber:2019fxe ;
vonManteuffel:2020vjv ; Agarwal:2021zft ; see eq. (B.3). We stress that the
constraints in eq. (7.98), which concern the uncharted territory of N3LLs,
have a very different status as compared to those of eqs. (7.77), (7.79) and
(7.81): while the latter are based on explicit calculations in the high-energy
limit, the former rely on a conjectured generalisation of the relation
between the gluon Regge trajectory and the cusp anomalous dimension. One may
note the intriguing similarity between the constraint on ${\cal G}^{(4)}_{R}$
in eq. (7.98) and the collinear constraint Becher:2019avh on the same
function in eq. (7.34). We stress that these are different kinematic limits.
Whereas in the high-energy limit the function ${\cal G}^{(4)}_{A}$ does have a
double logarithmic contribution – see eq. (7.81a) – in the collinear limit eq.
(7.34) forbids any non-linear dependence on the relevant logarithm.
##### Summary: Regge-limit constraints on $\mathbf{\Gamma}_{n}^{(4)}$.
We derived the soft anomalous dimension in the high-energy limit and used it
to constrain the kinematic functions parametrising this quantity in general
kinematics as proposed in ref. Becher:2019avh . The computed four-loop result,
taking into account NLLs of both even and odd signature, along with the newly-
computed NNLL of even signature, appears in eq. (6.48). In turn, upon taking
the general-kinematics parametrisation and specialising it to $2\to 2$
kinematics in the Regge limit, we obtained eqs. (7.70) and (7.71). Having
chosen a common basis of colour operators for both the computed result and the
parametrised one, the values of the expansion coefficients of the unknown
kinematic functions in powers of $L$ can be directly deduced, and are
summarised in eqs. (7.77), (7.79) and (7.81).
In addition, analysing the connection between the gluon Regge trajectory and
the cusp anomalous dimension originally proposed in ref. Korchemskaya:1994qp ;
Korchemskaya:1996je , we conjectured that the linear term in the soft
anomalous dimension in the Regge limit is given precisely by
$\mathbf{\Gamma}_{t}^{\rm cusp}L$ such that eq. (7.96) holds. This directly
implies an additional set of constraints on the N3LL signature-even
contributions to the soft anomalous dimension according to eq. (7.98).
| Signature even | $L^{3}$ | $L^{2}$ | $L^{1}$ (conj.) | Signature odd | $L^{3}$ | $L^{2}$ | $L^{1}$ |
|---|---|---|---|---|---|---|---|
| ${\cal F}^{(+,4)}_{A}$ | $0$ | $-\frac{C_{A}}{8}\zeta_{2}\zeta_{3}$ | $0$ | ${\cal F}^{(-,4)}_{A}$ | $i\pi\frac{C_{A}}{24}\zeta_{3}$ | ? | ? |
| ${\cal F}^{(+,4)}_{F}$ | $0$ | $0$ | $0$ | ${\cal F}^{(-,4)}_{F}$ | $0$ | ? | ? |
| ${\cal G}^{(+,4)}_{A}$ | $0$ | $\frac{1}{2}\zeta_{2}\zeta_{3}$ | $\frac{1}{6}g_{A}^{(4)}$ | | | | |
| ${\cal G}^{(+,4)}_{F}$ | $0$ | $0$ | $\frac{1}{6}g_{F}^{(4)}$ | | | | |
| ${\cal H}^{(+,4)}_{1}$ | $0$ | $0$ | $0$ | ${\cal H}^{(-,4)}_{1}$ | $0$ | ? | ? |
| | | | | $\tilde{{\cal H}}^{(-,4)}_{1}$ | $0$ | ? | ? |
Table 1: Constraints on the high-energy limit of the kinematic functions
entering the soft anomalous dimension at four loops, separated by signature.
Note that ${\cal G}$ only has a signature-even component, and ${\cal
H}_{1}^{(4)}$ is purely gluonic. All constraints at order $L^{3}$ and $L^{2}$
in this table are based on explicit computations in the high-energy limit,
while those for order $L^{1}$ are based on the conjectured generalisation of
the relation between cusp singularities and the Regge pole to four loops. The
coefficients $g_{R}^{(4)}$ are known in QCD Boels:2017ftb ; Boels:2017skl ;
Moch:2017uml ; Grozin:2017css ; Henn:2019swt ; Huber:2019fxe ;
vonManteuffel:2020vjv ; Agarwal:2021zft and are quoted in eq. (B.3).
The full set of constraints on the four-loop kinematic functions is summarised
in Table 1. Here, the left half of the table summarises the constraints on
signature-even (real) functions, while the right half summarises the signature-odd
(imaginary) ones. While the function ${\cal G}_{R}$ multiplying the quartic
four-line term is by construction signature-even, the two other kinematic
functions ${\cal F}_{R}$ and ${\cal H}_{1}$ have both even and odd components
and their decomposition is given in eqs. (7.40) and (7.66), respectively. We
note that our current knowledge of the signature-even contributions is far
greater than that of the odd. In the table we represented unknown expansion
coefficients by question marks.
As a final note we emphasise that the above constraints are fully consistent
with the result by Vladimirov Vladimirov:2017ksc , that only colour operators
consisting of an even number of generators can appear in the soft anomalous
dimension. This implies that the functions ${\cal H}_{1}$ and ${\cal H}_{2}$
multiplying the five-generator terms in the soft anomalous dimension in eq.
(7.13) vanish identically. This is in line with the last row in Table 1, as
well as the collinear-limit constraints on these functions in ref.
Becher:2019avh .
## 8 Conclusion
In this work we take a step forward in the understanding of $2\to 2$ gauge-
theory amplitudes in the high-energy limit, by studying the tower of NNLL
corrections in the signature-odd (real) amplitude and computing these explicitly
through four loops. This tower of corrections is particularly interesting for the analysis
of the Regge limit, because amplitudes at this logarithmic accuracy develop a
rich structure, featuring both a Regge pole and a Regge cut. Furthermore,
taking the high-energy limit gives us access to properties of four-loop
amplitudes, which are beyond the reach of perturbative calculations with
state-of-the-art techniques. Chief among these is the long-distance
singularity structure in fixed-angle scattering, for which the high-energy
limit is highly constraining.
In order to compute amplitudes in the Regge limit, we employ the method
described in refs. Caron-Huot:2013fea ; Caron-Huot:2017fxr . In essence this
approach allows us to compute transition amplitudes between the projectile and
the target at widely separated rapidities, each described by a state
consisting of a given number of Reggeons. The Balitsky-JIMWLK Hamiltonian is
then used to evolve the Reggeon states to the same rapidity. At NNLL accuracy
this involves, beyond the single Reggeon state, also triple-Reggeon states and
mixing amongst these. The sum of all transition amplitudes defines a reduced
amplitude, which is related to the full amplitude by simple multiplicative
factors, eq. (2.44).
We classify the transition amplitudes entering the NNLL of the reduced
amplitude to all loop orders, in eq. (3.60). These fall into two distinct
categories, one being a purely Single Reggeon State (SRS) transition and the
other including four transitions involving Multiple Reggeon States (MRS). The
former features a single Reggeon in both the projectile and the target,
undergoing trivial rapidity evolution as a single Reggeon across the entire
rapidity interval. The latter include all transitions in which a triple-
Reggeon state is generated at any stage during the evolution, be it at the
projectile or the target ends, or during the course of rapidity evolution.
Specifically, the aforementioned four are: $3\to 3$ transitions, $1\to 3$ and
$3\to 1$ transitions, and $1\to 1$ transitions that are mediated by a triple-Reggeon
state in the evolution. We show that at NNLL accuracy MRS transition amplitudes can be
computed to any perturbative order by iterating the _leading-order_ Balitsky-
JIMWLK Hamiltonian. Thus, the MRS are universal quantities in any gauge
theory, which do not depend on the matter content.
We computed the reduced amplitudes through four loops, given in eqs. (5.13),
(5.21) and (5.36), providing a detailed derivation of results presented in
ref. Falcioni:2020lvv . In particular, we developed a new method to calculate
the colour factor of the amplitude, when the target and the projectile belong
to general representations of the gauge group. This allowed us to derive new
colour identities and obtain expressions of the reduced amplitudes in an
operator form, which is suitable to investigate universal features of both the
infrared and of the high-energy factorisation. We found that only the $3\to 3$
transitions feature non-trivial colour structure, where different colour
components mix during evolution. All the other MRS transitions are
proportional to the colour octet exchange to all perturbative orders. We
observed that $3\to 1$ and $1\to 3$ transitions at three and at four loops, in
eqs. (5.20) and (5.30) respectively, cancel exactly against corresponding
terms (which involve quartic Casimirs associated with the representations of
the projectile and the target) in the $3\to 3$ exchanges of eqs. (5.18) and
(5.26). We conjecture that such a mechanism is in place to all perturbative
orders and that it completely removes all contributions to the amplitude from
$3\to 1$ and $1\to 3$ transitions. As a result, only the $1\to 1$ transition
generates mixing between states with one and three Reggeons, as we check
explicitly at four loops. There, a further cancellation takes place: the $1\to
1$ contribution in eq. (5.32) cancels the planar terms of $3\to 3$ transitions
in eq. (5.26), to all orders in $\epsilon$. This renders the reduced amplitude
at four loops, eq. (5.3), manifestly non-planar.
The complete cancellation of the contributions emerging from mixing between
single and triple Reggeon states, against corresponding terms associated with
quartic Casimirs in the $3\to 3$ evolution, is highly suggestive of a general
pattern, extending to all orders in this tower of logarithms. As we have seen,
it leads to a partial cancellation of planar contributions in the reduced
amplitude at three loops, and a complete cancellation of such at four loops.
Our expectation is that the reduced amplitude will be non-planar at any order
beyond four loops. The only planar contributions in the reduced amplitude then
occur at two and three loops, before the full set of single-triple Reggeon
transitions opens up.
The non-planar nature of the total contribution to the reduced amplitude from
multiple Reggeon states at four loops (and likely beyond) points to a simple
relation between these quantities and Regge cuts, which are known to arise
only from non-planar diagrams Mandelstam:1963cw ; Eden:1966dnq ;
Collins:1977jy . However, the separation between single-Reggeon state (SRS)
and multiple-Reggeon state (MRS) contributions to the amplitude as defined in
our calculation, is not in one-to-one correspondence with the separation
between the Regge pole and the Regge cut contributions. This is already clear
at two and at three loops, where MRS do contain planar contributions. Hence,
the MRS give rise to both pole and cut contributions, while the SRS
contributes exclusively to the Regge-pole exchange.
In order to elucidate the separation between Regge cut and pole, we rely again
on the structure of the reduced amplitudes in the planar limit. We find that
MRS contributions that are leading in $N_{c}$ appear only in the colour octet
component and are independent of the process, both at two loops, eq. (5.15),
and at three loops, eq. (5.22). Following this analysis of colour factors, we
show that both the SRS contribution and the planar terms of the MRS
contribution may be described by Regge-pole factorisation, while all remaining
non-planar MRS terms define a cut contribution, as done in eq. (5.40). We name
this separation of the amplitude the Regge-cut scheme. It departs from the one
adopted in ref. Caron-Huot:2017fxr , dubbed MRS scheme, where the SRS
contribution alone is factorised as a Regge pole. The change of scheme
modifies the definition of the impact factors and of the Regge trajectory,
which determine the Regge pole contribution, by the planar part of the MRS
contribution. Notably, the two-loop impact factors and the three-loop Regge
trajectory completely characterise the Regge-pole contribution to the NNLL to
all orders. At four loops and beyond there is no parameter which could allow
one to shuffle planar MRS contributions to the Regge pole. Therefore, starting
at four loops the MRS transition amplitudes must contribute exclusively to the
cut and must be entirely non-planar. This is indeed what we find in our four-
loop calculation, eqs. (5.3) and (5.36).
In section 5.4 we construct the complete amplitudes and then we distinguish
pole and cut contributions to the amplitude according to the Regge-cut scheme
(5.40). At two loops, we provide in eq. (5.50) the definition of the Regge cut
coefficient $\mathcal{M}^{(-,2,0),\,\text{cut}}_{ij\to ij}$ in an operator
form, which is valid to all orders in $\epsilon$ for every colour component in
any process. This coincides with the MRS contribution of eq. (5.13), with its
planar limit, eq. (5.15), subtracted. We find that, in the octet component,
$\mathcal{M}^{(-,2,0),\,\text{cut}}_{ij\to ij}$ agrees with the Regge-pole
factorisation breaking term $R^{(2),0,[8]}_{ij}$, defined in refs.
DelDuca:2013ara ; DelDuca:2014cya on the basis of infrared factorisation. We
determine the corresponding quark and gluon impact factors in this scheme, eq.
(5.48), by giving their relation with the results in the MRS scheme Caron-
Huot:2017fxr . Remarkably, it is possible to move into the impact factors
further terms that appear in $\mathcal{M}^{(-,2,0),\,\text{cut}}_{ij\to ij}$
and are subleading in $N_{c}$, as done in eq. (5.51). This follows from the
structure of the non-planar terms in the reduced amplitude at two loops, given
in eq. (5.12). By following this redefinition, we obtain a new cut,
$\mathcal{M}^{(-,2,0),\,\text{FL-cut}}_{ij\to ij}$, defined in eq. (5.52),
which agrees with the two-loop Regge cut $A_{\text{eik}}\,C_{ij}^{C}$ computed
by Fadin and Lipatov Fadin:2016wso ; Fadin:2017nka .
At three loops, the Regge cut $\mathcal{M}^{(-,3,1),\,\text{cut}}_{ij\to ij}$
takes the form of eq. (5.64). It includes a term proportional to
$\mathcal{M}^{(-,2,0),\,\text{cut}}_{ij\to ij}$ plus the reduced amplitude at
three loops $\hat{\mathcal{M}}^{(-,3,1)}_{ij\to ij}$, eq. (5.21), with its
planar part subtracted. The latter is assigned to the Regge pole and thus it
enters the Regge trajectory at three loops. Eq. (5.60) provides the relation
between the three-loop trajectory in the Regge-cut scheme and in the MRS
scheme of ref. Caron-Huot:2017fxr . In that work, the three-loop trajectory
was determined in the MRS scheme for $\mathcal{N}=4$ SYM. There, it was also
pointed out that the MRS scheme breaks a well-known relation
Korchemskaya:1994qp ; Korchemskaya:1996je between the infrared singularities
of the gluon Regge trajectory and $K(\alpha_{s})$ of eq. (2.30a), the integral
over the lightlike cusp anomalous dimension. This relation holds for the two-
loop Regge trajectory, but it is violated at three loops in the MRS scheme. In
contrast, we find that the three-loop Regge trajectory in the Regge-cut
scheme, $\tilde{\alpha}_{g}^{(3)}$, features precisely the singularities
predicted by the cusp anomalous dimension, as shown in eq. (5.62).
We compute also the finite contribution to $\tilde{\alpha}_{g}^{(3)}$ in
$\mathcal{N}=4$ SYM in full colour. Notably, we find that the latter agrees
with the known result in the planar theory Drummond:2007aua ; Naculich:2007ub
, without any non-planar correction. In other words, the trajectory features a
maximally non-Abelian colour factor, which is in line with the expected
eikonal origin for this quantity Korchemskaya:1994qp ; Korchemskaya:1996je ;
Falcioni:2019nxk .
Our three-loop analysis suggests that the Regge-cut scheme captures the
analytic structure of high-energy amplitudes. As a confirmation, we find that,
in this scheme, the Regge cut agrees with the function $R^{(3),1,[8]}_{ij\to
ij}$ of refs. DelDuca:2013ara ; DelDuca:2014cya , which contains the
factorisation-breaking singularities in the octet component. However,
different choices are also possible. In particular, as mentioned above, using
eq. (5.12) we identify a specific set of non-planar terms in the reduced two-
loop amplitude that are consistent with Regge-pole factorisation. Absorbing
these into the Regge-pole term at two loops (eq. (5.52)) modifies the
contribution of the Regge cut at three loops of eq. (5.64), only by replacing
$\mathcal{M}^{(-,2,0),\,\text{cut}}_{ij\to ij}$ with the expression of the cut
in the new scheme, $\mathcal{M}^{(-,2,0),\,\text{FL-cut}}_{ij\to ij}$. We
verify that the three-loop cut defined in this way coincides with the cut
contribution, $-A_{\text{eik}}C_{ij}^{C}\,(C_{R}+C_{3})$, in refs.
Fadin:2016wso ; Fadin:2017nka . Notably, the three-loop Regge trajectory is
not affected by colour subleading terms and, even with the FL definition of
the cut, it maintains its relation with the lightlike cusp anomalous
dimension, as well as its maximally non-Abelian colour factor. Therefore, our
new analysis of the colour factors in the reduced amplitudes allows us to
find the precise relation between the computational scheme introduced in refs.
Caron-Huot:2013fea ; Caron-Huot:2017fxr and the study of factorisation
breaking and of the Regge cut, performed respectively in refs. DelDuca:2013ara
; DelDuca:2013dsa ; DelDuca:2014cya and Fadin:2016wso ; Fadin:2017nka ,
finding complete agreement.
Our expression for the Regge cut at four loops is given in eq. (5.71), in
terms of the reduced amplitude at four loops and the cut contributions at two
and three loops. Since the former, eq. (5.36), is non-planar by direct
computation, and the latter two terms are defined in the Regge-cut scheme to
be non-planar by construction, we find that the four-loop cut contribution to the
amplitude $\mathcal{M}^{(-,4,2),\text{cut}}_{ij\to ij}$ is non-planar as a
whole. Furthermore, we show in eq. (5.72) that by the same mechanism, in this
scheme the non-planar nature of the reduced amplitude ensures that the cut
remains non-planar to all loop orders.
In sections 6 and 7 we proceed with the investigation of infrared
factorisation at four loops, employing our explicit NNLL calculation as an
input. The comparison between the exponentiation of infrared singularities and
that of high-energy logarithms is useful in several ways. First, it is a
highly non-trivial check of the results. Second, it provides a rich source of
constraints on the yet-unknown soft anomalous dimension at four loops (see
below). Third, it allows us to extract the hard function containing finite
terms in the amplitude through four loops, both in QCD and in $\mathcal{N}=4$
super Yang-Mills (SYM), finding an intriguing relation between the hard
function and the finite parts of the gluon Regge trajectory, eq. (6.40). The
planar terms in the hard function in SYM agree with the predicted
large-$N_{c}$ limit Bern:2005iz ; Drummond:2007aua .
We study the soft anomalous dimension through four loops. In the high-energy
limit, we separate the contributions of the dipole formula Gardi:2009qi ;
Gardi:2009zv ; Becher:2009cu ; Becher:2009qa from a general remainder
$\bf\Delta$ that starts at three loops, expanding both in powers of the
signature-even logarithm $L$, defined in eq. (2.9). While dipole contributions
are at most linear in $L$, the remainder contains higher powers of the
logarithm, for instance its imaginary part contains terms $L^{3}$ at four
loops Caron-Huot:2013fea ; Caron-Huot:2017zfo . Notably, the real part of the
remainder at three loops ${\bf\Delta}^{(+,3)}$ does not depend on $L$
Almelid:2015jia . In particular, since it lacks linear terms in $L$, it does
not contribute to the tower of NNLLs in the soft anomalous dimension to that
order Caron-Huot:2017fxr . Here we compute the real part of the remainder to
four loops, ${\bf\Delta}^{(+,4)}$, through NNLL, i.e. ${\cal
O}(\alpha_{s}^{4}L^{2})$, finding the first non-vanishing contribution to the
NNLL tower, eq. (6.39). This quantity is manifestly non-planar: it is written
in terms of commutators of the channel operators and the combination of
Casimir invariants $\frac{d_{AA}}{N_{A}}-\frac{C_{A}^{4}}{24}$, which is
subleading in the large-$N_{c}$ limit. This is a strong check on our
calculation, because in the planar limit only diagrams connecting up to two
legs can contribute to the soft anomalous dimension.
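As a minimal numerical aside (our own sketch, not part of the calculation itself), this large-$N_{c}$ counting can be made explicit with a short symbolic computation, using the known SU($N_{c}$) values $C_{A}=N_{c}$ and $d_{AA}/N_{A}=N_{c}^{2}(N_{c}^{2}+36)/24$:

```python
import sympy as sp

Nc = sp.symbols('N_c', positive=True)

# Standard SU(N_c) group invariants: C_A = N_c, N_A = N_c^2 - 1, and the
# quartic Casimir ratio d_AA / N_A = N_c^2 (N_c^2 + 36) / 24.
CA = Nc
dAA_over_NA = Nc**2 * (Nc**2 + 36) / 24

combo = sp.expand(dAA_over_NA - CA**4 / 24)
print(combo)  # -> 3*N_c**2/2
```

The $N_{c}^{4}$ terms cancel, leaving $3N_{c}^{2}/2$, which is indeed suppressed by two powers of $N_{c}$ relative to the individual terms.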
We characterise our result for the soft anomalous dimension in eq. (6.39)
further, by comparing it with the general parametrisation for the four-loop
soft anomalous dimension in general kinematics Becher:2019avh , which consists
of all connected Gardi:2013ita colour structures that may arise at this
order, each multiplying a yet-unknown kinematic function. We compute the high-
energy limit of this parametrisation of the soft anomalous dimension through
NNLLs, both in the real part, eq. (7.70), and in the imaginary part, eq.
(7.71). Focusing on the terms in the real part of the anomalous dimension, we
find only three contributions. Two of them involve colour quadrupole
correlations, featuring four generators, one on each of the four lines. Of
these, one colour structure is of the same form as the one appearing at three loops
Almelid:2015jia ; Almelid:2017qju multiplied by the kinematic function
$\mathcal{F}_{A}^{(+,4)}$, while the second involves a quartic Casimir type
structure (a symmetric trace of four adjoint generators) multiplied by
$\mathcal{G}_{A}^{(+,4)}$. In the Regge limit, both are expressed in terms of
nested commutators of channel colour operators, where the second also features
the combination $\frac{d_{AA}}{N_{A}}-\frac{C_{A}^{4}}{24}$. The final
potential contribution to the soft anomalous dimension, featuring the unknown
function $\mathcal{H}_{1}^{(+,4)}$, generates correlations among four lines
using five colour generators. Matching the general parametrisation with our
result for the anomalous dimension provides non-trivial constraints on the high-
energy limit of the functions $\mathcal{F}_{A}^{(+,4)}$, in eq. (7.81a), and
$\mathcal{G}_{A}^{(+,4)}$, in eq. (7.81b). Interestingly, the function
$\mathcal{H}^{(+,4)}_{1}$ must vanish to this logarithmic accuracy. This is
consistent with the result of ref. Vladimirov:2017ksc , which shows that the
correlation of an odd number of colour operators is always prohibited in the
soft anomalous dimension. We determine all constraints on the parametrisation
in ref. Becher:2019avh that can be derived from the available information on
the Regge limit and we summarise our findings in Table 1.
We expect that the interplay between high-energy and infrared factorisation
will provide further insight into gauge-theory dynamics. This conclusion is
already suggested by arguments about the gluon Regge trajectory. We have
pointed out that this quantity – both its singular and its finite parts – is
expected to be associated to the anomalous dimension of Wilson-line geometries
Korchemskaya:1994qp ; Korchemskaya:1996je ; Falcioni:2019nxk . Here we
verified, up to three loops, the correspondence between the infrared
singularities of the Regge trajectory and the terms proportional to the
quadratic Casimir in the lightlike cusp anomalous dimension. We conjecture
that this relation generalises to four loops and beyond as in eq. (7.94),
where we identify the singularities of the Regge trajectory with the integral
of the complete cusp anomalous dimension, eq. (7.5), including quartic Casimir
(and higher) contributions. This has profound implications on the structure of
the soft anomalous dimension, beyond the accuracy of our calculation.
Specifically, at four loops, the N3LL contribution to the soft anomalous
dimension must be related to the four-loop Regge trajectory. More generally,
if our conjecture holds, linear terms in $L$ in the real part of the soft
anomalous dimension in the Regge limit must be simply proportional to the
complete cusp anomalous dimension in the adjoint representation; equivalently,
we expect eq. (7.96) to hold to all orders. At four loops this provides three
new constraints on the soft anomalous dimension, given in eq. (7.98). The
vanishing of $\mathcal{H}^{(+,4)}_{1}$ is of course consistent with the
finding of ref. Vladimirov:2017ksc . The results are included in Table 1,
which provides important input to the bootstrap program to determine the soft
anomalous dimension in general kinematics, which has already been successful
at the three-loop level Almelid:2017qju . Our work paves the way for
bootstrapping this quantity to four loops, where direct calculations are not
yet feasible.
###### Acknowledgements.
We would like to thank Simon Caron-Huot for insightful comments and Claude
Duhr and Andrew McLeod for collaboration on a related project on the soft
anomalous dimension. EG, GF and NM are supported by the STFC Consolidated
Grant ‘Particle Physics at the Higgs Centre’. GF is supported by the ERC
Starting Grant 715049 ‘QCDforfuture’ with Principal Investigator Jennifer
Smillie. CM’s work is supported by the Italian Ministry of University and
Research (MIUR), grant PRIN 20172LNEEZ. LV is supported by Fellini, Fellowship
for Innovation at INFN, funded by the European Union’s Horizon 2020 research
programme under the Marie Skłodowska-Curie Cofund Action, grant agreement no.
754496.
## Appendix A Coefficients of the Regge pole amplitude
In this appendix we collect the coefficients describing the Regge pole part of
the two-parton scattering amplitude, namely, the Regge trajectory and impact
factors. As discussed in section 2.1, this component of the amplitude is
scheme-dependent, starting at NNLL accuracy. Below we begin by compiling the
coefficients in the MRS scheme of eq. (2.38) and then proceed to discuss the
cut scheme of eq. (2.39).
Following the definition in eq. (6.15), we split the Regge trajectory into a
component proportional to the integral of the cusp anomalous dimension,
$K(\alpha_{s})$ defined in eq. (2.30a), and a remainder, $\hat{\alpha}_{g}$:
$\alpha_{g}(t)=K\left(\alpha_{s}(-t)\right)+\hat{\alpha}_{g}(t).$ (A.1)
Expanding the terms in this equation according to eq. (2.6), in QCD one has
$\displaystyle\begin{split}K^{(1)}&=\frac{{\gamma}_{K}^{(1)}}{4\epsilon},\\\
K^{(2)}&=\frac{{\gamma}_{K}^{(2)}}{8\epsilon}-\frac{b_{0}\,{\gamma}_{K}^{(1)}}{32\epsilon^{2}},\\\
K^{(3)}&=\frac{{\gamma}_{K}^{(3)}}{12\epsilon}-\frac{b_{0}\,{\gamma}_{K}^{(2)}+b_{1}\,{\gamma}_{K}^{(1)}}{48\epsilon^{2}}+\frac{b_{0}^{2}\,{\gamma}_{K}^{(1)}}{192\epsilon^{3}},\end{split}$
(A.2a)
where the coefficients $\gamma_{K}^{(i)}$ are given in eq. (B.2). The
remainder, i.e. the cusp-subtracted trajectory $\hat{\alpha}_{g}(t)$, is known
up to two loops in QCD. Its coefficients read Lipatov:1976zz ; Kuraev:1976ge ;
Fadin:1995xg ; Fadin:1996tb ; Fadin:1995km ; Blumlein:1998ib
$\displaystyle\hat{\alpha}_{g}^{(1)}$
$\displaystyle=\frac{1}{2\epsilon}(r_{\Gamma}-1)=-\frac{1}{4}\zeta_{2}\,\epsilon-\frac{7}{6}\zeta_{3}\,\epsilon^{2}+{\cal
O}(\epsilon^{3}),$ (A.3a) $\displaystyle\hat{\alpha}_{g}^{(2)}$
$\displaystyle=C_{A}\left(\frac{101}{108}-\frac{\zeta_{3}}{8}\right)-\frac{7n_{f}}{54}+{\cal
O}(\epsilon)\,.$ (A.3b)
The cusp-subtracted trajectory at three loops has been calculated in ${\cal
N}=4$ SYM in ref. Caron-Huot:2017fxr , by extracting it from the two-parton
scattering amplitude obtained in ref. Henn:2016jdu . It reads
$\hat{\alpha}_{g}^{(3)}\rvert_{\text{SYM}}=C_{A}^{2}\left(-\frac{\zeta_{2}}{144\epsilon^{3}}+\frac{5\zeta_{4}}{192}\frac{1}{\epsilon}+\frac{107}{144}\zeta_{2}\zeta_{3}+\frac{\zeta_{5}}{4}+{\cal
O}\left(\epsilon\right)\right).$ (A.4)
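As a bookkeeping aid (our own sketch, with $\gamma_{K}^{(i)}$, $b_{0}$ and $b_{1}$ kept symbolic), the coefficients $K^{(\ell)}$ of eq. (A.2a) can be transcribed literally; substituting $\gamma_{K}^{(1)}=2$ from eq. (B.2) reproduces the familiar one-loop pole $K^{(1)}=1/(2\epsilon)$:

```python
import sympy as sp

eps = sp.symbols('epsilon')
gK1, gK2, gK3, b0, b1 = sp.symbols('gamma_K1 gamma_K2 gamma_K3 b_0 b_1')

# Literal transcription of eq. (A.2a):
K1 = gK1/(4*eps)
K2 = gK2/(8*eps) - b0*gK1/(32*eps**2)
K3 = (gK3/(12*eps) - (b0*gK2 + b1*gK1)/(48*eps**2)
      + b0**2*gK1/(192*eps**3))

print(K1.subs(gK1, 2))  # 1/(2*epsilon), using gamma_K^(1) = 2 from eq. (B.2)
```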
The description of the Regge-pole component of the amplitude is completed by
the information provided by the quark and gluon impact factors. Following the
definition in eq. (2.41), we split the impact factors into a term
$Z_{i/j}(t)$, defined as the integral of the anomalous dimension
$\Gamma_{i/j}$, see eq. (2.42), and a collinear-subtracted remainder
$D_{i/j}(t)$:
$C_{i/j}(t)=Z_{i/j}(t)\,D_{i/j}(t).$ (A.5)
In terms of the coefficients collected in appendix B, and setting $\mu^{2}=-t$,
the perturbative expansion of $Z_{i}(t)$ (see eq. (2.43a)) reads
$\displaystyle\begin{split}Z_{i}^{(0)}&=1,\\\
Z_{i}^{(1)}&=-\,C_{i}\,{\gamma}_{K}^{(1)}\frac{1}{4\epsilon^{2}}+\frac{\gamma_{i}^{(1)}}{\epsilon},\\\
Z_{i}^{(2)}&=C^{2}_{i}\left({\gamma}_{K}^{(1)}\right)^{2}\frac{1}{32\epsilon^{4}}\,+C_{i}\,\Bigg{[}\frac{1}{\epsilon^{3}}\frac{{\gamma}_{K}^{(1)}}{4}\left(\frac{3b_{0}}{16}-\gamma_{i}^{(1)}\right)-\,\frac{1}{\epsilon^{2}}\frac{{\gamma}_{K}^{(2)}}{16}\Bigg{]}\\\
&\hskip
14.22636pt+\frac{1}{\epsilon^{2}}\frac{\gamma_{i}^{(1)}}{2}\left(\gamma_{i}^{(1)}-\frac{b_{0}}{4}\right)+\frac{\gamma_{i}^{(2)}}{2\epsilon}.\end{split}$
(A.6)
The one- and two-loop coefficients of the quark and gluon collinear-subtracted
impact factors have been calculated in the MRS scheme of eq. (2.38) in ref.
Caron-Huot:2017fxr . For instance, at one loop one has
$\displaystyle\begin{split}D_{g}^{(1)}&=-N_{c}\left(\frac{67}{72}-\zeta_{2}\right)+\frac{5}{36}n_{f}+\epsilon\bigg{[}N_{c}\left(-\frac{101}{54}+\frac{11}{48}\zeta_{2}+\frac{17}{12}\zeta_{3}\right)+n_{f}\left(\frac{7}{27}-\frac{\zeta_{2}}{24}\right)\bigg{]}\\\
&+\epsilon^{2}\bigg{[}N_{c}\left(-\frac{607}{162}+\frac{67}{144}\zeta_{2}+\frac{77}{72}\zeta_{3}+\frac{41}{32}\zeta_{4}\right)+n_{f}\left(\frac{41}{81}-\frac{5}{72}\zeta_{2}-\frac{7}{36}\zeta_{3}\right)\bigg{]}+{\cal
O}(\epsilon^{3})\,,\end{split}$ (A.7a)
$\displaystyle\begin{split}D_{q}^{(1)}&=N_{c}\left(\frac{13}{72}+\frac{7}{8}\zeta_{2}\right)+\frac{1}{N_{c}}\left(1-\frac{1}{8}\zeta_{2}\right)-\frac{5}{36}n_{f}+\epsilon\bigg{[}N_{c}\left(\frac{10}{27}-\frac{\zeta_{2}}{24}+\frac{5}{6}\zeta_{3}\right)\\\
&+\frac{1}{N_{c}}\left(2-\frac{3}{16}\zeta_{2}-\frac{7}{12}\zeta_{3}\right)+n_{f}\left(-\frac{7}{27}+\frac{\zeta_{2}}{24}\right)\bigg{]}+\epsilon^{2}\bigg{[}N_{c}\left(\frac{121}{162}-\frac{13}{144}\zeta_{2}-\frac{7}{36}\zeta_{3}+\frac{35}{64}\zeta_{4}\right)\\\
&+\frac{1}{N_{c}}\left(4-\frac{\zeta_{2}}{2}-\frac{7}{8}\zeta_{3}-\frac{47}{64}\zeta_{4}\right)+n_{f}\left(-\frac{41}{81}+\frac{5}{72}\zeta_{2}+\frac{7}{36}\zeta_{3}\right)\bigg{]}+{\cal
O}(\epsilon^{3})\,.\end{split}$ (A.7b)
At two loops:
$\displaystyle\begin{split}D_{g}^{(2)}&=-\frac{\zeta_{2}}{32\epsilon^{2}}N_{c}^{2}+N_{c}^{2}\bigg{(}-\frac{26675}{10368}+\frac{335}{288}\zeta_{2}+\frac{11}{18}\zeta_{3}-\frac{\zeta_{4}}{64}\bigg{)}\\\
&+N_{c}n_{f}\bigg{(}\frac{2063}{3456}-\frac{25}{144}\zeta_{2}+\frac{\zeta_{3}}{72}\bigg{)}+\frac{n_{f}}{N_{c}}\bigg{(}-\frac{55}{384}+\frac{\zeta_{3}}{8}\bigg{)}-\frac{25}{2592}n_{f}^{2}+{\cal
O}(\epsilon)\,,\end{split}$ (A.8a)
$\displaystyle\begin{split}D_{q}^{(2)}&=-\frac{\zeta_{2}}{32\epsilon^{2}}N_{c}^{2}+N_{c}^{2}\bigg{(}\frac{22537}{41472}+\frac{87}{64}\zeta_{2}+\frac{41}{144}\zeta_{3}-\frac{15}{256}\zeta_{4}\bigg{)}+\frac{28787}{10368}+\frac{19}{32}\zeta_{2}\\\
&-\frac{205}{288}\zeta_{3}-\frac{47}{128}\zeta_{4}+\frac{1}{N_{c}^{2}}\bigg{(}\frac{255}{512}+\frac{21}{64}\zeta_{2}-\frac{15}{32}\zeta_{3}-\frac{83}{256}\zeta_{4}\bigg{)}\\\
&+N_{c}n_{f}\bigg{(}-\frac{325}{648}-\frac{\zeta_{2}}{4}-\frac{23}{144}\zeta_{3}\bigg{)}+\frac{n_{f}}{N_{c}}\bigg{(}-\frac{505}{1296}-\frac{\zeta_{2}}{16}-\frac{19}{144}\zeta_{3}\bigg{)}+\frac{25}{864}n_{f}^{2}+{\cal
O}(\epsilon)\,.\end{split}$ (A.8b)
The full impact factors $C_{i}$ are obtained by inserting the results from eqs.
(A.6)-(A.8b) into eq. (A.5), and expanding order by order in the strong
coupling. At one loop we get
$\displaystyle C_{q}^{(1)}=$
$\displaystyle-\frac{C_{F}}{2\epsilon^{2}}-\frac{3C_{F}}{4\epsilon}+C_{A}\left(\frac{3\zeta_{2}}{4}+\frac{85}{72}\right)+C_{F}\left(\frac{\zeta_{2}}{4}-2\right)-\frac{5n_{f}}{36}+\epsilon\bigg{[}C_{A}\left(\frac{64}{27}-\frac{11\zeta_{2}}{48}+\frac{\zeta_{3}}{4}\right)$
$\displaystyle+C_{F}\left(\frac{3\zeta_{2}}{8}+\frac{7\zeta_{3}}{6}-4\right)+n_{f}\left(\frac{\zeta_{2}}{24}-\frac{7}{27}\right)\bigg{]}+\epsilon^{2}\bigg{[}C_{A}\left(\frac{769}{162}-\frac{85\zeta_{2}}{144}-\frac{77\zeta_{3}}{72}-\frac{3\zeta_{4}}{16}\right)$
$\displaystyle+C_{F}\left(\zeta_{2}+\frac{7\zeta_{3}}{4}+\frac{47\zeta_{4}}{32}-8\right)+n_{f}\left(\frac{5\zeta_{2}}{72}+\frac{7\zeta_{3}}{36}-\frac{41}{81}\right)\bigg{]}+\mathcal{O}(\epsilon^{3}),$
(A.9a) $\displaystyle C_{g}^{(1)}=$
$\displaystyle-\frac{C_{A}}{2\epsilon^{2}}-\frac{b_{0}}{4\epsilon}+C_{A}\left(\zeta_{2}-\frac{67}{72}\right)+\frac{5n_{f}}{36}+\epsilon\bigg{[}C_{A}\left(\frac{11\zeta_{2}}{48}+\frac{17\zeta_{3}}{12}-\frac{101}{54}\right)$
$\displaystyle+n_{f}\left(\frac{7}{27}-\frac{\zeta_{2}}{24}\right)\bigg{]}+\epsilon^{2}\bigg{[}C_{A}\left(\frac{67\zeta_{2}}{144}+\frac{77\zeta_{3}}{72}+\frac{41\zeta_{4}}{32}-\frac{607}{162}\right)$
$\displaystyle+n_{f}\left(\frac{41}{81}-\frac{5\zeta_{2}}{72}-\frac{7\zeta_{3}}{36}\right)\bigg{]}+\mathcal{O}(\epsilon^{3}).$
(A.9b)
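As a consistency check of eqs. (A.5)-(A.9b) at one loop, where the product $C_{i}=Z_{i}\,D_{i}$ reduces to the sum $C_{i}^{(1)}=Z_{i}^{(1)}+D_{i}^{(1)}$, the following sketch (our own, in sympy) verifies that $Z_{g}^{(1)}$ from eq. (A.6) plus $D_{g}^{(1)}$ from eq. (A.7a) reproduces $C_{g}^{(1)}$ of eq. (A.9b) through ${\cal O}(\epsilon^{2})$; note that $b_{0}$ cancels in the comparison, so its normalisation is immaterial here:

```python
import sympy as sp

eps, Nc, nf, b0 = sp.symbols('epsilon N_c n_f b_0')
z2, z3, z4 = sp.symbols('zeta_2 zeta_3 zeta_4')
R = sp.Rational

# Z_g^(1) from eq. (A.6), with gamma_K^(1) = 2 and gamma_g^(1) = -b_0/4:
Zg1 = -Nc/(2*eps**2) - b0/(4*eps)

# D_g^(1) from eq. (A.7a), through O(eps^2):
Dg1 = (-Nc*(R(67,72) - z2) + R(5,36)*nf
       + eps*(Nc*(-R(101,54) + R(11,48)*z2 + R(17,12)*z3)
              + nf*(R(7,27) - z2/24))
       + eps**2*(Nc*(-R(607,162) + R(67,144)*z2 + R(77,72)*z3 + R(41,32)*z4)
                 + nf*(R(41,81) - R(5,72)*z2 - R(7,36)*z3)))

# C_g^(1) as quoted in eq. (A.9b), with C_A -> N_c:
Cg1 = (-Nc/(2*eps**2) - b0/(4*eps) + Nc*(z2 - R(67,72)) + R(5,36)*nf
       + eps*(Nc*(R(11,48)*z2 + R(17,12)*z3 - R(101,54))
              + nf*(R(7,27) - z2/24))
       + eps**2*(Nc*(R(67,144)*z2 + R(77,72)*z3 + R(41,32)*z4 - R(607,162))
                 + nf*(R(41,81) - R(5,72)*z2 - R(7,36)*z3)))

print(sp.simplify(Zg1 + Dg1 - Cg1))  # 0: the two expansions agree
```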
In the Regge-cut scheme, the one-loop and two-loop Regge trajectories are
identical to those in the MRS scheme, since multiple-Reggeon exchanges do not
contribute to the odd amplitude at LL and NLL. Thus, with
$\hat{\tilde{\alpha}}_{g}=\tilde{\alpha}_{g}-K$, and using eq. (5.60) for the
three-loop case, we have
$\displaystyle\hat{\tilde{\alpha}}_{g}^{(1)}=$
$\displaystyle\,{\hat{\alpha}}_{g}^{(1)}=\frac{1}{2\epsilon}(r_{\Gamma}-1),$
(A.10a) $\displaystyle\hat{\tilde{\alpha}}_{g}^{(2)}=$
$\displaystyle\,{\hat{\alpha}}_{g}^{(2)}=C_{A}\left(\frac{101}{108}-\frac{\zeta_{3}}{8}\right)-\frac{7n_{f}}{54}+O(\epsilon),$
(A.10b) $\displaystyle\hat{\tilde{\alpha}}_{g}^{(3)}\rvert_{\text{SYM}}=$
$\displaystyle\,N_{c}^{2}\left(\frac{5}{24}\zeta_{2}\zeta_{3}+\frac{\zeta_{5}}{4}+\mathcal{O}(\epsilon)\right).$
(A.10c)
Similarly, the impact factors at one loop in the Regge-cut scheme are
identical to those in the MRS scheme: $\tilde{C}_{i}^{(1)}=C_{i}^{(1)}$ for both quarks
and gluons. However, due to eq. (5.48), at two loops we have, in the cut
scheme
$\displaystyle\tilde{C}_{q}^{(2)}=$
$\displaystyle\frac{C_{F}^{2}}{8\epsilon^{4}}+\frac{1}{\epsilon^{3}}\left(\frac{11C_{A}C_{F}}{32}+\frac{3C_{F}^{2}}{8}-\frac{C_{F}n_{f}}{16}\right)+\frac{1}{\epsilon^{2}}\bigg{[}C_{F}^{2}\left(\frac{41}{32}-\frac{\zeta_{2}}{8}\right)-\frac{3C_{A}^{2}\zeta_{2}}{32}+\frac{C_{F}n_{f}}{24}$
$\displaystyle-
C_{A}C_{F}\left(\frac{5\zeta_{2}}{16}+\frac{23}{48}\right)\bigg{]}+\frac{1}{\epsilon}\bigg{[}C_{A}C_{F}\left(-\frac{19\zeta_{2}}{24}+\frac{11\zeta_{3}}{16}-\frac{1513}{576}\right)+C_{F}^{2}\left(\frac{221}{64}-\frac{4\zeta_{3}}{3}\right)$
$\displaystyle+C_{F}n_{f}\left(\frac{\zeta_{2}}{24}+\frac{89}{288}\right)\bigg{]}+C_{A}^{2}\left(\frac{73\zeta_{2}}{32}-\frac{43\zeta_{3}}{48}-\frac{19\zeta_{4}}{32}+\frac{13195}{3456}\right)$
$\displaystyle-
C_{A}C_{F}\left(\frac{1171\zeta_{2}}{576}-\frac{175\zeta_{3}}{48}-\frac{17\zeta_{4}}{8}+\frac{40423}{3456}\right)+C_{F}^{2}\left(\frac{1151}{128}+\frac{17\zeta_{2}}{32}-\frac{29\zeta_{3}}{8}-\frac{65\zeta_{4}}{32}\right)$
$\displaystyle+C_{F}n_{f}\left(\frac{265}{216}+\frac{17\zeta_{2}}{288}+\frac{\zeta_{3}}{6}\right)-C_{A}n_{f}\left(\frac{385}{432}+\frac{5\zeta_{2}}{16}+\frac{7\zeta_{3}}{24}\right)+\frac{25}{864}n_{f}^{2}+\mathcal{O}(\epsilon),$
(A.11a) $\displaystyle\tilde{C}_{g}^{(2)}=$
$\displaystyle\frac{C_{A}^{2}}{8\epsilon^{4}}+\frac{1}{\epsilon^{3}}\left(\frac{77C_{A}^{2}}{96}-\frac{7C_{A}n_{f}}{48}\right)+\frac{1}{\epsilon^{2}}\left[C_{A}^{2}\left(\frac{103}{96}-\frac{17\zeta_{2}}{32}\right)-\frac{49C_{A}n_{f}}{144}+\frac{n_{f}^{2}}{36}\right]$
$\displaystyle+\frac{1}{\epsilon}\left[C_{A}^{2}\left(\frac{853}{864}-\frac{11\zeta_{2}}{12}-\frac{31\zeta_{3}}{48}\right)+C_{A}n_{f}\left(\frac{\zeta_{2}}{6}-\frac{19}{72}\right)+\frac{C_{F}n_{f}}{16}+\frac{5n_{f}^{2}}{216}\right]$
$\displaystyle+C_{A}^{2}\left(\frac{415\zeta_{2}}{576}-\frac{11\zeta_{3}}{9}-\frac{\zeta_{4}}{2}+\frac{10525}{10368}\right)+C_{A}n_{f}\left(-\frac{\zeta_{2}}{16}+\frac{17\zeta_{3}}{36}-\frac{113}{324}\right)$
$\displaystyle+C_{F}n_{f}\left(\frac{55}{192}-\frac{\zeta_{3}}{4}\right)+n_{f}^{2}\left(\frac{29}{864}-\frac{\zeta_{2}}{144}\right)+\mathcal{O}(\epsilon).$
(A.11b)
## Appendix B Anomalous dimensions
In this appendix we collect the coefficients of the various anomalous
dimensions considered in the main text. All anomalous dimensions are expanded
in powers of the strong coupling according to
$\gamma_{\phi}=\sum_{\ell=1}^{\infty}\bigg{(}\frac{\alpha_{s}}{\pi}\bigg{)}^{\ell-1}\gamma^{(\ell)}_{\phi}.$
(B.1)
First of all we have the cusp anomalous dimension, defined in eq. (2.28),
which involves quadratic and quartic Casimir terms, recently calculated up to
four loops in QCD Boels:2017ftb ; Boels:2017skl ; Moch:2017uml ;
Grozin:2017css ; Henn:2019swt ; Huber:2019fxe ; vonManteuffel:2020vjv ;
Agarwal:2021zft ; Bruser:2019auj ; Bruser:2020bsh . In QCD, the quadratic
Casimir component, $\gamma_{K}(\alpha_{s})$, has the following expansion
coefficients through three loops Korchemsky:1985xj ; Korchemsky:1987wg ;
Moch:2004pa (see footnote 24 below):
$\displaystyle{\gamma}_{K}^{(1)}$ $\displaystyle=$ $\displaystyle 2,$
$\displaystyle{\gamma}_{K}^{(2)}$ $\displaystyle=$
$\displaystyle\left(\frac{67}{18}-\zeta_{2}\right)C_{A}-\frac{10}{9}T_{R}n_{f},$
$\displaystyle{\gamma}_{K}^{(3)}$ $\displaystyle=$
$\displaystyle\frac{C_{A}^{2}}{96}\left(490-\frac{1072}{3}\zeta_{2}+88\zeta_{3}+264\zeta_{4}\right)+\frac{C_{F}T_{R}n_{f}}{32}\left(-\frac{220}{3}+64\zeta_{3}\right)$
(B.2)
$\displaystyle+\,\frac{C_{A}T_{R}n_{f}}{96}\left(-\frac{1672}{9}+\frac{320}{3}\zeta_{2}-224\zeta_{3}\right)-\frac{2T_{R}^{2}n_{f}^{2}}{27},$
where the normalisation of the fundamental generators is ${\rm Tr}(t^{a}t^{b})=T_{R}\,\delta^{ab}$ with $T_{R}=\frac{1}{2}$.

Footnote 24: Three-loop contributions to the lightlike cusp anomalous dimension were first determined in Berger:2002sv ; Moch:2004pa , by using the connection between this quantity and the large-$x$ limit of non-singlet splitting functions Korchemsky:1988si . The complete calculation of the non-singlet three-loop splitting functions has recently been confirmed in ref. Blumlein:2021enk . Independent calculations of the three-loop cusp anomalous dimension were also obtained by computing form factors FormFactors ; Gehrmann:2010ue and the cusped Wilson loop Grozin:2014hna ; Grozin:2015kna to this loop order. More recently, such calculations have been extended to four loops Boels:2017ftb ; Boels:2017skl ; Moch:2017uml ; Grozin:2017css ; Henn:2019swt ; Huber:2019fxe ; vonManteuffel:2020vjv ; Agarwal:2021zft ; Bruser:2019auj ; Bruser:2020bsh .

The
second term, $g_{R}(\alpha_{s})$, multiplying the quartic Casimir, starts at
four loops, and depends on the gauge-group representation $R$. Its
coefficients for $R=A$ (adjoint) and $R=F$ (fundamental) in QCD read
$\displaystyle\begin{split}g_{A}^{(4)}&=\frac{\zeta_{3}}{6}-\frac{3\zeta_{3}^{2}}{2}+\frac{55\zeta_{5}}{12}-\frac{\zeta_{2}}{2}-\frac{31\zeta_{6}}{8},\\\
g_{F}^{(4)}&=n_{f}\left(\zeta_{2}-\frac{\zeta_{3}}{3}-\frac{5\zeta_{5}}{3}\right)\,.\end{split}$
(B.3)
The contribution in ${\cal N}=4$ SYM, for $\gamma_{K}(\alpha_{s})$ and
$g_{A}(\alpha_{s})$, is obtained, according to the principle of maximal
transcendentality, by retaining only the terms with the highest transcendental
weight at each order.
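For orientation, the following sketch evaluates eq. (B.2) numerically; the choice $N_{c}=3$, $n_{f}=5$ is ours, for illustration only:

```python
import sympy as sp

Nc, nf = sp.symbols('N_c n_f')
CA, TR = Nc, sp.Rational(1, 2)
CF = (Nc**2 - 1)/(2*Nc)
z2, z3, z4 = sp.zeta(2), sp.zeta(3), sp.zeta(4)
R = sp.Rational

# Transcription of eq. (B.2):
gK1 = sp.Integer(2)
gK2 = (R(67,18) - z2)*CA - R(10,9)*TR*nf
gK3 = (CA**2/96*(490 - R(1072,3)*z2 + 88*z3 + 264*z4)
       + CF*TR*nf/32*(-R(220,3) + 64*z3)
       + CA*TR*nf/96*(-R(1672,9) + R(320,3)*z2 - 224*z3)
       - R(2,27)*TR**2*nf**2)

vals = {Nc: 3, nf: 5}
print([sp.N(g.subs(vals), 8) for g in (gK1, gK2, gK3)])
```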
Next, we have the collinear anomalous dimension $\gamma_{i}$ corresponding to
the parton $i$ FormFactors ; DelDuca:2014cya ; Falcioni:2019nxk ;
Dixon:2017nat , which is part of the anomalous dimension $\Gamma_{i}$ and
$\Gamma_{j}$ defined in eq. (2.29). The collinear anomalous dimension
$\gamma_{i}$ has been recently calculated up to four loops
vonManteuffel:2020vjv ; Agarwal:2021zft . We provide here its coefficients up
to two loops, as needed in the main text, for quarks and gluons. One has
FormFactors ; Gehrmann:2010ue
$\displaystyle\gamma_{q}^{(1)}$ $\displaystyle=$
$\displaystyle-\frac{3}{4}\,C_{F},$ $\displaystyle\gamma_{q}^{(2)}$
$\displaystyle=$
$\displaystyle\frac{C_{F}^{2}}{16}\left(-\frac{3}{2}+12\zeta_{2}-24\zeta_{3}\right)$
(B.4)
$\displaystyle+\,\frac{C_{A}C_{F}}{16}\left(-\frac{961}{54}-11\zeta_{2}+26\zeta_{3}\right)+\frac{C_{F}T_{R}n_{f}}{16}\left(\frac{130}{27}+4\zeta_{2}\right),$
for quarks, and
$\displaystyle\gamma_{g}^{(1)}$ $\displaystyle=$
$\displaystyle-\frac{b_{0}}{4},$ $\displaystyle\gamma_{g}^{(2)}$
$\displaystyle=$
$\displaystyle\frac{C_{A}^{2}}{16}\left(-\frac{692}{27}+\frac{11}{3}\zeta_{2}+2\zeta_{3}\right)+\frac{C_{A}T_{R}n_{f}}{16}\left(\frac{256}{27}-\frac{4}{3}\zeta_{2}\right)+\frac{C_{F}T_{R}n_{f}}{4},$
(B.5)
for gluons.
## Appendix C Computing colour factors for arbitrary representations
In this appendix we review computational techniques to evaluate the colour
factors of the transition amplitudes. Universality of the Regge limit implies
that the colour structure of every amplitude is given by the same colour
operators, regardless of whether the scattering process involves quarks or gluons
in the initial and final state. In order to determine such operators, we need
to develop techniques to evaluate colour tensors for general representations
of the external particles. Indeed, while it is straightforward to evaluate the
colour Feynman rules directly by specialising the representations of the
scattering particles, such explicit results would completely obscure
universality of the Regge limit. Instead, we would like to express our results
in terms of Casimir operators of the colour channel operators defined in eq.
(2.22)
$\mathbf{T}^{2}_{t},\quad\mathbf{T}^{2}_{s-u}=\frac{\mathbf{T}^{2}_{s}-\mathbf{T}^{2}_{u}}{2},$
(C.1)
which manifest the signature properties under $s\leftrightarrow u$ crossing.
These operators emerge naturally in diagrams that feature connections among
the outermost Reggeon indices, for example
[Figure: a single Reggeon exchanged between lines 1 and 2]
$\displaystyle\begin{split}&=\left(\mathbf{T}_{1}^{a}\cdot\mathbf{T}_{2}^{a}\right)\,\mathcal{M}_{4}\\\ &=\frac{1}{2}\left(\mathbf{T}^{2}_{s-u}-\frac{\mathbf{T}^{2}_{t}}{2}\right)\,\mathcal{M}_{4}.\end{split}$
(C.2)
where, in the second line, we applied colour conservation according to eq.
(2.18). The result above is independent of the four-point matrix element
$\mathcal{M}_{4}$, thus providing a graphical derivation of the relation in
eq. (LABEL:eq:relTsu-Tt/2). We apply a similar procedure whenever a Reggeon is
emitted from an initial-state parton and absorbed by a final-state one,
according to eq. (LABEL:eq:relTsu+Tt/2). Therefore, colour structures up to
two loops are written for general representations by
* ($i$)
Using the Lie algebra, eq. (4.34), to write three-point vertices
$(F^{a})_{bc}$ in terms of Reggeons connecting the target and the projectile.
* ($ii$)
Applying repeatedly eqs. (LABEL:eq:relTsu-Tt/2) and (LABEL:eq:relTsu+Tt/2) to
obtain the colour-channel operators of eq. (C.1), acting on the tree-level
amplitude.
The second step may not be applicable for diagrams where all Reggeons have one
or more internal attachments, namely where each Reggeon is either emitted or
absorbed between two other Reggeon vertices. We refer to these irreducible
configurations as entangled colour structures.
Indeed, entangled colour structures may occur starting at three loops, where
we find two such colour tensors (see footnote 25 below), corresponding to the
graphs in figures 11a and 11b.

Footnote 25: At three loops there are two additional irreducible
configurations, depicted in figures 12a and 12b. These, however, can be recast
into the form of eq. (C.2) using commutation relations.
Figure 11: Diagrammatic representation of the irreducible configurations: (a)
$d_{A}$ in eq. (C.3) and (b) $d_{B}$ in eq. (C.4).
Figure 12: Both the double-cross diagram (a) and the saltire diagram (b) are
immediately written in terms of colour dipole operators by commuting the pair
of Reggeon emission vertices at the end of either the top or the bottom line.
While these diagrams drop out of the three-loop amplitude, such entangled
configurations do not cancel in general and we will encounter them in the
four-loop calculation. Hence, we need to extend the techniques summarised
above.
### Permutation diagrams
We begin by introducing a compact notation for colour factors involving $k$
Reggeon attachments on both target and projectile. These configurations are
naturally associated with permutations of $k$ indices. Choosing the top line as
target state $i$, the diagram in figure 11a is written as
$\displaystyle\begin{split}d_{A}&=\Big{(}{\bf T}^{a_{1}}{\bf T}^{a_{2}}{\bf
T}^{a_{3}}{\bf T}^{a_{4}}\Big{)}_{i}\Big{(}{\bf T}^{a_{3}}{\bf T}^{a_{1}}{\bf
T}^{a_{4}}{\bf
T}^{a_{2}}\Big{)}_{j}\equiv\left(\begin{array}[]{cccc}a_{1}&a_{2}&a_{3}&a_{4}\\\
a_{3}&a_{1}&a_{4}&a_{2}\end{array}\right).\end{split}$ (C.3)
Similarly, the diagram in figure 11b is
$\displaystyle\begin{split}d_{B}&=\Big{(}{\bf T}^{a_{1}}{\bf T}^{a_{2}}{\bf
T}^{a_{3}}{\bf T}^{a_{4}}\Big{)}_{i}\Big{(}{\bf T}^{a_{2}}{\bf T}^{a_{4}}{\bf
T}^{a_{1}}{\bf
T}^{a_{3}}\Big{)}_{j}\equiv\left(\begin{array}[]{cccc}a_{1}&a_{2}&a_{3}&a_{4}\\\
a_{2}&a_{4}&a_{1}&a_{3}\end{array}\right).\end{split}$ (C.4)
We do not have an expression for $d_{A}$ and $d_{B}$ separately which is valid
for general representations. However, we are interested only in the
combination $d_{A}+d_{B}$, which manifests the symmetry under the interchange
of the projectile and the target. For this combination we have the following
identity:
$\displaystyle d_{A}+d_{B}$
$\displaystyle=\left(\begin{array}[]{cccc}a_{1}&a_{2}&a_{3}&a_{4}\\\
a_{4}&a_{1}&a_{3}&a_{2}\end{array}\right)+\left(\begin{array}[]{cccc}a_{1}&a_{2}&a_{3}&a_{4}\\\
a_{1}&a_{4}&a_{2}&a_{3}\end{array}\right)\,,$ (C.9)
which we will prove below. The two terms on the right-hand side of eq. (C.9),
depicted in figures 13a and 13b, feature outermost Reggeon interactions
represented by the indices $a_{1}$ and $a_{4}$, respectively. Therefore, these
terms are easily written in terms of colour channel operators by applying eqs.
(LABEL:eq:relTsu-Tt/2) and (LABEL:eq:relTsu+Tt/2), as described in step $(ii)$
above. The resulting two-loop graphs are again reducible, and one obtains:
Figure 13: Diagrammatic representation of the terms on the right-hand side of
eq. (C.9).
$\displaystyle\begin{split}d_{A}+d_{B}&=\frac{1}{4}\left\\{\left(\mathbf{T}_{s-u}^{2}\right)^{3}-\frac{1}{4}\left(\mathbf{T}_{t}^{2}\right)^{2}\mathbf{T}_{s-u}^{2}+\frac{C_{A}}{4}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{C_{A}^{2}}{4}\mathbf{T}_{s-u}^{2}\right\\}{\bf
T}_{i}^{a}\,{\bf T}_{j}^{a}.\end{split}$ (C.10)
Eq. (C.10) is a general expression of $d_{A}+d_{B}$ for arbitrary
representations of external particles. This is the only relation needed to
compute three-loop colour structures. The identity (C.9) was crucial to obtain
the result. This identity is conveniently derived by starting from an
auxiliary three-loop configuration
$\displaystyle\begin{split}\tilde{d}^{(3)}&=\Big{(}{\bf T}^{a_{1}}{\bf
T}^{a_{2}}{\bf T}^{a_{3}}\Big{)}_{i}\,\Big{(}{\bf T}^{x}{\bf T}^{a_{1}}{\bf
T}^{a_{3}}{\bf T}^{x}{\bf T}^{a_{2}}\Big{)}_{j}.\end{split}$ (C.11)
The colour factor above is not associated to a permutation of indices, because
it features a boomerang, namely the contraction of a pair of indices on the
same line (in this case the projectile). Using the Lie algebra relations,
there are two ways of moving the $x$ on line $i$: either by commuting $x$ with
$a_{1}$ or $x$ with $a_{3}$. We find respectively
$\displaystyle\tilde{d}^{(3)}_{1}=$
$\displaystyle\,\Big{(}C_{2}(j)-\frac{C_{A}}{2}\Big{)}\left({\bf
T}^{a_{1}}{\bf T}^{a_{2}}{\bf T}^{a_{3}}\right)_{i}\left({\bf T}^{a_{1}}{\bf
T}^{a_{3}}{\bf T}^{a_{2}}\right)_{j}$ (C.12a) $\displaystyle\hskip
128.0374pt+if^{a_{3}xk}\left({\bf T}^{a_{1}}{\bf T}^{a_{2}}{\bf
T}^{a_{3}}\right)_{i}\left({\bf T}^{x}{\bf T}^{a_{1}}{\bf T}^{k}{\bf
T}^{a_{2}}\right)_{j},$ $\displaystyle\tilde{d}^{(3)}_{2}=$
$\displaystyle\,\Big{(}C_{2}(j)-\frac{C_{A}}{2}\Big{)}\left({\bf
T}^{a_{1}}{\bf T}^{a_{2}}{\bf T}^{a_{3}}\right)_{i}\left({\bf T}^{a_{1}}{\bf
T}^{a_{3}}{\bf T}^{a_{2}}\right)_{j}$ (C.12b) $\displaystyle\hskip
128.0374pt+if^{xa_{1}k}\left({\bf T}^{a_{1}}{\bf T}^{a_{2}}{\bf
T}^{a_{3}}\right)_{i}\left({\bf T}^{k}{\bf T}^{a_{3}}{\bf T}^{x}{\bf
T}^{a_{2}}\right)_{j}.$
The two expressions are of course identical, so their difference must vanish:
$0=i\left({\bf T}^{a_{1}}{\bf T}^{a_{2}}{\bf
T}^{a_{3}}\right)_{i}\Big{[}f^{a_{3}xk}\left({\bf T}^{x}{\bf T}^{a_{1}}{\bf
T}^{k}{\bf T}^{a_{2}}\right)_{j}-f^{a_{1}xk}\left({\bf T}^{x}{\bf
T}^{a_{3}}{\bf T}^{k}{\bf T}^{a_{2}}\right)_{j}\Big{]}.$ (C.13)
Finally, writing the structure constants in terms of commutators on line $i$,
we obtain eq. (C.9), concluding the proof.
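Since the derivation above uses only the Lie algebra, eq. (C.9) must hold as an operator identity in any representation. The following numerical sketch (our own check, not part of the original derivation) verifies it on the tensor product of two SU(3) fundamental representations, encoding each colour factor through the permutation notation of eqs. (C.3)-(C.4):

```python
import numpy as np
from itertools import product

# SU(3) fundamental generators t^a = lambda^a / 2 (Gell-Mann matrices):
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7][0, 0] = l[7][1, 1] = 1/np.sqrt(3); l[7][2, 2] = -2/np.sqrt(3)
t = l/2

def perm_factor(top, bottom):
    """Permutation colour factor in the notation of eqs. (C.3)-(C.4):
    generators with adjoint indices a_{top[k]} on line i and a_{bottom[k]}
    on line j, summed over a_1..a_4, as a matrix on the 9-dimensional
    (fundamental x fundamental) space."""
    res = np.zeros((9, 9), dtype=complex)
    for a in product(range(8), repeat=4):
        Ti = np.linalg.multi_dot([t[a[k]] for k in top])
        Tj = np.linalg.multi_dot([t[a[k]] for k in bottom])
        res += np.kron(Ti, Tj)
    return res

top = (0, 1, 2, 3)                       # a_1 a_2 a_3 a_4 on line i
dA  = perm_factor(top, (2, 0, 3, 1))     # eq. (C.3): a_3 a_1 a_4 a_2
dB  = perm_factor(top, (1, 3, 0, 2))     # eq. (C.4): a_2 a_4 a_1 a_3
rhs = perm_factor(top, (3, 0, 2, 1)) + perm_factor(top, (0, 3, 1, 2))
print(np.allclose(dA + dB, rhs))         # True: eq. (C.9) holds
```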
### Four-loop colour factors
All the colour structures appearing at four loops are written in terms of
contractions of five pairs of generators, by applying repeatedly the Lie
algebra. We identify eight independent colour factors that cannot be reduced
in terms of $\mathbf{T}_{s-u}^{2}$ and $\mathbf{T}_{t}^{2}$ by following steps
$(i)$ and $(ii)$ above. We choose to collect them into the following terms:
$\displaystyle\begin{split}d_{1}&=\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{2}&a_{5}&a_{3}&a_{1}&a_{4}\end{array}\right)+\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{4}&a_{1}&a_{3}&a_{5}&a_{2}\end{array}\right),\end{split}$ (C.14a)
$\displaystyle\begin{split}d_{2}&=\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{2}&a_{4}&a_{1}&a_{5}&a_{3}\end{array}\right)+\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{3}&a_{1}&a_{5}&a_{2}&a_{4}\end{array}\right),\end{split}$ (C.14b)
$\displaystyle\begin{split}d_{3}&=\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{2}&a_{4}&a_{5}&a_{1}&a_{3}\end{array}\right)+\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{4}&a_{1}&a_{5}&a_{2}&a_{3}\end{array}\right),\end{split}$ (C.14c)
$\displaystyle\begin{split}d_{4}&=\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{2}&a_{5}&a_{1}&a_{4}&a_{3}\end{array}\right)+\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{3}&a_{1}&a_{5}&a_{4}&a_{2}\end{array}\right),\end{split}$ (C.14d)
$\displaystyle\begin{split}d_{5}&=\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{3}&a_{2}&a_{5}&a_{1}&a_{4}\end{array}\right)+\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{4}&a_{2}&a_{1}&a_{5}&a_{3}\end{array}\right),\end{split}$ (C.14e)
$\displaystyle\begin{split}d_{6}&=\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{3}&a_{4}&a_{1}&a_{5}&a_{2}\end{array}\right)+\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{3}&a_{5}&a_{1}&a_{2}&a_{4}\end{array}\right),\end{split}$ (C.14f)
$\displaystyle\begin{split}d_{7}&=\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{3}&a_{5}&a_{1}&a_{4}&a_{2}\end{array}\right),\end{split}$ (C.14g)
$\displaystyle\begin{split}d_{8}&=\left(\begin{array}[]{ccccc}a_{1}&a_{2}&a_{3}&a_{4}&a_{5}\\\
a_{4}&a_{2}&a_{5}&a_{1}&a_{3}\end{array}\right),\end{split}$ (C.14h)
where each term $d_{1}\dots d_{8}$ is manifestly symmetric under target-
projectile exchange $i~{}\leftrightarrow~{}j$. In addition, $d_{1}$ is
even under signature, because the two terms in eq. (C.14a) are
related to each other by reversing the order of indices on one of the lines.
In order to express these colour factors in terms of channel operators, we
consider again the configurations generated by operating with the Lie algebra
on the corresponding boomerang diagrams. In particular, we consider the
following four-loop diagrams
$\displaystyle\begin{split}\tilde{d}^{(4)}_{L}&=\Big{(}{\bf T}^{a_{1}}{\bf
T}^{a_{2}}{\bf T}^{a_{3}}{\bf T}^{a_{4}}\Big{)}_{i}\,\Big{(}{\bf T}^{x}{\bf
T}^{a_{\sigma(1)}}{\bf T}^{a_{\sigma(2)}}{\bf T}^{x}{\bf
T}^{a_{\sigma(3)}}{\bf T}^{a_{\sigma(4)}}\Big{)}_{j},\\\
\tilde{d}^{(4)}_{R}&=\Big{(}{\bf T}^{a_{1}}{\bf T}^{a_{2}}{\bf T}^{a_{3}}{\bf
T}^{a_{4}}\Big{)}_{i}\,\Big{(}{\bf T}^{a_{\sigma(1)}}{\bf
T}^{a_{\sigma(2)}}{\bf T}^{x}{\bf T}^{a_{\sigma(3)}}{\bf
T}^{a_{\sigma(4)}}{\bf T}^{x}\Big{)}_{j},\\\ \tilde{d}^{(4)}_{C}&=\Big{(}{\bf
T}^{a_{1}}{\bf T}^{a_{2}}{\bf T}^{a_{3}}{\bf
T}^{a_{4}}\Big{)}_{i}\,\Big{(}{\bf T}^{a_{\sigma(1)}}{\bf T}^{x}{\bf
T}^{a_{\sigma(2)}}{\bf T}^{a_{\sigma(3)}}{\bf T}^{x}{\bf
T}^{a_{\sigma(4)}}\Big{)}_{j},\end{split}$ (C.15)
which generalise the three-loop boomerang of eq. (C.11) by including one more
pair of indices, and the target-projectile symmetric configurations obtained
from eq. (C.15). Starting from these boomerang configurations, we operate as
in eqs. (C.12) and (C.13) and we get six independent linear relations for
$d_{1}\dots d_{8}$. We derive one more constraint from the colour factor
$\displaystyle\tilde{d}^{(4)}_{P}$
$\displaystyle=\text{Tr}\left[F^{x}F^{a}F^{b}F^{c}\right]\text{Tr}\left[F^{y}F^{a}F^{b}F^{c}\right]\,{\bf
T}^{x}_{i}\,{\bf
T}^{y}_{j}=\left(\frac{d_{AA}}{N_{A}}+\frac{C_{A}^{4}}{12}\right)\,{\bf
T}^{x}_{i}\,{\bf T}^{x}_{j},$ (C.16)
which can be written as a combination of $d_{1}\dots d_{8}$ by using the Lie
algebra to replace traces of generators in the adjoint representation with
commutators of ${\bf T}_{i}$ or ${\bf T}_{j}$. The seven identities obtained
in this way determine the signature-odd contributions to $d_{1}\dots d_{8}$.
In turn, this is sufficient in order to express the real part of the
amplitude, thus allowing us to perform the calculations of section 4.2.
However, we need one more equation in order to also determine the contributions of
even signature. In particular, such terms were needed to compute the Regge
limit of the soft anomalous dimension, discussed in section 7. The last
constraint was determined by writing a general ansatz for $d_{3}$ in terms of
products of Casimir operators $\mathbf{T}_{s-u}^{2}$ and $\mathbf{T}_{t}^{2}$
acting on the tree-level amplitude ${\bf T}^{x}_{i}\,{\bf T}^{x}_{j}$. The
unknown coefficients were fitted by comparing the ansatz with explicit results
obtained by specialising the generators ${\bf T}_{i}$ and ${\bf T}_{j}$ in eq.
(C.14c) either in the adjoint or in the fundamental representation. We report
here the final expressions of the colour factors in eqs. (C.14a)-(C.14h),
which apply to any representation of the external particles. The results
are:
$\displaystyle d_{1}$
$\displaystyle=\Bigg{\\{}\frac{1}{12}\left(\frac{d_{AA}}{N_{A}}+\frac{5}{96}C_{A}^{4}\right)-\frac{3}{32}C_{A}^{2}\,\left(\mathbf{T}_{s-u}^{2}\right)^{2}+\frac{C_{A}}{32}\left[5\,\mathbf{T}_{s-u}^{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{7}{3}\mathbf{T}_{t}^{2}\left(\mathbf{T}_{s-u}^{2}\right)^{2}\right]$
$\displaystyle+\frac{1}{8}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{4}+\frac{3}{4}\Big{[}\mathbf{T}_{t}^{2},\mathbf{T}_{s-u}^{2}\Big{]}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{5}{12}\left(\mathbf{T}_{t}^{2}\right)^{2}\left(\mathbf{T}_{s-u}^{2}\right)^{2}\right]\Bigg{\\}}\,{\bf
T}^{x}_{i}\,{\bf T}^{x}_{j},$ (C.17a)
$\displaystyle\begin{split}d_{2}&=\Bigg{\\{}\frac{1}{12}\left(\frac{d_{AA}}{N_{A}}+\frac{5}{96}C_{A}^{4}\right)-\frac{C_{A}^{2}}{8}\left(\mathbf{T}_{s-u}^{2}\right)^{2}+\frac{C_{A}}{16}\left[3\mathbf{T}_{s-u}^{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{7}{6}\mathbf{T}_{t}^{2}\left(\mathbf{T}_{s-u}^{2}\right)^{2}\right]\\\
&+\frac{1}{8}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{4}-\frac{3}{4}\mathbf{T}_{s-u}^{2}\left(\mathbf{T}_{t}^{2}\right)^{2}\mathbf{T}_{s-u}^{2}+\frac{1}{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{1}{6}\left(\mathbf{T}_{t}^{2}\right)^{2}\left(\mathbf{T}_{s-u}^{2}\right)^{2}\right]\\\
&-\frac{C_{A}^{3}}{64}\mathbf{T}_{s-u}^{2}-\frac{C_{A}^{2}}{64}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}+\frac{C_{A}}{16}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{3}+\frac{3}{4}\left(\mathbf{T}_{t}^{2}\right)^{2}\mathbf{T}_{s-u}^{2}\right]\\\
&+\frac{1}{16}\Bigg{[}\Big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\Big{]}\left(\mathbf{T}_{s-u}^{2}\right)^{2}-\left(\mathbf{T}_{s-u}^{2}\right)^{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{1}{4}\left(\mathbf{T}_{t}^{2}\right)^{3}\mathbf{T}_{s-u}^{2}\Bigg{]}\Bigg{\\}}\,{\bf
T}^{x}_{i}\,{\bf T}^{x}_{j},\end{split}$ (C.17b)
$\displaystyle\begin{split}d_{3}&=d_{6}=\Bigg{\\{}\frac{1}{8}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{4}-\frac{1}{4}\Big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\Big{]}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{1}{4}\left(\mathbf{T}_{t}^{2}\right)^{2}\left(\mathbf{T}_{s-u}^{2}\right)^{2}\right]\\\
&+\frac{C_{A}^{3}}{64}\mathbf{T}_{s-u}^{2}-\frac{C_{A}^{2}}{16}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{C_{A}}{16}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{3}-\frac{3}{2}\left(\mathbf{T}_{t}^{2}\right)^{2}\mathbf{T}_{s-u}^{2}\right]\\\
&-\frac{1}{16}\left(\mathbf{T}_{s-u}^{2}\Big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\Big{]}\mathbf{T}_{s-u}^{2}+\frac{1}{2}\left(\mathbf{T}_{t}^{2}\right)^{3}\mathbf{T}_{s-u}^{2}\right)\Bigg{\\}}\,{\bf
T}^{x}_{i}\,{\bf T}^{x}_{j},\end{split}$ (C.17c)
$\displaystyle\begin{split}d_{4}&=d_{5}=\Bigg{\\{}\frac{1}{8}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{4}-\frac{1}{4}\Big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\Big{]}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{1}{4}\left(\mathbf{T}_{t}^{2}\right)^{2}\left(\mathbf{T}_{s-u}^{2}\right)^{2}\right]\\\
&-\frac{C_{A}^{3}}{64}\mathbf{T}_{s-u}^{2}+\frac{C_{A}^{2}}{16}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}+\frac{C_{A}}{16}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{3}-\frac{3}{2}\left(\mathbf{T}_{t}^{2}\right)^{2}\mathbf{T}_{s-u}^{2}\right]\\\
&+\frac{1}{16}\left(\mathbf{T}_{s-u}^{2}\Big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\Big{]}\mathbf{T}_{s-u}^{2}+\frac{1}{2}\left(\mathbf{T}_{t}^{2}\right)^{3}\mathbf{T}_{s-u}^{2}\right)\Bigg{\\}}\,{\bf
T}^{x}_{i}\,{\bf T}^{x}_{j},\end{split}$ (C.17d)
$\displaystyle\begin{split}d_{7}&=d_{8}=\Bigg{\\{}\frac{1}{24}\left(\frac{d_{AA}}{N_{A}}+\frac{5}{96}C_{A}^{4}\right)-\frac{C_{A}^{2}}{16}\left(\mathbf{T}_{s-u}^{2}\right)^{2}+\frac{C_{A}}{32}\left[3\mathbf{T}_{s-u}^{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{7}{6}\mathbf{T}_{t}^{2}\left(\mathbf{T}_{s-u}^{2}\right)^{2}\right]\\\
&+\frac{1}{16}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{4}-\frac{3}{4}\mathbf{T}_{s-u}^{2}\left(\mathbf{T}_{t}^{2}\right)^{2}\mathbf{T}_{s-u}^{2}+\frac{1}{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{1}{6}\left(\mathbf{T}_{t}^{2}\right)^{2}\left(\mathbf{T}_{s-u}^{2}\right)^{2}\right]\\\
&+\frac{C_{A}^{3}}{128}\mathbf{T}_{s-u}^{2}+\frac{C_{A}^{2}}{128}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{C_{A}}{32}\left[\left(\mathbf{T}_{s-u}^{2}\right)^{3}+\frac{3}{4}\left(\mathbf{T}_{t}^{2}\right)^{2}\mathbf{T}_{s-u}^{2}\right]\\\
&-\frac{1}{32}\Bigg{[}\Big{[}\mathbf{T}_{s-u}^{2},\mathbf{T}_{t}^{2}\Big{]}\left(\mathbf{T}_{s-u}^{2}\right)^{2}-\left(\mathbf{T}_{s-u}^{2}\right)^{2}\mathbf{T}_{t}^{2}\mathbf{T}_{s-u}^{2}-\frac{1}{4}\left(\mathbf{T}_{t}^{2}\right)^{3}\mathbf{T}_{s-u}^{2}\Bigg{]}\Bigg{\\}}\,{\bf
T}^{x}_{i}\,{\bf T}^{x}_{j}.\end{split}$ (C.17e)
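As a further sketch (again our own numerical check), eq. (C.16) can be tested directly for SU(3): we build the adjoint generators $(F^{a})_{bc}=-if^{abc}$ from the structure constants and compare the contracted double trace with the constant $\frac{d_{AA}}{N_{A}}+\frac{C_{A}^{4}}{12}$, using the known value $d_{AA}/N_{A}=N_{c}^{2}(N_{c}^{2}+36)/24$:

```python
import numpy as np

# SU(3) fundamental generators, as in the previous snippet:
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7][0, 0] = l[7][1, 1] = 1/np.sqrt(3); l[7][2, 2] = -2/np.sqrt(3)
t = l/2

# Structure constants f^{abc} = -2i Tr([t^a, t^b] t^c), then adjoint
# generators (F^a)_{bc} = -i f^{abc}:
E1 = np.einsum('aij,bjk,cki->abc', t, t, t)
E2 = np.einsum('bij,ajk,cki->abc', t, t, t)
f = np.real(-2j*(E1 - E2))
F = -1j*f
assert np.allclose(np.einsum('aij,ajk->ik', F, F), 3*np.eye(8))  # C_A = 3

# M_{xy} = Tr[F^x F^a F^b F^c] Tr[F^y F^a F^b F^c], summed over a, b, c:
T = np.einsum('xij,ajk,bkl,cli->xabc', F, F, F, F, optimize=True)
M = np.einsum('xabc,yabc->xy', T, T, optimize=True)

Nc = 3
const = Nc**2*(Nc**2 + 36)/24 + Nc**4/12   # d_AA/N_A + C_A^4/12
print(np.allclose(M, const*np.eye(8)))     # True if eq. (C.16) is reproduced
```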
## Appendix D The reduced amplitude in an explicit colour basis
It is interesting to evaluate the NNLL reduced amplitude for different
external partons. We will compute the reduced NNLL odd amplitude at two loops,
three loops and four loops for $qq$, $gg$ and $qg$ scattering. We utilise the
orthonormal $t$-channel basis of ref. DelDuca:2014cya , and use the same
notation as in appendix B of ref. Caron-Huot:2017fxr with the relevant colour
tensors given in eq. (2.15).
Projecting the two-loop NNLL amplitude of eq. (5.13) onto the octet channel for
$qq$, $gg$ and $qg$ scattering, we find
$\displaystyle\mathcal{\hat{M}}^{(-,2,0),[8]}_{qq\to qq}=$
$\displaystyle\,\left[2D_{q}^{(2)}+D_{q}^{(1)}D_{q}^{(1)}-(i\pi)^{2}r_{\Gamma}^{2}S^{(2)}(\epsilon)\left(\frac{N_{c}^{2}}{6}-1+\frac{3}{N_{c}^{2}}\right)\right]{\cal
M}^{{\rm tree},[8]}_{qq\to qq},$ (D.1a)
$\displaystyle\mathcal{\hat{M}}^{(-,2,0),[8_{a}]}_{gg\to gg}=$
$\displaystyle\,\left[2D_{g}^{(2)}+D_{g}^{(1)}D_{g}^{(1)}-(i\pi)^{2}r_{\Gamma}^{2}S^{(2)}(\epsilon)\left(\frac{N_{c}^{2}}{6}+6\right)\right]{\cal
M}^{{\rm tree},[8_{a}]}_{gg\to gg},$ (D.1b)
$\displaystyle\mathcal{\hat{M}}^{(-,2,0),[8_{a}]}_{qg\to qg}=$
$\displaystyle\,\left[D_{q}^{(2)}+D_{g}^{(2)}+D_{q}^{(1)}D_{g}^{(1)}-(i\pi)^{2}r_{\Gamma}^{2}S^{(2)}(\epsilon)\left(\frac{N_{c}^{2}}{6}+1\right)\right]{\cal
M}^{{\rm tree},[8_{a}]}_{qg\to qg},$ (D.1c)
where we have normalised by the tree-amplitude octet projection defined in eq.
(2.13) and $S^{(2)}(\epsilon)$ is given in eq. (5.14).
The three-loop reduced amplitude of eq. (5.21) in the $t$-channel octet
representation is
$\displaystyle\mathcal{\hat{M}}^{(-,3,1),[8]}_{qq\to qq}=$
$\displaystyle\,(i\pi)^{2}r_{\Gamma}^{3}\left(N_{c}^{2}+18-\frac{18}{N_{c}^{2}}\right)\frac{N_{c}}{864}\left(-\frac{1}{\epsilon^{3}}+70\hat{\zeta}_{3}+\mathcal{O}(\epsilon^{2})\right){\cal
M}^{{\rm tree},[8]}_{qq\to qq},$ (D.2a)
$\displaystyle\mathcal{\hat{M}}^{(-,3,1),[8_{a}]}_{gg\to gg}=$
# An Explicit Expansion of the Kullback-Leibler Divergence along its
Fisher-Rao Gradient Flow
Carles Domingo-Enrich, Courant Institute of Mathematical Sciences, New York University

Aram-Alexandre Pooladian, Center for Data Science, New York University
###### Abstract
Let $V_{*}:\mathbb{R}^{d}\to\mathbb{R}$ be some (possibly non-convex)
potential function, and consider the probability measure $\pi\propto
e^{-V_{*}}$. When $\pi$ exhibits multiple modes, it is known that sampling
techniques based on Wasserstein gradient flows of the Kullback-Leibler (KL)
divergence (e.g. Langevin Monte Carlo) suffer poorly in the rate of
convergence, where the dynamics are unable to easily traverse between modes.
In stark contrast, the work of Lu et al. (2019; 2022) has shown that the
gradient flow of the KL with respect to the Fisher-Rao (FR) geometry exhibits
a convergence rate to $\pi$ is that independent of the potential function. In
this short note, we complement these existing results in the literature by
providing an explicit expansion of $\text{KL}(\rho_{t}^{\text{FR}}\|{\pi})$ in
terms of $e^{-t}$, where $(\rho_{t}^{\text{FR}})_{t\geq 0}$ is the FR gradient
flow of the KL divergence. In turn, we are able to provide a clean asymptotic
convergence rate, where the burn-in time is guaranteed to be finite. Our proof
is based on observing a similarity between FR gradient flows and simulated
annealing with linear scaling, and facts about cumulant generating functions.
We conclude with simple synthetic experiments that demonstrate our theoretical
findings are indeed tight. Based on our numerics, we conjecture that the
asymptotic rates of convergence for Wasserstein-Fisher-Rao gradient flows are
possibly related to this expansion in some cases.
## 1 Introduction
Sampling from a distribution with an unknown normalization constant is a
widespread task in several scientific domains. Namely, the goal is to generate
samples from a probability measure
$\displaystyle\pi(x)\propto e^{-V_{*}(x)}\,,$
where $V_{*}:\mathbb{R}^{d}\to\mathbb{R}$ is some (possibly non-convex)
potential function that is available for queries. In most cases, the target
measure $\pi$ is only known up to the normalization constant. Applications of
sampling from $\pi$ include Bayesian statistics, high-dimensional integration,
differential privacy, statistical physics and uncertainty quantification; see
Gelman et al. (1995); Robert et al. (1999); MacKay (2003); Johannes & Polson
(2010); Von Toussaint (2011); Kobyzev et al. (2020); Chewi (2022) for thorough
treatments.
Recent interest in the task of sampling stems from the following paradigm:
sampling is nothing but optimization over the space of probability measures
(Wibisono, 2018). This interpretation is due to the connection between the
celebrated work of Jordan, Kinderlehrer, and Otto (Jordan et al., 1998) and the
Langevin diffusion dynamics given by
$\displaystyle\,{\textnormal{d}}X_{t}=-\nabla
V_{*}(X_{t})\,{\textnormal{d}}t+\sqrt{2}\,{\textnormal{d}}B_{t}\,,$ (1)
where $\,{\textnormal{d}}B_{t}$ is Brownian motion (this equation is to be
understood in the sense of Itô calculus). Indeed, the work of Jordan et
al. (1998) demonstrates that the path in the space of probability measures
given by the law of Eq. (1) is the same as the Wasserstein gradient flow (i.e.
steepest descent curve in the Wasserstein metric) of the Kullback-Leibler (KL)
divergence
$\displaystyle\text{KL}(\rho\|\pi)=\int\log\frac{\rho}{\pi}\,{\textnormal{d}}\rho\,.$
We write $(\rho_{t}^{\text{W}})_{t\geq 0}\subseteq\mathcal{P}(\mathbb{R}^{d})$
for the law of the path given by Eq. (1) (see Section 2.2.1 for a precise
definition).
A central problem in this area has been to bound the convergence rate of
$\rho_{t}^{\text{W}}$ to $\pi$ in certain similarity metrics (e.g. the KL
divergence itself, or the Wasserstein distance) under different conditions on
$\pi$. These bounds translate to convergence rates for the Langevin Monte
Carlo (LMC) sampling algorithm (Dalalyan & Tsybakov, 2012; Vempala & Wibisono,
2019; Durmus et al., 2021; Chewi et al., 2022), upon accounting for
discretization errors.
The classical result is as follows: assuming that $\pi$ satisfies a Log-
Sobolev inequality (LSI) with constant $C_{\texttt{LSI}}>0$, we obtain the
following convergence rate (Stam, 1959; Gross, 1975; Markowich & Villani,
1999)
$\displaystyle\text{KL}(\rho_{t}^{\text{W}}\|\pi)\leq\text{KL}(\rho_{0}^{\text{W}}\|\pi)e^{-\frac{2t}{C_{\texttt{LSI}}}}\,,$
(2)
which holds for all $t\geq 0$. Recall that $\pi$ satisfies an LSI if for all
smooth test functions $f$,
$\displaystyle\text{ent}_{\pi}(f^{2})\leq
2C_{\texttt{LSI}}\mathbb{E}_{\pi}\|\nabla f\|^{2}\,,$ (3)
where $\text{ent}_{\pi}(g)\coloneqq\mathbb{E}_{\pi}(g\log
g)-\mathbb{E}_{\pi}g\log\mathbb{E}_{\pi}g.$ For example, when $V_{*}$ is
$\alpha$-strongly convex, an LSI with $C_{\texttt{LSI}}=1/\alpha$ holds. LSIs
hold more generally, but sometimes with very large constants
$C_{\texttt{LSI}}$. Indeed, for multimodal distributions such as mixtures of
Gaussians, $C_{\texttt{LSI}}$ scales exponentially in the height of the
potential barrier between modes (Holley & Stroock, 1987; Arnold et al., 2000).
This impacts convergence at the discrete-time level, and thus hinders our
ability to generate samples using LMC.
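For completeness, we sketch the standard argument behind Eq. (2). Differentiating the KL divergence along the Wasserstein gradient flow gives minus the relative Fisher information, and applying the LSI (3) with the test function $f=\sqrt{\rho_{t}^{\text{W}}/\pi}$ (for which $\text{ent}_{\pi}(f^{2})=\text{KL}(\rho_{t}^{\text{W}}\|\pi)$ and $\mathbb{E}_{\pi}\|\nabla f\|^{2}=\tfrac{1}{4}\int\|\nabla\log(\rho_{t}^{\text{W}}/\pi)\|^{2}\,{\textnormal{d}}\rho_{t}^{\text{W}}$) yields
$\displaystyle\frac{\,{\textnormal{d}}}{\,{\textnormal{d}}t}\text{KL}(\rho_{t}^{\text{W}}\|\pi)=-\int\Big\|\nabla\log\frac{\rho_{t}^{\text{W}}}{\pi}\Big\|^{2}\,{\textnormal{d}}\rho_{t}^{\text{W}}\leq-\frac{2}{C_{\texttt{LSI}}}\text{KL}(\rho_{t}^{\text{W}}\|\pi)\,,$
and Grönwall's inequality then gives Eq. (2).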
Another geometry that gives rise to gradient flows over probability measures
is the Fisher-Rao (FR) geometry; see Section 2.2.2 for definitions. Similar to
the case of Wasserstein gradient flows, we let $(\rho_{t}^{\text{FR}})_{t\geq
0}$ be the FR gradient flow of the KL divergence. Recent work by Lu and
collaborators has shown that the convergence $\rho_{t}^{\text{FR}}\to\pi$
occurs at a rate that is independent of the potential function $V_{*}$. This
is in stark contrast to the case of Wasserstein gradient flows, where the rate
of convergence is intimately related to the structure of $V_{*}$ through the
LSI constant. In their first work, Lu et al. (2019) show that for any
$\delta\in(0,\tfrac{1}{4}]$ there exists a $t_{*}\gtrsim\log(\delta^{3})$ such
that for all $t\geq t_{*}$,
$\displaystyle\text{KL}(\rho_{t}^{\text{FR}}\|\pi)\leq\text{KL}(\rho_{0}^{\text{FR}}\|\pi)e^{-(2-3\delta)(t-t_{*})}\,,$
(4)
where they require a warm-start condition
$\text{KL}(\rho_{0}^{\text{FR}}\|\pi)\leq 1$, and assumption (B) (see Section
3). In Lu et al. (2022), the authors show that the KL divergence is always
contracting under $(\rho_{t}^{\text{FR}})_{t\geq 0}$ even in the absence of a
warm-start, though with a worse rate. Combined, these two results provide the
first continuous-time convergence rates of the gradient flow of the KL
divergence under the FR geometry to $\pi$.
Merging both these geometries gives rise to the well-defined Wasserstein-
Fisher-Rao (WFR) geometry. The WFR geometry has recently been used to analyse
the convergence dynamics of parameters of neural networks Chizat (2022), mean-
field games Rotskoff et al. (2019), and has been shown to be useful in statistical
tasks such as Gaussian variational inference Lambert et al. (2022), and
identifying parameters of a Gaussian mixture model Yan et al. (2023). In the
context of sampling, particle-based methods that follow dynamics governed by
WFR gradient flow of the KL, written $(\rho_{t}^{\text{WFR}})_{t\geq 0}$, are
known to escape the clutches of slow-convergence that plague the Wasserstein
geometry. A simple observation (Lu et al., 2022, Remark 2.4) gives the
following continuous-time convergence rate for $t\geq t_{*}$:
$\displaystyle\text{KL}(\rho_{t}^{\text{WFR}}\|\pi)\leq\min\\{\text{KL}(\rho_{t}^{\text{W}}\|\pi),\text{KL}(\rho_{t}^{\text{FR}}\|\pi)\\}\leq\text{KL}(\rho_{0}^{\text{WFR}}\|\pi)\min\left\\{e^{-\frac{2t}{C_{\texttt{LSI}}}},e^{-(2-3\delta)(t-t_{*})}\right\\}\,,$
(5)
where $\delta$ and $t_{*}$ are as in the FR convergence rate (4). Loosely
speaking, this “decoupled rate” is a consequence of the Wasserstein and FR
geometries being orthogonal to one another; this is made precise in Gallouët &
Monsaingeon (2017).
As elegant as this last connection may seem, the convergence rate in Eq. (4),
and consequently Eq. (5), should appear somewhat unsatisfactory to the reader.
It raises the natural question of whether or not the factor of $\delta$
appearing in the rate is avoidable, and whether the upper bound in Eq. (4) is
tight.
### 1.1 Main contributions
We close this gap for the KL divergence and any $q$-Rényi divergence. Using a
proof technique different from that of existing work, we prove the following
asymptotic rate of convergence for the flow $(\rho_{t}^{\text{FR}})_{t\geq
0}$, namely for $t$ sufficiently large,
$\displaystyle\text{KL}(\rho_{t}^{\text{FR}}\|\pi)=\tfrac{1}{2}\text{Var}_{\pi}\left(\log\frac{\rho_{0}^{\text{FR}}}{\pi}\right)e^{-2t}+O(e^{-3t})\,,$
(6)
and a similar result holds for all $q$-Rényi divergences. Our assumptions are
weaker than those of prior work, and given that this is a tight asymptotic
convergence rate, we conjecture that the assumptions are likely unavoidable in
the large $t$ regime. Our proof technique provides an explicit expansion of
$\text{KL}(\rho_{t}^{\text{FR}}\|\pi)$ (and $q$-Rényi) in terms of $e^{-t}$.
We supplement our finding with simulations for all three geometries,
indicating that our convergence rate is in fact tight for Fisher-Rao gradient
flows, and sheds light on possible conjectures for the convergence rate of WFR
gradient flows.
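To make Eq. (6) concrete, the following sketch (in the spirit of our synthetic experiments, though not identical to them) evaluates the FR gradient flow on a one-dimensional grid. It assumes the closed-form solution $\rho_{t}^{\text{FR}}\propto(\rho_{0}^{\text{FR}})^{e^{-t}}\pi^{1-e^{-t}}$, the geometric interpolation underlying the simulated-annealing analogy used in our proof, and compares $\text{KL}(\rho_{t}^{\text{FR}}\|\pi)$ against the leading term $\frac{1}{2}\text{Var}_{\pi}(\log\frac{\rho_{0}^{\text{FR}}}{\pi})e^{-2t}$; the specific bimodal target and initialization are our own choices:

```python
import numpy as np

x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]
normalize = lambda p: p/(p.sum()*dx)

# Bimodal target pi and a unimodal initialization rho_0:
pi   = normalize(np.exp(-(x - 3)**2/2) + np.exp(-(x + 3)**2/2))
rho0 = normalize(np.exp(-x**2/2))

kl = lambda p, q: np.sum(p*np.log(p/q))*dx

g = np.log(rho0/pi)
var = np.sum(pi*(g - np.sum(pi*g)*dx)**2)*dx   # Var_pi(log(rho_0 / pi))

for t in (2.0, 4.0, 6.0):
    lam = np.exp(-t)
    rho_t = normalize(rho0**lam * pi**(1 - lam))  # assumed closed form
    print(t, kl(rho_t, pi), 0.5*var*lam**2)       # ratio -> 1 as t grows
```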
#### Notation
For a probability measure $\rho\in\mathcal{P}(\mathbb{R}^{d})$ and a function
$f:\mathbb{R}^{d}\to\mathbb{R}$, we sometimes use the shorthand $\langle
f\rangle_{\rho}\coloneqq\int f\,{\textnormal{d}}\rho$. We let $\log(\cdot)$
denote the natural logarithm, and we use the standard shorthand notation
$f=O(g)$, meaning there exists a constant $C>0$ such that $f\leq Cg$.
## 2 Background
### 2.1 Definitions
The study of gradient flows has a rich history in both pure and applied
mathematics. The development of the relevant calculus to understand gradient
flows is not the purpose of this note, and we instead provide a barebones
introduction. However, we strongly recommend the interested reader consult
standard textbooks on the topic, namely Ambrosio et al. (2005), and the first
chapter of Chewi (2022).
Let $\mathcal{P}(\mathbb{R}^{d})$ be the space of probability measures over
$\mathbb{R}^{d}$, and let $\mathcal{F}:\mathcal{P}(\mathbb{R}^{d})\to\mathbb{R}$ be a functional on this space, $\rho\mapsto\mathcal{F}(\rho)$. We call ${\delta\mathcal{F}}(\rho)$ the first variation of $\mathcal{F}$ at $\rho$ if, for every signed measure $\eta$ with $\int\,{\textnormal{d}}\eta=0$, it holds that
$\displaystyle\lim_{\varepsilon\to
0}\frac{\mathcal{F}(\rho+\varepsilon\eta)-\mathcal{F}(\rho)}{\varepsilon}=\int{\delta\mathcal{F}}(\rho)\,{\textnormal{d}}\eta\,.$
(7)
The Kullback-Leibler (KL) divergence of a measure $\rho$ with respect to some
fixed target measure $\pi$ is defined as
$\text{KL}(\rho\|\pi)=\int\log\frac{\rho}{\pi}\,{\textnormal{d}}\rho$ for
$\rho$ absolutely continuous with respect to $\pi$. For $\pi\propto
e^{-V_{*}}$, the first variation of the KL divergence is given by
$\displaystyle{\delta\text{KL}(\cdot\|\pi)}(\rho)(x)=\log\frac{\rho(x)}{\pi(x)}=\log\rho(x)+V_{*}(x)+\log
Z_{1}\,,$ (8)
where $Z_{1}$ is the normalizing constant for $\pi$.
A more general notion of dissimilarity between probability measures is the
$q$-Rényi divergence: for $q\in[1,\infty]$, we define
$\mathcal{R}_{q}(\rho\|\pi)$ to be the $q$-Rényi divergence with respect to
$\pi$, given by
$\displaystyle\mathcal{R}_{q}(\rho\|\pi)\coloneqq\frac{1}{q-1}\log\int\left(\frac{\rho}{\pi}\right)^{q}\,{\textnormal{d}}\pi\,,$
(9)
for measures $\rho$ that are absolutely continuous with respect to $\pi$.
$\mathcal{R}_{q}$ recovers the KL divergence in the limit $q\to 1$, and when
$q=2$, $\mathcal{R}_{2}(\rho\|\pi)=\log(\chi^{2}(\rho\|\pi)+1)$, where
$\chi^{2}$ is the chi-squared divergence, written explicitly as
$\displaystyle\chi^{2}(\rho\|\pi)=\text{Var}_{\pi}\left(\frac{\rho}{\pi}\right)=\int\left(\frac{\rho}{\pi}\right)^{2}\,{\textnormal{d}}\pi-1\,.$
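To make these definitions concrete, the following is a minimal numerical sketch (our own illustration, not from the original development) that evaluates the three divergences for densities tabulated on a one-dimensional grid; the grid, the Gaussian example, and all function names are our choices. It checks the two identities just stated: $\mathcal{R}_{2}=\log(\chi^{2}+1)$ and $\mathcal{R}_{q}\to\text{KL}$ as $q\to 1$.

```python
import numpy as np

def kl(rho, pi, x):
    # KL(rho || pi) = \int log(rho/pi) d rho, via a Riemann sum on the grid x.
    dx = x[1] - x[0]
    return np.sum(rho * np.log(rho / pi)) * dx

def renyi(rho, pi, x, q):
    # R_q(rho || pi) = (1/(q-1)) log \int (rho/pi)^q d pi.
    dx = x[1] - x[0]
    return np.log(np.sum((rho / pi) ** q * pi) * dx) / (q - 1)

def chi2(rho, pi, x):
    # chi^2(rho || pi) = \int (rho/pi)^2 d pi - 1.
    dx = x[1] - x[0]
    return np.sum((rho / pi) ** 2 * pi) * dx - 1.0

# Example: rho = N(1, 1), pi = N(0, 1), for which KL = 1/2.
x = np.linspace(-10, 10, 20001)
rho = np.exp(-0.5 * (x - 1.0) ** 2) / np.sqrt(2 * np.pi)
pi = np.exp(-0.5 * x ** 2) / np.sqrt(2 * np.pi)
assert abs(renyi(rho, pi, x, 2.0) - np.log(chi2(rho, pi, x) + 1.0)) < 1e-6
assert abs(renyi(rho, pi, x, 1.0 + 1e-6) - kl(rho, pi, x)) < 1e-3
```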
### 2.2 Gradient flows of the Kullback-Leibler divergence
#### 2.2.1 Wasserstein gradient flow
In its dynamic formulation, the 2-Wasserstein distance between two probability
measures $\rho_{0},\rho_{1}$ with bounded second moments can be written as
(Villani, 2008; Benamou & Brenier, 2000)
$\displaystyle\mathrm{W}_{2}^{2}(\rho_{0},\rho_{1})\coloneqq\inf_{(\rho_{t},v_{t})}\int_{0}^{1}\int\|v_{t}(x)\|^{2}\rho_{t}(x)\,{\textnormal{d}}x\,{\textnormal{d}}t\quad\text{s.t.}\quad\partial_{t}\rho_{t}+\nabla\cdot(\rho_{t}v_{t})=0\,,$
(10)
where $(\rho_{t})_{t\in[0,1]}$ is a curve of probability densities over
$\mathbb{R}^{d}$, and $(v_{t})_{t\in[0,1]}$ is a curve of
$L^{2}(\mathbb{R}^{d})^{d}$ vector fields. The constraint is known as the
continuity equation, with endpoints $\rho_{0}$ and $\rho_{1}$. For a
functional $\mathcal{F}:\mathcal{P}(\mathbb{R}^{d})\to\mathbb{R}$, the
Wasserstein gradient flow is the curve of measures
$(\rho_{t}^{\text{W}})_{t\geq 0}$ that satisfies the continuity equation with
the vector field replaced by the steepest descent under the Wasserstein
geometry,
$\displaystyle
v_{t}=-\nabla_{W_{2}}\mathcal{F}(\rho_{t}^{\text{W}})\coloneqq\nabla{\delta\mathcal{F}}(\rho_{t}^{\text{W}})\,,$
where the last equation is simply the (standard) spatial gradient of the first
variation of $\mathcal{F}$. Plugging in the expression for the first variation
of the KL divergence (8), we see that the law of the Langevin diffusion is
given by $\rho_{t}^{\text{W}}$ which satisfies
$\displaystyle\partial_{t}\rho_{t}^{\text{W}}=\nabla\cdot\left(\rho_{t}^{\text{W}}(\nabla\log\rho_{t}^{\text{W}}+\nabla
V_{*})\right)\,.$ (11)
This equation may be rewritten as $\partial_{t}\rho_{t}^{\text{W}}=\nabla\cdot(\nabla V_{*}\rho_{t}^{\text{W}})+\Delta\rho_{t}^{\text{W}}$, which one readily identifies as the Fokker-Planck equation for the potential $V_{*}$. The equation describes the evolution of the distribution of a particle that moves according to the stochastic differential equation in Eq. 1. At the particle level, the key aspect of
Wasserstein gradient flows is that they model particle transport, and that
makes them useful for high-dimensional applications such as LMC. In what
follows, we will sometimes abbreviate Wasserstein gradient flow to W-GF.
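At this particle level, the Langevin diffusion can be simulated directly. The following is a small sketch (our own, with an illustrative double-well potential that is not taken from this paper) of the Euler-Maruyama discretization whose particle law evolves according to the Fokker-Planck equation in Eq. 11.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative double-well potential V(x) = x^4 - 2x^2 (our choice).
def grad_V(x):
    return 4.0 * x**3 - 4.0 * x

# Euler-Maruyama discretization of dX_t = -grad V(X_t) dt + sqrt(2) dB_t;
# the law of X_t solves the Fokker-Planck equation (Eq. 11).
n_particles, eps, n_steps = 10_000, 1e-3, 20_000
x = rng.normal(size=n_particles)       # particles drawn from rho_0 = N(0, 1)
for _ in range(n_steps):
    x += -eps * grad_V(x) + np.sqrt(2.0 * eps) * rng.normal(size=n_particles)
# A histogram of x now approximates pi \propto exp(-V).
```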
#### 2.2.2 Fisher-Rao gradient flow
The Fisher-Rao distance, or Hellinger-Kakutani distance, between probability
measures has a long history in statistics and information theory (Hellinger,
1909; Kakutani, 1948). It can be defined as (Bogachev, 2007; Gallouët &
Monsaingeon, 2017)
$\displaystyle\mathrm{FR}^{2}(\rho_{0},\rho_{1})\coloneqq\inf_{(\rho_{t},r_{t})}\int_{0}^{1}\int
r_{t}(x)^{2}\rho_{t}(x)\,{\textnormal{d}}x\,{\textnormal{d}}t\quad\text{s.t.}\quad\partial_{t}\rho_{t}=r_{t}\rho_{t}\,,$
where $(\rho_{t})_{t\in[0,1]}$ is again a curve of probability measures, and
$(r_{t})_{t\in[0,1]}$ is a curve of $L^{2}(\mathbb{R}^{d})$ functions.
Together, they satisfy the prescribed equation, with endpoints equal to
$\rho_{0}$ and $\rho_{1}$. The Fisher-Rao gradient flow of the KL divergence,
also known as Birth-Death dynamics, is the curve of measures
$(\rho_{t}^{\text{FR}})_{t\geq 0}$ that satisfies (Gallouët & Monsaingeon,
2017; Lu et al., 2019)
$\displaystyle\partial_{t}\rho^{\text{FR}}_{t}=-\rho^{\text{FR}}_{t}\alpha_{t}\,,\quad\alpha_{t}\coloneqq\log\frac{\rho^{\text{FR}}_{t}}{\pi}-\text{KL}(\rho^{\text{FR}}_{t}\|\pi)\,.$
The first term adjusts mass (i.e. gives birth to or kills mass) according to
the log-ratio of $\rho_{t}^{\text{FR}}$ and the target measure $\pi$. The last
term preserves the total mass, so that
$\rho_{t}^{\text{FR}}\in\mathcal{P}(\mathbb{R}^{d})$ for all time.
Expanding this equation, we have
$\displaystyle\partial_{t}\rho_{t}^{\text{FR}}(x)=-\big{(}\log(\rho_{t}^{\text{FR}}(x))+V_{*}(x)-\big{\langle}\log(\rho_{t}^{\text{FR}})+V_{*}\big{\rangle}_{\rho_{t}^{\text{FR}}}\big{)}\rho_{t}^{\text{FR}}(x).$
(12)
We henceforth omit the superscript FR for the Fisher-Rao gradient flow of the
KL divergence unless the notation becomes ambiguous. As shorthand, we use the abbreviation FR-GF for Fisher-Rao gradient flows.
The FR-GF may be simulated using a system of weighted particles (see Appendix
B). Unlike for the W-GF, in this case the positions of the particles are
fixed; only the weights change over time. Hence, to simulate the FR-GF one is
forced to grid the underlying space $\mathbb{R}^{d}$. This is feasible only
for small dimensions $d$. Consequently, FR-GFs cannot be simulated in high
dimensions, which makes them impractical for sampling applications.
#### 2.2.3 Wasserstein-Fisher-Rao geometry gradient flow
The Wasserstein-Fisher-Rao distance between probability measures arises as a
combination of the Wasserstein and the Fisher-Rao distances (Chizat et al.,
2018; 2015; Kondratyev et al., 2016; Liero et al., 2016; 2018). It is defined
as
$\displaystyle\mathrm{WFR}^{2}(\rho_{0},\rho_{1})\coloneqq\inf_{(\rho_{t},v_{t},r_{t})}\int_{0}^{1}\int(\|v_{t}(x)\|^{2}+r_{t}(x)^{2})\rho_{t}(x)\,{\textnormal{d}}x\,{\textnormal{d}}t\quad\text{s.t.}\quad\partial_{t}\rho_{t}+\nabla\cdot(\rho_{t}v_{t})=r_{t}\rho_{t}\,,$
where, for each $t\in[0,1]$, the triple $(\rho_{t},v_{t},r_{t})$ lives in
$\mathcal{P}(\mathbb{R}^{d})\times L^{2}(\mathbb{R}^{d})^{d}\times
L^{2}(\mathbb{R}^{d}),$ and they simultaneously satisfy the constraint equation, which again has endpoints $\rho_{0}$ and $\rho_{1}$. Similarly, the Wasserstein-Fisher-Rao gradient flow of the KL divergence is the solution of the PDE that incorporates the terms in the Wasserstein and Fisher-Rao gradient
flows (Eq. 11 and Eq. 12):
$\displaystyle\partial_{t}\rho_{t}^{\text{WFR}}=\nabla\cdot\left(\rho_{t}^{\text{WFR}}(\nabla\log\rho_{t}^{\text{WFR}}+\nabla
V_{*})\right)-\big{(}\log(\rho_{t}^{\text{WFR}})+V_{*}-\big{\langle}\log(\rho_{t}^{\text{WFR}})+V_{*}\big{\rangle}_{\rho_{t}^{\text{WFR}}}\big{)}\rho_{t}^{\text{WFR}}$
(13)
Similar to the other geometries, we write WFR-GF as shorthand for Wasserstein-Fisher-Rao gradient flow. At the particle level, WFR-GFs are able to capture
both _transport_ and _weight updates_ , which is why they enjoy a convergence
rate that at least matches the better rate between W- and FR-GFs (recall Eq.
5), and is clearly superior in practice in some instances. Hence, any
improvement in the convergence analysis of either W- or FR-GFs translates to
improving our understanding of WFR-GFs.
### 2.3 Simulated annealing dynamics
Simulated annealing is a technique used in many works to either optimize a function or sample from a multimodal probability distribution; it has a long history (Pincus, 1970; Kirkpatrick et al., 1983) and plays a crucial role in our analysis. In what follows, we introduce the annealing path with linear scaling, and conclude with a proposition.
Consider the time-dependent measure $(\mu_{\tau})_{\tau\in[0,1]}$
corresponding to the annealing path, with linear scaling, initialized at the
measure $\mu_{0}=\rho_{0}\propto e^{-V_{0}}$. By definition, $\mu_{\tau}$
admits the density
$\displaystyle\mu_{\tau}(x)=\frac{e^{-\tau(V_{*}(x)-V_{0}(x))-V_{0}(x)}}{Z_{\tau}},\quad
Z_{\tau}=\int_{\mathbb{R}^{d}}e^{-\tau(V_{*}(x)-V_{0}(x))-V_{0}(x)}\,dx,$ (14)
for $\tau\in[0,1]$. Note that indeed $\mu_{1}=\pi$. In what follows, it will be convenient to rewrite Eq. 14 in terms of the log-density of $\mu_{\tau}$. Remark that
$\displaystyle\log(\mu_{\tau}(x))=-\tau(V_{*}(x)-V_{0}(x))-V_{0}(x)-\log
Z_{\tau}\,.$ (15)
One can check that the pointwise derivative of the density $\mu_{\tau}$ (with
respect to $\tau$) is
$\displaystyle\partial_{\tau}\mu_{\tau}(x)=-(V_{*}(x)-V_{0}(x)-\langle
V_{*}-V_{0}\rangle_{\mu_{\tau}})\mu_{\tau}(x)\,.$ (16)
From this, we obtain that
$\displaystyle\begin{split}&\log(\mu_{\tau}(x))+V_{*}(x)-\big{\langle}\log(\mu_{\tau})+V_{*}\big{\rangle}_{\mu_{\tau}}\\\
&=-\tau(V_{*}(x)-V_{0}(x))-V_{0}(x)-\langle-\tau(V_{*}-V_{0})-V_{0}\rangle_{\mu_{\tau}}+V_{*}(x)-\langle V_{*}\rangle_{\mu_{\tau}}\\\
&=-\tau(V_{*}(x)-V_{0}(x))-V_{0}(x)+V_{*}(x)-\langle-\tau(V_{*}-V_{0})-V_{0}+V_{*}\rangle_{\mu_{\tau}}\\\
&=(1-\tau)\big{(}V_{*}(x)-V_{0}(x)\big{)}-(1-\tau)\langle V_{*}-V_{0}\rangle_{\mu_{\tau}}\\\ &=(1-\tau)\big{(}V_{*}(x)-V_{0}(x)-\langle V_{*}-V_{0}\rangle_{\mu_{\tau}}\big{)}\,.\end{split}$ (17)
Note that in the first equality we used that the log-partition is a constant
and gets cancelled. Consequently, Eq. 16 can be rewritten, for $\tau\in(0,1)$,
as
$\displaystyle\partial_{\tau}\mu_{\tau}(x)=-\frac{1}{1-\tau}\big{(}\log(\mu_{\tau}(x))+V_{*}(x)-\big{\langle}\log(\mu_{\tau})+V_{*}\big{\rangle}_{\mu_{\tau}}\big{)}\mu_{\tau}(x).$
(18)
A first observation is that the linear schedule $\tau$ in the exponent of
Eq. 14 results in dynamics that resemble the Fisher-Rao gradient flow of the
KL divergence, up to a reparameterization that can be made explicit. Indeed,
if one compares Eq. 18 with Eq. 12, the only difference is the factor
$\frac{1}{1-\tau}$ in the right-hand side of Eq. 18. Since the solution of the
Fisher-Rao gradient flow of the KL divergence is unique (see Proposition 4 in
Appendix A), an appropriate time reparameterization of the annealed dynamics
(14) will yield the solution (12). We summarize this observation in the
following proposition, which we were unable to find a citation for in the
literature.
###### Proposition 1.
Let $(\mu_{\tau})_{\tau\in[0,1]}$ be as defined in Eq. 14. The Fisher-Rao
gradient flow $(\rho_{t})_{t\geq 0}$ of $\text{KL}(\rho\|\pi)$ (i.e. solving
Eq. 12) is given by $\rho_{t}=\mu_{1-e^{-t}}$.
###### Proof.
If we write $t$ as a function of $\tau$, we have that
$\displaystyle\partial_{\tau}\rho_{t(\tau)}=\partial_{t}\rho_{t(\tau)}\frac{dt}{d\tau}(\tau)=-\frac{dt}{d\tau}(\tau)\big{(}\log(\rho_{t(\tau)}(x))+V_{*}(x)-\big{\langle}\log(\rho_{t(\tau)})+V_{*}\big{\rangle}_{\rho_{t(\tau)}}\big{)}\rho_{t(\tau)}(x).$
(19)
Identifying $\rho_{t(\tau)}$ with $\mu_{\tau}$, and establishing a direct comparison with Eq. 18, we obtain that for Eq. 19 to hold, $t(\tau)$ must fulfill $\frac{dt}{d\tau}(\tau)=\frac{1}{1-\tau}$. With the initial condition $t(0)=0$, this differential equation has the following unique solution:
$\displaystyle t(\tau)=\int_{0}^{\tau}\frac{1}{1-s}\,ds=-\log(1-\tau)\,.$ (20)
That is, we have that $t(\tau)=-\log(1-\tau)$, or equivalently,
$\tau(t)=1-e^{-t}$. ∎
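Proposition 1 is also straightforward to verify numerically. The sketch below is our own check (the grid, the choice $V_{*}=V_{1}$ and $\rho_{0}=\pi_{a}$ from Section 4, and the step sizes are our choices): it builds $\mu_{\tau}$ from Eq. 14 on a grid, sets $\rho_{t}=\mu_{1-e^{-t}}$, and compares a centered finite difference of $\partial_{t}\rho_{t}$ against the right-hand side of Eq. 12.

```python
import numpy as np

# Grid over [-pi, pi); target V_* = V_1 and rho_0 = pi_a from Section 4.
x = np.linspace(-np.pi, np.pi, 2000, endpoint=False)
dx = x[1] - x[0]
V_star = 2.5 * np.cos(2 * x) + 0.5 * np.sin(x)
V_0 = -V_star

def mu(tau):
    # Annealing path of Eq. 14 with the linear schedule, normalized on the grid.
    u = np.exp(-tau * (V_star - V_0) - V_0)
    return u / (u.sum() * dx)

t, dt = 1.0, 1e-6
rho = mu(1.0 - np.exp(-t))                      # rho_t = mu_{1 - e^{-t}}
lhs = (mu(1.0 - np.exp(-(t + dt))) - mu(1.0 - np.exp(-(t - dt)))) / (2 * dt)
g = np.log(rho) + V_star
rhs = -(g - (g * rho).sum() * dx) * rho         # right-hand side of Eq. 12
print(np.max(np.abs(lhs - rhs)))                # near zero: finite-difference error only
```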
### 2.4 Cumulants and their power series
Our core argument hinges on a relation between the above gradient flows and the cumulants of a random variable. Recall that for a random variable $Y$, its cumulant-generating function is $K_{Y}(z)=\log\mathbb{E}[e^{Yz}]$. The $n^{\text{th}}$ cumulant $\kappa_{n}$ of $Y$ is defined as the $n^{\text{th}}$ derivative of $K_{Y}$ evaluated at $z=0$, that is, $\kappa_{n}=K^{(n)}_{Y}(0)$. Similar to moment-generating functions, if $K_{Y}(z)$ is finite on some neighborhood $(-\epsilon_{0},\epsilon_{0})$ of the origin, then $K_{Y}$ is smooth (in fact, holomorphic) there (see e.g. (Shiryaev, 1984, Section II.12.8)). Moreover, $K_{Y}(z)$ admits the following infinite series expansion
$\displaystyle K_{Y}(z)=\sum_{n\geq 1}\frac{\kappa_{n}}{n!}z^{n}\,.$
In particular, one can easily check that $\kappa_{1}=\mathbb{E}[Y]$ and
$\kappa_{2}=\text{Var}(Y)$.
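For intuition, cumulants are also easy to estimate from samples: `scipy.stats.kstat` returns unbiased estimates of the first four. A small sanity check of our own (the exponential example is illustrative), using the fact that an exponential with scale $\theta$ has $\kappa_{n}=\theta^{n}(n-1)!$:

```python
import numpy as np
from scipy.stats import kstat

rng = np.random.default_rng(0)
y = rng.exponential(scale=2.0, size=200_000)

# For scale theta = 2: kappa_1..kappa_4 = 2, 4, 16, 96.
for n in range(1, 5):
    print(n, kstat(y, n))

# In particular, kappa_1 is the sample mean and kappa_2 the sample variance.
```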
## 3 Main result
The goal of this section is to prove our main result, which is an explicit
expansion of the KL divergence in terms of log-cumulants of the random
variable $\log\frac{\rho_{0}(X)}{\pi(X)}$ where $X\sim\pi$. We make the
following assumptions throughout, and we will make their uses explicit when
necessary.
(A1) $V_{*}\in L_{1}(\pi)$,
(A2) There exists $\alpha\in\mathbb{R}_{+}$, such that
$\inf_{x}\frac{\rho_{0}(x)}{\pi(x)^{1+\alpha}}>0$.
Assumption (A1) ensures that $\pi$ has finite differential entropy, and is a
relatively weak condition. (A2) asks that at least some mass is initially
placed along the support of $\pi$. (A2) is, however, a much weaker assumption than what is currently used in the literature. To be precise, Lu et al. (2019;
2022) assume a particular case of (A2), namely
(B) There exists $M>0$ such that $\inf_{x}\frac{\rho_{0}(x)}{\pi(x)}\geq
e^{-M}$.
This is essentially the same as (A2), though $\alpha$ is constrained to be 0,
and a precise lower bound on the infimum is needed. Note that (A2) is weaker
the larger $\alpha$ is, as $\pi(x)^{1+\alpha}$ decreases faster. As a
comparison, if $\rho_{0}$ and $\pi$ are Gaussians, (A2) covers the setting
where both have arbitrary means and covariances, while constraining $\alpha=0$
only covers the cases in which the covariance matrix of $\rho_{0}$ is strictly
larger than the one of $\pi$ in the positive definite order.
The following theorem is our main contribution. While here we have stated an
asymptotic expression, in fact a more general expression is available as an
infinite power series under the same assumptions, and appears explicitly in
the proof.
###### Theorem 1.
Suppose (A1) and (A2) hold. Then for $t$ large enough and any
$q\in(1,\infty)$,
$\displaystyle\text{KL}(\rho_{t}\|\pi)=\frac{\kappa_{2}}{2}e^{-2t}+O(e^{-3t})\,,\quad\text{
and
}\quad\mathcal{R}_{q}(\rho_{t}\|\pi)=\frac{q\kappa_{2}}{2}e^{-2t}+O_{q}(e^{-3t})\,,$
(21)
where $\kappa_{2}=\text{Var}_{\pi}\left(\log\frac{\rho_{0}}{\pi}\right)$.
###### Remark 1.
The coefficient $\kappa_{2}$ is nothing more than the variance under $\pi$ of
the first-variation of the KL divergence at $\rho_{0}$ (recall Eq. 8).
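Before the proof, the rate in Eq. 21 can be confirmed numerically in a simple case (a sketch of our own; all choices below are illustrative). For $\pi=\mathcal{N}(0,1)$ and $\rho_{0}=\mathcal{N}(1,1)$ we have $\log\frac{\rho_{0}}{\pi}(x)=x-\tfrac{1}{2}$, so $\kappa_{2}=\text{Var}_{\pi}(x-\tfrac{1}{2})=1$, and $e^{2t}\,\text{KL}(\rho_{t}\|\pi)$ should approach $\kappa_{2}/2=0.5$. Using the annealing-path representation from Proposition 1:

```python
import numpy as np

x = np.linspace(-12, 12, 40001)
dx = x[1] - x[0]
V_star = x**2 / 2            # pi    = N(0, 1)
V_0 = (x - 1.0) ** 2 / 2     # rho_0 = N(1, 1), so kappa_2 = 1

def rho_t(t):
    tau = 1.0 - np.exp(-t)                     # Proposition 1
    u = np.exp(-tau * (V_star - V_0) - V_0)    # annealing path, Eq. 14
    return u / (u.sum() * dx)

pi = np.exp(-V_star); pi /= pi.sum() * dx
for t in [2.0, 4.0, 6.0]:
    r = rho_t(t)
    kl = np.sum(r * np.log(r / pi)) * dx
    print(t, np.exp(2 * t) * kl)   # -> kappa_2/2 = 0.5 (here exactly, since
                                   # all cumulants of Y beyond the second vanish)
```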
### 3.1 Proof
Henceforth, we will always write
$\displaystyle Y\coloneqq\log\frac{\rho_{0}(X)}{\pi(X)}\text{ where
}X\sim\pi\,.$ (22)
The first step in our proof is to represent these divergences as a function of
the cumulants of $Y$, which is possible due to the aforementioned time-
reparameterization of the FR flow.
###### Proposition 2.
Let $\pi\propto e^{-V_{*}}$ and $\rho_{0}\propto e^{-V_{0}}$ be probability
measures on $\mathbb{R}^{d}$, and let $Y$ be as in Eq. 22. Let $(\mu_{\tau})_{\tau\in[0,1]}$ follow the simulated annealing dynamics from Eq. 14. It holds that
$\displaystyle\text{KL}(\mu_{\tau}\|\pi)=(1-\tau)K_{Y}^{\prime}(1-\tau)-K_{Y}(1-\tau)\,,$
(23)
$\displaystyle\mathcal{R}_{q}(\mu_{\tau}\|\pi)=\frac{1}{q-1}K_{Y}(q(1-\tau))-\frac{q}{q-1}K_{Y}(1-\tau)\,.$
(24)
###### Proof.
We first identify the following relationship, which arises from a simple manipulation of Eq. 14:
$\displaystyle\log Z_{\tau}=K_{Y}(1-\tau)+\log Z_{1}\,.$ (25)
Using this expression, we can expand the KL divergence between $\mu_{\tau}$
and $\pi$ as follows:
$\displaystyle\text{KL}(\mu_{\tau}\|\pi)$
$\displaystyle=\int\log\frac{\mu_{\tau}}{\pi}\,{\textnormal{d}}\mu_{\tau}=\int\log\left(\frac{e^{-\tau(V_{*}-V_{0})-V_{0}}Z_{\tau}^{-1}}{e^{-V_{*}}Z_{1}^{-1}}\right)\,{\textnormal{d}}\mu_{\tau}$
$\displaystyle=\log Z_{1}-\log Z_{\tau}+(1-\tau)\langle
V_{*}-V_{0}\rangle_{\mu_{\tau}}$ $\displaystyle=(1-\tau)\langle
V_{*}-V_{0}\rangle_{\mu_{\tau}}-K_{Y}(1-\tau)\,.$
Another fact about cumulant-generating functions that we can exploit is the following differential relationship:
$\displaystyle-\langle V_{*}-V_{0}\rangle_{\mu_{\tau}}=\frac{\,{\textnormal{d}}}{\,{\textnormal{d}}\tau}\log Z_{\tau}=-K_{Y}^{\prime}(1-\tau)\,.$
(26)
Altogether, this gives
$\displaystyle\text{KL}(\mu_{\tau}\|\pi)=(1-\tau)K_{Y}^{\prime}(1-\tau)-K_{Y}(1-\tau)\,.$
(27)
The general $q$-Rényi case is deferred to the appendix, where the computation
is similar. ∎
The following proposition uses both (A1) and (A2) to establish that $K_{Y}(z)$ is finite on a ball $B_{\epsilon_{0}}(0)$ around the origin, which implies that $K_{Y}$ admits the series expansion we will require in the sequel. The proof is deferred to the appendix.
###### Proposition 3.
Suppose (A1) and (A2) are satisfied. Then there exists some constant $\epsilon_{0}>0$ such that the cumulant-generating function of $Y$, $K_{Y}(z)=\log\mathbb{E}[e^{Yz}]$, is finite on the ball $B_{\epsilon_{0}}(0)$. Moreover, inside this ball, $K_{Y}(z)$ is holomorphic and we have the series expansion
$\displaystyle K_{Y}(z)=\sum_{n\geq 1}\frac{\kappa_{n}}{n!}z^{n}\,.$ (28)
We conclude with the proof of our main result.
###### Proof of Theorem 1.
We begin with the expression of the KL divergence. Note that since $K_{Y}(z)$
is smooth for $z$ sufficiently close to the origin, it holds that
$\displaystyle K_{Y}^{\prime}(z)=\sum_{n\geq
1}\frac{\kappa_{n}}{(n-1)!}z^{n-1}\,.$
Using the parameterization of Eq. 27 and the series expansion for
$K_{Y}^{\prime}(1-\tau)$, our expression for $\text{KL}(\mu_{\tau}\|\pi)$
reads
$\displaystyle\text{KL}(\mu_{\tau}\|\pi)$ $\displaystyle=(1-\tau)\sum_{n\geq
1}\frac{\kappa_{n}}{(n-1)!}(1-\tau)^{n-1}-\sum_{n\geq
1}\frac{\kappa_{n}}{n!}(1-\tau)^{n}$ $\displaystyle=\sum_{n\geq
1}\kappa_{n}\left(\frac{n}{n!}-\frac{1}{n!}\right)(1-\tau)^{n}$
$\displaystyle=\sum_{n\geq 2}\frac{\kappa_{n}}{n(n-2)!}(1-\tau)^{n}\,.$
Expanding the relation and replacing $\tau(t)=1-e^{-t}$ gives
$\displaystyle\text{KL}(\rho_{t}\|\pi)=\frac{\kappa_{2}}{2}e^{-2t}+\sum_{n\geq
3}\frac{\kappa_{n}}{n(n-2)!}e^{-nt}\,.$
We now do the same manipulations for $\mathcal{R}_{q}(\mu_{\tau}\|\pi)$.
$\displaystyle\mathcal{R}_{q}(\mu_{\tau}\|\pi)$
$\displaystyle=\frac{1}{q-1}\sum_{n\geq
1}\frac{\kappa_{n}}{n!}(q(1-\tau))^{n}-\frac{q}{q-1}\sum_{n\geq
1}\frac{\kappa_{n}}{n!}(1-\tau)^{n}$
$\displaystyle=\frac{1}{q-1}\left(q\kappa_{1}(1-\tau)+\sum_{n\geq 2}q^{n}\frac{\kappa_{n}}{n!}(1-\tau)^{n}\right)-\frac{q}{q-1}\left(\kappa_{1}(1-\tau)+\sum_{n\geq 2}\frac{\kappa_{n}}{n!}(1-\tau)^{n}\right)$ $\displaystyle=\sum_{n\geq 2}\frac{q^{n}-q}{q-1}\frac{\kappa_{n}}{n!}(1-\tau)^{n}\,.$
Substituting $\tau(t)=1-e^{-t}$ and expanding out the first term yields
$\displaystyle\mathcal{R}_{q}(\rho_{t}\|\pi)=q\frac{\kappa_{2}}{2}e^{-2t}+\sum_{n\geq
3}\frac{q^{n}-q}{q-1}\frac{\kappa_{n}}{n!}e^{-nt}\,.$
Our proof concludes by taking the limit $t\to\infty$, which we fully justify
in the appendix (Lemma 2). ∎
## 4 Numerical simulations
We present simple numerical simulations that demonstrate our asymptotic convergence rate of the KL divergence along FR gradient flows, as well as a comparison with the WFR- and W-GFs. We consider two target distributions over the set $[-\pi,\pi)$, each with two initializations:
1. Target distribution $\pi_{1}$: We set $\pi_{1}\propto e^{-V_{1}}$ with $V_{1}(x)=2.5\cos(2x)+0.5\sin(x)$. This distribution has two modes with different weights and has been studied previously by Lu et al. (2019). We consider two initial distributions:
(a) $\pi_{a}\propto e^{-V_{a}}$ with $V_{a}=-V_{1}$, which has two modes in locations where $\pi_{1}$ has little mass.
(b) $\pi_{b}\propto e^{-V_{b}}$ with $V_{b}=2.5\cos(2x)$, which has two modes in almost the same positions as $\pi_{1}$, but with equal weight.
2. Target distribution $\pi_{2}$: We set $\pi_{2}\propto e^{-V_{2}}$ with $V_{2}(x)=-6\cos(x)$. This distribution has one mode. We consider two initial distributions:
(c) $\pi_{c}\propto e^{-V_{c}}$ with $V_{c}=-V_{2}$, which has one mode in a location where $\pi_{2}$ has little mass.
(d) $\pi_{d}\propto e^{-V_{d}}$ with $V_{d}=0$, which is the uniform distribution.
Figure 1: Energies of the target and initial distributions.
Fig. 1 shows the target energies $V_{1}$, $V_{2}$ and the initial energies
$V_{a}$, $V_{b}$, $V_{c}$, $V_{d}$ introduced above. Fig. 2 shows the
evolution of the KL divergence along the FR, WFR and W gradient flows. It also
contains plots of the dominant term $\frac{\kappa_{2}}{2}e^{-2t}$ of the
approximation of the KL divergence decay for FR flows (see Theorem 1),
displayed as dotted lines. Table 1 shows the slopes of each curve from Fig. 2,
at large times (see Appendix B for details on the computation of slopes).
Figure 2: Evolution of the KL divergence with respect to $\pi_{1}$ (_left_) and $\pi_{2}$ (_right_) along their respective FR (_solid lines_), WFR (_dash-dotted
lines_) and W (_dashed lines_) gradient flows. Each plot contains flows
initialized at two probability measures: in the left plot these are $\pi_{a}$
(_blue_ , top curves at $t=0$) and $\pi_{b}$ (_orange_); in the right plot,
$\pi_{c}$ (_blue_ , top curves at $t=0$) and $\pi_{d}$ (_orange_). The
_dotted_ lines show the curves $\frac{\kappa_{2}}{2}e^{-2t}$ (for the
appropriate values $\kappa_{2}$), introduced in Theorem 1.
Some observations are in order:
* As predicted by Theorem 1, the curves $\text{KL}(\rho_{t}^{\text{FR}}\|\pi)$ approach the curves $\frac{\kappa_{2}}{2}e^{-2t}$ as $t$ grows.
* For $\pi_{1}$, the curves $\text{KL}(\rho_{t}^{\text{FR}}\|\pi)$ and $\text{KL}(\rho_{t}^{\text{WFR}}\|\pi)$ initialized at $\pi_{b}$ are very close for small times. The reason is that $\nabla V_{1}$ and $\nabla V_{b}$ are very close in the regions where $\pi_{1}$ and $\pi_{b}$ have most of the mass. Consequently, the term $\nabla\cdot\left(\rho_{t}^{\text{WFR}}(\nabla\log\rho_{t}^{\text{WFR}}+\nabla V_{1})\right)$, which is the difference between the FR and the WFR PDEs, is small at initialization.
* The curves $\text{KL}(\rho_{t}^{\text{W}}\|\pi)$ behave very differently for $\pi_{1}$ and $\pi_{2}$ (see Table 1). Indeed, since $\pi_{1}$ is bimodal, $C_{\texttt{LSI}}(\pi_{1})$ is quite large (thus convergence is slow), whereas $\pi_{2}$ is unimodal, with a much smaller log-Sobolev constant.
* The curves $\text{KL}(\rho_{t}^{\text{WFR}}\|\pi)$ also behave differently for the two target distributions. For $\pi_{1}$, the WFR flow decays only slightly faster than $\text{KL}(\rho_{t}^{\text{FR}}\|\pi)$, while for $\pi_{2}$ it decays much faster than both $\text{KL}(\rho_{t}^{\text{FR}}\|\pi)$ and $\text{KL}(\rho_{t}^{\text{W}}\|\pi)$. Interestingly, looking at Table 1 we observe that the asymptotic slopes of the WFR are very close to the sum of the slopes for FR and W. This seems to indicate that at large times, the KL divergence decays like $e^{-2t-\frac{2t}{C_{\texttt{LSI}}}}$, i.e. that the W and FR terms act more or less independently.
 | Target $\pi_{1}$ | | Target $\pi_{2}$ |
---|---|---|---|---
 | Init. $\pi_{a}$ | Init. $\pi_{b}$ | Init. $\pi_{c}$ | Init. $\pi_{d}$
FR | -2.0016 | -2.0002 | -2.0028 | -2.0014
WFR | -2.0771 | -2.0759 | -12.8190 | -12.8632
W | -0.0811 | -0.0811 | -10.7784 | -10.8538
Table 1: Large-time slopes of the KL divergence vs. time curves in a semi-
logarithmic plot (Fig. 2), for the three flows. See Appendix B for details on
the computation of the slopes.
## 5 Conclusion
In this work, using a relatively simple proof technique, we showed that the
Kullback-Leibler divergence along its Fisher-Rao gradient flow
$(\rho_{t}^{\text{FR}})_{t\geq 0}$ can be written as a power-series expansion,
resulting in a tight asymptotic convergence rate for large times. A similar
expansion holds for $\mathcal{R}_{q}(\rho_{t}^{\text{FR}}\|\pi)$, where
$\mathcal{R}_{q}$ is any $q$-Rényi divergence. Our findings were verified with
simple numerical experiments, where we also simulated Wasserstein and
Wasserstein-Fisher-Rao gradient flows. Our simulations indicated that, in some
cases, the convergence rate of the WFR gradient flow scales like
$e^{-(2+(2/C_{\texttt{LSI}}))t}$, an observation that we hope can be made
precise in future work. A second direction is to extend our proof technique
from the KL divergence to general Bregman divergences.
#### Acknowledgments
The authors thank Joan Bruna, Jonathan Niles-Weed, Sinho Chewi, and Andre
Wibisono for helpful discussions. CD acknowledges Meta AI Research as a
funding source. AAP acknowledges NSF Award 1922658 and Meta AI Research.
## References
* Ambrosio et al. (2005) Luigi Ambrosio, Nicola Gigli, and Giuseppe Savaré. _Gradient flows: in metric spaces and in the space of probability measures_. Springer Science & Business Media, 2005.
* Arnold et al. (2000) Anton Arnold, Peter Markowich, and Andreas Unterreiter. On convex Sobolev inequalities and the rate of convergence to equilibrium for Fokker-Planck type equations. _Communications in Partial Differential Equations_ , 26, 05 2000.
* Benamou & Brenier (2000) Jean-David Benamou and Yann Brenier. A computational fluid mechanics solution to the Monge-Kantorovich mass transfer problem. _Numerische Mathematik_ , 84(3):375–393, 2000.
* Bogachev (2007) V.I. Bogachev. _Measure Theory_. Number 1 in Measure Theory. Springer Berlin Heidelberg, 2007.
* Chewi (2022) Sinho Chewi. _Log-concave sampling_. 2022.
* Chewi et al. (2022) Sinho Chewi, Murat A Erdogdu, Mufan Li, Ruoqi Shen, and Shunshi Zhang. Analysis of Langevin Monte Carlo from Poincare to Log-Sobolev. In _Proceedings of Thirty Fifth Conference on Learning Theory_ , volume 178 of _Proceedings of Machine Learning Research_. PMLR, 02–05 Jul 2022.
* Chizat (2022) Lenaic Chizat. Sparse optimization on measures with over-parameterized gradient descent. _Mathematical Programming_ , 194(1-2):487–532, 2022.
* Chizat et al. (2015) Lenaic Chizat, Bernhard Schmitzer, Gabriel Peyré, and François-Xavier Vialard. An interpolating distance between optimal transport and Fisher-Rao. _Foundations of Computational Mathematics_ , 18, 06 2015.
* Chizat et al. (2018) Lénaïc Chizat, Gabriel Peyré, Bernhard Schmitzer, and François-Xavier Vialard. Unbalanced optimal transport: dynamic and Kantorovich formulations. _Journal of Functional Analysis_ , 274(11):3090–3123, 2018.
* Dalalyan & Tsybakov (2012) A.S. Dalalyan and A.B. Tsybakov. Sparse regression learning by aggregation and Langevin Monte-Carlo. _Journal of Computer and System Sciences_ , 78(5):1423–1443, 2012.
* Durmus et al. (2021) Alain Durmus, Szymon Majewski, and Błażej Miasojedow. Analysis of Langevin Monte Carlo via convex optimization. _J. Mach. Learn. Res._ , 20(1):2666–2711, 2021.
* Gallouët & Monsaingeon (2017) Thomas O Gallouët and Leonard Monsaingeon. A JKO splitting scheme for Kantorovich–Fisher–Rao gradient flows. _SIAM Journal on Mathematical Analysis_ , 49(2):1100–1130, 2017.
* Gelman et al. (1995) Andrew Gelman, John B Carlin, Hal S Stern, and Donald B Rubin. _Bayesian data analysis_. Chapman and Hall/CRC, 1995.
* Gross (1975) Leonard Gross. Logarithmic Sobolev inequalities. _American Journal of Mathematics_ , 97(4):1061–1083, 1975.
* Hellinger (1909) E. Hellinger. Neue Begründung der Theorie quadratischer Formen von unendlichvielen Veränderlichen. _Journal für die reine und angewandte Mathematik_ , (136):210–271, 1909.
* Holley & Stroock (1987) Richard Holley and Daniel Stroock. Logarithmic Sobolev inequalities and stochastic Ising models. _Journal of Statistical Physics_ , 46(5):1159–1194, Mar 1987. ISSN 1572-9613.
* Johannes & Polson (2010) Michael Johannes and Nicholas Polson. MCMC methods for continuous-time financial econometrics. In _Handbook of Financial Econometrics: Applications_ , pp. 1–72. Elsevier, 2010.
* Jordan et al. (1998) Richard Jordan, David Kinderlehrer, and Felix Otto. The variational formulation of the Fokker–Planck equation. _SIAM journal on mathematical analysis_ , 29(1):1–17, 1998.
* Kakutani (1948) Shizuo Kakutani. On equivalence of infinite product measures. _Annals of Mathematics_ , 49(1):214–224, 1948.
* Kirkpatrick et al. (1983) S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. _Science_ , 220(4598):671–680, 1983.
* Kobyzev et al. (2020) Ivan Kobyzev, Simon JD Prince, and Marcus A Brubaker. Normalizing flows: An introduction and review of current methods. _IEEE transactions on pattern analysis and machine intelligence_ , 43(11):3964–3979, 2020.
* Kondratyev et al. (2016) Stanislav Kondratyev, Léonard Monsaingeon, and Dmitry Vorotnikov. A new optimal transport distance on the space of finite Radon measures. _Advances in Differential Equations_ , 21(11/12):1117 – 1164, 2016.
* Lambert et al. (2022) Marc Lambert, Sinho Chewi, Francis Bach, Silvère Bonnabel, and Philippe Rigollet. Variational inference via Wasserstein gradient flows. _arXiv preprint arXiv:2205.15902_ , 2022.
* Liero et al. (2016) Matthias Liero, Alexander Mielke, and Giuseppe Savaré. Optimal transport in competition with reaction: The Hellinger–Kantorovich distance and geodesic curves. _SIAM Journal on Mathematical Analysis_ , 48(4):2869–2911, 2016.
* Liero et al. (2018) Matthias Liero, Alexander Mielke, and Giuseppe Savaré. Optimal entropy-transport problems and a new Hellinger-Kantorovich distance between positive measures. _Inventiones mathematicae_ , 211, 03 2018.
* Lu et al. (2019) Yulong Lu, Jianfeng Lu, and James Nolen. Accelerating Langevin sampling with birth-death. _arXiv preprint arXiv:1905.09863_ , 2019.
* Lu et al. (2022) Yulong Lu, Dejan Slepčev, and Lihan Wang. Birth-death dynamics for sampling: Global convergence, approximations and their asymptotics. _arXiv preprint arXiv:2211.00450_ , 2022.
* MacKay (2003) David JC MacKay. _Information theory, inference and learning algorithms_. Cambridge university press, 2003.
* Markowich & Villani (1999) P. A. Markowich and C. Villani. On the trend to equilibrium for the Fokker-Planck equation: An interplay between physics and functional analysis. In _Physics and Functional Analysis, Matematica Contemporanea (SBM) 19_ , pp. 1–29, 1999.
* Pincus (1970) Martin Pincus. A Monte Carlo method for the approximate solution of certain types of constrained optimization problems. _Operations Research_ , 18(6):1225–1228, 1970.
* Robert et al. (1999) Christian P Robert, George Casella, and George Casella. _Monte Carlo statistical methods_ , volume 2. Springer, 1999.
* Rotskoff et al. (2019) Grant Rotskoff, Samy Jelassi, Joan Bruna, and Eric Vanden-Eijnden. Global convergence of neuron birth-death dynamics. _arXiv preprint arXiv:1902.01843_ , 2019.
* Shiryaev (1984) Al’bert Nikolaevich Shiryaev. _Probability_. Graduate texts in mathematics ; 95. Springer-Verlag, New York, 1984. ISBN 9781489900180.
* Stam (1959) A.J. Stam. Some inequalities satisfied by the quantities of information of Fisher and Shannon. _Information and Control_ , 2(2):101–112, 1959.
* Vempala & Wibisono (2019) Santosh Vempala and Andre Wibisono. Rapid convergence of the unadjusted Langevin algorithm: Isoperimetry suffices. In _Advances in Neural Information Processing Systems_ , volume 32. Curran Associates, Inc., 2019.
* Villani (2008) C. Villani. _Optimal Transport: Old and New_. Grundlehren der mathematischen Wissenschaften. Springer Berlin Heidelberg, 2008.
* Von Toussaint (2011) Udo Von Toussaint. Bayesian inference in physics. _Reviews of Modern Physics_ , 83(3):943, 2011.
* Wibisono (2018) Andre Wibisono. Sampling as optimization in the space of measures: The Langevin dynamics as a composite optimization problem. In _Conference on Learning Theory_ , pp. 2093–3027. PMLR, 2018.
* Yan et al. (2023) Yuling Yan, Kaizheng Wang, and Philippe Rigollet. Learning Gaussian mixtures using the Wasserstein-Fisher-Rao gradient flow. _arXiv preprint arXiv:2301.01766_ , 2023.
## Appendix A Remaining proofs
###### Proposition 4 (Uniqueness of the Fisher-Rao gradient flow of the KL
divergence).
Given a target potential $V_{*}$ and an initial measure $\rho_{0}$, the
solution of Eq. 12 is unique.
###### Proof.
Consider the PDE
$\displaystyle\partial_{t}\mu_{t}(x)=-\big{(}\log(\mu_{t}(x))+V_{*}(x)\big{)}\mu_{t}(x),\qquad\mu_{0}=\rho_{0}$
(29)
Note that this is in fact an ODE for each point $x$, which we can rewrite as $\partial_{t}\log\mu_{t}(x)=-\big{(}\log(\mu_{t}(x))+V_{*}(x)\big{)}$. The unique solution of this ODE with initial condition $\log\mu_{0}(x)$ is $\log\mu_{t}(x)=(\log\mu_{0}(x)+V_{*}(x))e^{-t}-V_{*}(x)$. Thus, we conclude that Eq. 29 has a unique solution.
Now, given a solution $\rho_{t}$ of Eq. 12 with initial condition $\rho_{0}$,
define $\tilde{\rho}_{t}$ as
$\displaystyle\log\tilde{\rho}_{t}(x)=\log\rho_{t}(x)-\int_{0}^{t}\big{\langle}\log(\rho_{s})+V_{*}\big{\rangle}_{\rho_{s}}\,{\textnormal{d}}s.$
(30)
Remark that $\tilde{\rho}_{t}$ is a solution of Eq. 29, since
$\displaystyle\partial_{t}\log\tilde{\rho}_{t}(x)$
$\displaystyle=\partial_{t}\log\rho_{t}(x)-\big{\langle}\log(\rho_{t})+V_{*}\big{\rangle}_{\rho_{t}}=-\big{(}\log(\rho_{t}(x))+V_{*}(x)-\big{\langle}\log(\rho_{t})+V_{*}\big{\rangle}_{\rho_{t}}\big{)}-\big{\langle}\log(\rho_{t})+V_{*}\big{\rangle}_{\rho_{t}}$
$\displaystyle=-\big{(}\log(\rho_{t}(x))+V_{*}(x)\big{)}.$
Also, note that the map $(\rho_{t})_{t\geq 0}\to(\tilde{\rho}_{t})_{t\geq 0}$
defined by Eq. 30 is invertible, as
$\rho_{t}(x)=\tilde{\rho_{t}}(x)/\int\tilde{\rho}_{t}(y)\,{\textnormal{d}}y$.
This follows from the fact that $\rho_{t}$ and $\tilde{\rho}_{t}$ are
proportional to each other, and that $\rho_{t}$ integrates to 1.
Finally, suppose that $\rho_{t}^{a}$ and $\rho_{t}^{b}$ are two solutions of
Eq. 12 with initial condition $\rho_{0}$. Via the construction Eq. 30, they
yield solutions $\tilde{\rho}_{t}^{a}$ and $\tilde{\rho}_{t}^{b}$ of Eq. 29
with initial condition $\rho_{0}$. The uniqueness of the solution of Eq. 29
implies that $\tilde{\rho}_{t}^{a}=\tilde{\rho}_{t}^{b}$. Since the map
$(\rho_{t})_{t\geq 0}\to(\tilde{\rho}_{t})_{t\geq 0}$ is invertible, we obtain
that $\rho_{t}^{a}=\rho_{t}^{b}$, which concludes the proof. ∎
###### Proof of Proposition 2 (Continued).
We perform similar manipulations as in the case with the KL divergence:
$\displaystyle\mathcal{R}_{q}(\mu_{\tau}\|\pi)$
$\displaystyle=\frac{1}{q-1}\log\int\frac{e^{-q\tau(V_{*}-V_{0})-qV_{0}}(Z_{\tau})^{-q}}{e^{-qV_{*}}(Z_{1})^{-q}}\,{\textnormal{d}}\pi$
$\displaystyle=\frac{1}{q-1}\log\int e^{q(1-\tau)(V_{*}-V_{0})}\left(\frac{Z_{1}}{Z_{\tau}}\right)^{q}\,{\textnormal{d}}\pi$
$\displaystyle=\frac{1}{q-1}K_{Y}(q(1-\tau))-\frac{q}{q-1}(\log Z_{\tau}-\log Z_{1})$
$\displaystyle=\frac{1}{q-1}K_{Y}(q(1-\tau))-\frac{q}{q-1}K_{Y}(1-\tau)\,,$
where in the last line we again used Eq. 25. This completes the proof. ∎
###### Proof of Proposition 3.
By (A1), the partition function $F(t)=\int_{\mathbb{R}^{d}}e^{-tV_{*}(x)}\,dx$ is differentiable at $t=1$, since $F^{\prime}(t)=-\int V_{*}(x)e^{-tV_{*}(x)}\,dx$ equals $-Z_{1}\int V_{*}\,{\textnormal{d}}\pi$ at $t=1$, which is finite by (A1). Hence, $F(t)$ is finite on an interval $(1-2\epsilon_{1},1]$ for some $\epsilon_{1}>0$.
Note that the assumption (A2) can be written equivalently as $\xi:=\inf_{x}\,(1+\alpha)V_{*}(x)-V_{0}(x)>-\infty$. We obtain that for all $\epsilon\in[0,\epsilon_{1}/\alpha)$,
$\displaystyle\begin{split}-\epsilon(V_{*}(x)-V_{0}(x))-V_{*}(x)&=-\epsilon((1+\alpha)V_{*}(x)-V_{0}(x))+(\epsilon\alpha-1)V_{*}(x)\\\
&\leq-\epsilon\xi+(\epsilon\alpha-1)V_{*}(x)\leq-\epsilon\xi+(\epsilon_{1}-1)V_{*}(x)\end{split}$
(31)
Equivalently,
$\displaystyle\exp(K_{Y}(-\epsilon))=\int_{\mathbb{R}^{d}}e^{-\epsilon(V_{*}(x)-V_{0}(x))-V_{*}(x)}\,dx\leq
e^{-\epsilon\xi}\int_{\mathbb{R}^{d}}e^{-(1-\epsilon_{1})V_{*}(x)}\,dx=e^{-\epsilon\xi}F(1-\epsilon_{1})<+\infty.$
(32)
Also, for all $\epsilon\in[0,1)$, using the convexity of the exponential
function we have that
$\displaystyle\exp(K_{Y}(\epsilon))$
$\displaystyle=\int_{\mathbb{R}^{d}}e^{\epsilon(V_{*}(x)-V_{0}(x))-V_{*}(x)}\,dx=\int_{\mathbb{R}^{d}}e^{-(1-\epsilon)V_{*}(x)-\epsilon
V_{0}(x)}\,dx$ (33)
$\displaystyle\leq\int_{\mathbb{R}^{d}}(1-\epsilon)e^{-V_{*}(x)}+\epsilon
e^{-V_{0}(x)}\,dx=(1-\epsilon)Z_{1}+\epsilon Z_{0}<+\infty.$ (34)
Hence, the cumulant-generating function $K_{Y}(t)=\log\mathbb{E}e^{tY}$ is
finite on a neighborhood $(-\epsilon_{0},\epsilon_{0})$ with
$\epsilon_{0}=\min\\{1,\epsilon_{1}/\alpha\\}$. Applying Lemma 1, we conclude
that there exists $\epsilon>0$ such that for $z\in B_{\epsilon}(0)$, we have
that $K_{Y}(z)=\sum_{n=1}^{+\infty}\frac{\kappa_{n}}{n!}z^{n}$. ∎
The following lemma, which we make explicit, is a well-known fact in probability theory. In short, since the moment-generating function is analytic in some neighborhood of the origin and is strictly positive there, taking the logarithm is safe, as everything remains analytic. The interested reader can consult e.g. (Shiryaev, 1984, Section II.12.8), which dissects this in detail.
###### Lemma 1.
Assume that the cumulant-generating function $K_{Y}(t)=\log\mathbb{E}e^{tY}$
is finite on a neighborhood $(-\epsilon_{0},\epsilon_{0})$ of zero. Then,
$K_{Y}(z)=\log\mathbb{E}e^{zY}$ as a function on the complex plane is
holomorphic on the open ball $B_{\epsilon}(0)$ of radius $\epsilon$ centered
at zero, for some $\epsilon>0$. Moreover, for $z\in B_{\epsilon}(0)$, we have
that
$\displaystyle K_{Y}(z)=\sum_{n=1}^{+\infty}\frac{\kappa_{n}}{n!}z^{n}.$ (35)
###### Lemma 2 (End of the proof of Theorem 1).
We have that
$\displaystyle|\text{KL}(\rho_{t}\|\pi)-\frac{\kappa_{2}}{2}e^{-2t}|=O(e^{-3t}),\qquad|\mathcal{R}_{q}(\rho_{t}\|\pi)-\frac{q\kappa_{2}}{2}e^{-2t}|=O(e^{-3t}).$
(36)
###### Proof.
Lemma 1 implies that the series for $K_{Y}$ centered at zero has convergence
radius $\epsilon$, for some $\epsilon>0$. Since the derivative of a series has
the same radius of convergence, we obtain that
$\displaystyle H(z):=zK_{Y}^{\prime}(z)-K_{Y}(z)=\sum_{n\geq
2}\frac{\kappa_{n}}{n(n-2)!}z^{n}.$
has convergence radius $\epsilon$ as well. Hence, by the Cauchy-Hadamard
theorem, $\frac{1}{\epsilon}\geq\limsup_{n\to\infty}(|c_{n}|^{1/n})$, where
$c_{n}:=\frac{\kappa_{n}}{n(n-2)!}$.
This implies that for all $0<\epsilon^{\prime}<\epsilon$, there exists a constant $C_{\epsilon^{\prime}}>0$ such that for all $n\geq 0$, $|c_{n}|\leq C_{\epsilon^{\prime}}/(\epsilon^{\prime})^{n}$. Consequently, for all $z\in\mathbb{C}$ with $|z|<\epsilon^{\prime}$,
$\displaystyle|H(z)-\frac{\kappa_{2}}{2}z^{2}|=\big{|}\sum_{n=3}^{+\infty}\frac{\kappa_{n}}{n(n-2)!}z^{n}\big{|}\leq
C_{\epsilon^{\prime}}\sum_{n=3}^{+\infty}\big{(}\frac{|z|}{\epsilon^{\prime}}\big{)}^{n}=C_{\epsilon^{\prime}}\frac{\big{(}\frac{|z|}{\epsilon^{\prime}}\big{)}^{3}}{1-\frac{|z|}{\epsilon^{\prime}}}$
(37)
Using Eq. 23, we get that for any constant $\gamma>0$, if
$t\geq-\log\epsilon^{\prime}+\gamma$ (or equivalently,
$e^{-t}\leq\epsilon^{\prime}e^{-\gamma}$),
$\displaystyle|\text{KL}(\rho_{t}\|\pi)-\frac{\kappa_{2}}{2}e^{-2t}|\leq
C_{\epsilon^{\prime}}\frac{\big{(}\frac{e^{-t}}{\epsilon^{\prime}}\big{)}^{3}}{1-\frac{e^{-t}}{\epsilon^{\prime}}}=C_{\epsilon^{\prime}}\frac{e^{-3t}}{(\epsilon^{\prime})^{3}(1-e^{-\gamma})}=O(e^{-3t}),$
(38)
which concludes the proof for the KL divergence. For the Rényi divergence, the
proof is analogous (note that in that case the series
$\frac{1}{q-1}K_{Y}(qz)-\frac{q}{q-1}K_{Y}(z)$ has convergence radius
$\epsilon/q$). ∎
## Appendix B Details on the numerical simulations
To run the simulations in Section 4, we discretized the interval $[-\pi,\pi)$ into $n=2000$ equispaced points. Let $h=2\pi/n$. For each algorithm and
initialization, we construct sequences ${(x_{k})}_{k\geq 0}$, where
$x_{k}\in\mathbb{R}^{n}$ represents the normalized log-density at each point.
We let $v_{*}\in\mathbb{R}^{n}$ be the (non-normalized) energy of the target
distribution, obtained by evaluating $V_{*}$ at the discretization points.
Similarly, $\nabla v_{*},\Delta v_{*}\in\mathbb{R}^{n}$ are the evaluations of
$\nabla V_{*}$ and $\Delta V_{*}$ at the $n$ points (note that $\nabla V_{*}$
is a scalar because the distributions are one-dimensional).
We used the following discretizations for the Fisher-Rao, Wasserstein and Wasserstein-Fisher-Rao gradient flows (a compact code sketch follows the list):
(i) Fisher-Rao GF: We use mirror descent in log-space. The update reads:
$\displaystyle\tilde{x}_{k+1}$ $\displaystyle\leftarrow x_{k}+\epsilon(-v_{*}-x_{k}),$ $\displaystyle x_{k+1}$ $\displaystyle\leftarrow\tilde{x}_{k+1}-\log\bigg{(}\sum_{i=1}^{n}e^{\tilde{x}_{k+1}^{i}}\bigg{)}.$
(ii) Wasserstein GF: We approximate numerically the gradient and the Laplacian of the log-density:
$\displaystyle\begin{split}\forall i\in[n],\qquad(\nabla
x_{k})^{i}&\leftarrow(x_{k}^{i+1}-x_{k}^{i-1})/(2h),\\\ \forall
i\in[n],\qquad(\Delta
x_{k})^{i}&\leftarrow(x_{k}^{i+1}+x_{k}^{i-1}-2x_{k}^{i})/h^{2},\\\
x_{k+1}&\leftarrow x_{k}+\epsilon(\Delta v_{*}+\Delta x_{k}+(\nabla
v_{*}+\nabla x_{k})\nabla x_{k}).\end{split}$ (39)
We use periodic boundary conditions, so that the first discretization point is
adjacent to the last one for the purposes of computing derivatives.
(iii) Wasserstein-Fisher-Rao GF: We combine the two previous updates. Letting $\nabla x_{k}$ and $\Delta x_{k}$ be as in Eq. 39, we have
$\displaystyle\tilde{x}_{k+1}$ $\displaystyle\leftarrow x_{k}+\epsilon(-v_{*}-x_{k}+\Delta v_{*}+\Delta x_{k}+(\nabla v_{*}+\nabla x_{k})\nabla x_{k}),$ $\displaystyle x_{k+1}$ $\displaystyle\leftarrow\tilde{x}_{k+1}-\log\bigg{(}\sum_{i=1}^{n}e^{\tilde{x}_{k+1}^{i}}\bigg{)}.$
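The following is a compact Python sketch of the three updates above (our own translation of the pseudocode, not the authors' released code; the grid size, the potential $V_{1}$, and the stepsize follow the text, while the helper names and the illustrative run at the end are ours):

```python
import numpy as np

n = 2000
h = 2 * np.pi / n
grid = np.linspace(-np.pi, np.pi, n, endpoint=False)
v_star = 2.5 * np.cos(2 * grid) + 0.5 * np.sin(grid)   # V_1 from Section 4

def normalize(x):
    # Renormalize the log-density so the weights exp(x_i) sum to one.
    return x - np.log(np.exp(x).sum())

def grad(x):
    # Periodic central differences (first point adjacent to the last).
    return (np.roll(x, -1) - np.roll(x, 1)) / (2 * h)

def lap(x):
    return (np.roll(x, -1) + np.roll(x, 1) - 2 * x) / h**2

def fr_step(x, eps):                                   # update (i)
    return normalize(x + eps * (-v_star - x))

def w_step(x, eps):                                    # update (ii)
    return x + eps * (lap(v_star) + lap(x) + (grad(v_star) + grad(x)) * grad(x))

def wfr_step(x, eps):                                  # update (iii)
    return normalize(x + eps * (-v_star - x + lap(v_star) + lap(x)
                                + (grad(v_star) + grad(x)) * grad(x)))

# Example: FR flow initialized at pi_a (V_a = -V_1), tracking KL(rho_k || pi_1).
x = normalize(v_star.copy())        # log-density of pi_a is V_1 up to a constant
pi_log = normalize(-v_star)
eps = 2.5e-6
for k in range(200_000):            # reaches time t = k * eps = 0.5
    x = fr_step(x, eps)
print(np.sum(np.exp(x) * (x - pi_log)))   # discrete KL between weight vectors
```

Here `normalize` plays the role of the renormalization step in updates (i) and (iii), and `grad`/`lap` implement the periodic finite differences of Eq. 39.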
We used stepsizes $\epsilon=2.5\times 10^{-6}$ and $\epsilon=1\times 10^{-6}$ for the experiments on target distributions (1) and (2), respectively. The slopes in Table 1 are obtained by taking $0<t_{1}<t_{2}$ and computing
$\displaystyle\frac{\log(\text{KL}(\rho_{t_{2}}\|\pi))-\log(\text{KL}(\rho_{t_{1}}\|\pi))}{t_{2}-t_{1}}.$
We use different values for $t_{1}$ and $t_{2}$ for each target distribution;
$t_{1}$ and $t_{2}$ must be large enough to capture the asymptotic slope of
the curve, but not too large to avoid numerical errors. For all the curves
corresponding to target $\pi_{1}$, we take $t_{1}=7.0$ and $t_{2}=7.5$. For
target $\pi_{2}$, we take: for FR, $t_{1}=6.875$ and $t_{2}=7.0$; for WFR,
$t_{1}=1.875$ and $t_{2}=2.0$; for W, $t_{1}=2.75$ and $t_{2}=2.875$.
# Single and Multi-Speaker Cloned Voice Detection: From Perceptual to Learned
Features ††thanks: This work was partially funded by a grant from the UC
Berkeley Center For Long-Term Cybersecurity (CLTC), an award for open-source
innovation from the Digital Public Goods Alliance and United Nations
Development Program, and from an unrestricted gift from Meta. The public
codebase can be found at https://github.com/audio-df-ucb/ClonedVoiceDetection.
Sarah Barrington1, Romit Barua1, Gautham Koorma1, Hany Farid1,2 School of
Information1, Electrical Engineering and Computer Sciences2, University of
California, Berkeley
Berkeley, CA USA
{sbarrington, romit_barua, gautham.koorma<EMAIL_ADDRESS>
###### Abstract
Synthetic-voice cloning technologies have seen significant advances in recent
years, giving rise to a range of potential harms. From small- and large-scale
financial fraud to disinformation campaigns, reliable methods to differentiate real from synthesized voices are imperative. We describe three techniques for differentiating a real from a cloned voice designed to impersonate a specific person. The three approaches differ in their feature extraction stage, ranging from low-dimensional perceptual features offering high interpretability but lower accuracy, to generic spectral features, to end-to-end learned features offering less interpretability but higher accuracy. We
show the efficacy of these approaches when trained on a single speaker’s voice
and when trained on multiple voices. The learned features consistently yield
an equal error rate between $0\%$ and $4\%$, and are reasonably robust to
adversarial laundering.
###### Index Terms:
deepfakes, generative AI, audio forensics
## I Introduction
Computational techniques for modifying a recorded voice to sound like another person while preserving the original semantic meaning, known as voice conversion, predate today's deepfakes and generative AI by some $65$ years [23]. The semiannual voice conversion challenge111http://vc-challenge.org
evaluates voice cloning submissions on naturalness (rated from 1 = completely
unnatural to 5 = completely natural) and speaker identity (rated on a scale of
“same, absolutely sure,” “same, not sure,” “different, not sure,” or
“different, absolutely sure”). In the first challenge of 2016, the best-
performing system received an average of $3.0$ on the five-point naturalness
scale and $70\%$ of the samples were judged on identity to be “same.” In 2018,
the best-performing system received an average $4.1$ naturalness score, and
$80\%$ of the samples were judged on identity to be “same.” In 2020, the best
naturalness scores continued to hover around $4.0$, but identity ratings were
nearly perfect.
Over the past few years, AI-powered voice synthesis has continued to improve
(in terms of naturalness and identity), culminating this year in dramatic
breakthroughs. Perhaps most striking is zero-shot, multi-speaker text-to-
speech (ZS-TTS)222https://edresson.github.io/YourTTS for cloning a voice
identity not seen during training from a few seconds to minutes of reference
audio [6]. Also striking is the easy access to these voice-cloning
technologies through low-cost commercial
services333https://beta.elevenlabs.io.
While these advances are a major success of the research community, they have
also come at a price. Reports of phone scams have emerged in which a call
purportedly from a family member claims they were in an accident, arrested, or
kidnapped, after which the scammer takes over in an attempt to extort money
[13, 15]. Similar reports have emerged that financial institutions using voice
identification can now be spoofed with voice cloning [7]. And, fake audio is
adding to already existing problems of disinformation [16].
From these disinformation campaigns to small- and large-scale fraud and to the
continued erosion in trust of all digital media, it is critical that we
develop techniques to distinguish the real from the fake.
Detection strategies fall into two general categories: (1) active techniques
which, at the point of synthesis, embed a perceptible or imperceptible
watermark into [22], or extract a perceptual fingerprint [22] from,
synthetically-generated content. These watermarks/fingerprints can then be
used to identify content once it is released into the wild; and (2) in the
absence of watermarks/fingerprints, passive techniques detect a range of
statistical to semantic inconsistencies in synthetically-generated content
(see Section I-A).
Our efforts fall into the second category where we describe three related
passive approaches for distinguishing real from cloned voices using
handcrafted perceptual, generic spectral, or learned features. The benefit of
the perceptual features is that they afford a low-dimensional, explainable
classifier, while the learned features generally afford better classification
performance, with the spectral features affording a compromise between these.
These different approaches (Section III) are evaluated (Section IV) against
two different real audio datasets and three cloned audio datasets (Section
II).
We consider two basic scenarios in which the three feature sets are trained to
distinguish real from cloned voices of a single speaker (Section IV-A) and
trained simultaneously from multiple speakers (Section IV-C).
### I-A Related Work
By way of background, Almutairi and Elgibreen [2] provide a review and a
quantitative comparison of various audio deepfake detection methods, and the
First International Workshop on Deepfake Detection for Audio Multimedia
focused on synthetic audio detection [27]. In this section, we highlight a few
of these approaches and those most closely related to ours.
Classical approaches for detecting synthetic speech typically exploit
statistical differences between synthetic and human speech. Ogihara et al.
[20], for example, proposed a technique that exploits differences in pitch
between synthetic and human speech. De Leon et al. [8] extended this work by
exploiting additional pitch features including stability and jitter. In
addition to these pitch differences, they also observed that the transition
between phonemes occurs more rapidly in synthetic speech. AlBadawy et al. [1]
showed that synthetic speech contains specific and unusual higher-order
spectral correlations that are not typically found in human speech.
Moving beyond these statistical approaches, more recent approaches have
incorporated explicit vocal and perceptual models. Blue et al. [4] employed
fluid-dynamic models to estimate the arrangement of the vocal tract during
speech generation, and argued that synthetic speech yields unlikely anatomical
structures. Li et al. [18] compared $16$ physical and perceptual features for
synthetic audio detection and highlighted the importance of perceptual
features. They found that in noisier conditions where the quality of the
synthetic audio is low, the perceptual linear prediction technique [12], which
combines spectral analysis with linear prediction analysis, outperforms other
features. They also analyzed the distribution of these features for real and
synthetic speech, providing useful benchmarks for selecting discriminative
features.
Variations in prosody have also been used to detect synthetic audio. For
example, Attorresi et al. [3] combined a speaker embedding representing
distinct voice features (e.g., timbre and pitch contour) with a prosodic
embedding representing variational style (e.g., rhythm and intonation). Their
experiments on the ASVspoof19 dataset show that a combination of these two
embeddings yields a $3-15$ percentage point improvement in equal error rate
(EER) over baseline models (RawNet2, MFCC-ResNet, Spec-ResNet).
End-to-end deep learning has also been deployed to identify synthetically-
generated speech. Muller et al. [19], for example, evaluated the generalizability of $12$ end-to-end deepfake detection architectures, and tested them on a novel in-the-wild (IWA) dataset of public
figures collected from social networks and video-streaming
platforms444https://deepfake-demo.aisec.fraunhofer.de/in_the_wild. They
observed that the raw audio-based end-to-end models outperformed the feature-
based models, with the RawNet2 model proposed by Tak et al. [24] achieving the
lowest equal error rate (EER) of $3.2\%$ on the ASVspoof19 dataset and an EER
of $33.9\%$ on the IWA dataset (with chance performance at $50\%$).
Lastly, Pianese et al. [21] evaluated the use of various off-the-shelf speaker
verification tools for synthetic voice detection and found them effective and
robust to intentional and unintentional laundering (e.g., transcoding,
resampling, etc.). This approach yielded an average EER of $15.0\%$ on the
ASVspoof19, FakeAVCeleb, and IWA datasets.
Most forensic approaches seek to distinguish real from synthetic voices
regardless of identity. A more personalized biometric approach can also be
taken in which a person’s distinct voice characteristics are used to
distinguish the real from the fake [21].
Beyond classifying speech as synthetic or real, recent efforts have also focused on fingerprints that can reveal the specific synthesis architecture [26]. And, although somewhat outside of the scope of our work,
there has also been an effort to detect audio spoofing in the form of a
rebroadcast attack in which a person’s voice is recorded and replayed [25,
24].
We take a hybrid approach, both in terms of the audio features (leveraging learned, spectral, and perceptual features) and in terms of considering both single-speaker (personalized) detectors and multi-speaker (non-personalized) detectors. We evaluate our detectors on a number of real and cloned voices and assess their vulnerability to standard laundering attacks.
SINGLE-SPEAKER

Type | Name | Clips (#) | Duration (sec)
---|---|---|---
Real | LJSpeech | $13{\small,}100$ | $86{\small,}117$
Synthetic | WaveFake | $91{\small,}700$ | $603{\small,}081$
Synthetic | ElevenLabs | $13{\small,}077$ | $78{\small,}441$
Synthetic | Uberduck | $13{\small,}094$ | $83{\small,}322$

MULTI-SPEAKER

Type | Name | Clips (#) | Duration (sec)
---|---|---|---
Real | TIMIT | $4{\small,}620$ | $14{\small,}192$
Synthetic | ElevenLabs | $5{\small,}499$ | $15{\small,}413$

TABLE I: An overview of the real and synthetic datasets used in our single-speaker (top) and multi-speaker (bottom) evaluations. The $91{\small,}700$ WaveFake samples correspond to $13{\small,}100$ samples per each of seven different vocoder architectures, hence the larger number of clips and duration.
Figure 1: Example real audio (top) and synthetic audio (bottom) temporal
waveforms (each normalized into the amplitude range $[-1,1]$) for the same
utterance. Note the difference in the length of the silences and the
differences in overall amplitude and amplitude modulation over time.
## II Datasets
A selection of publicly available datasets was used to develop and test our
models (see Table I). For the evaluation of single-speaker detection, the
LJSpeech [14] and WaveFake datasets [10] were used. The LJSpeech dataset is a
publicly available dataset consisting of $13{\small,}100$ short audio clips of
a single female speaker, Linda Johnson, reading passages from seven non-
fiction books. The WaveFake dataset555https://github.com/RUB-SysSec/WaveFake
comprises $117{\small,}985$ audio clips generated from the LJSpeech dataset
using seven different vocoder architectures. Linda Johnson’s voice was cloned
from the LJSpeech dataset using the leading commercial text-to-speech (TTS)
platforms ElevenLabs and Uberduck. Each transcript from the LJSpeech corpus
was re-generated in the cloned voice.
For the evaluation of our perceptual features (Section III) and multi-speaker
detection (Section IV-C), we used the TIMIT dataset [11], consisting of $462$ real male and female American-English speakers uttering a total of $1{\small,}718$ different phonetically-rich sentences. Each of these
phrases was fed to ElevenLabs with one of $11$ distinct voices: nine of the
voices were built into ElevenLabs, and we cloned the remaining two voices to
mimic Presidents Biden and Obama using $1{\small,}038$ and $1{\small,}192$
seconds of audio recordings. The resulting dataset provided a diverse range of
real and synthesized voices with a one-to-one correspondence of the underlying
utterances. To ensure balanced representation, utterances with only one human
speaker were removed from the dataset, and the remaining audio clips were
randomly sampled to select clips with the greater count of the real or
synthetic voice per utterance. This process yielded a total of $763$ real and
$763$ synthesized audio clips. Lastly, each real and synthesized audio was
normalized into the amplitude range $[-1,1]$.
All audio files were downsampled to 16 kHz, and the seven WaveFake architectures were randomly sampled such that the total number of WaveFake clips was equal to that of Uberduck and ElevenLabs.
## III Methods
We describe three approaches for classifying speech as synthetic or real
(single class), and for identifying the underlying synthesis architecture
(multi class). These approaches range from low-dimensional (and interpretable)
handcrafted features to higher-dimensional generic spectral audio features, to
even higher-dimensional (and less interpretable) learned neural features. The
next three sections describe these features followed by a description of a
simple classifier that ingests these features for the purpose of single- and
multi-class classification.
### III-A Perceptual
Shown in Fig. 1 is a pair of real (top) and synthetic (bottom) waveforms (each
normalized into the amplitude range $[-1,1]$) for the same utterance (“nuclear
rockets can destroy airfields with ease”) from which we can see some
qualitative differences. For the same utterance, the real human voice shows a
lower average normalized amplitude and higher amplitude variability. We also
observe that real voices exhibit more frequent and noticeable pauses between
certain words. Using the real and fake TIMIT dataset, and as described next,
we designed a set of handcrafted features to determine if these simple
temporal-domain observations would yield reliable classification between real
and synthetic audio.
Pause: A pause in the temporal waveform is identified as a segment of at least
$100$ consecutive samples whose rolling average amplitude is less than
$0.5\%$ of the maximum normalized amplitude (all audio is normalized into
the range $[-1,1]$).
The mean/standard deviation of pause length (as a percentage of the audio
length) for real and synthetic audio contained within the TIMIT dataset is
$27.27/8.49$ and $13.57/6.56$. A two-sided t-test reveals a strong statistical
difference in these distributions ($p\ll 10^{-10}$).
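For reference, this comparison can be reproduced with a standard two-sided independent t-test; the following is a minimal sketch in which the two arrays are hypothetical stand-ins for the per-clip pause percentages extracted from the TIMIT clips:

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical per-clip pause percentages (real vs. synthetic); in
# practice these come from the pause detector applied to each clip.
rng = np.random.default_rng(0)
real_pause_pct = rng.normal(27.27, 8.49, size=763)
synth_pause_pct = rng.normal(13.57, 6.56, size=763)

# Two-sided independent t-test, as reported in the text.
t_stat, p_value = ttest_ind(real_pause_pct, synth_pause_pct)
print(t_stat, p_value)
```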
We quantify these differences by extracting four summary statistics from the
identified pauses: the pause ratio (the ratio of pauses relative to the length
of the audio), the mean pause length (specified as the number of samples), the
standard deviation of pause length, and the number of pauses (the number of
pauses, of course, depends on the number of words per utterance, but our
training dataset consisted of the same utterances for both real and synthetic
audio).
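A minimal sketch of this pause-feature extraction, under our reading of the definition above; the function name, the rolling-average implementation, and the treatment of run boundaries are our own choices:

```python
import numpy as np

def pause_features(x, win=100, thresh=0.005):
    """Four pause statistics from a waveform x normalized into [-1, 1].
    A pause is a maximal run of at least `win` samples whose rolling
    mean absolute amplitude is below 0.5% of the clip's peak amplitude."""
    rolling = np.convolve(np.abs(x), np.ones(win) / win, mode="same")
    quiet = rolling < thresh * np.max(np.abs(x))
    # Lengths of maximal runs of quiet samples spanning at least `win`.
    runs, count = [], 0
    for q in quiet:
        if q:
            count += 1
        else:
            if count >= win:
                runs.append(count)
            count = 0
    if count >= win:
        runs.append(count)
    runs = np.asarray(runs, dtype=float)
    total = runs.sum() if runs.size else 0.0
    return {
        "pause_ratio": total / len(x),                           # pauses vs. audio length
        "mean_pause": float(runs.mean()) if runs.size else 0.0,  # in samples
        "std_pause": float(runs.std()) if runs.size else 0.0,
        "num_pauses": int(runs.size),
    }
```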
Amplitude: Two amplitude features are extracted, capturing the consistency and
variation in voices. To begin, the absolute values of each waveform are
temporally smoothed with a fifth-order Butterworth low-pass filter. From this
smoothed waveform, we compute the overall mean amplitude and mean amplitude of
the temporal derivative. The mean/standard deviation of mean amplitude for
real and synthetic audio contained within the TIMIT dataset is $0.06/0.02$ and
$0.10/0.02$ ($p\ll 10^{-10}$), again showing a significant difference.
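A sketch of the two amplitude features follows; the text specifies a fifth-order Butterworth low-pass filter but not its cutoff frequency, so the $10$ Hz value below is an assumption, and we read "mean amplitude of the temporal derivative" as the mean absolute first difference of the smoothed envelope:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amplitude_features(x, fs=16000, cutoff=10.0):
    """Mean of the smoothed amplitude envelope and mean absolute first
    difference of that envelope. The 10 Hz cutoff is an assumption; the
    text specifies only a fifth-order Butterworth low-pass filter."""
    b, a = butter(5, cutoff / (fs / 2), btype="low")
    envelope = filtfilt(b, a, np.abs(x))  # temporally smoothed |x|
    return {
        "mean_amp": float(np.mean(envelope)),
        "mean_amp_deriv": float(np.mean(np.abs(np.diff(envelope)))),
    }
```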
### III-B Spectral
For generic spectral features, we employed the openSMILE library (speech &
music interpretation by large-space extraction) [9]. For an arbitrary-length
audio clip, openSMILE generates $6{\small,}373$ scalar-valued features such as
summary statistics (mean, standard deviation, etc.), regression coefficients,
linear predictive coding coefficients, and various peak-related functionals. A
simple dimensionality reduction (SelectFromModel, https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.SelectFromModel.html)
was used to reduce the number of features to $20$.
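A sketch of this pipeline using the opensmile Python package (the ComParE 2016 functionals set provides the $6{\small,}373$ features) together with scikit-learn's SelectFromModel; the random-forest importance estimator is our choice, as the text does not specify which estimator drives the selection:

```python
import numpy as np
import opensmile
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel

# Hypothetical file list and labels (0 = real, 1 = synthetic).
paths = ["real_0001.wav", "fake_0001.wav"]
y = np.array([0, 1])

smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)
X = smile.process_files(paths)  # one 6,373-D feature row per clip

# Keep the 20 most important features by forest feature importance.
selector = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=0),
    max_features=20,
    threshold=-np.inf,  # rank purely by importance; cap at 20 features
).fit(X, y)
X20 = selector.transform(X)
```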
### III-C Learned
For the end-to-end learned audio features, we employed Nvidia’s open-source
TitaNet model [17]. TitaNet was initially trained for speaker identification
using end-to-end additive margin angular loss, which enhances the separation
of speaker identity in the latent space. Using an encoder-decoder
architecture, TitaNet converts 16 kHz sampled raw audio files into $192$-D
embeddings. We treat these embeddings as features for the downstream
classification task.
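A sketch of embedding extraction with NVIDIA's NeMo toolkit; the pretrained model name and the get_embedding helper reflect NeMo's public API as we understand it and should be treated as assumptions:

```python
import nemo.collections.asr as nemo_asr

# Load the pretrained TitaNet-Large speaker model.
speaker_model = nemo_asr.models.EncDecSpeakerLabelModel.from_pretrained(
    model_name="titanet_large"
)

# Extract the 192-D embedding for a single 16 kHz audio file.
embedding = speaker_model.get_embedding("clip.wav")
features = embedding.squeeze().cpu().numpy()  # shape: (192,)
```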
### III-D Classification
For each of the three feature sets described above, we employed a linear
logistic regression and a non-linear random forest classifier for a single-
class (real vs. synthetic) or multi-class (real vs. specific synthesis
architecture) task. In each case, the full dataset was split into $60\%$
training, $20\%$ validation (for hyper-parameter tuning), and $20\%$ testing portions.
All results below are for the testing portion of the dataset.
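A sketch of the split and the two classifiers; the feature matrix, labels, and hyper-parameter values below are placeholders, since the text does not specify them:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder feature matrix and labels (0 = real, 1 = synthetic); in
# practice these are the perceptual, spectral, or learned features.
X = np.random.rand(200, 20)
y = np.random.randint(0, 2, size=200)

# 60% train / 20% validation / 20% test, as in the text.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_train, y_train)            # single (L)
nonlinear = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)  # single (NL)

# Hyper-parameters would be tuned on the validation split; all reported
# numbers come from the held-out test split.
print(linear.score(X_test, y_test), nonlinear.score(X_test, y_test))
```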
## IV Results
We describe classification accuracy for a personalized, single-speaker task in
which a classifier is trained on learned, spectral, or perceptual features for
a single-speaker identity. We next describe the generalization of these
classifiers to a multi-speaker task in which a classifier is trained across
multiple speakers. The classifiers are evaluated against the generated voices
and against laundered versions of those voices. Lastly, we compare our results
to ElevenLabs’ own detector.
SINGLE-SPEAKER
---
Dataset | Model | Synthetic Accuracy (%) | Real Accuracy (%) | EER (%)
| | Learned | Spectral | Perceptual | Learned | Spectral | Perceptual | Learned | Spectral | Perceptual
EL | single (L) | 100.0 | 99.2 | 78.2 | 100.0 | 99.9 | 72.5 | 0.0 | 0.5 | 24.9
| single (NL) | 100.0 | 99.9 | 82.2 | 100.0 | 100.0 | 80.4 | 0.0 | 0.1 | 18.6
UD | single (L) | 99.8 | 98.9 | 51.9 | 99.9 | 98.9 | 54.0 | 0.1 | 1.1 | 47.2
| single (NL) | 99.7 | 99.2 | 54.4 | 99.9 | 99.0 | 56.5 | 0.2 | 0.9 | 44.5
WF | single (L) | 96.5 | 78.4 | 57.8 | 97.1 | 82.3 | 45.6 | 3.3 | 19.7 | 48.5
| single (NL) | 94.5 | 87.6 | 50.3 | 96.7 | 90.2 | 52.7 | 4.4 | 11.2 | 48.6
EL+UD | single (L) | 99.7 | 94.8 | 63.4 | 99.9 | 97.1 | 60.3 | 0.2 | 4.2 | 37.9
| single (NL) | 99.7 | 99.2 | 57.3 | 99.9 | 99.6 | 69.0 | 0.2 | 0.8 | 37.6
EL+UD+WF | single (L) | 93.2 | 79.7 | 58.4 | 98.7 | 93.0 | 57.6 | 3.6 | 15.9 | 42.1
| single (NL) | 91.2 | 90.6 | 53.1 | 99.0 | 94.1 | 64.7 | 4.1 | 7.9 | 41.6
EL+UD | multi (L) | 99.9 | 96.6 | 61.0 | 100.0 | 94.6 | 35.7 | - | - | -
| multi (NL) | 99.7 | 98.3 | 65.6 | 100.0 | 97.2 | 43.2 | - | - | -
EL+UD+WF | multi (L) | 98.8 | 80.2 | 45.1 | 97.3 | 64.3 | 22.9 | - | - | -
| multi (NL) | 98.1 | 94.2 | 48.6 | 96.3 | 84.4 | 27.6 | - | - | -
SINGLE-SPEAKER: ADVERSARIAL LAUNDERING
Dataset | Model | Synthetic Accuracy (%) | Real Accuracy (%) | EER (%)
| | Learned | Spectral | Perceptual | Learned | Spectral | Perceptual | Learned | Spectral | Perceptual
EL | single (L) | 95.5 | 94.3 | 61.1 | 94.5 | 92.6 | 65.2 | 4.9 | 6.7 | 36.6
| single (NL) | 96.0 | 96.2 | 70.4 | 95.4 | 95.6 | 69.6 | 4.1 | 4.1 | 30.1
UD | single (L) | 95.4 | 81.1 | 61.4 | 91.8 | 84.3 | 44.7 | 6.3 | 17.3 | 46.7
| single (NL) | 95.4 | 86.8 | 52.9 | 93.3 | 86.1 | 55.9 | 5.5 | 13.6 | 45.6
WF | single (L) | 87.6 | 60.7 | 59.6 | 85.0 | 70.4 | 42.5 | 13.9 | 34.4 | 49.4
| single (NL) | 83.6 | 77.1 | 51.4 | 85.6 | 76.7 | 53.9 | 15.3 | 23.1 | 47.3
EL+UD | single (L) | 95.2 | 79.1 | 54.0 | 91.7 | 78.4 | 59.8 | 6.2 | 21.3 | 43.1
| single (NL) | 94.8 | 86.1 | 55.2 | 93.3 | 90.0 | 62.4 | 6.0 | 12.0 | 41.4
EL+UD+WF | single (L) | 83.7 | 70.9 | 50.6 | 88.6 | 72.9 | 59.7 | 13.2 | 28.2 | 44.8
| single (NL) | 83.4 | 79.2 | 53.0 | 90.7 | 85.1 | 60.7 | 12.5 | 17.9 | 43.6
EL+UD | multi (L) | 94.2 | 85.6 | 50.9 | 91.0 | 77.1 | 29.1 | - | - | -
| multi (NL) | 94.5 | 91.7 | 53.2 | 90.3 | 82.9 | 41.3 | - | - | -
EL+UD+WF | multi (L) | 89.8 | 65.4 | 35.3 | 83.1 | 44.3 | 26.2 | - | - | -
| multi (NL) | 88.8 | 78.8 | 39.8 | 82.1 | 63.0 | 28.6 | - | - | -
MULTI-SPEAKER
Dataset | Model | Synthetic Accuracy (%) | Real Accuracy (%) | EER (%)
| | Learned | Spectral | Perceptual | Learned | Spectral | Perceptual | Learned | Spectral | Perceptual
EL | single (L) | 100.0 | 94.2 | 83.9 | 99.9 | 98.3 | 86.9 | 0.0 | 3.0 | 13.1
| single (NL) | 92.3 | 96.3 | 82.2 | 100.0 | 99.7 | 87.7 | 0.1 | 1.6 | 13.7
TABLE II: Accuracy for a personalized, single-speaker classification of
unlaundered audio (top) and audio subject to adversarial laundering in the
form of additive noise and transcoding (middle). Shown in the bottom table is
the non-personalized, multi-speaker accuracy. Dataset corresponds to
ElevenLabs (EL), Uberduck (UD), and WaveFake (WF); Model corresponds to a
linear (L) or non-linear (NL) classifier, and for a single-classifier (real vs.
synthetic) or multi-classifier (real vs. specific synthesis architecture);
accuracy (%) is reported for synthetic audio, real audio, and (for the single-
classifiers) equal error rate (EER).
### IV-A Single Speaker
Shown in Table II (top panel) is the accuracy for distinguishing real from
synthetic audio (model: single) and real from specific synthetic audio
architecture (model: multi) using a linear (model: L) and non-linear (model:
NL) classifier, evaluated against single or multiple datasets (ElevenLabs
[EL], Uberduck [UD], WaveFake [WF]). Each column corresponds to the accuracy
for correctly classifying real and synthetic audio using the learned,
spectral, or perceptual features. The far-right columns report the equal error
rate (EER): the point on the receiver operating characteristic (ROC) curve
where the false acceptance rate (incorrectly classifying a synthetic voice as
real) and the false rejection rate (incorrectly classifying a real voice as
synthetic) are equal.
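A common way to compute the EER from classifier scores is sketched below (assuming labels with $1$ = real and scores measuring "realness"); this is our implementation, not necessarily the exact computation used for Table II:

```python
import numpy as np
from sklearn.metrics import roc_curve

def equal_error_rate(y_true, scores):
    """EER: the operating point where the false acceptance rate (FPR)
    equals the false rejection rate (FNR = 1 - TPR)."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    fnr = 1 - tpr
    i = np.nanargmin(np.abs(fpr - fnr))
    return (fpr[i] + fnr[i]) / 2
```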
As expected, the non-linear classifier generally affords better accuracy. For
the spectral features, for example, across all dataset combinations the non-
linear classifiers afford an average $4.1$ percentage point reduction in EER.
Accuracy on the learned features outperforms the spectral and perceptual
features, with an average EER on single datasets (and linear classifier) of
$0.0\%$, $0.1\%$, and $3.3\%$ for the learned features as compared to $0.5\%$,
$1.1\%$, and $19.7\%$ for the spectral features, and $24.9\%$, $47.2\%$, and
$48.5\%$ for the perceptual features.
Generally speaking, classifiers trained and tested on a single dataset (EL,
UD, or WF) perform better than those trained on two or more datasets. And,
accuracy on the single-class task is higher than on multi-class.
### IV-B Laundering
To test the robustness of our methods against unintentional or intentional
adversarial laundering attacks, we split our real and synthetic datasets into
four equal classes consisting of the unlaundered audio, the unlaundered audio
corrupted with additive Gaussian noise with an SNR sampled uniformly between
$10$ and $80$ dB, the unlaundered audio transcoded (AAC) at a bitrate of 64K,
127K, or 196K, and the unlaundered audio both transcoded and corrupted with noise.
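A minimal sketch of the two laundering operations (additive white Gaussian noise at a target SNR, and an AAC round-trip); the ffmpeg command-line tool is assumed to be installed, and the temporary file handling is simplified:

```python
import subprocess
import numpy as np

def add_noise(x, snr_db):
    """Additive white Gaussian noise at a target SNR in dB."""
    signal_power = np.mean(x ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return x + np.random.normal(0.0, np.sqrt(noise_power), size=x.shape)

def transcode_aac(in_wav, out_wav, bitrate="64k"):
    """Round-trip the audio through AAC at the given bitrate (requires ffmpeg)."""
    subprocess.run(["ffmpeg", "-y", "-i", in_wav, "-c:a", "aac",
                    "-b:a", bitrate, "_tmp.m4a"], check=True)
    subprocess.run(["ffmpeg", "-y", "-i", "_tmp.m4a", out_wav], check=True)

# Example: SNR drawn uniformly from [10, 80] dB, as in the text.
# snr = np.random.uniform(10, 80)
```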
Shown in Table II (middle panel) are the resulting classification accuracies
in the same format as described above.
As expected, laundering degrades classification accuracy. The spectral
features were particularly impacted, which is perhaps not surprising since the
additive noise and transcoding introduce broad-band spectral distortions.
As compared to the unlaundered voices, the EER for the learned features jumps
by $7.5$ percentage points for the linear classifier and $6.9$ percentage
points for the non-linear classifier.
### IV-C Multi Speaker
The above results are based on personalized classifiers trained to distinguish
real from synthetic audio for a specific individual. Shown in the lower panel
of Table II is the accuracy for a multi-speaker classifier trained and tested
on the TIMIT-ElevenLabs dataset. This classifier is trained to detect
synthetic voices regardless of the underlying identity. The learned features
yield a similar EER to the single-speaker case, and the spectral EER is only
slightly higher. The perceptual features, on the other hand, yield a lower EER,
dropping from $18.6\%$ to $13.7\%$ (for the non-linear classifier). We
hypothesize that this improvement is because the cadence for the single
speaker (LJ) as she is reading is highly structured, as compared to a more
conversational style. In any case, these results imply that our features are
not speaker-specific, but seem to capture synthesis artifacts regardless of
identity.
### IV-D Comparison
ElevenLabs recently released a classifier designed to determine if an audio
sample was generated by their synthesis engine
(https://beta.elevenlabs.io/blog/ai-speech-classifier). With a reported
accuracy of ${\small>}99\%$ for unlaundered samples and
${\small>}90\%$ for laundered samples, this classifier is on par with
our classifier based on learned features (Table II, top and middle panels, row
EL). We tested the ElevenLabs classifier on a random sample of $50$ real and
$50$ ElevenLabs synthesized audio samples, each laundered with additive
Gaussian noise and transcoded at varying compression levels (see Section
IV-B). Classification accuracy was perfect, as compared to our average
accuracy of $95.8\%$ using the learned features and non-linear classifier.
Despite this slightly lower performance, our classifier, unlike the ElevenLabs
classifier, can detect samples from other synthesis engines: we verified that
ElevenLabs mis-classifies synthetically-generated audio from Uberduck and
WaveFake.
Although comparison to other published techniques is difficult due to
differences in the underlying training and testing datasets, generally
speaking we achieve lower or equal EERs to the techniques described in Section
I-A.
## V Discussion
In the field of digital forensics, image- and video-based techniques have
outpaced those of audio forensics, and for good reason: until fairly recently,
synthetic voices were not particularly natural or identity-preserving. This,
however, is no longer the case, and it is now possible to create highly natural
and realistic voices from only a few minutes of a person’s voice. When coupled
with increasingly high-quality deepfake videos, it is quickly becoming
possible to create highly realistic deepfake videos of anyone saying anything.
Combining video and audio analyses (e.g., [5]) offers the advantage of a
richer data source and more chances to detect statistical or semantic
inconsistencies. Purely audio-based techniques, however, are needed to contend
with phone-based scams and fake leaked audios of world or corporate leaders.
While low-dimensional, interpretable features are attractive, it is clear that
the end-to-end learned features afford better discrimination. We did not
combine all three features because the learned features significantly
outperformed the others.
The advantage of a single-speaker approach is that it can learn highly
specific and distinct speaking styles that are difficult for a synthetic voice
to perfectly mimic. The drawback is that, unlike multi-speaker techniques, it
does not scale well to protect a large number of possible victims of voice
cloning. We see the need for both single- and multi-speaker approaches. Our
results suggest that the same underlying feature selection and classification
can be adapted for both tasks.
As new voice synthesis architectures emerge, it will be important for forensic
techniques to generalize across new architectures. Our results suggest that
this type of generalization is possible, but that performance generally
degrades as the classifier is tasked with categorizing voices from
increasingly more diverse synthesis architectures. To the extent that the goal
is to distinguish real from synthetic voices, a single-class approach can be
taken. It may be informative, however, to also refine multi-class approaches
in which the classifier is able to specify which synthesis architecture was
used to generate a fake voice; such information could be useful in tracking
down the source of disinformation campaigns or illegal activities.
As our field continues to develop techniques for distinguishing real from fake
content, we encourage those on the synthesis side to help mitigate potential
abuse from deepfakes by embedding imperceptible watermarks into synthetically
generated content (see, for example, Adobe’s Content Authenticity
Initiative, https://contentauthenticity.org). While this is not a panacea,
it, along with the types of forensic techniques described here, will take us a
long way to mitigating abuse from AI-generated content.
## References
* [1] Ehab A AlBadawy, Siwei Lyu, and Hany Farid. Detecting AI-synthesized speech using bispectral analysis. In International Conference on Computer Vision and Pattern Recognition Workshop, pages 104–109, 2019.
* [2] Zaynab Almutairi and Hebah Elgibreen. A review of modern audio deepfake detection methods: Challenges and future directions. Algorithms, 15(5):155, 2022.
* [3] Luigi Attorresi, Davide Salvi, Clara Borrelli, Paolo Bestagini, and Stefano Tubaro. Combining automatic speaker verification and prosody analysis for synthetic speech detection. arXiv:2210.17222, 2022.
* [4] Logan Blue, Kevin Warren, Hadi Abdullah, Cassidy Gibson, Luis Vargas, Jessica O’Dell, Kevin Butler, and Patrick Traynor. Who are you (I really wanna know)? Detecting audio deepfakes through vocal tract reconstruction. In USENIX Security Symposium, pages 2691–2708, 2022.
* [5] Matyáš Boháček and Hany Farid. Protecting world leaders against deep fakes using facial, gestural, and vocal mannerisms. Proceedings of the National Academy of Sciences, 119(48):e2216035119, 2022.
* [6] Edresson Casanova, Julian Weber, Christopher D Shulby, Arnaldo Candido Junior, Eren Gölge, and Moacir A Ponti. YourTTS: Towards zero-shot multi-speaker TTS and zero-shot voice conversion for everyone. In International Conference on Machine Learning, pages 2709–2720. PMLR, 2022.
* [7] Joseph Cox. How I broke into a bank account with an AI-generated voice. https://www.vice.com/en/article/dy7axa/how-i-broke-into-a-bank-account-with-an-ai-generated-voice.
* [8] Phillip L De Leon, Bryan Stewart, and Junichi Yamagishi. Synthetic speech discrimination using pitch pattern statistics derived from image analysis. In Interspeech, pages 370–373, 2012.
* [9] Florian Eyben, Martin Wöllmer, and Björn Schuller. Opensmile: the munich versatile and fast open-source audio feature extractor. In ACM International Conference on Multimedia, pages 1459–1462, 2010.
* [10] Joel Frank and Lea Schönherr. Wavefake: A data set to facilitate audio deepfake detection. arXiv:2111.02813, 2021.
* [11] J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, and N. L. Dahlgren. DARPA TIMIT acoustic phonetic continuous speech corpus, 1993.
* [12] Hynek Hermansky. Perceptual linear predictive (PLP) analysis of speech. The Journal of the Acoustical Society of America, 87(4):1738–1752, 1990.
* [13] Joe Hernandez. That panicky call from a relative? It could be a thief using a voice clone, FTC warns. https://www.npr.org/2023/03/22/1165448073/voice-clones-ai-scams-ftc.
* [14] Keith Ito and Linda Johnson. The LJ Speech dataset. https://keithito.com/LJ-Speech-Dataset/, 2017.
* [15] Faith Karimi. ’Mom, these bad men have me’: She believes scammers cloned her daughter’s voice in a fake kidnapping. https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html.
* [16] Josh Kelety. Fake audio falsely claims to reveal private Biden comments. https://apnews.com/article/fact-check-biden-audio-banking-fake-746021122607.
* [17] Nithin Rao Koluguri, Taejin Park, and Boris Ginsburg. Titanet: Neural model for speaker representation with 1D depth-wise separable convolutions and global context. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 8102–8106. IEEE, 2022.
* [18] Menglu Li, Yasaman Ahmadiadli, and Xiao-Ping Zhang. A comparative study on physical and perceptual features for deepfake audio detection. In International Workshop on Deepfake Detection for Audio Multimedia, pages 35–41, 2022.
* [19] Nicolas M Müller, Pavel Czempin, Franziska Dieckmann, Adam Froghyar, and Konstantin Böttinger. Does audio deepfake detection generalize? arXiv:2203.16263, 2022.
* [20] Akio Ogihara, Hitoshi Unno, and Akira Shiozaki. Discrimination method of synthetic speech using pitch frequency against synthetic speech falsification. IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, 88(1):280–286, 2005.
* [21] Alessandro Pianese, Davide Cozzolino, Giovanni Poggi, and Luisa Verdoliva. Deepfake audio detection by speaker verification. In IEEE International Workshop on Information Forensics and Security, pages 1–6. IEEE, 2022.
* [22] Amna Qureshi, David Megías, and Minoru Kuribayashi. Detecting deepfake videos using digital watermarking. In Asia-Pacific Signal and Information Processing Association Annual Summit and Conference, pages 1786–1793, 2021.
* [23] Berrak Sisman, Junichi Yamagishi, Simon King, and Haizhou Li. An overview of voice conversion and its challenges: From statistical modeling to deep learning. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 29:132–157, 2020.
* [24] Hemlata Tak, Jose Patino, Massimiliano Todisco, Andreas Nautsch, Nicholas Evans, and Anthony Larcher. End-to-end anti-spoofing with RawNet2. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 6369–6373. IEEE, 2021.
* [25] Francis Tom, Mohit Jain, and Prasenjit Dey. End-to-end audio replay attack detection using deep convolutional networks with attention. In Interspeech, pages 681–685, 2018.
* [26] Xinrui Yan, Jiangyan Yi, Jianhua Tao, Chenglong Wang, Haoxin Ma, Tao Wang, Shiming Wang, and Ruibo Fu. An initial investigation for detecting vocoder fingerprints of fake audio. In International Workshop on Deepfake Detection for Audio Multimedia, page 61–68, 2022.
* [27] Jiangyan Yi, Ruibo Fu, Jianhua Tao, Shuai Nie, Haoxin Ma, Chenglong Wang, Tao Wang, Zhengkun Tian, Ye Bai, Cunhang Fan, et al. ADD 2022: The first audio deep synthesis detection challenge. In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 9216–9220. IEEE, 2022.
|
# On the categoricity of complete second order theories

Tapio Saarinen (University of Helsinki), Jouko Väänänen (University of Helsinki
and University of Amsterdam), and Hugh Woodin (Harvard University)

Acknowledgements: The first and second authors would like to thank the Academy
of Finland, grant no. 322795. This project has received funding from the
European Research Council (ERC) under the European Union’s Horizon 2020
research and innovation programme (grant agreement No 101020762).
###### Abstract
We show, assuming PD, that every complete finitely axiomatized second order
theory with a countable model is categorical, but that there is, assuming
again PD, a complete recursively axiomatized second order theory with a
countable model which is non-categorical. We show that the existence of even
very large (e.g. supercompact) cardinals does not imply the categoricity of
all finite complete second order theories. More exactly, we show that a non-
categorical complete finitely axiomatized second order theory can always be
obtained by (set) forcing. We also show that the categoricity of all finite
complete second order theories with a model of a certain singular cardinality
$\kappa$ of uncountable cofinality can be forced over any model of set theory.
Previously, Solovay had proved, assuming $V=L$, that every complete finitely
axiomatized second order theory (with or without a countable model) is
categorical, and that in a generic extension of $L$ there is a complete
finitely axiomatized second order theory with a countable model which is non-
categorical.
## 1 Introduction
A second order theory $T$ is _complete_ if it decides, in the semantical
sense, every second order sentence $\phi$ in its own vocabulary, i.e. if for
every such $\phi$ either $T\models\phi$ or $T\models\neg\phi$, or
equivalently, all models of $T$ are second order equivalent. The question we
investigate in this paper is whether every complete second order theory is
_categorical_ in the sense that all of its models are isomorphic. Already in
1928 Fraenkel [8] mentions this question as a question ‘calling for
clarification’. Carnap [6] claimed a positive answer, but his proof had an
error (see [5]).
For mere cardinality reasons there are always complete non-categorical second
order theories. One needs only consider models of the empty vocabulary. Since
there are only continuum many different second order theories, there must be
two such models of different cardinality with the same (_a fortiori_ complete)
second order theory.
Categoricity of complete second order theories would follow if all second
order equivalent models were isomorphic, which is not the case again for
cardinality reasons. However, if $V=L$, then _countable_ second order
equivalent models are isomorphic [1] and, moreover, every complete finitely
axiomatized second order theory is categorical [26]. But if a Cohen real is
added to a model of $V=L$, then there are countable non-isomorphic second
order equivalent models [1], and if $\aleph_{1}$ Cohen-reals are added to a
model of $V=L$, there is a complete finitely axiomatized second order theory
(with a countable model) which is non-categorical [26].
Fraïssé [9, 10] conjectured that countable second order equivalent ordinals
are equal. Marek [18, 19] showed that Fraïssé’s conjecture is true under the
assumption $V=L$, and false in a forcing extension obtained by collapsing an
inaccessible cardinal to $\omega_{1}$.
The ambitious goal in the area of this paper is to decide in a definitive way
the status of categoricity of complete second order theories. Since we are
dealing with a question that cannot be decided in ZFC alone, it is natural to
make an assumption such as PD, a consequence of the existence of large
cardinals (e.g. infinitely many Woodin cardinals). We offer a partial solution
to the full question by solving the case of second order theories with
countable models. We have also partial results about theories with uncountable
models. In particular, we show that a non-categorical complete finitely
axiomatized second order theory can always be obtained by (set) forcing. This
shows that large cardinal assumptions cannot imply, as $V=L$ does, the
categoricity of all complete finitely axiomatized second order theories.
_Notation:_ We recall the usual definition of the beth hierarchy:
$\beth_{0}=\omega$, $\beth_{\alpha+1}=2^{\beth_{\alpha}}$, and
$\beth_{\nu}=\sup_{\alpha<\nu}\beth_{\alpha}$ for limit $\nu$. An ordinal
$\alpha$ is called a _beth fixed point_ if $\alpha=\beth_{\alpha}$. If $\mu$
is a cardinal, we use $\mbox{Fn}(I,J,\mu)$ to denote the poset of partial
functions $I\to J$ of cardinality $<\mu$, ordered by $p\leq q\iff q\subseteq
p$. The trivial poset $\mbox{Fn}(\emptyset,\emptyset,1)$ is denoted
$(\\{0\\},=)$.
We denote the second order theory of a structure $M$ by
$\operatorname{Th}_{2}(M)$. A second order theory $T$ is complete if
$\operatorname{Th}_{2}(M)=\operatorname{Th}_{2}(N)$ for all $M,N\models T$,
and $T$ is categorical if $M\cong N$ for all $M,N\models T$. For second order
sentences $\phi,\psi$ we write $\phi\models\psi$ to mean $M\models\phi$
implies $M\models\psi$ for all $M$, and similarly $T\models T^{\prime}$ for
second order theories $T,T^{\prime}$, and we say $T$ axiomatizes $T^{\prime}$.
If $T$ is a finite (resp. recursive) set of sentences and $T\models
T^{\prime}$, we say $T^{\prime}$ is finitely (resp. recursively)
axiomatizable.
A cardinal $\lambda$ is second order characterizable if there is a second
order sentence $\phi$ in the empty vocabulary such that $N\models\phi$ if and
only if $\lvert N\rvert=\lambda$.
## 2 The case of $L[U]$
It is already known that if $V=L$, then every complete finitely axiomatized
second order theory is categorical [26]. We now show that this also holds if
$V=L[U]$, and we show there are complete recursively axiomatized second order
theories that are non-categorical (with very large models).
Assuming $V=L[U]$, we write $\kappa$ for the sole measurable cardinal, $U$ for
the unique normal measure on $\kappa$ and $<_{L[U]}$ for the canonical well-
order. By an $L[U]$-premouse we mean a structure $(L_{\alpha}[W],\in,W)$ where
$W$ is an $L_{\alpha}[W]$-ultrafilter on some $\gamma<\alpha$. Recall that a
premouse $(L_{\alpha}[W],\in,W)$ is iterable (under taking iterated
ultrapowers), i.e. that every iterate is well-founded, if and only if every
iterate in an iteration of any countable length is well-founded. Observe that
every iterate in an iteration of countable length has the same cardinality as
the original premouse, so the iterability of a premouse is expressible in
second order logic. See for example [12, chapter 20] for more details.
###### Theorem 1.
Assume $V=L[U]$. Every complete finitely axiomatized second order theory is
categorical.
###### Proof.
Suppose $\phi$ is a complete second order sentence in a vocabulary with a
single binary relation symbol $R$ (for simplicity). Note first that $\phi$ has
models in only one cardinality. If not, let $N$ be a model of $\phi$ of least
cardinality, and $M$ another model with $\lvert M\rvert>\lvert N\rvert$. Let
$\theta$ be the sentence
$\exists P\exists
R^{\prime}(\theta^{\prime}(P)\land\phi^{\prime}(P,R^{\prime}))$
where
* •
$P$ is a unary predicate, not occurring in $\phi$, and $R^{\prime}$ is a
binary relation symbol not occurring in $\phi$.
* •
$\phi^{\prime}(P,R^{\prime})$ is a modification of the sentence $\phi$, where
the first order quantifiers $\exists x\dots$ are relativised to $P$ as
$\exists x(P(x)\land{\dots})$, and each occurrence of $R$ is replaced by
$R^{\prime}$.
* •
$\theta^{\prime}(P)$ says that the cardinality of $P$ is smaller than that of
the ambient domain of the model (for example, that there is no injective
function from the entire domain whose range is contained in $P$).
As $\phi$ is complete and $M\models\theta$ (by taking $(P,R^{\prime})$
isomorphic to $N$), also $N\models\theta$, so there is a model of $\phi$ of
cardinality smaller than that of $N$, which is a contradiction. Thus all
models of $\phi$ have the same cardinality.
Now let $M_{0}$ be the $<_{L[U]}$-least model of $\phi$. Suppose first that
$\lvert M_{0}\rvert>\kappa$: in this case we can mimic the categoricity
argument for $L$ as follows. Let $\theta$ be the sentence
$\exists E\exists u\exists m\exists P\exists
R^{\prime}(\theta^{\prime}(E,u)\land\phi^{\prime}(P,R^{\prime})\land\theta_{least}(E,u,m)\land\theta_{isom}(E,m,P,R^{\prime}))$
where
* •
$E,R^{\prime}$ are binary predicate symbols, $P$ a unary predicate symbol and
$u,m$ are first order variables, none occurring in $\phi$ (the intuition is
that $E$ is $\in$, $u$ is a normal ultrafilter, $m$ is a structure in the
vocabulary of $\phi$, $P$ is the domain of $m$ and $R$ is the single binary
relation of $m$).
* •
$\theta^{\prime}(E,u)$ states $E$ is well-founded and extensional (so that $E$
has a transitive collapse, and the domain of the model equipped with $E$ can
be thought of as a transitive set), and its collapse is a level of $L[u]$
having a normal measure $u$ as an element.
* •
$\phi^{\prime}(P,R^{\prime})$ is (as before) a modification of the sentence
$\phi$ where each first order quantifier $\exists x\dots$ is relativised to
$P$ as $\exists x(P(x)\land\dots)$, and each occurrence of $R$ is replaced by
$R^{\prime}$.
* •
$\theta_{least}(E,u,m)$ says $m<_{L[u]}m^{\prime}$ for any other
$m^{\prime}=(Q,S)$ also satisfying $\phi^{\prime}(Q,S)$ (using the formula
defining the canonical well-order of $L[u]$ with $u$ as a parameter).
* •
$\theta_{isom}(E,m,P,R^{\prime})$ states that $m=(P,R^{\prime})$, and that
$(P,R^{\prime})$ is isomorphic to the ambient model (so there is an injection
$F$ with range $P$ such that $R(x,y)\leftrightarrow R^{\prime}(F(x),F(y))$ for
all $x,y$).
If $M\models\theta$ with witnesses $E$, $u$ and $m=(P,R^{\prime})$, and
$\pi\colon(M,E)\to(N,\in)$ is the transitive collapse, then $\pi(u)=U$ is the
unique normal measure $U$ on $\kappa$, $N=L_{\alpha}[U]$ for some $\alpha$ and
$\pi(m)$ is the $<_{L[U]}$-least model $M_{0}$ of $\phi$, so $M$ is isomorphic
to $M_{0}$.
Conversely, let $\alpha$ be least such that $M_{0}\in L_{\alpha}[U]$. Then
$\kappa<\alpha<\lvert M_{0}\rvert^{+}$ and $U\in L_{\alpha}[U]$, so we may
pick a bijection $\pi\colon M_{0}\to L_{\alpha}[U]$ and let $E$, $u$ and
$m=(P,R^{\prime})$ be the preimages of $\in$, $U$ and $M_{0}$ under $\pi$ to
witness $M_{0}\models\theta$.
Thus the above sentence $\theta$ is such that $M\models\theta$ if and only if
$M$ is isomorphic to the $<_{L[U]}$-least model of $\phi$. Now if
$M\models\phi$, also $M\models\theta$ by completeness of $\phi$, so $M$ is
isomorphic to $M_{0}$ and $\phi$ is categorical.
Suppose now that $\lvert M_{0}\rvert=\lambda<\kappa$. In this case we cannot
find a binary relation $E$ on $M_{0}$ and $u\in M_{0}$ such that $u$ is a
normal measure in the transitive collapse of $(M_{0},E)$, so we modify the
previously produced sentence $\theta$. This argument relies on a
straightforward modification of the $\Delta^{1}_{3}$ well-order of reals in
$L[U]$. We make the further assumption that the domain of $M_{0}$ is a
cardinal (and that $M_{0}$ is the $<_{L[U]}$-least among such models), and let
$\theta$ be the sentence
$\exists E\exists W\exists m\exists P\exists
R^{\prime}(\theta^{\prime}(E,W)\land\phi^{\prime}(E,P,R^{\prime})\land\theta_{least}(E,W,m)\land\theta_{isom}(E,m,P,R^{\prime}))$
where
* •
$E,R^{\prime}$ are binary and $W,P$ unary predicate symbols, and $m$ a first
order variable, none occurring in $\phi$.
* •
$\theta^{\prime}(E,W)$ states that $E$ is well-founded and extensional, that its
transitive collapse is an iterable $L[U]$-premouse $(L_{\alpha}[W],\in,W)$ for
some $\alpha$, and that $W$ is an $L[W]$-ultrafilter on some $\gamma$, where
$\gamma$ is an ordinal greater than the cardinality of the ambient model.
* •
$\phi^{\prime}(E,P,R^{\prime})$ is the sentence $\phi^{\prime}(P,R^{\prime})$
from before, with the additional stipulation that the extent of the predicate
$P$ is a cardinal.
* •
$\theta_{least}(E,W,m)$ says $m<_{L[W]}m^{\prime}$ for any other
$m^{\prime}=(Q,S)$ also satisfying $\phi^{\prime}(E,Q,S)$ (using the formula
defining the canonical well-order of $L[W]$ with $W$ as a predicate).
* •
$\theta_{isom}(E,m,P,R^{\prime})$ remains unchanged from earlier.
We claim that $\theta$ is a sentence such that $M\models\theta$ if and only if
$M$ is isomorphic to the $<_{L[U]}$-least model of $\phi$ (among models whose
domain is a cardinal). So suppose $M\models\theta$ with witnesses $E,W$ and
$m=(P,R^{\prime})$, and let $\pi\colon(M,E,W)\to(N,\in,W^{\prime})$ be the
transitive collapse. Then $W^{\prime}=\pi^{\prime\prime}(W)$ is an
$N$-ultrafilter on some $\gamma>\lambda$ and $N=L_{\alpha}[W^{\prime}]$ for
some $\alpha>\gamma$, and $\pi(m)$ is the $<_{L[W^{\prime}]}$-least model of
$\phi$ in $L_{\alpha}[W^{\prime}]$, to which $M$ is isomorphic.
To see why $\pi(m)$ is $M_{0}$, let $j\colon L[U]\to L[F]$ and $k\colon
L_{\alpha}[W^{\prime}]\to L_{\delta}[F]$ be long enough iterations of $L[U]$
and $L_{\alpha}[W]$ respectively such that they become comparable. Then
$\operatorname{crit}(j)=\kappa>\lambda$ and
$\operatorname{crit}(k)=\gamma>\lambda$, so $j(M_{0})=M_{0}$ and
$k(\pi(m))=\pi(m)$. By elementarity, both $M_{0}$ and $\pi(m)$ are now the
$<_{L[F]}$-least model of $\phi$ among models whose domain is a cardinal, so
$\pi(m)=M_{0}$ and $M$ is isomorphic to $M_{0}$.
Conversely, showing $M_{0}\models\theta$ amounts to finding an appropriate
premouse $(L_{\alpha}[W],\in,W)$. Let $\delta$ be a large enough cardinal such
that $M_{0},U\in L_{\delta}[U]$, and that $(L_{\delta}[U],\in,U)$ is an
iterable premouse. Then let $N$ be the Skolem hull of $\lambda\cup\\{M_{0}\\}$
in $L_{\delta}[U]$ of cardinality $\lambda$, and let $\pi\colon(N,\in,U\cap
N)\to(L_{\alpha}[W],\in,W)$ be the transitive collapse. Now
$(L_{\alpha}[W],\in,W)$ is also an iterable premouse with $\lvert
L_{\alpha}[W]\rvert=\lambda$, $W$ is an $L_{\alpha}[W]$-ultrafilter on some
$\gamma=\pi(\kappa)>\lambda$, and $\pi(M_{0})=M_{0}$, so by elementarity
$M_{0}$ is the $<_{L[W]}$-least model of $\phi$ as required. So $\theta$ is a
sentence such that $M\models\theta$ if and only if $M$ is isomorphic to
$M_{0}$, implying as before that $\phi$ is categorical.
Finally, observe that the case $\lvert M_{0}\rvert=\kappa$ is impossible,
since the measurable cardinal $\kappa$ is $\Pi^{2}_{1}$-indescribable [11].
Thus $\phi$ is categorical. ∎
It turns out that finite axiomatizability is key for the preceding theorem.
For every second order characterizable cardinal $\lambda>\kappa$, we produce a
non-categorical recursively axiomatizable theory whose models have cardinality
$\lambda$.
###### Theorem 2.
Assume $V=L[U]$. Suppose $\kappa$ is measurable and $\lambda$ is second order
characterizable with $\lambda>\kappa$. Then there is a recursively
axiomatizable theory $T$ with $\kappa$ many non-isomorphic models of
cardinality $\lambda$.
###### Proof.
For $\alpha<\kappa$ let $M_{\alpha}=(\lambda+\alpha,<)$, so in a structure of
cardinality $\lambda$, $M_{\alpha}$ is straightforwardly definable from
$\alpha$ (as $\lambda$ is second order characterizable). These models have the
property that $M_{\alpha}\cong M_{\beta}$ implies $M_{\alpha}=M_{\beta}$, since
isomorphic ordinals are equal, so $\lambda+\alpha=\lambda+\beta$ and hence
$\alpha=\beta$.
For a second order sentence $\phi$ in vocabulary $(<)$, let
$S_{\phi}=\\{\alpha<\kappa:M_{\alpha}\models\phi\\},$
and let $T_{0}$ be the set of sentences $\phi$ such that $S_{\phi}\in U$. As
$U$ is an ultrafilter, $T_{0}$ is a complete theory (so for any $\phi$,
exactly one of $\phi\in T_{0}$ or $\lnot\phi\in T_{0}$ holds), and by the
$\sigma$-completeness of $U$ the intersection $X=\bigcap\\{S_{\phi}:\phi\in
T_{0}\\}\in U$ is nonempty. The set $X$ is such that for any $\alpha,\beta\in
X$, the structures $M_{\alpha}$, $M_{\beta}$ have the same second order theory
$T_{0}$, so it remains to see that the theory $T_{0}$ is recursively
axiomatizable.
For a second order sentence $\phi$ in vocabulary $(<)$, let $E$ be a binary
relation symbol and $u$ a first order variable, neither occurring in $\phi$,
and let $\phi^{+}$ be the second order sentence
$\exists E\exists u(\theta^{\prime}(E,u)\land(\exists x\in u)(\forall\alpha\in
x)"M_{\alpha}\models\phi")$
where $\theta^{\prime}(E,u)$ says $E$ is well-founded and extensional, and its
transitive collapse is a level of $L[u]$ containing $\lambda$ and having a
normal measure $u$ as an element. Note that $\phi^{+}$ is a sentence in the
empty vocabulary. Intuitively, $\phi^{+}$ states that $M_{\alpha}\models\phi$
for a $U$-big set of ordinals $\alpha<\kappa$, so for any structure $N$ with
$\lvert N\rvert=\lambda$ we have the equivalences
$\displaystyle N\models\phi^{+}$
$\displaystyle\iff\\{\alpha<\kappa:M_{\alpha}\models\phi\\}=S_{\phi}\in U$
$\displaystyle\iff M_{\alpha}\models\phi\text{ for some }\alpha\in X$
$\displaystyle\iff\phi\in T_{0}.$
The import of the vocabulary of $\phi^{+}$ being empty is that for a structure
$N$, the truth of $N\models\phi^{+}$ depends only on $\lvert N\rvert$, so we
get that for all structures $N$ with $\lvert N\rvert=\lambda$,
$N\models\phi^{+}\iff M_{\alpha}\models\phi^{+}\text{ for some }\alpha\in
X\iff\phi^{+}\in T_{0}$
so also $\phi\leftrightarrow\phi^{+}\in T_{0}$ for all second order sentences
$\phi$ in vocabulary $(<)$.
Now define the recursive set of sentences
$T=\\{\phi\leftrightarrow\phi^{+}:\phi\text{ is a second order sentence in
vocabulary }(<)\\}.$
Observe that any model $N$ of the theory $T$ has cardinality $\lambda$, since
taking $\theta_{\lambda}$ to be the second order characterization of
$\lambda$, we have $M_{\alpha}\models\theta_{\lambda}$ for all
$\alpha<\kappa$, so $N\models\theta_{\lambda}^{+}$ and thus
$N\models\theta_{\lambda}$ since
$\theta_{\lambda}^{+}\leftrightarrow\theta_{\lambda}\in T$.
To see $T$ axiomatizes $T_{0}$, suppose $N\models T$ so $\lvert
N\rvert=\lambda$, and that $\phi$ is a second order sentence in the vocabulary
$(<)$, so either $\phi\in T_{0}$ or $\lnot\phi\in T_{0}$. In the former case
we have $S_{\phi}\in U$ so $N\models\phi^{+}$, so $N\models\phi$, and in the
latter case we have $S_{\lnot\phi}\in U$ so $N\models\lnot\phi$. Thus
$\operatorname{Th}_{2}(N)=T_{0}$, so $T$ recursively axiomatizes $T_{0}$ as
desired. ∎
In conclusion, all complete finitely axiomatizable theories are categorical in
$L[U]$ as in $L$, and in $L[U]$ there are complete recursively axiomatizable
second order theories that are non-categorical (whereas this is still unknown
in $L$).
## 3 Countable models
We already remarked earlier that if $V=L$, then every complete finitely
axiomatized second order theory is categorical [26]. We now show that for
theories with a countable model this is a consequence of PD, and therefore a
consequence of large cardinals:
###### Theorem 3.
Assume PD. Every complete finitely axiomatized second order theory with a
countable model is categorical.
###### Proof.
Suppose $\phi$ is a complete second order sentence with a countable model.
Then by completeness all models of $\phi$ are countable. Suppose $\phi$ is on
the level $\Sigma^{1}_{n}$ of second order logic and its vocabulary is, for
simplicity, just one binary predicate symbol $P$. Let $R$ be the
$\Sigma^{1}_{n}$ (lightface) set of real numbers coding models of $\phi$. By
PD and its consequence, the Projective Uniformization Theorem [22, Theorem
6C5], there is a $\Sigma^{1}_{n+1}$ (even $\Sigma^{1}_{n}$ if $n$ is even)
element $r$ in $R$. Suppose $r$ codes the model $M$ of $\phi$. We show that
every model of $\phi$ is isomorphic to $M$. Suppose $N$ is a model of $\phi$.
Let $\theta$ be the following second order sentence:
$\begin{array}[]{l}\exists Q_{+}\exists
Q_{\times}(\theta_{1}(Q_{+},Q_{\times})\wedge\exists
A(\theta_{2}(Q_{+},Q_{\times},A)\wedge\\\ \exists
B(\theta_{3}(Q_{+},Q_{\times},A,B)\wedge\exists
F\theta_{4}(F,B)))),\end{array}$
where
* •
$\theta_{1}(Q_{+},Q_{\times})$ is the standard second order characterization
of $({\mathbb{N}},+,\times)$.
* •
$\theta_{2}(Q_{+},Q_{\times},A)$ says that the set $A$ satisfies the
$\Sigma^{1}_{n+1}$ definition of $r$ in terms of $Q_{+}$ and $Q_{\times}$.
* •
$\theta_{3}(Q_{+},Q_{\times},A,B)$ says in a domain $N$ that $(N,B)$ is the
binary structure coded by $A$ in terms of $Q_{+}$ and $Q_{\times}$.
* •
$\theta_{4}(F,B)$ is the second order sentence which says that $F$ is a
bijection and
$\forall x\forall y(P(x,y)\leftrightarrow B(F(x),F(y))).$
Thus, $\theta$ essentially says “I am isomorphic to the model coded by $r$.”
Trivially, $M\models\theta$. Recall that $M\models\phi$. Since $\phi$ is
complete, $\phi\models\theta$. Therefore our assumption $N\models\phi$ implies
$N\models\theta$ and therefore $N\cong M$. ∎
We make a few remarks about the proof. First, if $n=2$, then we can use the
Novikov-Kondo-Addison Uniformisation Theorem and PD is not needed. Thus we
obtain:
###### Corollary 4.
A complete $\Sigma^{1}_{2}$-sentence of second order logic with a countable
model is always categorical.
In fact, the categorical finite second order axiomatizations of structures
such as $({\mathbb{N}},+,\times)$, $({\mathbb{R}},+,\times,0,1)$ and
$({\mathbb{C}},+,\times,0,1)$ (any many other classical structures) are all on
the $\Pi^{1}_{1}$-level of second order logic.
Second, the above proof gives also the following more general result: Assume
$Det(\mathbf{\Delta}^{1}_{2n})$. Suppose $T$ is a recursively axiomatized
theory on the $\Sigma^{1}_{2n+2}$-level of second order logic, which is
complete for sentences on this level of second order logic. Then $T$ is
categorical.
An essential ingredient of the proof of Theorem 3 was the assumption that the
complete second order theory is finitely axiomatized. The following theorem
shows that “finitely” cannot be replaced by “recursively”.
###### Theorem 5.
Assume PD. There is a recursively axiomatized complete second order theory
with $2^{\omega}$ non-isomorphic countable models.
###### Proof.
For any $x\subseteq\omega$ let
$M_{x}=(V_{\omega}\cup\\{y\subseteq\omega:y\equiv_{T}x\\},\in),$
where $y\equiv_{T}x$ means the Turing-equivalence of $y$ and $x$. We denote
the second order theory of $M_{x}$ by $\operatorname{Th}_{2}(M_{x})$. By
construction, $x\equiv_{T}y$ implies
$\operatorname{Th}_{2}(M_{x})=\operatorname{Th}_{2}(M_{y})$. On the other
hand, if $x\not\equiv_{T}y$, then clearly $M_{x}\ncong M_{y}$. If $\phi$ is a
second order sentence, then ‘$M_{x}\models\phi$’ is a projective property of
$x$, closed under $\equiv_{T}$, and hence by Turing Determinacy for projective
sets [20] has a constant truth value on a cone of reals $x$. By intersecting
the cones we get a cone $C$ of reals $x$ on which
$\operatorname{Th}_{2}(M_{x})$ is constant. For any second order $\phi$ let
$\phi^{+}$ be the second order sentence
$``M_{y}\models\phi\mbox{ for a cone of $y$}"$
and $\hat{\phi}$ the sentence $\phi\leftrightarrow\phi^{+}$. Let us consider
the recursive second order theory $T$ consisting of $\hat{\phi}$, when $\phi$
ranges over second order sentences in the vocabulary of the structures
$M_{x}$. We may immediately conclude that $T$ is complete, for if a second
order sentence $\phi$ is given, then by the choice of $C$ either
$M_{x}\models\phi$ for $x\in C$ or $M_{x}\models\neg\phi$ for $x\in C$. In the
first case $\hat{\phi}\models\phi$ and in the second case
$\hat{\phi}\models\neg\phi$. Therefore, $T\models\phi$ or $T\models\neg\phi$.
There is a continuum of pairwise non-Turing-equivalent reals in the cone $C$.
Hence there is a continuum of pairwise non-isomorphic models $M_{x}$ with
$x\in C$. ∎
## 4 Models of cardinality $\aleph_{1}$
Next, we show that the $(*)$ axiom (see Definition 4.33 in [28]) has
categoricity consequences for theories with a model of cardinality
$\aleph_{1}$. Thus these consequences can also be derived from forcing axioms,
namely MM++ which implies the $(*)$ axiom (as shown in [4]). The following
theorem answers a question of Boban Veličković.
###### Theorem 6.
Assume $(*)$. Then there is a complete finitely axiomatizable second order
theory with $\omega_{2}\,(=2^{\omega_{1}})$ non-isomorphic models of
cardinality $\aleph_{1}$.
###### Proof.
The pertinent consequence of $(*)$ is the quasihomogeneity of the
nonstationary ideal on $\omega_{1}$ (see Section 5.8 in [28], particularly
Definition 5.100). We take “NS${}_{\omega_{1}}$ is quasihomogeneous” to be the
following statement: if $X\subseteq\operatorname{\mathcal{P}}(\omega_{1})$ is
ordinal definable from parameters in
${\mathbb{R}}\cup\\{$NS${}_{\omega_{1}}\\}$, and $X$ is closed under equality
modulo NS${}_{\omega_{1}}$, and $X$ contains one bistationary subset of
$\omega_{1}$, then $X$ contains every bistationary subset of $\omega_{1}$.
We focus on the $\omega_{1}$-like dense linear orders
$\Phi(S)=\eta+\sum_{\alpha<\omega_{1}}\eta_{\alpha}$, where
$\eta_{\alpha}=\begin{cases}\eta,&\alpha\notin S\\\ 1+\eta,&\alpha\in
S,\end{cases}$
$\eta$ is the order type of the rationals, and $S\subseteq\omega_{1}$ is
bistationary. These models have the property that
$\Phi(S)\cong\Phi(S^{\prime})$ if and only if $S\triangle S^{\prime}\in
NS_{\omega_{1}}$. For a second order sentence $\phi$ in vocabulary $(<)$, the
set
$X_{\phi}=\\{S\subseteq\omega_{1}:S\text{ bistationary},\Phi(S)\models\phi\\}$
is ordinal definable, and closed under equality modulo NS${}_{\omega_{1}}$, so
the quasihomogeneity of NS${}_{\omega_{1}}$ implies that $X_{\phi}$ contains
either every bistationary subset of $\omega_{1}$, or none of them.
This shows the models $\Phi(S)$ for bistationary $S\subseteq\omega_{1}$ all
have the same complete second order theory, which is thus non-categorical.
This theory is axiomatized by the second order sentence in vocabulary $(<)$
expressing “I am isomorphic to $\Phi(S)$ for some bistationary
$S\subseteq\omega_{1}$”, so it is finitely axiomatizable, as required. ∎
Some categoricity consequences of $(*)$ can already be derived from AD, the
axiom of determinacy. As the axiom $(*)$ states that
$L[\operatorname{\mathcal{P}}(\omega_{1})]$ is a homogeneous forcing extension
of a model of AD by a forcing that does not add reals, the categoricity
consequences of AD for theories with a model of cardinality $\leq\aleph_{1}$
also follow from $(*)$. (Of course, the existence of recursively axiomatized
non-categorical theories under $(*)$ is overshadowed by the existence of even
finitely axiomatized such theories.)
###### Theorem 7.
Assume AD. Then there is a complete recursively axiomatized second order
theory with at least $2^{\aleph_{0}}$ many models of cardinality $\aleph_{1}$.
###### Proof.
By a theorem of Martin, AD implies $\omega_{1}\to(\omega_{1})^{\omega}$, and moreover the
homogeneous set given by $\omega_{1}\to(\omega_{1})^{\omega}$ can be taken to
be a club (see [14]). We may then intersect $\omega$ many homogeneous clubs
for $\omega$ many colorings to obtain
$\omega_{1}\to(\omega_{1})^{\omega}_{2^{\omega}}$, and the homogeneous subset
can still be taken to be a club.
We focus on models of the form $M_{X}=(\omega_{1},<,X)$ for
$X\in[\omega_{1}]^{\omega}$. The second order theory
$\operatorname{Th}_{2}(M_{X})$ in the vocabulary $(<,X)$ can be encoded by a
real $f(X)\in 2^{\omega}$ consisting of the Gödel numbers of the sentences
true in $M_{X}$. This gives a coloring $f\colon[\omega_{1}]^{\omega}\to
2^{\omega}$, so we find a homogeneous club subset $H_{0}\subseteq\omega_{1}$
such that $f(X)$ does not depend on $X\in[H_{0}]^{\omega}$. Hence the models
$M_{X}$ with $X\in[H_{0}]^{\omega}$ all have the same complete second order
theory $T_{0}$, which is thus non-categorical.
The theory $T_{0}$ is axiomatized by
$T=\\{\phi\leftrightarrow\phi^{+}:\phi\text{ is a second order sentence}\\}$
where for a given second order sentence $\phi$ in vocabulary $(<,X)$, the
sentence $\phi^{+}$ expresses “there exists a club $C\subseteq\omega_{1}$ such
that $M_{X}\models\phi$ for all $X\in[C]^{\omega}$”.
For a given second order sentence $\phi$, if $M_{X}\models\phi$ for each
$X\in[H_{0}]^{\omega}$, then $H_{0}$ serves to witness that $\phi^{+}$ holds,
so $T\models\phi$. Conversely, if $\phi^{+}$ holds, there is a club $C$ such
that $M_{X}\models\phi$ for every $X\in[C]^{\omega}$, and taking $X\in[C\cap
H_{0}]^{\omega}$ we see also that $M_{X}\models\phi$ for all
$X\in[H_{0}]^{\omega}$. Thus $T\models\phi$ for exactly those $\phi$ such that
$M_{X}\models\phi$ for all $X\in[H_{0}]^{\omega}$, so we see that $T$ is a
recursive axiomatization of the theory $T_{0}$ as desired. ∎
The same can be analogously derived from the $(*)$ axiom, as follows:
###### Corollary 8.
Assume $(*)$. Then there is a complete recursively axiomatized second order
theory with $\omega_{2}$ many models of cardinality $\aleph_{1}$.
###### Proof.
Recall $(*)$ states that
$L[\operatorname{\mathcal{P}}(\omega_{1})]=L({\mathbb{R}})^{{{\mathbb{P}}_{\text{max}}}}$
and AD holds in $L({\mathbb{R}})$. As ${{\mathbb{P}}_{\text{max}}}$ is
homogeneous and does not add reals under AD (see Lemmas 4.40 and 4.43 in
[28]), $\omega_{1}=\omega_{1}^{L({\mathbb{R}})}$ and
$[\omega_{1}]^{\omega}=([\omega_{1}]^{\omega})^{L({\mathbb{R}})}$.
We again look at models $M_{X}=(\omega_{1},<,X)$ for
$X\in[\omega_{1}]^{\omega}$, and working in $L({\mathbb{R}})$, define a
coloring $f\colon[\omega_{1}]^{\omega}\to 2^{\omega}$ by
$f(X)=r\quad\iff\quad
L({\mathbb{R}})\models{{\mathbb{P}}_{\text{max}}}\Vdash"\check{r}\text{ codes
}\operatorname{Th}_{2}(M_{\check{X}})".$
That $f$ is a well-defined total function relies on the homogeneity of
${{\mathbb{P}}_{\text{max}}}$. By $\mathrm{AD}^{L({\mathbb{R}})}$ we find a club $H_{0}\in
L({\mathbb{R}})$, $H_{0}\subseteq\omega_{1}$ homogeneous for $f$. Stepping out
of $L({\mathbb{R}})$, we see that the models $M_{X}$, $X\in[H_{0}]^{\omega}$
all have the same complete second order theory $T_{0}$ (in
$L({\mathbb{R}})^{{\mathbb{P}}_{\text{max}}}=L[\operatorname{\mathcal{P}}(\omega_{1})]$
and in $V$ both).
Working now in $V$, we again define
$T=\\{\phi\leftrightarrow\phi^{+}:\phi\text{ is a second order sentence}\\}$
where for a given second order sentence $\phi$, the sentence $\phi^{+}$
expresses “there exists a club $C\subseteq\omega_{1}$ such that
$M_{X}\models\phi$ for all $X\in[C]^{\omega}$”. The proof concludes
analogously to the preceding theorem.
We note that $(*)$ calculates $\lvert\omega_{1}^{\omega}\rvert$ to be
$\omega_{2}$, so $T_{0}$ has $\omega_{2}$ many non-isomorphic models as
claimed. ∎
Of course, we may also use the fact that the club filter on $\omega_{1}$ is an
ultrafilter under AD to get another complete recursively axiomatized non-
categorical second order theory, the difference being that this theory has
$\omega_{1}$ many models instead. The proof, analogous to the proof of Theorem
2, is omitted:
###### Theorem 9.
Assume AD. Then there is a complete recursively axiomatized second order
theory with $\omega_{1}$ many models of cardinality $\aleph_{1}$. ∎
This proof is also easily modified to assume $(*)$ instead:
###### Corollary 10.
Assume $(*)$. Then there is a complete recursively axiomatized second order
theory with $\omega_{1}$ many models of cardinality $\aleph_{1}$. ∎
Thus, under $(*)$, a complete non-categorical theory with a model of
cardinality $\aleph_{1}$ may have either $\omega_{1}$ or $\omega_{2}$ many
non-isomorphic models.
## 5 Forcing non-categoricity
We shall show (Theorem 14) that we can force, over any model of set theory, a
finite complete non-categorical second order theory with a model of
cardinality $\aleph_{1}$. This shows that large cardinals cannot imply the
categoricity of finite complete second order theories in general and, in
particular, in the case that the theory has a model of cardinality
$\aleph_{1}$. This is in contrast to finite complete second order theories
with a countable model where PD implies categoricity (Theorem 3).
Here is an outline of the proof. We start with a preparatory countably closed
forcing ${\mathbb{P}}$ obtaining a generic extension $V[G]$. Then we add
$\aleph_{1}$ Cohen-reals obtaining a further generic extension $V[G][H]$. In
this model we consider for every $x\subseteq\omega$ the model
$M_{x}=(HC^{V[x]},HC^{V},\in).$ (1)
We show that if $x$ is Cohen-generic over $V[G]$, then the complete second
order theory of $M_{x}$ is finitely axiomatizable (in second order logic), and
if $x$ and $y$ are mutually Cohen-generic over $V[G]$, then $M_{x}$ and
$M_{y}$ are second order equivalent but non-isomorphic.
We begin by recalling the following _fast club_ forcing
${\mathbb{P}}_{\mbox{\scriptsize fast}}$, due to R. Jensen: Conditions are
pairs $p=(c_{p},E_{p})$ where $c_{p}$ is a countable closed subset of
$\omega_{1}$ and $E_{p}$ is a club in $\omega_{1}$. We define
$(c_{p},E_{p})\leq(c_{q},E_{q})$ if $c_{q}$ is an initial segment of $c_{p}$,
$E_{q}\subseteq E_{p}$, and $c_{p}\setminus c_{q}\subseteq E_{q}$. This
forcing is countably closed. If we assume CH, this forcing has the
$\aleph_{2}$-c.c. It is called fast club forcing because of the following
property: Suppose $G$ is ${\mathbb{P}}_{{\mbox{\scriptsize fast}}}$-generic.
If $C_{G}$ is the union of the sets $c_{p}$ such that $p\in G$, then the
following holds: If $D$ is any club in $V$, then there is $\alpha$ such that
$C_{G}\setminus\alpha\subseteq D$. The set $C_{G}$ is called a _fast club_
(over $V$).
Let ${\mathbb{Q}}$ be the poset $\mbox{Fn}(\omega_{1}\times\omega,2,\omega)$
for adding $\aleph_{1}$ Cohen reals. We use fast club forcing to build a
preparatory iterated forcing in such a way that after forcing with
${\mathbb{Q}}$ the ground model reals are second order definable from any set
$A\subseteq\omega_{1}$ with a certain second order property. The following
lemma is crucial in the iteration:
###### Lemma 11.
Suppose $G\times H$ is ${\mathbb{P}}_{\mbox{\scriptsize
fast}}\times{\mathbb{Q}}$-generic over $V$. Suppose $A\subseteq\omega_{1}$ is
in $V[H]$ and $D\subseteq C_{G}$ is a club in $V[G\times H]$ such that
$V[G\times H]$ satisfies $\forall\alpha<\omega_{1}(D\cap\alpha\in L[A])$. Then
${\mathcal{P}}(\omega)^{V}\subseteq L[A]$.
###### Proof.
We modify a construction from the proof of [30, Lemma 4.33] to our context.
Let us call a pair $(A,B)$ of sets of ordinals an _interlace_ , if $A\cap
B=\emptyset$, above every element of $A$ there is an element of $B$, and vice
versa. Suppose we have disjoint sets $X,Y,Z\subseteq\omega_{1}$ such that
$(X\cup Y,Z)$ is an interlace. Let $z\sim z^{\prime}$ in $Z$ if
$(z,z^{\prime})\cap(X\cup Y)=\emptyset$. Let $[z_{n}]$, $n<\omega$, be the
first $\omega$ $\sim$-equivalence classes in $Z$ in increasing order. The
triple $(X,Y,Z)$ is said to _code_ the set $a\subseteq\omega$ if for all
$n<\omega$:
$n\in a\iff\min\\{\alpha\in X\cup Y:[z_{n}]<\alpha<[z_{n+1}]\\}\in X.$
It suffices to prove that for every $a\subseteq\omega$ in $V$ there is a
triple $(X,Y,Z)\in L[A]$ such that $(X\cup Y,Z)$ is an interlace, and
$(X,Y,Z)$ codes $a$. To this end, suppose $a\in{\mathcal{P}}(\omega)^{V}$.
Suppose $\dot{A}$ is a ${\mathbb{Q}}$-name for $A$ in $V$, $\tau\in V$ is a
${\mathbb{P}}_{\mbox{\scriptsize fast}}$-name for a ${\mathbb{Q}}$-name
$\dot{D}$ for $D$, and $\dot{F}$ a ${\mathbb{Q}}$-name for a function
$\omega_{1}\to\omega_{1}$ which lists the elements of $\dot{D}$ in increasing
order. W.l.o.g. $\tau$ is a ${\mathbb{P}}_{\mbox{\scriptsize fast}}$-name
$\langle\dot{f}_{\alpha}:\alpha<\omega_{1}\rangle$ for a sequence of countable
partial functions defined on $\omega_{1}$ such that
$\\{\dot{f}_{\alpha}(\gamma):\gamma\in\mbox{dom}(\dot{f}_{\alpha})\\}$ is a maximal
antichain in ${\mathbb{Q}}$ and $\dot{f}_{\alpha}(\gamma)$ forces
$\dot{F}(\alpha)=\gamma$. Suppose (w.l.o.g.) the weakest condition in
${\mathbb{P}}_{\mbox{\scriptsize fast}}\times{\mathbb{Q}}$ forces what is
assumed about $\dot{A}$, $\dot{F}$, $\tau$ and $\dot{D}$. Since
${\mathbb{P}}_{\mbox{\scriptsize
fast}}\Vdash``{\mathbb{Q}}\Vdash\dot{D}\subseteq C_{\dot{G}}"$, we have
$\Vdash\mbox{dom}(\dot{f}_{\alpha})\subseteq C_{\dot{G}}$. More generally, if
$p\in{\mathbb{P}}_{\mbox{\scriptsize fast}}$ decides the countable set
$\mbox{dom}(\dot{f}_{\alpha})$, then
$p\Vdash\mbox{dom}(\dot{f}_{\alpha})\subseteq c_{p}\setminus\alpha.$ (2)
If $\delta<\omega_{1}$, let $W_{\delta}$ be the set of conditions
$p\in{\mathbb{P}}_{\mbox{\scriptsize fast}}$ such that $p$ decides
$\mbox{dom}(\dot{f}_{\delta})$. It is easy to see that $W_{\delta}$ is dense.
We construct descending $\omega$-sequences $(p_{n}),(q_{n})$ and $(r_{n})$ in
${\mathbb{P}}_{\mbox{\scriptsize fast}}$ as follows. We let
$p_{0}=q_{0}=r_{0}$ be the weakest condition in
${\mathbb{P}}_{\mbox{\scriptsize fast}}$. Suppose $p_{n},q_{n}$ and $r_{n}$
have been defined already. Let $\delta_{n}=\max(c_{r_{n}}\cup{\\{0\\}})$. Now
there are two cases:
1. 1.
Case $n\in a$:
1. (a)
Let $p_{n+1}\leq p_{n}$ such that $\min(c_{p_{n+1}}\setminus
c_{p_{n}})>\delta_{n}$ and $p_{n+1}\in W_{\delta_{n}}$.
2. (b)
Let $q_{n+1}\leq q_{n}$ such that $\min(c_{q_{n+1}}\setminus
c_{q_{n}})>\max(c_{p_{n+1}})$ and $q_{n+1}\in W_{\delta_{n}}$.
3. (c)
Let $r_{n+1}\leq r_{n}$ such that $\min(c_{r_{n+1}}\setminus
c_{r_{n}})>\max(c_{q_{n+1}})$ and $r_{n+1}\in W_{\delta_{n}}$.
2. 2.
Case $n\notin a$:
1. (a)
Let $q_{n+1}\leq q_{n}$ such that $\min(c_{q_{n+1}}\setminus
c_{q_{n}})>\delta_{n}$ and $q_{n+1}\in W_{\delta_{n}}$.
2. (b)
Let $p_{n+1}\leq p_{n}$ such that $\min(c_{p_{n+1}}\setminus
c_{p_{n}})>\max(c_{q_{n+1}})$ and $p_{n+1}\in W_{\delta_{n}}$.
3. (c)
Let $r_{n+1}\leq r_{n}$ such that $\min(c_{r_{n+1}}\setminus
c_{r_{n}})>\max(c_{p_{n+1}})$ and $r_{n+1}\in W_{\delta_{n}}$.
Note that if $\delta_{n}<\alpha<\min(c_{p_{n+1}}\setminus c_{p_{n}})$, then
$p_{n+1}\Vdash\alpha\notin C_{\dot{G}}$, whence
$p_{n+1}\Vdash\alpha\notin\tau$. Respectively, if
$\delta_{n}<\alpha<\min(c_{q_{n+1}}\setminus c_{q_{n}})$, then
$q_{n+1}\Vdash\alpha\notin C_{\dot{G}}$, whence
$q_{n+1}\Vdash\alpha\notin\tau$, and if
$\delta_{n}<\alpha<\min(c_{r_{n+1}}\setminus c_{r_{n}})$, then
$r_{n+1}\Vdash\alpha\notin C_{\dot{G}}$, whence
$r_{n+1}\Vdash\alpha\notin\tau$.
Similarly, if $\max(c_{p_{n+1}})<\alpha<\delta_{n+1}$, then
$p_{n+2}\Vdash\alpha\notin C_{\dot{G}}$, whence
$p_{n+2}\Vdash\alpha\notin\tau$. Respectively, if
$\max(c_{q_{n+1}})<\alpha<\delta_{n+1}$, then $q_{n+2}\Vdash\alpha\notin
C_{\dot{G}}$, whence $q_{n+2}\Vdash\alpha\notin\tau$.
Finally, if $\alpha\in I=[\min(c_{p_{n+1}}),\max(c_{p_{n+1}})]$, then
$p_{n+1}$ may leave the sentence $\alpha\in\tau$ undecided, but still
$p_{n+1}\Vdash I\cap\tau\neq\emptyset$, since $p_{n+1}$ decides
$\mbox{dom}(\dot{f_{\delta_{n}}})$ and we have (2). Respectively, $q_{n+1}$
forces $[\min(c_{q_{n+1}}),\max(c_{q_{n+1}})]\cap\tau\neq\emptyset$, and
$r_{n+1}$ forces $[\min(c_{r_{n+1}}),\max(c_{r_{n+1}})]\cap\tau\neq\emptyset$.
Let
$p_{\omega}=\inf_{n}p_{n},q_{\omega}=\inf_{n}q_{n},r_{\omega}=\inf_{n}r_{n}$,
and let $\delta=\sup\\{\delta_{n}:n<\omega\\}$. Let
$G_{0}\subseteq{\mathbb{P}}_{\mbox{\scriptsize fast}}$ be generic over $V[H]$
such that $p_{\omega}\in G_{0}$,
$G_{1}\subseteq{\mathbb{P}}_{\mbox{\scriptsize fast}}$ generic over $V[H]$
such that $q_{\omega}\in G_{1}$, and
$G_{2}\subseteq{\mathbb{P}}_{\mbox{\scriptsize fast}}$ generic over $V[H]$
such that $r_{\omega}\in G_{2}$. Lastly, let
$X=\tau_{G_{0}\times H}\cap\delta,\ Y=\tau_{G_{1}\times H}\cap\delta,\
Z=\tau_{G_{2}\times H}\cap\delta.$
As $\Vdash_{{\mathbb{P}}_{\mbox{\scriptsize
fast}}\times{\mathbb{Q}}}\tau\cap\delta\in L[\dot{A}]$ and
$\dot{A}_{G_{0}\times H}=\dot{A}_{H}$, we have $V[G_{0}\times H]\models X\in
L[A]$. By absoluteness, $V[H]\models X\in L[A]$. Similarly, $V[H]\models
Y,Z\in L[A]$.
Now by construction, $(X\cup Y,Z)$ is an interlace and $(X,Y,Z)$ codes $a$.
Hence $a\in L[A]$.
∎
We need another auxiliary lemma for the iteration:
###### Lemma 12.
Assume $G$ is ${\mathbb{P}}_{\mbox{\scriptsize fast}}$-generic over $V$,
${\mathbb{R}}\in V[G]$ is a $\sigma$-closed forcing, $K$ is
${\mathbb{R}}$-generic over $V[G]$, $H$ is ${\mathbb{Q}}$-generic over
$V[G][K]$, $A\subseteq\omega_{1}$ is in $V[H]$, and in $V[G][K][H]$, there is
a club $D\subseteq C_{G}$ such that $D\cap\alpha\in L[A]$ for all
$\alpha<\omega_{1}$. Then such a club $D$ must already exist in $V[G][H]$.
###### Proof.
Suppose $\dot{A}\in V$ is a ${\mathbb{Q}}$-name for $A$ and $\dot{D}\in V[G]$
is an ${\mathbb{R}}$-name for a ${\mathbb{Q}}$-name for $D$. Suppose
$\dot{F}\in V[G]$ is an ${\mathbb{R}}$-name for a ${\mathbb{Q}}$-name for a
function $\omega_{1}\to\omega_{1}$ listing the elements of $\dot{D}$ in
increasing order. W.l.o.g. $\dot{D}$ is an ${\mathbb{R}}$-name
$\langle\dot{f}_{\alpha}:\alpha<\omega_{1}\rangle$ for a sequence of countable
partial functions defined on $\omega_{1}$ such that
$\\{\dot{f}_{\alpha}(\gamma):\gamma\in\mbox{dom}(\dot{f}_{\alpha})\\}$ is a maximal
antichain in ${\mathbb{Q}}$ and $\dot{f}_{\alpha}(\gamma)$ forces
$\dot{F}(\alpha)=\gamma$. Suppose (w.l.o.g.) the weakest condition in
${\mathbb{R}}\times{\mathbb{Q}}$ forces what is assumed about $\dot{A}$,
$\dot{F}$, and $\dot{D}$. Since $\Vdash\dot{D}\subseteq C_{\dot{G}}$, we have
$\Vdash\mbox{dom}(\dot{f}_{\alpha})\subseteq C_{\dot{G}}$.
We shall define a descending sequence $(r_{\alpha})_{\alpha<\omega_{1}}$ in
$K$. For a start, $r_{0}\in K$ can be arbitrary. Suppose $r_{\alpha}\in K$ has
been defined already. Let $r_{\alpha+1}\leq r_{\alpha}$ such that
$r_{\alpha+1}\in K$ and $r_{\alpha+1}$ decides $\mbox{dom}(\dot{f}_{\beta})$
and $\dot{f}_{\beta}(\gamma)$ for $\beta\leq\alpha$ and
$\gamma\in\mbox{dom}(\dot{f}_{\beta})$. Let $g_{\alpha}\in V[G]$ such that
$r_{\alpha+1}\Vdash\dot{f}_{\alpha}=g_{\alpha}$. Let $\dot{S}$ be a
${\mathbb{Q}}$-name in $V[G]$ for a function $\omega_{1}\to\omega_{1}$ such that
$g_{\alpha}(\gamma)\Vdash\dot{S}(\alpha)=\gamma$. Let $\dot{E}\in V[G]$ be a
${\mathbb{Q}}$-name such that
$\Vdash\dot{E}=\\{\dot{S}(\alpha):\alpha<\omega_{1}\\}$. Now
$V[G][K][H]\models\dot{E}_{H}=\dot{D}_{K\times H}\ \wedge\ \forall\delta<\omega_{1}\,(\dot{E}_{H}\cap\delta\in L[A]),$
whence $V[G][H]\models\forall\delta<\omega_{1}\,(\dot{E}_{H}\cap\delta\in L[A])$ follows by absoluteness. ∎
Now we can construct the iteration in such a way that after forcing with the
iteration and then with ${\mathbb{Q}}$ the ground model reals, which are the
same as the reals after the iteration, are second order definable from any set
$A\subseteq\omega_{1}$ with a certain second order property.
###### Lemma 13.
We assume CH. Suppose ${\mathbb{P}}$ is the countable support iteration of
fast club forcing of length $\omega_{2}$. Let $G$ be ${\mathbb{P}}$-generic
over $V$. Suppose $H$ is ${\mathbb{Q}}$-generic over $V[G]$. Suppose in
$V[G][H]$ there is a set $A\subseteq\omega_{1}$ such that for every club $C$,
there is a club $D\subseteq C$ such that $D\cap\alpha\in L[A]$ for all
$\alpha<\omega_{1}$. Then ${\mathcal{P}}(\omega)^{V}\subseteq L[A]$.
###### Proof.
Let ${\mathbb{P}}=\langle{\mathbb{P}}_{\alpha}:\alpha<\omega_{2}\rangle$ be
the countable support iteration of
$\langle\dot{Q}_{\alpha}:\alpha<\omega_{2}\rangle$, where
${\mathbb{P}}_{\alpha}\Vdash``\dot{Q}_{\alpha}$ is the fast club forcing ${\mathbb{P}}_{{\mbox{\scriptsize fast}}}$”. Let
$G_{\alpha}=G\cap{\mathbb{P}}_{\alpha}$. Let $\dot{A}\in V[G]$ be a ${\mathbb{Q}}$-name
for $A$. Choose $\beta$ large enough such that $\dot{A}\in V[\langle
G_{\alpha}:\alpha<\beta\rangle].$ Now $G_{\beta}$ is
${\mathbb{P}}_{\mbox{\scriptsize fast}}$-generic over $V[\langle
G_{\alpha}:\alpha<\beta\rangle]$. But $V[G]$ is a generic extension of
$V[\langle G_{\alpha}:\alpha<\beta\rangle][G_{\beta}]$ by countably closed
forcing and by assumption, in $V[G][H]$, there is a club $D\subseteq
C_{G_{\beta}}$ such that $D\cap\eta\in L[A]$ for all $\eta<\omega_{1}$. We
apply Lemma 12 in $V[\langle G_{\alpha}:\alpha<\beta\rangle]$ and conclude
that there is a club $D\subseteq C_{G_{\beta}}$ in $V[\langle
G_{\alpha}:\alpha<\beta\rangle][G_{\beta}][H]$ such that $D\cap\alpha\in L[A]$
for all $\alpha<\omega_{1}$. By Lemma 11, ${\mathcal{P}}(\omega)^{V}\subseteq L[A]$. ∎
###### Theorem 14.
There is a set of forcing conditions that forces the existence of a complete
non-categorical finite second order theory with a model of cardinality
$\aleph_{1}$.
###### Proof.
Assume w.l.o.g., CH. As said above, we start with some preparatory countably
closed forcing ${\mathbb{P}}$ obtaining a generic extension $V[G]$. Then we
add $\aleph_{1}$ Cohen-reals obtaining a further generic extension $V[G][H]$.
In this model we consider for every $x\subseteq\omega$ the model $M_{x}$ as
defined in (1). Clearly, the cardinality of $M_{x}$ is $\aleph_{1}$. We shall
now show that if $x$ is Cohen-generic over $V[G]$, e.g. one of the
$\aleph_{1}$ many coded by $H$, then the complete second order theory of
$M_{x}$ is finitely axiomatizable (in second order logic). To complete the proof of
the theorem, we show that if $x$ and $y$ are mutually Cohen-generic over
$V[G]$, then $M_{x}$ and $M_{y}$ are second order equivalent but non-
isomorphic.
In order to use second order logic over $\omega_{1}$ to talk about $HC^{V}$
and Cohen-genericity over $V$ we need to be able to decide, by the means
offered by second order logic, which reals in $V[G][H]$ are in $V$ (or,
equivalently, in $V[G]$) and which are not. This is precisely the purpose of
the preparatory forcing ${\mathbb{P}}$.
We denote the starting ground model by $V$ and assume, w.l.o.g., that $V$
satisfies CH. We let the preparatory forcing
${\mathbb{P}}=\langle{\mathbb{P}}_{\alpha}:\alpha<\omega_{2}\rangle$ be the
countable support iteration of
$\langle\dot{Q}_{\alpha}:\alpha<\omega_{2}\rangle$, where
${\mathbb{P}}_{\alpha}\Vdash``\dot{Q}_{\alpha}\mbox{ is the fast club}$
forcing ${\mathbb{P}}_{{\mbox{\scriptsize fast}}}$”. Let $G$ be
${\mathbb{P}}$-generic over $V$ and $G_{\alpha}=G\cap{\mathbb{P}}_{\alpha}$.
In $V[G]$ we force with ${\mathbb{Q}}$ a generic $H$. Note that
$\aleph_{1}^{V[G][H]}=\aleph_{1}^{V}$ and
${\mathcal{P}}(\omega)^{V[G]}={\mathcal{P}}(\omega)^{V}$. Working in
$V[G][H]$, let the second order sentence $\phi(R,E)$, where $R$ is unary and
$E$ is binary, say in a model $M$:
1. (1)
$E^{M}$ is a well-founded relation satisfying $ZFC^{-}$ $+$ “every set is
countable”. This should also be true when relativized to $R^{M}$.
2. (2)
$|M|=\aleph_{1}$.
3. (3)
If $P^{\prime}\in R^{M}$ denotes (in $M$) the set $\mbox{Fn}(\omega,2,\omega)$
of conditions for adding one Cohen real, then there is $K\subseteq P^{\prime}$
such that $K$ is $P^{\prime}$-generic over $R^{M}$ and $M\models``V=R[K]"$.
4. (4)
If $a\subseteq\omega$ and the transitive collapse of $M$ is $N$, then the
following conditions are equivalent:
1. (a)
$a\in R^{N}$.
2. (b)
If $A\subseteq\omega_{1}$ and for every club $C\subseteq\omega_{1}$ there is a
club $D\subseteq C$ such that $D\cap\alpha\in L[A]$ for every
$\alpha<\omega_{1},$ then $a\in L[A]$.
Note that we can express $``D\cap\alpha\in L[A]"$, or equivalently
$``\exists\beta(|\beta|=\aleph_{1}\wedge D\cap\alpha\in L_{\beta}[A])"$, in
second order logic on $M$ since second order logic gives us access to all
structures of cardinality $|M|$ (=$\aleph_{1}$).
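For instance (a sketch, suppressing routine bookkeeping), $``D\cap\alpha\in L[A]"$ can be written as: there is a binary relation $E^{\prime}$ on the domain of $M$ which is well-founded and extensional, such that the collapsed structure is some $L_{\beta}[A]$, computed correctly from the second order parameters $A$ and $\alpha$, and some point of it represents $D\cap\alpha$.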
Claim: The following conditions are equivalent in $V[G][H]$:
1. (i)
$M\models\phi(R,E)$.
2. (ii)
$M\cong M_{x}$ for some real $x$ which is Cohen generic over $V$.
###### Proof.
(i) implies (ii): Suppose $M\models\phi(R,E)$. Let $(N,U,\in)$ be the
transitive collapse of $(M,R^{M},E^{M})$. By (3), there is $r$ which is Cohen-
generic over $U$ and $N=HC^{U[r]}$. We show that $U=HC^{V}$. Suppose
$a\in{\mathcal{P}}(\omega)^{V}$. We use condition (4) to demonstrate that
$a\in U$. To this end, let $A$ be as in (4b). By Lemma 13, $a\in L[A]$. Thus
(4) implies $a\in U$. On the other hand, suppose
$a\in({\mathcal{P}}(\omega))^{U}$. We again use (4) to show that
$a\in{\mathcal{P}}(\omega)^{V}$. Let $A\subseteq\omega_{1}$ code
$([\omega_{1}]^{\omega})^{V}$. If $C$ is any club in $V[G][H]$, then, since
$H$ is obtained by means of a CCC forcing, there is a club $D\subseteq C$ in
$V[G]$. Now $D\cap\alpha\in V$, whence $D\cap\alpha\in L[A]$, for all
$\alpha<\omega_{1}$. It follows that $a\in L[A]$. Since $A\in V$, we may
conclude $a\in{\mathcal{P}}(\omega)^{V}$. Hence, $U=HC^{V}$ and $r$ is Cohen-
generic over $V$. We have proved (ii).
(ii) implies (i): Suppose $(N,R^{N},E^{N})=(HC^{V[r]},HC^{V},\in)$, where $r$
is $\mbox{Fn}(\omega,2,\omega)$-generic over $V$. We show that
$(N,R^{N},E^{N})\models\phi(R,E)$. Conditions (1) and (2) are trivially
satisfied. Condition (3) holds by construction. To prove that condition (4)
holds, suppose $a\subseteq\omega$ and let $A$ be as in (4b). By Lemma 13, $a\in
L[A]$. Condition (4), and thereby the Claim, is proved. ∎
We continue the proof of Theorem 14. The sentence $\phi(R,E)$ is non-
categorical in $V[G][H]$ because if we take two mutually generic (over $V[G]$)
Cohen reals $r_{0}$ and $r_{1}$, then $M_{r_{0}}$ and $M_{r_{1}}$ are non-
isomorphic models of $\phi(R,E)$. To prove that $\phi(R,E)$ is complete,
suppose $(M,R^{M},E^{M})$ and $(N,R^{N},E^{N})$ are two models of $\phi(R,E)$.
W.l.o.g., they are of the form $(M,R^{M},\in)$ and $(N,R^{N},\in)$, where $M$
and $N$ are transitive sets. By construction, they are of the form $M_{r_{0}}$
and $M_{r_{1}}$ where both $r_{0}$ and $r_{1}$ are Cohen generic over
$HC^{V}$, hence over $HC^{V[G]}$. Both can be absorbed into the generic $H$. By
homogeneity of Cohen forcing $\mbox{Fn}(\omega,2,\omega)$ the models are
second order equivalent. ∎
In fact the forcing gives something stronger. If $\kappa$ is a cardinal that
is second order characterizable in the forcing extension, we may replace the
model $M_{x}=(HC^{V[x]},HC^{V},\in)$, where $x\subseteq\omega$ is Cohen over
$V$, with the model $(\kappa\cup HC^{V[x]},HC^{V},\in)$, and the proof of
Theorem 14 goes through mutatis mutandis:
###### Corollary 15.
There is a set of forcing conditions that forces the following: if $\kappa$ is
any second order characterizable cardinal, there is a complete non-categorical
finitely axiomatizable second order theory with a model of cardinality
$\kappa$. ∎
Since the non-isomorphic models above derive from mutually generic Cohen
reals, it follows that the non-categorical theories in question have (at most)
continuum many non-isomorphic models. We lastly mention how to get non-
categorical theories with more models than this.
It is straightforward to see that in Theorem 14 and the constructions
preceding it, the cardinal $\aleph_{1}$ may be replaced with any cardinal
$\mu^{+}$ with $\mu$ regular. That is, the $\omega_{2}$-length countable
support iteration of fast club forcing at $\omega_{1}$ is replaced by a
$\mu^{++}$-length $\leq\mu$-sized support iteration of fast club forcing at
$\mu^{+}$, and the forcing to add $\aleph_{1}$ many Cohen subsets of $\omega$
is replaced by adding $\mu^{+}$ many Cohen subsets of $\mu$. The model $M_{x}$
is then taken to be of the form $(H(\mu)^{V[x]},H(\mu)^{V},\in)$ where $x$ is
a Cohen subset of $\mu$ generic over $V$.
From this variation, we then get the following corollary.
###### Corollary 16.
Suppose $\mu$ is a regular cardinal. There is then a set of forcing conditions
that forces the following: if $\mu$ is second order characterizable, and if
$\kappa\geq\mu$ is any second order characterizable cardinal, there is a
complete non-categorical finite second order theory $T$ with a model of
cardinality $\kappa$. Also, the theory $T$ has between $\mu^{+}$ and $2^{\mu}$
many models up to isomorphism.
Note that concerns about the second order characterizability of $\mu$ and
$\kappa$ in the forcing extension are irrelevant for cardinals with simple
definitions, such as $\aleph_{n}$, $n<\omega$, or $\aleph_{\omega_{1}+1}$.
In conclusion we cannot hope to prove the categoricity of finite complete
second order theories from large cardinals even if we restrict to theories
which have a model of regular uncountable cardinality.
## 6 Forcing categoricity
In [2] (for $\kappa>\omega_{1}$) and [3] (for $\kappa=\omega_{1}$), Asperó and
Friedman proved the following:
###### Theorem 17.
Suppose $\kappa$ is the successor of a regular cardinal, and uncountable. Then
there is a poset ${\mathbb{P}}$ such that in a generic extension by
${\mathbb{P}}$, there is a lightface first order definable well-order of
$H(\kappa^{+})$.
Since we can translate a first order lightface definable well-order of
$H(\kappa^{+})$ into a well-order of $\operatorname{\mathcal{P}}(\kappa)$ that
is second order definable over any structure of cardinality $\kappa$, we
obtain the following corollary.
###### Theorem 18.
Suppose $\kappa$ is the successor of a regular cardinal, uncountable, and that
$\kappa$ is second order characterizable. Then there is a poset ${\mathbb{P}}$
that forces the following: every finitely axiomatizable second order theory
with a model of cardinality $\kappa$ is categorical. ∎
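To indicate the translation behind Theorem 18 (our sketch, not the formulation of [2, 3]): every element of $H(\kappa^{+})$ can be coded by a well-founded extensional binary relation on $\kappa$, and over a structure of cardinality $\kappa$ such relations are second order objects. First order quantification over $H(\kappa^{+})$ thus becomes second order quantification over codes, membership becomes a definable relation between codes, and the lightface definition of the well-order translates clause by clause into a second order definition of a well-order of $\operatorname{\mathcal{P}}(\kappa)$.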
We are thus left to consider the case of theories with models of limit
cardinality, whether regular or singular.
The following theorem shows that the categoricity of complete second order
theories with a model of singular cardinality is (relatively) consistent with
large cardinals. We are indebted to Boban Veličković for suggesting how to
improve an earlier weaker version of this result.
###### Theorem 19.
Suppose $\kappa$ is a singular strong limit with uncountable cofinality
$\lambda$. Then there is a forcing notion ${\mathbb{P}}$ of cardinality
$\kappa$ such that
1. 1.
${\mathbb{P}}$ preserves $\kappa$ singular strong limit of uncountable
cofinality $\lambda$.
2. 2.
${\mathbb{P}}$ forces the statement: Every finitely axiomatizable complete
second order theory with a model of cardinality $\kappa$ is categorical.
###### Proof.
W.l.o.g. we assume GCH up to $\kappa$. We first force a second order definable
well-order of the bounded subsets of $\kappa$ with a reverse Easton type
iteration of length $\kappa$ described in [21, Theorem 20].
Let $e:\kappa\to\kappa$ be the function which lists the set $B$ of beth fixed
points $>\lambda$ in increasing order, and let
$S=\langle\kappa_{\xi}:\xi<\lambda\rangle\subseteq B$ be an increasing cofinal
sequence in $\kappa$ such that $\kappa_{0}>\lambda$. Let
$\pi:\kappa\times\kappa\to\kappa$ be the Gödel pairing function. Let $W$ be a
well-order of $V_{\kappa}$. Suppose $A\subseteq\mu$, where $\mu\in B$. We
write $A\sim V_{\mu}$ if
$(V_{\mu},\in)\cong(\mu,\\{(\alpha,\beta):\pi(\alpha,\beta)\in A\\}).$
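(For concreteness: $\pi$ may be taken to be the order isomorphism between $\kappa\times\kappa$, ordered first by $\max(\alpha,\beta)$ and then lexicographically, and $\kappa$.)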
Let the poset $E(\mu,A)$ be the iteration (product) of the posets
${\mathbb{R}}_{\alpha}$, $\alpha<\mu$, where
${\mathbb{R}}_{\alpha}=\left\\{\begin{array}[]{ll}\mbox{Fn}(\aleph_{\mu+\alpha+3}\times\aleph_{\mu+\alpha+1},2,\aleph_{\mu+\alpha+1}),&\mbox{
if $\alpha=\omega\cdot\beta$ and $\beta\in A$}\\\
\mbox{Fn}(\aleph_{\mu+\alpha+4}\times\aleph_{\mu+\alpha+2},2,\aleph_{\mu+\alpha+2}),&\mbox{
if $\alpha=\omega\cdot\kappa_{\xi}+1$, $\xi<\lambda$}\\\ (\\{0\\},=)&\mbox{
otherwise}\end{array}\right.$
with Easton support, i.e., $E(\mu,A)$ consists of functions
$p\in\prod_{\alpha<\mu}{\mathbb{R}}_{\alpha}$ such that, denoting the support
$\\{\alpha:p(\alpha)\neq\emptyset\\}$ of $p$ by $\mbox{supp}(p)$,
$|\mbox{supp}(p)\cap\gamma|<\gamma$ for all regular $\gamma$.
We now define an iteration $\langle{\mathbb{P}}_{\alpha}:\alpha<\kappa\rangle$
with the property that ${\mathbb{P}}_{\alpha}$ does not change beth fixed
points $\beta=\beth_{\beta}$ for any $\beta$. We let
${\mathbb{P}}=\langle{\mathbb{P}}_{\alpha}:\alpha<\kappa\rangle$ be the
following iteration: If $\alpha$ is a limit ordinal, we use direct limits for
regular $\alpha$ and inverse limits for singular $\alpha$. Suppose then
$\alpha=\beta+1$. Let $\dot{A}$ be the $W$-first ${\mathbb{P}}_{\beta}$-name
$\dot{A}$ in $V_{\kappa}$ such that ${\mathbb{P}}_{\beta}\Vdash\dot{A}\sim
V_{\check{e}(\check{\beta})}$. Then
${\mathbb{P}}_{\alpha}={\mathbb{P}}_{\beta}\star
E(\check{e}(\check{\beta}),\dot{A})$. Let $G$ be ${\mathbb{P}}$-generic over
$V$ and $G_{\alpha}=G\cap{\mathbb{P}}_{\alpha}$.
In the forcing extension $V[G]$, for every $\mu\in B$ there is a set
$A\subseteq\mu$ which codes, via the canonical bijection
$\pi:\kappa\times\kappa\to\kappa$, a bijection
$f_{A}\colon\mu\to(V_{\mu})^{V[G]}$. The set $A$ itself satisfies
$V[G]\models
A=\\{\alpha<\mu:2^{\aleph_{\mu+\omega\cdot\alpha+1}}=\aleph_{\mu+\omega\cdot\alpha+3}\\}$
and from $A$ we can read off $f_{A}$ and a well-order $<_{\mu}^{*}$ of
$(V_{\mu})^{V[G]}$:
$V[G]\models f_{A}(\alpha)<_{\mu}^{*}f_{A}(\beta)\iff\alpha<\beta<\mu.$
Now working in $V[G]$, fix a collection
$\mathcal{F}\subseteq\operatorname{\mathcal{P}}(\kappa)$, and we set out to
define a well-order not on the whole of $\mathcal{F}$ but on a certain subset of
it. Define a relation $R$ on $\mathcal{F}$ by
$XRY\iff X\cap\kappa_{\xi}<^{*}_{\kappa_{\xi}}Y\cap\kappa_{\xi}\text{ for all
but boundedly many }\xi<\lambda.$
As $\lambda$ is uncountable, $R$ is well-founded, so the set
$\mathcal{W}=\\{X\in\mathcal{F}:X\text{ is minimal in }R\\}$
is nonempty, and if $X,Y\in\mathcal{W}$ with $X\neq Y$, then both
$X\cap\kappa_{\xi}<^{*}_{\kappa_{\xi}}Y\cap\kappa_{\xi}$ and
$Y\cap\kappa_{\xi}<^{*}_{\kappa_{\xi}}X\cap\kappa_{\xi}$ occur for unboundedly
many $\xi<\lambda$.
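To spell out the well-foundedness: if $\langle X_{n}:n<\omega\rangle$ were a sequence in $\mathcal{F}$ with $X_{n+1}RX_{n}$ for all $n$, pick $\beta_{n}<\lambda$ such that $X_{n+1}\cap\kappa_{\xi}<^{*}_{\kappa_{\xi}}X_{n}\cap\kappa_{\xi}$ for all $\xi\geq\beta_{n}$. Since $\lambda$ is regular and uncountable, $\beta=\sup_{n}\beta_{n}<\lambda$, and then $\langle X_{n}\cap\kappa_{\beta}:n<\omega\rangle$ is an infinite descending sequence in the well-order $<^{*}_{\kappa_{\beta}}$, a contradiction.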
To see that $\lvert\mathcal{W}\rvert<\kappa$, suppose to the contrary that
$\lvert\mathcal{W}\rvert\geq\kappa$ and define a coloring
$c\colon[\mathcal{W}]^{2}\to\lambda$ by $c(\\{X,Y\\})=\pi(\xi_{1},\xi_{2})$
where $\xi_{1}$ is the least $\xi<\lambda$ such that
$X\cap\kappa_{\xi}<^{*}_{\kappa_{\xi}}Y\cap\kappa_{\xi}$, and $\xi_{2}$ is the
least $\xi<\lambda$ such that
$Y\cap\kappa_{\xi}<^{*}_{\kappa_{\xi}}X\cap\kappa_{\xi}$. Since
$\lvert\mathcal{W}\rvert\geq\kappa>(2^{\lambda})^{+}$, by the Erdős-Rado
theorem there is a set $H\subseteq\mathcal{W}$ homogeneous for $c$ of color
$\pi(\xi_{1},\xi_{2})$ and cardinality $\lambda^{+}$. But this is a
contradiction, since ordering $H$ in $<^{*}_{\kappa_{\xi_{1}}}$-increasing
order yields an infinite decreasing sequence in the well-order
$<^{*}_{\kappa_{\xi_{2}}}$, so $\lvert\mathcal{W}\rvert<\kappa$.
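The instance of the Erdős-Rado theorem used here is $(2^{\lambda})^{+}\to(\lambda^{+})^{2}_{\lambda}$: every colouring of pairs from a set of cardinality $(2^{\lambda})^{+}$ in at most $\lambda$ colours has a homogeneous set of cardinality $\lambda^{+}$. It applies because $\kappa$ is a strong limit cardinal above $\lambda$, so indeed $\kappa>(2^{\lambda})^{+}$.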
Now for each $X\in\mathcal{W}$, define $f_{X}\colon\lambda\to\kappa$ such that
$f_{X}(\xi)$ is the index of $X\cap\kappa_{\xi}$ in the well-order
$<^{*}_{\kappa_{\xi}}$. Then the set
$\bigcup\\{\operatorname{ran}(f_{X}):X\in\mathcal{W}\\}$ has some cardinality
$\gamma<\kappa$, and we can let
$h\colon\bigcup\\{\operatorname{ran}(f_{X}):X\in\mathcal{W}\\}\to\gamma$ be
the transitive collapse map.
Then for $X\in\mathcal{W}$, the function $h\circ f_{X}\colon\lambda\to\gamma$
can be encoded as a subset of a large enough $\mu\in B$, and obviously $h\circ
f_{X}\neq h\circ f_{Y}$ if $X\neq Y$, so we can well-order $\mathcal{W}$ by
$X\lhd Y\iff h\circ f_{X}<^{*}_{\mu}h\circ f_{Y}$
and all this is second order definable in $V[G]$ in a structure of size
$\kappa$, if the collection $\mathcal{F}$ is. This allows us to pick a
distinguished element of $\mathcal{F}$ as the $\lhd$-least $R$-minimal
element.
Suppose now that $\phi$ is a complete second order sentence with a model $M$
of cardinality $\kappa$, and let $\mathcal{F}$ be the set of
$X\subseteq\kappa$ encoding a model of $\phi$. Note that over a model of
cardinality $\kappa$ we can write a formula $\phi_{R}(X,Y)$ expressing $XRY$
for $X,Y\in\mathcal{F}$, a formula $\phi_{\mathcal{W}}(X)$ expressing
$X\in\mathcal{W}$, and a formula $\phi_{\lhd}(X,Y)$ expressing $X\lhd Y$ if
$X$ and $Y$ are $R$-minimal.
Now let $\Phi$ say of a model $M$ that there is $X\subseteq M$ which encodes a
model isomorphic to $M$ (and thus a model of $\phi$), such that for any
$Y\subseteq M$ that also encodes a model of $\phi$ we have $\lnot\phi_{R}(Y,X)$,
and moreover, if $\lnot\phi_{R}(Z,Y)$ holds for all $Z\subseteq M$ that encode a
model of $\phi$, then $X=Y$ or $\phi_{\lhd}(X,Y)$. That is, $X\in\mathcal{W}$,
and if also $Y\in\mathcal{W}$, then $X=Y$ or $X\lhd Y$, which uniquely
specifies $X$. As the model of $\phi$
with the least code in this sense satisfies $\Phi$ and $\phi$ is complete,
$\phi$ implies $\Phi$ and thus that all models of $\phi$ are isomorphic, so
$\phi$ is categorical. ∎
The method of the preceding proof does not extend to the cases of the limit
cardinal $\kappa$ being regular, or of countable cofinality, so these cases
are left open.
In conclusion, no known large cardinal axiom (e.g. the existence of huge
cardinals) can decide whether all complete second order theories with a model
of singular cardinality are categorical. In particular, such axioms cannot
imply that all finite complete second order theories are categorical.
## 7 Theories with only countably many models
Since under PD we have non-categorical complete recursively axiomatized second
order theories, we may ask how badly categoricity can fail in those cases.
Echoing Vaught’s Conjecture, we may ask whether the number of countable non-
isomorphic models of a complete recursively axiomatized second order theory is
always countable or $2^{\omega}$. Leaving this question unresolved, we have
the following result which demonstrates the ability of categorical theories to
‘capture’ (in the sense of [23]) the models of non-categorical theories.
###### Theorem 20.
Assume $AD^{L({\mathbb{R}})}$. If $T$ is a recursively axiomatized complete
second order theory with only countably many non-isomorphic countable models,
then there is a recursively axiomatized categorical second order theory $S$
the unique model of which interprets all the countable models of $T$.
###### Proof.
Let $T$ be a recursively axiomatized second order theory with only countably
many non-isomorphic countable models. Let $A$ be the $\Pi^{1}_{\omega}$ (i.e.
an intersection of a recursively coded family of sets each of which is
$\Pi^{1}_{n}$ for some $n$) set of reals that code a model of $T$. Since $A$
is a countable union of equivalence classes of the
$\Sigma^{1}_{1}$-equivalence relation of isomorphism, we may conclude that $A$
is $\mathbf{\Sigma}^{1}_{1}$.
We wish to show that $A$ is $\Pi^{1}_{2}(r_{0})$ in a parameter $r_{0}$ which
is a $\Pi^{1}_{\omega}$ singleton. For this, we mimic a proof of Louveau
(Theorem 1 in [17]) to show:
###### Theorem 21.
Assume $AD^{L({\mathbb{R}})}$. Every $\mathbf{\Sigma}^{1}_{1}$ set which is
$\Pi^{1}_{\omega}$ is $\Pi^{1}_{2}(r_{0})$ for some real $r_{0}$ such that
$\\{r_{0}\\}$ is a $\Delta^{1}_{\omega+1}$-singleton.
###### Proof.
Let $A$ be a $\mathbf{\Sigma}^{1}_{1}$ set that is also $\Pi^{1}_{\omega}$,
say $A=\bigcap_{n}A_{n}$ with each $A_{n}$ being $\Pi^{1}_{n}$. Let also
$U\subseteq(\omega^{\omega})^{2}$ be a universal $\Sigma^{1}_{1}$ set.
We define for each $n$ a game $G_{n}$ on $\omega$ where players I and II take
turns to play the digits of reals $\alpha$ and $\gamma$ respectively (there is
no need to let II pass turns here). Then II wins a play of $G_{n}$ if
$\alpha\in A\implies\gamma\in U$ and $\alpha\notin A_{n}\implies\gamma\notin
U$.
$\begin{array}[]{c|ccccc}\text{I}&n_{0}&&n_{1}&&\cdots\\\ \hline\text{II}&&m_{0}&&m_{1}&\cdots\end{array}\quad\begin{matrix}\alpha\\\ \gamma\end{matrix}$
As in Louveau’s proof, II has a winning strategy as follows: since $A$ is
$\mathbf{\Sigma}^{1}_{1}$, we have $A(x)\iff U(y,x)$ for some $y$, so II wins
by playing the digits of $\langle y,\alpha\rangle$ (as I is playing the digits
of $\alpha$). The complexity of the winning set for II in $G_{n}$ is
$\Sigma^{1}_{\omega}$, so by Moschovakis’s strategic basis theorem ([22],
Theorem 6E.2), II has a winning strategy $\sigma_{n}$ that is a
$\Delta^{1}_{\omega+1}$-singleton. Note that the pointclass
$\Sigma^{1}_{\omega}$, i.e. the collection of countable unions of recursively
coded families of projective sets, is both adequate and scaled (see Remark 2.2
in [24], essentially Theorem 2.1 in [27]).
Then the set $B_{n}=\\{y\mid(y*\sigma_{n})_{\text{II}}\in U\\}$ is a
$\Sigma^{1}_{1}(\sigma_{n})$ set with $A\subseteq B_{n}\subseteq A_{n}$ (where
$(y*\sigma_{n})_{\text{II}}$ denotes the real $\gamma$ the strategy
$\sigma_{n}$ produces as I plays $\alpha=y$), so altogether
$A=\bigcap_{n}B_{n}$ is a $\Pi^{1}_{2}(s_{0})$ set where
$s_{0}=\langle\sigma_{n}\mid n<\omega\rangle$ is a
$\Delta^{1}_{\omega+1}$-singleton. ∎
We may reduce the complexity of the parameter down to being a
$\Pi^{1}_{\omega}$ singleton by the following theorem of Rudominer:
###### Theorem 22 (Rudominer [24]).
Assume $AD^{L({\mathbb{R}})}$. Then every real $s_{0}$ which is a
$\Sigma^{1}_{\omega+1}$ singleton, is recursive in a real $r_{0}$ which is a
$\Pi^{1}_{\omega}$ singleton. ∎
Therefore the set $A$ is a $\Pi^{1}_{2}(r_{0})$ set where $r_{0}$ is a
$\Pi^{1}_{\omega}$ singleton. Let $\eta(r,s)$ be a second order $\Pi^{1}_{2}$
formula which defines the predicate $s\in A$ on
$({\mathbb{N}},+,\times,r_{0})$. Let $\theta_{1}(Q_{+},Q_{\times})$ be the
standard second order characterization of $({\mathbb{N}},+,\times)$, as above
in the proof of Theorem 3. Let $\psi_{n}(Q_{+},Q_{\times},s)$, $n<\omega$, be
second order formulas such that if $X_{n}$ is the set of reals $s$ satisfying
$\psi_{n}(Q_{+},Q_{\times},s)$ in $({\mathbb{N}},+,\times)$, then
$\\{r_{0}\\}=\bigcap_{n}X_{n}$. Let $P$ be a new unary predicate symbol and
$S=\\{\theta_{1}(Q_{+},Q_{\times})\\}\cup\\{\psi_{n}(Q_{+},Q_{\times},P):n<\omega\\}.$
Suppose $M$ is a model of $S$. W.l.o.g. the arithmetic part of $M$ consists of
the standard $+$ and $\times$ on ${\mathbb{N}}$. Let $s$ be the interpretation
of $P$ in $M$. Then $s=r_{0}$. Thus $S$ is categorical. The theory $S$ is
recursive because the proofs of Theorems 21 and 22 are sufficiently uniform.
In conclusion, $M$ is categorically characterized by the recursive second
order theory $S$.
Now the countable models of $T$ are interpretable in $S$ in the following
sense: a real $s$ codes a model of $T$ if and only if $M\models\eta(r_{0},s)$.
We also get a translation of sentences: if $\phi$ is a second-order sentence
in the vocabulary of $T$, letting $\hat{\phi}$ be the sentence $\exists
X(\eta(r_{0},X)\land X\models\phi)$, we have that $\phi\in T$ if and only if
$\hat{\phi}\in S$. ∎
## 8 Definable models of categorical theories
Suppose we are given a categorical second order theory $T$. Naturally, we
assume that $T$ has a model, otherwise categoricity is vacuous. But what can
be said about the models of $T$ apart from their isomorphism with each other?
In particular, can we always find a model which is definable in some
reasonable sense, e.g. hereditarily ordinal definable? To emphasize this
point, consider the second order sentence which characterizes the structure
$({\mathbb{N}},+,\cdot,0^{\sharp})$. This categorical sentence has no models
in $L$. We ask, can we have a categorical sentence with no models in
$\mathop{\mbox{HOD}}$? Since it could be that $V=\mathop{\mbox{HOD}}$, we are
looking at this question under assumptions stronger than ZFC.
The following result of Kaplan and Shelah is useful for us:
###### Theorem 23 ([13]).
If ${\mathbb{P}}$ forces the collapse of $\omega_{2}$ to $\omega$, then
there is a ${\mathbb{P}}$-term $\tau$ for a countable model such that
1. 1.
If $G_{1}\times G_{2}$ is generic for ${\mathbb{P}}\times{\mathbb{P}}$ then
$V[G_{1}][G_{2}]\models M_{1}\cong M_{2},$
where $M_{1}$ is the interpretation $\tau^{G_{1}}$ of $\tau$ by $G_{1}$ and
$M_{2}$ is $\tau^{G_{2}}$.
2. 2.
${\mathbb{P}}\Vdash``\tau$ is not isomorphic to $\check{M}$”, for any $M$ in
$V$.
We make some observations about the proof. It involves a construction of
Laskowski and Shelah:
###### Theorem 24 ([16]).
There is a countable consistent first order theory $T$, with a predicate $V$
in its vocabulary, having the following property. For any model $M\models T$
and any $A\subseteq V^{M}$, isolated types are dense over $A$ but the theory
$T(A)=\operatorname{Th}(M,a)_{a\in A}$ has an atomic model if and only if
$\lvert A\rvert<\omega_{2}$.
The theory $T$ is as follows. Let $L$ be a countable vocabulary consisting of
two unary predicates $U,V$, one unary function symbol $p$, as well as binary
relations $R_{n}$ and binary functions $f_{n}$ for $n<\omega$ (the functions
will not be total, but instead have domain $U$). Let $K$ be the collection of
all finite $L$-structures satisfying a certain finite list of first order
axioms (see [16]). Let $\mathcal{B}$ be the Fraïsse limit of $K$ and let
$T=\operatorname{Th}(\mathcal{B})$. The theory $T$ is well defined since
$\mathcal{B}$ is unique up to isomorphism.
We then form an uncountable model of the theory $T$ as follows. For an ordinal
$\alpha$ let $L_{\alpha}$ be the vocabulary $L$ together with $\alpha$ many
new constant symbols $c_{\beta}$, $\beta<\alpha$. Using a standard Henkin
construction, we form a term model for the theory $T$ together with the
additional axioms stating that the new constant symbols name distinct
elements. We let $T(A_{\alpha})$ be the theory of this term model in the
vocabulary $L_{\alpha}$. (Although the Henkin construction involves forming
the completion of a theory, we can make the choice of which completion to use
definable by referring to the well-ordering of the sentences.)
We can also observe that for a countable ordinal $\alpha$, the class of
countable atomic models of $T(A_{\alpha})$ is definable from $T(A_{\alpha})$,
which itself is definable from $\alpha$, and the definitions can be carried
out in $H(\omega_{1})$. Using these two observations, we obtain the following:
###### Theorem 25 (ZF).
Assume $\omega_{2}^{\scriptsize\mathop{\mbox{HOD}}}$ is countable. Then there
is a countable model $M$ such that
1. 1.
The isomorphism class of $M$ is ordinal definable.
2. 2.
There is no model in $\mathop{\mbox{HOD}}$ which is isomorphic to $M$.
Moreover, if the property of a linear order of being of order-type
$\omega_{2}^{\scriptsize\mathop{\mbox{HOD}}}$ is second order definable in the
countably infinite structure of the empty vocabulary, then the second order
theory of $M$ is finitely axiomatizable.
###### Proof.
Let $\alpha=\omega_{2}^{\text{HOD}}$. Let $T(A_{\alpha})$ be the theory
constructed above. Finally, let $M$ be a countable atomic model of
$T(A_{\alpha})$. Since $\mathop{\mbox{HOD}}$ satisfies $\lvert
T(A_{\alpha})\rvert=\omega_{2}$, the theory $T(A_{\alpha})$ has no atomic
model in $\mathop{\mbox{HOD}}$, but as being an atomic model is absolute, this
shows that there is no model in $\mathop{\mbox{HOD}}$ isomorphic to $M$.
The isomorphism class of $M$ is ordinal definable as the class of countable
atomic models of $T(A_{\alpha})$, which is definable from $\alpha$.
Additionally, if $\alpha$ is second order definable in the countably infinite
structure of the empty vocabulary, we can define the theories $T$ and
$T(A_{\alpha})$ in second order logic and express “I am isomorphic to a
countable atomic model of $T(A_{\alpha})$” with a single second order
sentence. This finitely axiomatizes the second order theory of $M$. ∎
Of course, the assumption that $\omega_{2}^{\scriptsize\mathop{\mbox{HOD}}}$
is second order definable in the countably infinite structure of the empty
vocabulary is somewhat ad hoc. However, it holds, for example, in $L[G]$,
where $G$ is $P$-generic over $L$ for
$P=\operatorname{Coll}(\omega,<\omega_{3})^{L}$. This is because the poset $P$
is weakly homogeneous, so
$\mathop{\mbox{HOD}}^{L[G]}=\mathop{\mbox{HOD}}^{L}(P)=L$, whence
$\omega_{2}^{\scriptsize\mathop{\mbox{HOD}}}=\omega_{2}^{L}$ is countable and
second order definable in any countable model in $L[G]$.
We also obtain the following variation:
###### Corollary 26.
Assume $ZFC+AD^{L({\mathbb{R}})}+``\mathop{\mbox{HOD}}\hskip 2.0pt\cap\hskip
2.0pt{\mathbb{R}}=\mathop{\mbox{HOD}}^{L({\mathbb{R}})}\cap\hskip
2.0pt{\mathbb{R}}"$ and that $\omega_{2}^{\scriptsize\mathop{\mbox{HOD}}}$ is
definable in
$\mathop{\mbox{HOD}}^{L({\mathbb{R}})}\restriction\Theta^{L({\mathbb{R}})}$
and countable. Let $M$ be the countable model of Theorem 25. Let
$N=(\Theta^{L({\mathbb{R}})},<,M)$ (w.l.o.g. the domain of $M$ is $\omega$).
Then the second order theory of $N$ is finitely axiomatizable and categorical
but has no model which belongs to $\mathop{\mbox{HOD}}$.
###### Proof.
We can use [7, Theorem 3.10, Chapter 23] to define
$\mathop{\mbox{HOD}}^{L({\mathbb{R}})}\restriction\Theta^{L({\mathbb{R}})}$
and $L_{\Theta^{L({\mathbb{R}})}}({\mathbb{R}})$ from
$\Theta^{L({\mathbb{R}})}$ in second order logic, which then allows us to
define $\omega_{2}^{\scriptsize\mathop{\mbox{HOD}}}$ and $M$ as in Theorem 25.
∎
The assumptions of Corollary 26 follow, for example, from
$ZFC+AD^{L({\mathbb{R}})}+V=L({\mathbb{R}})[G]$, where $G$ is
${{\mathbb{P}}_{\text{max}}}$-generic, as then
$\mathop{\mbox{HOD}}^{L({\mathbb{R}})}=\mathop{\mbox{HOD}}^{L({\mathbb{R}})[G]}$
and $\omega_{2}^{\scriptsize\mathop{\mbox{HOD}}}$ is countable.
## 9 Open questions
The following question was raised by Solovay [26]:
###### Open Problem 1.
Assuming $V=L$, is every recursively axiomatized complete second order theory
categorical?
Our results do not solve this one way or another, and it remains an
interesting open question. In $L[U]$ there are recursively axiomatized
complete non-categorical second order theories, but we do not know if such
theories necessarily have only large models:
###### Open Problem 2.
Suppose $V=L[U]$, $\kappa$ is the sole measurable cardinal of $L[U]$, and $T$
is a complete recursively axiomatized second order theory that has a model of
cardinality $\lambda<\kappa$ such that $\lambda$ is second order
characterizable. Is $T$ categorical?
There are many other open questions related to finite or recursively
axiomatized complete second order theories with uncountable models. We showed
that we can force categoricity for successor cardinals of regular cardinals,
and some singular limit cardinals, but the following two cases were left open:
###### Open Problem 3.
Can we always force the categoricity of all finite complete second order
theories with a model of cardinality $\kappa$, where $\kappa$ is either a
regular (non-measurable) limit cardinal, or singular of cofinality $\omega$?
An $I_{0}$-_cardinal_ is a cardinal $\lambda$ such that there is a non-trivial
elementary embedding $j\colon L(V_{\lambda+1})\to L(V_{\lambda+1})$ with
critical point below $\lambda$.
Note that then $\lambda$ is singular of cofinality $\omega$, $\lambda^{+}$ is
measurable in $L(V_{\lambda+1})$ ([29]), and the Axiom of Choice fails in
$L(V_{\lambda+1})$ ([15]). This is in sharp contrast to the result of Shelah
that if $\lambda$ is a singular strong limit cardinal of uncountable
cofinality, then $L({\mathcal{P}}(\lambda))$ satisfies the Axiom of Choice
([25]). Since the Axiom of Choice fails in $L(V_{\lambda+1})$, there can be no
well-order of ${\mathcal{P}}(\lambda)$ which is second order definable on
$\lambda$. This raises the following question:
###### Open Problem 4.
Is every finite complete second order theory with a model whose cardinality is
an $I_{0}$-cardinal categorical (or, at least, categorical among all models of
that cardinality)?
## References
* [1] Miklós Ajtai. Isomorphism and higher order equivalence. Annals of Mathematical Logic, 1979.
* [2] David Asperó and Sy-David Friedman. Large cardinals and locally defined well-orders of the universe. Ann. Pure Appl. Logic, 157(1):1–15, 2009.
* [3] David Asperó and Sy-David Friedman. Definable well-orders of $H(\omega_{2})$ and GCH. J. Symbolic Logic, 77(4):1101–1121, 2012.
* [4] David Asperó and Ralf Schindler. Martin’s Maximum++ implies Woodin’s axiom $(*)$. Ann. of Math. (2), 193(3):793–835, 2021.
* [5] Steve Awodey and Erich H. Reck. Completeness and categoricity. I. Nineteenth-century axiomatics to twentieth-century metalogic. Hist. Philos. Logic, 23(1):1–30, 2002.
* [6] Rudolf Carnap. Untersuchungen zur allgemeinen Axiomatik. Wissenschaftliche Buchgesellschaft, Darmstadt, 2000. Edited and with a foreword by Thomas Bonk and Jesus Mosterin.
* [7] M. Foreman and A. Kanamori. Handbook of Set Theory. Springer Netherlands, 2009.
* [8] Abraham Fraenkel. Einleitung in die Mengenlehre. 3. Aufl., volume 9. Springer, Berlin, 1928.
* [9] Roland Fraïssé. Sur les types de polyrelations et sur une hypothèse d’origine logistique. C. R. Acad. Sci. Paris, 230:1557–1559, 1950.
* [10] Roland Fraïssé. Sur la signification d’une hypothèse de la théorie des relations, du point de vue du calcul logique. C. R. Acad. Sci. Paris, 232:1793–1795, 1951.
* [11] William P Hanf and Dana Scott. Classifying inaccessible cardinals. Notices of the American mathematical Society, 8:445, 1961.
* [12] Akihiro Kanamori. The higher infinite : large cardinals in set theory from their beginnings. Springer monographs in mathematics. Springer, Berlin ;, 2nd ed. edition, 2003.
* [13] Itay Kaplan and Saharon Shelah. Forcing a countable structure to belong to the ground model. MLQ Math. Log. Q., 62(6):530–546, 2016.
* [14] Eugene M. Kleinberg. Infinitary combinatorics and the axiom of determinateness, volume Vol. 612 of Lecture Notes in Mathematics. Springer-Verlag, Berlin-New York, 1977.
* [15] Kenneth Kunen. Elementary embeddings and infinitary combinatorics. J. Symbolic Logic, 36:407–413, 1971.
* [16] M. C. Laskowski and S. Shelah. On the existence of atomic models. J. Symbolic Logic, 58(4):1189–1194, 1993.
* [17] Alain Louveau. Borel sets and the analytical hierarchy. In Proceedings of the Herbrand symposium (Marseilles, 1981), volume 107 of Stud. Logic Found. Math., pages 209–215. North-Holland, Amsterdam, 1982.
* [18] Wiktor Marek. Consistance d’une hypothèse de Fraïssé sur la définissabilité dans un langage du second ordre. C. R. Acad. Sci. Paris Sér. A-B, 276:A1147–A1150, 1973.
* [19] Wiktor Marek. Sur la consistance d’une hypothèse de Fraïssé sur la définissabilité dans un langage du second ordre. C. R. Acad. Sci. Paris Sér. A-B, 276:A1169–A1172, 1973.
* [20] Donald A. Martin. The axiom of determinateness and reduction principles in the analytical hierarchy. Bull. Amer. Math. Soc., 74:687–689, 1968.
* [21] Telis K. Menas. Consistency results concerning supercompactness. Trans. Amer. Math. Soc., 223:61–91, 1976.
* [22] Yiannis N. Moschovakis. Descriptive set theory, volume 155 of Mathematical Surveys and Monographs. American Mathematical Society, Providence, RI, second edition, 2009.
* [23] Michael O. Rabin. A simple method for undecidability proofs and some applications. In Logic, Methodology and Philos. Sci. (Proc. 1964 Internat. Congr.), pages 58–68. North-Holland, Amsterdam, 1965.
* [24] Mitch Rudominer. The mouse set theorem just past projective. Journal of Mathematical Logic, 0(0):2450014, 0.
* [25] Saharon Shelah. Set theory without choice: not everything on cofinality is possible. Arch. Math. Logic, 36(2):81–125, 1997.
* [26] Robert Solovay. FOM posting, 2006. http://cs.nyu.edu/pipermail/fom/2006-May/010561.html.
* [27] John R. Steel. Scales in $L({\bf R})$. In Cabal seminar 79–81, volume 1019 of Lecture Notes in Math., pages 107–156. Springer, Berlin, 1983.
* [28] W. Hugh Woodin. The axiom of determinacy, forcing axioms, and the nonstationary ideal, volume 1 of De Gruyter Series in Logic and its Applications. Walter de Gruyter GmbH & Co. KG, Berlin, revised edition, 2010.
* [29] W. Hugh Woodin. Suitable extender models II: beyond $\omega$-huge. J. Math. Log., 11(2):115–436, 2011.
* [30] W. Hugh Woodin. In search of Ultimate-$L$: the 19th Midrasha Mathematicae Lectures. Bull. Symb. Log., 23(1):1–109, 2017.
# Applying SDN to Mobile Networks: A New Perspective for 6G Architecture
Rashmi Yadav, Department of Electrical Engineering, Indian Institute of Technology Kanpur, India <EMAIL_ADDRESS>
Rashmi Kamran, Department of Electrical Engineering, Indian Institute of Technology Bombay, India <EMAIL_ADDRESS>
Pranav Jha, Department of Electrical Engineering, Indian Institute of Technology Bombay, India <EMAIL_ADDRESS>
Abhay Karandikar, Department of Electrical Engineering, Indian Institute of Technology Bombay, India; Director, Indian Institute of Technology Kanpur, India <EMAIL_ADDRESS>
###### Abstract
The upcoming Sixth Generation (6G) mobile communications system envisions
supporting a variety of use cases with differing characteristics, e.g., very
low to extremely high data rates, diverse latency needs, ultra massive
connectivity, sustainable communications, ultra-wide coverage etc. To
accommodate these diverse use cases, the 6G system architecture needs to be
scalable, modular, and flexible; both in its user plane and the control plane.
In this paper, we identify some limitations of the existing Fifth Generation
System (5GS) architecture, especially that of its control plane. Further, we
propose a novel architecture for the 6G System (6GS) employing Software
Defined Networking (SDN) technology to address these limitations of the
control plane. The control plane in existing 5GS supports two different
categories of functionalities – handling end user signalling (e.g., user
registration, authentication) and control of user plane functions. We propose
to move the “end-user signalling functionality” out of the mobile network
control plane and treat it as user service, i.e., as payload or data. This
proposal results in an evolved service-driven architecture for mobile networks
bringing increased simplicity, modularity, scalability, flexibility and
security to its control plane. The proposed architecture can also provide
service-specific signalling support, if needed, making it better suited for
diverse 6GS use cases. To demonstrate the advantages of the proposed
architecture, we also compare its performance with the 5GS using a process
algebra-based simulation tool.
###### Index Terms:
Software-defined networking, Mobile networks, Service-driven architecture.
## I Introduction
The notable rise in the range of diverse use cases with differing attributes
has paved the way for the continued evolution of mobile networks. The upcoming
6th Generation Mobile Communication System (6GS) is envisioned to support a peak
data rate ($\geq$200 Gbps), very high mobility (500-1000 km/h), very low
latency (0.1-1 ms), connection density in the range of $10^{6}$-$10^{8}$ devices/km$^{2}$,
and reliability of $10^{-5}$-$10^{-7}$ [1]. Moreover, it is expected to witness further
diversity of use cases with the emergence of newer categories of use cases.
Focus Group on Technologies for Network 2030 (FG NET-2030) [2] has identified
and included the following use cases in its report: Holographic-type
communications, Tactile Internet for Remote Operations, Intelligent Operation
Networks, Network and Computing Convergence, Digital Twin, Space-Terrestrial
Integrated Network, Industrial IoT with cloudification etc. A scalable,
flexible and modular network architecture is one of the essential ingredients
towards tackling this immense diversity of use cases in future mobile
networks. Third Generation Partnership Project (3GPP) adopted technologies
such as Network Function Virtualization, Control and User Plane Separation,
Network slicing for Fifth Generation System (5GS), which resulted in improved
scalability and flexibility of 5GS over the previous generation mobile
communications systems such as Fourth Generation System (4GS).
However, there is scope for further improvement in mobile network architecture
especially that of its control plane through the application of Software
Defined Networking (SDN) technology. A survey of the existing research related
to SDN-based enhancements in the mobile network control plane is presented
next. The work in [3] proposes a centralised control plane for multi-Radio
Access Technology (multi-RAT) Radio Access Network (RAN) to enhance the
simplicity and flexibility of the network. Relocation of the control plane
functionality of RAN to the Core Network (CN) to reduce the signalling cost
between RAN and core has been discussed in [4]. Authors in [5] proposed a
decentralized control plane architecture for the 5GS with independent control
functions for different control events for flexible and scalable networks. An
SDN architecture where a middle cell and a middle cell controller are
introduced between the macro cell and the small cell to reduce the control
overhead of the macro cell and to address the scalability problems is proposed
in [6]. In [7], authors proposed a new 5GS core architecture based on the SDN
concept. They introduced a centralised SDN controller for easier and more
flexible management of the user plane. In [8], a hierarchical control plane is
designed to lighten the load of the controller. It focuses on the vertical
scalability of the control plane. In [9], a scalability metric for the SDN
control plane is proposed, and different SDN architectures are compared via
mathematical methods. In addition, there is a
vast amount of literature on SDN-based network architectures, albeit unrelated
to mobile networks [10], [11].
To summarize, current research in the context of the application of SDN
technology to mobile networks mainly focuses on the centralized or distributed
architecture of the control plane for reduced control overheads or scalability
purposes. However, to the best of our knowledge, there is a limited
discussion/rethink on certain other aspects of network architecture, such as,
what functionality should constitute the mobile network control plane within
an SDN-based framework. Is the network control plane the right place for “end user
signalling handling” functionality? Should “Non-Access Stratum (NAS) messages”
be handled by CN control plane functions such as Access and Mobility
Management Function (AMF) or should this functionality be moved out of AMF?
Should the user authentication function (Authentication Server Function (AUSF)
in 5GS) be part of the CN control plane? These questions assume even more
importance in the upcoming 6GS era, where a massive increase in the number of
UEs is expected and an accompanying growth in end-user signalling has the
potential to over-burden the network control plane. In one of our earlier
works [12], we briefly analysed these questions.
In order to bring in additional enhancements to mobile network architecture,
especially to its control plane, we propose to separate end user (User
Equipment (UE)) signalling handling from the control plane functions. In a
significant departure from the existing cellular networks, the proposed
architecture views UE signalling as payload, i.e., a form of data traversing
through the cellular network, not much different from other types of data such
as video streaming or web browsing. We analyse the proposed architecture using
Performance Evaluation Process Algebra (PEPA) [13], a formal language used to
model distributed systems. We also provide a comparative analysis of the
proposed architecture and the existing 5GS architecture through example call
flows for Protocol Data Unit (PDU) session establishment and UE handover
procedures. We demonstrate a significant reduction in the number of control
messages exchanged in the proposed architecture along with the network’s
scalability.
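To give a flavour of this modelling approach before the formal development in Section V, the fragment below is a minimal, hypothetical PEPA sketch of our own (written in an indicative concrete syntax; the component names, action names, rates and population sizes are illustrative assumptions, not the model parameters used later):

```
// Hypothetical sketch: a population of UEs cooperates with PDU session
// service function (SSF) instances on shared request/response actions.
// "infty" denotes the passive rate of the non-driving side of an action.
r_req  = 5.0;   // rate at which a UE issues a session request
r_proc = 20.0;  // SSF internal processing rate
r_resp = 15.0;  // rate at which the SSF returns a response

UE      = (session_req, r_req).UE_wait;
UE_wait = (session_resp, infty).UE;

SSF      = (session_req, infty).SSF_busy;
SSF_busy = (process, r_proc).(session_resp, r_resp).SSF;

// 100 UEs synchronise with 4 SSF instances on the shared actions.
System = UE[100] <session_req, session_resp> SSF[4]
```

In such a model, the cooperation set forces UEs and service function instances to synchronise on the shared actions, and measures such as throughput and utilisation are derived from the steady state of the underlying continuous-time Markov chain; this is the general style of analysis used later in the paper.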
The rest of the paper is organised as follows: Section II provides limitations
of the existing 5GS mobile network architecture. Section III provides an
overview of the proposed architecture and highlights its advantages. Section
IV includes an information flow comparison of the existing and proposed
architecture for PDU session establishment and handover procedures. Section V
describes the system model using PEPA. Section VI covers the performance
analysis. Section VII provides the conclusion and future work.
## II Limitations of existing 5GS Architecture
In this section, we capture some of the limitations of the existing 5GS
architecture, especially that of its control plane. Although there can be other
limitations too, say pertaining to radio technology, those are not discussed
here.
### II-A Tight coupling of user plane control and UE signalling in control
plane
The existing 5GS architecture supports the control and user plane separation.
The 5GS control plane performs user plane control (network resource control,
e.g., setting up data path through the user plane) and UE signalling handling
functionalities (e.g., NAS/RRC (Radio Resource Control) message exchange with
UEs). There is a tight coupling between these two categories of
functionalities, i.e., between user plane control and UE signalling handling:
certain control plane functions in the existing 5GS, such as the AMF in the CN
and the gNodeB-Centralized Unit-Control Plane (gNB-CU-CP) in the RAN, perform
both. A detailed description of control plane functionality is provided in
[14]. As we demonstrate in this paper, decoupling UE signalling handling
functionality from user plane control functionality may lead to a more modular
and scalable network architecture.
### II-B Limited alignment with SDN paradigm
SDN is a networking paradigm which separates the control plane of a network
from its user (data) plane and centralizes the network’s intelligence in the
control plane. Although there are differing views in industry/academia on how
to define an SDN-based network architecture, we can still discern a broad
agreement on the topic [5], [15], [16]. The existing 5GS architecture
incorporates the concept of SDN, resulting in architectural features such as
the separation of the user plane from the control plane [14]. However, closer
observation shows that the 5GS architecture does not align completely with the
SDN paradigm. Besides controlling the user plane, the 5GS control plane also
exchanges signalling messages with UEs to provide services such as
authentication and to collect service requirements, e.g., requirements for
the PDU connectivity service. The functionality of signalling exchange with UEs
may fit better within the service plane instead of the control plane.
### II-C Non-uniform handling of services
Services in the existing 5GS can be categorized into the following two types:
1. 1.
Application-based services such as Media streaming services, IP Multimedia
subsystem services, Mission-critical services, Multicast/Broadcast Services
(MBS) etc.
2. 2.
Other than these application-based services, the 5GS network also provides
services such as initial access, registration, authentication, PDU
connectivity (connectivity to data networks), and connected mode mobility
support. Such services can be called built-in (or intrinsic) network services.
The two categories of services (Application based services and built-in
network services) are enabled differently in the 5GS. As Application (Service)
Functions (AFs) are independent and decoupled from the core and RAN functions
of mobile networks, they access the control plane functions of the mobile CN
over a standardized interface to enable service delivery through the user
plane. However, the delivery of built-in services is tightly integrated within
the control plane of the 5GS network (RAN and CN) itself. It also leads to the
usage of special paths for signalling exchange with UEs, different from the
regular data paths and brings certain inconsistencies to the architecture. For
example, the Performance Measurement Function (PMF), a sub-function within the
User Plane Function (UPF), exchanges “Measurement Assistance Information”, a
type of signalling information with UEs to aid the access traffic steering,
switching, and splitting (ATSSS) functionality at UPF. This signalling
information is exchanged via a regular data path (i.e. user plane) between the
UE and the PMF. This mechanism is different from how other signalling
information such as “radio measurement reports” to support the handover
procedure is exchanged.
### II-D Complex protocols between control plane and user plane
The existing 5GS control plane architecture impacts the interface design
(protocols) between the control and user planes. For instance, F1 Application
Protocol (F1AP) is the protocol used on the interface between the RAN control
plane (gNB-CU-CP) and the RAN user plane (gNB-Distributed Unit (gNB-DU) or
RAN-DU). It is used to configure gNB-DU and also carries RRC (UE signalling)
messages for UEs. Integrating both these types of functionalities in a single
protocol results in a relatively complex communication protocol between gNB-
CU-CP and gNB-DU.
Figure 1: Control plane architecture for proposed architecture [17]
## III Service driven architecture for 6GS mobile networks
This section presents the proposed architecture, which addresses the
architectural limitations of the existing 5GS (as discussed in Section II) and
highlights a few other advantages. In the proposed work, we aim to separate
UE signalling handling from the control plane and treat it as a service to the
user, to enhance modularity and flexibility in the mobile network
control plane. With the proposed separation, the control plane is left with
only the user plane control functionality, as shown in Fig. 1. The UE
signalling handling functionality is moved out of the control plane to the
service/application plane. The service plane consists of various in-built and
external service functions, as shown in Fig. 1, such as the PDU Session
Service Function (handles PDU session establishment and management providing
PDU connectivity service), Mobility Service Function (responsible for handling
UE mobility), Registration Service Function (handles UE registration with the
network), Authentication Service Function (manages UE authentication),
Multicast/Broadcast Services and a few others. Due to the reorganisation of
the architecture, it offers various architectural and performance advantages
discussed next. Please note that there may be separate controllers in the CN
and RAN, as shown in Fig. 3. Similarly, we have a separate resource plane
(user plane) for RAN and the CN. Further, the proposed architecture’s user (resource) plane may remain the same as in the 3GPP 5GS.
### III-A Advantages of the proposed 6GS architecture
This section highlights a few advantages of the proposed work. Segregating the UE signalling handling functionality from the control plane simplifies the control plane and enhances its modularity. The
reorganised architecture also aligns well with the SDN paradigm as the control
plane is redesigned to perform only user plane control functionality as
discussed in Section II-B. The proposed architecture also allows internal (or
built-in 5GS) services to be treated the same way as external application-
based services, leading to uniform handling of various services. Further, this
proposal results in the simplification of the control messages. For instance, the number of session management-related messages is reduced due to the setup of a direct path between the UE and the service function (detailed in Section
IV-B), leading to simplified call flows. Also, the number of hops between the
RAN controller and the CN controller in the proposed architecture is less than
the corresponding entities in 5GS, i.e., between gNB-CU-CP and the Session
Management Function (SMF), respectively, which further results in the
performance improvement in terms of control plane latency and resource
utilisation. Moving the UE signalling handling functionality to functions in the service plane simplifies the protocols between the control plane and the user plane, such as the Next Generation Application Protocol (NGAP) between
the CN control plane and RAN and F1AP between the RAN control plane (gNB-CU-
CP) and the RAN user plane (gNB-DU).
The existing 5GS uses the same type of signalling messages for all use cases.
However, it is possible to have different signalling requirements for
different use cases, e.g., the Internet of Things (IoT) and human users. The
proposed architecture may support this requirement by employing use case
specific signalling service functions. Our proposal can also support flexible
function deployment and chaining as various service functions, such as the PDU
session service function, mobility service function, registration service
function, and authentication service function, can be placed flexibly and
chained together to serve UEs.
The proposed architecture also offers an advantage with respect to signalling security. The 3GPP specification [18] highlights that the exposed AMF is vulnerable to replay attacks on NAS signalling messages exchanged between the UE and the AMF (the control plane of the CN). Similarly, [19] points out that the exposed RAN is susceptible to replay attacks on RRC signalling messages between the UE and the RAN (gNB-CU-CP, the control plane of the RAN), as the Uu interface also carries sensitive RRC signalling. Furthermore, the European Union Agency for Cybersecurity (ENISA) [20] notes in its report that the N2 interface between the 5GS RAN and the AMF is a target for attackers since it carries sensitive signalling between the RAN and the CN. Therefore, in this context, the proposed architecture may
have some advantages towards the UE signalling security between the UE and the
signalling service function. Since UE signalling is segregated from the control plane (of the RAN and CN) and is terminated at a separate signalling server, an attack originating from a UE can potentially be localized within the signalling server without compromising the network control plane, where the architectural and logical control and management of the RAN and CN are located. This segregation allows us to improve the UE-related signalling
security of future mobile networks.
## IV Information Flow Comparison
In this section, we compare the information flows of the proposed architecture
and the existing 5GS architecture. We consider the PDU session establishment
and mobility services example to differentiate the working of the existing 5GS
and the proposed architectures.
Figure 2: Network entities, signalling and control message flow for PDU session establishment in 5GS
Figure 3: Network entities, signalling and control message flow for PDU session establishment in the proposed architecture
### IV-A PDU session establishment as a service
Figure 4: PDU session establishment procedure in the proposed architecture
Fig. 2 and Fig. 3 show the entities involved in PDU session signalling for the
5GS and the proposed architecture, respectively. In 5GS, messages are
exchanged between UE and SMF for PDU session-related signalling via RAN (it
requires both gNB-DU and gNB-CU) and AMF. In the proposed architecture, however, signalling messages are directly exchanged between the UE and the PDU Session Service Function (PSSF) via the RAN (it requires only the RAN-DU), as shown in Fig. 3. In other words, signalling in the existing 5GS takes place over multiple hops, whereas the number of hops is reduced in the proposed architecture. Further, the control plane collects
all requirements from the PSSF (which in turn are received by PSSF from the UE
as shown in Fig. 3) via the application-control interface and establishes the
PDU session.
The complete message sequences for establishing PDU sessions for the existing
5GS are detailed in [17], while the simplified call flow for the proposed architecture is shown in Fig. 4. (In the call flows and simulations, only those messages are considered and compared which differ between the proposed and existing architectures.) Please note that the controllers do not require
response messages from the resource (user) plane, as the controller knows
about user plane resource information; it handles resource decision-making.
Therefore, the proposed architecture eliminates many such messages. For
example, the N4 session modification request and response are exchanged
between SMF and UPF in 5GS architecture [17], while the session modification
command (message 3 in Fig. 4 and message 9 in Fig. 7) is exchanged between the
CN controller and CN user plane (UPF) in the proposed architecture. There is
no need for a session modification response message from the UPF. Hence, these
reductions in the messages simplify both the session establishment and
mobility procedure (to be discussed next). Please note that although the RAN-User Plane (RAN-UP) and other network functions/messages are necessary in real systems, we have shown only the CN functions in the call flow to keep the analysis tractable. Keeping the RAN functions out of the call flows, however, is not likely to alter the conclusions drawn here. This note also applies to the mobility service.
### IV-B Mobility as a service
We consider mobility as another service to illustrate the difference between
the existing 5GS and the proposed architecture. Fig. 5 and Fig. 6 show the
network entities, signalling and control message flow of the existing 5GS and
proposed architecture, respectively. S-DU and T-DU represent source gNB-DU and
target gNB-DU, respectively. Similarly, the Source-Centralized Unit-User Plane
(S-CU-UP) and Target-Centralized Unit-User Plane (T-CU-UP) represent source
gNB-CU-UP and target gNB-CU-UP, respectively. S-CU-CP and T-CU-CP represent
source gNB-CU-CP and target gNB-CU-CP, respectively. Also, the interaction
between the RAN controller and the CN controller is through the inter-
controller interface, as shown in Fig. 6. Signalling takes place between UE
and MSF via S-DU before handover while after handover it is through T-DU.
Likewise, the data path between UE and UPF is by way of S-UP before handover
while it is via T-UP after handover.
Figure 5: Network entities, signalling and control message flow in case of mobility service for the existing 5GS architecture
Figure 6: Network entities, signalling and control message flow in case of mobility service for the proposed architecture
Mobility call flow for the existing 5GS is available in [17]. Fig. 7 shows the
mobility call flow, which illustrates the handover procedure of the proposed architecture. For the sake of simplicity, the splitting of S-UP into S-DU and S-CU-UP, and of T-UP into T-DU and T-CU-UP, is not shown. The reason behind the simplification of the mobility procedure/messages is the same as explained for PDU session establishment in Section IV-A.
Figure 7: Mobility procedure in the proposed architecture
## V System Model
TABLE I: System model for PDU session establishment
---
PEPA Modules | Code Description
UE NF | $Ue_{1}$ ${}_{=}^{def}$ ($acc_{uep}$, $r_{a}$).(process, $r_{iat}$).$Ue_{2}$
| $Ue_{2}$ ${}_{=}^{def}$ ($req_{pduse}$, $r_{r}$).($rep_{pduse}$,
$r_{r}$).$Ue_{1}$
PSSF NF | $Pssf_{1}$ ${}_{=}^{def}$ ($req_{pduse}$, $r_{r}$).$Pssf_{2}$
| $Pssf_{2}$ ${}_{=}^{def}$ ($acc_{pssfp}$, $r_{a}$).(process,
$r_{p}$).$Pssf_{3}$
| $Pssf_{3}$ ${}_{=}^{def}$ ($req_{sc}$, $r_{r}$).($rep_{sc}$,
$r_{r}$).$Pssf_{4}$
| $Pssf_{4}$ ${}_{=}^{def}$ ($acc_{pssfp}$, $r_{a}$).(process,
$r_{p}$).$Pssf_{5}$
| $Pssf_{5}$ ${}_{=}^{def}$ ($rep_{pduse}$, $r_{r}$).$Pssf_{1}$
CN Controller NF | $Con_{1}$ ${}_{=}^{def}$ ($req_{sc}$, $r_{r}$).$Con_{2}$
| $Con_{2}$ ${}_{=}^{def}$ ($acc_{conp}$, $r_{a}$).(process,
$r_{p}$).$Con_{3}$
| $Con_{3}$ ${}_{=}^{def}$ ($req_{n4est}$, $r_{r}$).($rep_{n4est}$,
$r_{r}$).$Con_{4}$
| $Con_{4}$ ${}_{=}^{def}$ ($acc_{conp}$, $r_{a}$).(process,
$r_{p}$).$Con_{5}$
| $Con_{5}$ ${}_{=}^{def}$ ($rep_{sc}$, $r_{r}$).$Con_{1}$
UPF NF | $Upf_{1}$ ${}_{=}^{def}$ ($req_{n4est}$, $r_{r}$).$Upf_{2}$
| $Upf_{2}$ ${}_{=}^{def}$ ($acc_{upfp}$, $r_{a}$).(process,
$r_{p}$).$Upf_{1}$
UE Processor | $Uep_{1}$ ${}_{=}^{def}$ ($acc_{uep}$, $r_{a}$).$Uep_{2}$
| $Uep_{2}$ ${}_{=}^{def}$ (process, $r_{p}$).$Uep_{1}$
PSSF Processor | $Pssfp_{1}$ ${}_{=}^{def}$ ($acc_{pssfp}$, $r_{a}$).$Pssfp_{2}$
| $Pssfp_{2}$ ${}_{=}^{def}$ (process, $r_{p}$).$Pssfp_{1}$
CN Controller | $Conp_{1}$ ${}_{=}^{def}$ ($acc_{conp}$, $r_{a}$).$Conp_{2}$
Processor | $Conp_{2}$ ${}_{=}^{def}$ (process, $r_{p}$).$Conp_{1}$
UPF Processor | $Upfp_{1}$ ${}_{=}^{def}$ ($acc_{upfp}$, $r_{a}$).$Upfp_{2}$
| $Upfp_{2}$ ${}_{=}^{def}$ (process, $r_{p}$).$Upfp_{1}$
System Equation | ((($Ue_{1}$[n] ${}_{L_{1}}^{\bowtie}$ $Pssf_{1}$[$N_{pssf}$.$N_{pssfp}$.$N_{t}$])
| ${}_{L_{2}}^{\bowtie}$ $Con_{1}$[$N_{con}$.$N_{conp}$.$N_{t}$])
| ${}_{L_{3}}^{\bowtie}$ $Upf_{1}$[$N_{upf}$.$N_{upfp}$.$N_{t}$])
| ${}_{L_{4}}^{\bowtie}$ ((($Uep_{1}$[n] ${}_{\phi}^{\bowtie}$
$Pssfp_{1}$[$N_{pssf}$.$N_{pssfp}$])
| ${}_{\phi}^{\bowtie}$ $Conp_{1}$[$N_{con}$.$N_{conp}$])
| ${}_{\phi}^{\bowtie}$ $Upfp_{1}$[$N_{upf}$.$N_{upfp}$])
Cooperation Set | $L_{1}$ = $<$$req_{pduse}$, $rep_{pduse}$$>$
| $L_{2}$ = $<$$req_{sc}$, $rep_{sc}$$>$
| $L_{3}$ = $<$$req_{n4est}$$>$
| $L_{4}$ = $<$$acc_{uep}$, $process$, $acc_{pssfp}$,
| $acc_{conp}$, $acc_{upfp}$$>$
| $\phi$ = $<>$
TABLE II: System model for mobility
---
PEPA Modules | Code Description
UE NF | $Ue_{1}$ ${}_{=}^{def}$ ($acc_{uep}$, $r_{a}$).($measure$, $r_{iat}$).$Ue_{2}$
| $Ue_{2}$ ${}_{=}^{def}$ ($reconfig$, $r_{r}$).$Ue_{3}$
| $Ue_{3}$ ${}_{=}^{def}$ ($rachreq$, $r_{r}$).($rachres$, $r_{r}$).$Ue_{4}$
| $Ue_{4}$ ${}_{=}^{def}$ ($reconfigcomp$,$r_{r}$).$Ue_{1}$
T-UP NF | $Upt_{1}$ ${}_{=}^{def}$ ($pathsetup$, $r_{r}$).$Upt_{2}$
| $Upt_{2}$ ${}_{=}^{def}$ ($acc_{uptp}$,
$r_{a}$).($process$,$r_{p}$).$Upt_{3}$
| $Upt_{3}$ ${}_{=}^{def}$ ($rachreq$,$r_{r}$).($rachres$,$r_{r}$).$Upt_{1}$
MSF NF | $Msf_{1}$ ${}_{=}^{def}$ ($measure$,$r_{r}$).$Msf_{2}$
| $Msf_{2}$ ${}_{=}^{def}$ ($acc_{msfp}$,$r_{a}$).($horeq$,$r_{r}$).$Msf_{3}$
| $Msf_{3}$ ${}_{=}^{def}$ ($hores$,$r_{r}$).$Msf_{4}$
| $Msf_{4}$ ${}_{=}^{def}$
($acc_{msfp}$,$r_{a}$).($reconfig$,$r_{r}$).$Msf_{5}$
| $Msf_{5}$ ${}_{=}^{def}$ ($reconfigcomp$,$r_{r}$).$Msf_{6}$
| $Msf_{6}$ ${}_{=}^{def}$ ($acc_{msfp}$,$r_{a}$).
| ($pathswitch$,$r_{r}$).$Msf_{1}$
RAN Controller NF | $Ran_{1}$ ${}_{=}^{def}$ ($horeq$,$r_{r}$).$Ran_{2}$
| $Ran_{2}$ ${}_{=}^{def}$ ($acc_{ranp}$,$r_{a}$).($pathsetup$,$r_{r}$)
| .($hores$,$r_{r}$).$Ran_{1}$
CN Controller NF | $Cn_{1}$ ${}_{=}^{def}$ ($pathswitch$,$r_{r}$).$Cn_{2}$
| $Cn_{2}$ ${}_{=}^{def}$ ($acc_{cnp}$,$r_{a}$).($session$,$r_{r}$).$Cn_{1}$
UPF NF | $Upf_{1}$ ${}_{=}^{def}$ ($session$,$r_{r}$).$Upf_{2}$
| $Upf_{2}$ ${}_{=}^{def}$
($acc_{upfp}$,$r_{a}$).($process$,$r_{p}$).$Upf_{1}$
UE Processor | $Uep_{1}$ ${}_{=}^{def}$ ($acc_{uep}$,$r_{a}$).$Uep_{2}$
| $Uep_{2}$ ${}_{=}^{def}$ ($measure$,$r_{iat}$).$Uep_{1}$
T-UP Processor | $Uptp_{1}$ ${}_{=}^{def}$ ($acc_{uptp}$,$r_{a}$).$Uptp_{2}$
| $Uptp_{2}$ ${}_{=}^{def}$ ($rachreq$,$r_{r}$).$Uptp_{1}$
| +($rachres$,$r_{r}$).$Uptp_{1}$
MSF Processor | $Msfp_{1}$ ${}_{=}^{def}$ ($acc_{msfp}$,$r_{a}$).$Msfp_{2}$
| $Msfp_{2}$ ${}_{=}^{def}$ ($horeq$,$r_{r}$).$Msfp_{1}$+($reconfig$,$r_{r}$)
| .$Msfp_{1}$+($pathswitch$,$r_{r}$).$Msfp_{1}$
RAN Processor | $Ranp_{1}$ ${}_{=}^{def}$ ($acc_{ranp}$,$r_{a}$).$Ranp_{2}$
| $Ranp_{2}$ ${}_{=}^{def}$
($pathsetup$,$r_{r}$).($hores$,$r_{r}$).$Ranp_{1}$
CN Processor | $Cnp_{1}$ ${}_{=}^{def}$ ($acc_{cnp}$,$r_{a}$).$Cnp_{2}$
| $Cnp_{2}$ ${}_{=}^{def}$ ($session$,$r_{r}$).$Cnp_{1}$
UPF Processor | $Upfp_{1}$ ${}_{=}^{def}$ ($acc_{upfp}$,$r_{a}$).$Upfp_{2}$
| $Upfp_{2}$ ${}_{=}^{def}$ ($session$,$r_{r}$).$Upfp_{1}$
System Equation | ((((($Ue_{1}$[n]${}_{L_{1}}^{\bowtie}$$Upt_{1}$[$N_{upt}$.$N_{uptp}$.$N_{t}$])
| ${}_{L_{2}}^{\bowtie}$$Msf_{1}$[$N_{msf}$.$N_{msfp}$.$N_{t}$])
| ${}_{L_{3}}^{\bowtie}$$Ran_{1}$[$N_{ran}$.$N_{ranp}$.$N_{t}$])
| ${}_{L_{4}}^{\bowtie}$$Cn_{1}$[$N_{cn}$.$N_{cnp}$.$N_{t}$])
| ${}_{L_{5}}^{\bowtie}$$Upf_{1}$[$N_{upf}$.$N_{upfp}$.$N_{t}$])
|
${}_{L_{6}}^{\bowtie}$((((($Uep_{1}$[n]${}_{\phi}^{\bowtie}$$Uptp_{1}$[$N_{upt}$.$N_{uptp}$])
| ${}_{\phi}^{\bowtie}$$Msfp_{1}$[$N_{msf}$.$N_{msfp}$])
| ${}_{\phi}^{\bowtie}$$Ranp_{1}$[$N_{ran}$.$N_{ranp}$])
| ${}_{\phi}^{\bowtie}$$Cnp_{1}$[$N_{cn}$.$N_{cnp}$])
| ${}_{\phi}^{\bowtie}$$Upfp_{1}$[$N_{upf}$.$N_{upfp}$])
Cooperation Set | $L_{1}$ = $<rachreq,rachres>$
| $L_{2}$ = $<$$measure$, $reconfig$,
| $reconfigcomp$$>$
| $L_{3}$ = $<pathsetup,horeq,hores>$
| $L_{4}$ = $<pathswitch>$
| $L_{5}$ = $<session>$
| $L_{6}$ = $<$$acc_{uep}$, $acc_{uptp}$, $acc_{msfp}$,
| $acc_{ranp}$, $acc_{cnp}$, $acc_{upfp}$$>$
| $\phi$ = $<>$
This section presents the system model for the proposed architecture using
PEPA. PEPA is a formal high-level language for the quantitative modelling of distributed systems [13]. Table I and Table II represent the system model for
the proposed architecture for the PDU session establishment and mobility
procedures, respectively. To explain the system model, we use the PDU session
establishment (or session establishment) procedure (shown in Fig. 4).
The session establishment procedure requires PSSF, CN controller and UPF as
the key CN functions in the proposed architecture. These NFs are modelled as
PEPA components. In addition, a UE is also modelled as a PEPA component. Each
PEPA component (representing UE or a CN NF) goes through a set of states
during the handling of the procedure. The individual component states are
denoted by associating a unique number with the name of the component (e.g., $Pssf_{1}$ represents the first state of the component PSSF). Each component
performs a set of actions, such as accessing the processor or sending a
request/response. These actions are denoted in lowercase, and subscripts are
added to provide further distinction (as $action_{actiondetail}$). For
example, the notation for the action of PDU session establishment request and
response can be $req_{pduse}$ and $rep_{pduse}$, respectively. Each action is
associated with a specific rate value, $r$. The rate (the number of actions performed per unit time, i.e., the reciprocal of the expected action duration) models the timing of the action in the PEPA component; the rate values are taken from [21], [22] and [23].
Let us now look at the modelling of the NF states in Table I in more detail. Consider the UE as an example. The UE acquires the processor in its initial
state ($acc_{uep}$, $r_{a}$) and performs the processing action ($process$,
$r_{iat}$) before sending a request. The second state, $Ue_{2}$, models the
request ($req_{pduse}$, $r_{r}$) and response ($rep_{pduse}$, $r_{r}$)
messages exchanged between UE and PSSF for the PDU session establishment. NFs
acquire processors to process a request/response. In Table I, UEP, PSSFP, CONP
and UPFP are the processing entities for UE, PSSF, CN controller (CON) and UPF
respectively. These processing entities are modelled such that each NF
processor has two states. For instance, the first state of UEP, $Uep_{1}$, is
for acquiring the processor ($acc_{uep}$), and the second state, $Uep_{2}$,
performs the processing action ($process$). Similarly, the other NFs and their
processing entities are modelled.
As discussed in this section, the system model uses the following additional
parameters: $n$ denotes the number of UEs; $N_{pssf}$, $N_{con}$, and
$N_{upf}$ are the number of NF instances for PSSF, CN controller (CON), and
UPF, respectively. Similarly, $N_{pssfp}$, $N_{conp}$, and $N_{upfp}$ are the numbers of PSSF processors (PSSFP), CN controller processors (CONP) and UPF processors (UPFP), respectively. Please note that each processor can handle a
set of concurrent threads, $N_{t}$. Thus, the product
$N_{nf}$·$N_{nfp}$·$N_{t}$ (as mentioned in the system model equation)
represents the total number of threads for a type of NF. Moreover, the product $N_{nf}$·$N_{nfp}$ is the total number of processors allocated to a type of NF, e.g., $N_{upf}$·$N_{upfp}$ for the UPF.
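As a hypothetical illustration of these counts: with $N_{pssf}=3$, $N_{pssfp}=2$ and $N_{t}=4$, the PSSF is allocated $3\cdot 2=6$ processors and $3\cdot 2\cdot 4=24$ threads in total.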
The system equation represents the overall system model. The cooperation
operator (“${\bowtie}$”), for example, A ${}_{L}^{\bowtie}$ B, models the
interactions between NFs A and B over the actions defined in the cooperation
set $L$. Note that the component A ${}_{L}^{\bowtie}$ B may behave differently from the component A ${}_{K}^{\bowtie}$ B if $L\neq K$. Let us consider an example from Fig. 4,
where PSSF and CN controller (CON) interact with each other for session
context request/response $req_{sc}$/$rep_{sc}$. These actions are defined in
cooperation set $L_{2}$, as shown in Table I. Therefore, the system equation
$Pssf_{1}$[$N_{pssf}$.$N_{pssfp}$.$N_{t}$] ${}_{L_{2}}^{\bowtie}$
$Con_{1}$[$N_{con}$.$N_{conp}$.$N_{t}$], models the interaction between PSSF
and CN controller over the cooperation set $L_{2}$. In a similar way, the
overall system equation, as shown in Table I and Table II represents the
interaction between the various NFs as shown in the two call flows, Fig. 4 and
Fig. 7, respectively.
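For concreteness, the fragment below sketches part of the Table I model — the UE and PSSF components together with their processors — in PEPA's textual syntax (as accepted by the PEPA Eclipse plug-in). The rate values, population sizes, and the omission of the CN controller and UPF components are illustrative simplifications for this sketch, not the settings used in our evaluation.

```
// illustrative rates (actions per unit time), not the values used in the paper
r_a   = 80.0;  // processor acquisition
r_p   = 40.0;  // processing
r_r   = 60.0;  // request/response exchange
r_iat = 1.0;   // UE processing / inter-arrival

// UE: acquire its processor, process, then exchange the PDU session
// establishment request/response with the PSSF
Ue1 = (acc_uep, r_a).(process, r_iat).Ue2;
Ue2 = (req_pduse, r_r).(rep_pduse, r_r).Ue1;

// PSSF (abridged): receive the request, acquire the processor and
// process, then reply; the session-context exchange with the CN
// controller is omitted in this sketch
Pssf1 = (req_pduse, r_r).Pssf2;
Pssf2 = (acc_pssfp, r_a).(process, r_p).Pssf3;
Pssf3 = (rep_pduse, r_r).Pssf1;

// two-state processor components: acquire, then process
Uep1 = (acc_uep, r_a).Uep2;
Uep2 = (process, r_iat).Uep1;
Pssfp1 = (acc_pssfp, r_a).Pssfp2;
Pssfp2 = (process, r_p).Pssfp1;

// system equation: 100 UEs cooperate with 4 PSSF threads over the
// session messages; processors synchronize on the access/process actions
(Ue1[100] <req_pduse, rep_pduse> Pssf1[4])
  <acc_uep, acc_pssfp, process>
(Uep1[100] <> Pssfp1[4])
```

The structure mirrors Table I: each NF alternates between message exchanges and processor-mediated processing, and the two cooperation sets play the roles of $L_{1}$ and $L_{4}$.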
## VI Performance Evaluation
This section presents the performance comparison between the existing 5GS and
the proposed architecture analysed using the PEPA Eclipse plug-in [24], a
software tool integrated into the popular Eclipse platform. This tool supports
various performance measures [22] as discussed below, which help evaluate the
network’s performance.
1.
Session establishment rate (or the number of successful handovers in the case
of mobility): The number of session establishments is measured for the action
(say, $rep_{pduse}$, which describes the completion of the session
establishment procedure), representing the session establishment rate.
Similarly, the number of successful handovers is measured for the action
‘$session$’ (as performed by the UPF NF in Table II), which signifies the
completion of the handover procedure.
2.
Average response time: It measures the UE waiting time for any specific request and reflects the system’s operating speed. For session establishment, we take the average response time to be the time needed to complete the session establishment procedure; similarly, for mobility, we take it to be the time needed to complete the handover procedure.
3.
Utilisation: Utilisation measures the NF processor capacity used during the procedure. The utilisation of any NF processor (for example, the PSSF processor) is derived from its population level (one of the features available in the tool) while performing any action.
4.
Scalability: Scalability ($S$), in simple terms, measures a network’s ability to increase or decrease its size, performance and cost in response to changes in system processing demands. Formally, following [25] and as given in Equation 1, scalability is the ratio between the productivity of a system at two configurations of different scales, say $m_{1}$ and $m_{2}$, where a configuration specifies the numbers of NF instances used in the network, e.g., $m_{1}$ = (1,1,1) and $m_{2}$ = (3,3,1). The mathematical expression for scalability is [25]:
$S(m_{1},m_{2})=\frac{C(m_{2})}{C(m_{1})}$ (1)
where $C(m)$ is the productivity of the system at scale $m$, given by Equation 2:
$C(m)=\frac{t(m)\cdot r(m)}{U(m)}$ (2)
where $t(m)$ is the average number of sessions established at scale $m$, $U(m)$ is the processor utilisation of the system at scale $m$, and $r(m)$ (Equation 3) captures the response-time performance of the scaled system. Following [25], $r(m)$ is evaluated from the average response time $T(m)$ at scale $m$ and the target average response time $T$ [22]:
$r(m)=\frac{1}{1+T(m)/T}$ (3)
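As a worked instance with hypothetical numbers (not our measured values): suppose that at scale $m_{1}$ the system establishes $t(m_{1})=10{,}000$ sessions per unit time at utilisation $U(m_{1})=1$ with the average response time equal to the target, $T(m_{1})=T$, so that $r(m_{1})=1/(1+T/T)=1/2$; and that at scale $m_{2}$ it establishes $t(m_{2})=30{,}000$ sessions under the same conditions. Then
$C(m_{1})=\frac{10{,}000\cdot\frac{1}{2}}{1}=5{,}000,\qquad C(m_{2})=\frac{30{,}000\cdot\frac{1}{2}}{1}=15{,}000,\qquad S(m_{1},m_{2})=\frac{C(m_{2})}{C(m_{1})}=3,$
i.e., the scaled configuration is three times as productive as the basic one.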
### VI-A Results and Analysis
In this section, we present the performance results for 5GS and the proposed
architecture in the case of PDU session establishment service and mobility
service.
#### VI-A1 PDU Session Establishment Service
The performance analysis of the proposed architecture and the existing 5GS for
the session establishment procedure is discussed in this section. Fig. 8 and
Fig. 9 show the session establishment rate with respect to the number of UEs
for 5GS and the proposed architecture using two different configurations. For
instance, ($N_{pssf}$, $N_{con}$, $N_{upf}$) = (1,1,1) for the proposed
architecture is the basic configuration with a single NF instance of each type, and ($N_{pssf}$, $N_{con}$, $N_{upf}$) = (3,3,1) is the configuration for a scaled system with three NF instances assigned to PSSF and CON and one to UPF. Similarly, the basic and the scaled configurations for 5GS are defined as ($N_{amf}$,
$N_{smf}$, $N_{upf}$) = (1,1,1) and ($N_{amf}$, $N_{smf}$, $N_{upf}$) =
(3,3,1), respectively.
Figure 8: Session establishment (number of sessions per unit time) for the proposed and the 5GS architecture having the basic configuration
Figure 9: Session establishment (number of sessions per unit time) for the proposed and the 5GS architecture having the scaled configuration
Figure 10: Processor utilisation of session establishment for the proposed and the 5GS architecture having the basic configuration
Figure 11: Processor utilisation of session establishment for the proposed and the 5GS architecture having the scaled configuration
Figure 12: Scalability of PDU session for 5GS and the proposed architecture
Results show that the proposed architecture can achieve a higher session
establishment rate compared to the existing 5GS in case of both basic and
scaled configurations. Although the session establishment rate has increased
using a scaled configuration for proposed and existing architectures compared
to the session establishment rate achieved using a basic configuration, the
proposed architecture has achieved a higher session establishment rate than
the 5GS. The saturation point for the existing 5GS, as shown in Fig. 8, is around 10,000 users, i.e., it can serve at most 10,000 users, while the session establishment rate for the proposed architecture saturates at around 20,000 users. Similarly, Fig. 9 shows that, in the scaled configuration, 5GS saturates at around 34,000 users. Once the saturation point is reached, the network drops the incoming requests from the users. This means that, with a given number of processors/NFs, the proposed architecture can achieve a higher session establishment rate; the existing 5GS, in contrast, requires more processors/NFs to support the same number of session establishments.
The processor utilisation for all the NFs of the existing 5GS and the proposed
architecture for basic and the scaled configuration is shown in Fig. 10 and
Fig. 11, respectively. For instance, the PSSFP reaches its maximum utilisation, which explains the saturation point of the session establishment rate, even though CONP and UPFP are not fully utilised at that point. These results show that the request processing chain stalls once a single NF becomes a bottleneck for the rest of the chain.
Scalability for the existing 5GS and the proposed architecture is evaluated
based on Equation 1. It is plotted in Fig. 12 based on the results obtained
for session establishment rate, average response time and utilisation from the
PEPA-based simulation and modelling. As stated earlier, we consider the
following two configurations $m_{1}$ and $m_{2}$ for estimating the
scalability metric. Fig. 12 shows that the existing 5GS can serve 10,000 users
for a basic configuration, and the proposed architecture can serve 20,000
users. Similarly, the existing 5GS reaches its saturation point at 34,000
users, and the proposed architecture saturates at 62,000 users for scaled
configuration. This implies that the proposed architecture performs better and can serve more users than the existing 5GS. Moreover, the proposed architecture is more scalable with an increasing number of users for the same number of NFs/processors.
Please note that a similar explanation for all the performance measures
(successful handovers, processor utilization and scalability) holds in the
case of mobility service.
#### VI-A2 Mobility Service
Figure 13: Number of successful handovers per unit time for the proposed and the 5GS architecture having the basic configuration
Figure 14: Number of successful handovers per unit time for the proposed and the 5GS architecture having the scaled configuration
Figure 15: Processor utilisation in case of mobility for the proposed and the 5GS architecture having the basic configuration
Figure 16: Processor utilisation in case of mobility for the proposed and the 5GS architecture having the scaled configuration
Figure 17: Scalability in case of mobility
This section presents the comparative analysis of the existing 5GS and the
proposed architecture for the mobility service. Similar to the session
establishment, the analysis is performed for the basic and the scaled
configurations. Therefore, the basic configuration for the proposed
architecture is given as ($N_{upt}$, $N_{msf}$, $N_{ran}$, $N_{cn}$,
$N_{upf}$) = (1,2,2,1,1) and for the 5GS architecture is ($N_{sdu}$,
$N_{scu}$, $N_{tdu}$, $N_{tcu}$, $N_{amf}$, $N_{smf}$, $N_{upf}$) =
(1,1,1,1,1,1,1). Similarly, the scaled configuration for the proposed
architecture is ($N_{upt}$, $N_{msf}$, $N_{ran}$, $N_{cn}$, $N_{upf}$) =
(3,6,6,3,3) and for the 5GS architecture is given as ($N_{sdu}$, $N_{scu}$,
$N_{tdu}$, $N_{tcu}$, $N_{amf}$, $N_{smf}$, $N_{upf}$) = (3,3,3,3,3,3,3).
Here $N_{upt}$, $N_{msf}$, $N_{ran}$, $N_{cn}$, $N_{upf}$ are the number of
Target-User Plane (T-UP), MSF, RAN controller, CN controller and UPF
respectively in the system model. Similarly, $N_{sdu}$, $N_{scu}$, $N_{tdu}$,
$N_{tcu}$, $N_{amf}$, $N_{smf}$, $N_{upf}$ are the number of S-DU, S-CU, T-DU,
T-CU, AMF, SMF, and UPF respectively. Please note that for brevity, we have
not split S-CU into S-CU-CP and S-CU-UP and T-CU into T-CU-CP and T-CU-UP
while modelling the mobility call flow procedure for the 5GS. Further, we provide an equal number of functions and associated processors to the 5GS and the proposed architecture for a fair comparison.
After reaching the saturation point, the system starts to drop handovers. Fig.
13 and Fig. 14 show that the proposed architecture serves more successful
handovers per unit time compared to the existing 5GS for both the basic and
the scaled configurations, respectively. The saturation point for the existing
5GS is 20,000 users, while for the proposed, the saturation is 30,000 users
for the basic configuration. Similarly, the saturation point for the existing
5GS is around 60,000 users, while for the proposed, the saturation is around
90,000 users for the scaled configuration. The number of successful handovers
per unit of time has increased using a scaled configuration for both
architectures.
Fig. 15 and Fig. 16 show the processor utilisation for both the 5GS and the proposed architecture. Fig. 17 shows the scalability results in the
case of mobility service for 5GS and the proposed architectures. It can be
observed from the scalability results that 5GS reaches its saturation point
earlier than the proposed architecture and the proposed architecture is more
scalable.
## VII Conclusion and Future Work
In this paper, we have proposed a novel mobile network architecture for
separating the UE signalling from the network control functionality, enhancing
the modularity, scalability, and flexibility of the network. The transposition
of UE signalling functionality to service functions leads to simplified
protocols and opens up ways to implement use case specific signalling in
mobile networks. The proposed architecture also has improved alignment with
the SDN principles.
We have considered PDU session establishment and mobility services as examples
to analyse the performance of the proposed architecture using the PEPA-based
simulation method. Based on the performance results and other benefits, it can
be concluded that the proposed architecture is a promising option for future
networks to handle vast and diverse traffic demands. We plan to extend this
work to analyse other features/services of mobile networks, such as
authentication, network slicing, development of protocols between (signalling)
service functions and the control plane, and addressing security threats in the 6GS mobile network (touched upon in Section III).
## Acknowledgment
We acknowledge the Ministry of Electronics and Information Technology (MeitY),
India, for supporting the project.
## References
* [1] SWG IMT-2030, “Framework and overall objectives of the future development of IMT for 2030 and beyond,” _Technical Recommendation_, 2023.
* [2] FG NET-2030, “Achievements of ITU-T Focus Group on Network 2030,” _7th SG13 Regional Workshop for Africa on ITU-T Standardization Work on Future Networks: Towards a Better Future for Africa (Abuja, Nigeria)_, 2020.
* [3] P. K. Taksande, P. Jha, and A. Karandikar, “Dual Connectivity Support in 5G Networks: An SDN based approach,” in _IEEE Wireless Communications and Networking Conference (WCNC)_, 2019, pp. 1–6.
* [4] A. Nayak M., P. Jha, and A. Karandikar, “A Centralized SDN Architecture for the 5G Cellular Network,” in _IEEE 5G World Forum (5GWF)_, 2018, pp. 147–152.
* [5] A. Roozbeh, “Distributed Cloud and De-centralized Control Plane: A Proposal for Scalable Control Plane for 5G,” in _IEEE/ACM 8th International Conference on Utility and Cloud Computing (UCC)_, 2015, pp. 348–353.
* [6] T. Ueda, A. Idoue, and E. Utsunomiya, “Hierarchical and Distributed Software-Defined Network to Reduce Control Load,” in _IEEE Wireless Communications and Networking Conference (WCNC)_, 2019, pp. 1–6.
* [7] A. Abdulghaffar, A. Mahmoud, M. Abu-Amara, and T. Sheltami, “Modeling and Evaluation of Software Defined Networking Based 5G Core Network Architecture,” _IEEE Access_, vol. 9, pp. 10179–10198, 2021.
* [8] Q. Zuo, M. Chen, K. Ding, and B. Xu, “On Generality of the Data Plane and Scalability of the Control Plane in Software-Defined Networking,” _China Communications_, vol. 11, no. 2, pp. 55–64, 2014.
* [9] J. Hu, C. Lin, X. Li, and J. Huang, “Scalability of Control Planes for Software Defined Networks: Modeling and Evaluation,” in _IEEE 22nd International Symposium of Quality of Service (IWQoS)_, 2014, pp. 147–152.
* [10] A. Tootoonchian and Y. Ganjali, “HyperFlow: A distributed control plane for OpenFlow,” in _Proceedings of the Internet Network Management Conference on Research on Enterprise Networking_, 2010, p. 3.
* [11] J. Raychev, D. Kinaneva, G. Hristov, and P. Zahariev, “Optimizing SDN control plane scalability by efficient switch to controller migration,” in _27th National Conference with International Participation (TELECOM)_, 2019, pp. 42–45.
* [12] M. Khaturia, A. Nayak Manjeshwar, P. Jha, and A. Karandikar, “5G-Serv: Decoupling User Control and Network Control in the 3GPP 5G Network,” in _24th Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN)_, 2021, pp. 75–79.
* [13] J. Hillston, _A Compositional Approach to Performance Modelling_, Distinguished Dissertations in Computer Science, Cambridge University Press, 1996.
* [14] 3GPP TS 23.501, “System Architecture for the 5G System,” _Technical Specification_, 2018.
* [15] ITU-T Y.3300, “Framework of Software-Defined Networking,” _Recommendation_, 2014.
* [16] ONF TR-502, “SDN Architecture,” _Technical Specification_, Issue 1, 2014.
* [17] 3GPP TS 23.502, “Procedures for the 5G System,” _Technical Specification_, 2018.
* [18] 3GPP TS 33.512, “5G; 5G Security Assurance Specification (SCAS); Access and Mobility Management Function (AMF),” _Technical Specification_, 2022.
* [19] 3GPP TS 33.511, “5G; Security Assurance Specification (SCAS) for the next generation Node B (gNodeB) network product class,” _Technical Specification_, 2022.
* [20] ENISA, “Security in 5G Specifications; Controls in 3GPP Security Specifications (5G SA),” _Technical Specification_, 2021.
* [21] E. M. R. Oliveira, A. C. Viana, K. P. Naveen, and C. Sarraute, “Mobile data traffic modeling: Revealing temporal facets,” _Computer Networks_, vol. 112, pp. 176–193, 2017.
* [22] C. H. T. Arteaga, A. Ordoñez, and O. M. C. Rendon, “Scalability and Performance Analysis in 5G Core Network Slicing,” _IEEE Access_, vol. 8, pp. 142086–142100, 2020.
* [23] C. H. T. Arteaga, “A 5G Core Network Prototype,” _Computer Networks_, vol. 112, pp. 176–193, 2019.
* [24] M. Tribastone, A. Duguid, and S. Gilmore, “The PEPA Eclipse Plugin,” _SIGMETRICS Perform. Eval. Rev._, vol. 36, no. 4, pp. 28–33, 2009.
* [25] P. Jogalekar and M. Woodside, “Evaluating the scalability of distributed systems,” _IEEE Transactions on Parallel and Distributed Systems_, vol. 11, no. 6, pp. 589–603, 2000.
|
# A geometric take on Kostant’s Convexity Theorem
Ricardo A. E. Mendes University of Oklahoma
Department of Mathematics
601 Elm Ave
Norman, OK, 73019-3103, USA<EMAIL_ADDRESS>
###### Abstract.
Given a compact Lie group $G$ and an orthogonal $G$-representation $V$, we
give a purely metric criterion for a closed subset of the orbit space $V/G$ to
have convex pre-image in $V$. In fact, this also holds with the natural
quotient map $V\to V/G$ replaced with an arbitrary submetry $V\to X$.
In this context, we introduce a notion of “fat section” which generalizes
polar representations, representations of non-trivial copolarity, and
isoparametric foliations.
We show that Kostant’s Convexity Theorem partially generalizes from polar
representations to submetries with a fat section, and give examples
illustrating that it does not fully generalize to this situation.
###### Key words and phrases:
Orbit space, submetry, convexity, polar action
###### 2020 Mathematics Subject Classification:
57S15, 51F99
The author has been supported by the NSF grant DMS-2005373
## 1. Introduction
B. Kostant’s celebrated “Convexity Theorem” [Kos73, Theorem 8.2] can be
phrased as follows:
###### Theorem 1 (Kostant).
Let $V$ be a real orthogonal representation of the compact group $G$, with
connected orbits. Assume the representation is polar, with section
$\Sigma\subset V$ and Weyl group $W$ acting on $\Sigma$. Then
$\pi_{\Sigma}(G\cdot v)=\operatorname{conv}(W\cdot v)$
for all $v\in\Sigma$. Here $\pi_{\Sigma}$ denotes the orthogonal projection
onto $\Sigma$, $\operatorname{conv}(\cdot)$ the convex hull, $G\cdot v$ the
$G$-orbit through $v$, and $W\cdot v=(G\cdot v)\cap\Sigma$ the $W$-orbit
through $v$.
Recall that the $G$-representation $V$ is called polar with section $\Sigma$
if $\Sigma$ is a linear subspace of $V$ that intersects all $G$-orbits
orthogonally (see [Dad85]). The formulation above is equivalent to the
original because, up to having the same orbits, the class of polar
representations with connected orbits coincides with the class of isotropy
representations of symmetric spaces (see [Dad85, Proposition 6]). Two notable
special cases are the adjoint representation of a connected compact Lie group
$G$ on its Lie algebra $V$, with $\Sigma$ the Lie algebra of a maximal torus
and $W$ the usual Weyl group; and the Schur–Horn Theorem concerning the
diagonal entries of symmetric matrices with a given set of eigenvalues (see
[LRT99, page 150]).
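As a simple illustration of Theorem 1, consider the adjoint representation of $G=\mathrm{SU}(2)$ on $V=\mathfrak{su}(2)\cong\mathds{R}^{3}$, whose orbits are the round spheres centered at the origin. A section $\Sigma$ is a line through the origin, and $W\cong\mathds{Z}/2$ acts on $\Sigma$ by $\pm 1$. For $v\in\Sigma$, the orthogonal projection of the sphere $G\cdot v$ onto $\Sigma$ is the segment joining $-v$ to $v$, that is,
$\pi_{\Sigma}(G\cdot v)=\operatorname{conv}\\{-v,v\\}=\operatorname{conv}(W\cdot v).$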
An important consequence is that the study of $G$-invariant convex subsets of
$V$ is in some sense reduced to the study of $W$-invariant convex subsets of
$\Sigma$:
###### Corollary 2.
With assumptions and notations as in Theorem 1:
1. (a)
For every $G$-invariant convex subset $K\subset V$,
$\pi_{\Sigma}(K)=K\cap\Sigma.$
2. (b)
The map $K\mapsto\pi_{\Sigma}(K)=K\cap\Sigma$ is a bijection between
$G$-invariant convex subsets of $V$ and $W$-invariant convex subsets of
$\Sigma$.
For a proof of (a), see e.g. [KS20, Corollary 3.4]. The proof of (b) is
straightforward (see proof of Theorem C below for a generalization).
In terms of quotient spaces, subsets of $V/G$ (resp. $\Sigma/W$) correspond to
$G$-invariant subsets of $V$ (resp. $W$-invariant subsets of $\Sigma$). Thus
Corollary 2(b) states that the isometry $\Sigma/W\to V/G$ induced by the
inclusion $\Sigma\to V$ preserves the collection of subsets that have convex
pre-images in $\Sigma,V$. This is an immediate consequence of our first main
result, which gives a _purely metric_ criterion for a closed subset $S\subset
V/G$ to have convex pre-image in $V$, for _any_ (not necessarily polar)
$G$-representation $V$. More generally, one can also replace the map $V\to
V/G$ with any submetry, that is, any map $V\to X$ to a metric space $X$ that
takes metric balls to metric balls of the same radius (see Subsection 2.1
below):
###### Theorem A.
Let $V$ be a finite-dimensional real vector space with an inner product, $X$ a
metric space, and $\sigma\colon V\to X$ a submetry. Let $S\subset X$ be a
closed subset, and denote by $f\colon X\setminus S\to(0,\infty)$ the distance
function to $S$.
Then $\sigma^{-1}(S)$ is a convex subset of $V$ if and only if
$\lim\sup_{y\to x}\frac{f(y)-f(x)}{d(y,x)}=1$
for every $x\in X\setminus S$.
The condition given above on the function $f$ means that its gradient has norm
one at every point of $X\setminus S$. This can be made precise in Alexandrov
Geometry, see Section 3.
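To illustrate the criterion, consider the submetry $\sigma\colon\mathds{R}^{n}\to X=[0,\infty)$, $\sigma(v)=\|v\|$, induced by the action of the orthogonal group $\mathrm{O}(n)$. For $S=[0,a]$, one has $f(x)=x-a$ on $X\setminus S=(a,\infty)$, so the condition holds at every point, and indeed $\sigma^{-1}(S)$ is the closed ball of radius $a$, which is convex. For $S=[a,\infty)$ with $a>0$, one has $f(x)=a-x$ on $[0,a)$; the condition holds at every $x\in(0,a)$ (approach $x$ from below), but fails at $x=0$, where $f(y)-f(0)=-y$ for all $y$, so the $\lim\sup$ equals $-1$. Accordingly, $\sigma^{-1}(S)=\\{v\in\mathds{R}^{n}\mid\|v\|\geq a\\}$ is not convex.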
Theorem A fits in a general theme: the interplay between the geometry of the
isometric $G$-action on $V$ (or, more generally, some Riemannian manifold) and
the metric geometry of the orbit space $V/G$. This is a traditional topic
explored extensively in the literature, for example in [HK89, Gro02, GL14,
GLLM19, Men20, GKW21].
Going beyond the polar case, whenever a $G$-representation $V$ admits a
“reduction”, that is, a $G^{\prime}$-representation $V^{\prime}$ such that
$V/G$ is isometric to $V^{\prime}/G^{\prime}$ (called “quotient-equivalent” in
[GL14]), Theorem A implies that there is a bijection between the respective
closed invariant convex subsets. More generally:
###### Corollary B.
Let $V,V^{\prime}$ be finite-dimensional real vector spaces with inner
product, $X,X^{\prime}$ metric spaces, and $\sigma\colon V\to X$ and
$\sigma^{\prime}\colon V^{\prime}\to X^{\prime}$ be submetries. Let
$\varphi\colon X\to X^{\prime}$ be an isometry, and $S\subset X$ be a closed
subset. Then $\sigma^{-1}(S)$ is convex if and only if
$(\sigma^{\prime})^{-1}(\varphi(S))$ is convex. Thus $\varphi$ induces a
bijection between closed convex $\sigma$-saturated subsets of $V$ and closed
convex $\sigma^{\prime}$-saturated subsets of $V^{\prime}$.
Having generalized Corollary 2(b), we investigate the extent to which Theorem
A can be used to generalize Theorem 1 and Corollary 2(a). Since these concern
the orthogonal projection $\pi_{\Sigma}$, we need the “reduction” to be
induced by a vector subspace $\Sigma\subset V$. More precisely, we introduce
the following definition: Given a submetry $\sigma\colon V\to X$, such that
$\\{0\\}$ is a fiber, we call a subspace $\Sigma\subset V$ a _fat section_ for $\sigma$ (a term borrowed from [Mag09, Mag10]) if $\sigma|_{\Sigma}\colon\Sigma\to X$ is a submetry. Besides polar
representations, this generalizes a number of other objects, including
representations of non-trivial copolarity, principal isotropy group
reductions, and isoparametric foliations (see Section 4). Our second main
result is:
###### Theorem C.
Let $\sigma\colon V\to X$ be a submetry such that $\\{0\\}$ is a fiber, with
fat section $\Sigma\subset V$. Denote by $\pi_{\Sigma}$ the orthogonal
projection onto $\Sigma$. Then:
1. (a)
For any $\sigma$-fiber $F\subset V$,
$\operatorname{conv}(F)\cap\Sigma=\pi_{\Sigma}(\operatorname{conv}(F))=\operatorname{conv}(F\cap\Sigma).$
In particular,
$\pi_{\Sigma}(F)\subset\operatorname{conv}(F\cap\Sigma).$
2. (b)
For every $\sigma$-saturated convex set $K$,
$\pi_{\Sigma}(K)=K\cap\Sigma.$
3. (c)
The map $K\mapsto\pi_{\Sigma}(K)=K\cap\Sigma$ is a bijection between
$\sigma$-saturated convex subsets of $V$ and $\sigma|_{\Sigma}$-saturated
convex subsets of $\Sigma$.
Thus Corollary 2 holds in this more general situation, as does one of the
inclusions in the statement of Theorem 1. As for the reverse inclusion, that
is, the equality “$\pi_{\Sigma}(F)=\operatorname{conv}(F\cap\Sigma)$” (or,
equivalently, convexity of $\pi_{\Sigma}(F)$), it does hold for isoparametric
foliations, see [Ter86]. Beyond this class, it is easy to find counter-
examples, see Section 5.
We finish with a couple of open questions:
###### Question 3.
Is there a submetry with fat section, that is not isoparametric, and for which
$\pi_{\Sigma}(F)=\operatorname{conv}(F\cap\Sigma)$ for every $\sigma$-fiber
$F\subset V$?
###### Question 4.
In the situation of Corollary B, does the bijection between $\sigma$-saturated
closed convex subsets of $V$ and $\sigma^{\prime}$-saturated closed convex
subsets of $V^{\prime}$ preserve some special class of convex sets, such as
spectrahedra or spectrahedral shadows?
Special cases of Question 4 have received much attention recently (see, for
example, [KS20, Kum21, SS20]).
### Acknowledgements
It is a pleasure to thank Alexander Lytchak for help with Alexandrov Geometry,
and for suggesting an earlier version of Theorem A. I am also grateful to
Marco Radeschi for suggesting a proof (very similar to the one presented here)
of Proposition 13(b), that convex hulls of saturated sets are saturated.
## 2. Preliminaries
### 2.1. Submetries
The definition of _submetry_ goes back to [Ber87] (see also [KL20]): A
_submetry_ is a map $\sigma\colon Y\to X$ between metric spaces that maps
closed balls to closed balls of the same radius. The fibers of $\sigma$
(sometimes also called _leaves_) form a decomposition of $Y$ into pairwise
_equidistant_ closed subsets, in the sense that
$d(x,F^{\prime})=d(F,F^{\prime})$ for every two fibers $F,F^{\prime}$ and all
$x\in F$.
Conversely, given such a partition $\mathcal{F}$ of the metric space $Y$ (into
“leaves”), there is a unique metric on the set of leaves $X$ such that the
natural map $Y\to X$ is a submetry. Endowed with this metric, $X$ is called
the _leaf space_.
A function on $Y$ is called _$\sigma$ -basic_ (or just basic, if $\sigma$ is
clear from context) if it is constant on the fibers of $\sigma$, that is, if
it descends to a function on the “base” $X$. A subset of $Y$ is called
_$\sigma$ -saturated_ (or just saturated) if it is the union of
$\sigma$-fibers, that is, if it is the inverse image under $\sigma$ of a
subset of $X$.
The main source of examples of submetries are isometric group actions. Namely,
if the group $G$ acts on the metric space $Y$ by isometries and with closed
orbits, then the natural quotient map $Y\to Y/G$ is a submetry. The fibers
(“leaves”) are the $G$-orbits, the saturated subsets of $Y$ are the
$G$-invariant subsets, and the basic functions on $Y$ are the $G$-invariant
functions.
In the present paper, we consider submetries $V\to X$, where $V$ will always
denote a finite-dimensional real vector space with inner product (and
associated Euclidean metric). We mention a few structure results that apply to
this situation: $X$ is an Alexandrov space with non-negative curvature (see
[BGP92, 4.6]); every fiber is a subset of positive reach; and most fibers are
$C^{1,1}$-submanifolds (see [KL20] for the last two, and much more). If one
adds the assumption that every fiber is a smooth submanifold, one arrives at
the notion of “manifold submetry”, see [MR20c, MR20b] for structure results.
We will often add the assumption that the singleton $\\{0\\}$ is a
$\sigma$-fiber, where $0\in V$ denotes the origin. It implies the following
version of the Homothetic Transformation Lemma (see also [Mol88, Lemma 6.2],
and [MR20c, Lemma 24]). The importance of such submetries is that they model
the “infinitesimal behavior” of more general submetries (compare [KL20,
Sections 5,7]).
###### Lemma 5.
Let $\sigma\colon V\to X$ be a submetry such that $\\{0\\}$ is a fiber. If
$v,w\in V$ are such that $\sigma(v)=\sigma(w)$, then $\sigma(\lambda
v)=\sigma(\lambda w)$ for all $\lambda\geq 0$.
###### Proof.
The ray $t\mapsto tv$ (respectively $t\mapsto tw$), for $t\geq 0$, minimizes
distance between $\\{0\\}$ and each fiber of the form
$\sigma^{-1}(\sigma(t_{0}v))$ (respectively $\sigma^{-1}(\sigma(t_{0}w))$),
for $t_{0}\geq 0$. Thus they descend to geodesic rays $\gamma_{v},\gamma_{w}$
in $X$. Recall that, $X$ being an Alexandrov space, geodesics do not branch,
and, since $\gamma_{v}(0)=\gamma_{w}(0)$ and $\gamma_{v}(1)=\gamma_{w}(1)$, we
obtain $\gamma_{v}=\gamma_{w}$ (see [BBI01, Exercises 10.1.2, 10.1.5]). In
particular, $\gamma_{v}(\lambda)=\gamma_{w}(\lambda)$, or, in other words,
$\sigma(\lambda v)=\sigma(\lambda w)$. ∎
Given a submetry $\sigma\colon V\to X$, if the singleton $\\{v\\}$ is a fiber,
we will say that $v$ is a _fixed point_ (of $\sigma$).
We will need the following Lemma, which follows from [KL20, Proposition 5.6],
and is also a slight generalization of a well-known fact about singular
Riemannian foliations (see [MR20a, Proposition 5]). For completeness we
provide an elementary proof.
###### Lemma 6.
Let $\sigma\colon V\to X$ be a submetry such that the origin $0$ is a fixed
point. Then the set $V_{0}$ of all fixed points is a vector subspace of $V$,
and $\sigma$ “splits” in the sense that $X$ is isometric to $V_{0}\times
X^{\prime}$ for some metric space $X^{\prime}$, and
$\sigma=\operatorname{Id}_{V_{0}}\times\sigma^{\prime}\colon V=V_{0}\times
V_{0}^{\perp}\to V_{0}\times X^{\prime}=X.$
Here $\sigma^{\prime}\colon V_{0}^{\perp}\to X^{\prime}$ is a submetry whose
unique fixed point is the origin.
We first give a separate statement of a basic fact from Euclidean Geometry
that will be useful in the proof of Lemma 6 and elsewhere:
###### Lemma 7.
Let $V$ be a vector space with inner product. Then, for $u,v\in V$ with
$\|u\|=1$, one has:
$\langle
v,u\rangle=\sup_{t>0}\left(t-d\left(v,tu\right)\right)=\lim_{t\to\infty}\left(t-d\left(v,tu\right)\right)$
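###### Proof.
Since $\|u\|=1$, we have $d(v,tu)^{2}=\|v\|^{2}-2t\langle v,u\rangle+t^{2}$, so for $t>0$
$t-d(v,tu)=\frac{t^{2}-d(v,tu)^{2}}{t+d(v,tu)}=\frac{2t\langle v,u\rangle-\|v\|^{2}}{t+d(v,tu)}\to\langle v,u\rangle\quad\text{as }t\to\infty.$
Moreover, the triangle inequality gives $d(v,tu)\leq d(v,su)+(t-s)$ for $t\geq s$, so $t\mapsto t-d(v,tu)$ is non-decreasing, and the supremum equals the limit. ∎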
###### Proof of Lemma 6.
We use induction on $\dim(V)$. If $\dim(V)=0$, there is nothing to prove.
Assume $\dim(V)>0$.
If the origin is the only fixed point, there is nothing to prove.
If $v\neq 0$ is a fixed point, consider the linear function $\lambda_{v}\colon
V\to\mathds{R}$ given by $\lambda_{v}(x)=\langle x,v\rangle$. We claim that
$\lambda_{v}$ is basic.
Indeed, assume $x,x^{\prime}\in V\setminus\\{0\\}$ belong to the same fiber.
Then $\|x\|=\|x^{\prime}\|$ by equidistance of fibers (because the origin is a
fiber), and so $\sigma(tx/\|x\|)=\sigma(tx^{\prime}/\|x^{\prime}\|)$ for all
$t>0$ by the Homothetic Transformation Lemma (Lemma 5). Again by equidistance
of fibers, we obtain
$d\left(v,tx/\|x\|\right)=d\left(v,tx^{\prime}/\|x^{\prime}\|\right)$ for all
$t>0$ (because $\\{v\\}$ is a fiber, by assumption). But Lemma 7 yields
$\lambda_{v}(x)=\|x\|\sup_{t>0}\left(t-d\left(v,tx/\|x\|\right)\right)$
and similarly for $x^{\prime}$. Therefore
$\lambda_{v}(x)=\lambda_{v}(x^{\prime})$.
Since $\lambda_{v}$ is basic, the level sets are saturated. Given a fiber
$F\subset\lambda_{v}^{-1}(0)$, it follows from equidistance of fibers that the
translate $cv+F$ is again a fiber, for every $c\in\mathds{R}$ (see [MR20a,
Prop. 5, proof of (b)$\implies$(a)] for more details). Therefore $\sigma$
splits as
$\operatorname{Id}_{\mathds{R}}\times\sigma^{\prime}\colon\mathds{R}\times\lambda_{v}^{-1}(0)\to
X=\mathds{R}\times X^{\prime}$ for some submetry
$\sigma^{\prime}\colon\lambda_{v}^{-1}(0)\to X^{\prime}$. Applying the
inductive hypothesis to $\sigma^{\prime}$ finishes the proof. ∎
Regarding convex sets, we will use the elementary observation:
###### Lemma 8.
Let $\sigma\colon V\to X$ be a submetry such that the origin $0$ is a fixed
point. Then every closed convex $\sigma$-saturated set $K$ has a fixed point.
###### Proof.
Take the point $v\in K$ closest to the origin (existence and uniqueness follow
from the assumption that $K$ is closed and convex). Then $\\{v\\}$ is
saturated, hence a fiber, because it is the intersection of two saturated
sets: $K$ and the closed ball of radius $\|v\|$ around the origin. ∎
### 2.2. Convex sets and support functions
We will use the concepts of “support function” and “polar” (see e.g. [SSS11,
pages 280–281]), as well as a couple of basic facts.
###### Definition 9.
Let $A\subset V$ be a subset of a finite-dimensional real vector space with
inner product. The _support function_ $h(A,\cdot)\colon
V\to\mathds{R}\cup\infty$ is defined by
$h(A,v)=\sup_{a\in A}\langle v,a\rangle,$
and the _polar_ of $A$ is the subset $A^{\circ}\subset V$ defined by
$A^{\circ}=\\{v\in V\mid h(A,v)\leq 1\\}=\\{v\in V\mid\langle v,a\rangle\leq
1\quad\forall a\in A\\}.$
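For example, if $A=[-1,1]^{2}\subset\mathds{R}^{2}$ is the unit square, then $h(A,v)=|v_{1}|+|v_{2}|$, so $A^{\circ}=\\{v\mid|v_{1}|+|v_{2}|\leq 1\\}$ is the cross-polytope; dually, the polar of the cross-polytope is the square, an instance of the Bipolar Theorem below.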
###### Proposition 10.
Let $A\subset V$ be a closed subset of a finite-dimensional real vector space
with inner product.
1. (a)
(Bipolar Theorem) If $\operatorname{conv}(A)$ contains the origin, then
$A^{\circ\circ}=\operatorname{conv}(A)$.
2. (b)
If $\Sigma\subset V$ is a subspace, and $A$ is compact, convex, and contains
the origin, then $\pi_{\Sigma}(A)=(\Sigma\cap A^{\circ})^{\circ}$, where
$\pi_{\Sigma}\colon V\to V$ denotes orthogonal projection onto $\Sigma$.
###### Proof.
1. (a)
See [Bar02, IV (1.2) page 144].
2. (b)
It follows directly from Definition 9 that the support function of
$\pi_{\Sigma}(A)$ is the restriction to $\Sigma$ of the support function of
$A$, and, thus, that $\Sigma\cap A^{\circ}=(\pi_{\Sigma}(A))^{\circ}$.
Since $\pi_{\Sigma}(A)$ is a closed convex set containing the origin, the
Bipolar Theorem implies that
$\pi_{\Sigma}(A)=(\pi_{\Sigma}(A))^{\circ\circ}=(\Sigma\cap
A^{\circ})^{\circ}$.
∎
## 3. Detecting convexity in the base of a submetry
In this section we provide a proof of Theorem A, about how convexity can be
detected metrically in the quotient, and we also investigate convex hulls of
saturated sets. Some Alexandrov Geometry will be used, see [BBI01, Chapter
10], [BGP92], and [Pet07].
###### Proof of Theorem A.
Let $\tilde{f}\colon V\setminus\sigma^{-1}(S)\to(0,\infty)$ be given by
$\tilde{f}=f\circ\sigma$. It follows from the definition of submetry that
$\tilde{f}$ is the distance function to $\sigma^{-1}(S)$.
For $x\in X\setminus S$, let
$|\nabla^{+}f|(x)=\max\left\\{0,\ \lim\sup_{y\to
x}\frac{f(y)-f(x)}{d(y,x)}\right\\}\in[0,1]$
be the “ascending slope” of $f$ at $x$, and analogously for $\tilde{f}$. These
are at most one because $f,\tilde{f}$ are $1$-Lipschitz.
From the definition of submetry, it follows that
$|\nabla^{+}\tilde{f}|(v)=|\nabla^{+}f|(\sigma(v))$ for every $v\in
V\setminus\sigma^{-1}(S)$, see [KL20, Section 2.5].
Since $X,V$ are Alexandrov spaces, the distance function to any point is semi-
concave, and thus the functions $f,\tilde{f}$, being infima of such functions,
are semi-concave as well. In particular, they have well-defined gradients,
which are elements $\nabla_{x}f\in T_{x}X,\ \nabla_{v}\tilde{f}\in T_{v}V$ of
the respective tangent cones, for all $x\in X\setminus S$ and $v\in
V\setminus\sigma^{-1}(S)$. (See [Pet07, Section 1] for the definitions of
semi-concave functions and their gradients.)
Moreover, the norm $\|\nabla_{x}f\|$ of the gradient $\nabla_{x}f\in T_{x}X$
(that is, the distance to the apex of the cone $T_{x}X$) is the maximum of the
differential $d_{x}f$ on the space of directions $\Sigma_{x}X\subset T_{x}X$,
unless $d_{x}f\leq 0$, in which case $\|\nabla_{x}f\|=0$, see [Pet07, Section
1.3]. Therefore $\|\nabla_{x}f\|$ is equal to the ascending slope
$|\nabla^{+}f|(x)$, and analogously for $\tilde{f}$.
Assume $\sigma^{-1}(S)$ is convex. It is well-known that $\tilde{f}$ is
$C^{1}$ with gradient (in the sense of Calculus) of norm one — see, for
example, [BL06, Section 3.3, Exercise 12, on page 57]. Since for $C^{1}$
functions the standard gradient coincides with the gradient in the sense of
[Pet07, Section 1], it follows that $|\nabla^{+}\tilde{f}|$, and hence
$|\nabla^{+}f|$, is constant equal to one.
For the converse, assume $|\nabla^{+}f|(x)=1$ for all $x\in X\setminus S$.
Claim: For every $v\in V\setminus\sigma^{-1}(S)$, with $\tilde{f}(v)=l>0$,
there is a unique closest point $p(v)\in\sigma^{-1}(S)$ with $\|v-p(v)\|=l$.
Moreover, along the geodesic ray $\gamma(t)=p(v)+t(v-p(v))/l$, one has
$\tilde{f}(\gamma(t))=t$ for all $t\geq 0$.
Assuming the Claim, we show that $\sigma^{-1}(S)$ is convex. If not, there
must exist $P,Q\in\sigma^{-1}(S)$ such that $v:=(P+Q)/2\notin\sigma^{-1}(S)$.
We may assume, without loss of generality, that $\angle
p(v)\hat{v}P\geq\pi/2$. By an elementary Calculus argument (compare Lemma 7),
$\lim_{t\to\infty}\big{(}\|\gamma(t)-v\|-\|\gamma(t)-P\|\big{)}\geq 0.$
On the other hand, for $t\geq l$, one has
$\|\gamma(t)-v\|=t-l=\tilde{f}(\gamma(t))-l=d(\gamma(t),\sigma^{-1}(S))-l\leq\|\gamma(t)-P\|-l$
which implies $l\leq 0$, a contradiction. Therefore $\sigma^{-1}(S)$ is
convex.
Finally, we prove the Claim. Let $v\in V\setminus\sigma^{-1}(S)$, and set
$l=\tilde{f}(v)>0$. Choose a straight line $\gamma\colon\mathds{R}\to V$ such
that $\gamma|_{[0,l]}$ is a (unit-speed) minimizing geodesic from
$\sigma^{-1}(S)$ to $v=\gamma(l)$. Note that $\tilde{f}(\gamma(t))=t$ for
$t\in[0,l]$.
By [Pet07, Proposition 2.1.2], there is a gradient curve
$\alpha\colon[0,\infty)\to V$ for $\tilde{f}$, starting at $v=\alpha(0)$. This
means that, for all $t\in[0,\infty)$, the right tangent vector $\alpha^{+}(t)$
coincides with the gradient $\nabla_{\alpha(t)}\tilde{f}$.
Since $|\nabla^{+}f|(\sigma(\alpha(t)))=1$ by assumption, the gradient
$\nabla_{\alpha(t)}\tilde{f}$ has norm one, so that $\alpha$ is parametrized
by arc length.
Moreover, by definition of gradient,
$(d_{\alpha(t)}\tilde{f})(\alpha^{+}(t))=1$ for all $t\geq 0$. This means that
$\tilde{f}\circ\alpha$ has right-derivative identically equal to one, which
implies that
$\tilde{f}(\alpha(t))=l+t\quad\forall t\geq 0.$
But $l+t$ is the length of the concatenated curve
$\gamma|_{[0,l]}*\alpha|_{[0,t]}$, which implies that this curve minimizes
distance between $\sigma^{-1}(S)$ and $\alpha(t)$, so it must be a line
segment, and therefore $\alpha(t)=\gamma(l+t)$ for all $t\geq 0$. In
particular $\gamma|_{[0,l]}$ and the closest point $p(v)=\gamma(0)$ are
uniquely determined by $v$, and $\tilde{f}(\gamma(t))=t$ for $t\in[l,\infty)$.
∎
###### Remark 11.
A key step in the proof of Theorem A above is the fact that if the distance
function $\tilde{f}$ from the closed subset $\sigma^{-1}(S)\subset V$ has
gradient (in the sense of Alexandrov Geometry [Pet07, Section 1]) of norm one
at every $x\in V\setminus\sigma^{-1}(S)$, then $\sigma^{-1}(S)$ is convex. An
alternative proof of this fact, avoiding gradient curves, goes as follows.
There is an explicit formula (see [Pet07, page 10]) for the differential
$d_{x}\tilde{f}$ of $\tilde{f}$ at $v\in T_{x}V=V$, namely
$d_{x}\tilde{f}(v)=\min_{\xi\in\Uparrow_{x}^{\sigma^{-1}(S)}}\langle-\xi,v\rangle$
where $\Uparrow_{x}^{\sigma^{-1}(S)}$ denotes the (compact) subset of the unit
sphere in $T_{x}V=V$ of all initial vectors of minimizing geodesics from $x$
to $\sigma^{-1}(S)$. This formula can be proved using the First Variation
formula for the arc length, along the same lines as the proof of Berger’s
Lemma (see [CE08, Lemma 6.2]).
The condition $\|\nabla_{x}\tilde{f}\|=1$ is equivalent to $d_{x}\tilde{f}$
having maximum value equal to $1$ on the unit sphere, and hence, from the
formula above, to $\Uparrow_{x}^{\sigma^{-1}(S)}$ being a singleton (these
conditions are also equivalent to differentiability of the distance function;
compare [Sak96, Prop. 4.8] and [KL21, Proposition 4.1]). Thus, for every $x\in
V\setminus\sigma^{-1}(S)$, there is a unique minimizing geodesic from $x$ to
$\sigma^{-1}(S)$, that is, $\sigma^{-1}(S)$ is a set of “infinite reach”. But
a closed subset of Euclidean space has infinite reach if and only if it is
convex, see [KP99, Theorems 6.2.12, 6.2.13].
###### Remark 12.
More generally, the distance function to the closed subset
$\sigma^{-1}(S)\subset V$ has gradient of norm one _near_ $\sigma^{-1}(S)$ (as
opposed to on all of $V\setminus\sigma^{-1}(S)$) if and only if
$\sigma^{-1}(S)\subset V$ has “positive reach” — see [KL21, Proposition 1.3
and 4.1].
Thus, one obtains an analogue of Theorem A for sets of positive reach (instead
of convex). Namely, if one replaces “for every $x\in X\setminus S$” with “for
every $x\in O\setminus S$, for some open subset $O\subset X$ containing $S$”,
one obtains a metric condition on $S$ equivalent to $\sigma^{-1}(S)$ having
positive reach.
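As a concrete illustration (ours, not from the cited sources): take the
submetry $\sigma\colon\mathds{R}^{2}\to[0,\infty)$, $\sigma(x)=\|x\|$, and
$S=\\{1\\}$, so that $\sigma^{-1}(S)$ is the unit circle. The distance function
$\tilde{f}(x)=\big{|}\|x\|-1\big{|}$ has gradient of norm one everywhere except
at the origin, so the condition holds near the circle but not on all of
$\mathds{R}^{2}\setminus\sigma^{-1}(S)$; accordingly, the unit circle has reach
equal to $1$ (positive, but not infinite) and is not convex.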
### 3.1. Convex hulls and polars
Using support functions (Definition 9), we show that taking the convex hull
(or polar) preserves saturation:
###### Proposition 13.
Given a submetry $\sigma\colon V\to X$, such that $\\{0\\}$ is a fiber, and
$A$ any closed saturated subset of $V$, one has:
1. (a)
The support function $h(A,\cdot)$ is $\sigma$-basic.
2. (b)
The convex hull $\operatorname{conv}(A)$ is $\sigma$-saturated.
3. (c)
The polar $A^{\circ}$ is $\sigma$-saturated.
###### Proof.
1. (a)
Assume first that $A$ is a single fiber $F$. Then $F$ is contained in a sphere
of radius $r$ centered at the origin, and, using Lemma 7, one obtains, for
every $v\in V$:
$h(F,v)=\sup_{a\in F}r\left(\sup_{t>0}\left(t-d\left(v,{t\over
r}a\right)\right)\right)=r\sup_{t>0}\left(t-d\left(v,{t\over
r}F\right)\right).$
By the Homothetic Transformation Lemma (Lemma 5), the set ${t\over r}F$ is a
fiber for every $t>0$, thus $d\left(\cdot,{t\over r}F\right)$ is a basic
function for all $t>0$. Therefore $h(F,\cdot)$ is a basic function.
For $A$ not necessarily a single fiber, one has $h(A,v)=\sup_{F\subset
A}h(F,v)$ for all $v\in V$, where the supremum is taken over all fibers $F$
contained in $A$. Thus $h(A,\cdot)$ is basic.
2. (b)
Since $A$ is closed, $\operatorname{conv}(A)$ is the intersection of all half-
spaces that contain $A$. In terms of support functions, this reads
$\operatorname{conv}(A)=\\{x\in V\mid\langle v,x\rangle\leq\lambda\quad\forall
v,\lambda\text{ such that }h(A,v)\leq\lambda\\}.$
By part (a), the support function $h(A,\cdot)$ is basic, and therefore this
can be rewritten as
$\operatorname{conv}(A)=\left\\{x\in V\mid\sup_{w\in F_{v}}\langle
w,x\rangle\leq\lambda\quad\forall v,\lambda\text{ such that
}h(A,v)\leq\lambda\right\\},$
where $F_{v}$ denotes the $\sigma$-fiber containing $v$. But $\sup_{w\in
F_{v}}\langle w,x\rangle$ is exactly $h(F_{v},x)$, and, again by part (a),
$h(F_{v},\cdot)$ is basic. Thus $\operatorname{conv}(A)$ is the intersection
of saturated sets, hence saturated.
3. (c)
This follows immediately from part (a) and the definition of the polar as a
sub-level set of the support function.
∎
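To make parts (a) and (c) concrete, here is a minimal numerical sketch
(illustrative only, not from the paper) for the simplest submetry
$\sigma\colon\mathds{R}^{2}\to[0,\infty)$, $\sigma(x)=\|x\|$, whose fibers are
the circles centered at the origin. It checks that the support function of a
fiber depends only on $\|v\|$ (i.e., is basic), and that the polar of the
circle of radius $r$ is the saturated convex disk of radius $1/r$:

```python
import numpy as np

rng = np.random.default_rng(1)
r = 2.0
# Sample the fiber F = circle of radius r (an O(2)-orbit, i.e. a sigma-fiber)
theta = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
F = r * np.stack([np.cos(theta), np.sin(theta)], axis=1)   # shape (4000, 2)

def support(A, v):
    """h(A, v) = sup_{a in A} <a, v>, approximated over the sample A."""
    return np.max(A @ v)

# h(F, .) is basic: it depends only on ||v||; here h(F, v) = r * ||v||
for _ in range(5):
    v = rng.standard_normal(2)
    assert np.isclose(support(F, v), r * np.linalg.norm(v), rtol=1e-5)

# The polar of F = {x : <a, x> <= 1 for all a in F}: since h(F, x) = r * ||x||,
# x lies in the polar iff ||x|| <= 1/r -- the disk of radius 1/r, again
# rotation-invariant (saturated) and convex.
assert support(F, np.array([0.9 / r, 0.0])) <= 1.0   # interior point
assert support(F, np.array([1.1 / r, 0.0])) > 1.0    # exterior point
```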
###### Remark 14.
Proposition 13 and Theorem A imply that convex hulls can be detected
metrically in the quotient. More precisely, given a submetry $\sigma\colon
V\to X$, such that $\\{0\\}$ is a fiber, and a closed saturated subset
$A\subset V$, the convex hull $\operatorname{conv}(A)$, being closed and
saturated, equals the smallest closed convex saturated subset of $V$
containing $A$. Using Theorem A, $\operatorname{conv}(A)$ is then the inverse
image $\sigma^{-1}(S)$ of the smallest closed subset $S\subset X$ satisfying
the (purely metric) condition in Theorem A and containing $\sigma(A)$.
If, given $S\subset X$ closed, we define $\operatorname{conv}_{0}(S)$ to be
the unique closed subset $C\subset X$ such that
$\operatorname{conv}(\sigma^{-1}(S))=\sigma^{-1}(C)$, then the map
$\operatorname{conv}_{0}$ depends only on the distance function of $X$. In
particular, in the situation of Corollary B,
$\varphi(\operatorname{conv}_{0}(S))=\operatorname{conv}_{0}(\varphi(S)).$
Of special interest to us in this article is the case where $S$ is a single
point. Then $\varphi$ maps (the $\sigma$-image of) every “fibertope” (a
generalization of “orbitope”, see e.g. [SSS11]), that is, every convex set of
the form $\operatorname{conv}(F)$, where $F$ is a single $\sigma$-fiber, to
another fibertope in $X^{\prime}$.
## 4\. Fat sections
###### Definition 15.
Given a submetry $\sigma\colon V\to X$, such that $\\{0\\}$ is a fiber, we
call a subspace $\Sigma\subset V$ a _fat section_ for $\sigma$ if
$\sigma|_{\Sigma}\colon\Sigma\to X$ is a submetry.
In terms of equidistant decompositions (see Subsection 2.1), Definition 15 can
be rephrased as the following conditions on $\Sigma$: it meets all fibers, the
decomposition $\mathcal{F}$ of $\Sigma$ given by
$\mathcal{F}=\\{F\cap\Sigma\mid F\text{ fiber of }\sigma\\}$ is equidistant,
and the natural bijection $\Sigma/\mathcal{F}\to X$ is an isometry.
In particular, the following are examples of fat sections:
* •
$\sigma\colon V\to X=V/G$ is the natural quotient map, where $V$ is a polar
$G$-representation, and $\Sigma$ is a section.
* •
More generally, $\sigma\colon V\to V/G$, where $V$ is a $G$-representation of
nontrivial “copolarity”, and $\Sigma$ is a “generalized minimal section”. See
[GOT04] and [GL14, Section 2.3].
* •
$\sigma\colon V\to V/G$, where $V$ is an effective $G$-representation with
non-trivial principal isotropy group $K$, and $\Sigma$ is the fixed-point set
$\Sigma=V^{K}$. See [GL14, page 62] and references therein.
* •
the fibers of $\sigma\colon V\to X$ form an isoparametric foliation, and
$\Sigma$ is the normal space to a regular leaf. In this case, $\dim(\Sigma)=2$
and $\sigma|_{\Sigma}\colon\Sigma\to X$ is the quotient map of a dihedral
group action.
We turn to the proof of Theorem C, which is a partial generalization of
Theorem 1 (Kostant’s Theorem), and a full generalization of Corollary 2, to
the case of fat sections. We note that Theorem 1 does not fully generalize to
fat sections, see Section 5 for counter-examples.
Since Theorem C concerns orthogonal projection onto a section, but Corollary B
concerns intersection with the fat section, an important step is to link these
two via Proposition 10 (including the Bipolar Theorem). A technical issue
arises from the fact that Proposition 10 applies only to convex sets
_containing the origin_ , which we address using Lemmas 6 and 8, together with
the following:
###### Lemma 16.
Let $\sigma\colon V\to X$ be a submetry with $\\{0\\}$ a fiber, and
$\Sigma\subset V$ a fat section. Denote by $V_{0}\subset V$ (respectively
$\Sigma_{0}\subset\Sigma$) the set of all fixed points, that is, the set of
$v\in V$ (respectively $v\in\Sigma$) such that $\\{v\\}$ is a $\sigma$-fiber
(respectively $\sigma|_{\Sigma}$-fiber). Then $V_{0}=\Sigma_{0}$.
###### Proof.
Since $\sigma|_{\Sigma}$ is onto $X$, one has $V_{0}\subset\Sigma_{0}$.
For the reverse inclusion $\Sigma_{0}\subset V_{0}$, use Corollary B. Indeed,
for every $v\in\Sigma_{0}$, $\sigma|_{\Sigma}^{-1}(\sigma(v))=\\{v\\}$ is
convex, hence $\sigma^{-1}(\sigma(v))$ is also convex, but since it is
contained in a sphere, it must be the singleton $\\{v\\}$. In other words,
$v\in V_{0}$. ∎
###### Proof of Theorem C.
1. (a)
First note that $F$ is compact (being a closed subset of a sphere), so
$K=\operatorname{conv}(F)$ is a compact, convex, $\sigma$-saturated subset of
$V$, by Proposition 13.
Reduction: We reduce the proof to the case where $K$ contains the origin.
Indeed, by Lemma 8, $K$ contains a fixed point $v\in V$, that is, a point $v$
such that $\\{v\\}$ is a $\sigma$-fiber. By Lemma 16, $v\in\Sigma$. By Lemma
6, the translation map $V\to V$, given by $w\mapsto w-v$, sends
$\sigma$-fibers to $\sigma$-fibers.
Thus $F-v$ is a fiber such that $K-v=\operatorname{conv}(F-v)$ contains the
origin, and, assuming
$(K-v)\cap\Sigma=\pi_{\Sigma}(K-v)=\operatorname{conv}((F-v)\cap\Sigma)$, we
obtain
$K\cap\Sigma=v+(K-v)\cap\Sigma=v+\pi_{\Sigma}(K-v)=\pi_{\Sigma}(K)$
and, similarly, $K\cap\Sigma=\operatorname{conv}(F\cap\Sigma)$. This finishes
the proof of the Reduction.
By Corollary B, the map $A\mapsto\Sigma\cap A$ is a bijection between origin-
containing saturated closed convex subsets of $V$ and $\Sigma$. This bijection
preserves the partial order by inclusion.
By Propositions 13(c) and 10(a), taking the polar is an order-reversing
involution of the set of all origin-containing saturated closed convex subsets
of $V$ (respectively $\Sigma$).
Composing these bijections, the map $A\mapsto(\Sigma\cap A^{\circ})^{\circ}$
is another order-preserving bijection between origin-containing saturated
closed convex subsets of $V$ and $\Sigma$. If $A\subset V$ is a closed ball
centered at the origin, $(\Sigma\cap A^{\circ})^{\circ}$ is a closed ball (in
$\Sigma$) of the same radius. Thus, the map $A\mapsto(\Sigma\cap
A^{\circ})^{\circ}$ is also an order-preserving bijection between origin-
containing saturated _compact_ convex subsets of $V$ and $\Sigma$. By
Proposition 10(b), this map coincides with $A\mapsto\pi_{\Sigma}(A)$.
Thus there is $K^{\prime}\subset V$ compact, convex, saturated, origin-
containing, such that $\pi_{\Sigma}(K^{\prime})=K\cap\Sigma$. Since
$K\cap\Sigma\subset\pi_{\Sigma}(K)$, we have $K^{\prime}\subset K$.
Choose any $v\in\Sigma\cap F\subset K\cap\Sigma$, and $v^{\prime}\in
K^{\prime}$ such that $\pi_{\Sigma}(v^{\prime})=v$. Since $v^{\prime}\in
K=\operatorname{conv}(F)$, and $F$ is contained in the sphere of radius
$\|v\|$ around the origin, we have $\|v^{\prime}\|\leq\|v\|$. Thus
$v=v^{\prime}$, and $v\in K^{\prime}$. Since $K^{\prime}$ is saturated, we
obtain $F\subset K^{\prime}$, and, since $K^{\prime}$ is convex, we obtain
$K=\operatorname{conv}(F)\subset K^{\prime}$. Thus $K=K^{\prime}$, and
$K\cap\Sigma=\pi_{\Sigma}(K)$.
The other equation $K\cap\Sigma=\operatorname{conv}(F\cap\Sigma)$ follows from
the fact that $A\mapsto A\cap\Sigma$ preserves “fibertopes”, see Remark 14.
Finally, the last statement is clear:
$\pi_{\Sigma}(F)\subset\pi_{\Sigma}(\operatorname{conv}(F))=\operatorname{conv}(F\cap\Sigma)$.
2. (b)
The inclusion $K\cap\Sigma\subset\pi_{\Sigma}(K)$ is clear. For the reverse
inclusion, let $v\in\pi_{\Sigma}(K)$. Choose $v^{\prime}\in K$ with
$\pi_{\Sigma}(v^{\prime})=v$, and denote by $F_{v^{\prime}}\subset K$ the
$\sigma$-fiber through $v^{\prime}$. Then, by (a),
$v\in\pi_{\Sigma}(F_{v^{\prime}})\subset\operatorname{conv}(\Sigma\cap
F_{v^{\prime}})\subset K\cap\Sigma$.
3. (c)
The map $A\mapsto A\cap\Sigma$ is a bijection from the set of all
$\sigma$-saturated subsets of $V$ to the set of all
$\sigma|_{\Sigma}$-saturated subsets of $\Sigma$, with inverse
$B\mapsto\sigma^{-1}(\sigma(B))$. We need to show that both these maps
preserve convexity. The first map $A\mapsto A\cap\Sigma$ clearly does. Let
$B\subset\Sigma$ be convex and $\sigma|_{\Sigma}$-saturated. Define
$K=\operatorname{conv}(\sigma^{-1}(\sigma(B)))$, which is a convex
$\sigma$-saturated set by Proposition 13(b). Using part (b),
$K\cap\Sigma=\pi_{\Sigma}(K)=\operatorname{conv}(\pi_{\Sigma}(\sigma^{-1}(\sigma(B))))$.
Using part (a) and convexity of $B$, for each $\sigma$-fiber
$F\subset\sigma^{-1}(\sigma(B))$, we have
$\pi_{\Sigma}(F)\subset\operatorname{conv}(F\cap\Sigma)\subset B$, and thus
$K\cap\Sigma\subset B$. The reverse inclusion being clear, we obtain
$K\cap\Sigma=B$, and therefore, $K=\sigma^{-1}(\sigma(B))$, that is,
$\sigma^{-1}(\sigma(B))$ is convex.
∎
## 5\. Examples
To illustrate that Theorem C applies more generally than Kostant’s Theorem,
one can point to any representation that is not polar but has non-trivial
copolarity in the sense of [GOT04]. Here is one concrete example (see tables
in [GKW21] for many more):
###### Example 17.
Let $2\leq k\leq n-1$ be integers, and let $G=\mathsf{O}(n)$ be the orthogonal
group ($\mathsf{O}(n)$ can be replaced with $\mathsf{SO}(n)$ without changing
the orbits; in other words, the orbits are connected), acting diagonally on
$V=(\mathds{R}^{n})^{\oplus k}$. Then a principal isotropy group is
$H=\mathsf{O}(n-k)$ (embedded as a diagonal block), whose fixed point set is
$\Sigma=(\mathds{R}^{k})^{\oplus k}$. The normalizer of $H$ in $G$ is
$N(H)=\mathsf{O}(k)\times\mathsf{O}(n-k)$ (embedded as block-diagonal
matrices). Its action has $\mathsf{O}(n-k)$ as ineffective kernel, so it has
the same orbits as the diagonal action of $\mathsf{O}(k)$ on $\Sigma$. The
orbit spaces $V/G$ and $\Sigma/N=\Sigma/\mathsf{O}(k)$ are isometric, and
$\Sigma$ is a fat section for the natural quotient map $\sigma\colon V\to
V/G$.
Next we illustrate that Theorem 1 does not extend to all submetries with a fat
section. That is, the reverse inclusion in Theorem C(a) does not always hold.
The easiest counter-examples can be found considering polar representations
with _disconnected_ orbits, thus also illustrating the necessity of the
hypothesis that orbits are connected in Theorem 1. At the most extreme, one
can take $G\subset\mathsf{O}(V)$ finite and non-trivial. Then $\Sigma=V$ is a
section, so $\pi_{\Sigma}$ is the identity map, and $\pi_{\Sigma}(G\cdot
v)=G\cdot v$ is not convex unless it is a single point. Similar counter-
examples with $G$ infinite are also easily constructed.
A more interesting example, with connected orbits:
###### Example 18.
Let $V,G,\Sigma,\ldots$ be as in Example 17. Assume $k>\frac{2n-1}{3}$. Then, for
a generic $v\in\Sigma$, $\pi_{\Sigma}(G\cdot v)$ is strictly contained in the
orbitope $\mathsf{O}(k)\cdot v$. Indeed, since the
$\mathsf{O}(k)$-representation $(\mathds{R}^{k})^{\oplus k}$ is of real type,
the generic orbitope has non-empty interior, that is, dimension $k^{2}$ (see
[SSS11, pages 279-280]). On the other hand, the principal $G$-orbits have
dimension $\dim\mathsf{O}(n)-\dim\mathsf{O}(n-k)=kn-k(k+1)/2$. When
$k>\frac{2n-1}{3}$, this is smaller than $k^{2}$.
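For completeness, the last step can be checked directly: dividing by $k>0$ and
rearranging,
$k^{2}>kn-\frac{k(k+1)}{2}\iff k>n-\frac{k+1}{2}\iff 3k>2n-1\iff k>\frac{2n-1}{3},$
which is exactly the assumed range of $k$.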
## References
* [Bar02] Alexander Barvinok. A course in convexity, volume 54 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2002.
* [BBI01] Dmitri Burago, Yuri Burago, and Sergei Ivanov. A course in metric geometry, volume 33 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2001.
* [Ber87] V. N. Berestovskiĭ. “Submetries” of three-dimensional forms of nonnegative curvature. Sibirsk. Mat. Zh., 28(4):44–56, 224, 1987.
* [BGP92] Yu. Burago, M. Gromov, and G. Perel’man. A. D. Aleksandrov spaces with curvatures bounded below. Uspekhi Mat. Nauk, 47(2(284)):3–51, 222, 1992.
* [BL06] Jonathan M. Borwein and Adrian S. Lewis. Convex analysis and nonlinear optimization, volume 3 of CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer, New York, second edition, 2006. Theory and examples.
* [CE08] Jeff Cheeger and David G. Ebin. Comparison theorems in Riemannian geometry. AMS Chelsea Publishing, Providence, RI, 2008. Revised reprint of the 1975 original.
* [Dad85] Jiri Dadok. Polar coordinates induced by actions of compact Lie groups. Trans. Amer. Math. Soc., 288(1):125–137, 1985.
* [GKW21] Claudio Gorodski, Andreas Kollross, and Burkhard Wilking. Actions on positively curved manifolds and boundary in the orbit space. arXiv e-prints, page arXiv:2112.00513, December 2021.
* [GL14] Claudio Gorodski and Alexander Lytchak. On orbit spaces of representations of compact Lie groups. J. Reine Angew. Math., 691:61–100, 2014.
* [GLLM19] Claudio Gorodski, Christian Lange, Alexander Lytchak, and Ricardo A. E. Mendes. A diameter gap for quotients of the unit sphere. To appear in JEMS. arXiv e-prints, page arXiv:1903.12619, March 2019.
* [GOT04] Claudio Gorodski, Carlos Olmos, and Ruy Tojeiro. Copolarity of isometric actions. Trans. Amer. Math. Soc., 356(4):1585–1608, 2004.
* [Gro02] Karsten Grove. Geometry of, and via, symmetries. In Conformal, Riemannian and Lagrangian geometry (Knoxville, TN, 2000), volume 27 of Univ. Lecture Ser., pages 31–53. Amer. Math. Soc., Providence, RI, 2002.
* [HK89] Wu-Yi Hsiang and Bruce Kleiner. On the topology of positively curved $4$-manifolds with symmetry. J. Differential Geom., 29(3):615–621, 1989.
* [KL20] Vitali Kapovitch and Alexander Lytchak. Structure of Submetries. arXiv e-prints, page arXiv:2007.01325, July 2020.
* [KL21] Vitali Kapovitch and Alexander Lytchak. Remarks on manifolds with two-sided curvature bounds. Anal. Geom. Metr. Spaces, 9(1):53–64, 2021.
* [Kos73] Bertram Kostant. On convexity, the Weyl group and the Iwasawa decomposition. Ann. Sci. École Norm. Sup. (4), 6:413–455 (1974), 1973.
* [KP99] Steven G. Krantz and Harold R. Parks. The geometry of domains in space. Birkhäuser Advanced Texts: Basler Lehrbücher. [Birkhäuser Advanced Texts: Basel Textbooks]. Birkhäuser Boston, Inc., Boston, MA, 1999.
* [KS20] Tim Kobert and Claus Scheiderer. Spectrahedral representation of polar orbitopes. arXiv e-prints, page arXiv:2010.02045, October 2020.
* [Kum21] Mario Kummer. Spectral linear matrix inequalities. Advances in Mathematics, 384:107749, 2021.
* [LRT99] R. S. Leite, T. R. W. Richa, and C. Tomei. Geometric proofs of some theorems of Schur-Horn type. Linear Algebra Appl., 286(1-3):149–173, 1999.
* [Mag09] Frederick Magata. Reductions, Resolutions and the Copolarity of Isometric Group Actions. arXiv e-prints, page arXiv:0908.0183, August 2009.
* [Mag10] Frederick Magata. A general Weyl-type integration formula for isometric group actions. Transform. Groups, 15(1):184–200, 2010.
* [Men20] Ricardo A. E. Mendes. Lifting isometries of orbit spaces. To appear in BLMS. arXiv e-prints, page arXiv:2004.00097, March 2020.
* [Mol88] Pierre Molino. Riemannian foliations, volume 73 of Progress in Mathematics. Birkhäuser Boston, Inc., Boston, MA, 1988. Translated from the French by Grant Cairns, With appendices by Cairns, Y. Carrière, É. Ghys, E. Salem and V. Sergiescu.
* [MR20a] R. A. E. Mendes and M. Radeschi. Singular Riemannian foliations and their quadratic basic polynomials. Transform. Groups, 25(1):251–277, 2020.
* [MR20b] Ricardo A. E. Mendes and Marco Radeschi. Applications of Foliation Theory to Invariant Theory. arXiv e-prints, page arXiv:2012.07914, December 2020.
* [MR20c] Ricardo A. E. Mendes and Marco Radeschi. Laplacian algebras, manifold submetries and the inverse invariant theory problem. Geom. Funct. Anal., 30(2):536–573, 2020.
* [Pet07] Anton Petrunin. Semiconcave functions in Alexandrov’s geometry. In Surveys in differential geometry. Vol. XI, volume 11 of Surv. Differ. Geom., pages 137–201. Int. Press, Somerville, MA, 2007.
* [Sak96] Takashi Sakai. Riemannian geometry, volume 149 of Translations of Mathematical Monographs. American Mathematical Society, Providence, RI, 1996. Translated from the 1992 Japanese original by the author.
* [SS20] Raman Sanyal and James Saunderson. Spectral Polyhedra. arXiv e-prints, page arXiv:2001.04361, January 2020.
* [SSS11] Raman Sanyal, Frank Sottile, and Bernd Sturmfels. Orbitopes. Mathematika, 57(2):275–314, 2011.
* [Ter86] Chuu-Lian Terng. Convexity theorem for isoparametric submanifolds. Invent. Math., 85(3):487–492, 1986.
The estimate
(6.1) $\big{\|}\Pi(\partial_{x}v\,\Pi
v^{p})\big{\|}_{Z^{s}}\leq
CT^{\kappa}\|v\|_{Y^{1}_{T}}^{p}\|v\|_{Y^{s}_{T}},\quad\kappa>0$
will be enough. Its proof follows by combining the following propositions. The
next statement is a slightly modified version of [7, Theorem 3].
###### Proposition 6.1.
For $s\geq 1$ there exists a constant $C>0$ such that
(6.2) $\|u^{p}\|_{X^{s-1,\frac{1}{2}}}\leq
C\|u\|_{Y^{s}}\,\|u\|_{Y^{1}}^{p-1}\,.$
We shall also need the following bilinear estimate.
###### Proposition 6.2.
For $s\geq 1$ there exists $C>0$ such that, for $T\in(0,1)$,
$\big{\|}\Pi(\Pi u_{1}\,\Pi u_{2})\big{\|}_{Z^{s}}\leq
CT^{\kappa}\big{(}\|u_{1}\|_{X^{s-1,\frac{1}{2}}_{T}}\|u_{2}\|_{X^{0,\frac{1}{2}}_{T}}+\|u_{1}\|_{X^{0,\frac{1}{2}}_{T}}\|u_{2}\|_{X^{s-1,\frac{1}{2}}_{T}}\big{)},\quad\kappa>0\,.$
Let us see how (6.1) is a consequence of both propositions: from Proposition
6.2 we get
$\big{\|}\Pi(\partial_{x}v\Pi
v^{p})\big{\|}_{Z^{s}}=\big{\|}\Pi(\Pi\partial_{x}v\Pi
v^{p})\big{\|}_{Z^{s}}\leq
CT^{\kappa}\big{(}\|v^{p}\|_{X^{s-1,\frac{1}{2}}}\|\partial_{x}v\|_{X^{0,\frac{1}{2}}}+\|v^{p}\|_{X^{0,\frac{1}{2}}}\|\partial_{x}v\|_{X^{s-1,\frac{1}{2}}}\big{)}.$
Next, using Proposition 6.1, we get
$\|v^{p}\|_{X^{s-1,\frac{1}{2}}}\leq
C\|v\|_{Y^{s}}\,\|v\|_{Y^{1}}^{p-1}\,,\quad\|v^{p}\|_{X^{0,\frac{1}{2}}}\leq
C\|v\|_{Y^{1}}^{p}\,,$
which, together with the elementary bounds
$\|\partial_{x}v\|_{X^{0,\frac{1}{2}}}\leq C\|v\|_{Y^{1}}$ and
$\|\partial_{x}v\|_{X^{s-1,\frac{1}{2}}}\leq C\|v\|_{Y^{s}}$, gives (6.1).
###### Proof of Proposition 6.1.
Write
(6.3)
${\mathcal{F}}(u^{p})(\tau,n)=\int_{\tau=\tau_{1}+\cdots+\tau_{p}}\,\,\,\sum_{n=n_{1}+\cdots+n_{p}}\,\,\prod_{k=1}^{p}\hat{u}(\tau_{k},n_{k})\,,$
where ${\mathcal{F}}$ and $\hat{u}$ denote the space-time Fourier transform
(continuous in time and discrete in space).
(6.4)
$\|u^{p}\|_{X^{s-1,\frac{1}{2}}}^{2}=\int_{{\mathbb{R}}}\sum_{n\in{\mathbb{Z}}}\langle
n\rangle^{2(s-1)}\langle\tau+n^{3}\rangle\,|{\mathcal{F}}(u^{p})(\tau,n)|^{2}\,d\tau\,.$
Notice that the r.h.s. in (6.4) may be bounded by
(6.5) $\int_{{\mathbb{R}}}\sum_{n\in{\mathbb{Z}}}\langle
n\rangle^{2(s-1)}\langle\tau+n^{3}\rangle\big{(}\int_{\tau=\tau_{1}+\cdots+\tau_{p}}\,\,\sum_{n=n_{1}+\cdots+n_{p}}\,\,\prod_{k=1}^{p}|\widehat{u}(\tau_{k},n_{k})|\big{)}^{2}\,d\tau.$
Hence if we define $w(t,x)$ by $\hat{w}(\tau,n)=|\hat{u}(\tau,n)|$ we get
$\|u\|_{X^{s,b}}=\|w\|_{X^{s,b}}$, $\|u\|_{Y^{s}}=\|w\|_{Y^{s}}$, and we are
reduced to estimating
(6.6) $\int_{{\mathbb{R}}}\sum_{n\in{\mathbb{Z}}}\langle
n\rangle^{2(s-1)}\langle\tau+n^{3}\rangle\,\big{(}\int_{\tau=\tau_{1}+\cdots+\tau_{p}}\,\,\sum_{n=n_{1}+\cdots+n_{p}}\,\,\prod_{k=1}^{p}\widehat{w}(\tau_{k},n_{k})\big{)}^{2}d\tau\,\,\,.$
Next we split the domain of integration and we consider first the contribution
to (6.6) in the region
(6.7) $|\tau+n^{3}|\leq 10p|\tau_{1}+n_{1}^{3}|.$
If we define $w_{1}$ by
$\widehat{w_{1}}(\tau,n)=\langle\tau+n^{3}\rangle^{\frac{1}{2}}\,\widehat{w}(\tau,n)$,
then the contribution to (6.6) in the region (6.7) can be controlled in the
physical space variables as follows
$\displaystyle C\|w_{1}w^{p-1}\|_{L^{2}({\mathbb{R}};H^{s-1})}^{2}\leq$
$\displaystyle
C\big{(}\|w_{1}\|_{L^{2}({\mathbb{R}};H^{s-1})}^{2}\|w^{p-1}\|_{L^{\infty}({\mathbb{R}};L^{\infty})}^{2}+\|w_{1}\|_{L^{2}({\mathbb{R}};L^{\infty})}^{2}\|w^{p-1}\|_{L^{\infty}({\mathbb{R}};H^{s-1})}^{2}\big{)}$
$\displaystyle\leq$ $\displaystyle
C\big{(}\|w\|_{X^{s-1,\frac{1}{2}}}^{2}\|w\|_{L^{\infty}({\mathbb{R}};H^{1})}^{2(p-1)}+\|w_{1}\|_{L^{2}({\mathbb{R}};H^{1})}^{2}\|w\|_{L^{\infty}({\mathbb{R}};H^{s-1})}^{2}\|w\|_{L^{\infty}({\mathbb{R}};H^{1})}^{2(p-2)}\big{)}$
where we have used standard product rules and the Sobolev embedding $H^{1}\subset
L^{\infty}$. We proceed with
$(\dots)\leq
C\big{(}\|w\|_{X^{s-1,\frac{1}{2}}}^{2}\|w\|_{Y^{1}}^{2(p-1)}+\|w\|_{X^{1,\frac{1}{2}}}^{2}\|w\|_{Y^{s-1}}^{2}\|w\|_{Y^{1}}^{2(p-2)}\big{)}$
where we used $Y^{1}\subset L^{\infty}({\mathbb{R}};H^{1})$, $Y^{s-1}\subset
L^{\infty}({\mathbb{R}};H^{s-1})$. Notice that we have a better estimate, when
compared with (6.2), in the region (6.7). Similarly, we can evaluate the
contributions to (6.6) of the regions
$|\tau+n^{3}|\leq 10p|\tau_{k}+n_{k}^{3}|,\quad 2\leq k\leq p\,.$
Therefore, we may assume that the summation and the integration in (6.6) are
performed in the region
(6.8) $\max_{1\leq k\leq
p}|\tau_{k}+n_{k}^{3}|\leq\frac{1}{10p}|\tau+n^{3}|\,.$
Write
$(\tau+n^{3})-\sum_{k=1}^{p}(\tau_{k}+n_{k}^{3})=\Big{(}\sum_{k=1}^{p}n_{k}\Big{)}^{3}-\sum_{k=1}^{p}n_{k}^{3}\,,$
therefore in the region (6.8) we have
$\Big{|}\Big{(}\sum_{k=1}^{p}n_{k}\Big{)}^{3}-\sum_{k=1}^{p}n_{k}^{3}\Big{|}\geq|\tau+n^{3}|-\sum_{k=1}^{p}|\tau_{k}+n_{k}^{3}|\geq\frac{9}{10}|\tau+n^{3}|$
hence
$\langle\tau+n^{3}\rangle\leq
C\Big{|}\Big{(}\sum_{k=1}^{p}n_{k}\Big{)}^{3}-\sum_{k=1}^{p}n_{k}^{3}\Big{|}\,.$
By symmetry we can assume $|n_{1}|\geq|n_{2}|\geq\cdots\geq|n_{p}|$ and by
using [7, Lemma 4.1], we obtain that
$\Big{|}\Big{(}\sum_{k=1}^{p}n_{k}\Big{)}^{3}-\sum_{k=1}^{p}n_{k}^{3}\Big{|}\leq
C|n_{1}|^{2}|n_{2}|.$
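For instance, in the simplest case $p=2$ (with $|n_{1}|\geq|n_{2}|$), this is
immediate from the identity
$(n_{1}+n_{2})^{3}-n_{1}^{3}-n_{2}^{3}=3n_{1}n_{2}(n_{1}+n_{2}),$
whose modulus is at most $3|n_{1}||n_{2}|(|n_{1}|+|n_{2}|)\leq 6|n_{1}|^{2}|n_{2}|$.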
Consequently in the region (6.8) we get $\langle\tau+n^{3}\rangle\leq C\langle
n_{1}\rangle^{2}\langle n_{2}\rangle$, and the corresponding contribution to
(6.6) can be estimated as
(6.9)
$C\,\int_{{\mathbb{R}}}\sum_{n\in{\mathbb{Z}}}\,\big{(}\int_{\tau=\tau_{1}+\cdots+\tau_{p}}\,\,\sum_{n=n_{1}+\cdots+n_{p}}\,\langle
n_{1}\rangle^{s}\langle
n_{2}\rangle^{\frac{1}{2}}\,\prod_{k=1}^{p}\widehat{w}(\tau_{k},n_{k})\big{)}^{2}\,d\tau\,.$
If we define $w_{1}$, $w_{2}$ by $\widehat{w_{1}}(\tau,n)=\langle
n\rangle^{s}\widehat{w}(\tau,n)$, $\widehat{w_{2}}(\tau,n)=\langle
n\rangle^{\frac{1}{2}}\widehat{w}(\tau,n)$, going back to physical space
variables, we estimate (6.9) as
$\displaystyle C\|w_{1}w_{2}w^{p-2}\|_{L^{2}({\mathbb{R}};L^{2})}^{2}\leq$
$\displaystyle
C\|w_{1}\|_{L^{\infty}({\mathbb{R}};L^{2})}^{2}\|w_{2}\|_{L^{4}({\mathbb{R}};L^{\infty})}^{2}\|w\|_{L^{4}({\mathbb{R}};L^{\infty})}^{2}\|w\|_{L^{\infty}({\mathbb{R}};L^{\infty})}^{2(p-3)}$
$\displaystyle\leq$ $\displaystyle
C\|w\|_{L^{\infty}({\mathbb{R}};H^{s})}^{2}\|w_{2}\|_{L^{4}({\mathbb{R}};W^{\frac{1}{2},4})}^{2}\|w\|_{L^{4}({\mathbb{R}};W^{1,4})}^{2}\|w\|_{L^{\infty}({\mathbb{R}};H^{1})}^{2(p-3)}.$
Hence by using $Y^{1}\subset L^{\infty}({\mathbb{R}};H^{1})$ and $Y^{s}\subset
L^{\infty}({\mathbb{R}};H^{s})$, along with the estimate
(6.10) $\|u\|_{L^{4}({\mathbb{R}};L^{4})}\leq C\|u\|_{X^{0,\frac{1}{3}}}$
established in the fundamental work [1], we proceed with
$(\dots)\leq
C\|w\|_{Y^{s}}^{2}\|w\|_{X^{1,\frac{1}{3}}}^{2}\|w\|_{X^{1,\frac{1}{3}}}^{2}\|w\|_{Y^{1}}^{2(p-3)}$
and, since $\|w\|_{X^{1,\frac{1}{3}}}\leq C\|w\|_{Y^{1}}$, this concludes the proof. ∎
###### Proof of Proposition 6.2.
We start with proving
(6.11) $\big{\|}\Pi(\Pi u_{1}\,\Pi u_{2})\big{\|}_{X^{s,-\frac{1}{2}}}\leq
CT^{\kappa}\big{(}\|u_{1}\|_{X^{s-1,\frac{1}{2}}_{T}}\|u_{2}\|_{X^{0,\frac{1}{2}}_{T}}+\|u_{1}\|_{X^{0,\frac{1}{2}}_{T}}\|u_{2}\|_{X^{s-1,\frac{1}{2}}_{T}}\big{)}.$
As a first step we prove an estimate for global in time functions:
(6.12) $\big{\|}\Pi(\Pi u_{1}\,\Pi u_{2})\big{\|}_{X^{s,-\frac{1}{2}}}\leq
C\big{(}\|u_{1}\|_{X^{s-1,\frac{1}{2}}}\times\|u_{2}\|_{X^{0,\frac{1}{3}}}+\|u_{1}\|_{X^{s-1,\frac{1}{3}}}\times\|u_{2}\|_{X^{0,\frac{1}{2}}}\\\
+\|u_{1}\|_{X^{0,\frac{1}{3}}}\times\|u_{2}\|_{X^{s-1,\frac{1}{2}}}+\|u_{1}\|_{X^{0,\frac{1}{2}}}\times\|u_{2}\|_{X^{s-1,\frac{1}{3}}}\big{)}.$
Notice that, comparing (6.12) with (6.11), in (6.12) fewer conormal
derivatives of $u_{1}$ and $u_{2}$ are needed, but no positive power of the
time $T$ is gained. We will show toward the end how to pass from (6.12) to (6.11).
The square of the left-hand side of (6.12) may be written as
$\int_{{\mathbb{R}}}\sum_{n\neq 0}\langle
n\rangle^{2s}\langle\tau+n^{3}\rangle^{-1}|{\mathcal{F}}(\Pi u_{1}\Pi
u_{2})(\tau,n)|^{2}\,d\tau\,,$
and moreover we easily have
$|{\mathcal{F}}(\Pi u_{1}\Pi u_{2})(\tau,n)|\leq\sum_{n_{1}\neq
0,n}\int_{{\mathbb{R}}}|\hat{u}_{1}(\tau_{1},n_{1})||\hat{u}_{2}(\tau-\tau_{1},n-n_{1})|d\tau_{1}\,.$
For $j=1,2$, define $w_{j}(t,x)$ with
$\hat{w}_{j}(\tau,n)=|\hat{u}_{j}(\tau,n)|$. Then, we estimate the left-hand
side of (6.12) as
$\int_{{\mathbb{R}}}\sum_{n\neq 0}\frac{\langle
n\rangle^{2s}}{\langle\tau+n^{3}\rangle}\Big{(}\sum_{n_{1}\neq
0,n}\int_{{\mathbb{R}}}\hat{w}_{1}(\tau_{1},n_{1})\hat{w}_{2}(\tau-\tau_{1},n-n_{1})d\tau_{1}\Big{)}^{2}d\tau\,$
which in turn by a duality argument is bounded by
$\sup_{\|v\|_{L^{2}_{t,x}}\leq 1}\int_{\mathbb{R}}\sum_{n\neq 0}\frac{\langle
n\rangle^{s}}{\langle\tau+n^{3}\rangle^{\frac{1}{2}}}\sum_{n_{1}\neq
0,n}\int_{{\mathbb{R}}}\hat{w}_{1}(\tau_{1},n_{1})\hat{w}_{2}(\tau-\tau_{1},n-n_{1})|\hat{v}(\tau,n)|d\tau_{1}d\tau\,.$
For $n\neq 0$ and $n_{1}\neq 0,n$, we have $\langle n\rangle^{s}\leq
C|n_{1}|^{\frac{1}{2}}|n-n_{1}|^{\frac{1}{2}}|n|^{\frac{1}{2}}\big{(}\langle
n_{1}\rangle^{s-1}+\langle n-n_{1}\rangle^{s-1}\big{)}$. Therefore, by using a
symmetry argument, it suffices to evaluate
(6.13) $\sup_{\|v\|_{L^{2}_{t,x}}\leq 1}\int_{{\mathbb{R}}^{2}}\sum_{n\neq
0,n_{1}\neq
0,n}\frac{|n_{1}|^{\frac{1}{2}}|n-n_{1}|^{\frac{1}{2}}|n|^{\frac{1}{2}}}{\langle\tau+n^{3}\rangle^{\frac{1}{2}}}\,\big{(}\langle
n_{1}\rangle^{s-1}\hat{w}_{1}(\tau_{1},n_{1})\big{)}\hat{w}_{2}(\tau-\tau_{1},n-n_{1})|\hat{v}(\tau,n)|d\tau_{1}d\tau\,.$
The key property for smoothing is the elementary bound
$\max\Big{(}\langle\tau+n^{3}\rangle^{\frac{1}{2}},\langle\tau_{1}+n_{1}^{3}\rangle^{\frac{1}{2}},\langle\tau-\tau_{1}+(n-n_{1})^{3}\rangle^{\frac{1}{2}}\Big{)}\geq
C|n_{1}|^{\frac{1}{2}}|n-n_{1}|^{\frac{1}{2}}|n|^{\frac{1}{2}}\,.$
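Indeed, this bound follows from the algebraic identity
$(\tau+n^{3})-(\tau_{1}+n_{1}^{3})-\big{(}\tau-\tau_{1}+(n-n_{1})^{3}\big{)}=n^{3}-n_{1}^{3}-(n-n_{1})^{3}=3nn_{1}(n-n_{1}):$
by the triangle inequality,
$3|n||n_{1}||n-n_{1}|\leq|\tau+n^{3}|+|\tau_{1}+n_{1}^{3}|+|\tau-\tau_{1}+(n-n_{1})^{3}|,$
so the largest of the three terms is at least $|n||n_{1}||n-n_{1}|$, and taking
square roots gives the stated bound.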
We will split the expression in (6.13) into three
contributions, according to which term attains the maximum in the above
elementary bound. Notice that the contribution of (6.13) in the following
region
(6.14) $\langle\tau+n^{3}\rangle^{\frac{1}{2}}\geq
C|n_{1}|^{\frac{1}{2}}|n-n_{1}|^{\frac{1}{2}}|n|^{\frac{1}{2}}\,$
may be estimated as
$C\|w_{2}\times v_{1}\times\langle
D_{x}\rangle^{s-1}w_{1}\|_{L^{1}({\mathbb{R}};L^{1})}\leq
C\|v\|_{L^{2}_{t,x}}\|\langle
D_{x}\rangle^{s-1}w_{1}\|_{L^{4}({\mathbb{R}};L^{4})}\|w_{2}\|_{L^{4}({\mathbb{R}};L^{4})}\,$
where $v_{1}(t,x)$ is defined with $\hat{v}_{1}(\tau,n)=|\hat{v}(\tau,n)|$.
Now, using (6.10), we write
$\displaystyle\|\langle
D_{x}\rangle^{s-1}w_{1}\|_{L^{4}({\mathbb{R}};L^{4})}\leq$ $\displaystyle
C\|w_{1}\|_{X^{s-1,\frac{1}{3}}}=C\|u_{1}\|_{X^{s-1,\frac{1}{3}}}$
$\displaystyle\|w_{1}\|_{L^{4}({\mathbb{R}};L^{4})}\leq$ $\displaystyle
C\|w_{1}\|_{X^{0,\frac{1}{3}}}=C\|u_{1}\|_{X^{0,\frac{1}{3}}}\,.$
Hence we can estimate the contribution to (6.13) in the region (6.14) by
$\|u_{1}\|_{X^{s-1,\frac{1}{3}}}\times\|u_{2}\|_{X^{0,\frac{1}{3}}}$
up to a multiplicative constant. Next we consider the contribution to (6.13)
in the region
(6.15) $\langle\tau_{1}+n_{1}^{3}\rangle^{\frac{1}{2}}\geq
C|n_{1}|^{\frac{1}{2}}|n-n_{1}|^{\frac{1}{2}}|n|^{\frac{1}{2}}\,.$
Now redefine $v_{1}(t,x)$ by
$\widehat{v}_{1}(\tau,n)=\langle\tau+n^{3}\rangle^{-\frac{1}{2}}|\hat{v}(\tau,n)|$
and let $w_{11}(t,x)$ be defined as
$\hat{w}_{11}(\tau,n)=\langle
n\rangle^{s-1}\langle\tau+n^{3}\rangle^{\frac{1}{2}}\hat{w}_{1}(\tau,n)\,,$
then we can estimate the contribution of (6.13) in the region (6.15) by
$C\|w_{11}\times w_{2}\times v_{1}\|_{L^{1}({\mathbb{R}};L^{1})}\leq
C\|w_{11}\|_{L^{2}({\mathbb{R}};L^{2})}\|w_{2}\|_{L^{4}({\mathbb{R}};L^{4})}\|v_{1}\|_{L^{4}({\mathbb{R}};L^{4})}$
Using again (6.10) we obtain that
$\displaystyle\|v_{1}\|_{L^{4}({\mathbb{R}};L^{4})}\leq$ $\displaystyle
C\|v\|_{X^{0,-\frac{1}{6}}}\leq C\|v\|_{L^{2}({\mathbb{R}};L^{2})}\,,$
$\displaystyle\|w_{2}\|_{L^{4}({\mathbb{R}};L^{4})}\leq$ $\displaystyle
C\|w_{2}\|_{X^{0,\frac{1}{3}}}\leq C\|u_{2}\|_{X^{0,\frac{1}{3}}}\,.$
Moreover we have
$\|w_{11}\|_{L^{2}({\mathbb{R}};L^{2})}=\|w_{1}\|_{X^{s-1,\frac{1}{2}}}=\|u_{1}\|_{X^{s-1,\frac{1}{2}}}$
and summarizing we control the contribution to (6.13) in the region (6.15) by
$\|u_{1}\|_{X^{s-1,\frac{1}{2}}}\times\|u_{2}\|_{X^{0,\frac{1}{3}}}$ up to a
multiplicative factor. Finally consider the contribution to (6.13) on the
third region
(6.16) $\langle\tau-\tau_{1}+(n-n_{1})^{3}\rangle^{\frac{1}{2}}\geq
C|n_{1}|^{\frac{1}{2}}|n-n_{1}|^{\frac{1}{2}}|n|^{\frac{1}{2}}\,.$
Keep $v_{1}(t,x)$ as above, with
$\hat{v}_{1}(\tau,n)=\langle\tau+n^{3}\rangle^{-\frac{1}{2}}|\hat{v}(\tau,n)|$
and let $w_{21}(t,x)$ be defined with
$\hat{w}_{21}(\tau,n)=\langle\tau+n^{3}\rangle^{\frac{1}{2}}\hat{w}_{2}(\tau,n)$.
Then we can control the contribution to (6.13) in the region (6.16) by
$\displaystyle C\|w_{21}\times v_{1}\times\langle
D_{x}\rangle^{s-1}w_{1}\|_{L^{1}({\mathbb{R}};L^{1})}\leq$ $\displaystyle
C\|\langle
D_{x}\rangle^{s-1}w_{1}\|_{L^{4}({\mathbb{R}};L^{4})}\|w_{21}\|_{L^{2}({\mathbb{R}};L^{2})}\|v_{1}\|_{L^{4}({\mathbb{R}};L^{4})}\,$
$\displaystyle\leq$ $\displaystyle
C\|u_{1}\|_{X^{s-1,\frac{1}{3}}}\|u_{2}\|_{X^{0,\frac{1}{2}}}\|v\|_{X^{0,-\frac{1}{6}}}$
where we have used again (6.10). Hence the contribution of (6.13) in the
region (6.16) can be estimated, up to a multiplicative constant, by
$\|u_{1}\|_{X^{s-1,\frac{1}{3}}}\times\|u_{2}\|_{X^{0,\frac{1}{2}}}$.
Summarizing, we estimate (6.13) by
$\|u_{1}\|_{X^{s-1,\frac{1}{2}}}\times\|u_{2}\|_{X^{0,\frac{1}{3}}}+\|u_{1}\|_{X^{s-1,\frac{1}{3}}}\times\|u_{2}\|_{X^{0,\frac{1}{2}}}$
for functions $u_{1}$ and $u_{2}$ which are not localized in time. Recall that
by symmetry, in order to estimate the l.h.s. in (6.12), we need to add further
terms where the roles of $u_{1}$ and $u_{2}$ are exchanged. Hence we have
established (6.12) which, as already said, is in some sense stronger than
(6.11), since fewer conormal derivatives of $u_{1}$ and $u_{2}$ are involved,
but it is weaker than (6.11) since no gain of a positive power of $T$ has been
obtained in (6.12). We finally deal with this issue: as a consequence of [47,
Lemma 2.11] (see also [8, Lemma 3.2]) there exists $\kappa>0$ such that
$\|w\|_{X^{s-1,\frac{1}{3}}_{T}}\leq
CT^{\kappa}\|w\|_{X^{s-1,\frac{1}{2}}_{T}},\quad\|w\|_{X^{0,\frac{1}{3}}_{T}}\leq
CT^{\kappa}\|w\|_{X^{0,\frac{1}{2}}_{T}},$
and we complete the proof of (6.11) by combining the estimates above with
(6.12), together with a suitable choice of extensions of $u_{1}$ and $u_{2}$
(which a priori are defined only on the time strip $(-T,T)$) to the whole
space-time, with global norms at most twice the corresponding localized norms. ∎
## References
* [1] J. Bourgain, Fourier restriction phenomena for certain lattice subsets and applications to nonlinear evolution equations, Parts I; II, Geometric Funct. Anal. 3(2) (1993) 107–156; 3(3) (1993) 209–262.
* [2] J. Bourgain, On the Cauchy problem for periodic KdV-type equations, Proceedings of the Conference in honor of Jean-Pierre Kahane (Orsay, 1993), J. Fourier Anal. Appl. (1995) 17–86.
* [3] J. Bourgain, Periodic nonlinear Schrödinger equation and invariant measures, Comm. Math. Phys., 166 (1994) 1–26.
* [4] J. Bourgain, Invariant measures for the 2d-defocusing nonlinear Schrödinger equation, Comm. Math. Phys., 176 (1996) 421–445.
* [5] J. Bourgain, On the growth in time of higher Sobolev norms of smooth solutions of Hamiltonian PDE, Internat. Math. Res. Notices, (1996) 6, 277–304.
* [6] A. Chapouto, N. Kishimoto, Invariance of the Gibbs measures for periodic generalized Korteweg-de Vries equations, arXiv:2104.07382
* [7] J. Colliander, M. Keel, G. Staffilani, H. Takaoka, T. Tao, Multilinear estimates for periodic KdV equations, and applications, J. Funct. Anal. 211 (2004), 173–218.
* [8] J. Colliander, T. Oh, Almost sure well-posedness of the cubic nonlinear Schrödinger equation below $L^{2}({\mathbb{T}})$. Duke Math. J. 161 (2012), 367–414.
* [9] A. Debussche, Y. Tsutsumi, Quasi-Invariance of Gaussian Measures Transported by the Cubic NLS with Third-Order Dispersion on ${\mathbb{T}}$, arXiv:2002.04899 (2020).
* [10] Y. Deng, N. Tzvetkov, N. Visciglia, Invariant measures and long time behaviour for the Benjamin-Ono equation III, Comm. Math. Phys. 339 (2015), 815–857.
* [11] J. Forlano, K. Seong, Transport of Gaussian measures under the flow of one-dimensional fractional nonlinear Schrödinger equations, arXiv:2102.13398 [math.AP] (2021).
* [12] J. Forlano, W. Trenberth, On the transport of Gaussian measures under the one-dimensional fractional nonlinear Schrödinger equation, Ann. Inst. Henri Poincare, Anal. Non Lineaire, 36 (2019), 1987-2025.
* [13] L. Friedlander, An invariant measure for the equation $u_{tt}-u_{xx}+u^{3}=0$, Comm. Math. Phys., 98 (1985) 1–16.
* [14] G. Genovese, R. Lucà, N. Tzvetkov, Quasi-invariance of low regularity Gaussian measures under the gauge map of the periodic derivative NLS, arXiv:2008.10001v1 (2020).
* [15] G. Genovese, R. Lucà, N. Tzvetkov, Transport of Gaussian measures with exponential cut-off for Hamiltonian PDEs, arXiv:2103.04408 [math.AP] (2021).
* [16] G. Genovese, R. Lucà, D, Valeri, Invariant measures for the periodic derivative nonlinear Schrödinger equation, Mathematische Annalen 374 (3-4), 1075-1138, 2019.
* [17] J. Ginibre, Y. Tsutsumi, G. Velo, On the Cauchy problem for the Zakharov system, J. Funct. Anal. 151 (1997), no. 2, 384–436.
* [18] T. S. Gunaratnam, T. Oh, N. Tzvetkov, H. Weber, Quasi-invariant Gaussian measures for the nonlinear wave equation in three dimensions, arXiv:1808.03158 (2018).
* [19] Z. Hani, B. Pausader, N. Tzvetkov, N. Visciglia, Modified scattering for the cubic Schrödinger equation on product spaces and applications, Forum Math. PI 3 (2015), e4, 63 pp.
* [20] T. Kappeler, J. Pöschel, KdV and KAM, A Series of Modern Surveys in Mathematics , 45. Springer-Verlag, Berlin, 2003. xiv+279 pp.
* [21] T. Kato, T On the Cauchy problem for the (generalized) Korteweg-de Vries equation, Advances in Math. Suppl. Stud., (1983), 93–128.
* [22] C. Kenig, G. Ponce, L. Vega, On the (generalized) Korteweg-de Vries equation, Duke Math. J. 59 (1989) 585–610.
* [23] C. Kenig, G. Ponce, L. Vega, Oscillatory integrals and regularity of dispersive equations, Indiana Univ. Math. J. 40 (1991) 33–69.
* [24] C. Kenig, G. Ponce, L. Vega, Well-posedness and scattering results for the generalized Korteweg-de Vries equation via the contraction principle, Comm. Pure Appl. Math. 46 (1993) 527–620.
* [25] J. Lebowitz, R. Rose, E. Speer, Statistical dynamics of the nonlinear Schrödinger equation, J. Stat. Physics, 50 (1988) 657–687.
* [26] Y. Martel, Asymptotic N-soliton-like solutions of the subcritical and critical generalized Korteweg-de Vries equations, Amer. J. Math. 127 (2005), no. 5, 1103–1140.
* [27] Y. Martel, F. Merle, P. Raphaël, Blow up for the critical generalized Korteweg-de Vries equation. I: Dynamics near the soliton, Acta Math. 212 (2014) 59–140.
* [28] Y. Martel, F. Merle, P. Raphaël, Blow up for the critical gKdV equation. II: Minimal mass dynamics, J. Eur. Math. Soc. (JEMS) 17 (2015) 1855–1925.
* [29] Y. Martel, F. Merle, P. Raphaël, Blow up for the critical gKdV equation III: exotic regimes, Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 14 (2015), no. 2, 575–631.
* [30] Y. Martel, D. Pilod, Finite point blowup for the critical generalized Korteweg-de Vries equation, arXiv:2107.00268 [math.AP]
* [31] T. Oh, K. Seong, Quasi-invariant Gaussian measures for the cubic fourth order nonlinear Schrödinger equation in negative Sobolev spaces, arXiv:2012.06732 [math.AP] (2020).
* [32] S. Oh, A. Stefanov, Smoothing and growth bound of periodic generalized Korteweg-de Vries equation, arXiv:2001.08984 [math.AP]
* [33] T. Oh, G. Richards, L. Thomann, On invariant Gibbs measures for the generalized KdV equations, Dyn. Partial Differ. Equ. 13 (2016) 133–153.
* [34] T. Oh, N. Tzvetkov, Quasi-invariant Gaussian measures for the cubic fourth order nonlinear Schrödinger equation, Probab. Theory Relat. Fields 169 (2017), 1121-1168.
* [35] T. Oh, N. Tzvetkov, Quasi-invariant Gaussian measures for the two-dimensional defocusing cubic nonlinear wave equation, JEMS 22 (2020), no. 6, 1785–1826.
* [36] T. Oh, P. Sosoe, N. Tzvetkov, An optimal regularity result on the quasi-invariant Gaussian measures for the cubic fourth order nonlinear Schrödinger equation, J. Ec. Polytechnique, Math., 5 (2018), 793-841.
* [37] T. Oh, Y. Tsutsumi and N. Tzvetkov, Quasi-invariant Gaussian measures for the cubic nonlinear Schroedinger equation with third-order dispersion, C. R. Acad. Sci. Paris, Ser. I, 357 (2019), 366-381.
* [38] F. Planchon, N. Tzvetkov and N. Visciglia, On the growth of Sobolev norms for NLS on 2- and 3-dimensional manifolds. Anal. PDE 10 (2017) 1123–1147.
* [39] F. Planchon, N. Tzvetkov and N. Visciglia, Transport of Gaussian measures by the flow of the nonlinear Schrödinger equation, Math. Ann. 378 (2020) 389-423.
* [40] G. Richards, Invariance of the Gibbs measure for the periodic quartic gKdV, Ann. Inst. H. Poincaré Anal. Non Linéaire 33 (2016) 699–766.
* [41] J.C. Saut, Sur quelques généralisations de l’équation de Korteweg-de Vries, J. Math. Pures Appl. 58 (1979) 21–61.
* [42] P. Schuur, Asymptotic analysis of soliton problems, Lect. Notes in Math., vol. 1232, Springer-Verlag, Berlin, 1986.
* [43] P. Sosoe, W. J. Trenberth and T. Xiao, Quasi-invariance of fractional Gaussian fields by nonlinear wave equation with polynomial nonlinearity, arXiv:1906.02257v1 (2019).
* [44] G. Staffilani, On solutions for periodic generalized KdV equations, IMRN 18 (1997) 899–917.
* [45] G. Staffilani, On the growth of high Sobolev norms of solutions for KdV and Schrödinger equations, Duke Math. J. 86 (1997), 109–142.
* [46] C. Sun, N. Tzvetkov, Gibbs measure dynamics of the fractional NLS, SIAM J. Math. Anal., 52(5), (2020) 4638-4704.
* [47] T. Tao, Nonlinear dispersive equations. Local and global analysis, CBMS Regional Conference Series in Mathematics, 106. American Mathematical Society, Providence, RI, 2006. xvi+373 pp.
* [48] N. Tzvetkov, Quasi-invariant Gaussian measures for one dimensional Hamiltonian PDEs, Forum Math. Sigma 3 (2015), e28, 35 pp.
* [49] N. Tzvetkov, N. Visciglia, Gaussian measures associated to the higher order conservation laws of the Benjamin-Ono equation, Ann. Sci. ENS 46 (2013) 249–299 (2013).
* [50] N. Tzvetkov, N. Visciglia, Invariant measures and long time behaviour for the Benjamin-Ono equation, Int. Math. Res. Not. 17 (2014) 4679–4614.
* [51] N. Tzvetkov, N. Visciglia, Invariant measures and long time behaviour for the Benjamin-Ono equation II, J. Math. Pures Appl. 103 (2015), 102–141.
* [52] N. Visciglia, The modified energy technique and applications, Bollettino dell’ Unione Matematica Italiana 14(1) (2021), 3–16
* [53] P. Zhidkov, Korteweg-de Vries and nonlinear Schrödinger equations: qualitative theory, Lecture Notes in Mathematics, 1756. Springer-Verlag, Berlin, 2001. vi+147 pp.
Thuy Ngoc Nguyen · Duy Nhat Phan · Cleotilde Gonzalez
5000 Forbes Ave., Pittsburgh, PA 15213, USA
Email: <EMAIL_ADDRESS> · <EMAIL_ADDRESS> · <EMAIL_ADDRESS>
# SpeedyIBL: A Solution to the Curse of Exponential Growth in Instance-Based
Learning Models of Decisions from Experience
Thuy Ngoc Nguyen · Duy Nhat Phan · Cleotilde Gonzalez (*corresponding author)
###### Abstract
Computational cognitive modeling is a useful methodology to explore and
validate theories of human cognitive processes. Often cognitive models are
used to simulate the process by which humans perform a task or solve a problem
and to make predictions about human behavior. Cognitive models based on
Instance-Based Learning (IBL) Theory rely on a formal computational algorithm
for dynamic decision making and on a memory mechanism from a well-known
cognitive architecture, ACT-R. To advance the computational theory of human
decision making and to demonstrate the usefulness of cognitive models in
diverse domains, we must address a practical computational problem, the curse
of exponential growth, that emerges from memory-based tabular computations.
When more observations accumulate, there is an exponential growth of the
memory of instances that leads directly to an exponential slow down of the
computational time. In this paper, we propose a new Speedy IBL implementation
that innovate the mathematics of vectorization and parallel computation over
the traditional loop-based approach. Through the implementation of IBL models
in many decision games of increasing complexity, we demonstrate the
applicability of the regular IBL models and the advantages of their Speedy
implementation. Decision games vary in their complexity of decision features
and in the number of agents involved in the decision process. The results
clearly illustrate that Speedy IBL addresses the curse of exponential growth
of memory, reducing the computational time significantly, while maintaining
the same level of performance than the traditional implementation of IBL
models.
###### Keywords:
Instance-Based Learning · Cognitive Models · Efficient Computation
## 1 Introduction
A cognitive theory is a general postulation of mechanisms and processes that
are globally applicable to families of tasks and types of activities rather
than being dependent on a particular task. Cognitive models are very specific
representations of part or of all aspects of a cognitive theory that apply to
a particular task or activity gonzalez2017decision . Specifically, normative
and descriptive theories of choice often rely on utility theory Savage1954 ;
morgenstern1953theory or aim at describing the psychological impact of
perceptions of probability and value on choice kahneman1979prospect ;
tversky1992advances . In contrast, models of decisions from experience (DfE)
are often dynamic computational representations of sequential choices that are
distributed over time and space and that are made under uncertainty
gonzalez2017dynamic .
Cognitive models of DfE can be used to simulate the interaction of theoretical
cognitive processes with the environment, representing a particular task.
These models can make predictions regarding how human choices are made in such
tasks. These predictions are often compared to data collected from human
participants in the same tasks using interactive tools. The explicit
comparison of cognitive models’ predictions to human actual behavior is a
common research approach in the cognitive sciences and in particular in the
study of decision making gonzalez2017decision . Cognitive models are dynamic
and adaptable computational representations of the cognitive structures and
mechanisms involved in decision making tasks such as DfE tasks under
conditions of partial knowledge and uncertainty. Moreover, cognitive models
are generative, in the sense that they actually make decisions in ways
similar to humans, based on their own experience, rather than being data-driven
and requiring large training sets. In this regard, cognitive models differ
from purely statistical approaches, such as Machine Learning or Bayesian
models, that are often capable of evaluating stable, long-term sequential
dependencies from existing data but fail to account for the dynamics of human
cognition and human adaptation to novel situations.
There are many models of DfE as evidenced by past modeling competitions
erev2010choice ; erev2017anomalies . Most of these models make broadly
disparate assumptions regarding the cognitive processes by which humans make
decisions erev2010choice . For example, the models submitted to these
competitions are often applicable to a particular task or choice paradigm
rather than presenting an integrated view of how the dynamic choice process
from experience is performed by humans. Associative learning models are a
class of models of DfE that conceptualize choice as a learning process that
stores behavior-outcome relationships and are contingent on the environment
hertwig2015 . Generally speaking, these kinds of models rely on learning from
reinforcement and the contingencies of the environment as in the Skinnerian
tradition skinner2014contingencies ; sutton1995theory . These models have
shown to be successful at representing human learning over time based on
feedback.
In contrast to many of the associative learning models, Instance-Based
Learning (IBL) models rely on a single decision theory: Instance-Based
Learning Theory GONZALEZ03 . IBLT emerged from the need to explain the process
of dynamic decision making, where a sequence of interdependent decisions were
made over time. IBLT provides a single general algorithm and mathematical
formulations of memory retrieval that rely on the ACT-R cognitive architecture
ANDERSON14 . The theory proposes a representation of decisions in the form of
instances, which are triplets involving state, action, and utilities; the
theory also provides a process of retrieval of past instances based on their
similarity to a current decision situation, and the generation of accumulated
value (from experience) based on a mechanism called Blending, which is a
function of the payoffs experienced and the probability of retrieving those
instances from memory LEJARRAGA12 ; GONZALEZ11 .
Many models have been developed based on IBLT. From its inception, the theory
was demonstrated in a highly complex, dynamic decision-making task
representing the complex process of dynamic allocation of limited resources
over time and under time constraints in a “water purification plant”
GONZALEZ03 . Since then, IBLT has been used to demonstrate human DfE in a
large diversity of contexts and domains, from simple and abstract binary
choice dynamics GONZALEZ11 ; LEJARRAGA12 to highly specialized and complex
tasks such as cyber defense aggarwal2020exploratory ; cranford2020toward and
anti-phishing detection cranford2019modeling . Also, IBL models have been
created to account for group and network effects, where each individual in a
group is represented by an IBL agent gonzalez2015cognitive ; more recently,
this IBL algorithm has also been applied to multi-state gridworld tasks
NGUYEN20 ; NGUYEN2020ICCM ; Ngoc2021 in which the agents execute a sequence
of actions with delayed feedback.
The recent applications of IBL cognitive models have led to significantly more
complex and realistic tasks, where multi-dimensional state-action-utility
representations are required, where extended training is common, and where
multi-agents interact to solve such tasks. Such an increase in the task
complexities and the number of agents modeled with IBL leads to a practical
computational problem, the curse of exponential growth, (c.f.
bellman1957dynamic ) kuo2005lifting . The curse of exponential growth is a
common problem for every modeling approach that relies on the accumulation of
data over time and on tabular computation, such as reinforcement learning
models (RL) sutton2018reinforcement . Deep reinforcement learning, a
combination of RL and deep learning, makes it possible to move from simple
representations to more realistic, complex environments in games such as Atari,
Go, and StarCraft mnih2013playing ; silver2016mastering ; vinyals2019alphastar .
However, as summarized in a recent overview of the challenges in multi-agent
RL models, these algorithms become less efficient with the increase in the
dimensions of the state-action space, and as the number of agents increases
wong2021multiagent . The problem becomes even more complex under nonstationary
environments and under uncertainty where information is incomplete. Dynamic
conditions significantly increase the diversity and number of states that must
be represented in any dynamic decision-making task gonzalez2017dynamic .
In this paper, we propose a solution to the curse of exponential growth by
exploiting vectorization and parallel computation in place of the
traditional loop-based approach larsen2000exploiting . We package this new
method in a new SpeedyIBL implementation. Importantly, we demonstrate how
SpeedyIBL is increasingly more efficient than traditional IBL model
implementations (PyIBL for Python-IBL, MorrisonGonzalez ), as the
dimensionality and dynamics of the problems increase. The costs of computation
in the PyIBL implementation grow exponentially as the dimensions of the
representation increase and the number of agents and their interactions
increase. The benefits of SpeedyIBL over regular PyIBL models, therefore,
also increase exponentially.
## 2 Instance-Based Learning Theory
The general decision process proposed in IBLT is illustrated in Figure 1, and
the mechanisms are made mathematically concrete in Algorithm 1 GONZALEZ03 .
The process starts with the observation of the environmental state, and the
determination of whether there are past experiences (i.e., instances) that are
similar to the current environmental state (i.e., Recognition). Whether there
are similar past instances will determine the process used to generate the
expected utility of a decision alternative (i.e., Judgment). If there are past
experiences that are similar to the current environmental state, the expected
utility of such an alternative is calculated via a process of Blending past
instances, but if there are no similar past instances, then the theory
suggests that a heuristic is used instead. After Judgment, the option with the
highest expected utility is maintained in memory and a decision is made as to
whether to stop the exploration of additional alternatives and execute the
current best decision (i.e., Choice). When the exploration process ends, a
choice is implemented, which changes the environment (i.e., Execution).
Feedback (e.g., reward) is received at any time from the environment, with or
without delay from the execution of a choice. Such feedback is used to update
past experiences since the last time feedback was received through a credit
assignment mechanism.
Figure 1: IBLT algorithm from GONZALEZ03
In IBLT, an “instance” is a memory unit that results from the potential
alternatives evaluated. These memory representations consist of three elements
which are constructed over time: a situation state $s$ which is composed of a
set of features $f$; a decision or action $a$ taken corresponding to an
alternative in state $s$; and an expected utility or experienced outcome $x$
of the action taken in a state.
Each instance in memory has an Activation value, which represents how readily
available that information is in memory, and it is determined by the
similarity to past situations, recency, frequency, and noise according to the
Activation equation in ACT-R ANDERSON14 . Activation of an instance is used to
determine the probability of retrieval of an instance from memory which is a
function of its activation relative to the activation of all instances
corresponding to the same state in memory. The expected utility of a choice
option is calculated by blending past outcomes. This blending mechanism for
choice has its origins in a more general blending formulation
lebiere1999dynamics , but a simplification of this mechanism is often used in
models with discrete choice options, defined as the sum of all past
experienced outcomes weighted by their probability of retrieval GONZALEZ11 ;
LEJARRAGA12 . This formulation of blending represents the general idea of an
expected value in decision making, where the probability is a cognitive
probability, a function of the activation equation in ACT-R. Algorithm 1
provides a formal representation of the general IBL process.
Input: default utility $x_{0}$; a memory dictionary $\mathcal{M}=\\{\\}$;
global counter $t=1$; step limit $L$; a flag $delayed$ indicating whether
feedback is delayed.
repeat
  Initialize a step counter $l=0$ and observe state $s_{l}$
  while $s_{l}$ is not terminal and $l<L$ do
    for each action $a\in A$ do (exploration loop)
      Compute activation values $\Lambda_{i(s_{l}^{i},a)t}$ of instances
      $((s_{l}^{i},a),x_{i(s_{l}^{i},a)t},T_{i(s_{l}^{i},a)t})$ by (1)
      Compute retrieval probabilities $P_{i(s_{l}^{i},a)t}$ by (2)
      Compute blended values $V_{(s_{l},a)t}$ corresponding to $(s_{l},a)$ by (3)
    end for
    Choose an action $a_{l}\in\arg\max_{a\in A}V_{(s_{l},a)t}$
    Take action $a_{l}$, move to state $s_{l+1}$, observe $s_{l+1}$, and
    receive outcome $x_{l+1}$
    Store $t$ in the instance corresponding to selecting $(s_{l},a_{l})$ and
    achieving outcome $x_{l+1}$ in $\mathcal{M}$
    If $delayed$ is true, update outcomes using a credit assignment mechanism
    $l\leftarrow l+1$; $t\leftarrow t+1$
  end while
until task stopping condition
Algorithm 1: Pseudocode of the Instance-Based Learning process
Concretely, for an agent, an option $k=(s,a)$ is defined by taking action $a$
after observing state $s$. At time $t$, assume that there are $n_{kt}$
different considered instances $(k_{i},x_{ik_{i}t})$ for $i=1,...,n_{kt}$,
associated with $k$. Each instance $i$ in memory has an Activation value,
which represents how readily available that information is in memory and
is expressed as follows ANDERSON14 :
$\begin{array}[]{l}\Lambda_{ik_{i}t}=\ln{\left(\sum\limits_{t^{\prime}\in
T_{ik_{i}t}}(t-t^{\prime})^{-d}\right)}+\alpha\sum\limits_{j}Sim_{j}(f^{k}_{j},f^{k_{i}}_{j})+\sigma\ln{\frac{1-\xi_{ik_{i}t}}{\xi_{ik_{i}t}}},\end{array}$
(1)
where $d$, $\alpha$, and $\sigma$ are the decay, mismatch penalty, and noise
parameters, respectively, and $T_{ik_{i}t}\subset\\{0,...,t-1\\}$ is the set
of the previous timestamps in which the instance $i$ was observed, $f_{j}^{k}$
is the $j$-th attribute of the state $s$, and $Sim_{j}$ is a similarity
function associated with the $j$-th attribute. The second term is a partial
matching process reflecting the similarity between the current state $s$ and
the state of the option $k_{i}$. The rightmost term represents a noise for
capturing individual variation in activation, and $\xi_{ik_{i}t}$ is a random
number drawn from a uniform distribution $U(0,1)$ at each timestep and for
each instance and option.
The activation of an instance $i$ determines the probability of retrieving
that instance from memory. This probability is defined by a soft-max function
as follows:
$P_{ik_{i}t}=\frac{e^{\Lambda_{ik_{i}t}/\tau}}{\sum_{j=1}^{n_{kt}}e^{\Lambda_{jk_{j}t}/\tau}},$
(2)
where $\tau$ is the temperature of the Boltzmann (soft-max) distribution. For
simplicity, $\tau$ is often defined as a function of the same noise parameter
$\sigma$ used in the activation equation: $\tau=\sigma\sqrt{2}$.
The expected utility of option $k$ is calculated based on Blending as
specified in choice tasks LEJARRAGA12 ; GONZALEZ11 :
$V_{kt}=\sum_{i=1}^{n_{kt}}P_{ik_{i}t}x_{ik_{i}t}.$ (3)
The choice rule is to select the option that corresponds to the maximum
blended value. In particular, at the $l$-th step of an episode, the agent
selects the option $(s_{l},a_{l})$ with
$a_{l}=\arg\max_{a\in A}V_{(s_{l},a)t}.$ (4)
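To make the choice rule concrete, the following minimal Python sketch computes
retrieval probabilities and blended values from precomputed activations and
selects an action by Equations (2)–(4); the function and variable names are
illustrative, not part of any library API.

```python
import numpy as np

def choose_action(activations_by_action, outcomes_by_action, sigma=0.25):
    """Select the action with the maximal blended value (Eqs. 2-4).

    activations_by_action: dict mapping each action to an array of
                           instance activations (Eq. 1, already computed)
    outcomes_by_action:    dict mapping each action to the array of
                           outcomes stored in those instances
    """
    tau = sigma * np.sqrt(2)  # temperature, as defined below Eq. (2)
    blended = {}
    for a, lam in activations_by_action.items():
        w = np.exp(lam / tau)
        p = w / w.sum()                                # retrieval probabilities (Eq. 2)
        blended[a] = np.dot(p, outcomes_by_action[a])  # blended value (Eq. 3)
    return max(blended, key=blended.get)               # choice rule (Eq. 4)
```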
The flag $delayed$ in the input of Algorithm 1 is true when the agent learns
the real outcome only after making a sequence of decisions without feedback.
In that case, the agent updates outcomes by using one of the credit assignment
mechanisms Nguyen21 . Note that the condition under which the flag $delayed$
is set to true depends on the specific task. For instance, $delayed$ can be
set to true when the agent reaches the terminal state, or when the agent
receives a positive reward.
## 3 SpeedyIBL Implementation
From Algorithm 1, we observe that the computational cost of the IBL process
revolves around the computations of Equations (1), (2), and (3) (lines 5–7)
and the storage of instances with their associated timestamps (line 11).
Clearly, when the number of state and action variables (dimensions) grows, or
the number of IBL agent objects increases, the execution of these steps in
Algorithm 1 will directly increase the execution time. The “speedy” version of
IBL (i.e., SpeedyIBL) is a library focused on dealing with these computations
more efficiently.
The SpeedyIBL algorithm is identical to Algorithm 1; the innovation is in how
the mathematics is computed. Equations 1, 2 and 3 are replaced with Equations
6, 7 and 8, respectively (as explained below). Our idea is to take advantage
of vectorization, which typically refers to applying a single instruction to a
set of values (a vector) in parallel, instead of executing a single
instruction on a single value at a time. In general, this idea can be
implemented in any programming language. We implemented it in Python, since
that is how PyIBL is implemented MorrisonGonzalez .
Technically, the memory in an IBL model is stored in a dictionary
$\mathcal{M}$ that, at time $t$, is represented as follows:
$\mathcal{M}=\bigl\{k_{i}:\{x_{ik_{i}t}:T_{ik_{i}t},\dots\},\dots\bigr\},$
(5)
where $(k_{i},x_{ik_{i}t},T_{ik_{i}t})$ is an instance $i$ that corresponds to
selecting option $k_{i}$ and achieving outcome $x_{ik_{i}t}$ with
$T_{ik_{i}t}$ being the set of the previous timestamps in which the instance
$i$ is observed.
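For illustration, a hypothetical memory after a few decisions in a two-option
task might look as follows in Python (the option keys and outcome values are
made up for this example):

```python
# Option -> {outcome -> timestamps at which that outcome was observed}
M = {
    ("state_A", "action_1"): {10.0: [1, 4, 7], -5.0: [2]},
    ("state_A", "action_2"): {0.0: [3, 5, 6]},
}
```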
To vectorize the code, we convert $T_{ik_{i}t}$ to a NumPy
(https://numpy.org/doc/stable/) array, on which built-in NumPy functions
perform fast mathematical operations over entire arrays of data without
explicit loops.
After the conversion, we consider $T_{ik_{i}t}$ as a NumPy array. In addition,
since we may use a common similarity function for several attributes, we
assume that $f$ is partitioned into $J$ non-overlapping groups
$f_{[1]},...,f_{[J]}$ with respect to the distinct similarity functions
$Sim_{1},...,Sim_{J}$, i.e., $f_{[j]}$ contains attributes that use the same
similarity function $Sim_{j}$. We denote by $S(f^{k},f^{k_{i}})$ the second
term of (1), computed as:
set $S(f^{k},f^{k_{i}})=0$
for $j=1$ to $J$ do
$S(f^{k},f^{k_{i}})\ +=\ \texttt{sum}(Sim_{j}(f_{[j]}^{k},f_{[j]}^{k_{i}}))$
end for
Hence, the activation value (see Equation 1) can be fast and efficiently
computed as follows:
$\Lambda_{ik_{i}t}=\texttt{math.log}(\texttt{sum}(\texttt{pow}(t-T_{ik_{i}t},-d)))+\alpha*S(f^{k},f^{k_{i}})+\sigma*\texttt{math.log}((1-\xi_{ik_{i}t})/\xi_{ik_{i}t}).$
(6)
With vectorization, operations such as pow can be performed on multiple
elements of the array at once, rather than looping through and executing them
one at a time. Similarly, the retrieval probability (see Equation 2) is now
computed by:
$P_{kt}:=[P_{1k_{1}t},...,P_{n_{kt}k_{n_{kt}}t}]=v/\texttt{sum}(v),$ (7)
where
$v=\texttt{math.exp}([\Lambda_{1k_{1}t},...,\Lambda_{n_{kt}k_{n_{kt}}t}]/\tau)$.
The blended value (see Equation 3) is now computed by:
$V_{kt}=\texttt{sum}(x_{kt}*P_{kt}),$ (8)
where $x_{kt}:=[x_{1k_{1}t},...,x_{n_{kt}k_{n_{kt}}t}]$ is a NumPy array that
contains all the outcomes associated with the option $k$.
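Putting Equations 6–8 together, the following minimal NumPy sketch computes
the blended value of a single option. The variable names are illustrative, the
partial matching term is omitted for brevity, and the actual SpeedyIBL code
may differ in detail.

```python
import numpy as np

def blended_value(t, timestamps, outcomes, d=0.5, sigma=0.25):
    """Vectorized blended value of one option (Eqs. 6-8).

    t:          current global time step
    timestamps: list of NumPy arrays, one per instance, holding the past
                time steps at which that instance was observed
    outcomes:   NumPy array with the outcome stored in each instance
    """
    tau = sigma * np.sqrt(2)
    xi = np.random.uniform(size=len(outcomes))  # one noise draw per instance
    # Eq. 6: activations, with pow/sum applied to whole arrays at once
    lam = np.array([np.log(np.sum(np.power(t - T, -d))) for T in timestamps])
    lam += sigma * np.log((1 - xi) / xi)
    # Eq. 7: retrieval probabilities via a vectorized soft-max
    v = np.exp(lam / tau)
    p = v / v.sum()
    # Eq. 8: blended value as the probability-weighted sum of outcomes
    return np.sum(outcomes * p)
```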
## 4 Experiments: Demonstration of the Curse of Exponential Growth and the
SpeedyIBL Solution
To demonstrate the efficiency of SpeedyIBL, we evaluate its performance
against a regular implementation of the IBL algorithm (Algorithm 1) in Python
(PyIBL PyIBL ), in six different tasks that were selected to represent
different dimensions of complexity in dynamic decision making tasks
gonzalez2005use . The code is available at
https://github.com/nhatpd/SpeedyIBL.
### 4.1 General Simulation Methods
The parameter values configured in the IBL models were identical for the
SpeedyIBL and PyIBL implementations. In particular, we used decay $d=0.5$ and
noise $\sigma=0.25$. The default utility values were generally set higher than
the maximum value obtainable in the task, to encourage exploration as
suggested in LEJARRAGA12 (see the task descriptions below for specific
values), and they were the same for PyIBL and SpeedyIBL.
For each of the six tasks, we compared the performance of PyIBL and SpeedyIBL
implementations in terms of (i) running time measured in seconds and (ii)
performance. The performance measure is identified within each task.
We conducted 1000 runs of the models, each performing 100 episodes, for the
Binary choice and Insider attack tasks. Given the running time required for
PyIBL, we ran only 100 runs of 100 episodes for the remaining tasks. We note
that an episode of the Binary choice and Insider attack tasks has one step
(trial), while the remaining tasks have up to $2500$ steps within each episode.
The credit assignment mechanisms in IBL are being studied in NGUYEN20 . In
this paper we used an equal credit assignment mechanism for all tasks. This
mechanism assigns the current outcome to all the actions that took place from
the current state back to the last state at which the agent started or at
which the flag $delayed$ was last true.
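As a rough sketch, equal credit assignment can be expressed as follows,
assuming the model keeps a list of the options chosen since the last update
(the memory layout follows Equation 5; the helper and its names are
illustrative):

```python
def equal_credit(memory, pending_options, outcome, t):
    """Assign the delayed outcome equally to every pending option.

    memory:          dict of option -> {outcome -> list of timestamps} (Eq. 5)
    pending_options: options chosen since the last delayed-feedback update
    """
    for option in pending_options:
        memory.setdefault(option, {}).setdefault(outcome, []).append(t)
    pending_options.clear()  # start accumulating for the next segment
```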
### 4.2 Tasks
Table 1 provides an overview of the dimensions of the tasks with respect to
the number of agents, actions, states, the use of partial matching, feedback
delays, and the number of choice options. There are four single-agent tasks,
one task with two agents, and one with three agents. The tasks have between 2
and 9 potential actions, and the numbers of states and choice options also
vary from just a few to a significantly large number. We also include a task
that illustrates the partial matching (similarity) process of Equations 1 and
6, and a task with no feedback delay.
We start with a repeated Binary choice task that has only one state and two
options, followed by a two-stage Insider attack game in which players choose
one of six targets after observing their features. We then scale up to a
larger number of states and actions in significantly more complex tasks: a
Minimap task involving a search and rescue mission and the Ms. Pac-man task,
both with a larger number of discrete state-action variables. Next, we scale
up to two multi-agent tasks: the Fireman task, with two agents and four
actions, and a Cooperative Navigation task, in which three agents navigate and
cooperate to accomplish a goal. The number of agents increases the memory
computation, since each agent adds its own variables to the joint state-action
space. Importantly, all these demonstrations use the same IBL algorithm
(Algorithm 1), implemented with the Speedy equations described in Section 3.
Details for each task follow below. Based on these dimensions of increasing
complexity, we expect that SpeedyIBL’s benefits over PyIBL will be larger with
increasing complexity of the task.
Task | # Agents | # Actions | # States | # Options | Partial Matching | Delayed Feedback
---|---|---|---|---|---|---
Binary choice | 1 | 2 | 1 | 2 | No | No
Insider attack game | 1 | 6 | 4 | 24 | Yes | Yes
Minimap | 1 | 4 | $\approx 10^{41}$ | $\approx 4\times 10^{41}$ | No | Yes
Ms.Pac-man | 1 | 9 | $\approx 10^{347}$ | $\approx 9\times 10^{347}$ | No | Yes
Fireman | 2 | 4 | $\approx 10^{15}$ | $\approx 4\times 10^{15}$ | No | Yes
Cooperative navigation | 3 | 4 | $\approx 10^{7}$ | $\approx 4\times 10^{7}$ | No | Yes
Table 1: Task Summary
#### 4.2.1 Binary choice
In each trial, the agent is required to choose one of two options: Option A or
Option B. A numerical outcome, drawn from a distribution after the selection,
is the immediate feedback of the task. This is a well-studied problem in the
risky choice literature Hertwig2004 , where individuals make decisions under
uncertainty. Unknown to the agent, options A and B draw their outcomes from
predefined distributions. One option is safe: it yields a fixed medium outcome
($3$) every time it is chosen. The other option is risky: it yields a high
outcome ($4$) with probability $0.8$ and a low outcome ($0$) with the
complementary probability $0.2$.
An IBL model of this task has been created and reported in various past
studies, including GONZALEZ11 ; LEJARRAGA12 . Here, we conducted simulations
of 1000 runs of 100 trials. We also ran the experiment with 5000 trials to
more clearly highlight the difference between PyIBL and SpeedyIBL. The default
utility $x_{0}$ was set to $4.4$. For each option $s$, where $s$ is either A
or B, we consider all the generated instances taking the form of $(s,x)$,
where $x$ is an outcome. The performance is determined by the average
proportion of choices of the option with the maximum reward expectation (PMax).
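A minimal end-to-end simulation of one IBL agent on this task, directly
implementing Equations 1–4 (without partial matching), might look as follows;
here we arbitrarily make Option A the safe option, and the SpeedyIBL/PyIBL
library APIs may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
d, sigma, x0 = 0.5, 0.25, 4.4               # parameter values used in this paper
tau = sigma * np.sqrt(2)
# Memory prepopulated with the default utility at time 0 (option -> instances)
memory = {"A": [(x0, [0])], "B": [(x0, [0])]}

def payoff(option):
    # Assumption for illustration: A is safe (3), B is risky (4 w.p. 0.8, else 0)
    return 3.0 if option == "A" else (4.0 if rng.random() < 0.8 else 0.0)

for t in range(1, 101):                      # 100 trials
    blended = {}
    for opt, instances in memory.items():
        lam = np.array([np.log(np.sum((t - np.array(T, float)) ** -d))
                        for _, T in instances])
        xi = rng.uniform(size=len(instances))
        lam += sigma * np.log((1 - xi) / xi)  # activation noise (Eq. 1)
        p = np.exp(lam / tau)
        p /= p.sum()                          # retrieval probabilities (Eq. 2)
        blended[opt] = np.dot(p, [x for x, _ in instances])  # blending (Eq. 3)
    choice = max(blended, key=blended.get)    # choice rule (Eq. 4)
    outcome = payoff(choice)
    for x, T in memory[choice]:               # store t in the matching instance
        if x == outcome:
            T.append(t)
            break
    else:
        memory[choice].append((outcome, [t]))
```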
Figure 2: Binary choice
#### 4.2.2 Insider attack game
The insider attack game is an interactive task designed to study the effect of
signaling algorithms in cyber deception experiments (e.g., Cranford18 ).
Figure 3 illustrates the interface of the task, including a representation of
the agent (the insider attacker) and the information of 6 computers. Each of
the six computers is “protected” with some probability (set by a defense
algorithm). Each computer displays its monitoring probability, its potential
outcomes, and its signal information. When the agent selects one of the six
computers, a signal is presented to the agent (based on the defense signaling
strategy), which informs the agent whether the computer is monitored or not.
The agent then makes a second decision after the signal: whether to continue
or withdraw the attack on the pre-selected computer. If the agent attacks a
computer that is monitored, it loses points, but if the computer is not
monitored, it wins points. The signals are, therefore, truthful or deceptive.
If the agent withdraws the attack, it earns zero points.
Figure 3: Insider Attack game
In each trial, the agent must decide which of the 6 computers to attack, and
whether to continue or withdraw the attack after receiving a signal. An IBL
model of this task has been created and reported in past studies (e.g.,
cranford2019modeling ; Cranford2021Towards ). We performed simulations of 1000
runs of 100 episodes. For each option $(s,a)$, where the state $s$ comprises
the features of the computers, including the reward, the penalty, and the
probability that the computer is being monitored (see cranford2019modeling for
more details), and $a\in\{1,\dots,6\}$ is the index of a computer, we consider
all the generated instances taking the form of $(s^{\prime},a,x)$ with
$s^{\prime}$ being a state and $x$ being an outcome. The performance is
determined by the average collected reward.
#### 4.2.3 Search and rescue in Minimap
The Minimap task is inspired by a search and rescue scenario, which involves
an agent being placed in a building with multiple rooms and tasked with
rescuing victims Nguyen21b . Victims have been scattered across the building
and their injuries have different degrees of severity with some needing more
urgent care than others. In particular, there are 34 victims grouped into two
categories (24 green victims and 10 yellow victims). There are many obstacles
(walls) placed in the path forcing the agent to look for alternative routes.
The agent’s goal is to rescue as many victims as possible. The task is
simulated as a $93\times 50$ grid of cells which represents one floor of this
building. Each cell is either empty, an obstacle, or a victim. The agent can
choose to move left, right, up, or down, and only move one cell at a time.
Figure 4: Search and rescue map. The empty cells are white and the walls are
black. The yellow and green cells represent the locations of the yellow and
green victims respectively. The cell with the red color represents the start
location of the agent.
The agent receives a reward of 0.75 for rescuing a yellow victim and 0.25 for
rescuing a green victim. Moving into an obstacle or an empty cell is penalized
by 0.05 or 0.01, respectively. Since the agent might have to make a sequence
of decisions to rescue a victim, we update the previous instances with a
positive outcome once the agent receives one.
An IBL model of this task has been created and reported in past studies
Gulati2021Task . Here we created the SpeedyIBL implementation of this model to
perform the simulation of 100 runs of 100 episodes. An episode terminates when
a $2500$-trial limit is reached or when the agent successfully rescues all the
victims. After each episode, all rescued victims are placed back at the
location where they were rescued from and the agent restarts from the pre-
defined start position.
In this task, a state $s$ is represented by a gray-scale image (array) of the
same size as the map. We use the following pixel values to represent the
entities in the map: $s[x][y]=240$ if the agent is located at coordinate
$(x,y)$, 150 if a yellow victim is at $(x,y)$, 200 if a green victim is at
$(x,y)$, 100 if an obstacle is at $(x,y)$, and 0 otherwise. For each option
$(s,a)$, where $s$ is a state and $a$ is an action, we consider all the
generated instances taking the form of $(s,a,x)$ with $x$ being an outcome.
The default utility was set to $0.1$. The flag $delayed$ is set to true if the
agent rescues a victim, and false otherwise. The performance is determined by
the average collected reward.
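A small sketch of how such a state array could be constructed with NumPy is
shown below (the entity coordinate lists are hypothetical inputs; the pixel
codes are the ones given above):

```python
import numpy as np

def encode_state(agent_xy, yellow_victims, green_victims, obstacles,
                 shape=(93, 50)):
    """Build the gray-scale Minimap state using the pixel codes from the text."""
    s = np.zeros(shape, dtype=np.uint8)       # 0 everywhere else
    for (x, y) in obstacles:
        s[x][y] = 100                         # obstacle
    for (x, y) in green_victims:
        s[x][y] = 200                         # green victim
    for (x, y) in yellow_victims:
        s[x][y] = 150                         # yellow victim
    x, y = agent_xy
    s[x][y] = 240                             # agent location
    return s
```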
#### 4.2.4 Ms. Pac-man
The next task considered in the experiment is the Ms. Pac-man game, a
benchmark for evaluating agents in machine learning, e.g., Hasselt2016Deep .
The agent maneuvers Pac-Man in a maze while Pac-Man eats the dots (see Fig. 5).
Figure 5: Ms. Pac-man game
In this particular maze, there are 174 dots, each one is worth 10 points. A
level is finished when all dots are eaten. To make things more difficult,
there are also four ghosts in the maze who try to catch Pac-Man, and if they
succeed, Pac-Man loses a life. Initially, she has three lives and gets an
extra life after reaching $10,000$ points. There are four power-up items in
the corners of the maze, called power dots (worth 40 points). After Pac-Man
eats a power dot, the ghosts turn blue for a short period, they slow down and
try to escape from Pac-Man. During this time, Pac-Man is able to eat them,
which is worth 200, 400, 800, and 1600 points, successively. The point values
are reset to 200 each time another power dot is eaten, so the agent would want
to eat all four ghosts per power dot. If a ghost is eaten, its remains hurry
back to the center of the maze, where the ghost is reborn. At certain
intervals, a fruit appears near the center of the maze and remains there for a
while. Eating this fruit is worth 100 points.
We use the MsPacman-v0 environment from OpenAI Gym
(https://gym.openai.com/envs/MsPacman-v0/), where a state is represented by a
color image. Here, we developed an IBL model for this task and created the
SpeedyIBL implementation of this model to perform the simulation of 100 runs
of 100 episodes. An episode terminates when a $2500$-step limit is reached,
when Pac-Man successfully eats all the dots, or when she loses three lives. As
in the Minimap task, for each option $(s,a)$, where $s$ is a state and $a$ is
an action, we consider all the generated instances taking the form of
$(s,a,x)$ with $x$ being an outcome. The parameter $delayed$ is set to true if
Pac-Man receives a positive reward, and false otherwise. The performance is
determined by the average collected reward.
#### 4.2.5 Fireman
The Fireman task replicates the coordination in a firefighting service,
wherein agents need to pick up matching items to extinguish a fire. This task
was used for examining deep reinforcement learning agents Palmer2019Negative .
In the experiment, the task is simulated in a gridworld of size $11\times 14$,
as illustrated in Fig. 6. Two agents, A1 and A2, located within the gridworld
are tasked with locating an equipment pickup area and choosing one of the
firefighting items. Afterwards, they need to navigate to the location of the
fire (F) to extinguish it. The task is fully cooperative, as both agents are
required to extinguish one fire. More importantly, the location of the fire
changes in every episode.
Figure 6: Fireman game
The agents receive a collective reward according to the match between their
selected firefighting items, as determined by the payoff matrix in Table 2.
The matrix is derived from a partially stochastic climbing game MatignonLF12
that has a stochastic reward: if both agents select equipment E2, they get 14
points with probability 0.5, and 0 otherwise. The Fireman task thus has both
stochastic and dynamic properties.
| E1 (Agent 2) | E2 (Agent 2) | E3 (Agent 2)
---|---|---|---
E1 (Agent 1) | 11 | -30 | 0
E2 (Agent 1) | -30 | 14/0 | 6
E3 (Agent 1) | 0 | 0 | 5
Table 2: Payoff matrix (rows: Agent 1’s choice; columns: Agent 2’s choice).
Here we developed an IBL model for this task and created the SpeedyIBL
implementation of it to perform simulations of 100 runs of 100 episodes. An
episode terminates when a $2500$-step limit is reached or when the agents
successfully extinguish the fire. After each episode, the fire is placed at a
new random location and the agents restart from the pre-defined start
positions.
As in the search and rescue Minimap task, a state $s$ of agent A1 (resp. A2)
is represented by a gray-scale image of the same gridworld size, using the
following pixel values to represent the entities in the gridworld:
$s[x][y]=240$ (resp. 200) if agent A1 (resp. A2) is located at coordinate
$(x,y)$, 55 if the fire is at $(x,y)$, 40 if equipment E1 is at $(x,y)$, 50 if
equipment E2 is at $(x,y)$, 60 if equipment E3 is at $(x,y)$, 100 if an
obstacle is at $(x,y)$, and 0 otherwise. Moreover, we assume that the agents
cannot observe the relative position of the other, and hence their states do
not include the pixel values of the other agent. For each option $(s,a)$,
where $s$ is a state and $a$ is an action, we consider all the generated
instances taking the form of $(s,a,x)$ with $x$ being an outcome. The flag
$delayed$ is set to true if the agents finish the task, and false otherwise.
The performance is determined by the average collected reward.
#### 4.2.6 Cooperative Navigation
In this task, three agents (A1, A2 and A3) must cooperate through physical
actions to reach a set of three landmarks (L1, L2 and L3) shown in Fig. 7; see
Lowe2017Multi . The agents can observe the relative positions of the other
agents and landmarks, and are collectively rewarded based on the number of
landmarks that they cover. For instance, if all the agents cover only one
landmark, say L2, they receive one point. By contrast, if they cover all three
landmarks, they get the maximum of three points. Simply put, the agents want
to cover all landmarks, so they need to learn to coordinate which landmark
each of them must cover.
Figure 7: Cooperative navigation
Here we developed an IBL model for this task and created the SpeedyIBL
implementation of it to perform simulations of 100 runs of 100 episodes. An
episode terminates when a $2500$-step limit is reached or when each of the
agents covers one landmark. After each episode, the agents restart from the
pre-defined start positions.
In this task, a state $s$ is also represented by a gray-scale image of the
same gridworld size, using the following pixel values to represent the
entities in the environment: $s[x][y]=240$ if agent A1 is located at
coordinate $(x,y)$, 200 if agent A2 is at $(x,y)$, 150 if agent A3 is at
$(x,y)$, 40 if landmark L1 is at $(x,y)$, 50 if landmark L2 is at $(x,y)$, 60
if landmark L3 is at $(x,y)$, and 0 otherwise. For each option $(s,a)$, where
$s$ is a state and $a$ is an action, we consider all the generated instances
taking the form of $(s,a,x)$ with $x$ being an outcome. The flag $delayed$ is
set to true if the agents receive a positive reward, and false otherwise. The
performance is determined by the average collective reward.
## 5 Results
In this section, we present the results of the SpeedyIBL and PyIBL models
across all the considered tasks. The comparison is provided in terms of the
average running time and performance.
### 5.1 Average Running time and Performance
Table 3 shows the overall average computational time, and Table 4 the average
performance, across the runs and 100 episodes. The Ratio column in Table 3
indicates the speed improvement from running the model in SpeedyIBL over
PyIBL.
Task | PyIBL | SpeedyIBL | Ratio
---|---|---|---
Binary choice | 0.0087 | 0.0076 | 1.14
Insider Attack Game | 0.1411 | 0.0652 | 2.2
Minimap | 21951.88 ($\approx$ 365 mins $\approx$ 6 hours) | 78.4 ($\approx$ 1.3 mins) | 279
Ms.Pac-man | 162372.58 ($\approx$ 2706.2 mins $\approx$ 45 hours) | 111.98 ($\approx$ 1.86 mins) | 1450
Fireman | 23743.36 ($\approx$ 395.72 mins $\approx$ 6.6 hours) | 37.72 ($\approx$ 0.62 mins) | 629
Cooperative Navigation | 9741.37 ($\approx$ 162 mins $\approx$ 2.7 hours) | 2.59 ($\approx$ 0.04 mins) | 3754
Table 3: Average running time in seconds of a run
The ratio of PyIBL running time to SpeedyIBL running time in Table 3 shows
that the benefit of SpeedyIBL over PyIBL increases significantly with the
complexity of the task. In a simple task such as binary choice, SpeedyIBL
performs 1.14 times faster than PyIBL. The speed-up ratio grows with
higher-dimensional state-space tasks: in Minimap, SpeedyIBL was 279 times
faster than PyIBL, and in Ms. Pac-man it was 1450 times faster.
Furthermore, the multi-agent tasks exhibit the largest ratio benefit of
SpeedyIBL over PyIBL. For example, in the Cooperative Navigation task, PyIBL
took about 2.7 hours to finish a run, whereas SpeedyIBL took only 2.59 seconds.
In all tasks, we observe that the computational time of SpeedyIBL is
significantly shorter than running the same task in PyIBL; we also observe
that there is no significant difference in the performance of SpeedyIBL and
PyIBL ($p>0.05$). These results suggest that SpeedyIBL is able to greatly
reduce the execution time of an IBL model without compromising its
performance.
Task | Metric | PyIBL | SpeedyIBL | $t$-test
---|---|---|---|---
Binary choice | PMax | 0.8333 | 0.8275 | $t=-0.83,p=0.4>0.05$
Insider Attack Game | Average Reward | 1.3828 | 1.3751 | $t=-0.38,p=0.69>0.05$
Minimap | Average Reward | 4.1021 | 4.2641 | $t=0.87,p=0.38>0.05$
Ms.Pac-man | Average Reward | 228.357 | 228.464 | $t=0.72,p=0.47>0.05$
Fireman | Average Reward | 4.7825 | 4.9456 | $t=1.07,p=0.28>0.05$
Cooperative Navigation | Average Reward | 2.7049 | 2.7261 | $t=0.69,p=0.48>0.05$
Table 4: Average performance of a run
### 5.2 Learning curves
Figure 8 shows the comparison of average running time (middle column) and
average performance (right column) between PyIBL (Blue) and SpeedyIBL (Green)
across episodes for all the six tasks.
(a) Binary Choice
(b) Insider Attack
(c) Minimap
(d) Ms.Pac-man
(e) Fireman
(f) Cooperative Navigation
Figure 8: The comparison between SpeedyIBL (Green line) and PyIBL (Blue line)
over time in the considered tasks.
In the Binary choice task, we observe a small difference in execution time
over the first 100 episodes, where SpeedyIBL runs slightly faster than PyIBL.
To illustrate how the benefit of the SpeedyIBL implementation over PyIBL
increases significantly as the number of episodes grows, we ran these models
over 5000 episodes. Figure 9 illustrates the curse of exponential growth very
clearly: PyIBL’s execution time increases exponentially with more episodes,
and the benefit of SpeedyIBL over the PyIBL implementation is clear with
increased episodes. The PMax curves of SpeedyIBL and PyIBL overlap, indicating
the same performance.
Figure 9: The comparison between SpeedyIBL and PyIBL in 5000 playing episodes
of binary choice task.
In the Insider Attack game, as shown in Figure 8(b), the relation between
SpeedyIBL and PyIBL in terms of computational time again shows an increasing
benefit with the number of episodes. Their running times are indistinguishable
initially, but the difference becomes distinct over the last 60 episodes.
Regarding performance (i.e., average reward), again, their performance over
time is nearly identical. Learning in this task was more difficult, given the
design of the task, and we do not observe a clear upward trend in the learning
curve due to the presence of stochastic elements in the task.
In all the remaining tasks (Minimap, Ms. Pac-man, Fireman, and Cooperative
Navigation), given the multi-dimensionality of their state representations and
the number of agents involved in the Fireman and Cooperative Navigation tasks,
the curse of exponential growth is observed from early on, as shown in Figure
8(c). The processing time of PyIBL grows nearly exponentially over time in all
cases. The curve of SpeedyIBL also increases, but it appears nearly constant
relative to the exponential growth of PyIBL, given the significant difference
between the two when plotted on the same scale.
The performance over time is again indistinguishable between PyIBL and
SpeedyIBL. Depending on the task, its dynamics, and its stochastic elements,
the models’ learning curves may fluctuate over time (e.g., Ms. Pac-man), but
when the scenarios are consistent over time, the models show similar learning
curves for both PyIBL and SpeedyIBL.
## 6 Discussion and Conclusions
The curse of exponential growth is an important computational problem that
emerges in many modeling approaches involving tabular and loop computations:
as more observations accumulate and the dimensions of a task and the number of
agents modeled in a task increase, the execution time of such a model will
also increase. Models slow down with the increased number of computations that
need to be done in a model.
In this paper, we demonstrate the curse of exponential growth and propose a
solution to it within cognitive models, in particular Instance-Based Learning
models GONZALEZ03 . We chose IBL models because they make it possible to show
how models constructed in agreement with the same theory can exhibit different
behaviors according to the complexity, number of agents, and
hyper-dimensionality of the decision tasks.
We propose a new implementation for IBL models in SpeedyIBL, a Python library
that allows researchers to create multiple IBL agents with fast processing and
response times without compromising performance. SpeedyIBL relies on the same
IBL algorithm GONZALEZ03 but improves on the PyIBL implementation of that
algorithm PyIBL with vectorization and parallel computation
larsen2000exploiting . The underlying idea of the SpeedyIBL implementation is
to speed up performance by using a data structure that stores memory more
efficiently and by leveraging vectorization in computation.
We have demonstrated the robustness of SpeedyIBL by comparing it with PyIBL, a
widely used Python implementation of IBLT, on a wide range of tasks of
increasing complexity. We demonstrate how the SpeedyIBL implementation, based
on the same theory, can be exponentially more efficient than the traditional
PyIBL implementation, across tasks that range from single-agent single-state,
to single-agent multi-state, to multi-agent multi-state settings. The results
show that SpeedyIBL performs significantly faster than PyIBL while keeping
performance as good as PyIBL’s. Moreover, the difference in running time
between SpeedyIBL and PyIBL becomes large especially in multi-agent domains
and high-dimensional state spaces.
With its fast processing time, SpeedyIBL can not only be used in simulation
experiments but can also be integrated into browser-based applications in
which IBL agents interact with human subjects. Given that research on
human–machine behavior has attracted much attention lately, we are convinced
that the SpeedyIBL implementation will bring real benefits to researchers in
the area.
###### Acknowledgements.
This research was partly sponsored by the Defense Advanced Research Projects
Agency and was accomplished under Grant Number W911NF-20-1-0006 and by AFRL
Award FA8650-20-F-6212 subaward number 1990692 to Cleotilde Gonzalez.
## References
* (1) Aggarwal, P., Thakoor, O., Mate, A., Tambe, M., Cranford, E.A., Lebiere, C., Gonzalez, C.: An exploratory study of a masking strategy of cyberdeception using cybervan. In: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 64, pp. 446–450. SAGE Publications Sage CA: Los Angeles, CA (2020)
* (2) Anderson, J.R., Lebiere, C.J.: The atomic components of thought. Psychology Press (2014)
* (3) Bellman, R.: Dynamic Programming. Princeton University Press, Princeton, New Jersey (1957)
* (4) Cranford, E.A., Gonzalez, C., Aggarwal, P., Cooney, S., Tambe, M., Lebiere, C.: Toward personalized deceptive signaling for cyber defense using cognitive models. Topics in Cognitive Science 12(3), 992–1011 (2020)
* (5) Cranford, E.A., Gonzalez, C., Aggarwal, P., Tambe, M., Cooney, S., Lebiere, C.: Towards a cognitive theory of cyber deception. Cognitive Science 45(7) (2021)
* (6) Cranford, E.A., Lebiere, C., Gonzalez, C., Cooney, S., Vayanos, P., Tambe, M.: Learning about cyber deception through simulations: Predictions of human decision making with deceptive signals in stackelberg security games. In: C. Kalish, M.A. Rau, X.J. Zhu, T.T. Rogers (eds.) Proceedings of the 40th Annual Meeting of the Cognitive Science Society, CogSci 2018, Madison, WI, USA, July 25-28, 2018 (2018)
* (7) Cranford, E.A., Lebiere, C., Rajivan, P., Aggarwal, P., Gonzalez, C.: Modeling cognitive dynamics in (end)-user response to phishing emails. Proceedings of the 17th ICCM (2019)
* (8) Erev, I., Ert, E., Plonsky, O., Cohen, D., Cohen, O.: From anomalies to forecasts: Toward a descriptive model of decisions under risk, under ambiguity, and from experience. Psychological review 124(4), 369 (2017)
* (9) Erev, I., Ert, E., Roth, A.E., Haruvy, E., Herzog, S.M., Hau, R., Hertwig, R., Stewart, T., West, R., Lebiere, C.: A choice prediction competition: Choices from experience and from description. Journal of Behavioral Decision Making 23(1), 15–47 (2010)
* (10) Gonzalez, C.: Decision-making: a cognitive science perspective. The Oxford handbook of cognitive science 1, 1–27 (2017)
* (11) Gonzalez, C., Ben-Asher, N., Martin, J.M., Dutt, V.: A cognitive model of dynamic cooperation with varied interdependency information. Cognitive science 39(3), 457–495 (2015)
* (12) Gonzalez, C., Dutt, V.: Instance-based learning: Integrating decisions from experience in sampling and repeated choice paradigms. Psychological Review 118(4), 523–51 (2011)
* (13) Gonzalez, C., Fakhari, P., Busemeyer, J.: Dynamic decision making: Learning processes and new research directions. Human factors 59(5), 713–721 (2017)
* (14) Gonzalez, C., Lerch, J.F., Lebiere, C.: Instance-based learning in dynamic decision making. Cognitive Science 27(4), 591–635 (2003)
* (15) Gonzalez, C., Vanyukov, P., Martin, M.K.: The use of microworlds to study dynamic decision making. Computers in human behavior 21(2), 273–286 (2005)
* (16) Gulati, A., Nguyen, T.N., Gonzalez, C.: Task complexity and performance in individuals and groups without communication. In: AAAI Fall Symposium on Theory of Mind for Teams (2021)
* (17) Hasselt, H.v., Guez, A., Silver, D.: Deep reinforcement learning with double q-learning. In: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI’16, p. 2094–2100. AAAI Press (2016)
* (18) Hertwig, R.: Decisions from experience. The Wiley Blackwell handbook of judgment and decision making 1, 240–267 (2015)
* (19) Hertwig, R., Barron, G., Weber, E.U., Erev, I.: Decisions from experience and the effect of rare events in risky choice. Psychological Science 15(8), 534–539 (2004)
* (20) Lebiere, C.: The dynamics of cognition: An ACT-R model of cognitive arithmetic. Kognitionswissenschaft 8(1), 5–19 (1999)
* (21) Kahneman, D., Tversky, A.: Prospect theory: An analysis of decision under risk. Econometrica 47(2), 363–391 (1979)
* (22) Kuo, F.Y., Sloan, I.H.: Lifting the curse of dimensionality. Notices of the AMS 52(11), 1320–1328 (2005)
* (23) Larsen, S., Amarasinghe, S.: Exploiting superword level parallelism with multimedia instruction sets. ACM SIGPLAN Notices 35(5), 145–156 (2000)
* (24) Lejarraga, T., Dutt, V., Gonzalez, C.: Instance-based learning: A general model of repeated binary choice. Journal of Behavioral Decision Making 25(2), 143–153 (2012)
* (25) Lowe, R., Wu, Y., Tamar, A., Harb, J., Abbeel, P., Mordatch, I.: Multi-agent actor-critic for mixed cooperative-competitive environments. In: Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS’17, p. 6382–6393. Curran Associates Inc., Red Hook, NY, USA (2017)
* (26) Matignon, L., Laurent, G.J., Fort-Piat, N.L.: Independent reinforcement learners in cooperative markov games: a survey regarding coordination problems. Knowl. Eng. Rev. 27(1), 1–31 (2012). DOI 10.1017/S0269888912000057. URL https://doi.org/10.1017/S0269888912000057
* (27) Mnih, V., Kavukcuoglu, K., Silver, D., Graves, A., Antonoglou, I., Wierstra, D., Riedmiller, M.: Playing atari with deep reinforcement learning. arXiv preprint arXiv:1312.5602 (2013)
* (28) Morgenstern, O., Von Neumann, J.: Theory of games and economic behavior. Princeton university press (1953)
* (29) Morrison, D., Gonzalez, C.: Pyibl: A python implementation of iblt. https://www.cmu.edu/dietrich/sds/ddmlab/downloads.html. Accessed: 2021-09-27
* (30) Morrison, D., Gonzalez, C.: Pyibl python implementation of ibl. URL http://pyibl.ddmlab.com/. Version 4.1
* (31) Nguyen, T.N., Gonzalez, C.: Cognitive machine theory of mind. In: Proceedings of CogSci (2020)
* (32) Nguyen, T.N., Gonzalez, C.: Effects of decision complexity in goal-seeking gridworlds: A comparison of instance-based learning and reinforcement learning agents. In: Proceedings of the 18th intl. conf. on cognitive modelling (2020)
* (33) Nguyen, T.N., Gonzalez, C.: Minimap: A dynamic decision making interactive tool for search and rescue missions. Tech. rep., Carnegie Mellon University (2021)
* (34) Nguyen, T.N., Gonzalez, C.: Theory of mind from observation in cognitive models and humans. Topics in Cognitive Science (2021). DOI https://doi.org/10.1111/tops.12553. URL https://onlinelibrary.wiley.com/doi/abs/10.1111/tops.12553
* (35) Nguyen, T.N., McDonald, C., Gonzalez, C.: Credit assignment: Challenges and opportunities in developing human-like ai agents. Tech. rep., Carnegie Mellon University (2021)
* (36) Palmer, G., Savani, R., Tuyls, K.: Negative update intervals in deep multi-agent reinforcement learning. In: E. Elkind, M. Veloso, N. Agmon, M.E. Taylor (eds.) Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’19, Montreal, QC, Canada, May 13-17, 2019, pp. 43–51. International Foundation for Autonomous Agents and Multiagent Systems (2019)
* (37) Savage, L.J.: The foundations of statistics. Naval Research Logistics Quarterly (1954)
* (38) Silver, D., Huang, A., Maddison, C.J., Guez, A., Sifre, L., Van Den Driessche, G., Schrittwieser, J., Antonoglou, I., Panneershelvam, V., Lanctot, M., et al.: Mastering the game of go with deep neural networks and tree search. nature 529(7587), 484–489 (2016)
* (39) Skinner, B.F.: Contingencies of reinforcement: A theoretical analysis, vol. 3. BF Skinner Foundation (2014)
* (40) Sutton, R.I., Staw, B.M.: What theory is not. Administrative science quarterly pp. 371–384 (1995)
* (41) Sutton, R.S., Barto, A.G.: Reinforcement learning: An introduction. MIT press (2018)
* (42) Tversky, A., Kahneman, D.: Advances in prospect theory: Cumulative representation of uncertainty. Journal of Risk and uncertainty 5(4), 297–323 (1992)
* (43) Vinyals, O., Babuschkin, I., Chung, J., Mathieu, M., Jaderberg, M., Czarnecki, W.M., Dudzik, A., Huang, A., Georgiev, P., Powell, R., et al.: Alphastar: Mastering the real-time strategy game starcraft ii. DeepMind blog 2 (2019)
* (44) Wong, A., Bäck, T., Kononova, A.V., Plaat, A.: Multiagent deep reinforcement learning: Challenges and directions towards human-like approaches. arXiv preprint arXiv:2106.15691 (2021)
Analyzing the Performance of Graph Neural Networks
with Pipe Parallelism
Anonymous Authors
###### Abstract
Many interesting datasets ubiquitous in machine learning and deep learning can
be described via graphs. As the scale and complexity of graph-structured
datasets increase, such as in expansive social networks, protein folding,
chemical interaction networks, and material phase transitions, improving the
efficiency of the machine learning techniques applied to these is crucial. In
this study, we focus on Graph Neural Networks (GNN) that have found great
success in tasks such as node or edge classification and link prediction.
However, standard GNN models have scaling limits due to necessary recursive
calculations performed through dense graph relationships that lead to memory
and runtime bottlenecks. While new approaches for processing larger networks
are needed to advance graph techniques, and several have been proposed, we
study how GNNs could be parallelized using existing tools and frameworks that
are known to be successful in the deep learning community. In particular, we
investigate applying pipeline parallelism to GNN models with GPipe, introduced
by Google in 2018.
## 1 Introduction
Traditional flat or sequential data delivery cannot fully satisfy many of
today’s demanding deep learning models, especially as more data structures of
interest can be better represented as high-dimensional graphs instead of low-
dimensional grids. Graph machine learning has demonstrated successful
applications in domains such as chemistry and drug design Duvenaud et al.
(2015); Mercado et al. (2020), natural language processing Vashishth (2019),
spatio-temporal forecasting Yu et al. (2017), security Zhou et al. (2020),
social networks Zhou et al. (2020), knowledge graphs Arora (2020), recommender
systems Ying et al. (2018), protein design discovery Strokach et al. (2020),
and material phase transitions Bapst et al. (2020). With its increased use,
especially on large datasets, performance and scaling challenges with Graph
Neural Networks (GNN) Zhou et al. (2018) are becoming prevalent when using
existing machine learning frameworks and accelerators, because of memory and
data movement limitations Auten et al. (2020).
A variety of new solutions to address these issues have been proposed and are
highlighted in Section 3. However, as a practical consideration, leveraging
existing state-of-the-art tools and frameworks with demonstrated success at
improving deep neural network performance is valuable to push these
technologies forward. Therefore, to better understand how training and
inference can be more efficient for GNN models, we implement and analyze the
performance of parallelized GNN models compared to their unparallelized
counterparts when trained on a single CPU, a single GPU, and multiple GPUs. As GNNs
are executed sequentially, either layer-by-layer or stage-by-stage, the
motivation for this study is to extend current techniques for improving
performance by introducing pipeline parallelism into the GNN model
architecture Zhang et al. (2020).
## 2 Graph Neural Networks
The success of deep learning on traditional grid- or sequence-based inputs,
such as images and sentences, cannot be overstated. Nevertheless, many
datasets in the real-world cannot be expressed within a Euclidean coordinate
system, and instead naturally take an arbitrary form of graphs or networks.
Various studies exist on how to generalize neural networks for the application
to arbitrary irregular graphs Bruna et al. (2013); Henaff et al. (2015);
Duvenaud et al. (2015); Li et al. (2015); Defferrard et al. (2016); Kipf &
Welling (2016), and we follow the exposition of Kipf and Welling Kipf &
Welling (2016) who first introduced the notion of a convolution architecture
for a graph.
We consider the general scenario of node classification in a graph. The input
is a graph $G=(V,E)$, where $V$ is the set of vertices and $E$ is the set of
edges. Each node (or vertex) $i$ in $V$ has a set of features $x_{i}$. Some
pairs of nodes are connected; these connections are called edges (or links)
and form the set $E$. If $n$ is the number of nodes and each node has $d$
features, then the set of all features forms an $n\times d$ matrix. Some nodes
may have labels from a set of $C$ classes. The task is to classify each node
into one of the $C$ classes using the feature information propagated through
the edge connectivity between nodes across the graph.
Table 1: Comparisons of training dataset sizes used in this work and considered for future experimentation.
Dataset | Nodes | Edges | Classes
---|---|---|---
Cora | 2,708 | 5,429 | 7
CiteSeer | 3,312 | 4,732 | 6
PubMed | 19,717 | 44,338 | 3
Reddit | 233,000 | 150,000,000 | 50
Amazon | 6,000,000 | 180,000,000 | 11
For our analysis, we use the Cora, CiteSeer, and PubMed datasets which are
well-established citation network datasets Sen et al. (2008); Yang et al.
(2016) often used in benchmark training. For comparison of these in Table 1,
we also list the sizes of two larger datasets, the Reddit post dataset
Hamilton et al. (2017) and the Amazon data dump McAuley et al. (2015).
Although the datasets used in this study are small and do not require the
techniques explored here for training efficiency, they offer valuable
benchmarks for GNNs and for measuring the baseline efficiency of parallelized
models.
A GNN approaches the node classification problem by building a neural network
layer atop a simultaneous message-passing paradigm. Suppose there are $L+1$
layers, $H_{0},\ldots,H_{L}$. Then $H_{0}=X$, the input set of features. For
each layer $l$, the set of output features $H^{l+1}$ depends only on the
previous layer $H^{l}$ and the input graph $G$. So, for some efficiently
computable function $f$, we have $H^{l+1}=f(H^{l},G)$. Implementations of the
GNN model consider different choices of $f$. By setting the output of the last
layer to one neuron per class, the model computes the logits for each node,
which are then used to classify the nodes. Assuming $f$ is differentiable,
this approach can be optimized by standard gradient descent algorithms. In
most graph networks studied in the literature, the features $H^{l+1}(i)$ for
node $i$ depend on the original feature $H^{l}(i)$ and the features of
neighboring nodes $H^{l}(j)$, where $j$ is a neighbor of $i$ connected by edge
{$i$,$j$}. In some settings, these edges can be of different types, such as
directed or undirected, and may include features, in which case the edge
properties also contribute to the calculation of $f$. In the message-passing
paradigm, the output features of a layer are updated simultaneously from the
input features of the layer, rather than through sequential updates. For many
learning tasks on a graph, earlier approaches Dijkstra (1959); Cheriton &
Tarjan (1976) usually introduced problem-specific architectures or spectral
graph theory to make predictions. However, these algorithms are limited
because they require prior knowledge of the graph structure, whereas the GNN
model provides a unified approach that allows for studying the properties of a
graph itself.
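For instance, the graph convolutional network of Kipf & Welling (2016)
instantiates $f$ as
$H^{l+1}=f(H^{l},G)=\sigma\left(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}H^{l}W^{l}\right),$
where $\tilde{A}=A+I$ is the adjacency matrix of $G$ with added self-loops,
$\tilde{D}$ is its diagonal degree matrix, $W^{l}$ is a learned weight matrix,
and $\sigma$ is a nonlinearity: each node’s output features are a normalized
aggregate of its own and its neighbors’ input features.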
The experiments presented in this paper are based on the Graph Attention
Network (GAT), a GNN architecture that uses attention layers on top of graph
convolutions Veličković et al. (2017). By leveraging this self-attention
mechanism, GAT achieves state-of-the-art accuracy on several transductive and
inductive graph benchmarks, including the Cora, CiteSeer, and PubMed datasets.
We chose this model because it can be applied to graphs of different degrees
by specifying arbitrary weights on the neighbors, making it useful for a wide
variety of graph datasets. GAT is also applicable to inductive learning
problems and generalizes to unseen graph data. In our experiments, it does not
perform as efficiently as simpler Graph Convolutional Networks (GCN), such as
those inspired by Kipf & Welling (2016), which makes it an illustrative
technique for parallelization benchmarks.
## 3 Related Work
As described above, the core message-passing function of a GNN is the
aggregation of features from the neighborhood of each node. Computing a
gradient descent operation requires storing the entire graph as a single
training batch. With increasing graph size or edge density, the time and
memory complexity of this computation can grow exponentially and introduce an
information bottleneck Alon & Yahav (2020). This effect limits the scalability
of traditional GNNs, many of which, including GAT, do not address these
concerns as their benchmark datasets were of a reasonable size for the
available device capacities.
GraphSAGE Hamilton et al. (2017) was the first attempt to address graph
scalability by using a neighborhood sampling with mini-batch training to limit
the number of nodes included in a batch. This approach can introduce redundant
calculations, as the same nodes may appear in multiple sampled batches,
leading to “neighbor explosion” Zeng et al. (2020). Similar sampling techniques
responded to this challenge by batching sub-graphs instead of nodes, such as
in Chiang et al. (2019). However, graph clustering approaches are faced with
the challenge of defining sub-graphs that sufficiently preserve the edge
topology that guides the node feature updates during training, which is an
issue directly observed in the analysis of the present study. NeuGraph Ma et
al. (2019) introduced parallel computation to enable GNN scalability through a
new graph-parallel abstraction of Scatter-ApplyEdge-Gather-ApplyVertex (SAGA-
NN). This framework conveniently encapsulates almost all existing GNNs in the
literature Zhang et al. (2020), and serves as a foundation for studying
parallelized GNN performance optimization. The authors explored the design of
a GNN processing framework on top of a dataflow based deep learning system,
through which they optimized graph computations, scheduling, and parallelism
in a dataflow-based deep learning framework for graphs. Exploring
computational efficiencies at the device level, $G^{3}$ Liu et al. (2020) is a
GNN training framework that implements graph-structured parallel operations
that leverage the architectural features on GPUs. By directly utilizing graph-
aware components of the GPU, they demonstrated significant speedups in
training times over standard implementations in PyTorch and TensorFlow.
Finally, a recent approach that avoids graph sampling over nodes or sub-graphs
is the Scalable Inception Graph Neural Network (SIGN) Frasca et al. (2020).
Here, graph convolutional filters of different sizes precompute intermediate
node representations. This method enables its scaling to large graphs with
classic mini-batching because it retains sufficient expressiveness from the
node relationships for effective learning.
## 4 Pipeline Parallelism
Google Brain introduced the scalable pipeline parallelism library GPipe Huang
et al. (2019) to enable efficient distributed training of large,
memory-consuming deep learning models on current accelerator architectures.
According to their published results, GPipe sped up the training of a 557
million-parameter model by 25 times using eight TPU devices, and by 3.5 times
using four devices.
GPipe configures a distribution of a sequentially-designed deep neural network
across multiple devices. To maximize these devices’ capability to calculate in
parallel, it further splits the input mini-batch of training samples into
“micro-batches” that are distributed across the devices. This micro-batching
technique reduces the load on the accelerators’ available memory, resulting in
effectively simultaneous training of the same batch across all devices. This
approach to pipeline parallelism resembles a stacked combination of model
parallelism and small-scale data parallelism. During the forward pass, when
each partition finishes processing a micro-batch, it shifts the output to the
next partition and then immediately begins work on the next micro-batch,
enabling partitions to overlap across GPUs. During the backward pass, the
gradients for each micro-batch are calculated using the same model parameters
as in the forward pass, and are accumulated over the whole mini-batch at the
end to update the model parameters. Therefore, the number of partitions
separating the data does not affect model quality.
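Formally, if the mini-batch loss is the sum of the $M$ micro-batch losses
$L_{m}$, all evaluated at the same parameters $\theta$, then
$\nabla_{\theta}L=\sum_{m=1}^{M}\nabla_{\theta}L_{m},$
so accumulating the micro-batch gradients recovers exactly the gradient of the
full mini-batch.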
Although designed for deep neural networks, the GPipe workflow is applicable
to GNNs, with some necessary adaptations that we explore in this study. To the
best of our knowledge, this is the first work to consider the idea of applying
pipeline parallelism using existing libraries to potentially optimize the
runtime of GNNs.
## 5 Implementation
Experiments included training a GAT-based multi-layer sequential neural
network on the task of node classification with the citation datasets
described above. The PubMed set was solely used to compare performance with
pipeline parallelism and graph data batching. The forward propagation model
structure remained consistent across all experiments designed with a drop-out
layer (p = 0.6) followed by a GAT layer with eight heads (attention drop-out =
0.6), a leaky ReLU activation function, a second drop-out layer (p = 0.6), a
second GAT layer (eight heads, attention drop-out = 0.6) where the outputs are
averaged, and, finally, a log softmax function. The neural network model was
implemented in PyTorch with the graph frameworks, PyTorch Geometric (PyG) Fey
& Lenssen (2019), and Deep Graph Library (DGL) Wang et al. (2019). Each
framework was compared for performance on each device architecture. Pipeline
and data parallelism through GPipe was only implemented through DGL. Trials
included performance measures for a single CPU, single GPU, and pipe parallel
distribution across four GPUs with and without micro-batching of the graph
data. For the single device benchmarks, we used an Intel(R) Xeon(R) CPU @
2.20GHz and NVIDIA Tesla T4 GPU, and four NVIDIA Tesla V100-SXM2 GPUs (DGX)
were used for the distributed pipeline parallel experiments.
GPipe was incorporated into the GNN models for each framework with the
torchgpipe Kim et al. (2020) library, an implementation of the GPipe framework
in PyTorch. The defined model is wrapped in a method that takes as parameters
a distribution of the model layers across the available GPUs and the number of
micro-batches (called “chunks”) to be applied. A chunks value of one
corresponds to the data parallelism feature being disabled.
In Listing 1, `g` contains the complete graph, `numfeats`
represents the number of features per node, and `nclasses` is the number of
classes in the classification task. The customizable `balance` array specifies
how many layers from the sequence to distribute to each GPU. From this, GPipe
automatically manages the necessary movements of the model and data across
devices. An automated distribution algorithm is also available to optimize
this layer assignment. However, for the uniform analysis presented in this
paper, we manually set the layer distribution across four devices to ensure
consistency for all experiments. With `chunks > 1`, the complete dataset or
batches from the training are split into micro-batches by GPipe to increase
device parallelism. After a partition completes its processing of a micro-
batch, it passes the output to the next partition and begins on the next
incoming micro-batch in the pipeline. Through this approach, the multiple
devices effectively process the same mini-batch (or entire dataset)
simultaneously during a training epoch.
Listing 1: Illustrative GPipe implementation with torchgpipe.
import torch.nn as nn
from torchgpipe import GPipe
# Define a sequential model
model = nn.Sequential(
nn.Dropout(0.6),
GAT(g, numfeats, 8),
nn.ELU(),
nn.Dropout(0.6),
GAT(g, 8 * 8, nclasses, take_mean = True),
nn.LogSoftmax(1))
# Wrap the model for pipeline parallelism management
model = GPipe(model, balance = [1, 2, 1, 2], chunks=4)
A key challenge with this implementation for a GNN is that a sequential module
is required for the network layers. A cascade of additional restrictions
results, beginning with the fact that only a single input of features may be
passed through the layers, whereas the graph convolution layer expects as
input the graph data object and its corresponding features. For our
experiments that did not incorporate model parallelism across multiple GPUs,
this condition did not pose an issue, because we could simply include the full
graph data object, `g`, in the GNN model definition and pass the single tensor
of features. However, when model parallelism is activated, GPipe applies
micro-batching to this feature tensor, and the corresponding subset of graph
nodes must be presented to the graph convolution layer instead of the full
graph data object.
As a workaround for enabling micro-batching, we exploited the option that the
sequential module can pass a single tuple comprised of multiple tensors. Then,
we pass the node indices of the graph as the first tensor along with the
corresponding features in a second tensor. GPipe applies its micro-batching to
each tensor in the tuple, and a subset of graph nodes with the corresponding
features are passed along the sequence of layers, as needed. When the graph
convolution layer receives the passed tuple, our adapted code extracts the
node tensor comprised of the sub-graph as determined by the micro-batch from
GPipe. Both DGL and PyG graph frameworks include a method to re-build a graph
structure from a subset of graph nodes, which requires the full graph data
object, `g`, for the re-build. The output is then a sub-graph structure
expected by the graph convolution layer. The second tensor of the passed tuple
that includes the features is subsequently extracted in the graph convolution
layer and used in the forward calculation. Upon completion, the two-tensor
tuple is reformed with the original nodes of the sub-graph and the updated
features to be passed along through the remaining layers of the sequence.
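A minimal sketch of this adaptation for the DGL framework is shown below. The
wrapper class and its names are our own illustration (`dgl.node_subgraph` is
DGL's sub-graph builder used for the re-build described above), and the actual
experimental code may differ.

```python
import dgl
import torch.nn as nn
from dgl.nn import GATConv

class MicroBatchGAT(nn.Module):
    """GAT layer that accepts the (node_ids, features) tuple micro-batched by GPipe."""

    def __init__(self, full_graph, in_feats, out_feats, num_heads=8):
        super().__init__()
        self.g = full_graph                  # full graph kept for sub-graph re-builds
        self.conv = GATConv(in_feats, out_feats, num_heads)

    def forward(self, inputs):
        node_ids, feats = inputs             # unpack the micro-batched tuple
        sub_g = dgl.node_subgraph(self.g, node_ids)  # re-build the sub-graph structure
        h = self.conv(sub_g, feats).flatten(1)       # concatenate the attention heads
        return (node_ids, h)                 # re-form the tuple for the next layer
```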
## 6 Results
Table 2: Benchmark results on multiple compute architectures and graph frameworks for the standard, small graph datasets. Each cell reports average epoch time (ms) $|$ test accuracy.
Compute – Package | Cora | CiteSeer | PubMed
---|---|---|---
CPU – PyG | 104.4 $|$ 0.717 | 182.9 $|$ 0.696 | 798.5 $|$ 0.718
CPU – DGL | 71.3 $|$ 0.785 | 153.4 $|$ 0.710 | 338.6 $|$ 0.723
GPU – PyG | 7.7 $|$ 0.796 | 8.4 $|$ 0.720 | 10.9 $|$ 0.718
GPU – DGL | 13.3 $|$ 0.721 | 12.4 $|$ 0.641 | 12.5 $|$ 0.682
All experimental training runs were performed for 300 epochs on the same GNN
model structure. This model was not optimized for best training performance
but was kept consistent across all scenarios so that direct comparisons of the
graph frameworks, hardware, and parallelism approaches could be made
independent of the model structure.
### 6.1 Benchmarks
Table 3: Benchmarks on different compute architectures and graph frameworks for the sequential GAT model with the PubMed dataset. *The full graph was defined in the GNN model instead of being passed through as a subset of nodes to be re-built as a sub-graph.
Framework | Compute | Epoch 1 (s) | Epochs 2–300 (s) | Ave. Epoch (s) | Train Loss | Train Acc. | Val Acc.
---|---|---|---|---|---|---|---
DGL | Single CPU | 0.3555 | 101.2 | 0.3386 | 0.2000 | 0.9833 | 0.7520
DGL | Single GPU | 0.2254 | 3.736 | 0.0125 | 0.2030 | 1.000 | 0.7520
PyG | Single CPU | 0.7946 | 238.7 | 0.7985 | 0.1567 | 0.9833 | 0.7910
PyG | Single GPU | 0.2509 | 3.260 | 0.0109 | 0.2131 | 1.000 | 0.7920
DGL | DGX with GPipe Chunk = 1* | 6.985 | 3.755 | 0.0126 | 0.1984 | 1.000 | 0.7660
PyG | DGX with GPipe Chunk = 1* | 7.312 | 3.407 | 0.0114 | 0.2097 | 1.000 | 0.7840
DGL | DGX with GPipe Chunk = 1 | 7.294 | 15.62 | 0.0522 | 0.1879 | 0.9500 | 0.7780
DGL | DGX with GPipe Chunk = 2 | 7.192 | 12.30 | 0.0411 | 0.4283 | 0.8333 | 0.6000
DGL | DGX with GPipe Chunk = 3 | 7.281 | 15.29 | 0.0511 | 0.5204 | 0.7667 | 0.4920
DGL | DGX with GPipe Chunk = 4 | 7.712 | 18.06 | 0.0604 | 0.6016 | 0.7500 | 0.4580
As a first comparison benchmark, we trained the GNN model on single devices
with the citation datasets of Cora, CiteSeer, and PubMed. The training time
and test accuracy results are summarized in Table 2. As expected, training
times, as measured by the average time per training epoch, on the GPU are
faster for both graph frameworks across all datasets. Interestingly, DGL
trained on average 35% faster than PyG on a CPU, while PyG trained on average
29% faster than DGL on a GPU. This outcome suggests that, at least for our GNN
model trained on a single device, PyG may be better optimized for a GPU and
DGL for a CPU. Test accuracy remained within a range of 15.5%, with PyG
averaging 2.4% better than DGL over all datasets.
Next, we compare the average training time per epoch on three compute
architectures, including the single CPU and GPU, as previously measured, with
the DGX system comprised of four GPUs leveraging GPipe pipeline parallelism
without micro-batching. In each case presented in Figure 1, the complete graph
data object was included in the graph convolution layer during each training
epoch. Table 3 reports a comprehensive benchmark of the GAT model on the
PubMed dataset across combinations of frameworks and compute configurations
for an expanded analysis and comparison of runtimes and accuracy. We also
include the duration of the first epoch in the reported training times to
provide a complete comparison of the graph frameworks and hardware, which
varied slightly across graph frameworks and architectures. The remaining
training epochs ran on the order of 80–100 times faster on the single GPU
compared to the single CPU.
Figure 1: Benchmark training times for DGL and PyG on the PubMed dataset
comparing the single devices to multiple devices with pipeline parallelism.
Here, data parallelism is disabled.
Surprisingly, no significant performance improvement in training time is
observed in the four GPU system using GPipe with a “chunk size” = 1 (i.e., no
micro-batching) compared to a single GPU. The PubMed dataset used in these
experiments is considered small compared to those that are intended to benefit
from pipeline parallelism. Therefore, the added cost of shifting data across
the four GPUs may outweigh the minimal speedup provided by GPipe. This may
also suggest that the additional feature of data parallelism (via the data
“chunks”) provided by GPipe is crucial to realizing meaningful performance
improvements. We also measured the training accuracy resulting from both graph
machine learning frameworks applied with GPipe across four GPUs, but without
micro-batching, exactly as in the timing measurements. As plotted in Figure 2,
each framework converged similarly in accuracy over 300 training epochs in
this configuration.
Figure 2: Training accuracy with the DGL and PyG frameworks with pipe
parallelism across four GPUs with no graph data batching.
### 6.2 Increased training time
To investigate the impact of data parallelism within GPipe, we activated
micro-batching and ran the training with the DGL graph framework to compare
total training times between a single GPU and multiple distributed GPUs. As
seen in Figure 3, the training times dramatically increase with micro-batching
enabled at two, three, and four batches, as generated by GPipe.
Figure 3: Increased training time with GPipe applied pipeline parallelism with
an increasing number of graph micro-batches.
Our approach adapts the forward training to pass the graph information along
with the features through a tuple of tensors into the sequential model. The
GPipe micro-batching splits each tensor within the tuple so that only a subset
of nodes is passed through, along with its corresponding set of features, as
expected. The first convolution layer receives this subset of node indices
but still needs a complete graph structure as its input alongside the feature
tensor. So, a sub-graph re-build is first performed with a method provided by
the DGL framework. This sub-graph creation from the provided subset of nodes
requires the full graph data object as a reference. However, DGL requires the
full graph, `g`, to remain on the CPU. To generate the sub-graph
within the convolution layer, a copy of the subset node tensor must first be
moved from the GPU onto the CPU, then the sub-graph is built and moved back
onto the GPU. This data flow across devices was performed twice because our
model includes two convolution layers. So, significant overhead was added to
the total time just to enable the basic training calculations. As the chunk
size increased, more micro-batches were generated, resulting in even more sub-
graph build steps. Fortunately, the feature tensor extracted from the passed
tuple could remain on the GPU. However, the updated values were still re-
packaged into a tuple with the original sub-graph nodes to be returned into
the forward pass of the model sequence.
### 6.3 Degraded accuracy
We also observed that the training accuracy suffered severely with an
increasing number of micro-batches. Although GPipe micro-batching can be
disabled, as configured for our benchmark tests (Figures 1 and 2), the
expected benefit of pipeline parallelism requires micro-batching. We next ran
the same DGL framework-based model with GPipe across four GPUs, sequentially
distributed as before, to observe the effects of micro-batching in our adapted
implementation.
The intended design of the GPipe micro-batching through the torchgpipe library
implementation is to separate the features tensor into uniform batches. This
challenges our adaptation that passes a tuple containing both a node tensor
and feature tensor: the micro-batching is applied to each tensor by
sequentially splitting the tensor indices into a number of batches equal to
the chunk-size parameter. This sequential separation preserves
the nodes of the resulting sub-graph with their corresponding features.
However, the edge relationships between the nodes are lost. Although edges are
re-established during the sub-graph re-build in the convolution layers, the
original graph structure is not expected to store its edges sequentially.
Therefore, separating the graph this way during the micro-batching likely
eliminates crucial node relationships that need to be aggregated during the
graph convolution layer calculations (see the toy sketch below). As expected
from such a potential for
significant information loss during the GPipe micro-batching, as the number of
batches generated increases, the training accuracy drops, as seen in Figure 4.
Figure 4: Accuracy drop-off with GPipe and graph micro-batching with
comparisons to the previous training accuracy results without batching.
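To make the information loss concrete, the following toy sketch (our construction: a hypothetical 6-node cycle graph, split with DGL's `dgl.node_subgraph`) mimics GPipe's sequential split of the node-index tensor into two contiguous micro-batches; the edges crossing the chunk boundary vanish from both sub-graphs:

import torch
import dgl

# A 6-node directed cycle: 0->1->2->3->4->5->0 (6 edges in total).
src = torch.tensor([0, 1, 2, 3, 4, 5])
dst = torch.tensor([1, 2, 3, 4, 5, 0])
g = dgl.graph((src, dst), num_nodes=6)

# Sequential split into two contiguous micro-batches, as GPipe does.
for nodes in torch.arange(6).chunk(2):
    subg = dgl.node_subgraph(g, nodes)
    # Each sub-graph keeps only the edges internal to its chunk, so the
    # boundary edges 2->3 and 5->0 are lost.
    print(nodes.tolist(), "->", subg.num_edges(), "edges kept")
# Prints: [0, 1, 2] -> 2 edges kept, [3, 4, 5] -> 2 edges kept (4 of 6).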
## 7 Conclusion and Future Work
By analyzing the performance of GNNs using pipeline parallelism via GPipe, our
first results suggest that although GPipe has demonstrated great success in
optimizing deep neural networks, its ability to deliver efficiency for a graph
neural network remains limited with the adapted implementation presented here.
An immediate scope for future work is to determine how to customize the GPipe
data parallelism to utilize intelligent graph batching instead of a sequential
separation by index. Such an improvement is expected to increase accuracy to
match the benchmark levels while benefiting from the parallelism in runtime
efficiency. The SIGN technique described in Section 3 may be the best batching
approach to consider for parallelizing GNNs with our implementation because it
avoids the known pitfalls of node and graph sampling and instead
provides precomputed node representations that may be straightforwardly mini-
batched by GPipe.
Pipeline parallelism is intended to benefit neural network training on very
large datasets, much greater in size than the PubMed set used here to
establish our adapted implementation without overburdening memory resources.
We anticipate that the runtime benefits will emerge when training on extremely
large graphs, once memory bottlenecks or computational complexity exceed the
capability of a single GPU. Extending the current implementation to
massive datasets, on the scale of millions of nodes and a billion edges, such
as the Reddit post dataset Hamilton et al. (2017), the Amazon data dump
McAuley et al. (2015), and others available with data loaders for DGL and PyG
through the Open Graph Benchmark (OGB) Hu et al. (2020), will better
illustrate the impact of GPipe parallelism on GNNs, and provide a deeper
understanding for potential enhancements to our current implementation for
improving training performance with these existing tools and frameworks.
## Acknowledgments
Thank you to Prof. Ian Foster, Prof. Rick Stevens, and Peng Ding for the
guidance and feedback on this paper. We also thank the DGX team, Daniel
Murphy-Olson and Ryan Aydelott, and the Computing, Environment, and Life
Sciences directorate at Argonne National Laboratory.
## References
* Alon & Yahav (2020) Alon, U. and Yahav, E. On the bottleneck of graph neural networks and its practical implications. _arXiv preprint arXiv:2006.05205_ , 2020.
* Arora (2020) Arora, S. A survey on graph neural networks for knowledge graph completion, 2020.
* Auten et al. (2020) Auten, A., Tomei, M., and Kumar, R. Hardware acceleration of graph neural networks. In _2020 57th ACM/IEEE Design Automation Conference (DAC)_ , pp. 1–6. IEEE, 2020.
* Bapst et al. (2020) Bapst, V., Keck, T., Grabska-Barwińska, A., Donner, C., Cubuk, E., Schoenholz, S., Obika, A., Nelson, A., Back, T., Hassabis, D., and Kohli, P. Unveiling the predictive power of static structure in glassy systems. _Nature Physics_ , 16(4):448–454, 2020.
* Bruna et al. (2013) Bruna, J., Zaremba, W., Szlam, A., and LeCun, Y. Spectral networks and locally connected networks on graphs. _arXiv preprint arXiv:1312.6203_ , 2013.
* Cheriton & Tarjan (1976) Cheriton, D. and Tarjan, R. Finding minimum spanning trees. _SIAM Journal on Computing_ , 5(4):724–742, 1976.
* Chiang et al. (2019) Chiang, W.-L., Liu, X., Si, S., Li, Y., Bengio, S., and Hsieh, C.-J. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In _Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_ , pp. 257–266, 2019.
* Defferrard et al. (2016) Defferrard, M., Bresson, X., and Vandergheynst, P. Convolutional neural networks on graphs with fast localized spectral filtering. In _Advances in neural information processing systems_ , pp. 3844–3852, 2016.
* Dijkstra (1959) Dijkstra, E. A note on two problems in connexion with graphs. _Numerische Mathematik_ , 1:269–271, 1959.
* Duvenaud et al. (2015) Duvenaud, D. K., Maclaurin, D., Iparraguirre, J., Bombarell, R., Hirzel, T., Aspuru-Guzik, A., and Adams, R. P. Convolutional networks on graphs for learning molecular fingerprints. In _Advances in neural information processing systems_ , pp. 2224–2232, 2015.
* Fey & Lenssen (2019) Fey, M. and Lenssen, J. E. Fast graph representation learning with pytorch geometric. _arXiv preprint arXiv:1903.02428_ , 2019.
* Frasca et al. (2020) Frasca, F., Rossi, E., Eynard, D., Chamberlain, B., Bronstein, M., and Monti, F. Sign: Scalable inception graph neural network. _arXiv preprint arXiv:2004.11198_ , 2020.
* Hamilton et al. (2017) Hamilton, W., Ying, Z., and Leskovec, J. Inductive representation learning on large graphs. In _Advances in neural information processing systems_ , pp. 1024–1034, 2017.
* Henaff et al. (2015) Henaff, M., Bruna, J., and LeCun, Y. Deep convolutional networks on graph-structured data. _arXiv preprint arXiv:1506.05163_ , 2015.
* Hu et al. (2020) Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., Catasta, M., and Leskovec, J. Open graph benchmark: Datasets for machine learning on graphs. _arXiv preprint arXiv:2005.00687_ , 2020.
* Huang et al. (2019) Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, D., Chen, M., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In _Advances in neural information processing systems_ , pp. 103–112, 2019.
* Kim et al. (2020) Kim, C., Lee, H., Jeong, M., Baek, W., Yoon, B., Kim, I., Lim, S., and Kim, S. torchgpipe: On-the-fly pipeline parallelism for training giant models. 2020.
* Kipf & Welling (2016) Kipf, T. N. and Welling, M. Semi-supervised classification with graph convolutional networks. _arXiv preprint arXiv:1609.02907_ , 2016.
* Li et al. (2015) Li, Y., Tarlow, D., Brockschmidt, M., and Zemel, R. Gated graph sequence neural networks. _arXiv preprint arXiv:1511.05493_ , 2015.
* Liu et al. (2020) Liu, H., Lu, S., Chen, X., and He, B. G3: when graph neural networks meet parallel graph processing systems on gpus. _Proceedings of the VLDB Endowment_ , 13(12):2813–2816, 2020.
* Ma et al. (2019) Ma, L., Yang, Z., Miao, Y., Xue, J., Wu, M., Zhou, L., and Dai, Y. Neugraph: parallel deep neural network computation on large graphs. In _2019 USENIX Annual Technical Conference (USENIX ATC 19)_ , pp. 443–458, 2019.
* McAuley et al. (2015) McAuley, J., Targett, C., Shi, Q., and Van Den Hengel, A. Image-based recommendations on styles and substitutes. In _Proceedings of the 38th international ACM SIGIR conference on research and development in information retrieval_ , pp. 43–52, 2015.
* Mercado et al. (2020) Mercado, R., Rastemo, T., Lindelöf, E., Klambauer, G., Engkvist, O., Chen, H., and Bjerrum, E. J. Graph Networks for Molecular Design. August 2020. doi: 10.26434/chemrxiv.12843137.v1. URL https://chemrxiv.org/articles/preprint/Graph_Networks_for_Molecular_Design/12843137.
* Sen et al. (2008) Sen, P., Namata, G., Bilgic, M., Getoor, L., Galligher, B., and Eliassi-Rad, T. Collective classification in network data. _AI magazine_ , 29(3):93–93, 2008.
* Strokach et al. (2020) Strokach, A., Becerra, D., Corbi-Verge, C., Perez-Riba, A., and Kim, P. Fast and flexible protein design using deep graph neural network. _Cell Systems_ , 11(4):402–411.e4, 2020.
* Vashishth (2019) Vashishth, S. Neural graph embedding methods for natural language processing. _CoRR_ , abs/1911.03042, 2019. URL http://arxiv.org/abs/1911.03042.
* Veličković et al. (2017) Veličković, P., Cucurull, G., Casanova, A., Romero, A., Lio, P., and Bengio, Y. Graph attention networks. _arXiv preprint arXiv:1710.10903_ , 2017.
* Wang et al. (2019) Wang, M., Zheng, D., Ye, Z., Gan, Q., Li, M., Song, X., Zhou, J., Ma, C., Yu, L., Gai, Y., Xiao, T., He, T., Karypis, G., Li, J., and Zhang, Z. Deep graph library: A graph-centric, highly-performant package for graph neural networks. _arXiv preprint arXiv:1909.01315_ , 2019.
* Yang et al. (2016) Yang, Z., Cohen, W., and Salakhudinov, R. Revisiting semi-supervised learning with graph embeddings. In _International conference on machine learning_ , pp. 40–48. PMLR, 2016.
* Ying et al. (2018) Ying, R., He, R., Chen, K., Eksombatchai, P., Hamilton, W. L., and Leskovec, J. Graph convolutional neural networks for web-scale recommender systems. _CoRR_ , abs/1806.01973, 2018. URL http://arxiv.org/abs/1806.01973.
* Yu et al. (2017) Yu, B., Yin, H., and Zhu, Z. Spatio-temporal graph convolutional neural network: A deep learning framework for traffic forecasting. _CoRR_ , abs/1709.04875, 2017. URL http://arxiv.org/abs/1709.04875.
* Zeng et al. (2020) Zeng, H., Zhou, H., Srivastava, A., Kannan, R., and Prasanna, V. Graphsaint: Graph sampling based inductive learning method. _arXiv preprint arXiv:1907.04931_ , 2020.
* Zhang et al. (2020) Zhang, Z., Leng, J., Ma, L., Miao, Y., Li, C., and Guo, M. Architectural implications of graph neural networks. _IEEE Computer Architecture Letters_ , 19(1):59–62, 2020.
* Zhou et al. (2018) Zhou, J., Cui, G., Zhang, Z., Yang, C., Liu, Z., Wang, L., Li, C., and Sun, M. Graph neural networks: A review of methods and applications. _arXiv preprint arXiv:1812.08434_ , 2018.
* Zhou et al. (2020) Zhou, J., Xu, Z., Rush, A. M., and Yu, M. Automating botnet detection with graph neural networks. _arXiv preprint arXiv:2003.06344_ , 2020.
# Measurement of the scaling slope of compressible magnetohydrodynamic
turbulence by synchrotron radiation statistics
Xue-Wen Zhang,1 Jian-Fu Zhang,1,2 Ru-Yue Wang 1 and Fu-Yuan Xiang1,2
1Department of Physics, Xiangtan University, Xiangtan 411105, China
2Key Laboratory of Stars and Interstellar Medium, Xiangtan University, Xiangtan 411105, China
E-mail: <EMAIL_ADDRESS>; <EMAIL_ADDRESS>
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
Based on magnetohydrodynamic turbulence simulations, we generate synthetic
synchrotron observations to explore the scaling slope of the underlying MHD
turbulence. We propose the new $Q$-$U$ cross intensity $X$ and cross-
correlation intensity $Y$ to measure the spectral properties of magnetic
turbulence, together with statistics of the traditional synchrotron $I$ and
polarization $PI$ intensities. By exploring the statistical behavior of these
diagnostics, we find that the new statistics $X$ and $Y$ can extend the
inertial range of turbulence to improve measurement reliability. When focusing
on different Alfvénic and sonic turbulence regimes, our results show that the
diagnostics proposed in this paper not only reveal the spectral properties of
the magnetic turbulence but also gain insight into the individual plasma modes
of compressible MHD turbulence. The synergy of multiple statistical methods
can extract more reliable turbulence information from the huge volumes of
observational data from the Low-Frequency Array for Radio Astronomy (LOFAR)
and the Square Kilometre Array (SKA).
###### keywords:
ISM: general – ISM: turbulence—magnetohydrodynamics (MHD) — methods: numerical
— polarization
## 1 Introduction
The magnetized turbulent fluids in astrophysical environments can be usually
described by magnetohydrodynamic (MHD) turbulence theory, which plays a
critical role in many astrophysical processes such as star formation (Mac Low
& Klessen 2004), heat conduction (Narayan & Medvedev 2001), magnetic
reconnection (Beresnyak 2017), and acceleration of cosmic rays (Yan & Lazarian
2008; Zhang & Xiang 2021; Zhang et al. 2023). Therefore, studying the
properties of MHD turbulence helps to advance the theory of MHD turbulence and
to understand astrophysical processes associated with MHD turbulence.
Here, we briefly describe three significant advances made in earlier research
on turbulence. The first is about incompressible non-magnetized turbulence. By
using a self-similarity assumption of the turbulence cascade, Kolmogorov
(1941, henceforth K41) derived a power-law relation of $E(k)\sim k^{-5/3}$ in
the inertial range, which is called a classic Kolmogorov spectrum. The second
is about incompressible magnetized turbulence. Iroshnikov (1963) and Kraichnan
(1965) (henceforth IK65) obtained the power-law scaling of $E(k)\sim k^{-3/2}$
in the inertial range by introducing nonlinear energy cascade. Although IK65
advanced the K41 theory by considering the effect of magnetic fields, it
overlooked the critical point that turbulence in magnetized fluids should be
anisotropic (Montgomery & Turner 1981). The third is still about
incompressible magnetized turbulence but focuses on the anisotropy of MHD
turbulence due to turbulent magnetic fields. Focusing on the nonlinear energy
cascade of incompressible strong MHD turbulence, Goldreich & Sridhar (1995,
hereafter GS95) provided the theoretical predictions on the power-law scaling
and anisotropic relationship, as described in more detail in Section 2.1.
At present, many numerical simulations have significantly increased our
knowledge of the scaling, anisotropy, and compressibility of MHD turbulence
(e.g., Cho & Lazarian 2002; see the textbook by Beresnyak & Lazarian 2019; and
a recent review by Beresnyak 2019). The turbulence properties obtained from
such simulations help us understand the acceleration and propagation of cosmic
rays (Yan & Lazarian 2002). In particular, the turbulent reconnection model
proposed in Lazarian & Vishniac (1999, hereafter LV99), which provides a new
interpretation for GS95 theory from the perspective of eddies, has been
applied to various astrophysical environments such as gamma-ray bursts
(Lazarian et al. 2003; Zhang & Yan 2011), microquasars (de Gouveia dal Pino &
Lazarian 2005), active galactic nuclei (Kadowaki et al. 2015) and radio
galaxies (Brunetti & Lazarian 2016).
Due to the large scale of astrophysical systems, with Reynolds numbers $R_{\rm
e}>10^{10}$, it is challenging to simulate a realistic astrophysical
environment by direct numerical simulation. Currently available 3D MHD
simulations can reach Reynolds numbers of $R_{\rm e}\simeq 10^{5}$ (e.g.,
Beresnyak & Lazarian 2019). A distinctive consequence is that the realistic
inertial range of astrophysical turbulence is much greater than that revealed
by numerical simulations. It is therefore more effective to move beyond direct
numerical simulations and develop statistical techniques applicable to
observational data for exploring the properties of MHD turbulence.
When relativistic electrons move in turbulent magnetic fields, they produce
synchrotron radiation whose fluctuations provide information on the magnetic
fields (Schnitzeler et al. 2007; Lazarian & Pogosyan 2012, 2016, henceforth
LP12 and LP16; Iacobelli et al. 2013; Van Eck et al. 2017; West et al. 2020;
Sun et al. 2022). Based on the modern understanding of MHD turbulence and
synchrotron radiation theory, LP12 explored the properties of MHD turbulence
by statistics of synchrotron total intensity fluctuations. They predicted that
synchrotron intensity fluctuations are anisotropic with a long extension along
the direction of magnetic fields. Using the ratio between quadrupole and
monopole components can determine the anisotropy of MHD turbulence, which is
sensitive to the compressibility of underlying turbulence. These theoretical
predictions on anisotropy have been confirmed successfully using numerical
simulations (Herron et al. 2016; Lee et al. 2019; Wang et al. 2022). These
studies are opening new avenues for exploring MHD turbulence using
observational data.
Moreover, LP16 proposed to recover the properties of MHD turbulence using
synchrotron polarization intensity fluctuations (including Faraday rotation
fluctuations). They developed two main techniques from the perspective of
analytical theory, i.e., polarization frequency analysis and polarization
spatial analysis. The former, using a variance of synchrotron polarization
intensity (or its derivative) as a function of the square of the wavelength,
was tested by Zhang et al. (2016). The latter, making use of spatial
correlations of synchrotron polarization intensity (or its derivative) at the
fixed wavelength as a function of the spatial separation $R$, was tested by
Lee et al. (2016) and Zhang et al. (2018). As confirmed, these two methods
recover the scaling index of the underlying turbulence cascade in the inertial
range. Compared with total synchrotron intensity, polarized radiation reveals
not only the magnetic field in the plane of the sky but also the component
parallel to the line of sight (LOS).
From the perspective of synthetic observations, numerical dissipation
inevitably limits the extension of the power-law range, and the greater the
inertial range, the higher the reliability of the measurement. However, from
an observational point of view, the scaling index measurement of MHD
turbulence is also limited by the telescope’s resolution and data noise. It is
necessary to synergize multiple techniques to reveal the properties of MHD
turbulence and enhance the reliability of the measurement results.
This paper aims to advance the study of the power-law scaling properties of
MHD turbulence. With the power spectrum (PS) and structure function (SF)
methods for synchrotron diagnostics, we propose two new statistical quantities
to explore the scaling slope properties of compressible MHD turbulence. The
paper is structured as follows. Section 2 describes theoretical aspects,
including the basic theory of MHD turbulence, the synchrotron radiative
process, and statistical methods. Section 3 introduces the setup of the numerical
simulation of MHD turbulence. Sections 4 and 5 present the numerical results.
In Sections 6 and 7, we provide our discussion and summary, respectively.
## 2 Theoretical description
### 2.1 MHD turbulence theory
GS95 theory is generally considered the basis for MHD turbulence. Note that
GS95 theory focused on incompressible strong MHD turbulence with Alfvénic Mach
number $M{\rm{}_{A}}=V_{\rm L}/V_{\rm A}\simeq 1$, where $V_{\rm L}$ is the
injection velocity at the injection scale $L_{\rm inj}$ and $V_{\rm A}$ is the
Alfvénic velocity. This theory combined the motions of the eddies
perpendicular to the magnetic field with those parallel to the magnetic field
by the critical balance condition $l_{\perp}/v_{\perp}=l_{\parallel}/V_{\rm
A}$, where $v_{\perp}$ is the velocity at the perpendicular scale $l_{\perp}$, and the scales
$l_{\parallel}$, $l_{\perp}$ represent the parallel and perpendicular scales
of eddies, respectively. They found that the motions of eddies perpendicular
to the magnetic field have similar properties to Kolmogorov turbulence with
the spectrum of $E(k_{\perp})\propto{\epsilon}^{2/3}k_{\perp}^{-5/3}$, and the
velocity-scale relation of $v_{\perp}\propto(\epsilon l_{\perp})^{1/3}$, where
$k_{\perp}$ is the wave-vector component perpendicular to the magnetic field
and $\epsilon$ is the rate of energy cascade. According to the velocity-scale
relation and critical balance condition, they predicted an anisotropic
relationship of
$l_{\parallel}\sim V_{\rm A}{\epsilon^{-1/3}}l_{\perp}^{2/3},$ (1)
which delineates the dependencies between the perpendicular and parallel
scales of the eddies.
Later, the GS95 theory was generalized from the trans-Alfvénic turbulence to
sub-Alfvénic and super-Alfvénic ones, respectively (LV99; Lazarian 2006). For
the former, $M_{\rm A}<1$, that is, the turbulence drives with the injection
velocity $V_{\rm L}$ less than the Alfvénic velocity $V_{\rm A}$, LV99 found
that the turbulence cascade corresponds to two regimes. The first regime is a
weak turbulence cascade ranging from the injection scale $L_{\rm inj}$ to the
transition scale $L_{\rm tr}=L_{\rm inj}M_{\rm A}^{2}$. The second one is
strong turbulence from the transition scale $L_{\rm tr}$ to the dissipation
scale $L_{\rm diss}$, where the energy cascade perpendicular to the magnetic
field is analogous to the hydrodynamic Kolmogorov cascade. In this strong
turbulence regime, they derived the turbulence velocity as
$v_{\perp}\approx V_{\rm L}L_{\rm{inj}}^{-1/3}M_{\rm A}^{1/3}l_{\perp}^{1/3},$
(2)
and the anisotropic relation as
$l_{\rm\parallel}\approx L_{\rm inj}^{1/3}M_{\rm A}^{-4/3}l_{\perp}^{2/3}.$
(3)
When taking $M_{\rm A}=1$, the above equations will return to the relevant
expressions of GS95 theory.
As for the latter, $M_{\rm A}>1$, the MHD turbulence starting from the
injection scale $L_{\rm inj}$ is almost unconstrained by the magnetic field
and has properties similar to those of hydrodynamic turbulence. With the
cascade of turbulence, it experiences a transition from hydrodynamic-like
turbulence to MHD one at the scale $L_{\rm A}=L_{\rm inj}M_{\rm A}^{-3}$.
However, from the scale $L_{\rm A}$ to $L_{\rm diss}$, the turbulence again
follows the characteristics of GS95 theory, having the velocity-scale relation
of
$v_{\rm\perp}\approx V_{\rm L}L_{\rm inj}^{-1/3}l_{\perp}^{1/3},$ (4)
and the anisotropy of
$l_{\parallel}\approx L_{\rm inj}^{1/3}M_{\rm A}^{-1}l_{\perp}^{2/3}.$ (5)
At present, the properties of compressible MHD turbulence have become an
important part of the modern understanding of MHD turbulence theory.
Compressible MHD turbulence can be decomposed into three modes, namely Alfvén,
slow and fast modes, as confirmed by numerical simulations (Cho & Lazarian
2002, 2003; Kowal & Lazarian 2010). Specifically, they found that Alfvén and
slow modes follow the GS95-type scaling law, namely $E(k_{\perp})\propto
k_{\perp}^{-5/3}$ and the scale-dependent anisotropy, while fast mode presents
the scaling law of $E(k_{\perp})\propto k_{\perp}^{-3/2}$ and the isotropy. In
addition, for compressible MHD turbulence, the Alfvén mode is incompressible,
while the slow and fast modes, called magnetosonic modes, are compressible.
(When focusing on compressible MHD turbulence, as done in this work, one can
for simplicity refer to the slow mode as a compressible mode. However, for
incompressible MHD turbulence with plasma parameter $\beta\gg 1$, the slow
mode is a pseudo-Alfvén mode with a purely solenoidal character.)
Despite the progress made in the development of MHD turbulence theory, there
are still a lot of controversial issues. For example, Maron & Goldreich (2001)
numerically studied the incompressible MHD turbulence and found a shallow
energy spectral index of $k^{-3/2}$ different from $k^{-5/3}$ given by GS95.
Subsequently, to explain this shallow index, Boldyrev (2006) proposed the
dynamic alignment model to modify the GS95 scaling index from $-5/3$ to
$-3/2$. Later, Beresnyak & Lazarian (2010) and Beresnyak (2014) thought that
the spectral index $-5/3$ cannot extend to the entire inertial range, but
deviate near the part of the injection scale (see also Beresnyak & Lazarian
2019 for the recent review). This can explain why the low-resolution numerical
simulations generate a shallower spectral index, while the results of higher-
resolution numerical simulations are consistent with GS95. However, some
recent studies, e.g., Chandran et al. (2015), agreed with the dynamical
alignment theory.
By analyzing the power spectrum of supersonic turbulence under solenoidal and
compressive driving, Federrath (2013) found that the velocity spectra follow
$k^{-2}$. Under solenoidal driving, the spectrum of the density-weighted
velocity $\rho^{1/3}v$ follows $k^{-1.74}$, while under compressive driving
the slope is significantly steeper, close to $k^{-2.1}$. The latter result is
consistent with compressible turbulence theory (Galtier & Banerjee 2011),
which predicts a $k^{-19/9}$ scaling for the density-weighted velocity.
Recently, Mallet & Schekochihin (2017)
proposed the intermittency model to modify MHD turbulence theory at scales
close to the dissipation scales. However, because of the limitations of
numerical simulations, it is difficult to confirm.
Until now, these attempts have not significantly changed the framework of the
GS95 theory. Although our study below is based on the GS95 theory, possible
modifications to the theory would not affect our results, which are based on
synthetic synchrotron observations.
### 2.2 Synchrotron emission fluctuations
For the sake of simplicity, this work assumes that relativistic electrons
interacting with the turbulent magnetic field satisfy a homogeneous and power-
law energy distribution of $N(E)=N_{0}E^{-p}$, where $p$ and $E$ represent the
spectral index and energy of relativistic electrons, respectively. Here,
$N_{0}$ is the normalization constant of electrons. According to the classic
textbooks (Rybicki & Lightman 1979; Longair 2011), observable Stokes
parameters under the condition of no Faraday rotation effect can be expressed
as follows (see also, e.g.,Waelkens et al. 2009; LP16):
${I}({\bm{X}})=\int_{0}^{L}dz(B_{\rm x}^{2}({\bm{x}})+B_{\rm
y}^{2}({\bm{x}}))^{\frac{p-3}{4}}(B_{\rm x}^{2}({\bm{x}})+B_{\rm
y}^{2}({\bm{x}})),$ (6) $Q_{0}({\bm{X}})=\int_{0}^{L}dz(B_{\rm
x}^{2}({\bm{x}})+B_{\rm y}^{2}({\bm{x}}))^{\frac{p-3}{4}}(B_{\rm
x}^{2}({\bm{x}})-B_{\rm y}^{2}({\bm{x}})),$ (7)
$U_{0}({\bm{X}})=\int_{0}^{L}dz(B_{\rm x}^{2}({\bm{x}})+B_{\rm
y}^{2}({\bm{x}}))^{\frac{p-3}{4}}(2B_{\rm x}({\bm{x}})B_{\rm y}({\bm{x}})),$
(8)
where $L$ is the integral depth along the LOS, $B_{\rm x}$ and $B_{\rm y}$ the
components of the magnetic field perpendicular to the LOS, and
${\bm{X}}=({x},{y})$ the spatial coordinate in the plane of the sky.
Focusing on linear polarization synchrotron radiation, we have a complex
vector
${\bm{P}}({\bm{X}},\lambda^{2})=Q+iU=\int_{0}^{L}d{z}P_{\rm
in}({\bm{X}},z)e^{2i{\rm\phi}({\bm{X}},z)},$ (9)
describing polarization states in the plane of the sky. In this equation, the
exponential factor involves Faraday rotation effect. The observed polarization
angle $\phi$ is expressed by
$\phi=\phi_{0}+{\lambda^{2}}\rm RM,$ (10)
where the angle $\phi_{0}$ is the intrinsic angle. The Faraday rotation
measure $\rm RM$ is written as
${\rm RM}(\bm{X},z)=0.81\int_{0}^{z}n_{\rm
e}({\bm{X}},z^{\prime})B_{\parallel}({\bm{X}},z^{\prime})dz^{\prime}~{}{\rm
rad}~{}{\rm m^{-2}},$ (11)
where $n_{\rm e}$ is the density of thermal electrons and $B_{\parallel}$ is
the component of the magnetic field along the LOS. The integration runs along
the LOS from the position of the source at $z$ to the observer.
Moreover, the part $P_{\rm in}$ of integrated function in Equation (9)
represents the intrinsic polarization intensity density and can be expressed
by $P_{\rm in}\equiv(Q_{0},U_{0})$. After including Faraday rotation effect,
the new Stokes parameters $Q$ and $U$ can be rewritten as
$Q({\bm{X}},{\rm\lambda^{2}})=Q_{\rm 0}{\rm\cos 2\phi}+U_{\rm 0}{\rm\sin
2\phi},$ (12) $U({\bm{X}},{\rm\lambda^{2}})=U_{\rm 0}{\rm\cos 2\phi}-Q_{\rm
0}{\rm\sin 2\phi},$ (13)
from which we obtain the synchrotron polarization intensity of
$PI=\sqrt{Q^{2}+U^{2}}.$ (14)
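As a minimal numerical illustration of Equations (6)-(8) and (14) in the absence of Faraday rotation, the following sketch (our construction; the function name and the $[x,y,z]$ array layout with the LOS along $z$ are assumptions) computes synthetic maps from magnetic field cubes:

import numpy as np

def stokes_maps(Bx, By, p=2.5, dz=1.0):
    # Synthetic Stokes maps from field cubes indexed [x, y, z] (LOS = z).
    w = (Bx**2 + By**2) ** ((p - 3.0) / 4.0)     # common weight factor
    I = (w * (Bx**2 + By**2)).sum(axis=2) * dz   # Equation (6)
    Q0 = (w * (Bx**2 - By**2)).sum(axis=2) * dz  # Equation (7)
    U0 = (w * 2.0 * Bx * By).sum(axis=2) * dz    # Equation (8)
    PI = np.sqrt(Q0**2 + U0**2)                  # Equation (14), no Faraday rotation
    return I, Q0, U0, PI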
A complete description of synchrotron radiation can be encoded by a
polarization matrix, e.g., as done in Equation (E1) of LP12. This paper
focuses on the correlation statistics between $Q$ and $U$. The first is the
$Q$-$U$ cross intensity, defined by
$X^{2}=QU,$ (15)
which is related to the relative importance of $Q$ and $U$. (This definition
was previously used to explore the anisotropy of MHD turbulence via the
structure function (LP12) and to trace magnetic field directions by gradient
techniques (Lazarian & Yuen 2018).) In general, both the turbulence properties
and the level of Faraday rotation depolarization will lead to different $Q$
and $U$ values and different ratios of $Q$ and $U$. As done in LP16, when
considering the correlation function of
the polarization complex vector ${\bm{P}}$, we have
$\langle P(\bm{X_{1}})P^{*}(\bm{X_{2}})\rangle=\langle
Q(\bm{X_{1}})Q(\bm{X_{2}})+U(\bm{X_{1}})U(\bm{X_{2}})\rangle\\\ +i\langle
U(\bm{X_{1}})Q(\bm{X_{2}})-Q(\bm{X_{1}})U(\bm{X_{2}})\rangle,$ (16)
which is split into real and imaginary parts that are separately invariant
with respect to frame rotation. The symmetric real part carries the most
straightforward information about the magnetized turbulent medium and has been
numerically studied in Zhang et al. (2016). LP16 predicted that the
antisymmetric imaginary part reflects helical correlations of the magnetic
field, which still needs numerical testing. In addition, analytical studies
demonstrated that the anisotropy of the MHD turbulence can generate the
observable antisymmetric correlations. Based on the antisymmetric imaginary
part in Equation (16), we rewrite the cross-correlation intensity as
$Y^{2}(\bm{X}^{\prime})=\int{d^{2}{\bm{X}}[U({\bm{X}})Q({\bm{X}}+\bm{X}^{\prime})-Q({\bm{X}})U({\bm{X}}+\bm{X}^{\prime})]}.$
(17)
Adopting Equations (15) and (17), we will explore the scaling properties of
MHD turbulence in comparison with the traditional $PI$ and $I$ statistics.
Note that the cross-correlation intensity $Y$ is covariant under rotation and
translation of the Stokes frame, while the cross intensity $X$ is unchanged
only under translation of the Stokes frame.
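Both statistics are straightforward to evaluate on synthetic $Q$ and $U$ maps. The sketch below (our construction; the FFT-based circular correlation is an assumption that suits the periodic simulation box) implements Equations (15) and (17):

import numpy as np

def qu_cross_intensity(Q, U):
    # X^2 = Q U (Equation (15)); X is complex where Q U < 0, and its
    # real part is used for mapping, as in Figure 1 below.
    return np.sqrt(Q * U + 0j)

def cross_correlation_intensity(Q, U):
    # Y^2(R) = sum_X [U(X) Q(X+R) - Q(X) U(X+R)] (Equation (17)),
    # evaluated with the cross-correlation theorem on the periodic box.
    FQ, FU = np.fft.fft2(Q), np.fft.fft2(U)
    Y2 = np.fft.ifft2(np.conj(FU) * FQ - np.conj(FQ) * FU)
    return np.sqrt(Y2)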
### 2.3 Statistical methods
Although turbulence is a complex and chaotic process, it allows us to use
statistical methods to reveal its underlying properties. In this paper, we
focus on the SF and PS methods. We first consider the simplest and often used
correlation function for an arbitrary 2D physical quantity $\zeta$. According
to the textbook by Monin & Yaglom (1975), correlation and structure functions
are written as
${\rm CF}({\bm{R}})=\langle\zeta{({\bm{X}}+{\bm{R}})}\zeta({\bm{X}})\rangle,$
(18) ${\rm
SF}({\bm{R}})=\langle(\zeta{({\bm{X}}+{\bm{R}})}-\zeta({\bm{X}}))^{2}\rangle,$
(19)
and they satisfy the following relation
${\rm SF}({\bm{R}})=2[{\rm CF(0)-CF}(\bm{R})],$ (20)
where ${\bm{R}}$ is a separation vector and $\langle...\rangle$ denotes the
average over the whole volume.
The PS, a common statistical tool in the study of turbulence, provides
information on the energy cascade of MHD turbulence, such as the spectral
shape and index and the scales of energy injection (source) and dissipation
(sink). The PS of a two-dimensional physical
quantity is expressed by
$P_{\rm
2D}({\bm{K}})=\frac{1}{(2\pi)^{2}}\mathrm{\int}\langle\zeta({\bm{X}})\zeta({\bm{X}}+{\bm{R}})\rangle
e^{{-i{\bm{K}}}\cdot{\bm{R}}}d{\bm{R}}$ (21)
by the Fourier transform of the correlation function. The ring-integrated 1D
spectrum for a 2D variable follows
$E_{\rm{2D}}({K})=\int_{K-0.5}^{K+0.5}P_{\rm{2D}}(K)d{K}.$ (22)
Note that there is a direct connection between PS and SF by the scaling slope:
$E_{\rm 2D}(K)\propto K^{-m}$ and ${\rm SF}(R)\propto R^{m-1}$ (LP12; see also
numerical confirmation in Lee et al. 2016; Zhang et al. 2018), where $m$ is
equal to 8/3 for Kolmogorov power spectrum in two dimensions.
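A compact sketch of these statistics (our construction; the function name and normalization are illustrative) computes the 2D power spectrum of a map and its ring-integrated 1D spectrum:

import numpy as np

def ring_spectrum(zeta):
    # 2D power spectrum (Equation (21)) and its ring-integrated 1D
    # spectrum (Equation (22)) for a square 2D map zeta.
    n = zeta.shape[0]
    P2d = np.abs(np.fft.fftshift(np.fft.fft2(zeta)))**2 / n**4
    kx, ky = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2)
    K = np.sqrt(kx**2 + ky**2)
    kbins = np.arange(1, n // 2)
    E1d = np.array([P2d[(K >= k - 0.5) & (K < k + 0.5)].sum() for k in kbins])
    return kbins, E1d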
Table 1: Data cubes arising from the simulations of compressible MHD turbulence. Relevant parameters: $B_{0}$: mean magnetic field along the $x$ coordinate; $\beta$: plasma parameter; $L_{\rm tr}$: transition scale of strong turbulence in the sub-Alfvénic regime; $L_{\rm A}$: transition scale of strong turbulence in the super-Alfvénic regime.
Run | $B_{0}$ | $M_{\rm A}$ | $M_{\rm s}$ | $\beta=2M^{2}_{\rm A}/M_{\rm s}^{2}$ | $L_{\rm inj}$ [2<$k$<3] | $L_{\rm inj}$ [$k$=2.5] | $L_{\rm tr}$($L_{\rm A}$) [2<$k$<3] | $L_{\rm tr}$($L_{\rm A}$) [$k$=2.5]
---|---|---|---|---|---|---|---|---
1 | 1.00 | 0.70 | 0.87 | 1.30 | [170.6, 256.0] | 204.8 | [83.59, 125.44] | 100.35
2 | 1.00 | 0.55 | 4.46 | 0.03 | [170.6, 256.0] | 204.8 | [51.61, 77.44] | 61.95
3 | 1.00 | 0.65 | 0.48 | 3.67 | [170.6, 256.0] | 204.8 | [72.09, 108.16] | 86.53
4 | 0.10 | 1.69 | 3.11 | 0.60 | [170.6, 256.0] | 204.8 | [35.34, 53.04] | 42.43
5 | 0.10 | 1.72 | 0.45 | 29.30 | [170.6, 256.0] | 204.8 | [33.53, 50.31] | 40.25
## 3 MHD turbulence simulations
To generate synchrotron observations, we use a third-order accurate hybrid,
essentially non-oscillatory (ENO) scheme (Cho & Lazarian 2002) to solve ideal
isothermal MHD equations in a periodic box of size $2\pi$:
$\frac{\partial\rho}{\partial t}+\nabla\cdot(\rho{\bm{v}})=0,$ (23)
$\rho[\frac{\partial{\bm{v}}}{\partial
t}+({\bm{v}}\cdot\nabla){\bm{v}}]+\nabla
p-{\bm{J}}\times\frac{\bm{B}}{4\pi}={\bm{f}},$ (24)
$\frac{\partial{\bm{B}}}{\partial t}-\nabla\times({\bm{v}}\times{\bm{B}})=0,$
(25) $\nabla\cdot{\bm{B}}=0,$ (26)
where $\rho$ is the density, $p=c^{2}_{s}\rho$ the thermal gas pressure,
$\bm{J}$ the current density, and $\bm{f}$ an external driving force. The
turbulence is driven by a solenoidal driving force at the wave number of
$2<k<3$ (corresponding to the mean wavenumber of $k\approx 2.5$) (Cho &
Lazarian 2003). By setting the initial mean magnetic field (along the $x$
axis) and the gas pressure, we run several simulations with the resolution of
$512^{3}$ covering different MHD turbulence regimes. When the running reaches
a statistically steady state, from output data cubes of the magnetic field,
velocity, and density, we calculate the Alfvénic and sonic Mach numbers to
characterize the properties of each simulation. Specifically, Alfvénic Mach
number is obtained by $M_{\rm A}=\langle\bm{V}_{\rm L}/\bm{V}_{\rm A}\rangle$
and sonic Mach number by $M_{\rm s}=\langle\bm{V}_{\rm L}/c_{\rm s}\rangle$,
where $c_{\rm s}=\sqrt{p/\rho}$ is sound speed, and $V_{\rm
A}\approx\frac{\bm{B}}{\sqrt{\rho}}$ is Alfvénic velocity. The resulting
numerical values are listed in Table 1.
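A short sketch (our construction) of how these Mach numbers are evaluated from the output cubes, following the definitions above with $V_{\rm A}\approx B/\sqrt{\rho}$:

import numpy as np

def mach_numbers(v, B, rho, c_s):
    # v and B have shape (3, n, n, n); rho has shape (n, n, n).
    vmag = np.sqrt((v**2).sum(axis=0))
    v_alf = np.sqrt((B**2).sum(axis=0) / rho)  # local Alfven speed
    return np.mean(vmag / v_alf), np.mean(vmag / c_s)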
The compressible MHD turbulence can be decomposed into Alfvén, slow and fast
modes by using the following theoretical procedures (Cho & Lazarian 2002; Cho
& Lazarian 2003)
$\hat{\xi}_{\rm
s}\propto(1+\frac{\beta}{2}-\sqrt{D})(k_{\perp}{\hat{\bm{k}}}_{\perp})+(-1+\frac{\beta}{2}-\sqrt{D})(k_{\parallel}{\hat{\bm{k}}}_{\parallel}),$
(27) $\hat{\xi}_{\rm
f}\propto(1+\frac{\beta}{2}+\sqrt{D})(k_{\perp}{\hat{\bm{k}}}_{\perp})+(-1+\frac{\beta}{2}+\sqrt{D})(k_{\parallel}{\hat{\bm{k}}}_{\parallel}),$
(28) $\hat{\xi}_{\rm
A}\propto-{\hat{\bm{k}}}_{\perp}\times{\hat{\bm{k}}}_{\parallel},$ (29)
in the Fourier space, where $D=(1+\frac{\beta}{2})^{2}-2\beta\cos^{2}\theta$,
and $\cos\theta={\hat{\bm{k}}}_{\parallel}\cdot{\hat{\bm{B}}}_{0}$. By
projecting the magnetic field onto the displacement vectors $\hat{\xi}_{\rm
f}$, $\hat{\xi}_{\rm A}$ and $\hat{\xi}_{\rm s}$, we obtain the individual
mode components of the magnetic field and then transform them back to real
space by the inverse Fourier transform (a sketch of the Alfvén-mode projection
is given below). Later, Kowal & Lazarian (2010) proposed an optimized
decomposition method that uses a discrete wavelet transform to decompose each
component of the magnetic field into orthogonal wavelets. This method, which
depends on the local magnetic field rather than the mean magnetic field, is
better suited for decomposition in super-Alfvénic turbulence with weak
magnetic fields. Since only sub-Alfvénic turbulence is involved in our work
when decomposing plasma modes, we use the Fourier decomposition method.
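As referenced above, a sketch of the Alfvén-mode projection (our construction, for a mean field along $x$; the slow and fast modes follow analogously from Equations (27) and (28)):

import numpy as np

def alfven_mode(By, Bz):
    # Extract the Alfven-mode magnetic field via the Fourier projection of
    # Equation (29). With B0 along x, xi_A ~ (0, -kz, ky) / k_perp, which
    # has no x component, so only By and Bz enter the projection.
    n = By.shape[0]
    k = np.fft.fftfreq(n) * n
    _, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kperp = np.sqrt(ky**2 + kz**2)
    kperp[kperp == 0] = 1.0                  # avoid division by zero
    xi_y, xi_z = -kz / kperp, ky / kperp
    Fy, Fz = np.fft.fftn(By), np.fft.fftn(Bz)
    proj = Fy * xi_y + Fz * xi_z             # B_k projected onto xi_A
    BAy = np.real(np.fft.ifftn(proj * xi_y))
    BAz = np.real(np.fft.ifftn(proj * xi_z))
    return BAy, BAz                          # Alfven-mode x component is zero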
When exploring below the influence of the angle between the mean magnetic
field and LOS on the power spectrum, we adopt the Euler rotation algorithm to
rotate data cubes (Lazarian & Yuen 2018; Carmo et al. 2020; Wang et al. 2022;
Malik et al. 2023). The components of the rotation matrix
$\bm{\hat{F}}=\hat{\bm{F}}_{x}\hat{\bm{F}}_{y}\hat{\bm{F}}_{z}$ are expressed
as follows:
${\bm{\hat{F}_{x}}}=\left[\begin{array}[]{cccc}1&0&0&\\\ 0&\rm
cos(\varphi_{x})&-\rm sin(\varphi_{x})&\\\ 0&\rm sin(\varphi_{x})&\rm
cos(\varphi_{x})&\end{array}\right],$ (30)
${\hat{\bm{F}}_{y}}=\left[\begin{array}[]{cccc}\rm cos(\varphi_{y})&0&\rm
sin(\varphi_{y})&\\\ 0&1&0&\\\ -\rm sin(\varphi_{y})&0&\rm
cos(\varphi_{y})&\\\ \end{array}\right],$ (31)
${\hat{\bm{F}}_{z}}=\left[\begin{array}[]{cccc}\rm cos(\varphi_{z})&-\rm
sin(\varphi_{z})&0&\\\ \rm sin(\varphi_{z})&\rm cos(\varphi_{z})&0&\\\
0&0&1&\\\ \end{array}\right],$ (32)
where $\varphi_{m=x,y,z}$ is the rotation angle along the $x$, $y$, $z$ axis,
respectively. For a data cube $\Re(\bm{r})$ of magnetic field components, the
rotated data cube is obtained by the transformation
$\hat{\bm{F}}\Re(\hat{\bm{F}}^{-1}\bm{r})$. Since the rotation of the cube is
equivalent to a rotation of the observation frame in the opposite direction,
we apply the inverse transformation to the position vector ${\bm{r}}$.
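For completeness, a sketch (our construction) of the rotation matrices in Equations (30)-(32):

import numpy as np

def rotation_matrix(phi_x, phi_y, phi_z):
    # Euler rotation F = Fx Fy Fz, Equations (30)-(32).
    cx, sx = np.cos(phi_x), np.sin(phi_x)
    cy, sy = np.cos(phi_y), np.sin(phi_y)
    cz, sz = np.cos(phi_z), np.sin(phi_z)
    Fx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Fy = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Fz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Fx @ Fy @ Fz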
Figure 1: Images of synchrotron radiation diagnostics: $Q$-$U$ cross ($X$),
cross-correlation ($Y$), linear polarization $(PI)$ and total ($I$)
intensities. The calculations without the Faraday rotation effect are based on
run3 listed in Table 1. The mean magnetic field is along the horizontal
direction.
## 4 Statistical Results: Compressible MHD Turbulence
When generating synthetic synchrotron observations, we adopt dimensionless
units. In the case of involving the Faraday rotation effect, we quantify
relevant physical quantities: magnetic fields as $B=1.23\rm\ \mu G$, thermal
electron density as $n_{\rm e}=0.01\rm\ cm^{-3}$, and the box size as
$L=100\rm\ pc$, which correspond to the Galactic ISM environment. In addition,
we set the spectral index of relativistic electrons to $p=2.5$. (Electron
spectral indices are associated with a specific acceleration mechanism. For
instance, de Gouveia dal Pino & Lazarian (2005) predicted that turbulent
reconnection acceleration produces a steeper spectral index of $p=2.5$,
confirmed numerically by Zhang et al. (2023). The spectral index determined
from observations will vary across astrophysical environments. Our previous
studies demonstrated that changes in the spectral index do not impede the
turbulence measurement using the synchrotron polarization technique; see LP12
for theoretical predictions, and Lee et al. 2016 and Zhang et al. 2018 for
numerical confirmations.)
### 4.1 Slope measurement of MHD turbulence without Faraday rotation
Before performing statistical analysis, let us illustrate the map structures
of the statistical diagnostics considered in this paper. Based on run3 listed
in Table 1, we obtain the intensities of different synchrotron radiation
diagnostics, namely $Q$-$U$ cross $X$, cross-correlation $Y$, linear
polarization $PI$ and total $I$ intensities. Here, we do not consider the
Faraday rotation effect. Maps of these diagnostics are plotted in Figure 1
(using the real parts of $X$ and $Y$), from which we can see that the map
structures of the $Q$-$U$ cross $X$ and cross-correlation $Y$ are
elongated along the direction perpendicular to the mean magnetic field while
maps of linear polarization $PI$ and total $I$ intensities have nearly similar
structures extending along the horizontal direction, i.e., the direction of
the mean magnetic field. The perpendicular distribution from $X$ and $Y$
should be dominated by the Stokes parameter $U$, while the horizontal
structure from $PI$ and $I$ is dominated by the Stokes parameter $Q$. In
general, map structures in the Stokes parameter $Q$ are aligned with the
direction of the mean magnetic field, while those of $U$ are perpendicular to
the mean magnetic field. Moreover, the intensities of $PI$ and $I$ diagnostics
have larger amplitudes than those of $X$ and $Y$. This is caused by amplitude
changes in $Q$ and $U$, the intensity of which is associated with the magnetic
field strength in the plane of the sky (see Equations (7) and (8)).
The SFs of $X$, $Y$, $PI$, and $I$ are plotted in Figure 2 for four different
turbulence regimes: sub-Alfvénic and supersonic (left upper panel), sub-
Alfvénic and subsonic (right upper), super-Alfvénic and supersonic (left
lower) and super-Alfvénic and subsonic (right lower). As shown, SFs cannot
recover the scaling of MHD turbulence in the regime ranging from the injection
scale $L_{\rm inj}$ to the transition scale $L_{\rm tr}$ for $M_{\rm A}<1$ (or
$L_{\rm A}$ for $M_{\rm A}>1$). These numerical results are in agreement with
theoretical predictions of MHD turbulence cascade due to weak turbulent
interaction (LV99; Lazarian 2006). At the scale less than the transition scale
$L_{\rm tr}$ (or $L_{\rm A}$), i.e., in the strong turbulence regime, these
four diagnostics present the power-law distributions predicted by LV99 and
Lazarian (2006). From the figures, We can see that: (1) in the case of sub-
Alfvénic turbulence (upper panels), the measurements from $X$ and $Y$ are
closer to the slope index 5/3 than those from $PI$ and $I$; and (2) in the
case of super-Alfvénic turbulence scenario (lower panels), the $Y$ statistics
can better determine the slope index 5/3 compared with the other three
statistics $X$, $PI$ and $I$. Comparing sub-Alfvénic and super-Alfvénic
turbulence, we find that super-Alfvénic turbulence has a shorter inertial
range for $X$, $PI$, and $I$. Interestingly, we find that statistics $Y$ can
well reflect the scaling of 5/3 with a wide inertial range and does not depend
on specific turbulence properties.
Figure 2: Structure function of synchrotron radiation diagnostics: $Q$-$U$
cross ($X$), cross-correlation ($Y$), linear polarization $(PI)$ and total
($I$) intensities in different turbulence regimes. The yellow and green
vertical dashed lines represent the injection and transition scales,
respectively. Figure 3: Power spectra of the synchrotron radiation
diagnostics: $Q$-$U$ cross ($X$), cross-correlation ($Y$), linear polarization
$(PI)$ and total ($I$) intensities in different turbulence regimes. The yellow
and green vertical dashed lines represent the injection and transition scales,
respectively.
Based on data cubes used in Figure 2, we plot the PS of $X$, $Y$, $PI$ and $I$
in Figure 3. As seen, the scaling indices of PS of $X$ and $Y$ are consistent
with $-8/3$ in four turbulence regimes, with an extended inertial range for
MHD turbulence. This is a valuable finding in this paper. Due to the presence
of numerical dissipation at large wavenumber, the measurements of $PI$ and $I$
can only provide a narrow power-law range, which limits their flexibility to
determine the scaling slope of the underlying MHD turbulence. Therefore, the
new statistics $X$ and $Y$ have advantages over traditional statistics $PI$
and $I$ in measuring the spectral index and inertial range of MHD turbulence.
In addition, the amplitudes of PS of $X$, $Y$, $PI$, and $I$ in the sub-
Alfvénic turbulence regime are greater than those in the super-Alfvénic
turbulence regime. The reason is that the larger mean magnetic field in
sub-Alfvénic turbulence produces stronger synchrotron emission, enhancing the
Stokes parameter $Q$ and $U$ intensities. In the case of sub-Alfvénic
turbulence (upper panels), the amplitudes of $X$ are greater than those of
$PI$ and $I$ in the inertial range, while it is the opposite in the case of
super-Alfvénic turbulence (lower panels). Our studies on PS demonstrate that
the four statistics explored can measure the scaling slope of MHD turbulence.
We emphasize that their synergistic use enhances the reliability of the
measurement. At the same time, comparing the amplitudes of the different
quantities gives insight into the magnetic field strength. This may provide a
new way to measure the magnetization, $M_{\rm A}$; further research is
necessary.
### 4.2 Slope measurement of MHD turbulence with Faraday rotation
#### 4.2.1 Effect of radiative frequency
In this section, we explore how the radiation frequency and the angle between
the mean magnetic field and the LOS influence the PS of $X$, $Y$, and $PI$
($I$ independent of the frequency) in the presence of the Faraday rotation
effect. To explore the influence of radiation frequency, we first set the mean
magnetic field parallel and perpendicular to the LOS, respectively. Based on
the run3 listed in Table 1, we show the numerical results in Figure 4 for the
mean magnetic field parallel (left column) and perpendicular (right column) to
the LOS. As shown in the left column, the PS of $X$, $Y$, and $PI$ follow the
scaling law of $-8/3$ at scales $<L_{\rm tr}$ for simulations at high
frequency (about $\nu\geq 0.1\ \rm GHz$), while at low frequencies they
deviate downward (upward) from $-8/3$ in the small (large) wavenumber regions.
This is because, in the high-frequency range, the effect of Faraday rotation
depolarization on the PS is small. With decreasing frequency, strong Faraday
rotation depolarization weakens the correlation of the radiation signal:
noise-like structure gradually fills the entire synthetic map of the Stokes
parameters $Q$ and $U$, severely distorting the PS downward at large scales
(small wavenumbers), while the increase of noise at small scales drives the
upward deviation of the spectrum.
The right column of Figure 4 shows the results of the PS at different
frequencies, for which we stress that the mean magnetic field is in the
direction perpendicular to the LOS. As seen, the PS of the three synchrotron
diagnostics $X$, $Y$, and $PI$ satisfy the power law of $-8/3$ at the higher
frequencies (about $\nu\geq 0.1\ \rm GHz$), as in the left panels. In the
low-frequency regime, they show distributions similar to those of the left
column. In addition, the amplitudes of the PS of $X$, $Y$, and $PI$ change
significantly at the lower frequencies. In addition to the
case of $M_{\rm A}<1$ studied above, we also consider other possible scenarios
with $M_{\rm A}>1$ (the relevant results not shown in this paper). When the
frequency is higher than 100$~{}\rm MHz$, the PS of $X$, $Y$, and $PI$ can
reveal the spectral index of the underlying MHD turbulence.
As a result, in the case of moderate depolarization, $X$ and $Y$ have more
advantages than $PI$ for measuring the scaling slope of MHD turbulence. When
the mean magnetic field is parallel to the LOS, it is more helpful to use
these statistics to reveal the magnetic turbulence information in the case of
a lower frequency (down to about $\nu\simeq 0.1\ \rm GHz$ for our parameter
selections).
#### 4.2.2 Effect of noise and view angle
We here explore how the angle between the mean magnetic field and the LOS
affects the distribution of PS from relevant diagnostics in Figure 5, plotted
at the frequency of $0.14~{}\rm GHz$. As seen in this figure, the inertial
range and the amplitude decrease with decreasing angle for $X$ and $PI$,
causing the measured spectral index to deviate from $-8/3$ at large scales.
The reason is that with decreasing angle, the Faraday rotation measure
increasingly depolarizes the synchrotron polarization signal. For $Y$, only
the amplitude changes, not the inertial range.
Figure 4: Power spectra of synchrotron radiation diagnostics: $Q$-$U$ cross
($X$), cross-correlation ($Y$) and linear polarization $(PI)$ intensities at
different frequencies. Left column: the mean magnetic field is along the LOS.
Right column: the mean magnetic field is perpendicular to the LOS. The yellow
and green vertical dashed lines are plotted to represent the injection and
transition scales, respectively. Our calculations are based on the run3 listed
in Table 1. Figure 5: Power spectra of synchrotron radiation diagnostics:
$Q$-$U$ cross ($X$), cross-correlation ($Y$) and linear polarization $(PI)$
intensities at different angles between the mean magnetic field and the LOS at
the frequency $\nu$ = 0.14$~{}\rm GHz$. The yellow and green vertical dashed
lines are the injection and transition scales, respectively. Our calculations
are based on the run3 listed in Table 1. Figure 6: The influence of the noise
on the power spectra of synchrotron radiation diagnostics: $Q$-$U$ cross
($X$), cross-correlation ($Y$) and linear polarization $(PI)$ calculated at
the frequency $\nu$ = 0.14$\rm~{}GHz$. The symbol $\sigma$ indicates a
standard deviation of Gaussian noise and accounts for a fraction of the mean
synchrotron intensities. The yellow and green vertical dashed lines denote the
injection and transition scales, respectively.
Figure 6 explores the influence of noise on the scaling index of the PS of
$X$, $Y$, and $PI$ at the frequency of $0.14~{}\rm GHz$. To study the noise
effect, we generate a Gaussian noise map with a resolution of $512^{2}$ and
add it to the original image. Here, the standard deviation of the Gaussian
noise is set to a fraction of the mean synchrotron intensity. The figure
clearly shows that the PS of $X$, $Y$, and $PI$ with Gaussian noise added
deviate upward from those without noise in the large-$K$ regime. This is
because adding noise increases the random fluctuations of the original image,
which raises the PS in the small-scale region. In addition, we can see that at
the same noise level, the inertial range measured by $X$ is wider than those
of $PI$ and $Y$, and the higher the level of Gaussian noise, the more obvious
the deviation. Therefore, increasing the level of Gaussian noise narrows the
power-law inertial range.
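To illustrate the mechanism, the following minimal Python sketch (our
illustration, not the pipeline used for Figure 6) builds a mock $512^{2}$ map
whose ring-integrated spectrum follows $K^{-8/3}$, adds Gaussian noise with
standard deviation equal to a fraction of the mean intensity, and measures the
azimuthally integrated PS; the flat noise floor dominates at large $K$,
reproducing the upward deviation and the narrowed inertial range described
above.

```python
import numpy as np

def ring_ps(image):
    """Ring-integrated power spectrum P(K): total Fourier power per annulus."""
    n = image.shape[0]
    ps2d = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    ky, kx = np.indices((n, n)) - n // 2
    k = np.hypot(kx, ky).astype(int)
    return np.array([ps2d[k == ki].sum() for ki in range(1, n // 2)])

rng = np.random.default_rng(42)
n = 512
# Stand-in for a synchrotron map: a Gaussian random field whose ring-integrated
# spectrum follows K^{-8/3} (2D Fourier amplitudes ~ K^{-11/6}).
kx, ky = np.meshgrid(np.fft.fftfreq(n), np.fft.fftfreq(n))
kmag = np.hypot(kx, ky)
kmag[0, 0] = np.inf                 # suppress the mean (K = 0) mode
amp = kmag ** (-11.0 / 6.0)
field = np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal((n, n)))).real
field -= field.min()                # keep the mock intensities positive

for frac in (0.0, 0.1, 0.4):
    noisy = field + rng.normal(0.0, frac * field.mean(), size=(n, n))
    ps = ring_ps(noisy)
    # White noise is flat in K, so the spectrum turns upward at large K,
    # narrowing the measurable power-law (inertial) range.
    print(f"sigma = {frac:.1f}<I>: P(K=200)/P(K=10) = {ps[199] / ps[9]:.3g}")
```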
### 4.3 The influence of numerical resolution on the results
To test the influence of numerical resolution on the measurement of the
scaling slope of turbulence, we simulate a data cube with the lower numerical
resolution of $256^{3}$ in the same way as run1 listed in Table 1. The
difference in resolution results in slightly different Mach numbers: $M_{\rm
A}=0.70$ and $M_{\rm s}=0.74$ for $256^{3}$, and $M_{\rm A}=0.70$ and $M_{\rm
s}=0.87$ for $512^{3}$.
Figure 7: Structure functions of synchrotron radiation diagnostics: $Q$-$U$
cross ($X$), cross-correlation ($Y$), linear polarization $(PI)$ and total
($I$) intensities. The results are presented at two numerical resolutions,
$256^{3}$ (blue) and $512^{3}$ (red), using run1 listed in Table 1. The
vertical lines represent the injection and transition scales, respectively.
Still focusing on the SFs of $X$, $Y$, $PI$ and $I$, we provide numerical
results in Figure 7. From this figure, we find that: (1) the measurements from
the high resolution $512^{3}$ show better results, namely closer to
$R^{5/3}$, than those from the low resolution $256^{3}$, as expected; (2) for
data cubes with the same numerical resolution, the structure functions of $X$
and $Y$ are more advantageous than those of $PI$ and $I$ in measuring the
scaling index and inertial range of MHD turbulence.
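For reference, a minimal sketch of how a second-order SF can be measured from
a 2D map (our illustration; it uses axis-aligned lags only, whereas the SFs
above are computed from the simulated synchrotron maps):

```python
import numpy as np

def sf2_axis_lags(image, max_lag=64):
    """Second-order structure function SF2(R) = <|I(x+R) - I(x)|^2>,
    averaged over horizontal and vertical displacements of length R."""
    sf = np.empty(max_lag)
    for r in range(1, max_lag + 1):
        dx = image[:, r:] - image[:, :-r]
        dy = image[r:, :] - image[:-r, :]
        sf[r - 1] = 0.5 * (np.mean(dx ** 2) + np.mean(dy ** 2))
    return sf

# Quick check on a smooth ramp, for which SF2(R) = R^2 exactly;
# turbulence-like maps are shallower, approaching R^{5/3} in the inertial range.
img = np.add.outer(np.arange(128.0), np.arange(128.0))
print(sf2_axis_lags(img, max_lag=4))   # [1. 4. 9. 16.]
```

A linear fit of $\log\mathrm{SF}_{2}$ against $\log R$ inside the inertial
range then gives the scaling index to compare with $R^{5/3}$.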
In addition, we here explore the influence of numerical resolution on the PS
of the synchrotron radiation diagnostics using two data cubes with resolutions
of $256^{3}$ and $512^{3}$, whose parameters correspond to those of the data
cubes used in Figure 7. The numerical results are presented in Figure 8, from
which we see that the resolution $512^{3}$ better determines the spectral
index of $-8/3$ expected in the inertial range. Importantly, we find that the
measurements of the $Q$-$U$ cross $X$ and cross-correlation $Y$ extend the
width of the inertial range, compared with the traditional statistics of
linear polarization $PI$ and total $I$ intensities.

Comparing the upper and lower panels of Figure 8, we find that the
measurements of $PI$ and $I$ show dissipation at large wavenumbers, i.e.,
small scales, while $X$ and $Y$ show only weak dissipation and thus extend the
inertial range. Therefore, when recovering the scaling slope of MHD turbulence
from observational data, we recommend the $X$ and $Y$ statistics. However, we
should be particularly cautious that at the largest wavenumbers there is an
effect of numerical noise. In this regard, interested readers are referred to
Zhang et al. (2016) and Zhang et al. (2018), who found that noise causes the
spectrum to turn upward.
Figure 8: Power spectra of synchrotron radiation diagnostics: $Q$-$U$ cross
($X$), cross-correlation ($Y$), linear polarization $(PI)$ and total ($I$)
intensities. The vertical lines denote the injection and transition scales of
the underlying MHD turbulence. Numerical results are plotted at resolutions of
$256^{3}$ and $512^{3}$, using run1 listed in Table 1.
## 5 Statistical Results: Decomposition of Compressible MHD turbulence
With the procedures described in Section 3, we decompose the data cubes of
compressible MHD turbulence, namely run3 listed in Table 1, and then explore
the PS of $X$, $Y$, $PI$, and $I$ for the three modes based on the
post-decomposition data cubes. As shown in Figure 9, the PS of $X$, $Y$, $PI$,
and $I$ follow a power-law index of $-8/3$ for the Alfvén and slow modes, and
$-5/2$ for the fast mode. Meanwhile, it can be seen that the PS of the new
statistics $X$ and $Y$ have a more extended inertial range than those of the
traditional $PI$ and $I$ statistics. The properties of the PS obtained for
these plasma modes are consistent with the theoretical prediction of
compressible MHD turbulence described in Section 2.
Figure 9: Power spectra of synchrotron radiation diagnostics: $Q$-$U$ cross
($X$), cross-correlation ($Y$), linear polarization $(PI)$ and total ($I$)
intensities for Alfvén, slow and fast modes. The yellow and green vertical
dashed lines represent the injection and transition scales, respectively. The
decomposition of the data cubes is based on run3 listed in Table 1.
We further explore how the frequency influences the PS of the different
diagnostics arising from the three modes, as shown in Figure 10. Each row
corresponds to the PS of the same diagnostic for the Alfvén (left), slow
(middle), and fast (right) modes, while each column shows the PS of the
different diagnostics $X$ (upper), $Y$ (middle) and $PI$ (lower) for the same
mode. The PS of the three diagnostics ($X$, $Y$ and $PI$) follow the scaling
slope of $K^{-8/3}$ for the Alfvén (left column) and slow (middle column)
modes, and $K^{-5/2}$ for the fast mode (right column), at frequencies
$\nu>0.05$ GHz, while they deviate from the expected values ($-8/3$ or $-5/2$)
at lower frequencies, particularly at small wavenumbers. It can be seen that
the amplitudes of the PS of the three diagnostics go up with decreasing
frequency in the large-$K$ part. For the slow and fast modes, the physical
interpretation of the above phenomena is similar to that of Section 4.2.1. For
the Alfvén mode, however, the Stokes parameters of the pure Alfvén mode at
$\varphi=90^{\circ}$ project down faster than a random walk (Lazarian et al.
2022) in the case of low $M_{\rm A}$. Physically, this means that, without
Faraday rotation, the LOS projection of the Alfvén mode cancels to zero if the
integration length is large enough; Faraday rotation destroys this
cancellation and creates fluctuations that do not cancel. Significantly, we
find that the PS of the various diagnostics arising from the three modes
depend on the frequency. Note that the PS of the different statistics for the
slow mode have a slightly weaker dependence on the frequency than those of the
other two modes. The reason is that the slow mode has a small Faraday rotation
measure.
Our study based on simulation data confirms that the scaling slopes of
compressible plasma modes can be obtained from observational data. In
practice, using observational data to extract the properties of the three
plasma modes is a challenging subject. Since the scaling index of $-5/2$ for
the fast mode differs from the $-8/3$ of the Alfvén and slow modes, one could
first obtain the properties of the fast mode (Zhang et al. 2020a). The more
challenging problem, however, is how to effectively distinguish the Alfvén and
slow modes in observational data. This deserves more exploratory effort.
Figure 10: Power spectra of synchrotron radiation diagnostics at different
frequencies. In the order of the rows: $Q$-$U$ cross $X$ (first row), cross-
correlation $Y$ (second), and linear polarization $PI$ (third). In the order
of the columns: Alfvén (first column), slow (second), and fast (third) modes.
The yellow and green vertical dashed lines represent the injection and
transition scales, respectively. The decomposition of the data cubes is based
on run3 listed in Table 1.
## 6 Discussion
Synchrotron radiation is an important source of magnetic field information in
the interstellar medium. Statistics of the total intensity $I$ provide
information on the magnetic field in the plane of the sky. Compared with the
intensity $I$, the linear polarization intensity $PI$ reflects more
information about the magnetic field, i.e., not only in the plane of the sky
but also along the LOS. The disadvantage is that $PI$ only provides the total
polarization information and not the relative importance of $Q$ and $U$, i.e.,
the relative changes in the $Q$ and $U$ values. In this paper, we propose two
new diagnostics, $X$ and $Y$, together with the traditionally used $PI$ and
$I$, to explore the scaling properties of MHD turbulence by PS and SFs, and
find that the PS of $X$ and $Y$ have a larger measurable inertial range than
those of $PI$ and $I$. In fact, each technique has its advantages and
limitations when measuring turbulence properties. A synergy of various
techniques can provide more comprehensive turbulence information, enhancing
the reliability of the turbulence measurement.
Although the PS of synchrotron diagnostics cannot provide more information on
the spatial structure of MHD turbulence, it is an advantageous statistical
method for studying the source, sink and scaling slope of turbulence. As
studied in Section 4.2.1, we found that the PS has different amplitudes when
the LOS is perpendicular versus parallel to the mean magnetic field, so we
expect that it can also serve as an alternative tool for measuring the
magnetization.
For compressible MHD turbulence, one can also explore density fluctuations
within MHD turbulence by introducing Faraday rotation. However, the biggest
difficulty in measuring magnetic fields through Faraday rotation studies is
that there is currently no good way to decouple the vector (magnetic field)
and scalar (density) contributions. With a proper understanding of the
magnetic field through synergistically related techniques, one can gain
insight into the density information. Various MHD turbulence modes have
important effects on many astrophysical processes. Therefore, the study of
plasma modes is helpful for understanding the contribution of different modes
to these physical processes, such as the acceleration and diffusion of cosmic
rays (Zhang & Xiang 2021; Sampson et al. 2023).
This paper has focused on the scaling properties of magnetic fields via PS and
SF statistics. Notice that the latter can also be used to recover the
properties of magnetic field structures and eddies, as done in Wang et al.
(2020) and Zhang et al. (2020b). To understand other aspects of MHD
turbulence, many other synergistic techniques have been developed based on
synchrotron radiation. These include the kurtosis and skewness, which explore
the anisotropy of MHD turbulence (Herron et al. 2016) and constrain the sonic
Mach number (Burkhart et al. 2012); the quadrupole moment, which reveals the
anisotropy (Herron et al. 2016; Lee et al. 2019; Wang et al. 2022); and the
gradient statistics, which measure magnetic field directions (Lazarian et al.
2017; Lazarian & Yuen 2018; Zhang et al. 2019a; Zhang et al. 2019b, 2020b;
Wang et al. 2021; Liu et al. 2023) and magnetization (Carmo et al. 2020;
Lazarian et al. 2022). In addition, the PS of the tension force can diagnose
the spatial structure of magnetic structures (Schekochihin et al. 2004;
Waelkens et al. 2009; Sun et al. 2014).
For completeness, our work explored how to obtain the scaling index of
turbulence from synchrotron signals in the cases of both subsonic and
supersonic turbulence. Indeed, hot/warm ionized diffuse media with low $M_{\rm
s}$ can be probed by radio synchrotron emission (such as the Galactic ISM with
$M_{\rm s}\leq 2$; see Gaensler et al. 2011), while some environments still
have a large $M_{\rm s}$, such as the regions of active galactic nuclei and
supernova remnants interacting with the surrounding cold molecular clouds.
Therefore, the values of $M_{\rm s}$ we explored in this paper were not much
greater than 1. For regimes with $M_{\rm s}$ much larger than 1, one can use
alternative approaches, such as velocity channel analysis and the velocity
correlation spectrum (e.g., Lazarian & Pogosyan 2004; Yuen & Lazarian 2017;
Yang et al. 2021). In this work, we did not consider the effect of
self-absorption. This process becomes important when the magnetic field
interacts with relativistic electrons in low-frequency regimes. In the
presence of self-absorption, the PS of these statistics may vary not only in
the scaling index but also in the inertial range, which provides us with a new
research perspective for recovering the 3D magnetic field structure.
## 7 Summary
In this paper, we proposed two new synchrotron diagnostics, the cross
intensity $X$ and the cross-correlation intensity $Y$, to reveal the
properties of MHD turbulence. Using their PS and SFs together with the
traditional diagnostics $PI$ and $I$, we characterized the spectral properties
of the underlying compressible MHD turbulence. We focused on exploring how
Mach numbers, noise, Faraday depolarization, and numerical resolution affect
the spectral measurement of magnetic turbulence. The main results are
summarized as follows.
* •
The SFs of the statistics $X$, $Y$, $PI$, and $I$ can determine the scaling
slope of MHD turbulence in sub-Alfvénic regimes. Interestingly, the new
statistic $Y$ measures the scaling slope better than $X$, $PI$, and $I$ across
the different Alfvénic regimes.
* •
Noise does not impede the recovery of the scaling index of MHD turbulence, and
the inertial range of the PS measured by $X$ is wider than those of $PI$ and
$Y$ at the same noise level.
* •
In the case of moderate Faraday depolarization, the statistics $X$ and $Y$
still improve scaling-slope measurements by extending the inertial range. The
influence of numerical resolution does not change our conclusions.
* •
The change of the angle between the mean magnetic field and the LOS does not
affect the measurement of the scaling index, but it does change the inertial
range and amplitude.
* •
The synchrotron radiation diagnostics ($X$, $Y$ and $PI$) can measure the
spectral properties of the Alfvén, slow and fast modes.
## ACKNOWLEDGMENTS
We thank the anonymous referee for valuable comments that significantly
improved the quality of the paper. J.F.Z. acknowledges the support of the
National Natural Science Foundation of China (grant No. 11973035), the Hunan
Province Innovation Platform and Talent Plan-HuXiang Youth Talent Project (No.
2020RC3045), and the Hunan Natural Science Foundation for Distinguished Young
Scholars (No. 2023JJ10039). F.Y.X. acknowledges the support of the Joint
Research Funds in Astronomy (U2031114) under a cooperative agreement between
the National Natural Science Foundation of China and the Chinese Academy of
Sciences.
## DATA AVAILABILITY
The data underlying this paper can be shared on reasonable request to the
corresponding author.
## References
* Beresnyak (2014) Beresnyak A., 2014, ApJ, 784, L20
* Beresnyak (2017) Beresnyak A., 2017, ApJ, 834, 47
* Beresnyak (2019) Beresnyak A., 2019, Living Reviews in Computational Astrophysics, 5, 2
* Beresnyak & Lazarian (2010) Beresnyak A., Lazarian A., 2010, ApJ, 722, L110
* Beresnyak & Lazarian (2019) Beresnyak A., Lazarian A., 2019, Turbulence in Magnetohydrodynamics, Studies in Mathematical Physics, 12, De Gruyter
* Boldyrev (2006) Boldyrev S., 2006, Phys. Rev. Lett., 96, 115002
* Brunetti & Lazarian (2016) Brunetti G., Lazarian A., 2016, MNRAS, 458, 2584
* Burkhart et al. (2012) Burkhart B., Lazarian A., Gaensler B. M., 2012, ApJ, 749, 145
* Carmo et al. (2020) Carmo L., et al., 2020, ApJ, 905, 130
* Chandran et al. (2015) Chandran B. D. G., Schekochihin A. A., Mallet A., 2015, ApJ, 807, 39
* Cho & Lazarian (2002) Cho J., Lazarian A., 2002, Phys. Rev. Lett., 88, 245001
* Cho & Lazarian (2003) Cho J., Lazarian A., 2003, MNRAS, 345, 325
* Federrath (2013) Federrath C., 2013, MNRAS, 436, 1245
* Gaensler et al. (2011) Gaensler B. M., et al., 2011, Nature, 478, 214
* Galtier & Banerjee (2011) Galtier S., Banerjee S., 2011, Phys. Rev. Lett., 107, 134501
* Goldreich & Sridhar (1995) Goldreich P., Sridhar S., 1995, ApJ, 438, 763
* Herron et al. (2016) Herron C. A., Burkhart B., Lazarian A., Gaensler B. M., McClure-Griffiths N. M., 2016, ApJ, 822, 13
* Iacobelli et al. (2013) Iacobelli M., et al., 2013, A&A, 558, A72
* Iroshnikov (1963) Iroshnikov P. S., 1963, Azh, 40, 742
* Kadowaki et al. (2015) Kadowaki L. H. S., de Gouveia Dal Pino E. M., Singh C. B., 2015, ApJ, 802, 113
* Kolmogorov (1941) Kolmogorov A., 1941, Akademiia Nauk SSSR Doklady, 30, 301
* Kowal & Lazarian (2010) Kowal G., Lazarian A., 2010, ApJ, 720, 742
* Kraichnan (1965) Kraichnan R. H., 1965, Physics of Fluids, 8, 1385
* Lazarian (2006) Lazarian A., 2006, ApJ, 645, L25
* Lazarian & Pogosyan (2004) Lazarian A., Pogosyan D., 2004, ApJ, 616, 943
* Lazarian & Pogosyan (2012) Lazarian A., Pogosyan D., 2012, ApJ, 747, 5
* Lazarian & Pogosyan (2016) Lazarian A., Pogosyan D., 2016, ApJ, 818, 178
* Lazarian & Vishniac (1999) Lazarian A., Vishniac E. T., 1999, ApJ, 517, 700
* Lazarian & Yuen (2018) Lazarian A., Yuen K. H., 2018, ApJ, 865, 59
* Lazarian et al. (2003) Lazarian A., Petrosian V., Yan H., Cho J., 2003, arXiv e-prints, pp astro–ph/0301181
* Lazarian et al. (2017) Lazarian A., Yuen K. H., Lee H., Cho J., 2017, ApJ, 842, 30
* Lazarian et al. (2022) Lazarian A., Yuen K. H., Pogosyan D., 2022, ApJ, 935, 77
* Lee et al. (2016) Lee H., Lazarian A., Cho J., 2016, ApJ, 831, 77
* Lee et al. (2019) Lee H., Cho J., Lazarian A., 2019, ApJ, 877, 108
* Liu et al. (2023) Liu M., Hu Y., Lazarian A., Xu S., Soida M., 2023, MNRAS, 519, 1068
* Longair (2011) Longair M. S., 2011, High Energy Astrophysics. Cambridge University Press, Cambridge
* Mac Low & Klessen (2004) Mac Low M.-M., Klessen R. S., 2004, Reviews of Modern Physics, 76, 125
* Malik et al. (2023) Malik S., Yuen K. H., Yan H., 2023, arXiv e-prints, p. arXiv:2303.17282
* Mallet & Schekochihin (2017) Mallet A., Schekochihin A. A., 2017, MNRAS, 466, 3918
* Maron & Goldreich (2001) Maron J., Goldreich P., 2001, ApJ, 554, 1175
* Monin & Yaglom (1975) Monin A. S., Yaglom A. M., 1975, Statistical Fluid Mechanics: Mechanics of Turbulence, Vol. 2. The MIT Press, Cambridge, Massachusetts
* Montgomery & Turner (1981) Montgomery D., Turner L., 1981, Physics of Fluids, 24, 825
* Narayan & Medvedev (2001) Narayan R., Medvedev M. V., 2001, ApJ, 562, L129
* Rybicki & Lightman (1979) Rybicki G. B., Lightman A. P., 1979, Radiative processes in astrophysics. Wiley, New York
* Sampson et al. (2023) Sampson M. L., Beattie J. R., Krumholz M. R., Crocker R. M., Federrath C., Seta A., 2023, MNRAS, 519, 1503
* Schekochihin et al. (2004) Schekochihin A. A., Cowley S. C., Taylor S. F., Maron J. L., McWilliams J. C., 2004, ApJ, 612, 276
* Schnitzeler et al. (2007) Schnitzeler D. H. F. M., Katgert P., Haverkorn M., de Bruyn A. G., 2007, A&A, 461, 963
* Sun et al. (2014) Sun X. H., Gaensler B. M., Carretti E., Purcell C. R., Staveley-Smith L., Bernardi G., Haverkorn M., 2014, MNRAS, 437, 2936
* Sun et al. (2022) Sun X.-H., Gao X.-Y., Reich W., Jiang P., Li D., Yan H., Li X.-H., 2022, Research in Astronomy and Astrophysics, 22, 125011
* Van Eck et al. (2017) Van Eck C. L., et al., 2017, A&A, 597, A98
* Waelkens et al. (2009) Waelkens A. H., Schekochihin A. A., Enßlin T. A., 2009, MNRAS, 398, 1970
* Wang et al. (2020) Wang R.-Y., Zhang J.-F., Xiang F.-Y., 2020, ApJ, 890, 70
* Wang et al. (2021) Wang R.-Y., Zhang J.-F., Lazarian A., Xiao H.-P., Xiang F.-Y., 2021, MNRAS, 505, 6206
* Wang et al. (2022) Wang R.-Y., Zhang J.-F., Lazarian A., Xiao H.-P., Xiang F.-Y., 2022, ApJ, 940, 158
* West et al. (2020) West J. L., Henriksen R. N., Ferrière K., Woodfinden A., Jaffe T., Gaensler B. M., Irwin J. A., 2020, MNRAS, 499, 3673
* Yan & Lazarian (2002) Yan H., Lazarian A., 2002, Phys. Rev. Lett., 89, 281102
* Yan & Lazarian (2008) Yan H., Lazarian A., 2008, ApJ, 673, 942
* Yang et al. (2021) Yang B., Zhang J.-F., Lazarian A., de Medeiros J. R., 2021, MNRAS, 503, 768
* Yuen & Lazarian (2017) Yuen K. H., Lazarian A., 2017, ApJ, 837, L24
* Zhang & Xiang (2021) Zhang J.-F., Xiang F.-Y., 2021, ApJ, 922, 209
* Zhang & Yan (2011) Zhang B., Yan H., 2011, ApJ, 726, 90
* Zhang et al. (2016) Zhang J.-F., Lazarian A., Lee H., Cho J., 2016, ApJ, 825, 154
* Zhang et al. (2018) Zhang J.-F., Lazarian A., Xiang F.-Y., 2018, ApJ, 863, 197
* Zhang et al. (2019a) Zhang J.-F., Lazarian A., Ho K. W., Yuen K. H., Yang B., Hu Y., 2019a, MNRAS, 486, 4813
* Zhang et al. (2019b) Zhang J.-F., Liu Q., Lazarian A., 2019b, ApJ, 886, 63
* Zhang et al. (2020a) Zhang H., Chepurnov A., Yan H., Makwana K., Santos-Lima R., Appleby S., 2020a, Nature Astronomy, 4, 1001
* Zhang et al. (2020b) Zhang J.-F., Hu K., Cho J., Lazarian A., 2020b, ApJ, 895, 20
* Zhang et al. (2023) Zhang J.-F., Xu S., Lazarian A., Kowal G., 2023, Journal of High Energy Astrophysics, submitted
* de Gouveia dal Pino & Lazarian (2005) de Gouveia dal Pino E. M., Lazarian A., 2005, A&A, 441, 845
# Equality cases in the Anantharam–Jog–Nair inequality
Efe Aras, Thomas A. Courtade and Albert Zhang
University of California, Berkeley
###### Abstract
Anantharam, Jog and Nair recently unified the Shannon–Stam inequality and the
entropic form of the Brascamp–Lieb inequalities under a common inequality.
They left open the problems of extremizability and characterization of
extremizers. Both questions are resolved in the present paper.
## 1 Preliminaries
We begin by briefly fixing notation and definitions that will be needed
throughout. A Euclidean space $E$ is a finite-dimensional Hilbert space over
the real field, equipped with Lebesgue measure. For a probability measure
$\mu$ on $E$, absolutely continuous with respect to Lebesgue measure, and a
random vector $X\sim\mu$, we define the Shannon entropy
$h(X)\equiv h(\mu):=-\int_{E}\log\left(\frac{d\mu}{dx}\right)d\mu,$
provided the integral exists. If $\mu$ is not absolutely continuous with
respect to Lebesgue measure, we adopt the convention that $h(\mu):=-\infty$.
We let $\mathcal{P}(E)$ denote the set of probability measures on $E$ having
finite entropies and second moments. When there is no cause for ambiguity, we
adopt the usual notational convention where a random vector $X$ and its law
$\mu$ are used interchangeably. So, for example, writing $X\in\mathcal{P}(E)$
means that $X$ is a random vector taking values in $E$, having finite entropy
and finite second moments.
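For later reference, we record the standard Gaussian instance of these
definitions: if $X\sim N(m,\Sigma)$ on $E$ with $n=\dim(E)$ and $\Sigma$
nonsingular, then a direct computation gives
$h(X)=\frac{1}{2}\log\left((2\pi e)^{n}\det\Sigma\right),$
so Gaussian measures with nonsingular covariance belong to $\mathcal{P}(E)$;
this formula is what makes the Gaussian side of the inequalities below
explicitly computable.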
For $x,y\in E$, we denote the standard (Euclidean) inner product as $x^{T}y$,
and denote the Euclidean metric by $|\cdot|$ (i.e., $|x|:=\sqrt{x^{T}x}$). If
$A:E\to E^{\prime}$ is a linear map between Euclidean spaces $E,E^{\prime}$,
we let $A^{T}:E^{\prime}\to E$ denote its adjoint satisfying
$(Ax)^{T}y=x^{T}(A^{T}y),~{}~{}~{}\forall x\in E,y\in E^{\prime}.$
All of this notation is consistent with the representation of linear maps as
matrices. We let $\mathbf{S}(E)$ denote the set of symmetric linear maps from
$E$ to itself (i.e., $A\in\mathbf{S}(E)$ iff $A=A^{T}$), and
$\mathbf{S}^{+}(E)$ denote the subset of positive definite linear maps (i.e.,
$A\in\mathbf{S}^{+}(E)$ iff $A=A^{T}$ and $x^{T}Ax>0$ for all nonzero $x\in
E$).
For a random vector $X\sim\mu\in\mathcal{P}(E)$, its covariance is defined as
the (positive semidefinite) symmetric linear map
$\operatorname{Cov}(X)=\int_{E}(x-\mathbb{E}[X])(x-\mathbb{E}[X])^{T}d\mu(x)\in\mathbf{S}(E),$
where $\mathbb{E}$ denotes expectation (here, with respect to $\mu$). The
Gaussian distribution on $E$ with mean $m$ and covariance
$\Sigma\in\mathbf{S}^{+}(E)$ is denoted by $N(m,\Sigma)$. A Gaussian random
vector $X$ is said to be isotropic if it has covariance proportional to the
identity map. The standard Gaussian distribution on $E$ is denoted by
$\gamma_{E}$.
Of course, all Euclidean spaces $E,E^{\prime}$ of dimensions $m$ and $n$,
respectively, can always be identified as $\mathbb{R}^{m}$ and
$\mathbb{R}^{n}$, respectively. Moreover, any linear transformation $A:E\to
E^{\prime}$ can be expressed as a real $n\times m$ matrix. Our notation is
chosen to be compatible with this, but for various reasons it is notationally
more convenient to state things abstractly. For example, this avoids ambiguity
that can result from referring to two different Euclidean spaces of the same
dimension.
Throughout, we consider collections of Euclidean spaces $(E_{i})_{i=1}^{k}$,
$(E^{j})_{j=1}^{m}$, and corresponding sets of positive real numbers
$\mathbf{c}=(c_{i})_{i=1}^{k}$ and $\mathbf{d}=(d_{j})_{j=1}^{m}$. A datum is
a triplet $(\mathbf{c},\mathbf{d},\mathbf{B})$ where
$\mathbf{B}=(B_{j})_{j=1}^{m}$ is a collection of linear maps $B_{j}:E_{0}\to
E^{j}$, with common domain $E_{0}:=\oplus_{i=1}^{k}E_{i}$. Given the structure
of $E_{0}$, we let $\pi_{E_{i}}:E_{0}\to E_{i}$ denote the coordinate
projections. A vector $x\in E_{0}$ will frequently be written in its
coordinate representation $x=(x_{1},\dots,x_{k})$, where
$x_{i}=\pi_{E_{i}}(x)$, $1\leq i\leq k$. If $A_{i}:E_{i}\to E_{i}$, $1\leq
i\leq k$, are linear maps, then the direct sum of operators
$A=\oplus_{i=1}^{k}A_{i}$ is a linear map from $E_{0}$ to itself and, without
confusion, can be denoted as the block-diagonal operator
$A=\operatorname{diag}(A_{1},\dots,A_{k}).$
For a set $V$, we let $\operatorname{id}_{V}:V\to V$ denote the identity map
from $V$ to itself. So, as an example of the above, we have
$\operatorname{id}_{E_{0}}=\oplus_{i=1}^{k}\operatorname{id}_{E_{i}}\equiv\operatorname{diag}(\operatorname{id}_{E_{1}},\dots,\operatorname{id}_{E_{k}})$.
Again, this is all compatible with the representation of linear operators as
matrices.
We conclude this section by recording a few associated definitions for
convenience.
###### Definition 1.
A subspace $T\subset E_{0}$ is said to be product-form if it can be written as
$T=\oplus_{i=1}^{k}T_{i}$, where $T_{i}\subset E_{i}$ for $1\leq i\leq k$.
###### Definition 2.
A subspace $T\subset E_{0}$ is said to be critical for
$(\mathbf{c},\mathbf{d},\mathbf{B})$ if it is product-form, and
$\displaystyle\sum_{i=1}^{k}c_{i}\dim(\pi_{E_{i}}T)=\sum_{j=1}^{m}d_{j}\dim(B_{j}T).$
###### Definition 3.
Two data $(\mathbf{c},\mathbf{d},\mathbf{B})$ and
$(\mathbf{c^{\prime}},\mathbf{d^{\prime}},\mathbf{B^{\prime}})$ are said to be
equivalent if $\mathbf{c}=\mathbf{c^{\prime}}$,
$\mathbf{d}=\mathbf{d^{\prime}}$, and there exist invertible linear
transformations $A_{j}:E^{j}\to E^{j}$ and $C_{i}:E_{i}\to E_{i}$ such that
$\displaystyle B^{\prime}_{j}=A_{j}^{-1}B_{j}C^{-1}\quad\text{for each }1\leq j\leq m,$ (1)
where $C:=\operatorname{diag}(C_{1},\dots,C_{k})$.
We remark that, in the special case of $k=1$, the definitions of critical
subspaces and equivalent data coincide with those found in [3]. For general
$k$, all three definitions coincide with those in [8].
## 2 The Anantharam–Jog–Nair inequality
For a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$, Anantharam, Jog and Nair
(AJN) characterized the best (i.e., smallest) constant $C$ such that the
entropy inequality
$\displaystyle\sum_{i=1}^{k}c_{i}h(X_{i})\leq\sum_{j=1}^{m}d_{j}h(B_{j}X)+C$
(2)
holds for any choice of independent random vectors
$X_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$, with $X:=(X_{1},\dots,X_{k})$.
This inequality unifies the Shannon–Stam inequality [21, 22] and the entropic
formulation of the (Euclidean) Brascamp–Lieb inequalities [7, 5] under a
common framework. Extending the Gaussian saturation properties enjoyed by each
(see, e.g., [6] and [15]), Anantharam, Jog and Nair showed that the best
constant can be computed by considering only Gaussian $X_{i}$’s, and gave
necessary and sufficient conditions for finiteness. More precisely, their main
result is the following:
###### Theorem 4 (AJN inequality [1]).
Fix a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$. For any random vectors
$X_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$ and $X=(X_{1},\dots,X_{k})$,
$\displaystyle\sum_{i=1}^{k}c_{i}h(X_{i})-\sum_{j=1}^{m}d_{j}h(B_{j}X)\leq
C_{g}(\mathbf{c},\mathbf{d},\mathbf{B}),$ (3)
where $C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})$ is defined as the supremum of
the LHS over independent Gaussian vectors $(X_{i})_{i=1}^{k}$. Moreover, the
constant $C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})$ is finite if and only if
the following two conditions are satisfied.
1. (i)
Scaling condition: It holds that
$\displaystyle\sum_{i=1}^{k}c_{i}\dim(E_{i})=\sum_{j=1}^{m}d_{j}\dim(E^{j}).$
(4)
2. (ii)
Dimension condition: For all product-form subspaces $T\subset E_{0}$,
$\displaystyle\sum_{i=1}^{k}c_{i}\dim(\pi_{E_{i}}T)\leq\sum_{j=1}^{m}d_{j}\dim(B_{j}T).$
(5)
Anantharam, Jog and Nair left open the question of extremizability. That is,
when do there exist random vectors $(X_{i})_{i=1}^{k}$ such that (3) is met
with equality, and what form do any such extremizers take? The goal of this
paper is to answer both questions completely. The first question is addressed
in Section 3, and the second in Section 4.
The precise characterization of extremizers is somewhat complicated, but the
general idea is easily understood in the context of a toy example. For
$\lambda\in(0,1)$, the following holds: If $(X,Y)$ is independent of $Z$, and
$Y$ and $Z$ are of the same dimension, then
$\displaystyle\lambda h(X,Y)+(1-\lambda)h(Z)\leq\lambda
h(X)+h(\lambda^{1/2}Y+(1-\lambda)^{1/2}Z).$ (6)
This inequality is obtained by a concatenation of subadditivity of entropy and
the Shannon–Stam inequality. Restricting attention to cases where all
entropies are finite, we can use known equality cases for both to assert that
$(X,Y)$ and $Z$ are extremizers in (6) if and only if (i) $X$ and $Y$ are
independent; and (ii) $Y$ and $Z$ are Gaussian with identical covariances.
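Explicitly, the concatenation producing (6) is the following: subadditivity
gives $h(X,Y)\leq h(X)+h(Y)$, and the Shannon–Stam inequality applied to the
independent pair $(Y,Z)$ gives $\lambda h(Y)+(1-\lambda)h(Z)\leq
h(\lambda^{1/2}Y+(1-\lambda)^{1/2}Z)$; chaining the two yields
$\lambda h(X,Y)+(1-\lambda)h(Z)\leq\lambda h(X)+\lambda h(Y)+(1-\lambda)h(Z)\leq\lambda h(X)+h(\lambda^{1/2}Y+(1-\lambda)^{1/2}Z).$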
Roughly speaking, all extremizers of the AJN inequality (3) resemble the above
example. That is, extremizers are characterized by a rigid factorization into
independent components, where some components can have any distribution, and
the remaining are necessarily Gaussian with covariances that are typically
linked in some way.
Our approach leverages an assemblage of techniques developed by various
researchers. In particular, the question of extremizability is addressed by
identifying a suitable notion of “AJN-geometricity”, and showing that all
extremizable data are equivalent to AJN-geometric data. This parallels the
approach developed by Bennett, Carbery, Christ and Tao [3] for the functional
form of the Brascamp–Lieb inequalities, which by duality [7] can be realized
as an instance of (3). The Gaussian saturation property of AJN-geometric data
is established by a stochastic argument involving the Föllmer drift (see
Appendix A for definitions and properties), inspired by Lehec’s stochastic
proof of the Shannon–Stam inequality [14]. This stochastic proof lends itself
to identifying the structure of extremizers (when they exist), by combining
key ideas from Valdimarsson’s characterization of optimizers in the functional
Brascamp–Lieb inequalities [23] together with tools from Eldan and
Mikulincer’s work on the stability of the Shannon–Stam inequality [11].
## 3 Extremizability and Geometricity
We first address the question of when (3) is extremizable. To make things
precise, we say that a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ is
extremizable if $C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})$ is finite and there
exist independent $X_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$ such that (3)
is met with equality. We say that $(\mathbf{c},\mathbf{d},\mathbf{B})$ is
Gaussian-extremizable if $C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})$ is finite
and there exist independent Gaussian $(X_{i})_{i=1}^{k}$ meeting (3) with
equality.
In analogy to definitions made in the context of Brascamp–Lieb inequalities,
we define the class of AJN-geometric data below. Their significance to (3) is
the same as that of geometric data to inequalities of Brascamp–Lieb-type. In
particular, we will see that all (Gaussian-)extremizable instances of (3) are
equivalent to AJN-geometric data.
###### Definition 5 (AJN-Geometric datum).
A datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ is said to be AJN-geometric if
1. (i)
$B_{j}B_{j}^{T}=\operatorname{id}_{E^{j}}$ for each $1\leq j\leq m$; and
2. (ii)
we have the operator identity
$\displaystyle\sum_{j=1}^{m}d_{j}\pi_{E_{i}}B^{T}_{j}B_{j}\pi^{T}_{E_{i}}=c_{i}\operatorname{id}_{E_{i}},\quad\text{for each }1\leq i\leq k.$ (7)
###### Remark 6.
Conditions (i)-(ii) together imply the scaling condition (4). This can be seen
by taking traces in (7), summing over $i=1,\dots,k$, and using the linearity
and cyclic properties of the trace together with (i).
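Explicitly, since $\sum_{i=1}^{k}\pi_{E_{i}}^{T}\pi_{E_{i}}=\operatorname{id}_{E_{0}}$,
$\sum_{i=1}^{k}c_{i}\dim(E_{i})=\sum_{j=1}^{m}d_{j}\operatorname{Tr}\Big(B_{j}\Big(\sum_{i=1}^{k}\pi_{E_{i}}^{T}\pi_{E_{i}}\Big)B_{j}^{T}\Big)=\sum_{j=1}^{m}d_{j}\operatorname{Tr}(B_{j}B_{j}^{T})=\sum_{j=1}^{m}d_{j}\dim(E^{j}),$
where the last step uses (i).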
AJN-geometric data have the convenient property that
$C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})=0$, and they are extremizable by
standard Gaussians. We summarize as a formal proposition.
###### Proposition 7.
If $(\mathbf{c},\mathbf{d},\mathbf{B})$ is AJN-geometric, then
$C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})=0$ and $X\sim
N(0,\operatorname{id}_{E_{0}})$ achieves equality in (3).
###### Proof.
We’ll use the properties of the Föllmer drift summarized in Appendix A. Begin
by fixing centered $\mu_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$, and let
$(W_{t})_{t\geq 0}$ be a Brownian motion on $E_{0}$ with
$\operatorname{Cov}(W_{1})=\operatorname{id}_{E_{0}}$. By Theorem 28 and (54),
there is a drift $U_{t}=\int_{0}^{t}u_{s}ds$ such that $\mathbb{E}[u_{t}]=0$
and $(\pi_{E_{i}}(u_{t}))_{i=1}^{k}$ are independent for all $0\leq t\leq 1$,
$\displaystyle(W_{1}+U_{1})\sim\mu_{1}\otimes\cdots\otimes\mu_{k},$ (8)
and
$D(\mu_{i}\|\gamma_{E_{i}})=\frac{1}{2}\int_{0}^{1}\mathbb{E}|\pi_{E_{i}}(u_{s})|^{2}ds$
for each $1\leq i\leq k$. Therefore,
$\displaystyle\sum_{i=1}^{k}c_{i}D(\mu_{i}\|\gamma_{E_{i}})$
$\displaystyle=\frac{1}{2}\mathbb{E}\int_{0}^{1}\sum_{i=1}^{k}c_{i}|\pi_{E_{i}}(u_{s})|^{2}ds$
$\displaystyle=\frac{1}{2}\mathbb{E}\int_{0}^{1}\sum_{j=1}^{m}d_{j}|B_{j}{u}_{s}|^{2}ds$
(9)
$\displaystyle\geq\sum_{j=1}^{m}d_{j}D(B_{j}\sharp(\mu_{1}\otimes\cdots\otimes\mu_{k})\|\gamma_{E^{j}}),$
(10)
where (9) follows from (7) and the properties of $u_{t}$, and (10) follows
from (8) and Proposition 26 (with construction (52)) because
$B_{j}W_{1}\sim\gamma_{E^{j}}$, due to
$B_{j}B_{j}^{T}=\operatorname{id}_{E^{j}}$ by assumption. Now, expanding the
relative entropies in terms of Shannon entropies and second moments, the
second-moment terms cancel due to independence and (7), giving
$\displaystyle\sum_{i=1}^{k}c_{i}h(X_{i})\leq\sum_{j=1}^{m}d_{j}h(B_{j}X)$
(11)
for any $X_{i}\sim\mu_{i}\in\mathcal{P}(E_{i})$ and
$X\sim\otimes_{i=1}^{k}\mu_{i}$, where the centering assumption can be removed
due to translation invariance of Shannon entropy. The fact that
$X\sim\gamma_{E_{0}}$ is an extremizer follows immediately from the scaling
condition (4) (see Remark 6) and the observation that
$B_{j}X\sim\gamma_{E^{j}}$ (since $B_{j}B_{j}^{T}=\operatorname{id}_{E^{j}}$).
∎
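To make the second-moment cancellation at the end of the proof explicit: for
centered $\mu\in\mathcal{P}(E)$ with $n=\dim(E)$ and $X\sim\mu$,
$D(\mu\|\gamma_{E})=-h(\mu)+\frac{1}{2}\mathbb{E}|X|^{2}+\frac{n}{2}\log(2\pi),$
and, using independence (so that
$\operatorname{Cov}(X)=\operatorname{diag}(\operatorname{Cov}(X_{1}),\dots,\operatorname{Cov}(X_{k}))$)
together with (7),
$\sum_{j=1}^{m}d_{j}\mathbb{E}|B_{j}X|^{2}=\sum_{i=1}^{k}\operatorname{Tr}\Big(\operatorname{Cov}(X_{i})\sum_{j=1}^{m}d_{j}\pi_{E_{i}}B_{j}^{T}B_{j}\pi_{E_{i}}^{T}\Big)=\sum_{i=1}^{k}c_{i}\mathbb{E}|X_{i}|^{2};$
the remaining $\log(2\pi)$ terms cancel by the scaling condition (4), leaving
(11).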
###### Remark 8.
In the case where the datum is such that (3) coincides with the Shannon–Stam
inequality, the above proof reduces to that of Lehec [14]. The new idea is
identifying and incorporating the “correct” definition of AJN-geometricity.
When $k=1$, the AJN inequality (3) coincides with the entropic form of the
Brascamp–Lieb inequalities, and the definition of AJN-geometricity reduces to
the definition of geometricity for Brascamp–Lieb data found in [3].
AJN-geometric data have a relatively straightforward geometric interpretation.
In particular, first note that each $E_{i}$ has a natural isometric embedding
into $E_{0}$ via the inclusion $\pi^{T}_{E_{i}}:E_{i}\to E_{0}$. If
$(\mathbf{c},\mathbf{d},\mathbf{B})$ is AJN-geometric then
$B_{j}B_{j}^{T}=\operatorname{id}_{E^{j}}$, which means that each $E^{j}$ can
be isometrically embedded into $E_{0}$ by the map $B_{j}^{T}:E^{j}\to E_{0}$.
In this way, we can consider $(E_{i})_{i=1}^{k}$ and $(E^{j})_{j=1}^{m}$ to be
subspaces of $E_{0}$, and $\Pi_{E_{i}}:=\pi^{T}_{E_{i}}\pi_{E_{i}}$ and
$\Pi_{E^{j}}:=B_{j}^{T}B_{j}$ define the orthogonal projections of $E_{0}$
onto $E_{i}$ and $E^{j}$, respectively. Thus, the geometric instances of the
AJN inequality (3) can be restated in a way that dispenses with the specific
linear maps $\mathbf{B}$ as follows.
###### Corollary 9.
Let $E^{1},\dots,E^{m}$ be subspaces of $E_{0}=\oplus_{i=1}^{k}E_{i}$. If
$\mathbf{c}$ and $\mathbf{d}$ satisfy
$\displaystyle\sum_{j=1}^{m}d_{j}\Pi_{E_{i}}\Pi_{E^{j}}\Pi_{E_{i}}=c_{i}\Pi_{E_{i}},\quad\text{for each }1\leq i\leq k,$ (12)
then for any independent $X_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$, and
$X=(X_{1},\dots,X_{k})$,
$\displaystyle\sum_{i=1}^{k}c_{i}h(\Pi_{E_{i}}X)\leq\sum_{j=1}^{m}d_{j}h(\Pi_{E^{j}}X).$
(13)
Equality is achieved for $X\sim N(0,\operatorname{id}_{E_{0}})$.
###### Remark 10.
Entropies in (13) are computed with respect to Lebesgue measure on the
subspace being projected upon. In particular, we have
$h(\Pi_{E_{i}}X)=h(X_{i})$, but have chosen to write (13) in a way to
emphasize the symmetry of the inequality.
With the above definitions in hand, the following completely characterizes the
(Gaussian-)extremizable instances of Theorem 4. It is the main result of this
section, and specializes to the extremizability results in [3] for the
Brascamp–Lieb functional inequalities when $k=1$.
###### Theorem 11.
The following are equivalent:
1. (i)
$(\mathbf{c},\mathbf{d},\mathbf{B})$ is extremizable.
2. (ii)
$(\mathbf{c},\mathbf{d},\mathbf{B})$ is Gaussian-extremizable.
3. (iii)
There are $K_{i}\in\mathbf{S}^{+}(E_{i})$, $1\leq i\leq k$, satisfying
$\displaystyle\sum_{j=1}^{m}d_{j}\pi_{E_{i}}B^{T}_{j}(B_{j}KB_{j}^{T})^{-1}B_{j}\pi^{T}_{E_{i}}=c_{i}K_{i}^{-1},\quad 1\leq i\leq k,$ (14)
where $K:=\operatorname{diag}(K_{1},\dots,K_{k})$.
4. (iv)
$(\mathbf{c},\mathbf{d},\mathbf{B})$ is equivalent to an AJN-geometric datum.
###### Remark 12.
For $(K_{i})_{i=1}^{k}$ satisfying (14), the Gaussians $X_{i}\sim N(0,K_{i})$,
$1\leq i\leq k$ are extremal in (3). In fact, the proof of Theorem 11 will
show that if $X_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$ are extremal in
(3), then the covariances $K_{i}=\operatorname{Cov}(X_{i})$ necessarily
satisfy (14).
As a preliminary observation, we note that the extremizers in (3) are closed
under convolutions. This fact can be extracted from the doubling argument in
[1]; we state and prove it here for completeness.
###### Proposition 13.
Fix a datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ that is extremizable for the
AJN inequality (3). Let $X=(X_{1},\dots,X_{k})$ and $Y=(Y_{1},\dots,Y_{k})$
each satisfy (3) with equality. If $X,Y$ are independent, then
$X+Y=(X_{1}+Y_{1},\dots,X_{k}+Y_{k})$ also satisfies (3) with equality.
###### Proof.
Define $Z^{+}=(Z_{1}^{+},\dots,Z_{k}^{+})$ and
$Z^{-}=(Z_{1}^{-},\dots,Z_{k}^{-})$, where
$Z_{i}^{+}:=\frac{1}{\sqrt{2}}(X_{i}+Y_{i}),\quad Z_{i}^{-}:=\frac{1}{\sqrt{2}}(X_{i}-Y_{i}),\quad 1\leq i\leq k.$
Observe that
$\displaystyle\sum_{i=1}^{k}c_{i}(h(X_{i})+h(Y_{i}))$
$\displaystyle=\sum_{i=1}^{k}c_{i}h(X_{i},Y_{i})$ (15)
$\displaystyle=\sum_{i=1}^{k}c_{i}\left(h(Z_{i}^{+})+h(Z_{i}^{-}|Z_{i}^{+})\right)$
(16)
$\displaystyle\leq\sum_{j=1}^{m}d_{j}\left(h(B_{j}Z^{+})+h(B_{j}Z^{-}|Z^{+})\right)+2C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})$
(17)
$\displaystyle\leq\sum_{j=1}^{m}d_{j}\left(h(B_{j}Z^{+})+h(B_{j}Z^{-}|B_{j}Z^{+})\right)+2C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})$
(18)
$\displaystyle=\sum_{j=1}^{m}d_{j}\left(h(B_{j}X,B_{j}Y)\right)+2C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})$
(19)
$\displaystyle=\sum_{j=1}^{m}d_{j}(h(B_{j}X)+h(B_{j}Y))+2C_{g}(\mathbf{c},\mathbf{d},\mathbf{B}).$
(20)
In the above, (15) is due to independence; (16) follows due to orthogonality
of the transformation $(X_{i},Y_{i})\to(Z_{i}^{+},Z_{i}^{-})$ and the chain
rule; (17) is two applications of (3); (18) follows because conditioning
reduces entropy; (19) is due to the chain rule and orthogonality of the
transformation $(B_{j}Z^{+},B_{j}Z^{-})\to(B_{j}X,B_{j}Y)$; (20) is again due
to independence.
Since $X$ and $Y$ are extremal by assumption, we have equality throughout.
This implies $Z^{+}$ is also extremal, and hence we conclude $X+Y$ is extremal
by the scaling condition (4). ∎
###### Proof of Theorem 11.
$(i)\Rightarrow(ii)$: Let $X$ be an extremizer in (3), and put
$Z_{n}:=n^{-1/2}\sum_{\ell=1}^{n}X^{(\ell)}$, where $X^{(1)},X^{(2)},\dots$ are
i.i.d. copies of $X$, which we assume to be zero-mean without loss of
generality. By an application of Proposition 13 and the scaling condition (4)
(which holds by finiteness of $C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})$), we
have that $Z_{n}$ is an extremizer in (3) for all $n\geq 1$. By an application
of the entropic central limit theorem [2, 6], it follows that $Z\sim
N(0,\operatorname{Cov}(X))$ is also an extremizer in (3).
$(ii)\Rightarrow(i)$: This follows immediately from Theorem 4.
$(ii)\Rightarrow(iii)$: If $(\mathbf{c},\mathbf{d},\mathbf{B})$ is Gaussian-
extremizable, then there exist $K^{*}_{i}\in\mathbf{S}^{+}(E_{i})$, $1\leq
i\leq k$ which maximize
$(K_{i})_{i=1}^{k}\mapsto\sum_{i=1}^{k}c_{i}\log\det(K_{i})-\sum_{j=1}^{m}d_{j}\log\det(B_{j}KB_{j}^{T}),$
where $K:=\operatorname{diag}(K_{1},\dots,K_{k})$ (note this implies
$B_{j}K^{*}B_{j}^{T}$ is invertible for each $1\leq j\leq m$). This means, for
any index $i$ and any $A_{i}\in\mathbf{S}(E_{i})$, we can consider the
perturbation $K_{i}=K_{i}^{*}+\epsilon A_{i}$ for $\epsilon$ sufficiently
small, and the function value cannot increase. By first-order Taylor
expansion, this implies
$\displaystyle c_{i}\langle A_{i},(K^{*}_{i})^{-1}\rangle$
$\displaystyle=\sum_{j=1}^{m}d_{j}\langle
B_{j}\pi_{E_{i}}^{T}A_{i}\pi_{E_{i}}B_{j}^{T},(B_{j}K^{*}B_{j}^{T})^{-1}\rangle$
$\displaystyle=\Big{\langle}A_{i},\sum_{j=1}^{m}d_{j}\pi_{E_{i}}B^{T}_{j}(B_{j}K^{*}B_{j}^{T})^{-1}B_{j}\pi^{T}_{E_{i}}\Big{\rangle},$
where $\langle\cdot,\cdot\rangle$ is the Hilbert–Schmidt (trace) inner
product. By arbitrariness of $A_{i}$, we conclude (14).
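Here, the first-order expansion rests on the standard identity (recorded for
completeness)
$\frac{d}{d\epsilon}\log\det(K+\epsilon A)\Big|_{\epsilon=0}=\operatorname{Tr}(K^{-1}A)=\langle A,K^{-1}\rangle,\qquad K\in\mathbf{S}^{+}(E),~{}A\in\mathbf{S}(E),$
applied to $\epsilon\mapsto\log\det(K_{i}^{*}+\epsilon A_{i})$ on one side and
to $\epsilon\mapsto\log\det(B_{j}K^{*}B_{j}^{T}+\epsilon B_{j}\pi_{E_{i}}^{T}A_{i}\pi_{E_{i}}B_{j}^{T})$
on the other.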
$(iii)\Rightarrow(iv)$: Let $K$ be as in (14). The equivalent datum
$(\mathbf{c},\mathbf{d},\mathbf{B^{\prime}})$ defined by
$B_{j}^{\prime}=(B_{j}KB_{j}^{T})^{-1/2}B_{j}K^{1/2},~{}~{}1\leq j\leq m$
is AJN-geometric. Indeed, $B_{j}^{\prime}B_{j}^{\prime
T}=\operatorname{id}_{E^{j}}$ and (14) gives
$\displaystyle\sum_{j=1}^{m}d_{j}\pi_{E_{i}}B_{j}^{\prime
T}B_{j}^{\prime}\pi^{T}_{E_{i}}=\sum_{j=1}^{m}d_{j}K_{i}^{1/2}\pi_{E_{i}}B_{j}(B_{j}KB_{j}^{T})^{-1}B_{j}\pi^{T}_{E_{i}}K_{i}^{1/2}=c_{i}\operatorname{id}_{E_{i}}.$
$(iv)\Rightarrow(ii)$: Let $(\mathbf{c},\mathbf{d},\mathbf{B^{\prime}})$ be
the geometric datum equivalent to $(\mathbf{c},\mathbf{d},\mathbf{B})$. In the
notation of (1), for any $X_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$ and
$X=(X_{1},\dots,X_{k})$, we have by a change of variables
$\displaystyle\sum_{i=1}^{k}c_{i}h(X_{i})-\sum_{j=1}^{m}d_{j}h(B_{j}X)$
$\displaystyle=\sum_{i=1}^{k}c_{i}h(C_{i}X_{i})-\sum_{i=1}^{k}c_{i}\log\det(C_{i})-\sum_{j=1}^{m}d_{j}h(B_{j}^{\prime}CX)-\sum_{j=1}^{m}d_{j}\log\det(A_{j})$
$\displaystyle=\sum_{i=1}^{k}c_{i}h(Y_{i})-\sum_{j=1}^{m}d_{j}h(B_{j}^{\prime}Y)-\sum_{i=1}^{k}c_{i}\log\det(C_{i})-\sum_{j=1}^{m}d_{j}\log\det(A_{j}),$
where we have defined $Y_{i}:=C_{i}X_{i}$, and $Y=(Y_{1},\dots,Y_{k})$. Since
each $C_{i}$ is invertible, it is clear that $X$ is a (Gaussian-)extremizer
for $(\mathbf{c},\mathbf{d},\mathbf{B})$ if and only if $Y$ is a
(Gaussian-)extremizer for $(\mathbf{c},\mathbf{d},\mathbf{B^{\prime}})$. The
latter is Gaussian-extremizable by the assumption of geometricity and
Proposition 7, so the claim follows. ∎
###### Remark 14.
We remark that Theorem 4 can be derived as a limiting case of the forward-
reverse Brascamp–Lieb inequalities [16]; details can be found in [8, Section
4]. There is a counterpart notion of geometricity for the forward-reverse
Brascamp–Lieb inequalities, for which a result parallel to Theorem 11 holds.
However, the notion of “geometricity” in the context of [8] does not easily
pass through the aforementioned limit, so it seems the simplest proof of
Theorem 11 is a more direct one, as given here.
## 4 Characterization of extremizers
The goal of this section is to give a complete characterization of the
extremizers in (3). In view of Theorem 11, it suffices to consider geometric
instances of the AJN inequality; indeed, the extremizers of any other
extremizable instance of the AJN inequality will be linear transformations of
the extremizers for an equivalent AJN-geometric datum.
Toward this end, let $(\mathbf{c},\mathbf{d},\mathbf{B})$ be AJN-geometric,
and regard $(E_{i})_{i=1}^{k}$ and $(E^{j})_{j=1}^{m}$ as subspaces of
$E_{0}$, as in the discussion preceding Corollary 9. We now extend definitions
found in Valdimarsson [23] to the present setting. A nonzero subspace
$K\subset E_{0}$ is said to be independent if it can be written as
$K=E_{i}\cap\bigcap_{j=1}^{m}V_{j},$
for some $i\in\\{1,\dots,k\\}$, and each $V_{j}$ equal to $E^{j}$ or
${E^{j}}^{\perp}$ (the latter equal to the orthogonal complement of $E^{j}$ in
$E_{0}$). Each independent subspace is contained in some $E_{i}$, and distinct
independent subspaces are orthogonal by construction. So, if
$K^{i}_{1},\dots,K^{i}_{n_{i}}$ is an enumeration of independent subspaces of
$E_{i}$, then we can uniquely decompose
$\displaystyle E_{i}=K^{i}_{0}\oplus K^{i}_{1}\oplus\cdots\oplus
K^{i}_{n_{i}},$ (21)
where $K^{i}_{0}$ is defined to be the orthogonal complement of
$\oplus_{\ell=1}^{n_{i}}K^{i}_{\ell}$ in $E_{i}$. Now, we can uniquely define
the dependent subspace $K_{dep}$ as the product-form subspace
$\displaystyle K_{dep}:=\oplus_{i=1}^{k}K^{i}_{0}.$ (22)
###### Proposition 15.
If $K_{dep}$ is nonzero, there is an orthogonal decomposition
$\displaystyle K_{dep}=\oplus_{\ell=1}^{n}K^{\ell}_{dep},$ (23)
where each $K^{\ell}_{dep}$ is critical for the datum
$(\mathbf{c},\mathbf{d},\mathbf{B})$.
A decomposition of the form (23) is said to be a critical decomposition; we
remark that critical decompositions are not necessarily unique. Together with
Theorem 11, the following completely characterizes the extremizers in the AJN
inequality (3). In the statement, we let $\Pi_{V}:E_{0}\to E_{0}$ denote the
orthogonal projection onto the indicated subspace $V$.
###### Theorem 16.
Let $(\mathbf{c},\mathbf{d},\mathbf{B})$ be AJN-geometric, and decompose each
$E_{i}$ as in (21). Independent $X_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$
and $X=(X_{1},\dots,X_{k})$ satisfy (3) with equality iff
1. (i)
$\Pi_{K^{i}_{0}}(X),\dots,\Pi_{K^{i}_{n_{i}}}(X)$ are independent for each
$1\leq i\leq k$; and
2. (ii)
there is a critical decomposition $K_{dep}=\oplus_{\ell=1}^{n}K^{\ell}_{dep}$
such that $\Pi_{K^{1}_{dep}}(X)$, …, $\Pi_{K^{n}_{dep}}(X)$ are independent
isotropic Gaussians on their respective subspaces.
In words, (i) says that each random vector $X_{i}$ splits into independent
factors on the orthogonal decomposition of $E_{i}$ given by (21). Condition
(ii) tells us that the factor of $X$ supported on $K_{dep}$ is Gaussian with
$\operatorname{Cov}(\Pi_{K_{dep}}(X))=\sum_{\ell=1}^{n}\sigma_{\ell}^{2}\Pi_{K^{\ell}_{dep}}$,
for some critical decomposition (23) and choice of variances
$(\sigma_{\ell}^{2})_{\ell=1}^{n}$. In effect, this links the covariances of
the Gaussian factors of the $X_{i}$’s.
###### Remark 17.
In the case of $k=1$, the above characterization of extremizers is compatible
with that articulated by Valdimarsson for the functional Brascamp–Lieb
inequalities [23]. As noted in Remark 14, the AJN inequality is formally
implied by the Euclidean forward-reverse Brascamp–Lieb inequalities. A
characterization of extremizers for the latter remains unknown at the moment,
but will necessarily involve a new ingredient of log-concavity (since, e.g.,
the Prékopa–Leindler inequality is realized as a special case, and the
extremizers are log-concave [9]).
Before giving the proof, let us consider a few quick examples to demonstrate
the result.
###### Example 18.
Consider the Shannon–Stam inequality on $E_{1}=E_{2}=\mathbb{R}^{n}$ with
$\lambda\in(0,1)$, stated as
$\lambda h(X_{1})+(1-\lambda)h(X_{2})\leq
h(\lambda^{1/2}X_{1}+(1-\lambda)^{1/2}X_{2}),$
for independent $X_{1},X_{2}$ with finite entropies and second moments. There
are no independent subspaces, and every maximal critical decomposition of
$K_{dep}=E_{0}=\mathbb{R}^{n}\oplus\mathbb{R}^{n}$ can be written as
$\mathbb{R}^{n}\oplus\mathbb{R}^{n}=\bigoplus_{\ell=1}^{n}(\operatorname{span}\\{e_{\ell}\\}\oplus\operatorname{span}\\{e_{\ell}\\}),$
with $(e_{\ell})_{\ell=1}^{n}$ an orthonormal basis of $\mathbb{R}^{n}$. Thus,
(ii) is equivalent to the assertion that $X_{1}$ and $X_{2}$ must be Gaussian,
with identical covariances.
###### Example 19.
In the toy inequality (6), the subspace on which $X$ is supported is the only
independent subspace. So, if equality is achieved in (6), then condition (i)
of the theorem tells us that $X$ and $Y$ must be independent; and condition
(ii) implies that $Y$ and $Z$ are Gaussian with identical covariances, as in
the previous example.
###### Example 20.
The Zamir–Feder inequality [24] can be stated as follows (see, e.g., [18]). If
a matrix $B\in\mathbb{R}^{n\times k}$ satisfying
$BB^{T}=\operatorname{id}_{\mathbb{R}^{n}}$ has columns
$(b_{i})_{i=1}^{k}\subset\mathbb{R}^{n}$, then any random vector
$X=(X_{1},\dots,X_{k})\in\mathcal{P}(\mathbb{R}^{k})$ with independent
coordinates satisfies
$\displaystyle h(BX)\geq\sum_{i=1}^{k}|b_{i}|^{2}h(X_{i}).$ (24)
Observe that this is a geometric instance of the AJN inequality, with
$B_{1}=B$, $d_{1}=1$, and $c_{i}=|b_{i}|^{2}$. Letting $(e_{i})_{i=1}^{k}$
denote the natural basis for $\mathbb{R}^{k}$, it follows by definitions that
any independent subspace must be equal to $\operatorname{span}\\{e_{i}\\}$ for
some $1\leq i\leq k$, and $\operatorname{span}\\{e_{i}\\}$ is an independent
subspace iff $e_{i}\in\ker(B)\cup\ker(B)^{\perp}$. Hence, any
$X\in\mathcal{P}(\mathbb{R}^{k})$ with independent coordinates meeting (24)
with equality has the following form:
1. 1.
If $e_{i}\in\ker(B)\cup\ker(B)^{\perp}$, then $X_{i}$ can have any
distribution in $\mathcal{P}(\mathbb{R})$.
2. 2.
Otherwise, $X_{i}$ is Gaussian.
Observe that $e_{i}\in\ker(B)\Leftrightarrow b_{i}=0$; in this case,
coordinate $X_{i}$ is not present in (24). If $e_{i}\in\ker(B)^{\perp}$, then
$X_{i}$ is recoverable from $BX$ in the sense that there exists
$u\in\mathbb{R}^{n}$ such that $u^{T}BX=X_{i}$. Hence, we might say that the
extremizers in (24) are characterized by all present non-recoverable
components being Gaussian. This is precisely the statement given by Rioul and
Zamir in their recent work [19, Theorem 1], which gave the first
characterization of extremizers in the Zamir–Feder inequality.
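As a quick numerical sanity check of (24) (our illustration, with all variable
names ours, using the Gaussian entropy formula
$h(N(0,\Sigma))=\frac{1}{2}\log((2\pi e)^{d}\det\Sigma)$ for a $d$-dimensional
Gaussian):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 5

# Random B: R^k -> R^n with B B^T = id_{R^n}: orthonormalize the columns of a
# k x n Gaussian matrix and transpose the result.
Q, _ = np.linalg.qr(rng.standard_normal((k, n)))   # Q is k x n, Q^T Q = I_n
B = Q.T                                            # B is n x k, so B B^T = I_n

sigma2 = rng.uniform(0.5, 2.0, size=k)             # variances of independent X_i

def gaussian_entropy(cov):
    """Differential entropy (nats) of N(0, cov)."""
    d = cov.shape[0]
    return 0.5 * (d * np.log(2.0 * np.pi * np.e) + np.linalg.slogdet(cov)[1])

lhs = gaussian_entropy(B @ np.diag(sigma2) @ B.T)  # h(BX)
weights = (B ** 2).sum(axis=0)                     # |b_i|^2; these sum to n
rhs = sum(w * 0.5 * np.log(2.0 * np.pi * np.e * s) for w, s in zip(weights, sigma2))
print(f"h(BX) = {lhs:.4f} >= sum |b_i|^2 h(X_i) = {rhs:.4f}: {lhs >= rhs}")
```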
To give an application that yields a new result, consider the following
inequality proposed in [1]:
$\displaystyle c_{1}h(Z_{1},Z_{2})+c_{2}h(Y)\leq
h(Z_{1}+Y,Z_{2}+Y)+d_{2}h(Z_{1})+d_{3}h(Z_{2})+C_{g},$ (25)
where the $Z_{1},Z_{2},Y$ are random variables with $(Z_{1},Z_{2})$
independent of $Y$, and all coefficients are assumed to be strictly positive.
An immediate consequence of Theorem 4 is that the sharp constant $C_{g}$ can
be computed by considering only Gaussians, and conditions on the coefficients
$\mathbf{c},\mathbf{d}$ ensuring finiteness of $C_{g}$ can be deduced from (4)
and (5). Using Theorem 16, we can further conclude that if $\mathbf{c}$ and
$\mathbf{d}$ are such that (25) is extremizable, then it admits only Gaussian
extremizers.
To see that this is the case, let $(\mathbf{c},\mathbf{d},\mathbf{B})$ denote
the datum corresponding to (25). In matrix notation with respect to the
natural choice of basis, we have
$B_{1}=\begin{bmatrix}1&0&1\\ 0&1&1\end{bmatrix},\quad B_{2}=\begin{bmatrix}1&0&0\end{bmatrix},\quad B_{3}=\begin{bmatrix}0&1&0\end{bmatrix}.$
Assuming $(\mathbf{c},\mathbf{d},\mathbf{B})$ is extremizable, let $C$ and
$(A_{j})_{j=1}^{3}$ be the matrices in (1) that transform
$(\mathbf{c},\mathbf{d},\mathbf{B})$ to an AJN-geometric datum
$(\mathbf{c},\mathbf{d},\mathbf{B^{\prime}})$. By rescaling, we can assume
without loss of generality that $C=\operatorname{diag}(C_{1},1)$, where
$C_{1}$ is an invertible $2\times 2$ matrix. In order to show (25) admits only
Gaussian extremizers, we need to show that
$(\mathbf{c},\mathbf{d},\mathbf{B^{\prime}})$ admits no independent subspaces.
To do this, we will show the stronger claim that
$\bigcap_{j=1}^{3}V_{j}=\\{0\\}$
for any choice of $V_{j}$ equal to $E^{j}$ or ${E^{j}}^{\perp}$, where we
identify
$E^{j}=\operatorname{col}(C^{-T}B_{j}^{T}A_{j}^{-T})=\operatorname{col}(C^{-T}B_{j}^{T})$,
with $\operatorname{col}(\cdot)$ denoting the columnspace of its argument.
Explicitly, we have
$\displaystyle E^{1}=\operatorname{col}\left(\begin{bmatrix}C_{1}^{-T}\\ 1~{}~{}1\end{bmatrix}\right),\quad E^{2}=\operatorname{col}\left(\begin{bmatrix}C_{1}^{-T}\begin{bmatrix}1\\ 0\end{bmatrix}\\ 0\end{bmatrix}\right),\quad E^{3}=\operatorname{col}\left(\begin{bmatrix}C_{1}^{-T}\begin{bmatrix}0\\ 1\end{bmatrix}\\ 0\end{bmatrix}\right).$
Direct computation shows
$\displaystyle{E^{1}}^{\perp}=\operatorname{col}\left(\begin{bmatrix}C_{1}\begin{bmatrix}1\\ 1\end{bmatrix}\\ -1\end{bmatrix}\right),\quad{E^{2}}^{\perp}=\operatorname{col}\left(\begin{bmatrix}\begin{matrix}0\\ 0\end{matrix}&C_{1}\begin{bmatrix}0\\ 1\end{bmatrix}\\ 1&0\end{bmatrix}\right),\quad{E^{3}}^{\perp}=\operatorname{col}\left(\begin{bmatrix}\begin{matrix}0\\ 0\end{matrix}&C_{1}\begin{bmatrix}1\\ 0\end{bmatrix}\\ 1&0\end{bmatrix}\right).$
The problem now reduces to casework. By inspection, we have
${E^{1}}^{\perp}\cap E^{2}={E^{1}}^{\perp}\cap E^{3}=\\{0\\}$. Next, since
$C_{1}$ is invertible, we have $E^{2}\cap E^{3}=\\{0\\}$, and it similarly
follows that $E^{1}\cap E^{2}=E^{1}\cap
E^{3}={E^{1}}^{\perp}\cap{E^{2}}^{\perp}=\\{0\\}$. It only remains to show
that $E^{1}\cap{E^{2}}^{\perp}\cap{E^{3}}^{\perp}=\\{0\\}$. To this end,
invertibility of $C_{1}$ allows us to write
${E^{2}}^{\perp}\cap{E^{3}}^{\perp}=\operatorname{col}\left(\begin{bmatrix}0\\ 0\\ 1\end{bmatrix}\right).$
However, the only vector in $E^{1}$ that is zero in the first two components
is the all-zero vector (again, by invertibility of $C_{1}$), so it follows
that $E^{1}\cap{E^{2}}^{\perp}\cap{E^{3}}^{\perp}=\\{0\\}$, and we conclude
that the datum $(\mathbf{c},\mathbf{d},\mathbf{B^{\prime}})$ admits no
independent subspaces.
Although the above shows (25) can only admit Gaussian extremizers, it does not
tell us whether any exist, or their structure if they do. This is, however,
the content of Theorem 11. Namely, the covariances of Gaussian extremizers are
characterized completely by solutions $K$ to (14) for the datum
$(\mathbf{c},\mathbf{d},\mathbf{B})$; see Remark 12. This emphasizes the
complementary nature of Theorems 16 and 11.
### 4.1 Proof of Theorem 16
The remainder of this section is dedicated to the proof of Theorem 16. We
establish the assertion of sufficiency first, and necessity second. The
assumption that the datum $(\mathbf{c},\mathbf{d},\mathbf{B})$ is AJN-
geometric prevails throughout. Accordingly we will regard $E^{j}$ as a
subspace of $E_{0}$, with $\Pi_{E^{j}}=B_{j}^{T}B_{j}$ denoting the orthogonal
projection onto $E^{j}$.
###### Lemma 21.
Let the notation of (21) and (22) prevail. For each $1\leq j\leq m$, we have
the orthogonal decomposition
$\displaystyle
E^{j}=(\Pi_{E^{j}}K_{dep})\oplus\left(\bigoplus_{i=1}^{k}\bigoplus_{\begin{subarray}{c}1\leq\ell\leq
n_{i}:\\\ K^{i}_{\ell}\subset E^{j}\end{subarray}}K^{i}_{\ell}\right).$ (26)
Moreover, for any critical decomposition
$K_{dep}=\oplus_{\ell=1}^{n}K^{\ell}_{dep}$, we have the orthogonal
decomposition
$\displaystyle\Pi_{E^{j}}K_{dep}=\oplus_{\ell=1}^{n}\Pi_{E^{j}}K^{\ell}_{dep}.$
(27)
###### Proof of Proposition 15 and Lemma 21.
We first note that $\Pi_{E^{j}}K_{dep}$ is orthogonal to $\Pi_{E^{j}}K$, for
any independent subspace $K$. Indeed, by definition of an independent
subspace, we either have $\Pi_{E^{j}}K=\\{0\\}$ or $\Pi_{E^{j}}K=K$. The
former is trivially orthogonal to $\Pi_{E^{j}}K_{dep}$, and the latter is
orthogonal to $\Pi_{E^{j}}K_{dep}$ since $K_{dep}$ is orthogonal to $K$ by
definition and $\Pi_{E^{j}}$ is self-adjoint. Indeed,
$(\Pi_{E^{j}}x)^{T}y=x^{T}(\Pi_{E^{j}}y)=x^{T}y=0,~{}~{}\forall x\in K_{dep},\,y\in K.$
This establishes (26).
Now, using the decomposition (21), the scaling condition (4) (which holds by
AJN-geometricity), and (26), we have
$\displaystyle\sum_{i=1}^{k}c_{i}\sum_{\ell=0}^{n_{i}}\dim(K_{\ell}^{i})=\sum_{i=1}^{k}c_{i}\dim(E_{i})$
$\displaystyle=\sum_{j=1}^{m}d_{j}\dim(E^{j})$
$\displaystyle=\sum_{j=1}^{m}d_{j}\dim(\Pi_{E^{j}}K_{dep})+\sum_{i=1}^{k}\sum_{\ell=1}^{n_{i}}\sum_{j:K_{\ell}^{i}\subset E^{j}}d_{j}\dim(K^{i}_{\ell}).$
To summarize,
$\displaystyle\sum_{i=1}^{k}c_{i}\sum_{\ell=0}^{n_{i}}\dim(K_{\ell}^{i})$
$\displaystyle=\sum_{j=1}^{m}d_{j}\dim(\Pi_{E^{j}}K_{dep})+\sum_{i=1}^{k}\sum_{\ell=1}^{n_{i}}\sum_{j:K_{\ell}^{i}\subset E^{j}}d_{j}\dim(K^{i}_{\ell}).$ (28)
Since each independent subspace is of product form, the dimension condition
(5) implies, for each $1\leq i\leq k$ and $1\leq\ell\leq n_{i}$,
$\displaystyle c_{i}\dim(K_{\ell}^{i})$
$\displaystyle\leq\sum_{j:K_{\ell}^{i}\subset
E^{j}}^{m}d_{j}\dim(K^{i}_{\ell}).$ (29)
Likewise, since $K_{dep}=\oplus_{i=1}^{k}K_{0}^{i}$ is of product form, (5)
also implies
$\displaystyle\sum_{i=1}^{k}c_{i}\dim(K_{0}^{i})$
$\displaystyle\leq\sum_{j=1}^{m}d_{j}\dim(\Pi_{E^{j}}K_{dep}).$ (30)
Comparing against (28), we necessarily have equality in (29) and (30), which
proves that $K_{dep}$ is critical. Thus, there exists at least one critical
decomposition of $K_{dep}$ (the trivial one), and Proposition 15 follows.
It remains to show (27). By induction, it suffices to show that if $K\subset E_{0}$
is a critical subspace and $K=K_{1}\oplus K_{2}$ is a critical decomposition,
then $\Pi_{E^{j}}K_{1}$ and $\Pi_{E^{j}}K_{2}$ are orthogonal complements in
$\Pi_{E^{j}}K$. The proof is similar to that of [3, Lemma 7.12]. Letting
$\Pi_{K_{1}}:E_{0}\to E_{0}$ denote the orthogonal projection onto $K_{1}$, we
have that $\Pi_{E^{j}}\Pi_{K_{1}}$ is a contraction in $E_{0}$, so
$\operatorname{Tr}(\Pi_{E^{j}}\Pi_{K_{1}})\leq\dim(\Pi_{E^{j}}K_{1})$. Since
$K_{1}$ is critical, it is product-form by definition and therefore
$\Pi_{K_{1}}=\sum_{i=1}^{k}\Pi_{E_{i}}\Pi_{K_{1}}\Pi_{E_{i}}$. From (7), this
implies
$\displaystyle\sum_{i=1}^{k}c_{i}\dim(\Pi_{E_{i}}K_{1})$
$\displaystyle=\sum_{i=1}^{k}c_{i}\operatorname{Tr}(\Pi_{E_{i}}\Pi_{K_{1}})=\sum_{j=1}^{m}d_{j}\operatorname{Tr}(\Pi_{E^{j}}\Pi_{K_{1}})\leq\sum_{j=1}^{m}d_{j}\dim(\Pi_{E^{j}}K_{1}).$
Since $K_{1}$ is critical, we have equality throughout, implying
$\operatorname{Tr}(\Pi_{E^{j}}\Pi_{K_{1}})=\dim(\Pi_{E^{j}}K_{1})$ for each
$j$. From this, we can conclude that $\Pi_{K_{1}}\Pi_{E^{j}}$ is an isometry
from $\Pi_{E^{j}}K_{1}$ into $K_{1}$, and similarly $\Pi_{K_{2}}\Pi_{E^{j}}$
is an isometry from $\Pi_{E^{j}}K_{2}$ into $K_{2}$. Since $K_{1}$ and $K_{2}$
are orthogonal complements in $K$, it follows that $\Pi_{E^{j}}K_{1}$ and
$\Pi_{E^{j}}K_{2}$ are orthogonal complements in $\Pi_{E^{j}}K$. ∎
###### Sufficiency of conditions (i)-(ii) in Theorem 16.
Let $X_{i}\sim\mathcal{P}(E_{i})$, $1\leq i\leq k$ be independent and satisfy
(i)-(ii), and let $X=(X_{1},\dots,X_{k})$. By the orthogonal decomposition
(26) and the independence assumptions imposed by (i), we can decompose
$\displaystyle
h(B_{j}X)=h(B_{j}\Pi_{K_{dep}}(X))+\sum_{i=1}^{k}\sum_{\begin{subarray}{c}1\leq\ell\leq
n_{i}:\\\ K^{i}_{\ell}\subset
E^{j}\end{subarray}}h(\Pi_{K^{i}_{\ell}}(X_{i})),$ (31)
where all entropies are computed with respect to the subspace being projected
upon. In the proof of Lemma 21, we found (29) was met with equality. So,
whenever $E_{i}$ contains an independent subspace (i.e., $n_{i}\geq 1$), we
have
$c_{i}=\sum_{j:K_{\ell}^{i}\subset E^{j}}^{m}d_{j}.$
Now, using the decomposition (21) and the independence assumptions imposed by
(i), an application of the above identity followed by (31) reveals
$\displaystyle\sum_{i=1}^{k}c_{i}h(X_{i})$
$\displaystyle=\sum_{i=1}^{k}\sum_{\ell=0}^{n_{i}}c_{i}h(\Pi_{K_{\ell}^{i}}(X_{i}))$
$\displaystyle=\sum_{i=1}^{k}c_{i}h(\Pi_{K_{0}^{i}}(X_{i}))+\sum_{i=1}^{k}\sum_{\ell=1}^{n_{i}}\sum_{j:K_{\ell}^{i}\subset
E^{j}}^{m}d_{j}h(\Pi_{K_{\ell}^{i}}(X_{i}))$
$\displaystyle=\sum_{i=1}^{k}c_{i}h(\Pi_{K_{0}^{i}}(X_{i}))+\sum_{j=1}^{m}d_{j}\sum_{i=1}^{k}\sum_{\begin{subarray}{c}1\leq\ell\leq
n_{i}:\\\ K^{i}_{\ell}\subset
E^{j}\end{subarray}}h(\Pi_{K_{\ell}^{i}}(X_{i}))$
$\displaystyle=\sum_{i=1}^{k}c_{i}h(\Pi_{K_{0}^{i}}(X_{i}))+\sum_{j=1}^{m}d_{j}\left(h(B_{j}X)-h(B_{j}\Pi_{K_{dep}}(X))\right).$
In summary,
$\displaystyle\sum_{i=1}^{k}c_{i}h(X_{i})-\sum_{j=1}^{m}d_{j}h(B_{j}X)=\sum_{i=1}^{k}c_{i}h(\Pi_{K_{0}^{i}}(X_{i}))-\sum_{j=1}^{m}d_{j}h(B_{j}\Pi_{K_{dep}}(X)),$
(32)
where any entropies over the trivial subspace $\\{0\\}$ are to be neglected.
It remains to show the RHS is zero. By (ii) and translation invariance of
entropy, we can assume that $\Pi_{K^{\ell}_{dep}}(X)\sim
N(0,\sigma_{\ell}^{2}\operatorname{id}_{K^{\ell}_{dep}})$ for each
$1\leq\ell\leq n$. Using the independence assumption in (ii) and the
decomposition (27), we can express
$h(B_{j}\Pi_{K_{dep}}(X))=\sum_{\ell=1}^{n}\frac{\dim(B_{j}K_{dep}^{\ell})}{2}\log(2\pi
e\sigma_{\ell}^{2}).$
Since each $K_{dep}^{\ell}$ is critical by definition, we have
$\displaystyle\sum_{j=1}^{m}d_{j}h(B_{j}\Pi_{K_{dep}}(X))$
$\displaystyle=\sum_{\ell=1}^{n}\frac{1}{2}\log(2\pi
e\sigma_{\ell}^{2})\sum_{j=1}^{m}d_{j}\dim(B_{j}K_{dep}^{\ell})$
$\displaystyle=\sum_{\ell=1}^{n}\frac{1}{2}\log(2\pi
e\sigma_{\ell}^{2})\sum_{i=1}^{k}c_{i}\dim(\pi_{E_{i}}K_{dep}^{\ell})$
$\displaystyle=\sum_{i=1}^{k}c_{i}\sum_{\ell=1}^{n}\frac{\dim(\pi_{E_{i}}K_{dep}^{\ell})}{2}\log(2\pi
e\sigma_{\ell}^{2})$
$\displaystyle=\sum_{i=1}^{k}c_{i}h(\Pi_{K_{0}^{i}}(X_{i})),$
where we used the independence assumption in (ii) for the last line. Putting
everything together shows
$\sum_{i=1}^{k}c_{i}h(X_{i})=\sum_{j=1}^{m}d_{j}h(B_{j}X),$
so that (i) and (ii) are sufficient conditions for the $X_{i}$’s to be
extremal, since $C_{g}(\mathbf{c},\mathbf{d},\mathbf{B})=0$ by Proposition 7.
∎
As we turn our attention to the necessity part of Theorem 16, we record
several technical lemmas for convenience. We define $\mathbf{S}_{0}^{+}(E)$ to
be the closure of $\mathbf{S}^{+}(E)$ (i.e., the positive semidefinite
symmetric linear operators on $E$). For $A_{i}\in\mathbf{S}_{0}^{+}(E_{i})$,
$1\leq i\leq k$, we define the set
$\Pi(A_{1},\dots,A_{k})\subset\mathbf{S}_{0}^{+}(E_{0})$ to be the set of
symmetric positive semidefinite linear maps $A:E_{0}\to E_{0}$ satisfying
$\pi_{E_{i}}A\pi_{E_{i}}^{T}=A_{i},~{}~{}~{}1\leq i\leq k.$
In terms of matrices, this means $A\in\Pi(A_{1},\dots,A_{k})$ iff $A$ is a
positive semidefinite matrix with diagonal blocks $A_{1},\dots,A_{k}$.
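To make the definition concrete, here is a minimal numpy sketch (our illustration, not part of the paper) that builds the block-diagonal element of $\Pi(A_{1},\dots,A_{k})$ and checks the defining property; all helper names are ours.

```python
# Illustrative sketch: the block-diagonal completion always lies in
# Pi(A_1, ..., A_k). All helper names here are ours.
import numpy as np

def random_psd(n, rng):
    m = rng.standard_normal((n, n))
    return m @ m.T  # M M^T is symmetric positive semidefinite

rng = np.random.default_rng(0)
dims = [2, 3]                                   # dim(E_1), dim(E_2)
A1, A2 = (random_psd(n, rng) for n in dims)

A = np.block([
    [A1, np.zeros((dims[0], dims[1]))],
    [np.zeros((dims[1], dims[0])), A2],
])

# Membership check: A is PSD and its diagonal blocks are A_1, A_2.
assert np.linalg.eigvalsh(A).min() >= -1e-10
assert np.allclose(A[:2, :2], A1) and np.allclose(A[2:, 2:], A2)
```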
###### Lemma 22.
Let $(\mathbf{c},\mathbf{d},\mathbf{B})$ be AJN-geometric, and
$A_{i}\in\mathbf{S}_{0}^{+}(E_{i})$, $1\leq i\leq k$. For any
$A\in\Pi(A_{1},\dots,A_{k})$, we have
$\displaystyle\sum_{i=1}^{k}c_{i}\operatorname{Tr}\left((A_{i}-\operatorname{id}_{E_{i}})^{2}\right)\geq\sum_{j=1}^{m}d_{j}\operatorname{Tr}\left(((B_{j}A^{2}B_{j}^{T})^{1/2}-\operatorname{id}_{E^{j}})^{2}\right),$
(33)
with equality if and only if
$(\operatorname{id}_{E_{0}}-\Pi_{E^{j}})A\Pi_{E^{j}}=0$ for each $1\leq j\leq
m$.
###### Proof.
Using the block-diagonal structure of $A$ and the definition of AJN-
geometricity, we have
$\displaystyle\sum_{i=1}^{k}c_{i}\operatorname{Tr}\left((A_{i}-\operatorname{id}_{E_{i}})^{2}\right)$
$\displaystyle=\sum_{j=1}^{m}d_{j}\operatorname{Tr}(B_{j}(A-\operatorname{id}_{E_{0}})^{2}B_{j}^{T})$
$\displaystyle=\sum_{j=1}^{m}d_{j}\operatorname{Tr}(B_{j}A^{2}B_{j}^{T}-2B_{j}AB_{j}^{T}+\operatorname{id}_{E^{j}})$
$\displaystyle\geq\sum_{j=1}^{m}d_{j}\operatorname{Tr}(B_{j}A^{2}B_{j}^{T}-2(B_{j}A^{2}B_{j}^{T})^{1/2}+\operatorname{id}_{E^{j}})$
$\displaystyle=\sum_{j=1}^{m}d_{j}\operatorname{Tr}\left(((B_{j}A^{2}B_{j}^{T})^{1/2}-\operatorname{id}_{E^{j}})^{2}\right),$
where the inequality follows because square root is operator monotone. More
precisely, AJN-geometricity implies
$(B_{j}AB_{j}^{T})^{2}=B_{j}AB_{j}^{T}B_{j}AB_{j}^{T}\leq
B_{j}A^{2}B_{j}^{T},$
so that operator monotonicity of square root gives
$B_{j}AB_{j}^{T}\leq(B_{j}A^{2}B_{j}^{T})^{1/2}$. Equality in (33) is
therefore equivalent to equality above, which can be rewritten as
$B_{j}A(\operatorname{id}_{E_{0}}-B_{j}^{T}B_{j})AB_{j}^{T}=0~{}~{}\Leftrightarrow~{}~{}(\operatorname{id}_{E_{0}}-\Pi_{E^{j}})A\Pi_{E^{j}}=0.$
∎
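The key matrix step above, $B_{j}AB_{j}^{T}\leq(B_{j}A^{2}B_{j}^{T})^{1/2}$ whenever $B_{j}B_{j}^{T}=\operatorname{id}_{E^{j}}$, is easy to sanity-check numerically. The following sketch is purely illustrative:

```python
# Numerical sanity check (ours) of the key step: if B B^T = id and A is
# symmetric PSD, then B A B^T <= (B A^2 B^T)^{1/2} in the PSD order.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
n, p = 5, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
B = Q[:p, :]                       # orthonormal rows, so B B^T = id
M = rng.standard_normal((n, n))
A = M @ M.T                        # symmetric PSD

lhs = B @ A @ B.T
rhs = np.real(sqrtm(B @ A @ A @ B.T))
print(np.linalg.eigvalsh(rhs - lhs).min())   # >= 0 up to rounding error
```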
The following is due to [11]; we sketch the proof for completeness.
###### Lemma 23.
Fix a Euclidean space $E$. Consider a filtered probability space carrying an
$E$-valued Brownian motion $(W_{t})_{t\geq 0}$, and let $(F_{t})_{t\geq 0}$ be
an adapted process taking values in $\mathbf{S}^{+}(E)$. If
$\int_{0}^{1}F_{t}dW_{t}\sim\mu$, then
$D(\mu\|\gamma_{E})\leq\frac{1}{2}\int_{0}^{1}\frac{\mathbb{E}\operatorname{Tr}\left((F_{t}-\operatorname{id}_{E})^{2}\right)}{1-t}dt.$
###### Proof.
Define the drift
$u_{t}=\int_{0}^{t}\frac{F_{s}-\operatorname{id}_{E}}{1-s}dW_{s}.$
We claim that $W_{1}+\int_{0}^{1}u_{t}dt\sim\mu$. To see this, write
$\displaystyle\int_{0}^{1}F_{t}dW_{t}=\int_{0}^{1}\operatorname{id}_{E}dW_{t}+\int_{0}^{1}(F_{t}-\operatorname{id}_{E})dW_{t}$
$\displaystyle=W_{1}+\int_{0}^{1}\int_{t}^{1}\frac{F_{t}-\operatorname{id}_{E}}{1-t}\,ds\,dW_{t}$
$\displaystyle=W_{1}+\int_{0}^{1}u_{s}ds,$
where we used the stochastic Fubini theorem. Now, by Proposition 26 and the
data processing inequality, Itô’s isometry, and Fubini’s theorem, we have
$\displaystyle
D(\mu\|\gamma_{E})\leq\frac{1}{2}\int_{0}^{1}\mathbb{E}|u_{t}|^{2}dt$
$\displaystyle=\frac{1}{2}\int_{0}^{1}\int_{0}^{t}\frac{\mathbb{E}\operatorname{Tr}\left((F_{s}-\operatorname{id}_{E})^{2}\right)}{(1-s)^{2}}dsdt$
(34)
$\displaystyle=\frac{1}{2}\int_{0}^{1}\frac{\mathbb{E}\operatorname{Tr}\left((F_{s}-\operatorname{id}_{E})^{2}\right)}{1-s}ds.$
∎
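The stochastic Fubini step in the preceding proof can be checked pathwise in a discretized one-dimensional simulation. The following sketch (ours, with an arbitrary choice of adapted positive $F_{t}$) compares $\int_{0}^{1}F_{t}dW_{t}$ with $W_{1}+\int_{0}^{1}u_{t}dt$ along a single path:

```python
# Pathwise check (ours) of the stochastic Fubini identity in one dimension:
# with u_t = int_0^t (F_s - 1)/(1 - s) dW_s, one should have
# int_0^1 F_t dW_t = W_1 + int_0^1 u_t dt along each discretized path.
import numpy as np

rng = np.random.default_rng(2)
N = 200_000
dt = 1.0 / N
t = np.arange(N) * dt                     # left endpoints of the grid
dW = rng.standard_normal(N) * np.sqrt(dt)
W = np.concatenate(([0.0], np.cumsum(dW)))

F = 1.0 + 0.5 * np.sin(W[:-1])            # an adapted, positive process
lhs = np.sum(F * dW)                      # Ito integral (left-point rule)

u = np.concatenate(([0.0], np.cumsum((F - 1.0) / (1.0 - t) * dW)))[:-1]
rhs = W[-1] + np.sum(u * dt)
print(abs(lhs - rhs))                     # small; O(sqrt(dt)) discretization error
```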
###### Lemma 24.
Let $(P_{t})_{t\geq 0}$ be the heat semigroup, and let
$X\sim\mu\in\mathcal{P}(E)$ have density $d\mu=fd\gamma_{E}$. For each
$0<t<1$, there is a constant $C$ depending only on $t$ and the second moments
of $X$ such that
$|\nabla\log P_{1-t}f(x)|\leq C(|x|+1),~{}~{}~{}x\in E.$
If, moreover, $\mu$ is of the form $\mu=\nu*\gamma_{E}$, then
$\nabla^{2}\log(P_{1-t}f(x))\in\mathbf{S}_{0}^{+}(E),~{}~{}~{}x\in E,0<t<1.$
###### Proof.
Let $\rho$ denote the density of $X$ with respect to Lebesgue measure on $E$.
By direct calculation, we can reparametrize $P_{1-t}f$ in terms of $\rho$ as
$P_{1-t}f(x)=\left(\frac{2\pi}{t}\right)^{\dim(E)/2}e^{\frac{|x|^{2}}{2t}}P_{\frac{1-t}{t}}\rho(x/t).$
Hence,
$\displaystyle\nabla\log P_{1-t}f(x)=\frac{1}{t}x+\frac{1}{t}\nabla(\log
P_{\frac{1-t}{t}}\rho)(x/t).$ (35)
Regularity estimates for evolution of densities under $(P_{t})_{t\geq 0}$
imply
$|\nabla\log P_{s}\rho(x)|\leq c_{s}(|x|+1),~{}~{}~{}s>0$
for some finite constant $c_{s}$ depending only on $s$ and the second moments
of $\rho$ (see, e.g., [17, Proposition 2]). Hence, the first claim follows.
For the second claim, we have $\rho=P_{1}\rho_{0}$ for some density
$\rho_{0}$. Hence, by the semigroup property combined with (35), we have
$\nabla^{2}\log
P_{1-t}f(x)=\frac{1}{t}\operatorname{id}_{E}+\frac{1}{t^{2}}\nabla^{2}(\log
P_{\frac{1}{t}}\rho_{0})(x/t).$
By a simple convexity calculation [10, Lemma 1.3], it holds that
$\nabla^{2}(\log P_{s}g)\geq-\frac{1}{s}\operatorname{id}_{E}$ for any density
$g$ and $s>0$, so we find
$\nabla^{2}\log
P_{1-t}f(x)\geq\left(\frac{1}{t}-\frac{1}{t}\right)\operatorname{id}_{E}=0.$
∎
###### Necessity of conditions (i)-(ii) in Theorem 16.
Let $\mu_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$ satisfy
$\displaystyle\sum_{i=1}^{k}c_{i}D(\mu_{i}\|\gamma_{E_{i}})$
$\displaystyle=\sum_{j=1}^{m}d_{j}D(B_{j}\sharp(\mu_{1}\otimes\cdots\otimes\mu_{k})\|\gamma_{E^{j}})$
(36)
under the prevailing assumption of AJN-geometricity; this is the same as
equality in (11). Without loss of generality, we can assume each $\mu_{i}$ is
centered. Moreover, since the extremizers of the AJN inequality are closed
under convolutions (Proposition 13) and standard Gaussians are extremal in the
geometric AJN inequality (Proposition 7), we can assume without loss of
generality that each $\mu_{i}$ is of the form
$\displaystyle\mu_{i}=\widetilde{\mu}_{i}*\gamma_{E_{i}}$ (37)
for some extremal $\widetilde{\mu}_{i}\in\mathcal{P}(E_{i})$, $1\leq i\leq k$.
Indeed, $X\sim\otimes_{i=1}^{k}\mu_{i}$ satisfies (i)-(ii) if and only if
$X+Z$ satisfies (i)-(ii) for $Z\sim\gamma_{E_{0}}$, independent of $X$.
Necessity of condition (i): In the proof of Proposition 7, the sole inequality
is (10). Hence, properties of the drift $u_{t}$ warrant a closer inspection;
we follow the approach developed in [11]. Toward this end, let $f$ denote the
density of $\mu_{1}\otimes\cdots\otimes\mu_{k}$ with respect to
$\gamma_{E_{0}}$, and define the function
$u_{t}(x):=\nabla\log P_{1-t}f(x),~{}~{}~{}~{}x\in E_{0},~{}0\leq t\leq 1,$
where $(P_{t})_{t\geq 0}$ denotes the heat semigroup. Define the matrix-valued
function
$\displaystyle\Gamma_{t}(x):=(1-t)\nabla
u_{t}(x)+\operatorname{id}_{E_{0}},~{}~{}~{}~{}x\in E_{0},~{}0\leq t\leq 1,$
(38)
which, for each $0\leq t\leq 1$, takes the block-diagonal form
$\Gamma_{t}=\operatorname{diag}(\Gamma^{1}_{t},\dots,\Gamma^{k}_{t})$ with
$\Gamma_{t}^{i}\in\mathbf{S}^{+}(E_{i})$ due to the product form of the
density $f$ and Lemma 24 applied to (37).
Now, consider the Wiener space of continuous functions
$\mathbb{W}=\\{\omega:[0,1]\to E_{0};~{}\omega(0)=0\\}$, equipped with the
uniform norm, the Borel sets $\mathcal{B}$, and the Wiener measure $\gamma$.
Let $X_{t}(\omega)=\omega(t)$ be the coordinate process, and
$\mathcal{F}=(\mathcal{F}_{t})_{0\leq t\leq 1}$ be the natural filtration of
$(X_{t})_{0\leq t\leq 1}$. We’ll work on the filtered probability space
$(\mathbb{W},\mathcal{B},\nu,\mathcal{F})$, where $\nu$ is the Brownian bridge
$\frac{d\nu}{d\gamma}(\omega):=f(\omega(1)),~{}~{}~{}\omega\in\mathbb{W}.$
By the representation theorem for Brownian bridges, we have
$\displaystyle X_{t}\overset{\text{law}}{=}tX+\sqrt{t(1-t)}Z,$ (39)
where $X\sim\mu_{1}\otimes\cdots\otimes\mu_{k}$ and $Z\sim\gamma_{E_{0}}$ are
independent. Writing $u_{t}\equiv u_{t}(X_{t})$, the classical de Bruijn
identity, parametrized with respect to the bridge (39), gives
$\displaystyle
D(\mu_{i}\|\gamma_{E_{i}})=\frac{1}{2}\int_{0}^{1}\mathbb{E}|\pi_{E_{i}}(u_{t})|^{2}dt,~{}~{}~{}1\leq
i\leq k,$ (40)
where the expectation is with respect to $\nu$. Moreover, if we define the
$\mathcal{F}_{t}$-adapted process $(W_{t})_{0\leq t\leq 1}$ by the equation
$\displaystyle W_{t}:=X_{t}-\int_{0}^{t}u_{s}(X_{s})ds,~{}~{}~{}~{}0\leq t\leq
1,$ (41)
then $(W_{t})_{0\leq t\leq 1}$ is a Brownian motion by an application of
Girsanov’s theorem [14, 10]. Using the SDE (41) and the heat equation
$\partial_{t}P_{1-t}f(x)=-\frac{1}{2}\Delta P_{1-t}f(x),$
we can apply Itô’s formula to $u_{t}$ to reveal the relationship
$du_{t}=\nabla
u_{t}dW_{t}=\frac{\Gamma_{t}-\operatorname{id}_{E_{0}}}{1-t}dW_{t},$
with $\Gamma_{t}\equiv\Gamma_{t}(X_{t})$. Rearranging and integrating gives
$\displaystyle\int_{0}^{1}\Gamma_{t}dW_{t}=W_{1}+\int_{0}^{1}u_{t}dt\sim\mu_{1}\otimes\cdots\otimes\mu_{k}.$
(42)
In particular, equality in (40) together with the computation in (34) gives
the following representation for the entropies in terms of the
$\Gamma_{t}^{i}$ processes:
$\displaystyle D(\mu_{i}\|\gamma_{E_{i}})$
$\displaystyle=\frac{1}{2}\int_{0}^{1}\frac{\mathbb{E}\operatorname{Tr}\left((\Gamma^{i}_{t}-\operatorname{id}_{E_{i}})^{2}\right)}{1-t}dt,~{}~{}~{}1\leq
i\leq k.$ (43)
Next, positive-definiteness of $\Gamma_{t}$ and the assumption that
$B_{j}B_{j}^{T}=\operatorname{id}_{E^{j}}$ together justify the definition of
a new process $(\widetilde{W}^{j}_{t})_{0\leq t\leq 1}$ via
$d\widetilde{W}^{j}_{t}=(B_{j}\Gamma_{t}^{2}B_{j}^{T})^{-1/2}B_{j}\Gamma_{t}dW_{t},~{}~{}~{}1\leq
j\leq m.$
By Lévy’s characterization, this process is a Brownian motion, since it has
quadratic covariation
$[\widetilde{W}^{j}]_{t}=\int_{0}^{t}(B_{j}\Gamma_{s}^{2}B_{j}^{T})^{-1/2}B_{j}\Gamma_{s}^{2}B_{j}^{T}(B_{j}\Gamma_{s}^{2}B_{j}^{T})^{-1/2}ds=t\operatorname{id}_{E^{j}}.$
Putting things together, observe that definitions and (42) give
$\int_{0}^{1}(B_{j}\Gamma_{t}^{2}B_{j}^{T})^{1/2}d\widetilde{W}^{j}_{t}=B_{j}\int_{0}^{1}\Gamma_{t}dW_{t}\sim
B_{j}\sharp(\mu_{1}\otimes\cdots\otimes\mu_{k}).$
Thus, by (43) and an application of Lemmas 22 and 23, we have
$\displaystyle\sum_{i=1}^{k}c_{i}D(\mu_{i}\|\gamma_{E_{i}})$
$\displaystyle=\frac{1}{2}\int_{0}^{1}\frac{\sum_{i=1}^{k}c_{i}\mathbb{E}\operatorname{Tr}\left((\Gamma^{i}_{t}-\operatorname{id}_{E_{i}})^{2}\right)}{1-t}dt$
$\displaystyle\geq\frac{1}{2}\int_{0}^{1}\frac{\sum_{j=1}^{m}d_{j}\mathbb{E}\operatorname{Tr}\left(((B_{j}\Gamma_{t}^{2}B_{j}^{T})^{1/2}-\operatorname{id}_{E^{j}})^{2}\right)}{1-t}dt$
$\displaystyle\geq\sum_{j=1}^{m}d_{j}D(B_{j}\sharp(\mu_{1}\otimes\cdots\otimes\mu_{k})\|\gamma_{E^{j}}).$
We have equality throughout due to (36). Since $X_{t}$ has full support for
each $0<t\leq 1$ and $(t,x)\mapsto\Gamma_{t}(x)$ is smooth by the regularizing
properties of the heat semigroup, Lemma 22 and the above equality imply that
$\displaystyle(\operatorname{id}_{E_{0}}-\Pi_{E^{j}})\Gamma_{t}(x)\Pi_{E^{j}}=0,~{}~{}x\in
E_{0},~{}0<t<1,~{}1\leq j\leq m.$ (44)
By definition, this implies that, for each $t\in(0,1)$, we have
$\displaystyle(\operatorname{id}_{E_{0}}-\Pi_{E^{j}})\ \nabla^{2}\log
P_{1-t}f(x)\Pi_{E^{j}}=0,~{}~{}x\in E_{0},~{}1\leq j\leq m.$
Since $f$ is assumed regular by virtue of (37), the above also holds for $t=1$
by continuity of the derivatives of the heat semigroup. Since
$f=\prod_{i=1}^{k}f_{i}$ by definition, where each $f_{i}$ is a density on
$E_{i}$ with respect to $\gamma_{E_{i}}$, the above imposes a block-diagonal
structure on the Hessian of $\log f_{i}$, which can be summarized as
$D^{2}(\log f_{i})(x,y)=0,$
whenever $x,y$ are vectors from distinct spaces in the decomposition (21).
This implies, for each $1\leq i\leq k$, that the density $f_{i}$ has product
form
$\displaystyle f_{i}(x)=\prod_{\ell=0}^{n_{i}}\
f_{i,{\ell}}(\Pi_{K_{\ell}^{i}}(x)),~{}~{}x\in E_{i},$ (45)
establishing necessity of (i).
###### Remark 25.
The above proof can be viewed as a modification of Eldan and Mikulincer’s
argument for bounding the deficit in the Shannon–Stam inequality [11],
suited to the setting of the AJN inequality. The emergence of the factorization
(45) is new, and results from AJN-geometricity via the matrix inequality in
Lemma 22. Although Valdimarsson’s arguments in the context of the functional
Brascamp–Lieb inequalities are slightly different, the same basic
factorization emerges in [23, Lemma 13]. Hence, the above might be regarded as
a combination of ideas from both [11] and [23]. In the next step, the Fourier
analytic argument is effectively the same as that found in [23, Lemma 14],
with the drift $u_{t}$ playing the role of what Valdimarsson calls $\nabla\log
F$.
Necessity of condition (ii): Having established necessity of (i), the initial
calculations in the proof of sufficiency hold, leading to the conclusion (32).
The reduced datum $(\mathbf{c},\mathbf{d},\mathbf{B}_{K_{dep}})$ obtained by
restricting the maps in $\mathbf{B}$ to domain $K_{dep}$ remains AJN-
geometric, so without loss of generality, we can assume for simplicity that
there are no independent subspaces henceforth; i.e., $K_{dep}\equiv E_{0}$. As
in the previous step, we let $f$ denote the density of
$X\sim\mu_{1}\otimes\cdots\otimes\mu_{k}$ with respect to $\gamma_{E_{0}}$.
Letting definitions from the previous step prevail, Lemma 24 implies that
$u_{t}$ has linear growth in $x$ for each $0<t<1$. Hence, we are justified in
taking the Fourier transform, which we denote by $\hat{u}_{t}$. By (45),
$u_{t}$ is additively separable in the variables $\Pi_{E^{j}}x$ and
$(\operatorname{id}_{E_{0}}-\Pi_{E^{j}})x$, and therefore $\hat{u}_{t}$ is
supported on $H^{j}\cup(H^{j})^{\perp}$ for each $1\leq j\leq m$ (where
$H^{j}$ denotes the complex Hilbert space $E^{j}+\mathbf{i}E^{j}$). Similarly,
since $u_{t}$ is additively separable in the variables
$\pi_{E_{1}}(x),\dots,\pi_{E_{k}}(x)$, it follows that $\hat{u}_{t}$ is
supported on $\cup_{i=1}^{k}H_{i}$ (where, $H_{i}:=E_{i}+\mathbf{i}E_{i}$).
Taking intersections, we find $\hat{u}_{t}$ is supported on the set
$(H_{1}\cup\cdots\cup
H_{k})\cap\bigcap_{j=1}^{m}(H^{j}\cup(H^{j})^{\perp})=\\{0\\},$
where the equality follows by the assumption that there are no independent
subspaces. A tempered distribution with Fourier transform supported at the
origin is a polynomial [20, p. 194], so the linear growth estimate in Lemma 24
implies that $x\mapsto u_{t}(x)$ is affine for each $0<t<1$. As a consequence
of its defnition, $\Gamma_{t}$ is therefore deterministic for each $0<t<1$, in
the sense that $\Gamma_{t}(x)$ does not depend on $x$. Using the Itô isometry,
we conclude from the representation
$\int_{0}^{1}\Gamma_{t}dW_{t}\overset{\text{law}}{=}X$ that $X$ is Gaussian
with covariance
$\Sigma:=\operatorname{Cov}(X)=\int_{0}^{1}(\Gamma_{t})^{2}dt.$
Note that $\Sigma$ has block-diagonal form
$\displaystyle\Sigma=\operatorname{diag}(\Sigma_{1},\dots,\Sigma_{k}),~{}~{}~{}\Sigma_{i}\in\mathbf{S}_{0}^{+}(E_{i}),1\leq
i\leq k$ (46)
due to independence of the coordinates of $X$.
From (44), we have $\Pi_{E^{j}}\Sigma=\Pi_{E^{j}}\Sigma\Pi_{E^{j}}$ for each
$1\leq j\leq m$. This implies that if $v\in E_{0}$ is an eigenvector of
$\Sigma$ with eigenvalue $\lambda$, then $\Pi_{E^{j}}v$ is an eigenvector of
$\Pi_{E^{j}}\Sigma\Pi_{E^{j}}$ with eigenvalue $\lambda$. In particular, if we
consider the spectral decomposition
$\Sigma=\sigma_{1}^{2}\Pi_{K^{1}_{dep}}+\cdots+\sigma_{n}^{2}\Pi_{K^{n}_{dep}}$
with $\sigma^{2}_{1},\dots,\sigma^{2}_{n}$ distinct, then we have the
orthogonal decomposition
$\displaystyle B_{j}E_{0}=\oplus_{\ell=1}^{n}B_{j}K^{\ell}_{dep},\hskip
14.22636pt1\leq j\leq m,$ (47)
where we note each $K^{\ell}_{dep}$ is product-form due to (46). To see that
$E_{0}=\oplus_{\ell=1}^{n}K^{\ell}_{dep}$ is a critical decomposition, observe
that
$\displaystyle\sum_{i=1}^{k}c_{i}h(X_{i})$
$\displaystyle=\sum_{j=1}^{m}d_{j}h(B_{j}X)$ (48)
$\displaystyle=\sum_{\ell=1}^{n}\frac{1}{2}\log(2\pi
e\sigma_{\ell}^{2})\sum_{j=1}^{m}d_{j}\dim(B_{j}K_{dep}^{\ell})$ (49)
$\displaystyle\geq\sum_{\ell=1}^{n}\frac{1}{2}\log(2\pi
e\sigma_{\ell}^{2})\sum_{i=1}^{k}c_{i}\dim(\pi_{E_{i}}K_{dep}^{\ell})$ (50)
$\displaystyle=\sum_{i=1}^{k}c_{i}\sum_{\ell=1}^{n}\frac{\dim(\pi_{E_{i}}K_{dep}^{\ell})}{2}\log(2\pi
e\sigma_{\ell}^{2})=\sum_{i=1}^{k}c_{i}h(X_{i}),$ (51)
where (48) is the extremality assumption; (49) is due to (47) and the spectral
decomposition of $\Sigma$; (50) is the dimension condition (5); and (51)
follows due to the orthogonal decomposition
$E_{i}=\oplus_{\ell=1}^{n}\pi_{E_{i}}K_{dep}^{\ell}$ for each $1\leq i\leq k$,
because each $K_{dep}^{\ell}$ is of product-form. Since we have equality
throughout, this implies $K_{dep}\equiv
E_{0}=\oplus_{\ell=1}^{n}K_{dep}^{\ell}$ is a critical decomposition, as
desired. Since $K_{dep}^{1},\dots,K_{dep}^{n}$ are eigenspaces of $\Sigma$,
(ii) holds. ∎
### Acknowledgement
T.C. thanks Dan Mikulincer for his explanations of the properties of the
Föllmer drift and the martingale embedding used in [11]. This work was
supported in part by NSF grant CCF-1750430 (CAREER).
## References
* [1] V. Anantharam, V. Jog, and C. Nair. Unifying the Brascamp-Lieb inequality and the entropy power inequality. arXiv preprint arXiv:1901.06619, 2019.
* [2] A. R. Barron. Entropy and the central limit theorem. The Annals of probability (1986): 336-342.
* [3] J. Bennett, A. Carbery, M. Christ, and T. Tao. The Brascamp-Lieb inequalities: finiteness, structure and extremals. Geometric and Functional Analysis, 17(5):1343–1415, 2008.
* [4] H. J. Brascamp, E. H. Lieb, and J. M. Luttinger. A general rearrangement inequality for multiple integrals. Journal of functional analysis, 17(2):227–237, 1974.
* [5] H. J. Brascamp and E. H. Lieb. Best constants in Young’s inequality, its converse, and its generalization to more than three functions. Advances in Mathematics, 20(2):151–173, 1976.
* [6] E. Carlen and A. Soffer. Entropy production by block variable summation and central limit theorems. Commun. Math. Phys., vol. 140, no. 2, pp. 339–371, 1991.
* [7] E. A. Carlen and D. Cordero-Erausquin. Subadditivity of the entropy and its relation to Brascamp–Lieb type inequalities. Geometric and Functional Analysis, 19(2):373–405, 2009.
* [8] T. A. Courtade and J. Liu. Euclidean forward-reverse Brascamp–Lieb inequalities: Finiteness, structure, and extremals. The Journal of Geometric Analysis 31.4 (2021): 3300–3350.
* [9] S. Dubuc. Critères de convexité et inégalités intégrales. Ann. Inst. Fourier (Grenoble), 27(1):135–165, 1977.
* [10] R. Eldan and J. R. Lee. Regularization under diffusion and anti-concentration of temperature. arXiv preprint arXiv:1410.3887, 2014.
* [11] R. Eldan and D. Mikulincer. Stability of the Shannon–Stam inequality via the Föllmer process. Probability Theory and Related Fields 177.3 (2020): 891-922.
* [12] H. Föllmer. An entropy approach to the time reversal of diffusion processes, in _Stochastic differential systems (Marseille-Luminy, 1984)_. Lecture Notes in Control and Inform. Sci. 69, Springer (1985) 156–163.
* [13] H. Föllmer. Time reversal on Wiener space, in _Stochastic processes – mathematics and physics (Bielefeld, 1984)_. Lecture Notes in Math. 1158, Springer (1986) 119–129.
* [14] J. Lehec. Representation formula for the entropy and functional inequalities. In Annales de l’IHP Probabilités et statistiques, volume 49, pages 885–899, 2013.
* [15] E. H. Lieb. Gaussian kernels have only Gaussian maximizers. Inventiones mathematicae, 102(1):179–208, 1990.
* [16] J. Liu, T. A. Courtade, P. Cuff, and S. Verdú. A forward-reverse Brascamp-Lieb inequality: Entropic duality and Gaussian optimality. Entropy (special issue on information inequalities), 20(6):418, 2018.
* [17] Y. Polyanskiy and Y. Wu. Wasserstein continuity of entropy and outer bounds for interference channels. IEEE Trans. Inf. Theory, vol. 62, no. 7, pp. 3992–4002, 2016.
* [18] O. Rioul. Information theoretic proofs of entropy power inequalities. IEEE Transactions on Information Theory, 57(1):33–55, 2010.
* [19] O. Rioul and R. Zamir. Equality in the matrix entropy-power inequality and blind separation of real and complex sources. in Proc. of the IEEE International Symposium on Information Theory (ISIT). IEEE, 2019.
* [20] W. Rudin. Functional Analysis. Second edition. New York: McGraw-Hill, 1991. Print.
* [21] C. E. Shannon. A mathematical theory of communication. The Bell system technical journal 27.3: 379–423, 1948.
* [22] A. J. Stam. Some inequalities satisfied by the quantities of information of Fisher and Shannon. Information and Control 2.2: 101–112, 1959.
* [23] S. I. Valdimarsson. Optimisers for the Brascamp-Lieb inequality. Israel J. Math., 168:253–274, 2008.
* [24] R. Zamir and M. Feder. A generalization of the entropy power inequality with applications. IEEE transactions on Information Theory, 39.5: 1723–1728, 1993.
## Appendix A Föllmer’s drift
The material in this appendix can be found in [14], and interested readers are
referred there for more details. We summarize the required results for
completeness, since they play an important role in the proofs of Proposition 7
and Theorem 16.
For a Euclidean space $E$, let $\mathbb{W}$ denote the classical Wiener space
of continuous functions $C^{0}([0,1],E):=\\{\omega:[0,1]\to E;\omega(0)=0\\}$
equipped with the topology of uniform convergence, and the Borel
$\sigma$-algebra $\mathcal{B}$. Let $\gamma$ denote the Wiener measure on
$(\mathbb{W},\mathcal{B})$. Let $X_{t}:\omega\mapsto\omega(t)$ be the
coordinate process, and $\mathcal{G}=(\mathcal{G}_{t})_{0\leq t\leq 1}$ be the
natural filtration of $X=(X_{t})_{0\leq t\leq 1}$. It is a fact that
$\mathcal{B}$ is the $\sigma$-algebra generated by $\mathcal{G}$.
Given a filtered probability space
$(\Omega,\mathcal{A},\mathbb{P},\mathcal{F})$, where $\mathcal{A}$ is the
Borel $\sigma$-algebra of a Polish topology on $\Omega$, a drift is any
adapted process $U:[0,1]\to E$ such that there exists $u\in L^{1}([0,1];E)$
satisfying
$U_{t}=\int_{0}^{t}u_{s}ds,~{}~{}0\leq t\leq 1$
and $\int_{0}^{1}|u_{s}|^{2}ds<\infty$ almost surely. By definition and
Cauchy–Schwarz, any drift $U$ belongs to $\mathbb{W}$ almost surely.
A process $B=(B_{t})_{t\geq 0}$ taking values in $E$ is said to be a standard
Brownian motion if it is a Brownian motion with $B_{0}=0$ and
$\operatorname{Cov}(B_{1})=\operatorname{id}_{E}$. The following is a
consequence of Girsanov’s theorem; it can be found in [14, Proposition 1].
###### Proposition 26.
Let a standard Brownian motion $B$, taking values in $E$, be defined on a
filtered probability space $(\Omega,\mathcal{A},\mathbb{P},\mathcal{F})$, and
let $U_{t}=\int_{0}^{t}u_{s}ds$ be a drift. If $\nu$ is the law of the process
$(B_{t}+U_{t})_{0\leq t\leq 1}$, then
$D(\nu\|\gamma)\leq\frac{1}{2}\int_{0}^{1}\mathbb{E}|u_{s}|^{2}ds.$
It turns out that the upper bound given on the relative entropy above can be
met with equality. The result is due to Föllmer [12, 13]; the statement given
can be found in [14, Theorem 2].
###### Proposition 27 (Föllmer’s drift).
Let $\nu\ll\gamma$ be a probability measure on $(\mathbb{W},\mathcal{B})$ with
$D(\nu\|\gamma)<\infty$. There exists an adapted process $u$ such that, under
$\nu$, the following holds:
1. The process $U_{t}=\int_{0}^{t}u_{s}ds$ is a drift.
2. The process $B_{t}=X_{t}-U_{t}$ is a standard Brownian motion.
3. We have $D(\nu\|\gamma)=\frac{1}{2}\int_{0}^{1}\mathbb{E}_{\nu}|u_{s}|^{2}ds$.
Let $\mu\in\mathcal{P}(E)$ have density $d\mu=fd\gamma_{E}$. By defining the
Brownian bridge $\nu$ on $(\mathbb{W},\mathcal{B})$ via
$\displaystyle\frac{d\nu}{d\gamma}(\omega)=f(\omega(1)),\hskip
14.22636pt\omega\in\mathbb{W},$ (52)
we have $D(\mu\|\gamma_{E})=D(\nu\|\gamma)$, which follows by data processing
and the observation that $X_{1}\sim\gamma_{E}$ under $\gamma$. This gives the
following convenient representation for the entropy. For $\mu\ll\gamma_{E}$
with $D(\mu\|\gamma_{E})<\infty$, let $\nu$ be the bridge in (52). On the
filtered probability space $(\mathbb{W},\mathcal{B},\nu,\mathcal{G})$, we have
$\displaystyle
D(\mu\|\gamma_{E})=\min_{U}\frac{1}{2}\int_{0}^{1}\mathbb{E}|u_{s}|^{2}ds,$
(53)
where the minimum is over all drifts $U_{t}=\int_{0}^{t}u_{s}ds$ such that
$\mu\sim B_{1}+U_{1}$ for a standard Brownian motion $B$ carried by
$(\mathbb{W},\mathcal{B},\nu,\mathcal{G})$. Moreover, since the process
$(X_{t})_{0\leq t\leq 1}$ under $\nu$ is the Brownian bridge
$X_{t}\sim tX_{1}+\sqrt{t(1-t)}Z,$
with $Z\sim\gamma_{E}$ independent of $X_{1}\sim\mu$, we can take expectations
in Proposition 27(ii) to find, with the help of Fubini’s theorem, that the
minimum-energy process $(u_{t})_{0\leq t\leq 1}$ in (53) satisfies
$\displaystyle\mathbb{E}[u_{t}]=\int_{E}xd\mu(x),\hskip
14.22636pt\mbox{a.e.~{}}0\leq t\leq 1.$ (54)
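As a concrete illustration of Proposition 27 and (53), consider the one-dimensional Gaussian target $\mu=N(0,\sigma^{2})$. In this case the Föllmer drift admits the closed form $u_{t}(x)=(\sigma^{2}-1)x/(1+t(\sigma^{2}-1))$; this formula is our own computation and should be treated as an assumption of the sketch below, which simulates the SDE $dX_{t}=u_{t}(X_{t})dt+dW_{t}$ and compares the drift energy with $D(\mu\|\gamma_{E})=\frac{1}{2}(\sigma^{2}-1-\log\sigma^{2})$.

```python
# Monte Carlo sketch (ours) for the Gaussian target mu = N(0, sigma^2):
# simulate the Foellmer SDE and compare the drift energy against
# D(mu || gamma) = (sigma^2 - 1 - log sigma^2) / 2.
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 4.0
a = sigma2 - 1.0
paths, N = 100_000, 1_000
dt = 1.0 / N

x = np.zeros(paths)
energy = 0.0
for i in range(N):
    t = i * dt
    u = a * x / (1.0 + t * a)             # assumed closed-form drift
    energy += 0.5 * np.mean(u ** 2) * dt  # accumulates (1/2) int E|u_t|^2 dt
    x += u * dt + np.sqrt(dt) * rng.standard_normal(paths)

print(np.var(x), sigma2)                              # X_1 is ~ N(0, sigma^2)
print(energy, 0.5 * (sigma2 - 1.0 - np.log(sigma2)))  # both ~ 0.807
```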
We now record a simple application of the above, which will suit our needs.
###### Theorem 28.
Fix probability measures $\mu_{i}\ll\gamma_{E_{i}}$ on $E_{i}$ satisfying
$D(\mu_{i}\|\gamma_{E_{i}})<\infty$ for each $1\leq i\leq k$. There is a
filtered probability space $(\Omega,\mathcal{A},\mathbb{P},\mathcal{F})$
carrying a Brownian motion $B$ with
$\operatorname{Cov}(B_{1})=\operatorname{id}_{E_{0}}$ and a drift
$U_{t}=\int_{0}^{t}u_{s}ds$, $u\in L^{1}([0,1];E_{0})$ such that, for each
$1\leq i\leq k$,
1. 1.
$\mu_{i}\sim\pi_{E_{i}}(B_{1}+U_{1})$.
2. 2.
$D(\mu_{i}\|\gamma_{E_{i}})=\frac{1}{2}\int_{0}^{1}\mathbb{E}|\pi_{E_{i}}(u_{s})|^{2}ds$.
Moreover, the processes
$(B^{i},u^{i})=\big{(}\pi_{E_{i}}(B_{t}),\pi_{E_{i}}(u_{t})\big{)}_{0\leq
t\leq 1},~{}~{}1\leq i\leq k$
are independent.
###### Proof.
For each $1\leq i\leq k$, let $\mathbb{W}_{i}=C^{0}([0,1];E_{i})$,
$\mathcal{G}_{i}$ be its natural filtration, $\mathcal{B}_{i}$ be the
corresponding Borel $\sigma$-algebra, and $\gamma_{i}$ the Wiener measure.
Define measure $\nu_{i}\ll\gamma_{i}$ on $(\mathbb{W}_{i},\mathcal{B}_{i})$ by
$\frac{d\nu_{i}}{d\gamma_{i}}(\omega)=\frac{d\mu_{i}}{d\gamma_{E_{i}}}(\omega(1)),\hskip
14.22636pt\omega\in\mathbb{W}_{i}.$
By Proposition 27 and the subsequent discussion, there exists a drift
$U_{t}^{i}=\int_{0}^{t}u^{i}_{s}ds$ and a standard Brownian motion $B^{i}$,
both carried on $(\mathbb{W}_{i},\mathcal{B}_{i},\nu_{i},\mathcal{G}_{i})$,
such that $\mu_{i}\sim B^{i}_{1}+U^{i}_{1}$ and
$D(\mu_{i}\|\gamma_{E_{i}})=D(\nu_{i}\|\gamma_{i})=\frac{1}{2}\int_{0}^{1}\mathbb{E}|u^{i}_{s}|^{2}ds,\hskip
14.22636pt1\leq i\leq k.$
Now, put everything together on the product space
$\Omega=\prod_{i=1}^{k}(\mathbb{W}_{i}\times\mathbb{W}_{i})$ equipped with its
natural filtration, the Borel sets, and the product measure
$\mathbb{P}=\otimes_{i=1}^{k}P_{B^{i}U^{i}}$. ∎
# ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image
Collections
Chun-Han Yao1* Amit Raj2 Wei-Chih Hung3 Yuanzhen Li2 Michael Rubinstein2
Ming-Hsuan Yang124 Varun Jampani2
1UC Merced 2Google Research 3Waymo 4Yonsei University
###### Abstract
Estimating 3D articulated shapes like animal bodies from monocular images is
inherently challenging due to the ambiguities of camera viewpoint, pose,
texture, lighting, etc. We propose ARTIC3D, a self-supervised framework to
reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
Specifically, ARTIC3D is built upon a skeleton-based surface representation
and is further guided by 2D diffusion priors from Stable Diffusion. First, we
enhance the input images with occlusions/truncation via 2D diffusion to obtain
cleaner mask estimates and semantic features. Second, we perform diffusion-
guided 3D optimization to estimate shape and texture that are of high-fidelity
and faithful to input images. We also propose a novel technique to calculate
more stable image-level gradients via diffusion models compared to existing
alternatives. Finally, we produce realistic animations by fine-tuning the
rendered shape and texture under rigid part transformations. Extensive
evaluations on multiple existing datasets as well as newly introduced noisy
web image collections with occlusions and truncation demonstrate that ARTIC3D
outputs are more robust to noisy images, higher quality in terms of shape and
texture details, and more realistic when animated. Project page:
https://chhankyao.github.io/artic3d/
*Work done as a student researcher at Google.
## 1 Introduction
Articulated 3D animal shapes are widely used in applications such as AR/VR,
gaming, and content creation. However, the articulated models are usually hard
to obtain as manually creating them is labor intensive and 3D scanning real
animals in the lab settings is highly infeasible. In this work, we aim to
automatically estimate high-quality 3D articulated animal shapes directly from
sparse and noisy web image collections. This is a highly ill-posed problem due
to the variations across images with diverse backgrounds, lighting, camera
viewpoints, animal poses, shapes, and textures, etc. In addition, we do not
assume access to any 3D shape models or per-image annotations like keypoints
and camera viewpoints in our in-the-wild setting.
Figure 1: Learning articulated 3D shapes from noisy web images. We propose
ARTIC3D, a diffusion-guided optimization framework to estimate the 3D shape
and texture of articulated animal bodies from sparse and noisy images in the
wild. Results show that ARTIC3D outputs are detailed, animatable, and robust
to occlusions or truncation.
While several recent methods [39, 33, 38] can produce animatable 3D shapes
using a skeleton-based neural surface or pre-defined mesh template, the
success is largely dependent on large-scale image datasets or manually-
filtered clean images for training or optimization. Moreover, the output 3D
shapes and textures are usually unrealistic when viewed from novel viewpoints
or pose articulations. On the other hand, recent success of generative
diffusion models [25, 28, 27] shows that one can generate high-quality images
for a given text prompt. Several recent works [21, 15, 18, 23] further
demonstrate the possibility to produce 3D objects/scenes simply using 2D
diffusion as multi-view supervision. In this work, we leverage the powerful 2D
diffusion prior to learn 3D articulated shapes, aiming to reconstruct and
animate 3D animals from sparse noisy online images without any 2D or 3D
annotations. Intuitively, one can improve the quality of 3D reconstructions by
utilizing a diffusion prior similar to the score distillation sampling (SDS)
loss proposed in DreamFusion [21]. In our experiments, nonetheless, we observe
that naively applying the SDS loss on 3D surface optimization leads to
unstable and inefficient training, producing undesirable artifacts like noisy
surfaces or ambiguous texture.
In this work, we present ARTIC3D (ARTiculated Image Collections in 3D), a
diffusion-guided optimization framework to learn articulated 3D shapes from
sparse noisy image collections. We use the articulated part surface and
skeleton from Hi-LASSIE [38], which allows explicit part manipulation and
animation. We propose a novel Decoder-based Accumulative Score Sampling (DASS)
module that can effectively leverage 2D diffusion model priors from Stable
Diffusion [27] for 3D optimization. In contrast to existing works that back-
propagate image gradients through the latent encoder, we propose a decoder-
based multi-step strategy in DASS, which we find to provide more stable
gradients for 3D optimization. To deal with noisy input images, we propose an
input preprocessing scheme that use diffusion model to reason about occluded
or truncated regions. In addition, we also propose techniques to create
realistic animations from pose articulations.
We analyze ARTIC3D on the Pascal-Part [5] and LASSIE [39] datasets. To better
demonstrate the robustness to noisy images, we extend LASSIE animal dataset
[39] with noisy web animal images where animals are occluded and truncated.
Both qualitative and quantitative results show that ARTIC3D produces 3D shapes
and textures that are detailed, faithful to input images, and robust to
partial observations. Moreover, our 3D articulated representation enables
explicit pose transfer and realistic animation which are not feasible for
prior diffusion-guided methods with neural volumetric representations. Fig. 1
shows sample 3D reconstructions and applications from ARTIC3D. The main
contributions of this work are:
* We propose a diffusion-guided optimization framework called ARTIC3D, where we
reconstruct 3D articulated shapes and textures from sparse noisy online images
without using any pre-defined shape templates or per-image annotations like
camera viewpoint or keypoints.
* We design several strategies to efficiently incorporate 2D diffusion priors in
3D surface optimization, including input preprocessing, decoding diffused
latents as image targets, pose exploration, and animation fine-tuning.
* We introduce E-LASSIE, an extended LASSIE dataset [39], by collecting and
annotating noisy web images with occlusions or truncation to evaluate model
robustness. Both qualitative and quantitative results show that ARTIC3D
outputs have higher-fidelity compared to prior arts in terms of 3D shape
details, texture, and animation.
## 2 Related Work
Animal shape and pose estimation. Earlier techniques on animal shape
estimation used statistical body models [46, 45] that are learned either using
animal figurines or a large number of annotated animal images. Some other
works [35, 34, 36, 37], use video inputs to learn articulated shapes by
exploiting dense correspondence information in video. However, these methods
rely on optical flow correspondences between video frames, which are not
available in our problem setting. Other techniques [12, 11] leverage a
parametric mesh model and learn a linear blend skinning from images to obtain
a posed mesh for different animal categories. Most related to our work are
LASSIE [39] and Hi-LASSIE [38] which tackle the same problem setting of
recovering 3D shape and texture from a sparse collection of animal images in
the wild using either a manually annotated skeleton template, or by
discovering category specific template from image collections. MagicPony [33]
learns a hybrid 3D representation of the animal instance from category
specific image collections. However, these approaches require carefully
curated input data and fail to handle image collections with partial
occlusions, truncation or noise. By leveraging recent advances in diffusion
models, we support reconstruction on a wider variety of input images.
3D reconstruction from sparse images. Several recent works [30, 42, 41, 32,
24, 2, 43, 3] have used implicit representations [19] to learn geometry and
appearance from sparse image collections either by training in a category
specific manner or assuming access to multi-view consistent sparse images
during inference. However, most of these approaches demonstrate compelling
results only on rigid objects. Zhang et al. [42] is another closely related
work that finds a neural surface representation from sparse image collections
but requires coarse camera initialization. By learning a part based mesh shape
and texture, our framework naturally lends itself to modeling and animating
articulated categories such as animals in the wild without any additional
requirements on camera parameters.
Diffusion prior for 3D. Diffusion models [27, 28, 44] have recently gained
popularity for generating high resolution images guided by various kinds of
conditioning inputs. Diffusion models capture the distribution of real data
which can be used as score function to guide 3D generation with score-
distillation sampling (SDS) loss as first described in DreamFusion [21].
Several recent approaches [18, 15, 17, 29, 26, 23] leverage the SDS loss to
generate 3D representations from either text or single or sparse image
collections. Drawing inspiration from these lines of work, we propose a novel
Decoder-based accumulative Score Sampling (DASS) that exploits the high
quality images synthesized by the decoder and demonstrate improved performance
over naive SDS loss.
## 3 Approach
Given 10-30 noisy web images of an animal species, ARTIC3D first preprocesses
the images via 2D diffusion to obtain cleaner silhouette estimates, semantic
features, and 3D skeleton initialization. We then jointly optimizes the camera
viewpoint, pose articulation, part shapes and texture for each instance.
Finally, we animate the 3D shapes with rigid bone transformations followed by
diffusion-guided fine-tuning. Before introducing our diffusion-based
strategies to improve the quality of 3D outputs, we briefly review the
skeleton-based surface representation similar to [39, 38], as well as Stable
Diffusion [27] that we use as diffusion prior.
### 3.1 Preliminaries
While most 3D generation methods optimize a volumetric neural field to
represent 3D rigid objects/scenes, we aim to produce 3D shapes that are
articulated and animatable. To enable explicit part manipulation and realistic
animation, we adopt a skeleton-based surface representation as in LASSIE [39]
and Hi-LASSIE [38]. Unlike [39, 38] which directly sample surface texture from
images, we optimize per-part texture images to obtain realistic instance
textures from novel views.
3D Skeleton. Given a user-specified reference image in the collection, Hi-
LASSIE [38] automatically discovers a 3D skeleton based on the geometric and
semantic cues from DINO-ViT [4] feature clusters. The skeleton initializes a
set of 3D joints and primitive part shapes, providing a good constraint of
part transformation and connectivity. In our framework, we obtain cleaner
feature clusters by diffusing input images, then applying Hi-LASSIE as an off-
the-shelf skeleton discovery method. For a fair comparison with existing
works, we use the same reference image for skeleton discovery as in [38] in
our experiments. Please refer to [38] for further details on skeleton
discovery.
Neural part surfaces. Following [38], using the discovered 3D skeleton, we
reconstruct a 3D part corresponding to each skeleton bone via a deformable
neural surface [42]. The neural surfaces are parameterized by multi-layer
perceptron networks (MLPs), mapping 3D surface points on a unit sphere to
their xyz deformation. Given $m$ uniformly sampled 3D points
$X\in\mathbb{R}^{3\times m}$ on a spherical surface, we can deform the 3D
shape of the $i$-th part through the part MLP as $X\mapsto\mathcal{F}_{i}(X)$.
Then, the part surfaces are rigidly transformed by the scaling
$s_{i}\in\mathbb{R}$, rotation $R_{i}\in\mathbb{R}^{3\times 3}$, and
translation $t_{i}\in\mathbb{R}^{3}$ of each skeleton part $i$. The
transformed part surface points $V_{i}$ in the global coordinate frame can be
written as $V_{i}=s_{i}R_{i}\mathcal{F}_{i}(X)+t_{i}$. Please refer to [38]
for further details.
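A condensed PyTorch sketch of one neural part surface may help fix ideas; the layer sizes, the residual deformation, and the tensor shapes below are our assumptions rather than the exact architecture of [38]:

```python
# Condensed PyTorch sketch (ours) of one neural part surface: an MLP deforms
# unit-sphere points, then the part's rigid transform places them in space.
import torch
import torch.nn as nn

class PartSurface(nn.Module):
    def __init__(self, hidden=64):  # hidden width is an assumption
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, X, s, R, t):
        # X: (m, 3) unit-sphere points; s scalar; R: (3, 3); t: (3,)
        deformed = X + self.mlp(X)       # F_i(X), modeled here as a residual
        return s * deformed @ R.T + t    # V_i = s_i R_i F_i(X) + t_i

X = torch.nn.functional.normalize(torch.randn(642, 3), dim=-1)
V = PartSurface()(X, s=1.0, R=torch.eye(3), t=torch.zeros(3))  # (642, 3)
```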
Stable Diffusion architecture. Stable Diffusion (SD) [27] is a state-of-the-
art text-to-image generative model that can synthesize high-quality images
given a text prompt. SD mainly consists of 3 components: An image encoder
$\mathcal{E}$ that encodes a given image $x$ into a latent code $z$; a decoder
network $\mathcal{D}$ that converts the latent code back to image pixels; and
a U-Net denoiser $\epsilon_{\phi}$ that can iteratively denoise a noisy latent
code. We use SD as a diffusion prior in our framework.
Figure 2: ARTIC3D overview. Given sparse web images of an animal species,
ARTIC3D estimates the camera viewpoint, articulated pose, 3D part shapes, and
surface texture for each instance. We propose a novel DASS module to
efficiently compute image-level gradients from stable diffusion, which are
applied in 1) input preprocessing, 2) shape and texture optimization, and 3)
animation.
### 3.2 Decoder-based Accumulative Score Sampling (DASS)
To leverage the 2D diffusion prior for 3D shape learning, DreamFusion [21]
proposes a score distillation sampling (SDS) loss to distill the images
rendered from random views and propagate the image-level gradients to Neural
Radiance Field (NeRF) parameters. To reduce the computational cost and improve
training stability, recent works like Latent-NeRF [18] and Magic3D [15]
perform distillation on the low-resolution latent codes in SD and back-
propagate the gradients through the SD image encoder $\mathcal{E}$. Formally,
let $x$ be a rendered image from 3D model and $z$ denote its latent codes from
the SD image encoder $\mathcal{E}$. At each score distillation iteration, the
latent codes $z$ are noised to a random time step $t$, denoted as $z_{t}$, and
denoised by the U-Net denoiser $\epsilon_{\phi}$ of the diffusion model. The
image-level SDS gradients can then be expressed as:
$\nabla_{x}\mathcal{L}_{\text{SDS}}=w_{t}(\epsilon_{\phi}(z_{t};y,t)-\epsilon)\frac{\partial
z}{\partial x}$, where $y$ denotes the guiding text embedding, $\epsilon$ is
the random noise added to the latent codes, and $w_{t}$ is a constant
multiplier which depends on diffusion timestep $t$. The denoiser
$\epsilon_{\phi}$ uses a guidance scale $w_{g}$ to balance the text guidance
and a classifier-free guidance [8] of an unconditional model.
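For concreteness, a schematic PyTorch version of this latent SDS step is sketched below; `encoder`, `unet`, `alphas`, and `null_emb` are placeholders for the corresponding Stable Diffusion components, and the timestep range and weighting are illustrative assumptions:

```python
# Schematic latent SDS step (ours); encoder/unet/alphas/null_emb stand in
# for the Stable Diffusion modules and noise schedule.
import torch

def sds_grad_on_image(x, y_emb, null_emb, encoder, unet, alphas, w_g):
    # x: rendered image with x.requires_grad_(True); gradients flow
    # through the encoder only (the U-Net pass is detached).
    z = encoder(x)
    t = torch.randint(20, 980, (1,), device=x.device)   # illustrative range
    eps = torch.randn_like(z)
    a_t = alphas[t]                                      # cumulative schedule
    z_t = a_t.sqrt() * z + (1.0 - a_t).sqrt() * eps      # noise latents to step t
    with torch.no_grad():
        e_text = unet(z_t, t, y_emb)                     # conditioned prediction
        e_uncond = unet(z_t, t, null_emb)                # unconditional prediction
        e_hat = e_uncond + w_g * (e_text - e_uncond)     # classifier-free guidance
    z.backward(gradient=e_hat - eps)   # chain rule through dz/dx, as in L_SDS
    return x.grad
```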
Although this common SDS loss is effective in generating NeRFs from text, we
observe that naively applying it in our framework leads to unstable and
inefficient training. As shown in Fig. 3 (b), the SDS gradients back-
propagated through the encoder are often quite noisy, causing undesirable
artifacts on 3D shapes and texture. Moreover, it requires the extra
computation and memory usage for gradient back propagation, limiting the
training batch size and thus decreasing stability.
To mitigate these issues, we propose a novel Decoder-based Accumulative Score
Sampling (DASS) module, an alternative to calculate pixel gradients that are
cleaner and more efficient. Fig. 2 illustrates the proposed DASS module. At a
high level, given an input image $x$, we obtain a denoised image $x^{\prime}$
from the decoder as a reconstruction target, based on our observation that
decoded outputs are generally less noisy. As shown in Fig. 2, we pass a
rendered image through the encoder $\mathcal{E}$ to obtain low-resolution
latent codes, update the latents for $n$ steps via score distillation, then
decode the updated latents with the decoder $\mathcal{D}$ as an image target.
Formally, instead of explicitly calculating the partial derivative $\partial
z/\partial x$, we use $x-\mathcal{D}(z-\nabla z)$ as a proxy to $\nabla x$,
where $\nabla z$ is the accumulated latent gradients over $n$ steps. This
makes a linear assumption on $\mathcal{D}$ around latents $z$, which we
empirically find effective to approximate the pixel gradients. The target
image $x^{\prime}=\mathcal{D}(z-\nabla z)$ can be directly used as an updated
input (Section 3.3) or to compute a pixel-level DASS loss
$\mathcal{L}_{dass}=\lVert(x-\mathcal{D}(z-\nabla z))\rVert^{2}$ in 3D
optimization (Section 3.4). Since the DASS module only involves one forward
pass of the encoder and decoder, it costs roughly half the memory consumption
during training compared to the original SDS loss. The visualizations in Fig.
3 demonstrate that DASS produces cleaner images than the original SDS loss in
one training step (b), and that the accumulated gradients can effectively
reduce noise and fill in the missing parts (c). Moreover, we show that adding
random noise to the background pixels can facilitate the shape completion by
DASS (a). We also perform ablative analyses on other diffusion parameters like
noise timestep (d) and guidance weight (e) in Fig. 3. In general, ARTIC3D
favors moderate accumulation steps $n\in(3,10)$ and lower timestep
$t\in(0.2,0.5)$ since higher variance can lead to 3D results that are not
faithful to the input images. Also, we use a lower guidance weight
$w_{g}\in(10,30)$ so that our results do not suffer from the over-saturation
effects common in prior works caused by the high guidance scale in the SDS loss.
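A minimal sketch of the DASS update, under the same placeholder conventions as the SDS sketch above (the `latent_sds_grad` helper is hypothetical and stands for one latent score-distillation step), might look as follows:

```python
# Minimal DASS sketch (ours): accumulate n latent score-distillation steps,
# then decode once to form a pixel-space target x' = D(z - grad z).
import torch

def dass_target(x, y_emb, encoder, decoder, latent_sds_grad, n=5):
    with torch.no_grad():
        z = encoder(x)
        grad_z = torch.zeros_like(z)
        for _ in range(n):                           # accumulate latent gradients
            grad_z += latent_sds_grad(z - grad_z, y_emb)
        return decoder(z - grad_z)                   # single decoder forward pass

# During 3D optimization, the target drives a pixel-level loss on a render x:
# loss_dass = ((x - dass_target(x.detach(), ...)) ** 2).mean()
```

Since the U-Net is never back-propagated through and the decoder runs once per target, this costs roughly half the memory of the standard SDS path, matching the discussion above.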
Figure 3: Ablative visualizations of the DASS method. From the example input
image (top left), we show the updated image after one optimization iteration
using various ways to obtain image-level gradients or parameter settings: (a)
shows that noised background in the input image encourages DASS to hallucinate
the missing parts; (b) compares the standard SDS (back-propagate gradients
through encoder) and our DASS (decoder-based) losses; (c) justifies our
accumulating latent gradient approach as it leads to cleaner decoded output;
(d) indicates that small timestep mostly modifies the texture, whereas large
timestep changes the geometry more (sometimes removes or creates body parts);
(e) demonstrates high-contrast colors and slightly disproportioned body with
higher guidance weight (diffusion prior is biased towards larger heads and
frontal views). Note that (b) uses the clean input in (a) for better
visualization, whereas (c),(d),(e) are obtained from the noised input.
### 3.3 Input preprocessing for noisy images
Animal bodies in real-world images often have ambiguous appearance caused by
noisy texture, dim lighting, occlusions, or truncation, as shown in Fig. 4. To
better deal with noisy or partial observations, we propose a novel method to
enhance the image quality and complete the missing parts. Given a sparse image
collection $\\{I_{j}\in\mathbb{R}^{H\times W\times 3}\\}$ ($j\in\\{1,...,N\\}$
and $N$ is typically between 10 and 30) of an animal species, we aim to obtain
accurate silhouettes estimates ${\\{\hat{M}_{j}\in\mathbb{R}^{H\times W}\\}}$
and clean semantic features ${\\{K_{j}\in\mathbb{R}^{h\times w\times d}\\}}$
for each instance. As shown in Fig. 2, we roughly estimate the foreground
masks via clustering salient features extracted by a trained DINO-ViT [4]
network. Then, we apply DASS to diffuse the background-masked images,
resulting in animal bodies with cleaner texture and complete shapes. Formally,
we obtain an updated image $I^{\prime}$ by $\mathcal{D}(z-\nabla z)$, where
$z=\mathcal{E}(I)$. Here, DASS serves as an image denoising and inpainting
module, which can effectively generate a high-quality version of a noisy input
via $n$ latent updates and a single forward pass of $\mathcal{D}$. Following
the noise-and-denoise nature of diffusion models, we show in Fig. 3 (a) that
manually adding Gaussian noise to the background pixels in an input image
encourages DASS to hallucinate the occluded parts while mostly preserving the
visible regions. Finally, we re-apply DINO-ViT feature extraction and
clustering [1] on the diffused images to obtain cleaner and more complete
masks as well as semantic features. Fig. 2 (left) shows sample noisy input
images and the corresponding output enhanced images and feature clusters. Note
that Farm3D [9] uses SD [27] to generate animal images from text for 3D
training, which, however, often contain irregular shapes (e.g., horses with 5
legs). On the contrary, our preprocessed images are more suitable for the
sparse-image optimization framework since our goal is to reconstruct 3D shape
and texture that are realistic and faithful to the input images.
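Schematically, the preprocessing pass might be organized as below; `dino_mask` and `dass_target` (from the DASS sketch above, with the SD modules assumed bound, e.g., via functools.partial) are hypothetical names, and the interface is our assumption:

```python
# Schematic preprocessing pass (ours): rough DINO mask, noised background,
# DASS denoising/inpainting; masks and features are then re-extracted.
import torch

def preprocess_image(I, dino_mask, dass_target, y_emb):
    # I: (3, H, W) image; dino_mask: (H, W) rough foreground estimate
    noise = torch.randn_like(I)
    noised = torch.where(dino_mask.bool(), I, noise)  # noise background pixels
    return dass_target(noised, y_emb)                 # I' = D(z - grad z)
```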
### 3.4 Diffusion-guided optimization of shape and texture
Given the preprocessed images, silhouette estimates, and semantic features, we
jointly optimize the camera viewpoint, pose articulation, 3D part shapes, and
texture. Since we do not assume any 2D or 3D annotations, we follow Hi-LASSIE
[38] and adopt an analysis-by-synthesis approach to reconstruct 3D shape and
texture that are faithful to the input images. That is, we render the 3D part
using a differentiable renderer [16] and compare them with the 2D images,
pseudo ground-truth silhouettes, and DINO-ViT features. Fig. 2 (top)
illustrates the shape and texture optimization.
LASSIE and Hi-LASSIE losses. Given the rendered silhouette $\tilde{M}^{j}$ and
pseudo ground-truth $\hat{M}^{j}$ of instance $j$, the silhouette loss
$\mathcal{L}_{sil}$ can be written as:
$\mathcal{L}_{sil}=\sum_{j}\lVert\tilde{M}^{j}-\hat{M}^{j}\rVert^{2}$. LASSIE
[39] and Hi-LASSIE [38] further leverage the 2D correspondence of DINO
features between images of the same animal class to define a semantic
consistency loss $\mathcal{L}_{sem}$. $\mathcal{L}_{sem}$ can be interpreted
as the Chamfer distance between 3D surface points and 2D pixels, enforcing the
aggregated 3D point features to project closer to the similar pixel features
in all images. To regularize the pose articulations and part shapes, [39, 38]
also apply a part rotation loss $\mathcal{L}_{rot}$, Laplacian mesh
regularization $\mathcal{L}_{lap}$, and surface normal loss
$\mathcal{L}_{norm}$. The part rotation loss $\mathcal{L}_{rot}=\sum_{j}\lVert
R^{j}-\bar{R}\rVert^{2}$ limits the angle offsets from resting pose, where
$R^{j}$ is the part rotations of instance $j$ and $\bar{R}$ denotes the part
rotations of shared resting pose. $\mathcal{L}_{lap}$ and $\mathcal{L}_{norm}$
encourage smooth 3D surfaces by pulling each vertex towards the center of its
neighbors and enforcing neighboring faces to have similar normals,
respectively. We omit the details and refer the readers to [39, 38].
Considering that the reconstruction ($\mathcal{L}_{sil}$, $\mathcal{L}_{sem}$)
and regularization ($\mathcal{L}_{rot}$, $\mathcal{L}_{lap}$,
$\mathcal{L}_{norm}$) losses are generic and effective on articulated shapes,
we use them in ARTIC3D along with novel texture reconstruction and DASS
modules.
Texture reconstruction. Both [39, 38] directly sample texture from input RGB,
resulting in unrealistic textures in occluded regions. To obtain more
realistic textures, we also optimize a texture image $T_{i}$ for each part.
The vertex colors $C\in\mathbb{R}^{3\times m}$ are sampled via the pre-defined
UV mapping $\mathcal{S}$ of surface points $X$. Formally, the surface color
sampling of part $i$ can be expressed as $C_{i}=T_{i}(\mathcal{S}(X)).$ The
sampled surface texture is then symmetrized according to the symmetry plane
defined in the 3D skeleton. Note that the texture images are optimized per
instance since the animals in web images can have diverse textures. Similar to
$\mathcal{L}_{lap}$, we enforce the surface texture to be close to the input
image when rendered from the estimated input view. The texture reconstruction
loss is defined as:
$\mathcal{L}_{text}=\sum_{j}\lVert\hat{M}^{j}\odot(\hat{I}^{j}-\tilde{I}^{j})\rVert^{2}$
where $\hat{I}^{j}$ denotes the clean input image of instance $j$ after input
preprocessing and $\hat{M}^{j}$ denotes the corresponding animal mask;
$\tilde{I}^{j}$ is the rendered RGB image from the estimated 3D shape and
texture; and $\odot$ denotes element-wise product. The reconstruction loss is
masked by the estimated foreground silhouette so that the surface texture
optimization is only affected by the visible, non-occluded animal pixels.
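A short sketch of the per-part texture sampling and the masked reconstruction loss, with illustrative shapes and a bilinear `grid_sample` standing in for the pre-defined UV mapping $\mathcal{S}$:

```python
# Sketch (ours) of per-part texture sampling and the masked texture loss;
# bilinear grid_sample stands in for the pre-defined UV mapping S.
import torch
import torch.nn.functional as F

def sample_part_colors(T_i, uv):
    # T_i: (1, 3, 128, 128) learnable texture image; uv: (m, 2) in [-1, 1]
    grid = uv.view(1, 1, -1, 2)
    C = F.grid_sample(T_i, grid, align_corners=True)  # (1, 3, 1, m)
    return C.view(3, -1)                              # vertex colors C_i

def texture_loss(I_hat, I_tilde, M_hat):
    # masked L2 between the preprocessed input and the rendered image
    return ((M_hat.unsqueeze(0) * (I_hat - I_tilde)) ** 2).sum()
```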
Distilling 3D reconstruction. In addition to the aforementioned losses, we
propose to increase the shape and texture details by distilling 3D
reconstruction. Here, we use DASS as a critic to evaluate how well a 3D
reconstruction looks in its 2D renders, and calculate pixel gradients from the
image target. Similar to prior diffusion-based methods [21, 18, 15], we render
the 3D surfaces with random viewpoints, lighting, and background colors during
training. Moreover, we design a pose exploration scheme to densify the
articulation space in our sparse-image scenario. In particular, we randomly
interpolate the estimated bone rotation $(R^{j_{1}},R^{j_{2}})$ of two
instances $(j_{1},j_{2})$, and generate a new instance with novel pose
$R^{\prime}=\alpha R^{j_{1}}+(1-\alpha)R^{j_{2}}$ for rendering, where
$\alpha\in(0,1)$ is a random scalar. As such, we can better constrain the part
deformation by the diffusion prior and prevent irregular shapes or disconnections
between parts. As shown in Fig. 2, we then diffuse the latent codes of
rendered images and obtain pixel gradients from the DASS module. The resulting
gradients are back-propagated to update the part surface texture, deformation
MLP, bone transformation, and camera viewpoints. In our experiments, we
observe that the RGB gradients do not propagate well through the SoftRas [16]
blending function, and we thus modify it with a layered blending approach
proposed in [20].
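The pose exploration step can be sketched as follows; we assume a per-bone rotation parameterization (e.g., axis-angle) for which linear blending is meaningful, which is an illustrative choice rather than the exact implementation.

```python
import torch

def explore_pose(rot_a, rot_b):
    """Blend the estimated bone rotations of two instances into a novel pose.

    rot_a, rot_b: (num_bones, 3) rotation parameters of instances j1 and j2.
    """
    alpha = torch.rand(())                       # random scalar in (0, 1)
    return alpha * rot_a + (1.0 - alpha) * rot_b
```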
Optimization details. The overall optimization objective can be expressed as
the weighted sum of all the losses
$\mathcal{L}=\sum_{l\in\mathfrak{L}}\alpha_{l}\mathcal{L}_{l}$, where
$\mathfrak{L}=\{sil,sem,rot,lap,norm,text,dass\}$ as described above. We
optimize the shared and instance-specific shapes in two stages. That is, we
first update the shared part MLPs along with camera viewpoints and pose
parameters. Then, we fine-tune the instance-specific part MLPs and optimize
texture images for each instance. All model parameters are updated using an
Adam optimizer [10]. We render the images at 512$\times$512 resolution and at
128$\times$128 for the part texture images. More optimization details are
described in the supplemental material.
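The weighted objective itself is straightforward; the weights below are placeholders, since the tuned values are reported in the supplemental material.

```python
# Placeholder loss weights alpha_l; the tuned values are in the supplement.
ALPHA = {"sil": 1.0, "sem": 1.0, "rot": 0.1, "lap": 0.1,
         "norm": 0.1, "text": 1.0, "dass": 1.0}

def total_loss(losses):
    """Weighted sum L = sum_l alpha_l * L_l over a dict of loss tensors."""
    return sum(ALPHA[name] * value for name, value in losses.items())
```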
### 3.5 Animation fine-tuning
One can easily animate the resulting 3D articulated animals by gradually
rotating the skeleton bones and their corresponding part surfaces. However,
the rigid part transformations often result in disconnected shapes or texture
around the joints. To improve the rendered animation in 2D, one can naively
use DASS frame-by-frame on a sequence of articulated shapes. However, this can
produce artifacts like color flickering and shape inconsistency across the
frames. As a remedy, we further propose a fine-tuning step, called Temporal-
DASS (T-DASS), to generate high-quality and temporally consistent 2D
animations based on the ARTIC3D outputs. Given a sequence of part
transformations from simple interpolation across instances or motion re-
targeting, we render the 3D surfaces as video frames
$\{J_{k}\in\mathbb{R}^{H\times W\times 3}\}$ $(k\in\{1,...,K\})$ and encode
them into latent codes $\{z_{k}\in\mathbb{R}^{h\times w\times 3}\}$ through
the SD encoder $\mathcal{E}$. Then, we design a reconstruction loss
$\mathcal{L}_{recon}$ and temporal consistency loss $\mathcal{L}_{temp}$ to
fine-tune the animation in the latent space. Similar to DASS, we obtain the
reconstruction targets $\{z_{k}^{\prime}\}$ by accumulating latent SDS
gradients $\nabla z_{k}$ for multiple steps: $z_{k}^{\prime}=z_{k}-\nabla
z_{k}$. The reconstruction loss can then be written as:
$\mathcal{L}_{recon}=\sum_{k=1}^{K}\lVert z_{k}-z_{k}^{\prime}\rVert^{2}$. To
enforce temporal consistency, we exploit our 3D surface outputs and calculate
accurate 2D correspondences across neighboring frames. Specifically, for each
latent pixel in frame $z_{k}$, we find the closest visible 3D surfaces via
mesh rasterization, then backtrack their 2D projection in frame $z_{k-1}$,
forming a dense 2D flow field $F_{k}\in\mathbb{R}^{h\times w\times 2}$.
Intuitively, the corresponding pixels should have similar latent codes. Hence,
we use $F_{k}$ to perform temporal warping on the latent codes $z_{k-1}$,
denoted as: warp$(z_{k-1},F_{k})$, and define $\mathcal{L}_{temp}$ as:
$\mathcal{L}_{temp}=\sum_{k=2}^{K}\lVert z_{k}-\text{warp}(z_{k-1},F_{k})\rVert^{2}$.
We fine-tune the latent codes $\{z_{k}\}$ with $\mathcal{L}_{recon}$ and
$\mathcal{L}_{temp}$, where $\{F_{k}\}$ are pre-computed and
$\{z_{k}^{\prime}\}$ are updated in each iteration. Finally, we can simply
obtain the RGB video frames $\{\mathcal{D}(z_{k})\}$ by passing the optimized
latent codes through the SD decoder $\mathcal{D}$. The proposed $\mathcal{L}_{recon}$
encourages better shape and texture details in each frame, and
$\mathcal{L}_{temp}$ can effectively regularize latent updates temporally.
Note that T-DASS optimizes the latent codes and takes temporal consistency
into account, which is different from DASS which operates on each image
individually.
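A sketch of the two T-DASS losses on the latent codes is shown below; the flow convention (normalized sampling grids consumed by `grid_sample`) is our assumption, since the paper derives the correspondences from mesh rasterization.

```python
import torch
import torch.nn.functional as F

def t_dass_losses(latents, targets, flows):
    """Compute L_recon and L_temp on latent codes (a sketch).

    latents: (K, C, h, w) latent codes z_k being optimized.
    targets: (K, C, h, w) targets z'_k from accumulated SDS gradients.
    flows:   (K, h, w, 2) grids F_k in [-1, 1] mapping pixels of frame k
             to their correspondences in frame k-1 (flows[0] is unused).
    """
    recon = ((latents - targets) ** 2).sum()
    temp = latents.new_zeros(())
    for k in range(1, latents.shape[0]):
        warped = F.grid_sample(latents[k - 1:k], flows[k:k + 1],
                               align_corners=True)  # warp z_{k-1} to frame k
        temp = temp + ((latents[k:k + 1] - warped) ** 2).sum()
    return recon, temp
```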
## 4 Experiments
Figure 4: E-LASSIE samples. We extend LASSIE [39] image sets with 15 occluded
or truncated images per animal class and annotate the 2D keypoints for
evaluation. These noisy images pose great challenges to sparse-image
optimization since the per-instance 3D shapes can easily overfit to the
visible parts and ignore the rest.
Datasets. Following [39, 38], we evaluate ARTIC3D on the Pascal-Part [5] and
LASSIE [39] images. From Pascal-Part, we obtain images of horse, cow, and
sheep, as well as their 2D keypoints automatically computed using the ground-
truth 2D part masks. The LASSIE dataset includes web images of other animal
species (zebra, tiger, giraffe, elephant, kangaroo, and penguin) and 2D
keypoint annotations. Each image collection contains roughly 30 images of
different instances with diverse appearances, which are manually filtered so
that the animal bodies are fully visible in the images. To evaluate the model
robustness in a more practical setting, we extend the LASSIE image sets with
several noisy images where the animals are occluded or truncated. In
particular, we collect 15 additional web images (CC-licensed) per class and
annotate the 2D keypoints for evaluation. We call the extended image sets
E-LASSIE and show some examples in Fig. 4. For the experiments on E-LASSIE, we
optimize and evaluate on all the 45 images in each set.
Baselines. We mainly compare ARTIC3D with LASSIE [39] and Hi-LASSIE [38] as we
deal with the same problem setting, namely sparse image optimization for
articulated animal shapes. For reference, we also compare the results with
several learning-based methods like A-CSM [11], MagicPony [33], and Farm3D
[9]. Note that these approaches are not directly comparable to ARTIC3D since
they train a feedforward network on large-scale image sets (not available in
our scenario). Although related, some other recent works on 3D surface
reconstruction either cannot handle articulations [12, 14, 7, 31, 40] or
require different inputs [13, 34, 36]. As a stronger baseline, we implement
Hi-LASSIE+, incorporating the standard SDS loss as in [27, 15, 18] (back-
propagating latent gradients through the encoder) during Hi-LASSIE [38] optimization
for shape and texture.
Evaluation metrics. Considering the lack of ground-truth 3D annotations in our
datasets, we follow a common practice [45, 11, 39, 38] to use keypoint
transfer accuracy as a quantitative metric to evaluate 3D reconstruction. For
each pair of images, we map the annotated 2D keypoints on source image onto
the canonical 3D surfaces, re-project them to the target image via the
estimated camera, pose, and shape, and compare the transferred keypoints with
target annotations. To further evaluate the quality of textured outputs, we
compute CLIP [22] features of the 3D output renders under densely sampled
viewpoints, and calculate the feature similarity against text prompt as well
as input images. While most prior arts on 3D shape generation [21, 23] only
evaluate the image-text similarity, we also evaluate the image-image
similarity since our outputs should be faithful to both the category-level
textual description as well as instance-specific input images. We use a text
prompt: “A photo of *” for each animal class “*” in our experiments. A CLIP
ViT-B/32 model is used to compute the average feature similarity over 36
uniformly sampled azimuth renders at a fixed elevation of 30 degrees. We show
the main results here and more quantitative and qualitative comparisons in the
supplemental material, including animation videos, user study, and more
detailed ablation study.
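A sketch of the image-text half of this metric is given below, assuming the open-source clip package; the variable names are illustrative.

```python
import torch
import clip  # OpenAI's CLIP package (an assumed dependency for evaluation)

def image_text_similarity(renders, prompt, device="cuda"):
    """Mean cosine similarity between azimuth renders and a text prompt.

    renders: list of 36 CLIP-preprocessed image tensors (3, 224, 224).
    """
    model, _ = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        img = model.encode_image(torch.stack(renders).to(device)).float()
        txt = model.encode_text(clip.tokenize([prompt]).to(device)).float()
    img = img / img.norm(dim=-1, keepdim=True)
    txt = txt / txt.norm(dim=-1, keepdim=True)
    return (img @ txt.T).mean().item()  # average over the 36 viewpoints
```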
Qualitative results. Fig. 1 shows some sample outputs of ARTIC3D. In Fig. 5,
we compare the visual results of Hi-LASSIE, Hi-LASSIE+, and ARTIC3D on the
E-LASSIE images. Both Hi-LASSIE and Hi-LASSIE+ produce irregular poses and
shapes for the invisible parts. Regarding surface texture, Hi-LASSIE
reconstructs faithful texture from the input view but noisy texture in novel
views, since it naively samples vertex colors from the input images. The output
texture of Hi-LASSIE+ is generally less noisy thanks to the SDS loss. By
comparison, ARTIC3D accurately estimates the camera viewpoint, pose, shape,
and texture even in the presence of occlusions or truncation. The ARTIC3D
outputs are detailed, faithful to input images, and realistic from both input
and novel views.
Figure 5: Visual comparison of ARTIC3D and other baselines. For each input
image, we show the 3D textured outputs from input (upper) and novel (lower)
views. The results demonstrate that ARTIC3D is more robust to noisy images
with occlusions or truncation, producing 3D shape and texture that are
detailed and faithful to the input images.
Quantitative comparisons. We show comparisons of the keypoint transfer
accuracy (PCK) in Tables 1 and 2. On both LASSIE and E-LASSIE image sets, Hi-LASSIE+
produces a marginal PCK gain from Hi-LASSIE [38] by naively applying the SDS
loss. ARTIC3D, on the other hand, achieves consistently higher PCK than the
baselines, especially on the noisy E-LASSIE images. The results demonstrate
that our diffusion-guided strategies can effectively learn more detailed,
accurate, and robust 3D shapes. The Pascal-Part results in Table 2 further show
that ARTIC3D performs favorably against the state-of-the-art optimization-
based methods and is comparable to learning-based approaches. In Table 3, we
show the CLIP similarity comparisons on the E-LASSIE images, which indicate
that our textured outputs are more faithful to both the input images
(instance-level) and text prompt (class-level) for most animal classes.
Table 1: Keypoint transfer evaluations on the LASSIE [39] and E-LASSIE image sets. We report the average PCK@0.1 ($\uparrow$) on all pairs of images. ARTIC3D performs favorably against the optimization-based prior arts on all animal classes. The larger performance gap on E-LASSIE demonstrates that ARTIC3D is robust to noisy images. Method | Image set | Elephant | Giraffe | Kangaroo | Penguin | Tiger | Zebra
---|---|---|---|---|---|---|---
LASSIE [39] | LASSIE | 40.3 | 60.5 | 31.5 | 40.6 | 62.4 | 63.3
Hi-LASSIE [38] | LASSIE | 42.7 | 61.6 | 35.0 | 44.4 | 63.1 | 64.2
Hi-LASSIE+ | LASSIE | 43.3 | 61.5 | 35.5 | 44.6 | 63.4 | 64.0
ARTIC3D | LASSIE | 44.1 | 61.9 | 36.7 | 45.3 | 64.0 | 64.8
Hi-LASSIE [38] | E-LASSIE | 37.6 | 54.3 | 31.9 | 41.7 | 57.4 | 60.1
Hi-LASSIE+ | E-LASSIE | 38.3 | 54.8 | 32.8 | 41.8 | 57.7 | 61.3
ARTIC3D | E-LASSIE | 39.8 | 58.0 | 35.3 | 43.8 | 59.3 | 63.0
Table 2: Keypoint transfer results on Pascal-Part [5]. We report the mean PCK@0.1 ($\uparrow$) on all pairs of images. ∗ indicates learning-based models which are trained on a large-scale image set. Method | Horse | Cow | Sheep
---|---|---|---
UMR∗ [14] | 24.4 | - | -
A-CSM∗ [11] | 32.9 | 26.3 | 28.6
MagicPony∗ [33] | 42.9 | 42.5 | 26.2
Farm3D∗ [9] | 42.5 | 40.2 | 32.8
LASSIE [39] | 42.2 | 37.5 | 27.5
Hi-LASSIE [38] | 43.7 | 42.1 | 29.9
Hi-LASSIE+ | 43.3 | 42.3 | 30.5
ARTIC3D | 44.4 | 43.0 | 31.9
Table 3: CLIP similarity ($\uparrow$) evaluations on the E-LASSIE images. For each animal class, we calculate cosine similarities $s1/s2$, where $s1$ is the image-image similarity (against masked input image) and $s2$ is the image-text similarity (against text prompt). Method | Elephant | Giraffe | Kangaroo | Penguin | Tiger | Zebra
---|---|---|---|---|---|---
Hi-LASSIE [38] | 80.0 / 26.3 | 85.2 / 29.6 | 77.4 / 25.6 | 85.8 / 30.8 | 79.7 / 25.6 | 83.8 / 27.4
Hi-LASSIE+ | 79.0 / 27.7 | 84.7 / 30.2 | 78.3 / 29.1 | 82.9 / 32.3 | 75.3 / 25.3 | 81.9 / 27.6
ARTIC3D | 82.6 / 28.4 | 85.3 / 30.7 | 81.6 / 29.9 | 85.5 / 33.1 | 80.0 / 27.8 | 84.1 / 29.4
Figure 6: Animation fine-tuning. Compared to the original animated outputs via
rigid transformation (top), our animation fine-tuning (bottom) effectively
improves the shape and texture details, especially around animal joints.
Figure 7: Texture transfer. Our part surface representation enables
applications like pose or texture transfer. Given a source shape and target
texture, we show the transferred texture between instances (left) and animal
species (right).
Animation and texture transfer. In Fig. 6, we compare the animations before
and after our fine-tuning step via T-DASS. While the skeleton-based
representation allows easy animation via rigid part transformation, the output
part shapes and texture are often disconnected and irregular around the
joints. The results show that T-DASS can effectively produce high-quality
animations that are detailed in shape and texture and temporally consistent
between frames. In addition to animation, our 3D part surfaces also enable
convenient controllable syntheses like texture transfer and pose transfer
between different instances or animal classes. Several examples of texture
transfer are shown in Fig. 7. More visual and video results of these
applications are shown in the supplemental material.
Limitations. ARTIC3D relies on the 3D skeleton discovered by Hi-LASSIE [38] to
initialize the parts. If the animal bodies are occluded or truncated in most
images, the skeleton initialization tends to be inaccurate, which limits
ARTIC3D’s ability to form realistic parts. Although our input preprocessing
method can mitigate this issue to some extent, fluffy animals (e.g., sheep) with
ambiguous skeletal configurations can still pose challenges in skeleton
discovery. In addition, the front-facing bias in diffusion models sometimes
leads to unrealistic textures like multiple faces, which also affects our
reconstruction quality. See the supplemental material for failure cases.
## 5 Conclusion
We propose ARTIC3D, a diffusion-guided framework to reconstruct 3D articulated
shapes and texture from sparse and noisy web images. Specifically, we design a
novel DASS module to efficiently calculate pixel gradients from score
distillation for 3D surface optimization and use it in the input preprocessing
of noisy images, shape and texture optimization, and animation
fine-tuning. Results on both the existing datasets and newly introduced
noisy web images demonstrate that ARTIC3D produces more robust, detailed, and
realistic reconstructions than prior arts.
## References
* [1] Shir Amir, Yossi Gandelsman, Shai Bagon, and Tali Dekel. Deep ViT features as dense visual descriptors. arXiv preprint arXiv:2112.05814, 2021.
* [2] Mark Boss, Raphael Braun, Varun Jampani, Jonathan T Barron, Ce Liu, and Hendrik Lensch. NeRD: Neural reflectance decomposition from image collections. In CVPR, pages 12684–12694, 2021.
* [3] Mark Boss, Andreas Engelhardt, Abhishek Kar, Yuanzhen Li, Deqing Sun, Jonathan T Barron, Hendrik Lensch, and Varun Jampani. SAMURAI: Shape and material from unconstrained real-world arbitrary image collections. NeurIPS, 2022.
* [4] Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, and Armand Joulin. Emerging properties in self-supervised vision transformers. In ICCV, pages 9650–9660, 2021.
* [5] Xianjie Chen, Roozbeh Mottaghi, Xiaobai Liu, Sanja Fidler, Raquel Urtasun, and Alan Yuille. Detect what you can: Detecting and representing objects using holistic models and body parts. In CVPR, pages 1971–1978, 2014.
* [6] Mark Everingham, Luc Van Gool, Christopher KI Williams, John Winn, and Andrew Zisserman. The PASCAL visual object classes (VOC) challenge. IJCV, 88(2):303–338, 2010.
* [7] Shubham Goel, Angjoo Kanazawa, and Jitendra Malik. Shape and viewpoint without keypoints. In ECCV, pages 88–104, 2020.
* [8] Jonathan Ho and Tim Salimans. Classifier-free diffusion guidance. arXiv preprint arXiv:2207.12598, 2022.
* [9] Tomas Jakab, Ruining Li, Shangzhe Wu, Christian Rupprecht, and Andrea Vedaldi. Farm3d: Learning articulated 3d animals by distilling 2d diffusion. arXiv preprint arXiv:2304.10535, 2023.
* [10] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
* [11] Nilesh Kulkarni, Abhinav Gupta, David F Fouhey, and Shubham Tulsiani. Articulation-aware canonical surface mapping. In CVPR, pages 452–461, 2020.
* [12] Nilesh Kulkarni, Abhinav Gupta, and Shubham Tulsiani. Canonical surface mapping via geometric cycle consistency. In ICCV, pages 2202–2211, 2019.
* [13] Xueting Li, Sifei Liu, Shalini De Mello, Kihwan Kim, Xiaolong Wang, Ming-Hsuan Yang, and Jan Kautz. Online adaptation for consistent mesh reconstruction in the wild. NeurIPS, 33:15009–15019, 2020.
* [14] Xueting Li, Sifei Liu, Kihwan Kim, Shalini De Mello, Varun Jampani, Ming-Hsuan Yang, and Jan Kautz. Self-supervised single-view 3D reconstruction via semantic consistency. In ECCV, pages 677–693, 2020.
* [15] Chen-Hsuan Lin, Jun Gao, Luming Tang, Towaki Takikawa, Xiaohui Zeng, Xun Huang, Karsten Kreis, Sanja Fidler, Ming-Yu Liu, and Tsung-Yi Lin. Magic3d: High-resolution text-to-3d content creation. arXiv preprint arXiv:2211.10440, 2022.
* [16] Shichen Liu, Tianye Li, Weikai Chen, and Hao Li. Soft rasterizer: A differentiable renderer for image-based 3D reasoning. In CVPR, pages 7708–7717, 2019.
* [17] Luke Melas-Kyriazi, Christian Rupprecht, Iro Laina, and Andrea Vedaldi. Realfusion: 360° reconstruction of any object from a single image. arXiv preprint arXiv:2302.10663, 2023.
* [18] Gal Metzer, Elad Richardson, Or Patashnik, Raja Giryes, and Daniel Cohen-Or. Latent-nerf for shape-guided generation of 3d shapes and textures. arXiv preprint arXiv:2211.07600, 2022.
* [19] Ben Mildenhall, Pratul P Srinivasan, Matthew Tancik, Jonathan T Barron, Ravi Ramamoorthi, and Ren Ng. NeRF: Representing scenes as neural radiance fields for view synthesis. In ECCV, pages 405–421, 2020.
* [20] Tom Monnier, Matthew Fisher, Alexei A Efros, and Mathieu Aubry. Share with thy neighbors: Single-view reconstruction by cross-instance consistency. In ECCV, pages 285–303, 2022.
* [21] Ben Poole, Ajay Jain, Jonathan T Barron, and Ben Mildenhall. Dreamfusion: Text-to-3d using 2d diffusion. arXiv preprint arXiv:2209.14988, 2022.
* [22] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In ICML, pages 8748–8763, 2021.
* [23] Amit Raj, Srinivas Kaza, Ben Poole, Michael Niemeyer, Nataniel Ruiz, Ben Mildenhall, Shiran Zada, Kfir Aberman, Michael Rubinstein, Jonathan Barron, et al. Dreambooth3d: Subject-driven text-to-3d generation. arXiv preprint arXiv:2303.13508, 2023.
* [24] Amit Raj, Michael Zollhofer, Tomas Simon, Jason Saragih, Shunsuke Saito, James Hays, and Stephen Lombardi. Pixel-aligned volumetric avatars. In CVPR, pages 11733–11742, 2021.
* [25] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In ICML, pages 8821–8831, 2021.
* [26] Elad Richardson, Gal Metzer, Yuval Alaluf, Raja Giryes, and Daniel Cohen-Or. Texture: Text-guided texturing of 3d shapes. arXiv preprint arXiv:2302.01721, 2023.
* [27] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In CVPR, pages 10684–10695, 2022.
* [28] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim Salimans, et al. Photorealistic text-to-image diffusion models with deep language understanding. NeurIPS, 35:36479–36494, 2022.
* [29] Uriel Singer, Shelly Sheynin, Adam Polyak, Oron Ashual, Iurii Makarov, Filippos Kokkinos, Naman Goyal, Andrea Vedaldi, Devi Parikh, Justin Johnson, et al. Text-to-4d dynamic scene generation. arXiv preprint arXiv:2301.11280, 2023.
* [30] Prune Truong, Marie-Julie Rakotosaona, Fabian Manhardt, and Federico Tombari. Sparf: Neural radiance fields from sparse and noisy poses. arXiv preprint arXiv:2211.11738, 2022.
* [31] Shubham Tulsiani, Nilesh Kulkarni, and Abhinav Gupta. Implicit mesh reconstruction from unannotated image collections. arXiv preprint arXiv:2007.08504, 2020.
* [32] Qianqian Wang, Zhicheng Wang, Kyle Genova, Pratul P Srinivasan, Howard Zhou, Jonathan T Barron, Ricardo Martin-Brualla, Noah Snavely, and Thomas Funkhouser. Ibrnet: Learning multi-view image-based rendering. In CVPR, pages 4690–4699, 2021.
* [33] Shangzhe Wu, Ruining Li, Tomas Jakab, Christian Rupprecht, and Andrea Vedaldi. Magicpony: Learning articulated 3d animals in the wild. arXiv preprint arXiv:2211.12497, 2022.
* [34] Gengshan Yang, Deqing Sun, Varun Jampani, Daniel Vlasic, Forrester Cole, Huiwen Chang, Deva Ramanan, William T Freeman, and Ce Liu. LASR: Learning articulated shape reconstruction from a monocular video. In CVPR, pages 15980–15989, 2021.
* [35] Gengshan Yang, Deqing Sun, Varun Jampani, Daniel Vlasic, Forrester Cole, Ce Liu, and Deva Ramanan. ViSER: Video-specific surface embeddings for articulated 3d shape reconstruction. NeurIPS, 34, 2021.
* [36] Gengshan Yang, Minh Vo, Natalia Neverova, Deva Ramanan, Andrea Vedaldi, and Hanbyul Joo. BANMo: Building animatable 3D neural models from many casual videos. arXiv preprint arXiv:2112.12761, 2021.
* [37] Gengshan Yang, Chaoyang Wang, N Dinesh Reddy, and Deva Ramanan. Reconstructing animatable categories from videos. arXiv preprint arXiv:2305.06351, 2023.
* [38] Chun-Han Yao, Wei-Chih Hung, Yuanzhen Li, Michael Rubinstein, Ming-Hsuan Yang, and Varun Jampani. Hi-lassie: High-fidelity articulated shape and skeleton discovery from sparse image ensemble. arXiv preprint arXiv:2212.11042, 2022.
* [39] Chun-Han Yao, Wei-Chih Hung, Yuanzhen Li, Michael Rubinstein, Ming-Hsuan Yang, and Varun Jampani. Lassie: Learning articulated shapes from sparse image ensemble via 3d part discovery. arXiv preprint arXiv:2207.03434, 2022.
* [40] Yufei Ye, Shubham Tulsiani, and Abhinav Gupta. Shelf-supervised mesh prediction in the wild. In CVPR, pages 8843–8852, 2021.
* [41] Alex Yu, Vickie Ye, Matthew Tancik, and Angjoo Kanazawa. pixelnerf: Neural radiance fields from one or few images. In CVPR, pages 4578–4587, 2021.
* [42] Jason Zhang, Gengshan Yang, Shubham Tulsiani, and Deva Ramanan. NeRS: Neural reflectance surfaces for sparse-view 3D reconstruction in the wild. NeurIPS, 34, 2021.
* [43] Kai Zhang, Fujun Luan, Qianqian Wang, Kavita Bala, and Noah Snavely. PhySG: Inverse rendering with spherical Gaussians for physics-based material editing and relighting. In CVPR, pages 5453–5462, 2021.
* [44] Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. arXiv preprint arXiv:2302.05543, 2023.
* [45] Silvia Zuffi, Angjoo Kanazawa, Tanya Berger-Wolf, and Michael J Black. Three-D Safari: Learning to estimate zebra pose, shape, and texture from images "in the wild". In ICCV, pages 5359–5368, 2019.
* [46] Silvia Zuffi, Angjoo Kanazawa, David W Jacobs, and Michael J Black. 3D menagerie: Modeling the 3D shape and pose of animals. In CVPR, pages 6365–6373, 2017.
Optimality conditions, approximate stationarity, and applications
– a story beyond Lipschitzness
Alexander Y. Kruger
Federation University Australia,
Centre for Informatics and Applied Optimization,
School of Engineering, Information Technology and Physical Sciences,
Ballarat VIC 3353, Australia
Patrick Mehlitz
Brandenburgische Technische Universität Cottbus–Senftenberg,
Institute of Mathematics,
03046 Cottbus, Germany
Approximate necessary optimality conditions in terms of Fréchet subgradients and normals
for a rather general optimization problem with a potentially non-Lipschitzian objective function
are established with the aid of Ekeland's variational principle,
the fuzzy Fréchet subdifferential sum rule,
and a novel notion of lower semicontinuity relative to a set-valued mapping or set.
Feasible points satisfying these optimality conditions are referred to as approximately stationary.
As applications, we derive a new general version of the extremal principle.
Furthermore, we study approximate stationarity conditions for an optimization problem with a composite objective function and geometric constraints,
a qualification condition guaranteeing that approximately stationary points of such a problem are M-stationary, and
a multiplier-penalty method which naturally computes approximately stationary points of the underlying problem.
Finally, necessary optimality conditions for an optimal control problem with a non-Lipschitzian sparsity-promoting term
in the objective function are established.
Approximate stationarity,
Generalized separation,
Non-Lipschitzian programming,
Optimality conditions,
Sparse control
49J52, 49J53, 49K27, 90C30, 90C48
§ INTRODUCTION
Approximate stationarity conditions, claiming that, along a convergent sequence, a classical
stationarity condition (like a multiplier rule) holds up to a tolerance which tends to zero,
have proved
to be a powerful tool in mathematical optimization throughout the last decades.
The particular interest in such conditions is based on two prominent features.
First, they often serve as
necessary optimality conditions
even in the absence of
constraint qualifications. Second, different classes of solution algorithms for the
computational treatment of optimization problems naturally produce sequences whose accumulation
points are approximately stationary.
Approximate stationarity conditions can be traced back to the early 1980s,
see [Kruger and Mordukhovich, 1980, Kruger, 1985], where they popped up
as a consequence of the famous extremal principle.
The latter geometric result, when formulated in infinite dimensions in terms of Fréchet normals,
can itself be interpreted as a kind of approximate stationarity, see [Kruger and Mordukhovich, 1980, Kruger, 2003, Mordukhovich, 2006].
In [Andreani et al., 2010, Andreani et al., 2011], this fundamental concept, which is referred
to as Approximate Karush–Kuhn–Tucker (AKKT) stationarity in these papers, has been
rediscovered due to its significant relevance in the context of numerical standard nonlinear
programming. A notable feature of AKKT-stationary points is the potential unboundedness of
the associated sequence of Lagrange multipliers. This already indicates that AKKT-stationary
points need not satisfy the classical KKT conditions. This observation gave rise to the
investigation of conditions ensuring that AKKT-stationary points actually are KKT points,
see e.g. [Andreani et al., 2016]. The resulting constraint qualifications for
the underlying nonlinear optimization problem turned out to be comparatively weak.
During the last decade, reasonable notions of approximate stationarity have been introduced for more
challenging classes of optimization problems like programs with complementarity, see [Andreani et al., 2019, Ramos, 2021],
cardinality, see [Kanzow et al., 2021], conic, see [Andreani et al., 2020],
nonsmooth, see [Helou et al., 2020, Mehlitz, 2020, Mehlitz, 2021], and geometric constraints, see [Jia et al., 2021],
in the finite-dimensional situation.
A generalization to optimization problems in abstract Banach spaces can be found in [Börgens et al., 2020].
In all these papers, the underlying optimization problem's objective function is assumed to be locally Lipschitzian.
Note that the (local) Lipschitz property of all but one of the functions involved is a key assumption in most conventional
subdifferential calculus results in infinite dimensions in convex and nonconvex settings, see e.g. the sum rules in <ref>.
However, as several prominent applications like sparse portfolio selection, compressed sensing, edge-preserving image
restoration, low-rank matrix completion, or signal processing, where the objective function is often only
lower semicontinuous, demonstrate, Lipschitz continuity
might be a restrictive property of the data.
The purpose of this paper is to provide a reasonable extension of approximate stationarity to a
rather general class of optimization problems in Banach spaces with a lower semicontinuous objective function and
generalized equation constraints generated by a set-valued mapping in order to open the topic up to the aforementioned
challenging applications.
Our general approach to a notion of approximate stationarity,
which serves as a necessary optimality condition,
is based on two major classical tools: Ekeland's variational principle, see [Ekeland, 1974], and the fuzzy calculus of Fréchet normals,
see [Ioffe, 2017, Kruger, 2003].
Another convenient ingredient of the theory is a new notion of lower semicontinuity of extended-real-valued functions relative
to a given set-valued mapping which holds for free in finite dimensions.
We illustrate our findings in the context of generalized set separation and derive a novel
extremal principle which differs from the traditional one which dates back to [Kruger and Mordukhovich, 1980]. On the one hand, its prerequisites regarding
the position of the involved sets relative to each other are slightly more restrictive than in [Kruger and Mordukhovich, 1980] when the classical notion of
extremality, meaning that the sets of interest can be “pushed apart from each other”, is used.
On the other hand, our new extremal principle covers settings where extremality is based on functions
which are just lower semicontinuous, and, thus,
applies in more general situations.
The final part of the paper is dedicated to the study of optimization problems with so-called geometric constraints,
where the feasible set equals the preimage of a closed set under a smooth transformation, whose objective function is
the sum of a smooth part and a merely lower semicontinuous part. First, we apply our concept of approximate stationarity
to this problem class in order to obtain necessary optimality conditions. Furthermore, we introduce an associated
qualification condition which guarantees
M-stationarity of approximately stationary points.
As we will show, this generalizes related considerations from [Chen et al., 2017, Guo and Ye, 2018] which were done in
a completely finite-dimensional setting.
Second, we suggest an augmented Lagrangian method for the numerical solution of geometrically constrained programs and
show that it computes approximately stationary points in our new sense. Finally, we use our theory in order to state
necessary optimality conditions for optimal control problems with a non-Lipschitzian so-called sparsity-promoting term in the
objective function, see [Ito and Kunisch, 2014, Wachsmuth, 2019], which enforces optimal controls to be zero on large
parts of the domain.
The remaining parts of the paper are organized as follows.
In <ref>, we comment on the notation which is used in this manuscript and recall some
fundamentals from variational analysis.
<Ref> is dedicated to the study of a new notion of lower semicontinuity of
an extended-real-valued function relative to a given set-valued mapping or set.
We derive necessary optimality conditions of approximate stationarity type for rather general
optimization problems in <ref>.
This is used in <ref> in order to derive a novel extremal principle
in generalized set separation.
Furthermore, we apply our findings from <ref> in <ref> in
order to state necessary optimality conditions of approximate stationarity type for optimization
problems in Banach spaces with geometric constraints and a composite objective function. Based on that, we derive a new qualification
condition ensuring M-stationarity of local minimizers, see <ref>, an augmented Lagrangian method which
naturally computes approximately stationary points, see <ref>, and necessary optimality conditions for optimal
control problems with a sparsity-promoting term in the objective function, see <ref>.
Some concluding remarks close the paper in <ref>.
§ NOTATION AND PRELIMINARIES
§.§ Basic notation
Our basic notation is standard, see e.g. [Ioffe, 2017, Mordukhovich, 2006, Rockafellar and Wets, 1998].
The symbols $\R$ and $\N$ denote the sets of all real numbers and all positive integers, respectively.
Throughout the paper, $X$ and $Y$ are
either metric or Banach spaces
(although many facts, particularly, most of the definitions in <ref>, are valid in arbitrary normed vector spaces, i.e.,
do not require the spaces to be complete).
For brevity, we use the same notations $d(\cdot,\cdot)$ and $\|\cdot\|$ for distances and norms
in all spaces.
Banach spaces are often treated as metric spaces with the distance determined by the norm
in the usual way.
The distance from a point $x\in X$ to a set $\Omega\subset X$ in a metric space $X$ is defined by
$\dist_\Omega(x):=\inf_{u\in\Omega}d(x,u)$, and we use the convention $\dist_\varnothing(x) := +\infty$.
Throughout the paper, $\overline\Omega$ and $\intr\Omega$ denote the closure and the interior of $\Omega$, respectively.
Whenever $X$ is a Banach space, $\{x_k\}_{k\in\N}\subset X$ is a sequence, and $\bar x\in X$ is some
point, we exploit $x_k\to\bar x$ ($x_k\weakly\bar x$) in order to denote the strong (weak) convergence
of $\{x_k\}_{k\in\N}$ to $\bar x$. Similarly, we use $x_k^*\weaklystar x^*$ in order to express that
a sequence $\{x_k^*\}_{k\in\N}\subset X^*$ converges weakly$^*$ to $x^*\in X^*$.
Finally, $x_k\to_\Omega\bar x$ means that $\{x_k\}_{k\in\N}\subset\Omega$ converges
strongly to $\bar x$.
In case where $X$ is a Hilbert space and $K\subset X$ is a closed, convex set, we denote by
$P_K\colon X\to X$ the projection map associated with $K$.
If $X$ is a Banach space, its topological dual is denoted by $X^*$, while
$\langle\cdot,\cdot\rangle\colon X^*\times X\to\R$
denotes the bilinear form defining the pairing between the two spaces.
If not explicitly stated otherwise, products of (primal) metric or Banach spaces are equipped
with the maximum distances or norms, e.g., $\|(x,y)\|:=\max(\|x\|,\|y\|)$ for all $(x,y)\in X\times Y$.
Note that the corresponding dual norm is the sum norm given by $\|(x^*,y^*)\|:=\|x^*\|+\|y^*\|$ for all
$(x^*,y^*)\in X^*\times Y^*$.
The open unit balls in the primal and dual spaces are denoted by $\B$ and $\B^*$, respectively,
while the corresponding closed unit balls are denoted by $\overline{\B}$ and $\overline{\B}{}^*$, respectively.
The notations $B_\delta(x)$ and $\overline{B}_\delta(x)$ stand, respectively,
for the open and closed balls with center $x$ and radius $\delta>0$ in $X$.
For an extended-real-valued function $\varphi\colon X\to\R_\infty:=\R\cup\{+\infty\}$,
its domain and epigraph are defined by
$\dom \varphi:=\{x\in X\,|\,\varphi(x)< +\infty\}$ and
$\epi\varphi:=\{(x,\mu)\in X\times\R\,|\,\varphi(x)\le\mu\}$, respectively.
For each set $\Omega\subset X$, we set $\varphi_\Omega:=\varphi+i_\Omega$
where $i_\Omega\colon X\to\R_\infty$
is the so-called indicator function of $\Omega$ which equals zero on $\Omega$
and is set to $+\infty$ on $X\setminus\Omega$.
A set-valued mapping $\Upsilon\colon X\rightrightarrows Y$ between metric spaces $X$ and $Y$ is a mapping,
which assigns to every $x\in X$ a (possibly empty) set $\Upsilon(x)\subset Y$.
We use the notations
$\gph \Upsilon:=\{(x,y)\in X\times Y\,|\,y\in \Upsilon(x)\}$,
$\Im\Upsilon:=\bigcup_{x\in X}\Upsilon(x)$,
and $\dom \Upsilon:=\{x\in X\,|\,\Upsilon(x)\ne\varnothing\}$
for the graph, the image, and the domain of $\Upsilon$, respectively.
Furthermore, $\Upsilon^{-1}\colon Y\rightrightarrows X$ given by
$\Upsilon^{-1}(y) :=\{x\in X\,|\,y\in \Upsilon(x)\}$ for all $y\in Y$ is referred to as
the inverse of $\Upsilon$.
Assuming that $\bar x\in\dom \Upsilon$ is fixed,
\[
\limsup\limits_{x\to \bar x}\Upsilon(x)
:=
\left\{
y\in Y\,\middle|\,
\exists\{(x_k,y_k)\}_{k\in\N}\subset\gph\Upsilon\colon\; x_k\to \bar x,\,y_k\to y
\right\}
\]
is referred to as the (strong) outer limit of $\Upsilon$ at $\bar x$.
Finally, if $X$ is a Banach space, for a set-valued mapping $\Xi\colon X\tto X^*$ and $\bar x\in \dom\Xi$, we use
\[
\wStarlimsup\limits_{x\to \bar x}\Xi(x)
:=
\left\{
x^*\in X^*\,\middle|\,
\exists\{(x_k,x_k^*)\}_{k\in\N}\subset\gph\Xi\colon\; x_k\to \bar x,\,x_k^*\weaklystar x^*
\right\}
\]
in order to denote the outer limit of $\Xi$ at $\bar x$ when equipping $X^*$ with
the weak$^*$ topology.
Let us note that both outer limits from above are limits in the sense of Painlevé–Kuratowski.
Recall that a Banach space is a so-called Asplund space if every continuous, convex function
on an open convex set is Fréchet differentiable on a dense subset, or equivalently,
if the dual of each separable subspace is separable as well.
We refer the reader to [Phelps, 1993, Mordukhovich, 2006] for discussions about and characterizations
of Asplund spaces.
We would like to note that all reflexive, particularly, all finite-dimensional Banach spaces
possess the Asplund property.
§.§ Variational analysis
The subsequently introduced notions of variational analysis and generalized differentiation are
standard, see e.g. [Kruger, 2003, Mordukhovich, 2006].
Given a subset $\Omega$ of a Banach space $X$, a point $\bar x\in\Omega$, and a number $\eps\ge0$,
the nonempty, closed, convex set
\begin{equation}\label{eq:eps_normals}
N_{\Omega,\eps}(\bar x)
:=
\left\{x^\ast\in X^\ast\,\middle|\,
\limsup_{x\to_\Omega\bar x,\,x\neq\bar x}
\frac {\langle x^\ast,x-\bar x\rangle}{\norm{x-\bar x}} \leq\eps
\right\}
\end{equation}
is the set of $\eps$-normals to $\Omega$ at $\bar x$.
In case $\eps=0$, it is a closed, convex cone called
Fréchet normal cone to $\Omega$ at $\bar x$.
In this case, we drop the subscript $\eps$ in the above notation and simply write
\begin{align*}
N_{\Omega}(\bar x)
:=
\left\{x^\ast\in X^\ast\,\middle|\,
\limsup_{x\to_\Omega\bar x,\,x\neq\bar x}
\frac{\langle x^\ast,x-\bar x\rangle}{\norm{x-\bar x}} \leq 0
\right\}.
\end{align*}
Based on (<ref>), one can define the more robust
limiting normal cone to $\Omega$ at $\bar x$
by means of a limiting procedure:
\begin{align*}
\overline{N}_{\Omega}(\bar x)
:=
\wStarlimsup\limits_{x\to_\Omega\bar x,\,\eps\downarrow 0}
N_{\Omega,\eps}(x).
\end{align*}
Whenever $X$ is an Asplund space, the above definition admits the following simplification:
\begin{equation*}
\overline{N}_{\Omega}(\bar x)= \wStarlimsup\limits_{x\to_\Omega\bar x} N_{\Omega}(x).
\end{equation*}
If $\Omega$ is a convex set, the Fréchet and limiting normal cones reduce to the normal cone
in the sense of convex analysis, i.e.,
\begin{align*}
N_{\Omega}(\bar x)
=
\overline N_{\Omega}(\bar x)
=
\left\{x^*\in X^*\,\middle|\,\langle x^*,x-\bar x \rangle \leq 0 \,\forall x\in \Omega\right\}.
\end{align*}
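For illustration, take $X:=\R^2$ and $\Omega:=\R^2_-:=\{x\in\R^2\,|\,x_1\le0,\,x_2\le0\}$. Then the formula above yields
\begin{align*}
N_{\Omega}(0,0)=\R^2_+,
\qquad
N_{\Omega}(0,-1)=\R_+\times\{0\},
\end{align*}
i.e., only the constraints which are active at the reference point contribute to the normal cone.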
For a lower semicontinuous function $\varphi\colon X\to\R_{\infty}$, defined on a Banach space $X$,
its Fréchet subdifferential at $\bar x\in\dom \varphi$ is defined as
\begin{equation*}
\begin{aligned}
\partial \varphi(\bar x)
:=
\left\{x^*\in X^*\,\middle|\, \liminf_{x\to\bar x,\,x\neq\bar x}
\frac{\varphi(x)-\varphi(\bar x)-\langle x^*,x-\bar x\rangle}{\norm{x-\bar x}}\geq 0
\right\}\\
=
\left\{x^*\in X^*\,\middle|\, (x^*,-1)\in N_{\epi \varphi}(\bar x,\varphi(\bar x))
\right\}.
\end{aligned}
\end{equation*}
The limiting and singular limiting subdifferential of $\varphi$ at $\bar x$ are defined, respectively,
by means of
\begin{align*}
\bsd \varphi(\bar x)
:=
\left\{x^*\in X^*\,\middle|\, (x^*,-1)\in \overline{N}_{\epi \varphi}(\bar x,\varphi(\bar x))
\right\},\\
\bsd^\infty \varphi(\bar x)
:=
\left\{x^*\in X^*\,\middle|\, (x^*,0)\in \overline{N}_{\epi \varphi}(\bar x,\varphi(\bar x))
\right\}.
\end{align*}
Note that in case where $X$ is an Asplund space, we have
\begin{align*}
\bsd \varphi(\bar x)
=
\wStarlimsup\limits_{x\to\bar x,\,\varphi(x)\to\varphi(\bar x)}
\partial\varphi(x),\\
\bsd^\infty\varphi(\bar x)
=
\wStarlimsup\limits_{x\to\bar x,\,\varphi(x)\to\varphi(\bar x),\,t\downarrow 0}
t\partial\varphi(x),
\end{align*}
see <cit.>.
If $\varphi$ is convex, the Fréchet and limiting subdifferential
reduce to the subdifferential in the sense of convex analysis, i.e.,
\begin{align*}
\partial\varphi(\bar x)
=
\bsd\varphi(\bar x)
=
\left\{x^*\in X^*\,\middle|\,
\varphi(x)-\varphi(\bar x)-\langle{x}^*,x-\bar x\rangle\ge 0\,\forall x\in X
\right\}.
\end{align*}
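For instance, for $X:=\R$ and $\varphi:=|\cdot|$, this formula yields $\partial\varphi(0)=\bsd\varphi(0)=[-1,1]$, while $\partial\varphi(\bar x)=\bsd\varphi(\bar x)=\{\operatorname{sgn}\bar x\}$ whenever $\bar x\ne0$.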
By convention, we set
$N_{\Omega}(x)=\overline{N}_{\Omega}(x):=\varnothing$ if $x\notin\Omega$ and
$\partial{\varphi}(x)=\bsd{\varphi}(x)=\bsd^\infty{\varphi}(x):=\varnothing$ if $x\notin\dom \varphi$.
It is easy to check that $N_{\Omega}(\bar x)=\partial i_\Omega(\bar x)$
and $\overline{N}_{\Omega}(\bar x)=\bsd i_\Omega(\bar x)$.
For a set-valued mapping $\Upsilon\colon X\rightrightarrows Y$ between Banach spaces,
its Fréchet coderivative at $(\bar x,\bar y)\in\gph \Upsilon$ is defined as
\begin{align*}
\forall y^*\in Y^*\colon\quad
{D}^*\Upsilon(\bar x,\bar y)(y^*):=
\left\{x^*\in X^*\,\middle|\, (x^*,-y^*)\in N_{\gph \Upsilon}(\bar x,\bar y)
\right\}.
\end{align*}
The proof of our main result <ref> relies on certain fundamental results of variational analysis:
Ekeland's variational principle,
see e.g. <cit.> or [Ekeland, 1974],
and two types of subdifferential sum rules
which address the subdifferential in the sense of convex analysis, see e.g. <cit.>, and the
Fréchet subdifferential, see e.g. <cit.>.
Below, we provide these results for completeness.
Let $X$ be a complete metric space,
$\varphi\colon X\to\R_{\infty}$ be lower semicontinuous and bounded from below,
$\bx\in\dom \varphi$, and $\varepsilon>0$.
Then there exists a point $\hat x\in X$ which satisfies the following conditions:
* $\varphi(\hat x)\le \varphi(\bx)$;
* $\forall x\in X\colon\quad \varphi(x)+\varepsilon d(x,\hat x)\ge \varphi(\hat x)$.
Let $X$ be a Banach space,
$\varphi_1,\varphi_2\colon X\to\R_\infty$,
and $\bar x\in\dom \varphi_1\cap\dom \varphi_2$.
Then the following assertions hold.
Convex sum rule.
Let $\varphi_1$ and $\varphi_2$ be convex, and $\varphi_1$ be continuous at a point in $\dom \varphi_2$.
Then $\partial(\varphi_1+\varphi_2)(\bar x)=\partial \varphi_1(\bar x)+\partial \varphi_2(\bar x)$.
Fuzzy sum rule.
Let $X$ be Asplund, $\varphi_1$ be Lipschitz continuous around $\bar x$,
and $\varphi_2$ be lower semicontinuous in a neighborhood of $\bar x$.
Then, for each $x^*\in\partial(\varphi_1+\varphi_2)(\bar x)$ and $\varepsilon>0$,
there exist $x_1,x_2\in X$
with $\norm{x_i-\bar x}<\varepsilon$ and $|\varphi_i(x_i)-\varphi_i(\bar x)|<\varepsilon$,
$i=1,2$, such that
$x^*\in\partial \varphi_1(x_1) +\partial \varphi_2(x_2)+\varepsilon\B^\ast$.
We will need representations of the subdifferentials of the distance function collected in the
next lemma. These results are taken from
<cit.>, <cit.>, and <cit.>.
Let $X$ be a Banach space, $\Omega\subset X$ be nonempty and closed, and
$\bx\in X$. Then the following assertions hold.
If $\bar x\in\Omega$, then $\partial\dist_\Omega(\bar x)=N_\Omega(\bar x)\cap\overline{\B}{}^*$.
If $\bar x\notin\Omega$ and either $X$ is Asplund or $\Omega$ is convex, then,
for each $x^*\in\partial\dist_\Omega(\bar x)$ and each $\eps>0$, there exist
$x\in\Omega$ and $u^*\in N_\Omega(x)$
such that $\norm{x-\bar x}<\dist_\Omega(\bar x)+\varepsilon$
and $\norm{x^*-u^*}<\varepsilon$.
Let us briefly mention that assertion <ref> of <ref>
can obviously be improved when the set of projections of $\bar x$ onto $\Omega$ is nonempty,
see <cit.>.
This is always the case if $\Omega$ is a nonempty, closed, convex subset of a reflexive Banach space,
since in this case $\Omega$ is weakly sequentially compact while the norm is weakly sequentially lower semicontinuous.
The conditions in the final definition of this subsection are standard,
see e.g. [Klatte and Kummer, 2002, Kruger, 2009].
Let $X$ be a metric space, $\varphi\colon X\to\R_\infty$, and $\bx\in\dom \varphi$.
* We call $\bx$ a stationary point of $\varphi$ if
$\liminf_{ x\to\bx,\,x\neq\bar x}\frac{\varphi(x)-\varphi(\bx)}{d(x,\bx)}\geq 0$.
* Let $\eps>0$ and $U\subset X$ with $\bar x\in U$.
We call $\bx$ an $\eps$-minimal point of $\varphi$ on $U$ if
$\inf_{x\in U}\varphi(x)>\varphi(\bx)-\eps$.
If $U=X$, $\bar x$ is called a globally $\varepsilon$-minimal point of $\varphi$.
In the subsequent remark, we interrelate the concepts from <ref>.
For a metric space $X$, $\varphi\colon X\to\R_\infty$, and $\bx\in\dom \varphi$, the following assertions hold.
* If $\bar x$ is a local minimizer of $\varphi$, then it is a stationary point of $\varphi$.
* If $\bar x$ is a stationary point of $\varphi$, then, for each $\varepsilon>0$
and each sufficiently small $\delta>0$,
$\bar x$ is an $\varepsilon\delta$-minimal point of $\varphi$
on $B_\delta(\bar x)$.
* If $X$ is a normed
space, then $\bar x$ is a stationary point of $\varphi$ if and only if $0\in\sd\varphi(\bar x)$.
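For illustration, consider $X:=\R$ and $\varphi(x):=x^3$ for all $x\in\R$. Then $\bar x:=0$ is a stationary point of $\varphi$ since $\varphi(x)/|x|\to0$ as $x\to0$, i.e., $0\in\sd\varphi(\bar x)$, but $\bar x$ is clearly not a local minimizer of $\varphi$. Hence, the converse of the first assertion above fails in general.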
§ NOVEL NOTIONS OF SEMICONTINUITY
In this paper, we exploit new notions of lower semicontinuity of extended-real-valued functions
relative to a given set-valued mapping or set.
Here, we first introduce the concepts of interest before
studying their properties and presenting sufficient conditions for their validity.
§.§ Lower semicontinuity of a function relative to a set-valued mapping or set
Let us start with the definition of the property of our interest.
Fix metric spaces $X$ and $Y$,
$\Phi\colon X\rightrightarrows Y$,
$\varphi\colon X\to\R_\infty$,
and $\by\in Y$.
Let a subset $U\subset X$ be such that $U\cap\Phi^{-1}(\by)\cap\dom\varphi\ne\varnothing$.
The function $\varphi$ is
lower semicontinuous on $U$ relative to $\Phi$ at $\by$ if
\begin{equation}
\label{eq:estimate_lsc_wrt_set_valued_map}
\inf_{u\in \Phi^{-1}(\by)\cap U}\varphi(u)
\le
\inf_{\substack{U'+\rho\B\subset U,\\\rho>0}}
\liminf_{\substack{x\in U',\,y\to\by,\\
\dist_{\gph \Phi}(x,y)\to0}}\varphi(x).
\end{equation}
Let $\bx\in\Phi^{-1}(\by)\cap\dom\varphi$.
The function $\varphi$ is
lower semicontinuous near $\bx$ relative to $\Phi$ at $\by$ if there is a $\de_0>0$ such that, for each $\de\in(0,\de_0)$,
$\varphi$ is lower semicontinuous on $\overline{B}_\de(\bx)$ relative to $\Phi$ at $\by$.
Inequality (<ref>) can be strict, see <ref> below.
Note that whenever (<ref>) holds with a subset $U\subset X$, it also holds with $\overline{U}$ in place of $U$.
The converse implication is not true in general, see <ref> below.
Particularly, a function which is lower semicontinuous on a set $U$ relative to $\Phi$ at $\bar y$ may fail to have this property on a smaller set.
This shortcoming explains the idea behind <ref> <ref>.
Furthermore, we have the following result.
Fix metric spaces $X$ and $Y$,
$\Phi\colon X\rightrightarrows Y$,
$\varphi\colon X\to\R_\infty$,
$(\bx,\by)\in\gph\Phi$, and a subset $U\subset X$ with $\bx\in U\cap\dom\varphi$.
Assume that $\bx$ is a minimizer of $\varphi$ on $U$.
If $\varphi$ is lower semicontinuous on $U$ relative to $\Phi$ at $\bar y$, then it is
lower semicontinuous on $\widehat U$ relative to $\Phi$ at $\bar y$ for each subset $\widehat U$ satisfying $\bx\in\widehat U\subset U$.
For each subset $\widehat U$ satisfying $\bx\in\widehat U\subset U$, we find
\begin{align*}
\inf\limits_{u\in\Phi^{-1}(\by)\cap\widehat U}\varphi(u)
\le
\varphi(\bar x)
=
\inf\limits_{u\in \Phi^{-1}(\by)\cap U}\varphi(u)
\\
\le
\inf_{\substack{U'+\rho\B\subset U,\\\rho>0}}
\liminf_{\substack{x\in U',\,y\to\by,\\
\dist_{\gph \Phi}(x,y)\to0}}\varphi(x)
\le
\inf_{\substack{U'+\rho\B\subset\widehat U,\\\rho>0}}
\liminf_{\substack{x\in U',\,y\to\by,\\
\dist_{\gph \Phi}(x,y)\to0}}\varphi(x),
\end{align*}
which shows the claim.
The properties in the next definition
are particular cases of the ones in <ref>,
corresponding to the set-valued mapping $\Phi\colon X\rightrightarrows Y$ whose graph is given by
$\gph \Phi:=\Omega\times Y$, where
$\Omega\subset X$ is a fixed set and $Y$ can be an arbitrary metric space, e.g., one can take $Y:=\R$.
Observe that in this case, $\Phi^{-1}(y)=\Omega$ is valid for all $y\in Y$.
Fix a metric space $X$,
$\varphi\colon X\to\R_\infty$,
and $\Omega\subset X$.
Let a subset $U\subset X$ be such that $U\cap\Omega\cap\dom\varphi\ne\varnothing$.
The function $\varphi$ is
lower semicontinuous on $U$ relative to $\Omega$ if
\begin{equation}
\label{eq:lower_semicontinuity_relative_to_set-1}
\inf_{u\in\Omega\cap U}\varphi(u)
\le
\inf_{\substack{U'+\rho\B\subset U,\\\rho>0}}
\liminf_{\substack{x\in U',\\
\dist_{\Omega}(x)\to0}}\varphi(x).
\end{equation}
Let $\bx\in\Omega\cap\dom\varphi$.
The function $\varphi$ is
lower semicontinuous near $\bx$ relative to $\Omega$ if there is a $\de_0>0$ such that, for each $\de\in(0,\de_0)$,
$\varphi$ is lower semicontinuous on $\overline{B}_\de(\bx)$ relative to $\Omega$.
The subsequent example shows that (<ref>) can be strict.
Consider the lower semicontinuous function $\varphi\colon\R\to\R$ given by
$\varphi(x):=0$ if $x\leq 0$ and
$\varphi(x):=1$ if $x> 0$, and the sets $\Omega=U:=[0,1]\subset\R$.
Then $\inf_{u\in\Omega\cap U}\varphi(u)=0$, while if a subset $U'$ satisfies $U'+\rho\B\subset U$ for some $\rho>0$,
then $U'\subset(0,1)$, and consequently $\varphi(x)=1$ for all $x\in U'$.
Hence, the right-hand side of (<ref>) equals $1$.
A function which is lower semicontinuous on a set $U$ relative to $\Omega$ may fail to have this property on a smaller set.
Consider the function $\varphi\colon\R\to\R$ given by
$\varphi(x):=0$ if $x\leq 0$, and
$\varphi(x):=-1$ if $x> 0$,
the set $\Omega:=\{0,1\}\subset\R$, and the point $\bar x:=0$.
Consider the closed interval $U_1:=[-1,1]$.
We find $\inf_{u\in\Omega\cap U_1}\varphi(u)=-1$ which is the global minimal value of $\varphi$ on $\R$.
Hence, $\varphi$ is lower semicontinuous on $U_1$ relative to $\Omega$ by <ref>.
For $U_2:=(-1,1)$, we find $\inf_{u\in \Omega\cap U_2}\varphi(u)=0$.
Moreover, choosing $U':=(-1/2,1/2)$ and $x_k:=1/(k+2)$ for each $k\in\N$, we find $U'+\tfrac12\mathbb B\subset U_2$,
$\{x_k\}_{k\in\N}\subset U'$, $d(x_k,\bar x)\to 0$, and $\varphi(x_k)\to-1$, i.e., $\varphi$ is not lower semicontinuous
on $U_2$ relative to $\Omega$ by definition.
Note that $\bar x$ is a local minimizer of $\varphi$ on $\Omega$ but not on $U_1$ or $U_2$.
In the next two statements, we present
sequential characterizations of the properties from <ref> <ref>
and <ref> <ref>.
Fix metric spaces $X$ and $Y$,
$\Phi\colon X\rightrightarrows Y$,
$\varphi\colon X\to\R_\infty$,
$\by\in Y$, and a subset $U\subset X$ with $U\cap\Phi^{-1}(\by)\cap\dom\varphi\ne\varnothing$.
Then $\varphi$ is lower semicontinuous on $U$ relative to $\Phi$ at $\by$ if
and only if
\begin{align*}
\inf_{u\in \Phi^{-1}(\by)\cap U}\varphi(u)
\le
\liminf_{k\to+\infty}\varphi(x_k)
\end{align*}
for all sequences $\{(x_k,y_k)\}_{k\in\N}\subset X\times Y$ satisfying
$\dist_{\gph \Phi}(x_k,y_k)\to0$, $y_k\to\by$, and $\{x_k\}_{k\in\N}
+\rho\B\subset U$ for some $\rho>0$.
We need to show that the right-hand side of (<ref>) equals the infimum over all numbers $\liminf_{k\to+\infty}\varphi(x_k)$
where the sequence $\{(x_k,y_k)\}_{k\in\N}\subset X\times Y$ needs to satisfy
$\dist_{\gph \Phi}(x_k,y_k)\to0$, $y_k\to\by$,
and $\{x_k\}_{k\in\N}+\rho\B\subset U$ for some $\rho>0$.
Let $\{(x_k,y_k)\}_{k\in\N}$ be such a sequence.
\begin{align*}
\inf_{\substack{U'+\rho\B\subset U,\\\rho>0}}
\liminf_{\substack{x\in U',\,y\to\by,\\
\dist_{\gph \Phi}(x,y)\to0}}
\varphi(x)
\le
\liminf_{\substack{x\in \{x_k\}_{k\in\N},\,y\to\by,\\\dist_{\gph \Phi}(x,y)\to0}}
\varphi(x)
\le
\liminf_{k\to+\infty}\varphi(x_k).
\end{align*}
Conversely, let the right-hand side of (<ref>) be finite, and choose $\eps>0$ arbitrarily.
Then there exist a subset $\widehat U\subset U$ and a number $\hat\rho>0$ such that $\widehat U+\hat\rho\B\subset U$ and
\begin{align*}
\liminf_{k\to+\infty}
\inf_{\substack{x\in\widehat U,\,d(y,\by)<\frac1k,\\\dist_{\gph \Phi}(x,y)<\frac1k}}\varphi(x)
=
\liminf_{\substack{x\in\widehat U,\,y\to\by,\\ \dist_{\gph \Phi}(x,y)\to0}}
\varphi(x)
\le
\inf_{\substack{U'+\rho\B\subset U,\\\rho>0}}
\liminf_{\substack{x\in U',\,y\to\by,\\
\dist_{\gph \Phi}(x,y)\to0}}
\varphi(x)
+\eps.
\end{align*}
For each $k\in\N$ such that $\inf_{{x\in\widehat U,\,d(y,\by)<\frac1k,\,\dist_{\gph \Phi}(x,y)<\frac1k}}\varphi(x)$ is finite,
there is a tuple $(x_k,y_k)\in X\times Y$ such that $x_k\in\widehat U$, $d(y_k,\by)<1/k$,
$\dist_{\gph \Phi}(x_k,y_k)<1/k$, and
\begin{align*}
\varphi(x_k)
\le
\inf_{\substack{x\in\widehat U,\,d(y,\by)<\frac1k,\\\dist_{\gph \Phi}(x,y)<\frac1k}}
\varphi(x)+\frac1k.
\end{align*}
Considering the tail of the sequences, if necessary, we have $\{x_k\}_{k\in\N}+\hat\rho\B\subset U$,
$\dist_{\gph \Phi}(x_k,y_k)\to0$, and
\begin{align*}
\liminf_{k\to+\infty}\varphi(x_k)
\le
\inf_{\substack{U'+\rho\B\subset U,\\\rho>0}}
\liminf_{\substack{x\in U',\,y\to\by,\\
\dist_{\gph \Phi}(x,y)\to0}}
\varphi(x)
+\eps.
\end{align*}
As the number $\eps$ has been chosen arbitrarily, this proves the converse part in the present setting.
If the right-hand side of (<ref>) equals $-\infty$, then for each $M>0$, we find a subset $\widehat U\subset U$
and a number $\hat\rho>0$ such that $\widehat U+\hat\rho\B\subset U$ and
\begin{align*}
\liminf\limits_{\substack{x\in\widehat U,\,y\to\by,\\\dist_{\gph \Phi}(x,y)\to 0}}\varphi(x)
< -M.
\end{align*}
Hence, there is a sequence $\{(x_k,y_k)\}_{k\in\N}\subset X\times Y$ such that
$\{x_k\}_{k\in\N}+\hat\rho\B\subset U$, $y_k\to\bar y$, and $\dist_{\gph \Phi}(x_k,y_k)\to 0$
as $k\to+\infty$ while $\liminf_{k\to+\infty}\varphi(x_k)<-M$. Taking the infimum over all
$M>0$ now completes the proof of the assertion.
Let $X$ be a metric space, $\varphi\colon X\to\R_\infty$, and $\Omega,U\subset X$ be sets with $\Omega\cap U\cap\dom\varphi\ne\varnothing$.
Then $\varphi$ is lower semicontinuous on $U$ relative to $\Omega$ if and only if
\begin{equation}
\label{eq:sequential_characterization_lsc}
\inf_{u\in\Omega\cap U}\varphi(u)
\le
\liminf_{k\to+\infty}\varphi(x_k)
\end{equation}
for all sequences $\{x_k\}_{k\in\N}\subset X$ satisfying
$\dist_{\Omega}(x_k)\to0$ and $\{x_k\}_{k\in\N}+\rho\B\subset U$ for some $\rho>0$.
§.§ Sufficient conditions for lower semicontinuity of a function relative to a set-valued mapping
As we will demonstrate below,
the property from <ref> <ref> is valid whenever the involved function $\varphi$
and the set-valued mapping $\Phi$ enjoy certain
semicontinuity properties, i.e., it can be decomposed into two independent properties regarding the
two main
data objects.
This will be beneficial in order to identify scenarios where
the new concept applies.
The upper semicontinuity properties of a set-valued mapping that we
state in the following two definitions seem to fit well for this purpose
(in combination with the corresponding lower semicontinuity properties of a function).
Fix metric spaces $X$ and $Y$,
$S\colon Y\rightrightarrows X$, and $\by\in \dom S$.
The mapping $S$ is
upper semicontinuous at $\by$ if
\begin{align*}
\lim_{x\in S(y),\,y\to\by}\dist_{S(\by)}(x)=0.
\end{align*}
Fix a Banach space $X$, a metric space $Y$, $S\colon Y\rightrightarrows X$,
and $\by\in \dom S$.
The mapping $S$ is
partially weakly sequentially upper semicontinuous at $\by$
if $x\in S(\by)$
holds for each sequence $\{(y_k,x_k)\}_{k\in\N}\subset\gph S$ which satisfies $y_k\to\by$ and $x_k\weakly x$.
For a discussion of the property in
<Ref>, we refer the reader to <cit.>.
The property in <ref> can be interpreted as the usual sequential upper semicontinuity
if $X$ is equipped with the weak topology.
In case where $Y$ is a Banach space, this property is inherent whenever the graph of the
underlying set-valued mapping is weakly sequentially closed which is naturally given whenever the latter is convex and closed.
Obviously, each closed-graph set-valued mapping with a finite-dimensional image space is partially weakly sequentially
upper semicontinuous.
Fix metric spaces $X$ and $Y$, $\Phi\colon X\rightrightarrows Y$, and $\varphi\colon X\to\R_\infty$.
Let $\by\in Y$ and a subset $U\subset X$ with $U\cap\Phi^{-1}(\by)\cap\dom\varphi\ne\varnothing$
be arbitrarily chosen.
Define $S\colon Y\rightrightarrows X$ by $S(y):=\Phi^{-1}(y)\cap U$ for all $y\in Y$.
If one of the following criteria holds, then
$\varphi$ is lower semicontinuous on $U$ relative to $\Phi$ at $\by$:
* $\varphi$ is lower semicontinuous on $U$ relative to $\Phi^{-1}(\by)$ and
$S$ is upper semicontinuous at $\by$;
* $X$ is a reflexive Banach space,
$U$ is closed and convex,
$\varphi$ is weakly sequentially lower semicontinuous on $U$, and $S$ is
partially weakly sequentially upper semicontinuous at $\by$.
Let a sequence $\{(x_k,y_k)\}_{k\in\N}\subset X\times Y$ satisfying
$\dist_{\gph \Phi}(x_k,y_k)\to0$, $y_k\to\by$,
and $\{x_k\}_{k\in\N}+\rho\B\subset U$ for some $\rho>0$ be arbitrarily chosen.
There exists a sequence $\{(x'_k,y'_k)\}_{k\in\N}\subset\gph \Phi$ such that $d((x'_k,y'_k),(x_k,y_k))\to0$.
Hence, $y'_k\to\by$ and, for all sufficiently large $k\in\N$, we have $d(x'_k,x_k)<\rho$, and, consequently, $x'_k\in U$.
* By <ref>,
$\dist_{\Phi^{-1}(\by)}(x_k)\to0$ and, by <ref>,
inequality (<ref>) holds, where $\Omega:=\Phi^{-1}(\by)$.
* Passing to a subsequence (without relabeling), we can assume $x_k\weakly\hat x$ for some $\hat x\in\clconv\{x_k\}_{k\in\N}\subset U$
since $\{x_k\}_{k\in\N}$ is a bounded sequence of a reflexive Banach space and $U$ is convex as well as closed.
Hence, we find $\varphi(\hat x)\le\liminf_{k\to+\infty}\varphi(x_k)$
by weak sequential lower semicontinuity of $\varphi$.
Obviously, we have $x'_k\weakly\hat x$.
By <ref>,
$\hat x\in \Phi^{-1}(\by)$ holds true.
Since $\hat x\in\Phi^{-1}(\by)\cap U$, this gives
$\inf_{u\in \Phi^{-1}(\by)\cap U}\varphi(u)\le
\varphi(\hat x)\le\liminf_{k\to+\infty}\varphi(x_k)$.
As the sequence
$\{(x_k,y_k)\}_{k\in\N}$ has been chosen arbitrarily, the conclusion follows from <ref>.
The next assertion is an immediate consequence of <ref>
with the conditions from <ref>.
Fix a reflexive Banach space $X$, a closed and convex set $U\subset X$,
$\varphi\colon X\to\R_\infty$ which is weakly sequentially lower semicontinuous on $U$,
$\Phi\colon X\tto Y$ where $Y$ is another Banach space, and
some $\by\in Y$ such that $U\cap\Phi^{-1}(\by)\cap\dom\varphi\neq\varnothing$.
Then $\varphi$ is lower semicontinuous on $U$ relative to $\Phi$ at $\by$ provided that one
of the following conditions is satisfied:
* $\gph\Phi\cap(U\times Y)$ is weakly sequentially closed;
* $X$ is finite-dimensional and $\gph\Phi\cap(U\times Y)$ is closed.
Particularly, whenever $\bar x\in\Phi^{-1}(\by)\cap\dom\varphi$ is fixed,
$\varphi$ is weakly sequentially lower semicontinuous, and either $\gph\Phi$
is weakly sequentially closed or $\gph\Phi$ is closed while $X$ is finite-dimensional,
then $\varphi$ is lower semicontinuous near $\bx$ relative to $\Phi$ at $\by$.
In the upcoming subsections, we discuss sufficient conditions for the semicontinuity properties of a set-valued mapping and
an extended-real-valued function appearing in the conditions <ref> of <ref>.
§.§ Sufficient conditions for lower semicontinuity of a function relative to a set
In the statement below, we present some simple situations where a function is lower semicontinuous
relative to a set in the sense of <ref> <ref>.
Let $X$ be a metric space, $\varphi\colon X\to\R_\infty$, and $\Omega,U\subset X$ be sets with $\Omega\cap U\cap\dom\varphi\ne\varnothing$.
Then $\varphi$ is lower semicontinuous on $U$ relative to $\Omega$ provided that one of the following
conditions is satisfied:
* $U\subset\Omega$;
* $\Omega\cap U=\{\bar x\}$, and
$\varphi$ is lower semicontinuous at $\bar x$;
* $\bar x\in\Omega\cap U$ is a
minimizer of $\varphi$ on $U$;
* $\varphi$ is uniformly continuous on $U$.
Under each of the conditions (a), (b), and (c), the conclusion is straightforward since inequality (<ref>)
is an immediate consequence of the following simple relations, respectively, which hold for any $U'\subset U$:
* $\inf\limits_{u\in\Omega\cap U}\varphi(u)=\inf\limits_{u\in U}\varphi(u)$,
$\liminf\limits_{\substack{x\in U',\,
\dist_{\Omega}(x)\to0}}\varphi(x)=\inf\limits_{x\in U'}\varphi(x) \ge\inf\limits_{x\in U}\varphi(x)$;
* $\inf\limits_{u\in\Omega\cap U}\varphi(u)=\varphi(\bx)$,
$\liminf\limits_{\substack{x\in U',\,
\dist_{\Omega}(x)\to0}}\varphi(x)
\ge\liminf\limits_{x\to\bx}\varphi(x)\ge\varphi(\bx)$;
* $\inf\limits_{u\in\Omega\cap U}\varphi(u)=\varphi(\bx)$,
$\liminf\limits_{\substack{x\in U',\,
\dist_{\Omega}(x)\to0}}\varphi(x) \ge\varphi(\bx)$.
It remains to prove the claim under
condition (d).
Let a number $\varepsilon>0$ be arbitrarily chosen.
Let a subset $U'\subset X$ and a number $\rho>0$ be such that $U'+\rho\B\subset U$.
By (d), there is a $\de>0$ such that
\[
\forall x,x'\in U\colon\quad
d(x,x')<\de\quad\Longrightarrow\quad |\varphi(x)-\varphi(x')|<\varepsilon.
\]
Let a point $x\in U'$ satisfy $\dist_\Omega(x)<\de':=\min(\rho,\de)$.
Then there is a point $x'\in\Omega$ satisfying $d(x,x')<\de'$.
Hence, $x,x'\in U$, $d(x,x')<\de$, and, consequently, $|\varphi(x)-\varphi(x')|<\varepsilon$.
Thus, we have
$\inf_{u\in\Omega\cap U}\varphi(u)\leq\varphi(x')
<\varphi(x)+\varepsilon$, and, consequently,
\begin{align*}
\inf_{u\in\Omega\cap U}\varphi(u)
\le
\liminf_{{x\in U',\,
\dist_{\Omega}(x)\to0}}\varphi(x)+\varepsilon.
\end{align*}
Since $\eps>0$ and $U'$ have been chosen arbitrarily, taking the infimum over $U'$ on the right-hand side of the last inequality and letting $\eps\downarrow0$, we arrive at (<ref>).
As a corollary, we obtain sufficient conditions for the lower semicontinuity property from
<ref> <ref>.
Let $X$ be a metric space, $\varphi\colon X\to\R_\infty$, $\Omega\subset X$, and $\bx\in\Omega\cap\dom\varphi$.
Then $\varphi$ is lower semicontinuous near $\bx$ relative to $\Omega$ provided that one of the following
conditions is satisfied:
* $\bar x\in\intr\Omega$;
* $\bar x$ is an isolated point of $\Omega$, and
$\varphi$ is lower semicontinuous at $\bar x$;
* $\bar x$ is an (unconditional) local minimizer of $\varphi$;
* $\varphi$ is uniformly continuous near $\bx$.
It follows from <ref> <ref>
that each locally Lipschitz function is lower semicontinuous near a reference point relative
to any set containing this point.
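Let us point out (by means of an ad-hoc example) that the locality in this observation is essential.
Consider $X:=\R^2$ equipped with the Euclidean metric, $\Omega:=\R\times\{0\}$, $U:=\R^2$,
and $\varphi(x,y):=(xy-1)^2$ for all $(x,y)\in\R^2$.
Then $\inf_{u\in\Omega\cap U}\varphi(u)=1$ while, for $x_k:=(k,1/k)$, we have
$\dist_\Omega(x_k)=1/k\to0$, $\{x_k\}_{k\in\N}+\rho\B\subset U$ for every $\rho>0$, and $\varphi(x_k)=0$ for all $k\in\N$.
Hence, the smooth (thus, locally Lipschitz) function $\varphi$ is not lower semicontinuous on all of $U$ relative to $\Omega$,
although it is lower semicontinuous near each point of $\Omega$ relative to $\Omega$.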
The subsequent result can be directly distilled from <ref>.
Fix a reflexive Banach space $X$, a closed and convex set $U\subset X$, and $\varphi\colon X\to\R_\infty$
which is weakly sequentially lower semicontinuous on $U$.
Let $\Omega\subset X$ be chosen such that $\Omega\cap U\cap\dom\varphi\neq\varnothing$ while $\Omega\cap U$ is weakly
sequentially closed. Then $\varphi$ is lower semicontinuous on $U$ relative to $\Omega$.
As a corollary, we obtain the subsequent result.
Fix a reflexive Banach space $X$, $\varphi\colon X\to\R_\infty$
which is weakly sequentially lower semicontinuous, and a weakly sequentially closed set $\Omega\subset X$.
Then, for each $\bar x\in\Omega\cap\dom\varphi$, $\varphi$ is lower semicontinuous near $\bar x$ relative to $\Omega$.
Note that whenever $X$ is finite-dimensional, $\varphi\colon X\to\R_\infty$ is lower semicontinuous, and $\Omega\subset X$
is closed, then the assumptions of <ref>
hold trivially.
The following statement shows that lower semicontinuity relative to a set is preserved under decoupled sums.
Fix $n\in\N$ with $n\geq 2$.
For each $i\in\{1,\ldots,n\}$, let $X_i$ be a metric space, $\varphi_i\colon X_i\to\R_\infty$, $\Omega_i,U_i\subset X_i$,
and $\Omega_i\cap U_i\cap\dom\varphi_i\ne\varnothing$.
Suppose that $\varphi_i$ is lower semicontinuous on $U_i$
relative to $\Omega_i$.
Then $\varphi\colon X_1\times\ldots\times X_n\to\R_\infty$ given by
\[
\forall (x_1,\ldots,x_n)\in X_1\times\ldots\times X_n\colon\quad
\varphi(x_1,\ldots,x_n):=\varphi_1(x_1)+\ldots+\varphi_n(x_n)
\]
is lower semicontinuous on $U:=U_1\times\ldots\times U_n$
relative to $\Omega:=\Omega_1\times\ldots\times\Omega_n$.
The assertion is a direct consequence of <ref> <ref>.
More precisely, we find
\begin{align*}
\inf_{u\in\Omega\cap U}\varphi(u)
&=\sum_{i=1}^n\inf_{u_i\in\Omega_i\cap U_i}\varphi_i(u_i)
\le\sum_{i=1}^n
\inf_{\substack{U_i'+\rho_i\B\subset U_i,\\\rho_i>0}}
\liminf_{\substack{x_i\in U_i',\\
\dist_{\Omega_i}(x_i)\to0}}\varphi_i(x_i)
\\&
\le \inf_{\substack{U'+\rho\B\subset U,\\\rho>0}}
\liminf_{\substack{x\in U',\\
\dist_{\Omega}(x)\to0}}\varphi(x),
\end{align*}
and this proves the claim.
§.§ A sufficient condition for upper semicontinuity of the inverse of a set-valued mapping
The next statement presents a condition ensuring validity of the upper semicontinuity assumption
which appears in <ref> <ref>.
Let $X$ and $Y$ be metric spaces, $\Phi\colon X\tto Y$, and $(\bx,\by)\in\gph\Phi$.
Assume that $\Phi$ is metrically subregular at $(\bx,\by)$, i.e., that there exist a neighborhood $U$ of $\bx$ and a constant
$L>0$ such that
\begin{equation}\label{eq:metric_subregularity}
\forall x\in U\colon\quad
\dist_{\Phi^{-1}(\by)}(x)\leq L\,\dist_{\Phi(x)}(\by).
\end{equation}
Then, for each set $U'\subset U$ satisfying $\bx\in U'$, the mapping $S_{U'}\colon Y\tto X$, given by
$S_{U'}(y):=\Phi^{-1}(y)\cap U'$ for each $y\in Y$, is upper semicontinuous at $\by$.
Let a number $\varepsilon>0$ as well as $U'\subset U$ with $\bar x\in U'$ be given.
Choose a number $\de\in(0,\eps/L)$.
Then, for each $y\in B_\de(\by)$ and each $x\in S_{U'}(y)$, condition (<ref>) yields
$\dist_{S_{U'}(\by)}(x)=\dist_{\Phi^{-1}(\by)}(x)\le Ld(y,\by)<L\de<\eps$.
By <ref>, $S_{U'}$ is upper semicontinuous at $\by$.
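For illustration (with ad-hoc data), let $X=Y:=\R$ and consider $\Phi(x):=\{x^2\}$ for all $x\in\R$ at $(\bx,\by):=(0,0)$.
Then $\Phi^{-1}(\by)=\{0\}$, so $\dist_{\Phi^{-1}(\by)}(x)=|x|$ while $\dist_{\Phi(x)}(\by)=x^2$,
and no constant $L>0$ realizes (<ref>) on a neighborhood of $\bx$; hence, $\Phi$ is not metrically subregular at $(0,0)$.
In contrast, $\Phi(x):=\{x\}$ satisfies (<ref>) with $L:=1$ and $U:=X$.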
We note that the metric subregularity condition (<ref>)
from <ref> already
amounts to a qualification condition addressing sets of type $\{x\in X\,|\,\by\in\Phi(x)\}$,
see <cit.>.
Sufficient conditions for metric subregularity can be found e.g. in [Bai et al., 2019, Dontchev and Rockafellar, 2014, Dontchev et al., 2020, Ioffe, 2017, Kruger, 2015, Maréchal, 2018, Zheng and Ng, 2010].
We would like to point the reader's attention to the fact that metric subregularity of $\Phi$ is a
quantitative continuity property coming along with a
modulus of subregularity $L>0$
while upper semicontinuity of the mappings
$S_{U'}$ in <ref> is just a
qualitative continuity property. In this regard, there
exist weaker sufficient conditions ensuring
validity of the upper semicontinuity requirements from <ref> <ref>.
However, it is not clear if such conditions can be easily checked in terms of initial problem data
while this is clearly possible for metric subregularity as the aforementioned list of references underlines.
Finally, we would like to mention that in case where one wants to avoid fixing the component $\bar x\in X$ in the preimage space
in <ref>, it is possible to demand that $\Phi^{-1}$
is Lipschitz upper semicontinuous at $\bar y$ in the sense of <cit.>. Again, this is a
quantitative continuity property.
Let $G\colon X\to Y$ be a single-valued mapping between Banach spaces.
Furthermore, let $C\subset X$ and $K\subset Y$ be nonempty, closed sets.
We investigate the feasibility mapping $\Phi\colon X\tto Y\times X$ given by
$\Phi(x):=(G(x)-K,x-C)$ for all $x\in X$ as well as some point $\bx\in X$
such that $(\bx,(0,0))\in\gph\Phi$ and some neighborhood $U$ of $\bx$.
Let us define $S\colon Y\times X\tto X$ by means of $S(y,z):=\Phi^{-1}(y,z)\cap U$
for each pair $(y,z)\in Y\times X$.
One can check that $S$ is upper semicontinuous at $(0,0)$ if and only if
\[
\dist_{K\times C}((G(x_k),x_k))\to 0
\quad\Longrightarrow\quad
\lim\limits_{k\to+\infty}\dist_{G^{-1}(K)\cap C}(x_k)=0
\]
for each sequence $\{x_k\}_{k\in\N}\subset U$,
and this is trivially satisfied if $G$ is continuous and $X$ is finite-dimensional.
For the purpose of completeness, let us also mention that $S$ is partially weakly sequentially upper semicontinuous at $(0,0)$
if and only if
\begin{equation}\label{eq:partially_weakly_sequantially_usc_geonetric_constraints}
x_k\weakly x,\quad \dist_{K\times C}((G(x_k),x_k))\to 0
\quad\Longrightarrow\quad
x\in G^{-1}(K)\cap C
\end{equation}
is valid for each sequence $\{x_k\}_{k\in\N}\subset U$ and each point $x\in U$.
Again, this is inherent if $G$ is continuous while $X$ is finite-dimensional and $U$ is closed.
In infinite-dimensional situations, whenever $G$ is continuously Fréchet differentiable and
$C$ as well as $K$ are convex,
Robinson's constraint qualification, given by
\[
G'(\bar x)\left[\bigcup\nolimits_{\alpha\in[0,+\infty)}\alpha(C-\bar x)\right]
-\bigcup\nolimits_{\alpha\in[0,+\infty)}\alpha(K-G(\bar x))
=Y
\]
is equivalent to so-called metric regularity of $\Phi$ at $(\bx,(0,0))$,
see <cit.>, and the latter
is sufficient for metric subregularity of $\Phi$ at $(\bx,(0,0))$.
The final corollary of this section now follows from <ref> and <ref>.
Fix metric spaces $X$ and $Y$, $\Phi\colon X\tto Y$, $\varphi\colon X\to\R_\infty$, $\by\in Y$, and $\bx\in\Phi^{-1}(\by)\cap\dom\varphi$.
Assume that $\Phi$ is metrically subregular at $(\bx,\by)$ and that $\varphi$ satisfies one of the
conditions <ref>-<ref> of <ref>.
Then $\varphi$ is lower semicontinuous near $\bx$ relative to $\Phi$ at $\by$.
§ OPTIMALITY CONDITIONS AND APPROXIMATE STATIONARITY
We consider the optimization problem
\begin{equation}\label{eq:basic_problem}\tag{P}
\min\{\varphi(x)\,|\,\bar y\in\Phi(x)\},
\end{equation}
where $\varphi\colon X\to\R_\infty$ is an arbitrary function,
$\Phi\colon X\rightrightarrows Y$ is a set-valued mapping
between Banach spaces,
and $\bar y\in\Im \Phi$.
Let us mention that the model (<ref>) is quite general and covers numerous
important classes of optimization problems, see e.g. [Gfrerer, 2013, Mehlitz, 2020]
for a discussion.
The constrained problem (<ref>) is obviously equivalent to the unconstrained minimization of
the restriction $\varphi_{\Phi^{-1}(\by)}$ of $\varphi$ to $\Phi^{-1}(\by)$.
We say that $\bx$ is an $\eps$-minimal point of problem (<ref>) on $U$
if it is an $\eps$-minimal point of $\varphi_{\Phi^{-1}(\by)}$ on $U$.
Stationary points of (<ref>) are defined analogously.
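Under the standard convention used in this paper, this means that $\bx\in\Phi^{-1}(\by)\cap\dom\varphi$ is an
$\eps$-minimal point of (<ref>) on $U$ precisely when
\[
\varphi(\bx)\le\inf_{x\in\Phi^{-1}(\by)\cap U}\varphi(x)+\eps.
\]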
The next theorem presents dual (i.e., subdifferential/coderivative based) necessary conditions for
$\eps$-minimal points of problem (<ref>).
Let $X$ and $Y$ be Banach spaces,
$\varphi\colon X\to\R_\infty$ be
lower semicontinuous, $\Phi\colon X\rightrightarrows Y$ have closed graph,
and fix $\bar y\in Y$, $\bx\in\dom \varphi\cap\Phi^{-1}(\by)$, $U\subset X$, $\eps>0$, as well as $\de>0$.
Assume that ${B}_{\de}(\bx)\subset U$, and
* on $U$, $\varphi$ is bounded from below and
lower semicontinuous relative to $\Phi$ at $\by$;
* either $X$ and $Y$ are Asplund, or
$\varphi$ and $\gph\Phi$ are convex.
Suppose that $\bx$ is an $\eps$-minimal point of problem (<ref>) on $U$.
Then, for each $\eta>0$,
there exist points
$x_1,x_2\in B_\de(\bx)$
and $y_2\in \Phi(x_2)\cap B_\eta(\by)$
such that
\[
0\in{\sd}\varphi(x_1)+\Im D^*\Phi(x_2,y_2)+\frac{2\eps}\de\B^*.
\]
Moreover, if $\varphi$ and $\gph\Phi$ are convex, then
$\varphi(x_1)\le \varphi(\bx)$.
Since $\varphi$ is bounded from below on $U$, and
$\bx$ is an $\eps$-minimal point of problem (<ref>) on $U$,
there exist numbers $c>0$ and $\eps'\in(0,\eps)$ such that
\begin{align}
\label{eq:bounded_from_below}
&\forall x\in U\colon\quad \varphi(x)>\varphi(\bx)-c,\\
\label{eq:eps_minimality}
&\forall x\in\Phi^{-1}(\by)\cap U\colon\quad \varphi(x)>\varphi(\bx)-\eps'.
\end{align}
For $\ga>0$ and $\ga_1>0$, let the functions
$\phi_\gamma,\hat\phi_{\gamma,\ga_1}\colon X\times Y\to\R_\infty$
be given by
\begin{align}
\label{phi}
\forall (x,y)\in X\times Y\colon\qquad
\phi_\ga(x,y)&:=
\varphi(x)+\ga\bigl(\norm{y-\by}+\dist_{\gph\Phi}(x,y)\bigr),
\\
\label{phi'}
\hat\phi_{\gamma,\ga_1}(x,y)&:=
\phi_\ga(x,y)+\ga_1\norm{x-\bx}^2.
\end{align}
Set $\de_0:=\de\eps'/\eps$, choose numbers $\de'\in(\de_0,\de)$ and $\xi\in(0,\de-\de')$ such that $\xi(\de'+2)<2(\eps\de'/\de-\eps')$, and set $\ga_1:=(\eps'+\xi)/(\de')^2$.
Fix an arbitrary $\eta>0$ and a positive number $\eta'<\min(\eta,2(\de-\de'))$.
Observe that $\hat\phi_{\gamma,\ga_1}(\bx,\by)=\varphi(\bx)$, and $\hat\phi_{\gamma,\ga_1}$ is bounded from below on $\overline{B}_{\de'}(\bar x)\times Y$ due
to (<ref>).
By Ekeland's variational principle, see <ref>,
for each $k\in\N$, there exists a point
$(x_k,y_k)\in\overline{B}_{\de'}(\bx)\times Y$ such that
\begin{align}
\label{eq:Ekeland_1}
&\hat\phi_{k,\ga_1}(x_k,y_k)\le \varphi(\bx),\\
\label{eq:Ekeland_2}
&\forall (x,y)\in\overline{B}_{\de'}(\bar x)\times Y\colon\quad
\hat\phi_{k,\ga_1}(x,y)+\xi\norm{(x,y)-(x_k,y_k)}
\ge
\hat\phi_{k,\ga_1}(x_k,y_k).
\end{align}
It follows from (<ref>), (<ref>), and (<ref>) that
\begin{equation*}
k\bigl(\norm{y_k-\by}+\dist_{\gph\Phi}(x_k,y_k)\bigr)+\ga_1\norm{x_k-\bx}^2
\le \varphi(\bx)-\varphi(x_k)<c,
\end{equation*}
and, consequently,
\begin{align}
\label{eq:Ekeland_3}
\norm{y_k-\by}+\dist_{\gph\Phi}(x_k,y_k)<c/k,
\\
\label{eq:Ekeland_4}
\ga_1\norm{x_k-\bx}^2\le \varphi(\bx)-\varphi(x_k)
\end{align}
are valid for all $k\in\N$.
By (<ref>), $y_k\to\by$ and $\dist_{\gph\Phi}(x_k,y_k)\to0$ as $k\to+\infty$,
and $y_k\in B_{\eta'/4}(\by)$ as well as $\dist_{\gph\Phi}(x_k,y_k)<\eta'/4$ follow for all $k>4c/\eta'$.
Recall that $\{x_k\}_{k\in\N}
+\rho\B\subset B_\de(\bx)\subset U$ for any positive $\rho<\de-\de'$.
By <ref>, there exist an integer $\bar k>4c/\eta'$
and a point $x'\in\Phi^{-1}(\by)\cap U$ such that $\varphi(x')<\varphi(x_{\bar k})+\xi$.
By (<ref>), we have $\varphi(\bx)-\varphi(x')<\eps'$.
Set $\ga:=\bar k$, $\hat x:=x_{\bar k}$, and $\hat y:=y_{\bar k}$.
Thus, $\hat y\in B_{\eta'/4}(\by)$ and
$\dist_{\gph\Phi}(\hat x,\hat y)<\eta'/4$.
By (<ref>),
\begin{align*}
\ga_1\norm{\hat x-\bx}^2
\le (\varphi(\bx)-\varphi(x')) +(\varphi(x')-\varphi(\hat x))
<\eps'+\xi=\ga_1(\de')^2.
\end{align*}
Hence, we find $\norm{\hat x-\bx}<\de'$.
In view of (<ref>),
condition (<ref>) is equivalent to
\begin{equation}
\label{eq:intermediate_estimate:main_result}
\phi_\ga(\hat x,\hat y)+\ga_1\norm{\hat x-\bx}^2\le \varphi(\bx).
\end{equation}
For each $(x,y)\in B_{\de'}(\bar x)\times Y$ different from $(\hat x,\hat y)$,
it follows from (<ref>) that
\begin{align*}
\frac{\phi_\ga(\hat x,\hat y)-\phi_\ga(x,y)}
{\norm{(x,y)-(\hat x,\hat y)}}
&=
\frac{\hat\phi_{\ga,\ga_1}(\hat x,\hat y)-\hat\phi_{\ga,\ga_1}(x,y)
+\ga_1\bigl(\norm{x-\bx}^2-\norm{\hat x-\bx}^2\bigr)}
{\norm{(x,y)-(\hat x,\hat y)}}
\\
&\le
\frac{\hat\phi_{\ga,\ga_1}(\hat x,\hat y)-\hat\phi_{\ga,\ga_1}(x,y)
+\ga_1\norm{x-\hat x}(\norm{x-\bx}+\norm{\hat x-\bx})}
{\norm{(x,y)-(\hat x,\hat y)}}
\\
&\le
\frac{\hat\phi_{\ga,\ga_1}(\hat x,\hat y)-\hat\phi_{\ga,\ga_1}(x,y)}
{\norm{(x,y)-(\hat x,\hat y)}}
+\ga_1\bigl(\norm{x-\bx}+\norm{\hat x-\bx}\bigr),
\end{align*}
and, consequently, in view of (<ref>),
\begin{align*}
\sup_{(x,y)\in(B_{\de'}(\bar x)\times Y)
\setminus\{(\hat x,\hat y)\}}
\frac{\phi_\ga(\hat x,\hat y)-\phi_\ga(x,y)}
{\norm{(x,y)-(\hat x,\hat y)}}
\le\xi+2\ga_1\de'<\frac{2\eps}\de.
\end{align*}
Since $\hat x$ is an interior point of $\overline B_{\de'}(\bx)$,
it follows that
\begin{align}\label{eq:main_result_local_slope_condition}
\limsup_{(x,y)\to(\hat x,\hat y)}
\frac{\phi_\ga(\hat x,\hat y)-\phi_\ga(x,y)}
{\norm{(x,y)-(\hat x,\hat y)}}<\frac{2\eps}\de.
\end{align}
By (<ref>) and (<ref>), we find $\varphi(\hat x)\le \varphi(\bx)$, and due to
(<ref>), there is a number
$\hat\eps\in(0,\frac{2\eps}\de)$ such that
\begin{align*}
\liminf\limits_{(x,y)\to(\hat x,\hat y)}
\frac{\phi_\gamma(x,y)+\hat\eps\norm{(x,y)-(\hat x,\hat y)}-\phi_{\gamma}(\hat x,\hat y)}
{\norm{(x,y)-(\hat x,\hat y)}}
\geq 0.
\end{align*}
Set $\xi':=2{\eps}/\de-{\hat\eps}>0$.
By definition of the Fréchet subdifferential,
the above inequality yields
\begin{align}\label{eq:main_result_Fermat_rule}
(0,0)\in{\partial} \left(\phi_\ga+\hat\eps\norm{(\cdot,\cdot)}\right)
(\hat x,\hat y).
\end{align}
Condition (<ref>) can be rewritten as
$(0,0)\in{\partial}\left(\varphi+\ga g+h\right)(\hat x,\hat y)$, where the functions
$g,h\colon X\times Y\to\R$ are given by
\begin{align*}
\forall (x,y)\in X\times Y\colon\quad
g(x,y)&:=\dist_{\gph\Phi}(x,y),\\
h(x,y)&:=\ga\|y-\bar y\|+\hat\eps\|(x,y)-(\hat x,\hat y)\|.
\end{align*}
Note that $g$ and $h$ are Lipschitz continuous, and $h$ is convex.
We distinguish two cases.
Case 1: Let $X$ and $Y$ be Asplund spaces.
Let us recall the estimates $\norm{\hat x-\bar x}<\de'<\delta$, $\norm{\hat y-\bar y}<\eta'/4<\eta/4$,
$\dist_{\gph\Phi}(\hat x,\hat y)<\eta'/4<\eta/4$, and $\varphi(\hat x)\leq\varphi(\bar x)$.
By the fuzzy sum rule, see <ref> <ref>,
there exist points
$(x_1,y_1),(u_2,v_2)\in X\times Y$
arbitrarily close to $(\hat x,\hat y)$
with $\varphi(x_1)$ arbitrarily close to $\varphi(\hat x)$,
so that
\begin{gather*}
\norm{x_1-\bx}<\de,\quad
\norm{u_2-\bx}<\de',\quad
\varphi(x_1)<\varphi(\bx)+\eta,\quad
\norm{y_1-\by}<\frac{\eta}2,
\\
\norm{v_2-\by}<\frac{\eta'}2,\quad
\norm{u_2-x_1}<\frac{\eta'}2,\quad
\dist_{\gph\Phi}(x_1,y_1)<\frac{\eta}2,\quad
\dist_{\gph\Phi}(u_2,v_2)<\frac{\eta'}4,
\end{gather*}
and subgradients $x_{1}^*\in{\partial}\varphi(x_{1})$
and $(u_{2}^*,v_{2}^*)\in{\partial}g(u_{2},v_{2})$ satisfying
$$\norm{x_{1}^*+\ga u_{2}^*}<\hat\eps+\frac{\xi'}2.$$
Thus, $x_1\in B_\de(\bx)$ and
$\dist_{\gph\Phi}(x_1,\by)\le\dist_{\gph\Phi}(x_1,y_1) +\norm{y_1-\by}<\eta$.
In view of
<ref> <ref>,
there exist
$(x_{2},y_{2})\in\gph\Phi$ and
$(u_{2}^{*\prime},v_{2}^{*\prime})\in { N}_{\gph\Phi}(x_{2},y_{2})$ such that
\begin{gather*}
\norm{(x_2,y_2)-(u_2,v_2)}
<\dist_{\gph\Phi}(u_2,v_2)+\frac{\eta'}{4},\quad
\norm{u_{2}^{*\prime}-u_{2}^*}<\frac{\xi'}{2\ga}.
\end{gather*}
Set $x_2^*:=\ga u_{2}^{*\prime}$ and $y^*:=-\ga v_{2}^{*\prime}$.
Thus, $x_2^*\in D^*\Phi(x_2,y_2)(y^*)$, and we have
\begin{align*}
\norm{y_2-\by}\le&\norm{v_2-\by}+\norm{y_2-v_2}<\eta'<\eta,
\\
\norm{x_2-\bx}\le&\norm{u_2-\bx}+\norm{x_2-u_2}<\de'+\frac{\eta'}2<\de,
\\
\norm{x_2-x_1}\le&\norm{x_2-u_2}+\norm{u_2-x_1}<\eta'<\eta,
\\
\norm{x_1^*+x_2^{*}}
\leq& \norm{x_1^*+\gamma\,u_2^*}+\gamma\norm{u_2^{*\prime}-u_2^*}
<\hat\eps+\xi'=\frac{2\eps}\de.
\end{align*}
Case 2:
Let $\varphi$ and $\gph\Phi$ be convex.
We have $\hat x\in B_{\de'}(\bx)\subset B_\de(\bx)$,
$\varphi(\hat x)\le \varphi(\bx)$,
$\norm{\hat y-\bar y}<\eta'/4$, and
$\dist_{\gph\Phi}(\hat x,\hat y)<\eta'/4<\eta$.
By the convex sum rule, see <ref> <ref>,
there exist subgradients
$x_{1}^*\in{\partial}\varphi(\hat x)$
and $(u_{2}^*,v_{2}^*)\in{\partial}g(\hat x,\hat y)$ satisfying
\begin{align*}
\norm{x_{1}^*+\ga u_{2}^*}\le\hat\eps.
\end{align*}
In view of
<ref> <ref>,
there exist
$(x_{2},y_{2})\in\gph\Phi$ and
$(u_{2}^{*\prime},v_{2}^{*\prime})\in { N}_{\gph\Phi}(x_{2},y_{2})$ such that
\begin{align*}
\norm{(x_2,y_2)-(\hat x,\hat y)}
<\dist_{\gph\Phi}(\hat x,\hat y)+\frac{\eta'}{4},\quad
\norm{u_{2}^{*\prime}-u_{2}^*}<\frac{\xi'}{\ga}.
\end{align*}
Set $x_1:=\hat x$, $x_2^*:=\ga u_{2}^{*\prime}$, and $y^*:=-\ga v_{2}^{*\prime}$.
Thus, $x_1\in B_\de(\bx)$,
$\varphi(x_1)\leq \varphi(\bar x)$, and
$x_2^*\in D^*\Phi(x_2,y_2)(y^*)$.
Replacing $(u_2,v_2)$ with $(\hat x,\hat y)$ in the corresponding estimates established in Case 1, we obtain
\begin{align*}
\norm{y_2-\by}<\eta,\quad
\norm{x_2-\bx}<\de,\quad
\norm{x_2-x_1}<\eta,
\\
\norm{x_1^*+x_2^{*}}
\leq\norm{x_1^*+\gamma\,u_2^*}+\gamma\norm{u_2^{*\prime}-u_2^*}
<\hat\eps+\xi'=\frac{2\eps}\de.
\end{align*}
This completes the proof.
Clearly, <ref> provides dual necessary conditions for
$\eps$-minimality of a feasible point of problem (<ref>)
under some additional structural assumptions on the data which are almost for free in the finite-dimensional
setting, see <ref>,
and of reasonable strength in the infinite-dimensional one.
In the subsequent remark, we comment on additional primal and dual conditions for $\eps$-minimality
which can be distilled from the proof of <ref>.
In the proof of <ref>, more sets of necessary conditions for local
$\eps$-minimality of a feasible point of problem (<ref>)
have been established along the way.
Moreover, the first part of the proof does not use the linear structure of the spaces,
i.e., the arguments work in the setting of general complete metric spaces $X$ and $Y$.
The conditions can be of independent interest and are listed below.
We assume that $X$ and $Y$ are complete metric spaces and all the other assumptions of <ref> are satisfied,
except condition <ref>.
Necessary conditions for local $\eps$-minimality.
There is a $\de_0\in(0,\de)$ such that,
for each $\de'\in(\de_0,\de)$ and $\eta>0$,
there exist points $\hat x\in B_{\de'}(\bx)$ and
$\hat y\in B_\eta(\by)$ satisfying
$\dist_{\gph\Phi}(\hat x,\hat y)<\eta$,
and numbers $\ga>0$ and $\ga_1>0$
such that,
with the function
$\phi_\gamma\colon X\times Y\to\R_\infty$
given by
\begin{align*}
\forall (x,y)\in X\times Y\colon\quad
\phi_\ga(x,y):=
\varphi(x)+\ga\bigl(d(y,\bar y)+\dist_{\gph\Phi}(x,y)\bigr),
\end{align*}
the following conditions hold:
* $\phi_\ga(\hat x,\hat y)+\ga_1 d(\hat x,\bar x)^2\le \varphi(\bx)$,
* primal nonlocal condition:
$\sup\limits_{\substack{(x,y)\ne(\hat x,\hat y)\\
x\in B_{\de'}(\bar x)}}
\dfrac{\phi_\ga(\hat x,\hat y)-\phi_\ga(x,y)}{d((x,y),(\hat x,\hat y))}
<\dfrac{2\eps}\de$,
* primal local condition:
$\limsup\limits_{(x,y)\to(\hat x,\hat y)}
\dfrac{\phi_\ga(\hat x,\hat y)-\phi_\ga(x,y)}
{d((x,y),(\hat x,\hat y))}<\dfrac{2\eps}\de$,
* dual condition ($X$ and $Y$ are Banach spaces):
condition (<ref>) is satisfied with some $\hat\eps\in(0,\frac{2\eps}\de)$.
The relationship between the conditions is as follows:
the dual conditions in <ref> are consequences of the above conditions.
Let us note that the left-hand side in <ref> is the nonlocal slope, see [Fabian et al., 2010],
of the function $\phi_\ga+i_{B_{\de'}(\bx)}$ at $(\hat x,\hat y)$,
while the left-hand side in <ref> is the conventional slope of $\phi_\ga$ at $(\hat x,\hat y)$.
* Since the function $\varphi$ in <ref> is assumed to be lower semicontinuous,
it is automatically bounded from below on some neighborhood of $\bx$.
We emphasize that <ref> requires all the conditions to hold on the same
set $U$ containing a neighborhood of $\bx$.
As a consequence of <ref>, we obtain necessary conditions
characterizing local minimizers of (<ref>).
Let $X$ and $Y$ be Banach spaces,
$\varphi\colon X\to\R_\infty$
lower semicontinuous, $\Phi\colon X\rightrightarrows Y$ have closed graph, $\bar y\in Y$, and
$\bx\in\dom \varphi\cap\Phi^{-1}(\by)$.
Assume that
* the function $\varphi$ is lower semicontinuous near $\bx$ relative to $\Phi$ at $\by$;
* either $X$ and $Y$ are Asplund, or
$\varphi$ and $\gph\Phi$ are convex.
Suppose that
$\bx$ is a local minimizer
of (<ref>).
Then, for each $\eps>0$,
there exist points
$x_1,x_2\in B_\eps(\bx)$ and
$y_2\in\Phi(x_2)\cap B_{\eps}(\by)$ such that
$|\varphi(x_1)-\varphi(\bx)|<\eps$ and
\[
0\in{\sd}\varphi (x_1)+\Im D^*\Phi(x_2,y_2)+\eps\B^*.
\]
Moreover, if $\varphi$ and $\gph\Phi$ are convex, then
$\varphi(x_1)\le \varphi(\bx)$.
Let a number $\eps>0$ be arbitrarily chosen.
Set $\eps':=\eps/2$.
By the assumptions and <ref>,
there exists a $\de\in(0,\varepsilon)$ such that
on $U:=\overline{B}_{\de}(\bx)$ the function $\varphi$ is bounded from below and
lower semicontinuous relative to $\Phi$ at $\by$,
and $\bx$ is an $\eps'\de$-minimal point of $\varphi_{\Phi^{-1}(\by)}$ on $U$.
Thus, all the assumptions of <ref> are satisfied.
Moreover, $2(\eps'\de)/\de=\eps$ and, since $\varphi$ is lower semicontinuous,
one can ensure that $\varphi(x_1)>\varphi(\bx)-\eps$.
Hence, taking any $\eta\in(0,\eps)$, the assertion follows from <ref>.
In the subsequent remark, we comment on the findings in <ref>.
* The analogues of the necessary conditions in <ref> <ref>
are valid in the setting of <ref>, too.
More precisely, it suffices to replace $\frac{2\eps}\de$ with just $\eps$ in the involved conditions.
* The necessary conditions in <ref> hold for each stationary point
(not necessarily a local minimizer) of problem (<ref>).
We now consider an important particular case of problem (<ref>), namely
\begin{equation}\label{eq:constrained_problem}\tag{$\widetilde{\text{P}}$}
\min\{\varphi(x)\,|\,x\in\Omega\},
\end{equation}
where $\Omega\subset X$ is a nonempty
subset of a Banach space.
To obtain this setting from the
one in (<ref>),
it suffices to consider
the set-valued mapping $\Phi\colon X\rightrightarrows Y$ whose graph is given by
$\gph\Phi:=\Omega\times Y$.
Here, $Y$ can be an arbitrary Asplund space, e.g., one can take $Y:=\R$.
Observe that $\Phi^{-1}(y)=\Omega$ holds for all $y\in Y$.
Hence, by <ref>, for all $y\in Y$, the mapping $\Phi^{-1}$ is upper semicontinuous at $y$.
Thus, the next statement is a consequence of <ref> and <ref>.
Let $X$ be a Banach space,
$\varphi\colon X\to\R_\infty$ lower semicontinuous,
$\Omega\subset X$ a closed set,
and fix $\bx\in\dom \varphi\cap\Omega$, $U\subset X$, $\eps>0$, and $\de>0$.
Assume that ${B}_{\de}(\bx)\subset U$, and
* on $U$, the function
$\varphi$ is bounded from below and
lower semicontinuous relative to $\Omega$;
* either $X$ is Asplund, or
$\varphi$ and $\Omega$ are convex.
Suppose that
$\bx$ is an $\eps$-minimal point of problem (<ref>) on $U$.
Then, for each $\eta>0$, there exist points
$x_1\in B_\de(\bx)$ and
$x_2\in\Omega\cap B_{\de}(\bx)$ such that $\norm{x_2-x_1}<\eta$,
\[
0\in{\sd}\varphi(x_1)+ N_\Omega(x_2)+\frac{2\eps}\de\B^*.
\]
Moreover, if $\varphi$ and $\Omega$ are convex, then
$\varphi(x_1)\le \varphi(\bx)$.
The next corollary follows immediately.
Let $X$ be a Banach space,
$\varphi\colon X\to\R_\infty$ lower semicontinuous,
$\Omega\subset X$ a closed set,
$\bx\in\dom \varphi\cap\Omega$.
Assume that
* the function
$\varphi$ is
lower semicontinuous near $\bx$ relative to $\Omega$;
* either $X$ is Asplund, or
$\varphi$ and $\Omega$ are convex.
Suppose that
$\bx$ is a local minimizer of (<ref>).
Then, for each $\varepsilon>0$, there exist
$x_1\in B_\eps(\bx)$ and
$x_2\in\Omega\cap B_{\eps}(\bx)$ such that
$|\varphi(x_1)-\varphi(\bx)|<\eps$ and
\[
0\in{\sd}\varphi (x_1)+ N_\Omega(x_2)+\eps\B^*.
\]
Moreover, if $\varphi$ and $\Omega$ are convex, then
$\varphi(x_1)\le \varphi(\bx)$.
Whenever $\varphi$ is Lipschitz continuous around $\bar x$, the assertion of
<ref> is an immediate consequence
of Fermat's rule and the sum
rules stated in <ref>.
We note that <ref>
is applicable in more general situations, for instance, if $\varphi$ is only
uniformly continuous in a neighborhood of the investigated local minimizer,
see <ref>,
or if $X$ is finite-dimensional, see <ref>.
Note that the dual necessary optimality conditions in <ref>
do not hold at the reference point
but at some other points arbitrarily close to it.
Such conditions describe certain properties of admissible points which can be interpreted as a kind of dual approximate stationarity.
The precise meaning of approximate stationarity will be discussed in <ref> in the setting of
geometrically-constrained optimization problems.
§ GENERALIZED SEPARATION AND EXTREMAL PRINCIPLE
Below, we discuss certain generalized extremality and separation properties
of a collection of closed subsets $\Omega_1,\ldots,\Omega_n$ of a Banach space $X$, having a common point $\bx\in\bigcap_{i=1}^n\Omega_i$.
Here, $n$ is an integer satisfying $n\geq 2$.
We write $\{\Omega_1,\ldots,\Omega_n\}$ to denote the collection of sets as a single object.
We begin with deriving necessary conditions for so-called
$\mathcal{F}$-extremality of a collection of sets.
The property in the definition below is determined by a nonempty family $\mathcal{F}$ of nonnegative lower semicontinuous functions
$f\colon X^{n}\to\R_\infty$ and
mimics the corresponding conventional one, see e.g. [Kruger and Mordukhovich, 1980].
Let a family $\mathcal{F}$ described above be given.
Suppose that, for each $f\in\mathcal{F}$, the function
$\hat f\colon X^{n}\to\R_\infty$ is defined by
\begin{equation}\label{eq:hatf}
\forall z:=(x_1,\ldots,x_n)\in X^{n}\colon\quad
\hat f(z):=f(x_1-x_n,\ldots,x_{n-1}-x_n,x_n).
\end{equation}
The collection $\{\Omega_1,\ldots,\Omega_n\}$ is $\mathcal{F}$-extremal at $\bx$ if,
for each $\eps>0$, there exist a function $f\in\mathcal{F}$ and a number $\rho>0$
such that $f(0,\ldots,0,\bx)<\eps$ and
\begin{equation}\label{eq:extremality_nonnegativity}
\forall
x_i\in\Omega_i+\rho\B\; (i=1,\ldots,n)\colon\quad
\hat f(x_1,\ldots,x_n)>0.
\end{equation}
The following theorem, which is based on <ref>, provides
a general necessary condition for $\mathcal F$-extremality.
Assume that
* there is a neighborhood $U$ of $\bx$ such that,
for each $f\in\mathcal{F}$, the function
$\hat f\colon X^{n}\to\R_\infty$ defined by
(<ref>) is lower semicontinuous on $U^n$ relative to $\Omega:=\Omega_1\times\ldots\times\Omega_n$;
* either $X$ is Asplund, or
$\Omega_1,\ldots,\Omega_n$ and all $f\in\mathcal{F}$ are convex.
Suppose that
the collection $\{\Omega_1,\ldots,\Omega_n\}$ is
$\mathcal{F}$-extremal at $\bx$.
Then, for each $\eps>0$ and $\eta>0$, there exist a function $f\in\mathcal{F}$ with $f(0,\ldots,0,\bx)<\eps$ and points
$x_i\in\Omega_i\cap B_{\eps}(\bx)$, $x'_i\in B_{\eta}(x_i)$, and $x_i^*\in X^*$ $(i=1,\ldots,n)$ such that
\begin{align}
\label{eq:NC_extremality_normals_close_to_cone}
\sum_{i=1}^{n} \dist_{N_{\Omega_i}(x_i)}\left(x_i^*\right) <\eps,
\\
\label{eq:NC_extremality_bounds_function_value}
0<f(w)<f(0,\ldots,0,\bx)+\eta,
\\
\label{eq:NC_extremality_subdifferential}
-\left(x_{1}^*,\ldots,x_{n-1}^*,\sum_{i=1}^{n}x_i^*\right)\in\sd f(w),
\end{align}
where $w:=(x'_1-x'_n,\ldots,x'_{n-1}-x'_n,x'_n)\in X^{n}$.
Moreover, if $f$ and $\Omega_1,\ldots,\Omega_n$ are convex, then
$f(w)\le f(0,\ldots,0,\bx)$.
Let arbitrary numbers
$\eps>0$ and $\eta>0$ be fixed.
Choose a number $\de\in(0,\eps)$ so that ${B}_{\de}(\bx)\subset U$, and set $\eps':=\eps\min(\de/2,1)$.
By <ref>, there exist a function $f\in\mathcal{F}$ and a number $\rho>0$ such that $f(0,\ldots,0,\bx)<\eps'\le\eps$, and
condition (<ref>) holds, where the function
$\hat f\colon X^{n}\to\R_\infty$ is defined by (<ref>).
Observe that $\Omega$ is a
closed subset of the Banach space $X^n$,
$\bz:=(\bx,\ldots,\bx)\in\Omega$, and $\hat f(\bz)=f(0,\ldots,0,\bx)<\eps'$.
Since the function $f$ is nonnegative, so is $\hat f$, and, consequently,
$\bz$ is an $\eps'$-minimal
point of $\hat f_\Omega$
(as well as $\hat f$) on $X^n$.
Set $\eta':=\min(\eta,\rho)$.
By <ref>, there exist points
$z:=(x_1,\ldots,x_n)\in\Omega\cap B_{\de}(\bz)$,
$z':=(x_1',\ldots,x_n')\in B_{\eta'}(z)$, and
$x^*:=(x_{1}^*,\ldots, x_{n}^*)\in(X^*)^n$ such that
$f(w)=\hat f(z')<\hat f(\bz)+\eta=f(0,\ldots,0,\bx)+\eta$, and
\begin{equation}\label{eq:conditions_for_hatf}
-x^*\in\sd\hat f(z'),\qquad
\dist_{N_\Omega(z)}(x^*)<\frac{2\eps'}\de\le\eps.
\end{equation}
Moreover, if $f$ and $\Omega_1,\ldots,\Omega_n$ are convex, then
$f(w)\le f(0,\ldots,0,\bx)$.
Observe that $x_i'\in\Omega_i+\rho\B$ ($i=1,\ldots,n$), and it follows from (<ref>) that $f(w)=\hat f(z')>0$
which shows (<ref>).
The function $\hat f$ given by (<ref>) is a composition of $f$ and
the continuous linear mapping $A\colon X^{n}\to X^{n}$ given as follows:
\begin{align*}
\forall (u_1,\ldots,u_n)\in X^{n}\colon\quad
A(u_1,\ldots,u_n):=(u_1-u_n,\ldots,u_{n-1}-u_n,u_n).
\end{align*}
The mapping $A$ is obviously a bijection.
It is easy to check that the adjoint mapping $A^*\colon (X^*)^{n}\to (X^*)^{n}$ is of the form
\begin{equation}\label{eq:def_linear_operator}
\forall (u_1^*,\ldots,u_{n}^*)\in(X^*)^{n}\colon\quad
A^*(u_1^*,\ldots,u_{n}^*)=\left(u^*_1,\ldots,u^*_{n-1},u_{n}^*-\sum_{i=1}^{n-1}u_{i}^*\right).
\end{equation}
By the Fréchet subdifferential chain rule, which can be distilled from <cit.>,
we obtain $\sd\hat f(z')=A^*\sd f(w)$ since $w=Az'$.
In view of (<ref>), the inclusion in (<ref>) is equivalent to (<ref>).
It now suffices to observe that
$N_\Omega(z)=N_{\Omega_1}(x_1)\times\ldots\times N_{\Omega_n}(x_n)$, and, consequently, the inequality in (<ref>) yields (<ref>).
For the conclusions of <ref> to be non-trivial,
one must ensure that the family $\mathcal{F}$
satisfies the following conditions:
* $\inf\limits_{f\in\mathcal F}f(0,\ldots,0,\bar x)=0$;
* $\liminf\limits_{w\to(0,\ldots,0,\bar x),\,f\in\mathcal F,\,f(w)\downarrow0,\,w^*\in\sd f(w)}\norm{w^*}>0$.
A typical example of such a family is given by the collection $\mathcal{F}_A$ of
functions of type
\begin{equation}\label{eq:f_standard_extremality}
\forall z:=(x_1,\ldots,x_n)\in X^{n}\colon\quad
f_a(z):=\max_{1\le i\le n-1}\|x_i-a_i\|,
\end{equation}
where $a:=(a_1,\ldots,a_{n-1})\in X^{n-1}$.
The proofs of the conventional extremal principle and its extensions usually employ such functions.
Note that
functions from $\mathcal{F}_A$ are
constant in the last variable.
It is easy to see that, for each $f_a\in\mathcal{F}_A$ and
$z:=(x_1,\ldots,x_n)\in X^n$, the value $f_a(z)$ is the maximum norm of
$(x_1-a_1,\ldots,x_{n-1}-a_{n-1})$ in $X^{n-1}$.
Thus, $f_a(z)>0$ if and only if $(x_1,\ldots,x_{n-1})\ne a$, and
\begin{align*}
f_a(0,\ldots,0,\bar x)
=\max_{1\le i\le n-1}\|a_i\|
\to0
\quad
\text{as}
\quad
a\to0,
\end{align*}
showing <ref>.
Moreover, $\sd f_a(z)\ne\varnothing$ for all $z\in X^{n}$ and, if $f_a(z)>0$, then $\norm{z^*}=1$ for all
$z^*\in\sd f_a(z)$, i.e., the liminf
in <ref> equals $1$.
Observe also that, since each function $f_a\in\mathcal{F}_A$ is convex and Lipschitz continuous,
the same holds true for the corresponding function $\hat f_a$ defined by (<ref>).
Hence, $\hat f_a$ is automatically lower semicontinuous near each point of $X^n$
relative to each set containing this point, see <ref>.
When $f_a\in\mathcal{F}_A$ is given by (<ref>), condition (<ref>) takes the following form:
\begin{equation}\label{eq:separation_new}
\bigcap_{i=1}^{n-1}(\Omega_i+\rho\B-a_i)\cap (\Omega_n+\rho\B)=\varnothing.
\end{equation}
With this in mind, the extremality property in <ref> admits a geometric interpretation.
The collection $\{\Omega_1,\ldots,\Omega_n\}$ is
$\mathcal{F}_A$-extremal at $\bx$ if and only if,
for each $\eps>0$, there exist vectors $a_1,\ldots,a_{n-1}\in X$ and a number $\rho>0$ such that
$\max_{1\le i\le n-1}\|a_i\|<\eps$, and
condition (<ref>) holds.
The characterization in <ref> means that sets with nonempty intersection can be
“pushed apart” by arbitrarily small translations in such a way that even small enlargements
of the sets become nonintersecting.
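As a simple illustration (with ad-hoc data), equip $X:=\R^2$ with the Euclidean norm and consider the half-planes
$\Omega_1:=\{(x,y)\,|\,y\le0\}$ and $\Omega_2:=\{(x,y)\,|\,y\ge0\}$ with common point $\bx:=(0,0)$.
Given $\eps>0$, choose $a_1:=(0,\eps/2)$ and $\rho:=\eps/8$.
Then $\Omega_1+\rho\B-a_1=\{(x,y)\,|\,y\le\rho-\eps/2\}$ and $\Omega_2+\rho\B=\{(x,y)\,|\,y\ge-\rho\}$
do not intersect since $\rho-\eps/2<-\rho$.
Hence, $\{\Omega_1,\Omega_2\}$ is $\mathcal{F}_A$-extremal at $\bx$ by <ref>.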
Note that condition (<ref>) is stronger
than the conventional extremality property originating from [Kruger and Mordukhovich, 1980],
which corresponds to setting $\rho=0$ in (<ref>).
The converse statement is not true as the
next example shows.
Consider the closed sets $\Omega_1,\Omega_2\subset\R^2$ given by
\begin{align*}
\Omega_1:=\left\{(x,y)\mid x\ge0,\;y=0\right\},\quad
\Omega_2:=\left\{(x,y)\mid x\ge 0,\;|y|\ge e^{-x} \right\}\cup\{(0,0)\},
\end{align*}
see <ref>.
We have $\Omega_1\cap\Omega_2=\{(0,0)\}$ and $(\Omega_1-(t,0))\cap\Omega_2=\varnothing$ for each $t<0$.
At the same time,
$(\Omega_1+\rho\B-a)\cap(\Omega_2+\rho\B)\ne\varnothing$ for all $a\in\R^2$ and $\rho>0$.
By <ref>, $\{\Omega_1,\Omega_2\}$ is not
$\mathcal{F}_A$-extremal at $(0,0)$.
Visualization of the sets $\Omega_1$ and $\Omega_2$ from <ref>.
<Ref> produces the following necessary condition for $\mathcal{F}_A$-extremality.
Assume that
either $X$ is Asplund, or
$\Omega_1,\ldots,\Omega_n$ are convex.
Suppose that
the collection $\{\Omega_1,\ldots,\Omega_n\}$ is
$\mathcal{F}_A$-extremal at $\bx$.
Then, for each $\eps>0$, there exist points
$x_i\in\Omega_i\cap B_{\eps}(\bx)$ and $x_i^*\in X^*$ $(i=1,\ldots,n)$ satisfying (<ref>) and
\begin{align}
\label{eq:NC_extremality-1}
\sum_{i=1}^{n}x_i^*&=0,\\
\label{eq:NC_extremality-2}
\sum_{i=1}^{n-1}\norm{x_i^*}&=1.
\end{align}
Moreover, for each $\tau\in(0,1)$, the points
$x_i$ and $x_i^*$ $(i=1,\ldots,n)$ can be chosen so that
\begin{equation}
\label{eq:NC_extremality_additional_estimate}
\sum_{i=1}^{n-1}\ang{x_{i}^*,x_n-x_i+a_i} >\tau\max_{1\le{i}\le{n}-1} \|x_n-x_i+a_i\|,
\end{equation}
where $a_1,\ldots,a_{n-1}$ are vectors satisfying the characterization in <ref>.
Fix $\eps>0$ arbitrarily.
Recall that, for each $f_a\in\mathcal{F}_A$, the function
$\hat f_a\colon X^{n}\to\R_\infty$ defined according to
(<ref>) is lower semicontinuous near $\bz:=(\bx,\ldots,\bx)$ relative to $\Omega:=\Omega_1\times\ldots\times\Omega_n$.
By definition of $\mathcal{F}_A$, <ref>, and <ref>,
for each $\eta>0$, there exist vectors $a_1,\ldots,a_{n-1}\in X$, points
$x_i\in\Omega_i\cap B_{\eps}(\bx)$, $x'_i\in B_{\eta}(x_i)$, and $x_i^*\in X^*$ $(i=1,\ldots,n)$, and a number $\rho>0$ such that
$\max_{1\le i\le n-1}\|a_i\|<\eps$,
and conditions (<ref>) and (<ref>) hold, where
$w:=(x'_1-x'_n,\ldots,x'_{n-1}-x'_n,x'_n)$ and
the function $f$ is replaced by $f_a$ defined by (<ref>).
Clearly, we find
\begin{align*}
\sd f_a(w)
=\sd\|\cdot\|_{X^{n-1}}(x'_1-x'_n-a_1,\ldots,x'_{n-1}-x'_n-a_{n-1}) \times\{0\},
\end{align*}
where $\|\cdot\|_{X^{n-1}}$ is the maximum norm in $X^{n-1}$.
Condition (<ref>)
follows immediately from (<ref>).
Since $f_a(w)>0$, we can apply <cit.> to find that condition (<ref>) is satisfied, and
\begin{equation}\label{eq:NC_extremality_intermediate_finding}
\sum_{i=1}^{n-1}\ang{x_{i}^*,x_n'-x_i'+a_i} = f_a(w).
\end{equation}
Let an arbitrary number
$\tau\in(0,1)$ be fixed, and let $\eta:=\rho(1-\tau)/4$.
In view of (<ref>), we have
\begin{equation}\label{eq:NC_extremality_intermediate_finding3}
\max_{1\le{i}\le{n}-1}\|x_n-x_i+a_i\|\ge\rho.
\end{equation}
Using (<ref>), (<ref>), (<ref>),
and (<ref>), we can prove the remaining estimate (<ref>):
\begin{align*}
\sum_{i=1}^{n-1}\ang{x_{i}^*,x_n-x_i+a_i}
\ge
&\sum_{i=1}^{n-1}\left(\ang{x_{i}^*,x_n'-x_i'+a_i} -2\norm{x_i^*}\max_{1\le{j}\le{n}}\norm{x_j-x_j'}\right)
\\
>
&\sum_{i=1}^{n-1}\ang{x_{i}^*,x_n'-x_i'+a_i}-2\eta\sum_{i=1}^{n-1}\norm{x_i^*}
=f_a(w)-2\eta
\\
=
&\max_{1\le{i}\le{n}-1}\|x_n'-x_i'+a_i\|-2\eta
\ge\max_{1\le{i}\le{n}-1}\|x_n-x_i+a_i\|-4\eta
\\
=
&\max_{1\le{i}\le{n}-1}\|x_n-x_i+a_i\|-(1-\tau)\rho
\geq\tau\max_{1\le{i}\le{n}-1}\|x_n-x_i+a_i\|.
\end{align*}
This completes the proof.
The next example illustrates
application of <ref> in
the case where $\mathcal F$ consists of discontinuous functions.
Consider the closed sets $\Omega_1,\Omega_2\subset\R^2$ given by
\begin{align*}
\Omega_1:=\left\{(x,y)\mid\max(y,x+y)\ge0\right\},\quad
\Omega_2:=\left\{(x,y)\mid y\le0\right\}.
\end{align*}
Let us equip $\R^2$ with the Euclidean norm.
We have $(0,0)\in\Omega_1\cap\Omega_2$ and
$\intr(\Omega_1\cap\Omega_2)=\{(x,y)\,|\,y>0,\, x+y>0\}$.
Hence, these sets cannot be “pushed apart”, and $\{\Omega_1,\Omega_2\}$ is not
extremal at $(0,0)$ in the conventional sense, see <ref>
for an illustration.
Let the family $\mathcal{F}$ consist of all nonnegative lower semicontinuous functions
$f_t\colon\R^2\times\R^2\to\R_\infty$ of the type
\begin{equation}
\label{eq:non_standard_family_for_extremality}
\forall (x,y),(u,v)\in\R^2\times\R^2\colon\quad
f_t((x,y),(u,v)):=\norm{(x,y+t)}+i_{(-\infty,0]}(u),
\end{equation}
corresponding to all $t\ge0$.
We now show that $\{\Omega_1,\Omega_2\}$ is $\mathcal{F}$-extremal at $(0,0)$.
Indeed, for each $\eps>0$ and $t\in(0,\eps)$, we have $f_t((0,0),(0,0))=t<\eps$.
The function from (<ref>) takes the form
\[
\forall (x,y),(u,v)\in\R^2\times\R^2\colon\quad
\hat f_t((x,y),(u,v)):=
\norm{(x-u,y-v+t)}+i_{(-\infty,0]}(u).
\]
Let $\rho\in(0,t/3)$, $(x,y)\in\Omega_1+\rho\B$, and $(u,v)\in\Omega_2+\rho\B$.
If $u>0$ or $x\ne u$, then $\hat f_t((x,y),(u,v))>0$.
Let $x=u\le0$.
Then $y>-2\rho$, $v<\rho$, and, consequently, $\hat f_t((x,y),(u,v))=|y-v+t|>-3\rho+t>0$.
Hence, condition (<ref>) holds, i.e., $\{\Omega_1,\Omega_2\}$ is $\mathcal{F}$-extremal at $(0,0)$.
For each $t\ge0$, $\hat f_t$ is Lipschitz continuous on $\dom\hat f_t=\R^2\times((-\infty,0]\times\R)$
and, for every point $((x,y),(u,v))\in\dom\hat f_t$, the distance $\dist_{\Omega_1\times\Omega_2}((x,y),(u,v))$ is attained
at some point $((x',y'),(u',v'))$ with $u'=u$, i.e., $((x',y'),(u',v'))\in\dom\hat f_t$.
Using this, it is easy to see from <ref> <ref>
that $\hat f_t$ is lower semicontinuous near $((0,0),(0,0))$ relative to $\Omega_1\times\Omega_2$.
By <ref>,
for each $\eps>0$, there exist a number $t\in(0,\eps)$ and points
$(x,y)\in\Omega_1\cap B_{\eps}(0,0)$, $(u,v)\in\Omega_2\cap B_{\eps}(0,0)$, $(x^*,y^*),(u^*,v^*)\in\R^2$, and
$w\in\R^2\times\R^2$
such that
$0<f_t(w)<\infty$ and
\begin{align}
\label{eq:non_trivial_generalized_separation_distance_to_cone}
\dist_{N_{\Omega_1}(x,y)}\left((x^*,y^*)\right)+ \dist_{{N}_{\Omega_2}(u,v)}\left((u^*,v^*)\right)<\eps,&
\\
\label{eq:non_trivial_generalized_separation_subdifferential}
-\left((x^*,y^*),(x^*,y^*)+(u^*,v^*)\right)\in\sd f_t(w).
\end{align}
In view of (<ref>),
it follows from (<ref>) that $\norm{(x^*,y^*)}=1$, $x^*+u^*\leq 0$, and $y^*+v^*=0$.
When $\eps$ is sufficiently small, condition (<ref>) implies one of the following situations:
* $x<0$, $y=v=0$, and $(x^*,y^*)$ as well as $(u^*,v^*)$ can be made arbitrarily close to
$(0,-1)$ and $(0,1)$, respectively,
* $x>0$, $y=-x$, $v=0$, and $(x^*,y^*)$ as well as $(u^*,v^*)$ can be made
arbitrarily close to $(-\sqrt 2/2,-\sqrt 2/2)$ and $(0,\sqrt 2/2)$, respectively.
This can be interpreted as a kind of generalized separation.
§ GEOMETRICALLY-CONSTRAINED OPTIMIZATION PROBLEMS WITH COMPOSITE OBJECTIVE FUNCTION
In this section, we are going to apply the theory of <ref> to the optimization problem
\begin{equation}\label{eq:non_Lipschitz_objective}\tag{Q}
\min\{f(x)+q(x)\,|\,G(x)\in K,\,x\in C\}
\end{equation}
where $f\colon X\to\R$ is continuously Fréchet differentiable, $q\colon X\to\R_\infty$ is
lower semicontinuous, $G\colon X\to Y$ is continuously Fréchet differentiable, and
$C\subset X$ as well as $K\subset Y$ are nonempty and closed.
Here, $X$ and $Y$ are assumed to be Banach spaces.
Throughout the section, the feasible set of (<ref>) will be denoted
by $\mathcal S$, and we implicitly assume $\mathcal S\cap\dom q\neq\varnothing$ in order
to avoid trivial situations.
Observe that the objective function $\varphi:=f+q$ can be decomposed into a regular part $f$ and
some challenging irregular part $q$ while the
constraints in (<ref>) are stated in so-called geometric form.
In this regard, the model (<ref>) still covers numerous applications
ranging from data science and image processing (in case where $q$ is a sparsity-promoting functional)
over conic programs (in which case $K$ is a convex cone) to disjunctive programs
which comprise, for instance, complementarity- and cardinality-constrained problems (in this situation,
$K$ is a nonconvex set of combinatorial structure).
In the subsequently stated remark, we embed program (<ref>) into the
rather general framework which has been discussed in <ref>.
Observing that $f$ is differentiable, we find
\[
\forall x\in X\colon\quad
\sd \varphi(x)=\sd(f+q)(x)=f'(x)+\sd q(x)
\]
from the sum rule stated in <cit.>.
The feasibility mapping $\Phi\colon X\tto Y\times X$ associated with
(<ref>) is given by means of
$\Phi(x):=(G(x)-K,x-C)$ for all $x\in X$, see <ref>.
We find
\begin{equation}\label{eq:gph_Phi}
\gph\Phi
=\{(x,(y,x'))\in X\times Y\times X\,|\,(G(x)-y,x-x')\in K\times C\}.
\end{equation}
Observing that the continuously differentiable mapping $(x,y,x')\mapsto(G(x)-y,x-x')$
possesses a surjective derivative,
we can apply the change-of-coordinates formula from
<cit.> in order to obtain
\[
N_{\gph\Phi}(x,(y,x'))
=\left\{(G'(x)^*\lambda+\eta,-\lambda,-\eta)\in X^*\times Y^*\times X^*\,\middle|\,
\begin{aligned}
&\lambda\in N_K(G(x)-y),\\
&\eta\in N_C(x-x')
\end{aligned}
\right\}
\]
for each triplet $(x,(y,x'))\in\gph\Phi$, and this yields
\[
D^*\Phi(x,(y,x'))(\lambda,\eta)
=\begin{cases}
\{G'(x)^*\lambda+\eta\} & \text{if }\lambda\in N_K(G(x)-y),\,\eta\in N_C(x-x'),\\
\varnothing & \text{otherwise}
\end{cases}
\]
for arbitrary $\lambda\in Y^*$ and $\eta\in X^*$.
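To illustrate the latter formula with ad-hoc data (which reappear in an example later in this section),
take $X=Y=C:=\R$, $K:=(-\infty,0]$, and $G(x):=x^2$.
Then $N_C(x-x')=\{0\}$, so that, for $(x,(y,x'))\in\gph\Phi$, we obtain
$D^*\Phi(x,(y,x'))(\lambda,0)=\{2x\lambda\}$ whenever $\lambda\in N_K(x^2-y)$,
i.e., whenever $x^2<y$ and $\lambda=0$, or $x^2=y$ and $\lambda\ge0$,
while $D^*\Phi(x,(y,x'))(\lambda,\eta)$ is empty in all remaining cases.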
§.§ Approximate stationarity and uniform qualification condition
The subsequent theorem is a simple consequence of <ref>
and <ref>,
and provides a necessary optimality condition for (<ref>).
Fix $\bar x\in\mathcal S\cap\dom q$ and assume that
* the function $f+q$ is lower semicontinuous near $\bar x$ relative to $\Phi$ from <ref> at $(0,0)$;
* either $X$ and $Y$ are Asplund, or $f$, $q$, and $\gph\Phi$ from (<ref>) are convex.
Suppose that $\bx$ is a local minimizer of (<ref>).
Then, for each $\eps>0$, there exist points $x,x',x''\in B_\eps(\bx)$ and $y\in\eps\B$ such that $|q(x)-q(\bar x)|<\varepsilon$ and
\begin{equation}\label{eq:approximate_stationarity}
0\in f'(x)+\sd q(x)+G'(x')^*N_K(G(x')-y)+N_C(x'')+\eps\B^*.
\end{equation}
In the subsequent remark, we comment on some special situations where the assumptions of <ref>
are naturally valid and which can be checked in terms of initial data.
Let $\bar x\in\mathcal S\cap\dom q$. Due to <ref>,
each of the following conditions implies condition <ref> of <ref>:
* the function $f+q$ satisfies one of the conditions <ref>-<ref>
in <ref> and
the mapping $\Phi$ from <ref> is metrically subregular at $(\bx,(0,0))$,
see <ref>;
* $X$ is reflexive, the functions $f$ and $q$ are weakly sequentially lower semicontinuous,
and condition (<ref>)
holds for all sequences $\{x_k\}_{k\in\N}\subset X$ and all points $x\in X$.
Furthermore, condition <ref> of
<ref> is valid whenever $X$ and $Y$ are Asplund, or if
$f$, $q$, and $C$ are convex, $K$ is a convex cone, and $G$ is $K$-convex in the following sense:
\[
\forall x,x'\in X\,\forall s\in[0,1]\colon\quad
G(sx+(1-s)x')-s\,G(x)-(1-s)G(x')\in K.
\]
We note that (<ref>) already satisfies condition <ref> of <ref>
as soon as $X$ and $Y$ are finite-dimensional.
In the presence of condition <ref> from <ref>,
<ref> is closely related to
<cit.> as soon as $q$ is absent.
Due to <ref>, the following definition is reasonable.
A point $\bar x\in \mathcal S\cap\dom q$ is an approximately stationary point of (<ref>)
if, for each $\varepsilon>0$, there exist points $x,x',x''\in B_\eps(\bx)$ and $y\in\eps\B$ such that $|q(x)-q(\bar x)|<\varepsilon$
and (<ref>) are valid.
Approximate necessary optimality conditions in terms of Fréchet subgradients and normals can be traced back
to the 1980s, see e.g. [Kruger and Mordukhovich, 1980, Kruger, 1985] and the references therein.
In order to compare the notion of stationarity from <ref> to others from the literature, let us mention
an equivalent characterization of asymptotic stationarity in terms of sequences.
A point $\bar x\in\mathcal S\cap\dom q$ is approximately stationary if and only if
there are sequences
$\{x_k\}_{k\in\N},\{x_k'\}_{k\in\N},\{x_k''\}_{k\in\N}\subset X$, $\{y_k\}_{k\in\N}\subset Y$,
and $\{\eta_k\}_{k\in\N}\subset X^*$ such that $x_k\to\bar x$, $x_k'\to\bar x$, $x_k''\to\bar x$,
$y_k\to 0$, $\eta_k\to 0$, $q(x_k)\to q(\bar x)$, and
\[
\forall k\in\N\colon\quad
\eta_k\in f'(x_k)+\sd q(x_k)+G'(x_k')^*N_K(G(x_k')-y_k)+N_C(x_k'').
\]
In case where $X$ and $Y$ are finite-dimensional while $q$ is locally Lipschitzian, a similar
approximate stationarity condition in terms of sequences has been investigated in <cit.>. In
[Börgens et al., 2020], the authors considered the model
(<ref>) with convex sets $K$ and $C$ in the absence of $q$.
Generally, using approximate notions of stationarity in nonlinear programming can be traced back
to [Andreani et al., 2010, Andreani et al., 2011].
Let us mention that in all these papers, the authors speak of asymptotic or sequential stationarity.
A sequential Lagrange multiplier rule for convex programs in Banach spaces can be found already in [Thibault, 1997].
During the last decade, the concept of approximate stationarity
has been extended to several classes of optimization problems comprising, exemplary,
complementarity- and cardinality-constrained programs,
see [Andreani et al., 2019, Kanzow et al., 2021, Ramos, 2021],
conic optimization problems,
see [Andreani et al., 2020],
smooth geometrically-constrained optimization problems in Banach spaces,
see [Börgens et al., 2020],
and nonsmooth Lipschitzian optimization problems in finite-dimensional spaces,
see [Mehlitz, 2020, Mehlitz, 2021].
In each of the aforementioned situations, it has been demonstrated that approximate stationarity,
on the one hand, provides a necessary optimality condition in the absence of constraint qualifications, and
<ref> demonstrates that this is the case for
our concept from <ref> as well under reasonable assumptions.
On the other hand, the results from the literature underline that approximate stationarity is naturally satisfied
for accumulation points of sequences generated by some solution algorithms.
In <ref>, we extend these observations to the present setting.
Assume that $\bar x\in \mathcal S\cap\dom q$ is an approximately stationary point of (<ref>).
Due to <ref>, we find sequences $\{x_k\}_{k\in\N},\{x_k'\}_{k\in\N},\{x_k''\}_{k\in\N}\subset X$,
$\{y_k\}_{k\in\N}\subset Y$,
and $\{\eta_k\}_{k\in\N}\subset X^*$ satisfying $x_k\to\bar x$, $x_k'\to\bar x$, $x_k''\to\bar x$, $y_k\to 0$, $\eta_k\to 0$, $q(x_k)\to q(\bar x)$, and
$\eta_k\in f'(x_k)+\partial q(x_k)+G'(x_k')^*N_K(G(x_k')-y_k)+N_C(x_k'')$ for each $k\in\N$.
Particularly, we find sequences $\{\lambda_k\}_{k\in\N}\subset Y^*$ and $\{\mu_k\}_{k\in\N}\subset X^*$
of multipliers and a sequence $\{\xi_k\}_{k\in\N}\subset X^*$ of subgradients such that
$\eta_k=f'(x_k)+\xi_k+G'(x_k')^*\lambda_k+\mu_k$, $\lambda_k\in N_K(G(x_k')-y_k)$,
$\mu_k\in N_C(x_k'')$, and $\xi_k\in\partial q(x_k)$ for each $k\in\N$. Assuming for a moment
$\lambda_k\weaklystar\lambda$, $\mu_k\weaklystar\mu$, and $\xi_k\weaklystar\xi$ for some
$\lambda\in Y^*$ and $\mu,\xi\in X^*$, we find $\lambda\in\overline N_K(G(\bar x))$,
$\mu\in\overline N_C(\bar x)$, and $\xi\in\bsd q(\bar x)$ by definition of the limiting normal cone
and subdifferential, respectively, as well as $0=f'(\bar x)+\xi+G'(\bar x)^*\lambda+\mu$,
i.e., a multiplier rule is valid at $\bar x$ which is referred to as M-stationarity in the literature.
A feasible point $\bar x\in\mathcal S\cap\dom q$ is an M-stationary point of (<ref>) if
\[
0\in f'(\bar x)+\bsd q(\bar x)+G'(\bar x)^*\overline N_K(G(\bar x))+\overline N_C(\bar x).
\]
Let us note that in the case of standard nonlinear programming, where $q$ vanishes while $C:=X$, $Y:=\R^{m_1+m_2}$, and $K:=(-\infty,0]^{m_1}\times\{0\}^{m_2}$
for $m_1,m_2\in\N$,
the system of M-stationarity coincides with the classical Karush–Kuhn–Tucker system.
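For the reader's convenience, let us make this explicit (a routine computation under the stated identifications):
writing $G:=(g,h)$ with smooth $g\colon X\to\R^{m_1}$ and $h\colon X\to\R^{m_2}$, we find
\[
\overline N_K(G(\bar x))
=\{(\lambda,\mu)\in\R^{m_1}\times\R^{m_2}\,|\,
\lambda\ge0,\;\lambda_i\,g_i(\bar x)=0\;(i=1,\ldots,m_1)\},
\]
so that M-stationarity of $\bar x$ amounts to the existence of multipliers $\lambda\in\R^{m_1}$ and $\mu\in\R^{m_2}$ with
\[
0=f'(\bar x)+\sum_{i=1}^{m_1}\lambda_i\,g_i'(\bar x)+\sum_{j=1}^{m_2}\mu_j\,h_j'(\bar x),
\quad
\lambda\ge0,
\quad
\lambda_i\,g_i(\bar x)=0\;(i=1,\ldots,m_1),
\]
i.e., to the classical Karush–Kuhn–Tucker conditions.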
One can easily check by means of simple examples that approximately stationary points of (<ref>)
do not need to be M-stationary even in finite dimensions. Roughly speaking, this phenomenon is caused by the fact that
the multiplier and subgradient sequences $\{\lambda_k\}_{k\in\N}$, $\{\mu_k\}_{k\in\N}$, and $\{\xi_k\}_{k\in\N}$
in the considerations which preceded <ref> do not
need to be bounded, see <cit.> for related observations.
The following example is inspired by <cit.>.
We consider $X=Y=C:=\R$, set $f(x):=x$, $q(x):=0$, as well as $G(x):=x^2$ for all $x\in\R$, and fix $K:=(-\infty,0]$.
Let us investigate $\bar x:=0$.
Note that this is the only feasible point of the associated optimization problem (<ref>)
and, thus, its uniquely determined global minimizer.
Due to $f'(\bar x)=1$ and $G'(\bar x)=0$, $\bar x$ cannot be an M-stationary point
of (<ref>).
On the other hand, setting
\[
x_k:=0,\quad x_k':=-\frac{1}{2k},\quad y_k:=\frac{1}{4k^2},\quad\eta_k:=0,\quad\lambda_k:=k
\]
for each $k\in\N$, we have $x_k\to\bar x$, $x_k'\to\bar x$, $y_k\to 0$, $\eta_k\to 0$, as well as
$\eta_k=f'(x_k)+G'(x_k')^*\lambda_k$ and $\lambda_k\in N_K(G(x_k')-y_k)$ for each $k\in\N$, i.e.,
$\bar x$ is approximately stationary for (<ref>).
Observe that $\{\lambda_k\}_{k\in\N}$ is unbounded.
Let us underline that the above example demonstrates that local minimizers of (<ref>)
do not need to be M-stationary in general while approximate stationarity serves as a necessary optimality
condition under some assumptions on the data which are inherent in finite dimensions,
see <ref> and <ref>.
Nevertheless, M-stationarity turned out to be a celebrated stationarity condition in finite-dimensional
optimization. On the one hand, it is restrictive enough to exclude non-reasonable feasible points of
(<ref>) when used as a necessary optimality condition. On the other hand,
it is weak enough to hold at the local minimizers of (<ref>) under very
mild qualification conditions.
For instance, we would like to refer the reader to [Flegel et al., 2007] where this is visualized
by so-called disjunctive programs where $K$ is the union of finitely many polyhedral sets.
Another interest in M-stationarity arises from the fact that this system can often be solved directly
in order to identify reasonable feasible points of (<ref>), see e.g. [Guo et al., 2015, Harder et al., 2021].
In infinite-dimensional optimization, particularly, in optimal control, M-stationarity has turned out
to be of limited practical use since the limiting normal cone to nonconvex sets in function spaces
is uncomfortably large due to convexification effects arising when taking weak limits, see e.g. [Harder and Wachsmuth, 2018, Mehlitz and Wachsmuth, 2018].
Due to this interest in M-stationarity, at least from the finite-dimensional point of view,
we aim to find conditions guaranteeing that a given approximately stationary point of (<ref>) is
already M-stationary.
We say that the uniform qualification condition holds at $\bar x\in\mathcal S\cap\dom q$ whenever
\begin{align*}
\limsup\limits_{\substack{x\to\bar x,\,x'\to\bar x,\,x''\to\bar x,\\ y\to 0,\,q(x)\to q(\bar x)}}
&\left(\partial q(x)+G'(x')^*N_K(G(x')-y)+N_C(x'')\right)
\\
\subset
\bsd q(\bar x)+G'(\bar x)^*\overline N_K(G(\bar x))+\overline N_C(\bar x).
\end{align*}
By construction, the uniform qualification condition guarantees that a given approximately stationary
point of (<ref>) is already M-stationary as desired.
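As a sanity check in a simple setting of our choosing, let $X$ and $Y$ be finite-dimensional, $q\equiv0$, $C:=X$, $K:=\{0\}$,
and let $G$ be linear. Then $\partial q\equiv\{0\}$ and $N_C\equiv\{0\}$, while $N_K(G(x')-y)$ equals $Y^*$ if $Gx'=y$
and is empty otherwise. Hence, both sides of the inclusion defining the uniform qualification condition reduce to $\Im G^*$,
which is closed in finite dimensions, so the condition holds at every point of $\mathcal S$.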
Let $\bar x\in\mathcal S\cap\dom q$
satisfy the uniform qualification condition.
If $\bar x$ is
an approximately stationary
point of (<ref>),
then it is M-stationary.
By definition of approximate stationarity, for each $k\in\N$, we find $x_k,x'_k,x''_k\in B_{1/k}(\bar x)$,
$y_k\in \tfrac1k\mathbb B$, and $\eta_k\in\tfrac1k\mathbb B^*$ such that $|q(x_k)-q(\bar x)|<\tfrac1k$ and
$\eta_k-f'(x_k)\in \partial q(x_k)+G'(x'_k)^*N_K(G(x'_k)-y_k)+N_C(x''_k)$.
Since $f$ is assumed to be continuously differentiable, we find $\eta_k-f'(x_k)\to -f'(\bar x)$.
Thus, by validity of the uniform qualification condition, it holds
\begin{align*}
-f'(\bar x)
&\in\limsup\limits_{k\to+\infty}\left(\partial q(x_k)+G'(x'_k)^*N_K(G(x'_k)-y_k)+N_C(x''_k)\right)\\
&\subset
\bsd q(\bar x)+G'(\bar x)^*\overline N_K(G(\bar x))+\overline{N}_C(\bar x),
\end{align*}
i.e., $\bar x$ is an M-stationary point of (<ref>).
Combining this with <ref> yields the following result.
Let $\bar x\in\mathcal S\cap\dom q$ be a local minimizer of (<ref>) which
satisfies the assumptions of <ref>
as well as the uniform qualification condition. Then $\bar x$ is M-stationary.
Observe that we do not need any so-called sequential normal compactness condition, see
<cit.>, for the above statement to hold which pretty much contrasts
the results obtained in <cit.>. Indeed, sequential normal compactness
is likely to fail in the function space context related to optimal control, see [Mehlitz, 2019].
Let us point the reader's attention to the fact that the uniform qualification condition is not a constraint
qualification in the narrower sense for (<ref>) since it also depends
on (parts of) the objective function.
Nevertheless, <ref> shows that it may serve as a qualification
condition for M-stationarity of local minimizers under mild assumptions on the data.
In the absence of $q$, the uniform qualification condition is related
to other prominent so-called sequential or asymptotic constraint qualifications from the literature which address
several different kinds of optimization problems, see e.g.
[Andreani et al., 2019, Andreani et al., 2019, Andreani et al., 2016, Börgens et al., 2020, Mehlitz, 2020, Mehlitz, 2021, Ramos, 2021].
In <ref>, we demonstrate by means of a prominent setting from optimal control that
the uniform qualification condition may hold in certain situations where $q$ is present,
see <ref>.
Note that in the particular setting $q\equiv 0$, the uniform qualification condition from <ref>
at some point $\bar x\in\mathcal S$ simplifies to
\begin{equation}\label{eq:uniform_CQ}
\limsup\limits_{x'\to\bar x,\,x''\to \bar x,\,y\to 0}
\bigl(G'(x')^*N_K(G(x')-y)+N_C(x'')\bigr)
\subset G'(\bar x)^*\overline N_K(G(\bar x))+\overline{N}_C(\bar x).
\end{equation}
In the light of <ref> and <ref>,
(<ref>) serves as a constraint qualification guaranteeing M-stationarity of $\bar x$
under mild assumptions
as soon as this point is a local minimizer of the associated problem (<ref>).
One may, thus, refer to (<ref>) as the uniform constraint qualification.
Observations related to the ones from <ref> have been made in [Börgens et al., 2020],
<cit.>, and <cit.>
and underline that (<ref>) is a comparatively weak constraint qualification whenever $q\equiv 0$.
As an example, let us mention that whenever $X$ and $Y$ are finite-dimensional, the generalized Mangasarian–Fromovitz
constraint qualification
\begin{equation}\label{eq:GMFCQ}
-G'(\bar x)^*\lambda\in\overline N_C(\bar x),\,\lambda\in\overline N_K(G(\bar x))
\quad\Longrightarrow\quad
\lambda=0
\end{equation}
is sufficient for (<ref>) to hold, but the uniform constraint qualification
is often much weaker than (<ref>) which corresponds to metric regularity of $\Phi$
from <ref> at $(\bar x,(0,0))$, see <cit.> for related discussions.
Let us also mention that (<ref>) is sufficient for metric subregularity of $\Phi$ at $(\bar x,(0,0))$ exploited in <ref>.
The following proposition provides a sufficient condition for validity of the uniform qualification condition in
case where $X$ is finite-dimensional.
Let $X$ be finite-dimensional and $\bar x\in \mathcal S\cap\dom q$.
Suppose that the uniform constraint qualification (<ref>) is valid at $\bar x$, and
\begin{equation}
\label{eq:BCQ}
\bigl(G'(\bar x)^*\overline N_K(G(\bar x))+\overline N_C(\bar x)\bigr)
\cap(-\bsd^\infty q(\bar x))=\{0\}.
\end{equation}
Then the uniform qualification condition holds at $\bar x$.
Let us fix
\[
x^*\in\limsup\limits_{\substack{x\to\bar x,\,x'\to\bar x,\,x''\to\bar x,\\ y\to 0,\ q(x)\to q(\bar x)}}
\left(\partial q(x)+G'(x')^*N_K(G(x')-y)+N_C(x'')\right).
\]
Then we find sequences $\{x_k\}_{k\in\N},\{x_k'\}_{k\in\N},\{x_k''\}_{k\in\N}\subset X$,
$\{y_k\}_{k\in\N}\subset Y$,
and $\{x_k^*\}_{k\in\N}\subset X^*$ such that $x_k\to\bar x$, $x_k'\to\bar x$, $x_k''\to\bar x$,
$y_k\to 0$, $q(x_k)\to q(\bar x)$, and $x_k^*\to x^*$, as well as
$x_k^*\in\partial q(x_k)+G'(x'_k)^*N_K(G(x'_k)-y_k)+N_C(x''_k)$ for all $k\in\N$.
Thus, there are sequences $\{u_k^*\}_{k\in\N},\{v^*_k\}_{k\in\N}\subset X^*$ such that
$x_k^*=u_k^*+v_k^*$, $u_k^*\in\partial q(x_k)$, and
$v_k^*\in G'(x_k')^*N_K(G(x_k')-y_k)+ N_C(x_k'')$ for all $k\in\N$.
Let us assume that $\{u_k^*\}_{k\in\N}$ is unbounded. Then, due to $x_k^*\to x^*$,
$\{v_k^*\}_{k\in\N}$ is unbounded, too.
For each $k\in\N$, we define
$\tilde u_k^*:=u_k^*/(\norm{u_k^*}+\norm{v_k^*})$ and
$\tilde v_k^*:=v_k^*/(\norm{u_k^*}+\norm{v_k^*})$, i.e.,
the sequence $\{(\tilde u_k^*,\tilde v_k^*)\}_{k\in\N}$ belongs to the unit sphere of
$X^*\times X^*$. Without loss of generality, we may assume $\tilde u_k^*\to\tilde u^*$ and
$\tilde v_k^*\to\tilde v^*$ for some $\tilde u^*,\tilde v^*\in X^*$ since $X$
is finite-dimensional. We note that $\tilde u^*$ and $\tilde v^*$ cannot vanish at the same time.
Taking the limit in $x_k^*/(\norm{u_k^*}+\norm{v_k^*})=\tilde u_k^*+\tilde v_k^*$,
we obtain $0=\tilde u^*+\tilde v^*$.
By definition of the singular limiting subdifferential, we have $\tilde u^*\in\bsd^\infty q(\bar x)$, and
\[
\tilde v^*
\in
\limsup
\limits_{k\to+\infty}\bigl(G'(x_k')^*N_K(G(x_k')-y_k)+N_C(x_k'')\bigr)
\subset
G'(\bar x)^*\overline N_K(G(\bar x))+\overline N_C(\bar x)
\]
follows by the uniform constraint qualification (<ref>).
Thus, we find $\tilde u^*=\tilde v^*=0$ from condition (<ref>).
The latter, however, contradicts $(\tilde u^*,\tilde v^*)\neq(0,0)$.
From above, we now know that $\{u_k^*\}_{k\in\N}$ and $\{v_k^*\}_{k\in\N}$ are bounded. Without
loss of generality, we may assume $u_k^*\to u^*$ and $v_k^*\to v^*$ for some $u^*,v^*\in X^*$.
By definition of the limiting subdifferential we have $u^*\in\bsd q(\bar x)$, and
$v^*\in G'(\bar x)^*\overline N_K(G(\bar x))+\overline N_C(\bar x)$
is guaranteed by the uniform constraint qualification (<ref>).
Thus, we end up with
$x^*\in \bsd q(\bar x)+G'(\bar x)^*\overline N_K(G(\bar x))+\overline N_C(\bar x)$
which completes the proof.
<Ref> shows that in case where $X$ is finite-dimensional,
validity of the uniform qualification condition can be guaranteed in the presence of two conditions.
The first one, represented by condition (<ref>),
is a sequential constraint
qualification which guarantees regularity of the constraints at the reference point.
The second one, given by condition (<ref>), ensures in some sense that the challenging
part of the objective function and the constraints of (<ref>) are somewhat
compatible at the reference point. A similar decomposition of qualification conditions has been
used in [Chen et al., 2017, Guo and Ye, 2018] in order to ensure M-stationarity of standard nonlinear problems in
finite dimensions with a composite objective function. In the latter papers, the authors referred
to a condition of type (<ref>) as basic qualification, and this terminology can be
traced back to the works of Mordukhovich, see e.g. [Mordukhovich, 2006].
Note that in order to transfer <ref> to the
infinite-dimensional setting, one would need to postulate sequential compactness
properties on $q$ or the constraint data which are likely to fail in several interesting function
spaces, see [Mehlitz, 2019] again.
§.§ Augmented Lagrangian methods for optimization problems with non-Lipschitzian objective functions
We consider the optimization problem (<ref>)
such that $X$ is an Asplund space, $Y$ is a Hilbert space with $Y\cong Y^*$,
and $K$ is convex.
Let us note that the assumption on $Y$ can be relaxed by assuming the existence of a Hilbert space
$H$ with $H\cong H^*$ such that $(Y,H,Y^*)$ is a Gelfand triplet, see
<cit.> or [Börgens et al., 2019, Kanzow et al., 2018] for a discussion.
Furthermore, we will exploit the following standing assumption throughout this section.
At least one of the following assumptions is valid.
* The space $X$ is finite-dimensional.
* The function $q$ is uniformly continuous.
* The functions $f$, $q$, and $x\mapsto \dist_K^2(G(x))$ are weakly sequentially lower semicontinuous
and $C$ is weakly sequentially closed. Furthermore, $X$ is reflexive.
Throughout this
subsection, we assume that $C$ is a comparatively simple set, e.g., a box if $X$ is equipped with
a (partial) order relation, while the constraints $G(x)\in K$ are difficult and will be treated
with the aid of a multiplier-penalty approach.
In this regard, for some penalty parameter
$\theta>0$, we investigate the (partial) augmented Lagrangian function
$\LL_\theta\colon X\times Y\to\R_\infty$ given by
\[
\forall (x,\lambda)\in X\times Y\colon\quad
\LL_\theta(x,\lambda)
:=
f(x)+q(x)+\frac{\theta}{2}\dist_K^2\left(G(x)+\frac{\lambda}{\theta}\right)-\frac{\norm{\lambda}^2}{2\theta}.
\]
We would like to point the reader's attention to the fact that the squared-distance
summand in the definition of $\LL_\theta$ is
continuously differentiable since the squared distance to a convex set possesses
this property.
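To make this concrete, the following self-contained sketch (our illustration; $K$ is taken to be a hypothetical box in $\R^n$, so that $P_K$ is a componentwise clip) verifies the well-known gradient formula $\nabla\dist_K^2(z)=2(z-P_K(z))$ for convex closed $K$ against central finite differences.
```python
import numpy as np

# Illustration only: K = [lo, up]^n is a box, so P_K is a componentwise clip.
# The squared distance z -> dist_K^2(z) is continuously differentiable with
# gradient 2*(z - P_K(z)); we check this against central finite differences.
rng = np.random.default_rng(0)
lo, up = -1.0, 1.0

def proj_K(z):
    return np.clip(z, lo, up)

def sq_dist_K(z):
    return np.sum((z - proj_K(z)) ** 2)

z = rng.normal(scale=2.0, size=5)
grad_analytic = 2.0 * (z - proj_K(z))
eps = 1e-6
grad_fd = np.array([
    (sq_dist_K(z + eps * e) - sq_dist_K(z - eps * e)) / (2.0 * eps)
    for e in np.eye(z.size)
])
print(np.max(np.abs(grad_analytic - grad_fd)))  # close to zero
```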
For the control of the penalty parameter, we make use of the function $V_\theta\colon X\times Y\to\R$
given by
\[
\forall (x,y)\in X\times Y\colon\quad
V_\theta(x,y):=\norm{G(x)-P_K\left(G(x)+\frac{y}{\theta}\right)}.
\]
The method of interest is now given as stated in <ref>.
* Choose $(x_0,\lambda_0)\in (\dom q)\times Y$, $\theta_0>0$, $\gamma>1$,
$\tau\in(0,1)$, and a nonempty, bounded set $B\subset Y$ arbitrarily.
Set $k:=0$.
If $(x_k,\lambda_k)$ satisfies a suitable termination criterion,
then stop.
Choose $u_k\in B$ and find an approximate solution $x_{k+1}\in C\cap\dom q$ of
\begin{equation}\label{eq:ALM_subproblem}
\min\{\LL_{\theta_{k}}(x,u_k)\,|\,x\in C\}.
\end{equation}
Set
\[
\lambda_{k+1}:=\theta_k\left[G(x_{k+1})+u_k/\theta_k-P_K\left(G(x_{k+1})+u_k/\theta_k\right)\right].
\]
If $k=0$ or $V_{\theta_k}(x_{k+1},u_k)\leq\tau\,V_{\theta_{k-1}}(x_k,u_{k-1})$,
then set $\theta_{k+1}:=\theta_k$. Otherwise, set $\theta_{k+1}:=\gamma\,\theta_k$.
* Set $k:=k+1$ and go to <ref>.
Safeguarded augmented Lagrangian method for (<ref>).
We would like to point the reader's attention to the fact that <ref> is a so-called
safeguarded augmented Lagrangian method since the multiplier estimates $u_k$ are chosen
from the bounded set $B$. In practice, one typically chooses $B$ as a (very large) box, and
defines $u_k$ as the projection of $\lambda_k$ onto $B$ in <ref>.
Note that without safeguarding, one
obtains the classical augmented Lagrangian method. However, it is well known that the safeguarded
version possesses superior global convergence properties, see [Kanzow and Steck, 2017].
An overview of augmented Lagrangian methods in constrained optimization can be found in
[Birgin and Martínez, 2014].
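For illustration, the following Python sketch (ours; the handles inner_solve, proj_K, and proj_B are hypothetical placeholders, where inner_solve is assumed to return an approximate solution of the subproblem over $C$, and the termination test is simplified to a feasibility tolerance) mirrors the safeguarded loop of <ref> in finite dimensions.
```python
import numpy as np

# A minimal sketch of the safeguarded augmented Lagrangian loop, assuming
# inner_solve(L, x_init) approximately minimizes L over C, proj_K and proj_B
# are projections onto K and the safeguarding set B, and everything lives in
# finite dimensions.
def safeguarded_alm(f, q, G, proj_K, proj_B, inner_solve,
                    x0, lam0, theta0=1.0, gamma=10.0, tau=0.5,
                    max_iter=50, tol=1e-8):
    x, lam, theta = x0, lam0, theta0
    V_old = np.inf
    for k in range(max_iter):
        u = proj_B(lam)                         # safeguarded multiplier estimate

        def L(z):                               # (partial) augmented Lagrangian
            shifted = G(z) + u / theta
            d = shifted - proj_K(shifted)       # shifted - P_K(shifted)
            return (f(z) + q(z) + 0.5 * theta * np.dot(d, d)
                    - np.dot(u, u) / (2.0 * theta))

        x = inner_solve(L, x)                   # approximately solve subproblem
        shifted = G(x) + u / theta
        lam = theta * (shifted - proj_K(shifted))     # multiplier update
        V = np.linalg.norm(G(x) - proj_K(shifted))    # V_theta(x, u)
        if V <= tol:
            break                               # simplified termination test
        if k > 0 and V > tau * V_old:           # insufficient progress:
            theta *= gamma                      # increase penalty parameter
        V_old = V
    return x, lam, theta
```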
Let us comment on potential termination criteria for <ref>.
On the one hand, <ref> is designed for the computation of M-stationary points of
(<ref>) which, at the latest, will become clear
in <ref>. Thus, one may check approximate validity of these
stationarity conditions in <ref>. However, if $q$ or $C$ is variationally
challenging, this might be a nontrivial task. On the other hand, at its core, <ref> is
a penalty method, so it is also reasonable to check approximate feasibility with respect to the
constraints $G(x)\in K$ in <ref>.
In [Chen et al., 2017], the authors suggest solving (<ref>),
where all involved spaces are instances of $\R^n$ while the constraints $G(x)\in K$ are replaced
by smooth inequality and equality constraints, with the classical augmented Lagrangian
method. In case where $q$ is not present and $X$ as well as $Y$ are Euclidean spaces,
<ref> recovers the partial augmented Lagrangian
scheme studied in [Jia et al., 2021] where the authors focus on situations where
$C$ is nonconvex and of challenging variational structure. We note that, technically, <ref>
is also capable of handling this situation. However, it might be difficult to solve the appearing
subproblems (<ref>) if both $q$ and $C$ are variationally complex.
Note that we did not specify in <ref> how precisely the subproblems have to
be solved. For example, one could aim to find stationary or globally $\varepsilon$-minimal points of the function
$\LL_{\theta_k}(\cdot,u_k)_C$ here. We comment on both situations below.
Our theory from <ref> can be used to show that <ref> computes
approximately stationary points of (<ref>) when the subproblems (<ref>)
are solved up to stationarity of $\LL_{\theta_k}(\cdot,u_k)_C$.
Let $\{x_k\}_{k\in\N}$ be a sequence generated by <ref> such that
$x_{k+1}$ is a stationary point of $\LL_{\theta_k}(\cdot,u_k)_C$ for each $k\in\N$.
Assume that, along a subsequence (without relabeling), we have
$x_k\to\bar x$ and $q(x_k)\to q(\bar x)$ for some $\bar x\in X$ which is feasible to (<ref>).
Then $\bar x$ is an approximately stationary point of (<ref>).
Observe that <ref> guarantees that $\LL_{\theta_k}(\cdot,u_k)$ is lower
semicontinuous relative to $C$ near each point from $C\cap\dom q$, see <ref>.
Since $x_{k+1}$ is a stationary point of $\LL_{\theta_k}(\cdot,u_k)_C$, we can apply
<ref> and <ref> in order to find
$x_{k+1}'\in B_{1/k}(x_{k+1})$ and $x_{k+1}''\in C\cap B_{1/k}(x_{k+1})$ such that $|q(x_{k+1}')-q(x_{k+1})|<\tfrac1k$ and
\[
0\in\partial \LL_{\theta_{k}}(x_{k+1}',u_k)+N_C(x_{k+1}'')+\tfrac1k\,\mathbb B^*
\]
for each $k\in\N$.
From $x_k\to\bar x$ and $q(x_k)\to q(\bar x)$ we have $x_k'\to \bar x$, $x_k''\to\bar x$, and $q(x_k')\to q(\bar x)$.
Noting that $f$, $G$, and, by convexity of $K$, the squared distance function $\dist_K^2$
are continuously differentiable, we find
\begin{equation}\label{eq:non_Lipschitz_asymptotic_stationarity}
\begin{aligned}
0\in f'(x_{k+1}')
+\partial q(x_{k+1}')
+\theta_k\,G'(x_{k+1}')^*
\left[
G(x_{k+1}')+u_k/\theta_k-P_K\left(G(x_{k+1}')+u_k/\theta_k\right)
\right]
\\
+N_C(x_{k+1}'')
+\tfrac1k\,\mathbb B^*
\end{aligned}
\end{equation}
for each $k\in\N$ where we used
the subdifferential sum rule from <cit.>.
Let us set $y_{k+1}:=G(x_{k+1}')-P_K(G(x_{k+1}')+u_k/\theta_k)$
for each $k\in\N$. By definition of the projection and convexity of $K$, we find
\begin{align*}
\theta_k(y_{k+1}+u_k/\theta_k)
\in
N_K\left(P_K\left(G(x_{k+1}')+u_k/\theta_k\right)\right)
=
N_K\left(G(x_{k+1}')-y_{k+1}\right),
\end{align*}
so we can rewrite (<ref>) by means of
\begin{equation}\label{eq:non_Lipschitz_asymptotic_stationarity_refined}
0\in f'(x_{k+1}')+\partial q(x_{k+1}')+ G'(x_{k+1}')^*N_K(G(x_{k+1}')-y_{k+1})
+N_C(x_{k+1}'')+\tfrac1k\,\mathbb B^*
\end{equation}
for each $k\in\N$.
It remains to show $y_{k+1}\to 0$.
We distinguish two cases.
First, assume that $\{\theta_k\}_{k\in\N}$ remains bounded.
By construction of <ref>, this yields $V_{\theta_k}(x_{k+1},u_k)\to 0$ as $k\to+\infty$.
Recalling that the projection $P_K$ is Lipschitz continuous with modulus $1$ by convexity
of $K$, we have
\begin{align*}
\norm{y_{k+1}}
&\leq
\norm{G(x_{k+1}')-G(x_{k+1})}
+\norm{G(x_{k+1})-P_K\left(G(x_{k+1})+u_k/\theta_k\right)}
\\
&\qquad
+\norm{P_K(G(x_{k+1}')+u_k/\theta_k)-P_K(G(x_{k+1})+u_k/\theta_k)}
\\
&\leq
2\,\norm{G(x_{k+1}')-G(x_{k+1})}+V_{\theta_k}(x_{k+1},u_k)
\end{align*}
for each $k\in\N$. Due to $x_k\to \bar x$ and $x_k'\to\bar x$, continuity of $G$,
and $V_{\theta_k}(x_{k+1},u_k)\to 0$, this yields $y_{k+1}\to 0$.
Finally, suppose that $\{\theta_k\}_{k\in\N}$ is unbounded. Since this sequence is monotonically
increasing, we have $\theta_k\to+\infty$.
By boundedness of $\{u_k\}_{k\in\N}$, continuity of $G$ as well as the projection $P_K$,
$x_k'\to\bar x$, and feasibility of $\bar x$ for (<ref>), it holds
\[
y_{k+1}
=
G(x_{k+1}')-P_K\left(G(x_{k+1}')+u_k/\theta_k\right)
\to
G(\bar x)-P_K(G(\bar x))
=0,
\]
and this completes the proof.
Let us mention that the assumption $q(x_k)\to q(\bar x)$ is trivially satisfied as soon as
$q$ is continuous on its domain. For other types of discontinuity, however, this does not follow
by construction of the method and has to be presumed. Let us note that this convergence is also implicitly
used in the proof of the related result <cit.> but does not follow
from the postulated assumptions, i.e., this assumption is missing there.
Note that demanding feasibility of accumulation points is a natural assumption when considering
augmented Lagrangian methods. This property naturally holds whenever the sequence $\{\theta_k\}_{k\in\N}$
remains bounded or if $q$ is bounded from below while the sequence $\{\LL_{\theta_k}(x_{k+1},u_k)\}_{k\in\N}$ remains bounded.
The latter assumption is typically satisfied whenever globally $\varepsilon_k$-minimal points of $\LL_{\theta_k}(\cdot,u_k)_C$
can be computed in order to approximately solve the
subproblems (<ref>) in <ref>, where
$\{\varepsilon_k\}_{k\in\N}\subset[0,+\infty)$ is a bounded sequence. Indeed, we have
\begin{equation}\label{eq:consequence_of_eps_minimality}
\forall x\in\mathcal S\colon\quad
\LL_{\theta_k}(x_{k+1},u_k)
\leq
\LL_{\theta_k}(x,u_k)+\varepsilon_k
\leq
f(x)+q(x)+\varepsilon_k
\end{equation}
in this situation, and this yields the claim by boundedness of $\{u_k\}_{k\in\N}$ and monotonicity
of $\{\theta_k\}_{k\in\N}$. If $\{\varepsilon_k\}_{k\in\N}$ is a null sequence, we obtain an even
stronger result.
Let $\{x_k\}_{k\in\N}\subset X$ be a sequence generated by <ref> and let $\{\varepsilon_k\}_{k\in\N}\subset[0,+\infty)$
be a null sequence such that $x_{k+1}$ is a globally $\varepsilon_k$-minimal point of $\LL_{\theta_k}(\cdot,u_k)_C$ for each $k\in\N$.
Then each accumulation point $\bar x\in X$ of $\{x_k\}_{k\in\N}$ is a global minimizer of
(<ref>) and, along the associated subsequence, we find $q(x_k)\to q(\bar x)$.
Without loss of generality, we assume $x_k\to\bar x$.
By closedness of $C$, we have $\bar x\in C$.
The estimate (<ref>) yields
\begin{equation}\label{eq:consequence_of_eps_minimality_rearranged}
f(x_{k+1})+q(x_{k+1})
\leq
f(x)+q(x)+\varepsilon_k
+\frac{\theta_k}{2}\left((\norm{u_k}/\theta_k)^2-\dist_K^2\left(G(x_{k+1})+u_k/\theta_k\right)\right)
\end{equation}
for each $x\in\mathcal S$.
We show the statement of the theorem by distinguishing two cases.
In case where $\{\theta_k\}_{k\in\N}$ remains bounded, we find $\dist_K(G(x_{k+1}))\leq V_{\theta_k}(x_{k+1},u_k)\to 0$
from <ref>, so the continuity of the distance function $\dist_K$ and $G$ yields $G(\bar x)\in K$, i.e.,
$\bar x$ is feasible to (<ref>). Using the triangle inequality, we also obtain
\[
\dist_K(G(x_{k+1})+u_k/\theta_k)
\leq
\dist_K(G(x_{k+1}))+\norm{u_k}/\theta_k
\leq
V_{\theta_k}(x_{k+1},u_k)+\norm{u_k}/\theta_k
\]
for each $k\in\N$. Squaring on both sides and exploiting the boundedness of $\{u_k\}_{k\in\N}$ and $V_{\theta_k}(x_{k+1},u_k)\to 0$ yields
\[
\limsup\limits_{k\to+\infty}\left(\dist_K^2\left(G(x_{k+1})+u_k/\theta_k\right)-(\norm{u_k}/\theta_k)^2\right)\leq 0.
\]
The reverse estimate
$\dist_K(G(x_{k+1})+u_k/\theta_k)\geq\norm{u_k}/\theta_k-V_{\theta_k}(x_{k+1},u_k)$
analogously gives
$\liminf_{k\to+\infty}(\dist_K^2(G(x_{k+1})+u_k/\theta_k)-(\norm{u_k}/\theta_k)^2)\geq 0$,
so this difference tends to zero.
The boundedness of $\{\theta_k\}_{k\in\N}$ and (<ref>) thus show
$\limsup_{k\to+\infty}(f(x_{k+1})+q(x_{k+1}))\leq f(x)+q(x)$ for each $x\in\mathcal S$.
Exploiting the lower semicontinuity of $q$, this leads to $f(\bar x)+q(\bar x)\leq f(x)+q(x)$, i.e., $\bar x$ is a global minimizer
of (<ref>). On the other hand, we have
\[
f(\bar x)+q(\bar x)
\leq
\liminf\limits_{k\to+\infty}\left(f(x_{k+1})+q(x_{k+1})\right)
\leq
\limsup\limits_{k\to+\infty}\left(f(x_{k+1})+q(x_{k+1})\right)
\leq
f(\bar x)+q(\bar x)
\]
from the particular choice $x:=\bar x$, so the continuity of $f$ yields $q(x_k)\to q(\bar x)$ as claimed.
Now, let us assume that $\{\theta_k\}_{k\in\N}$ is not bounded. Then we have $\theta_k\to+\infty$ from <ref>.
By choice of $x_{k+1}$, we have $\LL_{\theta_k}(x_{k+1},u_k)\leq \LL_{\theta_k}(x,u_k)+\varepsilon_k$ for all $x\in C$ and each $k\in\N$,
so the definition of the augmented Lagrangian function yields
\[
f(x_{k+1})+q(x_{k+1})+\frac{\theta_k}{2}\dist_K^2\left(G(x_{k+1})+u_k/\theta_k\right)
\leq
f(x)+q(x)+\frac{\theta_k}{2}\dist_K^2\left(G(x)+u_k/\theta_k\right)+\varepsilon_k
\]
for each $x\in C$. By continuity of $f$ and lower semicontinuity of $q$, $\{f(x_{k+1})+q(x_{k+1})\}_{k\in\N}$ is bounded from below.
Thus, dividing the above estimate by $\theta_k$ and taking the limit inferior, we find
\begin{align*}
\dist_K^2(G(\bar x))
&\leq
\liminf\limits_{k\to+\infty} \dist_K^2\left(G(x_{k+1})+u_k/\theta_k\right)\\
&\leq
\liminf\limits_{k\to+\infty} \dist_K^2\left(G(x)+u_k/\theta_k\right)
=
\dist_K^2(G(x))
\end{align*}
for each $x\in C$ from $\theta_k\to+\infty$ and continuity of $\dist_K$ and $G$. Hence, $\bar x$ is a global minimizer of $\dist_K^2\circ G$
over $C$. Since $\mathcal S$ is assumed to be nonempty, we infer $\dist_K^2(G(\bar x))=0$, i.e., $\bar x$ is feasible to (<ref>).
Exploiting boundedness of $\{u_k\}_{k\in\N}$, nonnegativity of the distance function, and $\theta_k\to+\infty$, we now obtain
$\limsup_{k\to+\infty}(f(x_{k+1})+q(x_{k+1}))\leq f(x)+q(x)$ for each $x\in\mathcal S$ from (<ref>).
Proceeding as in the first case now yields the claim.
It remains to clarify how the subproblems (<ref>) can be solved in practice.
If the non-Lipschitzness of $q$ is, in some sense, structured while $C$ is of simple form, it should be
reasonable to solve (<ref>) with the aid of a nonmonotone proximal gradient method,
see <cit.>.
On the other hand, in situations where $q$ is not present while $C$ possesses a variational
structure which allows for the efficient computation of projections, a nonmonotone spectral gradient method might be used to
solve (<ref>), see <cit.>.
Finally, it might even be possible to solve (<ref>) up to global optimality
analytically in some practically relevant applications where $q$ is a standard sparsity-promoting
term and the remaining data is simple enough.
Coming back to the assertion of <ref>, the following is now clear from <ref>.
Let $\{x_k\}_{k\in\N}$ be a sequence generated by <ref> such that $x_{k+1}$ is a stationary point
of $\LL_{\theta_k}(\cdot,u_k)_C$ for each $k\in\N$.
Assume that, along a subsequence (without relabeling), we have
$x_k\to\bar x$ and $q(x_k)\to q(\bar x)$ for some $\bar x\in X$ which is feasible to (<ref>)
and satisfies the uniform qualification condition.
Then $\bar x$ is M-stationary.
Note that in the light of <ref>,
<ref> drastically generalizes and improves
<cit.> which shows global convergence of a related augmented
Lagrangian method to certain stationary points under validity of a basic qualification,
see condition (<ref>), and the
relaxed constant positive linear dependence constraint qualification which is more restrictive
than condition (<ref>)
in the investigated setting, see <cit.> as well.
Let us mention that such a result has been foreshadowed in <cit.>.
We would like to point the reader's attention to the fact that working with strong
accumulation points in the context of <ref> and
<ref> is indispensable as long as $q$ or the sets $K$ and $C$ are not
convex since the limiting variational tools rely on strong convergence in the primal space.
In the absence of $q$ and if $K$ and $C$ are convex, some convergence results based on weak
accumulation points are available, see e.g. <cit.> and [Börgens et al., 2019, Kanzow et al., 2018].
Clearly, in finite dimensions, both types of convergence are equivalent and the consideration
of strong accumulation points is not restrictive at all.
§.§ Sparsity-promotion in optimal control
In this section, we apply the theory derived earlier to an optimal control problem
with a sparsity-promoting term in the objective function.
As it is common to denote control functions by $u$ in the context of optimal control,
we will use the same notation here for the decision variable for notational convenience.
For some bounded domain $D\subset\R^d$ and some $p\in(0,1)$, we define a function
$q\colon L^2(D)\to\R$ by means of
\begin{equation}\label{eq:sparsity_promoting_functional}
\forall u\in L^2(D)\colon
\quad
q(u):=\int_D |u(\omega)|^p\,\mathrm d\omega.
\end{equation}
Above, $L^2(D)$ denotes the standard Lebesgue space of (equivalence classes of)
measurable functions whose square is integrable and is equipped with the usual norm.
In optimal control, the function $q$ is used as an additive term in the objective function
in order to promote sparsity of underlying control
functions, see [Ito and Kunisch, 2014, Natemeyer and Wachsmuth, 2021, Wachsmuth, 2019].
A reason for this behavior is that the integrand $t\mapsto |t|^p$ possesses a unique global minimizer
at the origin, where it grows with infinite slope.
In [Mehlitz and Wachsmuth, 2021], the authors explore the variational properties of the
functional $q$. It has been shown to be uniformly continuous in <cit.>.
Furthermore, in <cit.>, the following formula has been proven
for each $\bar u\in L^2(D)$:
\begin{equation}\label{eq:subdifferentials_sparsity}
\bsd q(\bar u)=\sd q(\bar u)
=
\bigl\{
\eta\in L^2(D)\,|\,
\eta= p\abs{\bar u}^{p-2}\bar u\text{ a.e.\ on }\{\bar u\neq 0\}
\bigr\}.
\end{equation}
Let us emphasize that this means that the Fréchet and limiting subdifferential actually coincide
and can be empty if the reference point is a function which tends to zero too fast somewhere on its
domain. This underlines the sparsity-promoting properties of $q$.
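The blow-up of the pointwise subgradient as the control approaches zero can be read off directly from the formula above; the following toy computation (ours, for $p=1/2$) illustrates it.
```python
import numpy as np

# On {u != 0}, any subgradient of q must equal eta = p*|u|^(p-2)*u pointwise,
# so |eta| = p*|u|^(p-1) blows up as u -> 0 (here p = 1/2).
p = 0.5
for u in [1.0, 1e-1, 1e-2, 1e-4]:
    eta = p * np.abs(u) ** (p - 2) * u
    print(f"u = {u:8.0e}   eta = {eta:12.4e}")
# |eta| grows like |u|^(-1/2): an L^2 multiplier rules out controls that
# tend to zero too quickly, in line with the discussion above.
```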
Now, for a continuously differentiable function $f\colon L^2(D)\to\R$ and functions
$u_a,u_b\in L^2(D)$ satisfying $u_a<0<u_b$ almost everywhere on $D$,
we consider the optimization problem
\begin{equation}\label{eq:optimal_control}\tag{OC}
\min\limits_u\{f(u)+q(u)\,|\,u\in C\}
\end{equation}
where $C\subset L^2(D)$ is given by the box
\[
C:=\{u\in L^2(D)\,|\,u_a\leq u\leq u_b\text{ a.e.\ on }D\}.
\]
For later use, let us mention that, for each $u\in C$, the (Fréchet) normal cone to $C$ at $u$ is
given by the pointwise representation
\begin{equation}\label{eq:normal_cone_to_pointwise_box}
N_C(u)=
\left\{
\eta\in L^2(D)\,\middle|\,
\begin{aligned}
&\eta\leq 0&&\text{a.e.\ on $\{u<u_b\}$}\\
&\eta\geq 0&&\text{a.e.\ on $\{u_a<u\}$}
\end{aligned}
\right\}.
\end{equation}
Typically, in optimal control, $f$ is a function of type
\begin{equation}\label{eq:target_type_objective}
\forall u\in L^2(D)\colon\quad
f(u):=\frac12\norm{S(u)-y_\textup{d}}_H^2+\frac{\sigma}{2}\norm{u}_{L^2(D)}^2
\end{equation}
where $S\colon L^2(D)\to H$ is the continuously differentiable
control-to-observation operator associated with a given
system of differential equations, $H$ is a Hilbert space, $y_\textup{d}\in H$ is the desired state, and $\sigma\geq 0$ is a
regularization parameter. Clearly, by means of the chain rule, $f$ is continuously
differentiable with derivative given by
\[
\forall u\in L^2(D)\colon\quad
f'(u)=S'(u)^*[S(u)-y_\textup{d}]+\sigma u.
\]
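As a toy check of this derivative formula (ours; the matrix $A$ below is a hypothetical stand-in for a discretized linear control-to-observation operator $S$, so that $S'(u)^*$ is simply $A^\top$), one may compare it with finite differences:
```python
import numpy as np

# Toy check: S(u) = A @ u linear, f(u) = 0.5*||S(u)-y_d||^2 + 0.5*sigma*||u||^2,
# so f'(u) = A^T (A u - y_d) + sigma * u.
rng = np.random.default_rng(1)
A = rng.normal(size=(8, 5))
y_d = rng.normal(size=8)
sigma = 0.1

f = lambda u: 0.5 * np.sum((A @ u - y_d) ** 2) + 0.5 * sigma * np.sum(u ** 2)
f_prime = lambda u: A.T @ (A @ u - y_d) + sigma * u

u = rng.normal(size=5)
eps = 1e-6
fd = np.array([(f(u + eps * e) - f(u - eps * e)) / (2.0 * eps) for e in np.eye(5)])
print(np.max(np.abs(f_prime(u) - fd)))  # close to zero
```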
The presence of $q$ in the objective functional of
(<ref>) enforces sparsity of its solutions, i.e., the support of optimal controls
is likely to be small. It already has been mentioned in [Ito and Kunisch, 2014, Natemeyer and Wachsmuth, 2021]
that one generally cannot show existence of solutions to optimization problems of type
(<ref>). Nevertheless, the practical need for sparse controls makes it attractive
to consider the model and to derive necessary optimality conditions in order to identify reasonable
stationary points.
In the subsequent lemma, we show that the feasible points of (<ref>) satisfy
the uniform qualification condition stated in <ref>.
Let $\bar u\in L^2(D)$ be a feasible point of (<ref>).
Then the uniform qualification condition holds at $\bar u$.
Recalling that $q$ is continuous while $C$ is convex,
the uniform qualification condition takes the simplified form
\[
\limsup\limits_{u\to\bar u,\,u'\to\bar u}
\bigl(\sd q(u)+N_C(u')\bigr)
\subset
\bsd q(\bar u)+N_C(\bar u).
\]
Let us fix some point $\eta\in\limsup_{u\to\bar u,\,u'\to\bar u}\bigl(\sd q(u)+N_C(u')\bigr)$.
Then we find sequences
$\{u_k\}_{k\in\N},\{u_k'\}_{k\in\N},\{\eta_k\}_{k\in\N}\subset L^2(D)$
such that $u_k\to\bar u$, $u_k'\to\bar u$, $\eta_k\to \eta$, as well as
$\eta_k\in\sd q(u_k)+N_C(u_k')$ for all $k\in\N$.
Particularly, there are sequences $\{\xi_k\}_{k\in\N},\{\mu_k\}_{k\in\N}\subset L^2(D)$
such that $\xi_k\in\sd q(u_k)$, $\mu_k\in N_C(u_k')$, and $\eta_k=\xi_k+\mu_k$ for all $k\in\N$.
From (<ref>) we find $\xi_k=p\abs{u_k}^{p-2}u_k$ almost
everywhere on $\{u_k\neq 0\}$ for each $k\in\N$. Furthermore, we have $\mu_k\leq 0$
almost everywhere on $\{u_k'=u_a\}$, $\mu_k\geq 0$ almost everywhere on $\{u_k'=u_b\}$,
and $\mu_k=0$ almost everywhere on $\{u_a<u_k'<u_b\}$ for each $k\in\N$
from (<ref>).
Along a subsequence (without relabeling) we can ensure the convergences
$u_k(\omega)\to\bar u(\omega)$, $u_k'(\omega)\to\bar u(\omega)$, and $\eta_k(\omega)\to \eta(\omega)$
for almost every $\omega\in D$.
Thus, for almost every $\omega\in\{\bar u=u_a\}$, we can guarantee $u_k(\omega)<0$ and $u_k'(\omega)\in[u_a(\omega),0)$,
i.e., $\eta_k(\omega)=\xi_k(\omega)+\mu_k(\omega)\leq p|u_k(\omega)|^{p-2}u_k(\omega)$
for all large enough $k\in\N$, so, taking the
limit yields $\eta(\omega)\leq p\abs{\bar u(\omega)}^{p-2}\bar u(\omega)$.
Similarly, we find $\eta(\omega)\geq p\abs{\bar u(\omega)}^{p-2}\bar u(\omega)$ for almost every
$\omega\in\{\bar u=u_b\}$. Finally, for almost every $\omega\in\{\bar u\neq 0\}\cap\{u_a<\bar u<u_b\}$,
we have $u_k(\omega)\neq 0$ and $u_a(\omega)<u_k'(\omega)<u_b(\omega)$, i.e.,
$\eta_k(\omega)=p\abs{u_k(\omega)}^{p-2}u_k(\omega)$ for large enough $k\in\N$, so taking
the limit, we have $\eta(\omega)=p\abs{\bar u(\omega)}^{p-2}\bar u(\omega)$.
Again, from (<ref>) and (<ref>),
we have $\eta\in\bsd q(\bar u)+N_C(\bar u)$,
and this yields the claim.
Recalling that $q$ is uniformly continuous, the subsequent result now directly follows
from <ref>, the above lemma, and formulas
(<ref>) as well as (<ref>).
Let $\bar u\in L^2(D)$ be a local minimizer of (<ref>).
Then there exists a function $\eta\in L^2(D)$ such that
\begin{align}
\label{eq:sparse_control_der}
&f'(\bar u)+\eta=0,\\
\label{eq:sparse_control_subgradient_q}
&\eta=p|\bar u|^{p-2}\bar u\quad\text{a.e.\ on }\{\bar u\neq 0\}\cap\{u_a<\bar u<u_b\},\\
\label{eq:sparse_control_normal_cone_Uad_ua}
&\eta\leq p\abs{u_a}^{p-2}u_a\quad\text{a.e.\ on }\{\bar u=u_a\},\\
\label{eq:sparse_control_normal_cone_Uad_ub}
&\eta\geq p\abs{u_b}^{p-2}u_b\quad\text{a.e.\ on }\{\bar u=u_b\}.
\end{align}
We note that our approach to obtaining necessary optimality conditions for (<ref>)
is quite different from the one used in [Ito and Kunisch, 2014, Natemeyer and Wachsmuth, 2021], where
Pontryagin's maximum principle has been used to derive pointwise conditions characterizing
local minimizers under more restrictive assumptions than the ones needed here.
On the one hand, this led to optimality conditions which also provide information on
the subset of $D$ where the locally optimal control is zero, and one can easily see
that this is not the case in <ref>.
On the other hand, a detailed inspection of (<ref>)
makes clear that our necessary optimality conditions provide helpful information regarding
the structure of the optimal control as the multiplier $\eta$ possesses $L^2$-regularity
while (<ref>) causes $\eta$ to possess singularities
as the optimal control tends to zero somewhere on the domain.
Thus, this condition clearly promotes sparse controls which are either zero, tend to zero
slowly enough (if at all), or are bounded away from zero.
Note that this differs from the conditions derived in [Ito and Kunisch, 2014, Natemeyer and Wachsmuth, 2021]
which are multiplier-free.
§ CONCLUDING REMARKS
In this paper, we established a theory on approximate stationarity conditions for optimization
problems with potentially non-Lipschitzian objective functions in a very general setting.
In contrast to the finite-dimensional situation, where approximate stationarity has been shown to
serve as a necessary optimality condition for local optimality without any additional assumptions,
some additional semicontinuity properties need to be present in the infinite-dimensional context.
We exploited our findings in order to re-address the classical topic of set extremality and were
in a position to derive a novel version of the popular extremal principle. This may serve as a
starting point for further research which compares the classical as well as the new version of
the extremal principle in a more detailed way.
Moreover, we used our results in order to derive an approximate notion of stationarity as well
as an associated qualification condition related to M-stationarity for optimization problems with a
composite objective function and geometric constraints
in the Banach space setting. This theory then has been applied to study the convergence properties
of an associated augmented Lagrangian method for the numerical solution of such problems.
Furthermore, we demonstrated how these findings can be used to derive necessary optimality conditions
for optimal control problems with control constraints and a sparsity-promoting term in the
objective function. Some future research may clarify whether our approximate stationarity conditions
can be used to find necessary optimality conditions for optimization problems in function spaces where
nonconvexity or nonsmoothness pop up in a different context.
Exemplary, it would be interesting to study situations where the solution operator $S$
appearing in (<ref>) is nonsmooth, see e.g. [Christof et al., 2018, Hintermüller et al., 2014, Rauls and Wachsmuth, 2020],
where the set of feasible controls is nonconvex, see e.g. [Clason et al., 2017, Clason et al., 2020, Mehlitz and Wachsmuth, 2018],
or where the function $q$ is a term promoting sharp edges in continuous image denoising or deconvolution,
see e.g. <cit.>.
§ ACKNOWLEDGMENTS
The authors are grateful to Hoa Bui who suggested <ref>.
This work is supported by the Australian Research Council, project DP160100854, and
the DFG Grant Bilevel Optimal Control: Theory, Algorithms, and Applications
(Grant No. WA 3636/4-2) within the Priority Program SPP 1962 (Non-smooth
and Complementarity-based Distributed Parameter Systems: Simulation and Hierarchical Optimization).
The first author
benefited from the support of the European Union's Horizon 2020
research and innovation programme under the Marie Skłodowska–Curie
Grant Agreement No. 823731 CONMECH, and Conicyt REDES program 180032.
[Andreani et al., 2010]
R. Andreani, J. M. Martínez, and B. F. Svaiter.
A new sequential optimality condition for constrained optimization
and algorithmic consequences.
SIAM Journal on Optimization, 20(6):3533–3554, 2010.
[Andreani et al., 2011]
R. Andreani, G. Haeser, and J. M. Martínez.
On sequential optimality conditions for smooth constrained optimization.
Optimization, 60(5):627–641, 2011.
[Andreani et al., 2016]
R. Andreani, J. M. Martínez, A. Ramos, and P. J. S. Silva.
A cone-continuity constraint qualification and algorithmic consequences.
SIAM Journal on Optimization, 26(1):96–110, 2016.
[Andreani et al., 2019a]
R. Andreani, N. S. Fazzio, M. L. Schuverdt, and L. D. Secchin.
A sequential optimality condition related to the quasi-normality
constraint qualification and its algorithmic consequences.
SIAM Journal on Optimization, 29(1):743–766, 2019a.
[Andreani et al., 2019b]
R. Andreani, G. Haeser, L. D. Secchin, and P. J. S. Silva.
New sequential optimality conditions for mathematical programs with
complementarity constraints and algorithmic consequences.
SIAM Journal on Optimization, 29(4):3201–3230, 2019b.
[Andreani et al., 2020]
R. Andreani, G. Haeser, and D. S. Viana.
Optimality conditions and global convergence for nonlinear
semidefinite programming.
Mathematical Programming, 180(1):203–235, 2020.
[Aubin and Frankowska, 2009]
J.-P. Aubin and H. Frankowska.
Set-Valued Analysis.
Birkhäuser, Boston, 2009.
[Bai et al., 2019]
K. Bai, J. J. Ye, and J. Zhang.
Directional quasi-/pseudo-normality as sufficient conditions for
metric subregularity.
SIAM Journal on Optimization, 29(4):2625–2649, 2019.
[Birgin and Martínez, 2014]
E. G. Birgin and J. M. Martínez.
Practical Augmented Lagrangian Methods for Constrained Optimization.
SIAM, Philadelphia, 2014.
[Bonnans and Shapiro, 2000]
J. F. Bonnans and A. Shapiro.
Perturbation Analysis of Optimization Problems.
Springer, New York, 2000.
[Börgens et al., 2019]
E. Börgens, C. Kanzow, and D. Steck.
Local and global analysis of multiplier methods for constrained
optimization in Banach spaces.
SIAM Journal on Control and Optimization, 57(6):3694–3722, 2019.
[Börgens et al., 2020]
E. Börgens, C. Kanzow, P. Mehlitz, and G. Wachsmuth.
New constraint qualifications for optimization problems in Banach
spaces based on asymptotic KKT conditions.
SIAM Journal on Optimization, 30(4):2956–2982, 2020.
[Bredies and Lorenz, 2018]
K. Bredies and D. Lorenz.
Mathematical Image Processing.
Birkhäuser, Cham, 2018.
[Chen et al., 2017]
X. Chen, L. Guo, Z. Lu, and J. J. Ye.
An augmented Lagrangian method for non-Lipschitz nonconvex programming.
SIAM Journal on Numerical Analysis, 55(1):168–193, 2017.
[Christof et al., 2018]
C. Christof, C. Meyer, S. Walther, and C. Clason.
Optimal control of a non-smooth semilinear elliptic equation.
Mathematical Control & Related Fields, 8(1):247–276, 2018.
[Clason et al., 2017]
C. Clason, A. Rund, and K. Kunisch.
Nonconvex penalization of switching control of partial differential equations.
Systems & Control Letters, 106:1–8, 2017.
[Clason et al., 2020]
C. Clason, Y. Deng, P. Mehlitz, and U. Prüfert.
Optimal control problems with control complementarity constraints:
existence results, optimality conditions, and a penalty method.
Optimization Methods and Software, 35(1):142–170, 2020.
[Dontchev and Rockafellar, 2014]
A. L. Dontchev and R. T. Rockafellar.
Implicit Functions and Solution Mappings.
Springer, New York, 2014.
[Dontchev et al., 2020]
A. L. Dontchev, H. Gfrerer, A. Y. Kruger, and J. V. Outrata.
The radius of metric subregularity.
Set-Valued and Variational Analysis, 28(3):451–472, 2020.
[Ekeland, 1974]
I. Ekeland.
On the variational principle.
Journal of Mathematical Analysis and Applications, 47:324–353, 1974.
[Fabian, 1989]
M. J. Fabian.
Subdifferentiability and trustworthiness in the light of a new
variational principle of Borwein and Preiss.
Acta Universitatis Carolinae, 30(2):51–56, 1989.
URL <http://dml.cz/dmlcz/701793>.
[Fabian et al., 2010]
M. J. Fabian, R. Henrion, A. Y. Kruger, and J. V. Outrata.
Error bounds: necessary and sufficient conditions.
Set-Valued and Variational Analysis, 18(2):121–149, 2010.
[Flegel et al., 2007]
M. L. Flegel, C. Kanzow, and J. V. Outrata.
Optimality conditions for disjunctive programs with application to
mathematical programs with equilibrium constraints.
Set-Valued Analysis, 15(2):139–162, 2007.
[Gfrerer, 2013]
H. Gfrerer.
On directional metric regularity, subregularity and optimality
conditions for nonsmooth mathematical programs.
Set-Valued and Variational Analysis, 21(2):151–176, 2013.
[Guo and Ye, 2018]
L. Guo and J. J. Ye.
Necessary optimality conditions and exact penalization for
non-Lipschitz nonlinear programs.
Mathematical Programming, 168:571–598, 2018.
[Guo et al., 2015]
L. Guo, G.-H. Lin, and J. J. Ye.
Solving mathematical programs with equilibrium constraints.
Journal of Optimization Theory and Applications, 166(1):234–256, 2015.
[Harder and Wachsmuth, 2018]
F. Harder and G. Wachsmuth.
Comparison of optimality systems for the optimal control of the
obstacle problem.
GAMM-Mitteilungen, 40(4):312–338, 2018.
[Harder et al., 2021]
# Masked Autoencoders are PDE Learners
Anthony Y. Zhou
Department of Mechanical Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
<EMAIL_ADDRESS>
& Amir Barati Farimani
Department of Mechanical Engineering
Carnegie Mellon University
Pittsburgh, PA 15213
<EMAIL_ADDRESS>
Corresponding author. Courtesy appointments in Machine Learning, Chemical
Engineering, and Biomedical Engineering Departments.
###### Abstract
Neural solvers for partial differential equations (PDEs) have great potential,
yet their practicality is currently limited by their generalizability. PDEs
evolve over broad scales and exhibit diverse behaviors; predicting these
phenomena will require learning representations across a wide variety of
inputs, which may encompass different coefficients, geometries, or equations.
As a step towards generalizable PDE modeling, we adapt masked pretraining for
PDEs. Through self-supervised learning across PDEs, masked autoencoders can
learn useful latent representations for downstream tasks. In particular,
masked pretraining can improve coefficient regression and timestepping
performance of neural solvers on unseen equations. We hope that masked
pretraining can emerge as a unifying method across large, unlabeled, and
heterogeneous datasets to learn latent physics at scale.
## 1 Introduction
The physical world is incredibly complex; physical phenomena can be extremely
diverse and span wide spatiotemporal scales—from neuron excitations to
turbulent flow to even global climate. Importantly, many of these phenomena
can be mathematically modeled with time-dependent partial differential
equations. These PDEs are generally analytically intractable and require the
use of numerical solvers to obtain approximate solutions. For complex
phenomena, these solutions can often be slow to obtain; furthermore, different
phenomena often require a careful design of tailored solvers.
Advances in deep learning in the past decade have led to the design of a novel
class of solvers for PDEs. These neural solvers can be extremely fast and
display resolution invariance; however, neural networks introduce training
difficulties and a lack of error bounds. Many important advances have been
made to address these challenges, with SOTA models achieving high accuracy on
well-studied PDEs under certain configurations (Raissi et al. (2019), Lu et al.
(2019), Li et al. (2020), Cao (2021), Brandstetter et al. (2022a), Li et al.
(2023a)).
A current frontier in neural PDE solvers lies in generalizing solvers to
different parameters, conditions, or equations, thereby avoiding the need to
collect new data and retrain networks when given unseen PDE dynamics.
Preliminary work in this space has explored many methods to achieve this, from
directly conditioning on PDE coefficients (Takamoto et al. (2023), Lorsung et
al. (2024), Shen et al. (2024)) to pretraining foundation models across
various equations (Subramanian et al. (2023), McCabe et al. (2023), Hao et al.
(2024)). Despite these advances, generalizable neural solvers remain a
significant challenge. PDEs can be incredibly diverse and chaotic, and neural
network predictions need to be not only semantically reasonable, but also
numerically accurate.
As a step towards addressing these challenges, we propose adapting masked
pretraining methods to PDEs. Specifically, we demonstrate that masked PDE
modeling can learn latent representations to improve performance on downstream
tasks even on unseen coefficients and PDEs. These results align with current
research on PDE pretraining, however, we demonstrate learning on a self-
supervised task—granting flexibility in selecting downstream tasks or
equations to fine-tune on and the ability to pretrain on unlabeled,
incomplete, or heterogeneous datasets. Additionally, our approach is agnostic
to downstream architecture choices, allowing standard neural solvers to
quickly finetune to new equations through conditioning on a pretrained model.
## 2 Related Work
### 2.1 Neural PDE Solvers
The field of neural PDE solvers has grown rapidly and has shown great advances
in both the accuracy of solutions and the ability to adapt to different
equations and boundary conditions. Infinite-dimensional neural operators (Li
et al. (2020); Kovachki et al. (2023); Lu et al. (2019)) have shown impressive
accuracy in solving time-dependent PDEs by learning the mappings between
initial conditions and solutions. However, these methods alone have shown
brittleness with respect to changing PDE coefficients or boundary conditions
(Gupta and Brandstetter (2022); Lu et al. (2021)), prompting recent work to
allow neural solvers to adapt to changes in PDE conditions.
A variety of approaches have considered adding PDE dynamics information to
neural solvers. (Gupta and Brandstetter (2022)) benchmark different PDE
conditioning methods across common architectures, while (Brandstetter et al.
(2022a)) design message-passing neural solvers that benefit from PDE
coefficient and boundary condition information. Beyond directly conditioning
on PDE dynamics, a class of neural PDE solvers has proposed the addition of an
encoder or adaptive network to inform a forecaster network of different PDE
coefficients (Wang et al. (2021), Kirchmeyer et al., Takamoto et al. (2023),
Lorsung et al. (2024)). At an even broader level, (Yin et al. (2021)) and
(Zhang et al. (2023a)) propose modifications to the PDE forecasting loss
function to maximize shared learning across diverse PDE examples to meta-learn
dynamics across parameters.
Figure 1: Masked Autoencoders are PDE Learners. We investigate the ability of
autoencoders to learn diverse PDE dynamics through masked reconstruction.
(Top) We pretrain an encoder on unmasked patches of spatiotemporal PDE data,
while a decoder reconstructs the true data from latent embeddings and learned
mask patches. (Left) We evaluate the encoder’s latent representation through
regressing PDE coefficients on both interpolated and unseen equations. (Right)
We show improved PDE timestepping performance through conditioning neural
solvers on encoded PDE inputs.
### 2.2 Pretraining for PDEs
As an effort to work towards more generalizable PDE neural solvers, recent
work has followed the success of pretraining and foundational models in the
broader deep learning community. Based on contrastive pretraining methods in
computer vision problems, (Chen et al. (2020), Schroff et al. (2015), Zbontar
et al. (2021), Bardes et al. (2022)), contrastive PDE methods aim to leverage
equation coefficients (Lorsung and Farimani (2024)), physical invariances
(Zhang et al. (2023b)), or Lie point symmetries (Mialon et al. (2023)
Brandstetter et al. (2022b)) to define similar or different PDE dynamics that
can be organized in a latent space. Another approach in PDE pretraining
follows observed in-context learning and emergent behavior in LLMs (Wei et al.
(2022), Brown et al. (2020), Radford et al.) to design neural PDE solvers
that are capable of following prompted PDE examples to forecast unseen
dynamics (Yang et al. (2023a), Chen et al. (2024)).
A more straightforward pretraining method focuses on directly training neural
solvers to transfer to new PDE dynamics (Goswami et al. (2022), Chakraborty et
al. (2022), Wang et al. (2022)). This approach has also been scaled by
training neural solvers with large and diverse training sets to characterize
its transfer behavior (Subramanian et al. (2023)). As a step toward
foundational modeling, more principled training approaches have been proposed
to learn PDE dynamics across diverse physics at scale. (Tripura and
Chakraborty (2023)) design a combinatorial neural operator that learns
different dynamics as separate modules, (McCabe et al. (2023)) use a shared
embedding to auto-regressively learn multiple physics with axial attention,
(Hao et al. (2024)) incorporate denoising with a scalable transformer
architecture to show fine-tuning performance across diverse PDE datasets, and
(Shen et al. (2024)) incorporate a unified PDE embedding to align LLMs across
PDE families.
### 2.3 Masked Pretraining
Masked reconstruction is a popular technique popularized by the language
processing (Devlin et al. (2018)) and vision (Dosovitskiy et al. (2020), Xie
et al. (2021), He et al. (2021)) domains to pretrain models for downstream
tasks. Masked modeling is a broad field that spans many masking strategies,
architectures, and applications (Li et al. (2024)); this ubiquity is
attributed to the ability of masked pretraining to increase performance in
downstream tasks, suggesting that these models can learn meaningful context
through masked reconstruction (Cao et al. (2022)). In the field of neural PDE
solvers, masked pretraining has been initially explored to investigate its
fine-tuning performance and data efficiency when applied to equations in the
same family (Chen et al. (2024)). However, masked modeling still remains to be
investigated when pretraining on datasets across equations, geometries, or
resolutions; furthermore, its downstream performance on novel tasks or
equations has not been characterized, which we believe may hold great
potential.
Figure 2: Masked PDE Modeling. In each triplet, the masked PDE data (left),
autoencoder reconstruction (middle), and true PDE data (right) is shown.
Additionally, we use a masking ratio of 60% in all examples. (Left) Masked
reconstruction of unseen samples of the 1D KdV-Burgers equation, which
interpolates between the Heat, Burgers, and KdV equations. (Right) Masked
reconstruction of the 2D Heat, Advection, and Burgers equations displayed at
selected timesteps. Note that a single autoencoder is used across all 2D
samples.
## 3 Methods
In this section, we describe our methodology to train masked autoencoders for
downstream PDE tasks, as shown in Figure 1. For 1D and 2D PDEs, we adopt ViT
(Dosovitskiy et al. (2020)) and ViT3D (Arnab et al.) architectures to act as
an encoder and decoder for masked reconstruction according to (He et al.
(2021)). Additionally, we study the addition of Lie augmentations
(Brandstetter et al. (2022b)) to masked pretraining data, an approach that
follows the use of data augmentations for vision or video pretraining (He et
al. (2021); Xie et al. (2021); Feichtenhofer et al.).
### 3.1 Masked Pretraining for PDEs
We employ a common approach of partitioning data into non-overlapping patches.
A random subset of these patches is sampled to be masked and omitted from the
encoder input. The encoder then embeds only the visible, unmasked patches
through a series of Transformer blocks. At large masking ratios, this reduces
the input complexity and allows for both larger encoders and lower
computational complexity (He et al. (2021)).
The embedded patches are then recombined with mask tokens according to their
position in the PDE trajectory. Positional embeddings are added again to
preserve positional information before being decoded. An asymmetric design is
used to further reduce training costs, as the decoder can be shallower and
narrower because it is discarded in downstream tasks (He et al. (2021)). The
decoded tokens are projected into the PDE space through a linear layer before
reconstructing the output from the patches. Lastly, the output is compared to
ground truth PDE data through an L1 loss.
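The following PyTorch sketch (ours; sizes are illustrative, positional embeddings are omitted for brevity, and the loss is restricted to masked patches in the style of He et al. (2021), which may differ from the paper's exact loss target) condenses this pipeline for 1D trajectories already split into patches.
```python
import torch
import torch.nn as nn

# Masked pretraining step for sequences of N patches with P values each.
P, D, N, B = 16, 32, 50, 4
embed = nn.Linear(P, D)
enc_layer = nn.TransformerEncoderLayer(D, 4, batch_first=True)
encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
dec_layer = nn.TransformerEncoderLayer(D, 4, batch_first=True)
decoder = nn.TransformerEncoder(dec_layer, num_layers=1)  # shallower decoder
to_patch = nn.Linear(D, P)
mask_token = nn.Parameter(torch.zeros(1, 1, D))

def mae_loss(patches, mask_ratio=0.6):
    B, N, _ = patches.shape
    n_keep = int(N * (1.0 - mask_ratio))
    noise = torch.rand(B, N)
    ids_shuffle = noise.argsort(dim=1)      # per-sample random permutation
    ids_restore = ids_shuffle.argsort(dim=1)
    ids_keep = ids_shuffle[:, :n_keep]

    tokens = embed(patches)
    visible = torch.gather(tokens, 1, ids_keep[..., None].expand(-1, -1, D))
    latent = encoder(visible)               # encode visible patches only

    mask_tokens = mask_token.expand(B, N - n_keep, D)
    full = torch.cat([latent, mask_tokens], dim=1)
    full = torch.gather(full, 1, ids_restore[..., None].expand(-1, -1, D))
    pred = to_patch(decoder(full))          # reconstruct all patches

    mask = torch.ones(B, N)
    mask[:, :n_keep] = 0.0                  # 1 marks masked positions
    mask = torch.gather(mask, 1, ids_restore)
    return ((pred - patches).abs().mean(-1) * mask).sum() / mask.sum()

loss = mae_loss(torch.randn(B, N, P))       # L1 loss on masked patches
loss.backward()
```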
### 3.2 Lie Point Symmetry Data Augmentations
To emulate a larger pretraining dataset, we consider augmenting the
pretraining dataset with Lie point symmetries (Brandstetter et al. (2022b)).
Given a PDE, one can derive or look up its symmetries as a set of
transformations $\\{g_{1},\dots,g_{i}\\}$, each with a variable $\epsilon_{i}$
that modulates the magnitude of the transformation. At training time, we apply
$g_{i}$ sequentially, each with a randomly sampled $\epsilon_{i}$ to augment
PDE samples with a certain probability. This augmented PDE sample could
represent a solution that has been shifted in space, time, or magnitude, among
other transformations, but still propagates dynamics according to the original
PDE. For a more detailed discussion of Lie point symmetries for PDEs, we refer
the reader to (Olver (1986)) and (Mialon et al. (2023)).
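As a concrete example, the sketch below (ours) applies one such transformation, a periodic spatial translation $u(t,x)\mapsto u(t,x-\epsilon)$, to a gridded 1D trajectory; for the forced KdV-Burgers setup the forcing term would have to be shifted consistently, which we omit here.
```python
import numpy as np

# One Lie point symmetry augmentation: periodic spatial translation.
# `u` holds a trajectory on a uniform grid of shape (n_t, n_x); the shift
# magnitude eps is resampled on every call, and we round to the nearest grid
# point to avoid interpolation in this sketch.
def spatial_shift(u, L=16.0, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n_x = u.shape[1]
    eps = rng.uniform(0.0, L)               # sampled transformation magnitude
    shift = int(round(eps / L * n_x)) % n_x
    return np.roll(u, shift, axis=1)

u_aug = spatial_shift(np.random.rand(250, 100))  # same shape, shifted sample
```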
## 4 Experiments
We test the fine-tuning performance of masked autoencoders on PDE regression
and timestepping tasks in 1D and 2D. This approach is similar to vision or
language domains; for example, pretraining on masked image reconstruction and
fine-tuning to image classification or semantic segmentation ( He et al.
(2021); Xie et al. (2021)). We find comparable performance gains: pretrained
autoencoders are able to extract context from PDE trajectories to inform
downstream tasks and provide higher performance across different equations and
applications.
### 4.1 Equations Considered
1. 1D KdV-Burgers Equation: We pretrain and evaluate downstream performance on a
family of PDEs governed by the combined KdV-Burgers equation (Brandstetter et
al. (2022a)).
$\partial_{t}u+\alpha
u\partial_{x}u-\beta\partial_{xx}u+\gamma\partial_{xxx}u=\delta(t,x)$ (1)
This equation contains the heat, Burgers, KdV equations as corner cases.
Furthermore, periodic boundary conditions are used with a forcing function and
initial condition defined by $\delta(t,x)$; a sampling sketch is given after this list.
$\delta(t,x)=\sum_{j=1}^{J}A_{j}\sin(\omega_{j}t+2\pi l_{j}x/L+\phi_{j})$ (2)
$u(0,x)=\delta(0,x)$ (3)
This setup follows (Bar-Sinai et al. (2019)) and (Brandstetter et al. (2022a))
to introduce randomness and periodicity into PDE solutions. This is
implemented by sampling equation coefficients uniformly in
$\alpha\in[0,1],\beta\in[0,0.5],\gamma\in[0,6]$, and sampling forcing
coefficients uniformly in
$A_{j}\in[-0.5,0.5],\omega_{j}\in[-0.4,0.4],l_{j}\in\{1,2,3\},\phi_{j}\in[0,2\pi)$
while setting $J=5,L=16$. We generate samples with resolution
$(n_{t},n_{x})=(250,100)$.
2. 1D Advection and KS Equations: The linear advection (4) and Kuramoto-
Sivashinsky (5) equations are considered to evaluate fine-tuning to unseen
equations.
$\partial_{t}u+c\partial_{x}u=0,\quad c\in[0.1,2.5]$ (4)
$\partial_{t}u+u\partial_{x}u+\partial_{xx}u+\partial_{xxxx}u=0$ (5)
In both equations, initial conditions are randomly sampled according to
equation (2) and periodic boundary conditions are enforced. We generate
advection samples with resolution $(n_{t},n_{x})=(250,100)$ and KS samples
with resolution $(n_{t},n_{x})=(150,100)$.
3. 2D Heat, Advection and Burgers Equations: We pretrain and evaluate downstream
performance on a combined set of 2D Heat (6), Advection (7), and Burgers (8,
9) equations under periodic boundary conditions.
$\displaystyle\partial_{t}u+\nu(\partial_{xx}u+\partial_{yy}u)=0$ (6)
$\displaystyle\partial_{t}u+c_{x}\partial_{x}u+c_{y}\partial_{y}u=0$ (7)
$\displaystyle\partial_{t}u+\alpha_{x}u\partial_{x}u+\alpha_{y}v\partial_{y}u-\beta(\partial_{xx}u+\partial_{yy}u)=0$
(8)
$\displaystyle\partial_{t}v+\alpha_{x}u\partial_{x}v+\alpha_{y}v\partial_{y}v-\beta(\partial_{xx}v+\partial_{yy}v)=0$
(9)
We sample the coefficients of the equation uniformly in
$c_{x}\in[0.1,2.5],c_{y}\in[0.1,2.5],\nu\in[3\mathrm{e}{-3},3\mathrm{e}{-2}],\alpha_{x}\in[0.5,1],\alpha_{y}\in[0.5,1],\beta\in[3\mathrm{e}{-3},2\mathrm{e}{-2}]$.
Furthermore, we generate initial conditions through a similar approach using a
truncated Fourier series in 2D:
$u(x,y,0)=\sum_{j=1}^{J}A_{j}\sin(2\pi l_{xj}x/L+2\pi l_{yj}y/L+\phi_{j})$ (10)
Initial condition coefficients are sampled identically to (2), with
$A_{j}\in[-0.5,0.5],\omega_{j}\in[-0.4,0.4],l_{xj},l_{yj}\in\{1,2,3\},\phi_{j}\in[0,2\pi)$
while setting $J=5,L=2$. Additionally, samples are generated with a resolution
of $(n_{t},n_{x},n_{y})=(100,64,64)$.
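The following sketch (ours; the time horizon is illustrative, and the PDE solve itself is omitted) shows the coefficient and forcing-term sampling of item 1 above.
```python
import numpy as np

# Sample 1D KdV-Burgers coefficients and evaluate the forcing delta(t, x)
# of equations (1)-(3) on a (n_t, n_x) grid; the solver itself is omitted.
rng = np.random.default_rng(0)
J, L, n_t, n_x = 5, 16.0, 250, 100

alpha = rng.uniform(0.0, 1.0)
beta = rng.uniform(0.0, 0.5)
gamma = rng.uniform(0.0, 6.0)
A = rng.uniform(-0.5, 0.5, J)
omega = rng.uniform(-0.4, 0.4, J)
l = rng.integers(1, 4, J)                   # l_j in {1, 2, 3}
phi = rng.uniform(0.0, 2.0 * np.pi, J)

t = np.linspace(0.0, 5.0, n_t)[:, None]     # illustrative time horizon
x = np.linspace(0.0, L, n_x, endpoint=False)[None, :]
delta = sum(A[j] * np.sin(omega[j] * t + 2.0 * np.pi * l[j] * x / L + phi[j])
            for j in range(J))
u0 = delta[0]                               # initial condition u(0, x)
print(delta.shape, u0.shape)                # (250, 100) (100,)
```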
### 4.2 PDE Coefficient Regression
We evaluate the latent space of masked autoencoders after pretraining on the
KdV-Burgers equation in 1D and the combined Heat, Advection, and Burgers
equations in 2D. This is done through regressing equation coefficients after
discarding the decoder and training a linear model on top of the encoder’s
class embedding. Specifically, we use a VIT model for 1D regression with 1.6M
parameters and a VIT3D model for 2D regression with 3.5M parameters. We
compare end-to-end finetuning with a supervised baseline trained with a
randomly initialized encoder and a frozen encoder. This is similar to
pretraining methods in vision—masked autoencoders are both linearly evaluated
and fine-tuned end-to-end. Additionally, we fine-tune on regressing
coefficients from unseen equations in 1D, and present the results in Table 1.
1D PDE Regression: We pretrain on a set of 4096 unlabeled KdV-Burgers equation
samples and fine-tune on 4096 labeled KdV-Burgers samples and 2048 labeled
Advection and KS samples. We consider three coefficients
$[\alpha,\beta,\gamma]$ in the KdV-Burgers equation to regress from the test
set. Furthermore, we regress the advection speed $c$ and a set of $2J$ initial
condition coefficients $[A_{j},\omega_{j}]$ from the advection and KS test
sets, respectively. In particular, for the 1D KS equation, we omit samples
from the first 25 timesteps to mask the initial conditions.
2D PDE Regression: In two dimensions, we use a pretraining set of 3072
unlabeled Heat/Advection/Burgers equation samples and fine-tune on 3072
labeled Heat/Advection/Burgers equation samples. We consider six coefficients
$[c_{x},c_{y},\beta,\nu,\alpha_{x},\alpha_{y}]$ to regress from the combined
Heat, Advection, and Burgers test set.
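A minimal sketch of the three protocols (ours; the encoder below is a trivial stand-in for the ViT and all shapes are illustrative) differs only in which parameters are trainable:
```python
import torch
import torch.nn as nn

# Three regression protocols: "supervised" (random init, end-to-end),
# "frozen" (pretrained encoder, linear head only), and "fine-tuned"
# (pretrained encoder, end-to-end).  Only the optimizer setup differs.
D, n_coeffs = 32, 3
encoder = nn.Sequential(nn.Flatten(), nn.Linear(250 * 100, D))  # stand-in ViT
head = nn.Linear(D, n_coeffs)

def make_optimizer(mode):
    if mode == "frozen":                     # linear probe: train head only
        for p in encoder.parameters():
            p.requires_grad_(False)
        return torch.optim.Adam(head.parameters(), lr=1e-3)
    params = list(encoder.parameters()) + list(head.parameters())
    return torch.optim.Adam(params, lr=1e-3)  # supervised / fine-tuned

opt = make_optimizer("frozen")
u, coeffs = torch.randn(8, 250, 100), torch.randn(8, n_coeffs)
loss = nn.functional.mse_loss(head(encoder(u)), coeffs)
loss.backward()
opt.step()
```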
Figure 3: MAE Latent Space. We plot encoder class token embeddings after
masked pretraining and after fine-tuning with coefficient labels. Note that
the model does not see coefficient values during pretraining yet is still able
to learn approximate trends in PDEs. (Left) Embeddings of 1D PDEs. We use a 2D
PCA as dimensionality reduction and color embeddings by ascending $\alpha$ and
$c$ coefficients of the KdV-Burgers and Advection equations, respectively.
(Right) Embeddings of 2D PDEs. We use a 2D t-SNE as dimensionality reduction
and color embeddings by ascending $\nu$, $c_{x}$, and $\alpha_{x}$
coefficients of the Heat, Advection, and Burgers equations.
Table 1: Coefficient Regression Task. Test MSE errors of different models
across equations. Encoders are pretrained on equations in bold. Errors are
averaged over three seeds in all experiments, and given multiplied by 1e-3.
| Model | KdV-Burgers (1D) | Adv (1D) | KS (1D) | Heat/Adv/Burgers (2D) |
|---|---|---|---|---|
| Supervised | 11.92 | 0.772 | 104.36 | 1.203 |
| Pretrained/Frozen | 2.925 | 116.1 | 104.33 | 4.519 |
| Pretrained/Fine-tuned | 0.579 | 0.130 | 104.23 | 0.892 |
In general, we observe improved regression performance from the use of a
pretrained initialization compared to random initialization when regressing
coefficients. For the 1D KdV-Burgers equation, this is true even when the
encoder is frozen; however, end-to-end fine-tuning is necessary for
extrapolation to new equations and in 2D. We hypothesize that this could be
due to the small size of the 2D pretraining data set, consisting only of 3072
samples. Furthermore, in the 1D KS equation, all models converge to the same
performance when regressing the initial coefficients. We hypothesize that this
is due to the equation’s chaotic behavior and relatively few training samples,
since both the supervised and fine-tuned models tend to overfit to initial
coefficients on the training set. This behavior could also suggest that masked
autoencoders learn how PDEs evolve over different coefficients or equations,
rather than how PDEs evolve over different initial conditions.
We visualize the latent space learned by masked autoencoders by plotting the
encoder’s class embedding across different equations in Figure 3.
Interestingly, the class embedding is able to approximately differentiate PDE
dynamics even before seeing the labeled data. Additionally, the phenomenon is
observed on unseen equations; 1D advection samples show trends in the latent
space despite only pretraining on unlabeled KdV-Burgers samples. After fine-
tuning, the latent space predictably organizes to separate samples originating
from different coefficients well.
In two dimensions, the model is able to organize samples into Heat, Advection,
and Burgers clusters in the latent space. Furthermore, within each cluster,
the encoder is able to approximately differentiate equations by their
coefficients. Again, the model is able to learn this latent representation
before seeing labeled data; after fine-tuning, the data is similarly clustered
but better organized by their coefficients.
### 4.3 PDE Timestepping
We consider the use of autoencoder embeddings to condition neural operators in
PDE timestepping. To investigate the effect of autoencoder conditioning, we
train three model variants: Fourier Neural Operator (FNO) (Li et al. (2020)),
FNO conditioned on a pretrained but frozen encoder, and FNO conditioned on a
pretrained and end-to-end finetuned encoder. For 1D PDEs, we use VIT (1.6M)
and FNO1D (0.8M) models; for 2D PDEs we use VIT3D (3.5M) and FNO2D (2.7M)
models.
To condition neural operator models, we employ a strategy introduced in (Gupta
and Brandstetter (2022)), whereby we project embeddings into the Fourier
domain and multiply embeddings with FNO spectral weights. Additionally, the
embeddings are linearly projected and added to the residual connection and the
Fourier branch. Furthermore, to improve temporal stability, we implement the
temporal bundling and pushforward trick from (Brandstetter et al. (2022a)). At
test time, we provide an initial window of PDE data and autoregressively
rollout future timesteps; accumulated error between autoregressive predictions
and ground truth data is averaged and presented in Table 2.
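A minimal sketch of one conditioned 1D spectral layer under this scheme (our own simplified reading of the Gupta and Brandstetter (2022) strategy, not their reference code; the layer name, shapes, and the per-mode gating form are assumptions):

```python
import torch
import torch.nn as nn

class ConditionedSpectralConv1d(nn.Module):
    """FNO spectral convolution modulated by an encoder embedding."""

    def __init__(self, channels: int, modes: int, embed_dim: int):
        super().__init__()
        self.modes = modes  # number of retained Fourier modes (<= grid//2 + 1)
        scale = 1.0 / channels
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )
        # Project the embedding into the Fourier domain: one complex gate per mode
        self.to_modes = nn.Linear(embed_dim, 2 * modes)
        # Project the embedding into the residual branch
        self.to_residual = nn.Linear(embed_dim, channels)

    def forward(self, x: torch.Tensor, emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid), emb: (batch, embed_dim)
        x_ft = torch.fft.rfft(x)                                  # (B, C, grid//2+1)
        gate = self.to_modes(emb).view(-1, 2, self.modes)         # (B, 2, modes)
        gate = torch.view_as_complex(gate.permute(0, 2, 1).contiguous())  # (B, modes)

        out_ft = torch.zeros_like(x_ft)
        # Multiply the spectral weights by the embedding-derived gate, mode by mode
        out_ft[..., : self.modes] = torch.einsum(
            "bim,iom,bm->bom", x_ft[..., : self.modes], self.weights, gate
        )
        out = torch.fft.irfft(out_ft, n=x.shape[-1])
        # Add the linearly projected embedding to the residual connection
        return out + x + self.to_residual(emb).unsqueeze(-1)
```

For example, `layer = ConditionedSpectralConv1d(channels=32, modes=16, embed_dim=128)` could be applied to an input of shape `(batch, 32, 256)` together with an encoder embedding of shape `(batch, 128)`.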
1D PDE Timestepping: We train on 4096 KdV-Burgers and 2048 Advection/KS
equation samples with VIT and FNO1D architectures. Our results suggest that
conditioning on a pretrained encoder is able to improve 1D performance, even
when the encoder is frozen. These performance gains are amplified by fine-
tuning the encoder to the specific PDE forecasting task. An outlier to these
observations is the use of a frozen encoder on 1D Advection; we hypothesize
that the 1D advection dynamics are simple enough to learn without conditional
information, and that additional context learned from different PDEs may
confuse the neural solver.
2D PDE Timestepping: We train on 3072 Heat, Advection, and Burgers equation
samples with VIT3D and FNO2D architectures. We observe lower errors when using
a pretrained encoder, with increased benefits when fully fine-tuning the
encoder. When equation dynamics differ greatly across samples, prior knowledge
of those dynamics can substantially help neural solvers differentiate between
equations and solve each effectively. Furthermore, we note that vanilla FNO
models tend to overfit to the training set when samples exhibit diverse PDE
dynamics; as such, conditional information can aid generalization to test
samples.
Table 2: Timestepping Task. Test MSE errors of different models across
equations. Encoders are pretrained on equations in bold. Errors are averaged
over three seeds in all experiments.
| Model | KdV-Burgers (1D) | Adv (1D) | KS (1D) | Heat/Adv/Burgers (2D) |
| --- | --- | --- | --- | --- |
| FNO | 6.423 | 0.432 | 22.95 | 38.54 |
| FNO+Frozen Encoder | 5.826 | 0.463 | 7.284 | 23.91 |
| FNO+Finetuned Encoder | 4.141 | 0.182 | 7.119 | 10.40 |
Compared to transfer learning (Goswami et al. (2022), Chakraborty et al.
(2022)) or large-scale pretraining of neural solvers (McCabe et al. (2023),
Hao et al. (2024), Subramanian et al. (2023)), conditionally pretrained neural
solvers can be more flexible; any downstream architecture can be chosen and
fine-tuned according to the PDE at hand, such as using FNO for periodic/low-
frequency PDEs. Neural operators such as FNO, DeepOnet, OFormer, and even
broader neural solvers including GNN/Unet-based architectures tend to be
somewhat specialized: they can be easily trained and produce accurate results
when given the necessary data (Li et al. (2020), Lu et al. (2019), Li et al.
(2023a), Brandstetter et al. (2022a), Gupta and Brandstetter (2022)). We can
take advantage of these capabilities by leveraging information from a
pretrained model to both accelerate neural solver training and improve
generalization to different PDEs.
## 5 Conclusion and Future Work
We present a method for pretraining masked autoencoders for PDEs as well as
study their performance in downstream tasks. In particular, we study
generalization behavior to interpolated and unseen PDEs in regressing
coefficients and predicting future timesteps. We find that masked pretraining
is beneficial in these tasks, learning latent representations that can extend
to novel PDE families. We hope that larger autoencoders can scale these
benefits, both in the performance of downstream tasks and diversity of PDEs
considered. This is especially promising due to the ability of masked
pretraining to be adapted to heterogeneous, multi-equation datasets that can
consist of different geometries, boundary conditions, or discretizations,
possibly originating from incomplete or even real-world data.
In future work, we plan on expanding our 2D experiments to include equations
outside of the pretraining set, such as the 2D Navier-Stokes or Darcy Flow
equations. To handle high-dimensional data, we also hope to investigate
different attention mechanisms for our encoder and decoder design, possibly
incorporating axial attention (Arnab et al., McCabe et al. (2023)), window
attention (Liu et al. (2021)), or factorized attention (Li et al. (2023b)).
Lastly, we hope to fine-tune masked autoencoders in a super-resolution task
similar to the approach taken by (Yang et al. (2023b)); we hypothesize that
using a pretrained encoder to generate an embedding function that is upsampled
can help generalize superresolution methods across different equations or
coefficients.
## References
* Raissi et al. [2019] M. Raissi, P. Perdikaris, and G.E. Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. _Journal of Computational Physics_ , 378:686–707, 2019. ISSN 0021-9991. doi:https://doi.org/10.1016/j.jcp.2018.10.045. URL https://www.sciencedirect.com/science/article/pii/S0021999118307125.
* Lu et al. [2019] Lu Lu, Pengzhan Jin, and George Em Karniadakis. Deeponet: Learning nonlinear operators for identifying differential equations based on the universal approximation theorem of operators. 10 2019. doi:10.1038/s42256-021-00302-5. URL http://arxiv.org/abs/1910.03193, http://dx.doi.org/10.1038/s42256-021-00302-5.
* Li et al. [2020] Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. 10 2020. URL http://arxiv.org/abs/2010.08895.
* Cao [2021] Shuhao Cao. Choose a transformer: Fourier or galerkin, 2021.
* Brandstetter et al. [2022a] Johannes Brandstetter, Daniel E. Worrall, and Max Welling. Message passing neural PDE solvers. _CoRR_ , abs/2202.03376, 2022a. URL https://arxiv.org/abs/2202.03376.
* Li et al. [2023a] Zijie Li, Kazem Meidani, and Amir Barati Farimani. Transformer for partial differential equations’ operator learning, 2023a.
* Takamoto et al. [2023] Makoto Takamoto, Francesco Alesiani, and Mathias Niepert. Learning neural pde solvers with parameter-guided channel attention. 4 2023. URL http://arxiv.org/abs/2304.14118.
* Lorsung et al. [2024] Cooper Lorsung, Zijie Li, and Amir Barati Farimani. Physics informed token transformer for solving partial differential equations, 2024.
* Shen et al. [2024] Junhong Shen, Tanya Marwah, and Ameet Talwalkar. Ups: Towards foundation models for pde solving via cross-modal adaptation, 2024.
* Subramanian et al. [2023] Shashank Subramanian, Peter Harrington, Kurt Keutzer, Wahid Bhimji, Dmitriy Morozov, Michael Mahoney, and Amir Gholami. Towards foundation models for scientific machine learning: Characterizing scaling and transfer behavior. 5 2023. URL http://arxiv.org/abs/2306.00258.
* McCabe et al. [2023] Michael McCabe, Bruno Régaldo-Saint Blancard, Liam Holden Parker, Ruben Ohana, Miles Cranmer, Alberto Bietti, Michael Eickenberg, Siavash Golkar, Geraud Krawezik, Francois Lanusse, Mariel Pettee, Tiberiu Tesileanu, Kyunghyun Cho, and Shirley Ho. Multiple physics pretraining for physical surrogate models, 2023.
* Hao et al. [2024] Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, and Jun Zhu. Dpot: Auto-regressive denoising operator transformer for large-scale pde pre-training, 2024.
* Kovachki et al. [2023] Nikola Kovachki, Zongyi Li, Burigede Liu, Kamyar Azizzadenesheli, Kaushik Bhattacharya, Andrew Stuart, Anima Anandkumar, and Lorenzo Rosasco. Neural operator: Learning maps between function spaces with applications to pdes, 2023.
* Gupta and Brandstetter [2022] Jayesh K. Gupta and Johannes Brandstetter. Towards multi-spatiotemporal-scale generalized pde modeling. 9 2022. URL http://arxiv.org/abs/2209.15616.
* Lu et al. [2021] Lu Lu, Xuhui Meng, Shengze Cai, Zhiping Mao, Somdatta Goswami, Zhongqiang Zhang, and George Em Karniadakis. A comprehensive and fair comparison of two neural operators (with practical extensions) based on fair data. 11 2021. doi:10.1016/j.cma.2022.114778. URL http://arxiv.org/abs/2111.05512, http://dx.doi.org/10.1016/j.cma.2022.114778.
* Wang et al. [2021] Rui Wang, Robin Walters, and Rose Yu. Meta-learning dynamics forecasting using task inference. 2 2021. URL http://arxiv.org/abs/2102.10271.
* [17] Matthieu Kirchmeyer, Yuan Yin, Jérémie Donà, Nicolas Baskiotis, Alain Rakotomamonjy, and Patrick Gallinari. Generalizing to new physical systems via context-informed dynamics model.
* Yin et al. [2021] Yuan Yin, Ibrahim Ayed, Emmanuel de Bézenac, Nicolas Baskiotis, and Patrick Gallinari. Leads: Learning dynamical systems that generalize across environments. 6 2021. URL http://arxiv.org/abs/2106.04546.
* Zhang et al. [2023a] Lu Zhang, Huaiqian You, Tian Gao, Mo Yu, Chung-Hao Lee, and Yue Yu. Metano: How to transfer your knowledge on learning hidden physics. 1 2023a. URL http://arxiv.org/abs/2301.12095.
* Chen et al. [2020] Ting Chen, Simon Kornblith, Mohammad Norouzi, and Geoffrey E. Hinton. A simple framework for contrastive learning of visual representations. _CoRR_ , abs/2002.05709, 2020. URL https://arxiv.org/abs/2002.05709.
* Schroff et al. [2015] Florian Schroff, Dmitry Kalenichenko, and James Philbin. Facenet: A unified embedding for face recognition and clustering. In _2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_. IEEE, June 2015. doi:10.1109/cvpr.2015.7298682. URL http://dx.doi.org/10.1109/CVPR.2015.7298682.
* Zbontar et al. [2021] Jure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction, 2021.
* Bardes et al. [2022] Adrien Bardes, Jean Ponce, and Yann LeCun. Vicreg: Variance-invariance-covariance regularization for self-supervised learning, 2022.
* Lorsung and Farimani [2024] Cooper Lorsung and Amir Barati Farimani. Picl: Physics informed contrastive learning for partial differential equations, 2024.
* Zhang et al. [2023b] Rui Zhang, Qi Meng, and Zhi-Ming Ma. Deciphering and integrating invariants for neural operator learning with various physical mechanisms. _National Science Review_ , 11(4), December 2023b. ISSN 2053-714X. doi:10.1093/nsr/nwad336. URL http://dx.doi.org/10.1093/nsr/nwad336.
* Mialon et al. [2023] Grégoire Mialon, Quentin Garrido, Hannah Lawrence, Danyal Rehman, Yann LeCun, and Bobak T. Kiani. Self-supervised learning with lie symmetries for partial differential equations. 7 2023. URL http://arxiv.org/abs/2307.05432.
* Brandstetter et al. [2022b] Johannes Brandstetter, Max Welling, and Daniel E. Worrall. Lie point symmetry data augmentation for neural pde solvers. 2 2022b. URL http://arxiv.org/abs/2202.07643.
* Wei et al. [2022] Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models, 2022.
* Brown et al. [2020] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners, 2020.
* [30] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language models are unsupervised multitask learners. URL https://github.com/codelucas/newspaper.
* Yang et al. [2023a] Liu Yang, Siting Liu, Tingwei Meng, and Stanley J Osher. In-context operator learning with data prompts for differential equation problems. 2023a. doi:10.1073/pnas. URL https://doi.org/10.1073/pnas.2310142120.
* Chen et al. [2024] Wuyang Chen, Jialin Song, Pu Ren, Shashank Subramanian, Dmitriy Morozov, and Michael W. Mahoney. Data-efficient operator learning via unsupervised pretraining and in-context learning. 2 2024. URL http://arxiv.org/abs/2402.15734.
* Goswami et al. [2022] Somdatta Goswami, Katiana Kontolati, Michael D. Shields, and George Em Karniadakis. Deep transfer operator learning for partial differential equations under conditional shift. _Nature Machine Intelligence_ , 4(12):1155–1164, December 2022. ISSN 2522-5839. doi:10.1038/s42256-022-00569-2. URL http://dx.doi.org/10.1038/s42256-022-00569-2.
* Chakraborty et al. [2022] Ayan Chakraborty, Cosmin Anitescu, Xiaoying Zhuang, and Timon Rabczuk. Domain adaptation based transfer learning approach for solving pdes on complex geometries. _Engineering with Computers_ , 38(5):4569–4588, Oct 2022. ISSN 1435-5663. doi:10.1007/s00366-022-01661-2. URL https://doi.org/10.1007/s00366-022-01661-2.
* Wang et al. [2022] Hengjie Wang, Robert Planas, Aparna Chandramowlishwaran, and Ramin Bostanabad. Mosaic flows: A transferable deep learning framework for solving pdes on unseen domains. _Computer Methods in Applied Mechanics and Engineering_ , 389, 2 2022. ISSN 00457825. doi:10.1016/j.cma.2021.114424.
* Tripura and Chakraborty [2023] Tapas Tripura and Souvik Chakraborty. A foundational neural operator that continuously learns without forgetting, 2023.
* Devlin et al. [2018] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding. 10 2018. URL http://arxiv.org/abs/1810.04805.
* Dosovitskiy et al. [2020] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. 10 2020. URL http://arxiv.org/abs/2010.11929.
* Xie et al. [2021] Zhenda Xie, Zheng Zhang, Yue Cao, Yutong Lin, Jianmin Bao, Zhuliang Yao, Qi Dai, and Han Hu. Simmim: A simple framework for masked image modeling. _CoRR_ , abs/2111.09886, 2021. URL https://arxiv.org/abs/2111.09886.
* He et al. [2021] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. _CoRR_ , abs/2111.06377, 2021. URL https://arxiv.org/abs/2111.06377.
* Li et al. [2024] Siyuan Li, Luyuan Zhang, Zedong Wang, Di Wu, Lirong Wu, Zicheng Liu, Jun Xia, Cheng Tan, Yang Liu, Baigui Sun, and Stan Z. Li. Masked modeling for self-supervised representation learning on vision and beyond, 2024.
* Cao et al. [2022] Shuhao Cao, Peng Xu, and David A. Clifton. How to understand masked autoencoders. 2 2022. URL http://arxiv.org/abs/2202.03670.
* [43] Anurag Arnab, Mostafa Dehghani, Georg Heigold, Chen Sun, Mario Lucic, and Cordelia Schmid. Vivit: A video vision transformer.
* [44] Christoph Feichtenhofer, Haoqi Fan, Yanghao Li, Kaiming He, and Meta AI. Masked autoencoders as spatiotemporal learners. URL https://github.com/facebookresearch/mae_st.
* Olver [1986] Peter Olver. _Applications of Lie Groups to Differential Equations_. Springer New York, NY, 1986.
* Bar-Sinai et al. [2019] Yohai Bar-Sinai, Stephan Hoyer, Jason Hickey, and Michael P. Brenner. Learning data-driven discretizations for partial differential equations. _Proceedings of the National Academy of Sciences_ , 116(31):15344–15349, July 2019. ISSN 1091-6490. doi:10.1073/pnas.1814058116. URL http://dx.doi.org/10.1073/pnas.1814058116.
* Liu et al. [2021] Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows, 2021.
* Li et al. [2023b] Zijie Li, Dule Shu, and Amir Barati Farimani. Scalable transformer for pde surrogate modeling, 2023b.
* Yang et al. [2023b] Qidong Yang, Alex Hernandez-Garcia, Paula Harder, Venkatesh Ramesh, Prasanna Sattegeri, Daniela Szwarcman, Campbell D. Watson, and David Rolnick. Fourier neural operators for arbitrary resolution climate data downscaling, 2023b.
$I\,\equiv\,\int_{0}^{1}\int_{0}^{2}g(x,y)\,dx\,dy\,=\,-\frac{1}{20}\,,\quad\qquad
J\,\equiv\,\int_{0}^{2}\int_{0}^{1}g(x,y)\,dy\,dx\,=\,\frac{1}{5}\,.$ (187)
Let us first decompose $I\,=\,I_{P.v.}+I_{\mathcal{G}}$, by using definitions
(176) and (178). The principal value is obtained easily,
$I_{P.v.}\,=\,\int_{0}^{1}\int_{0}^{2}P.v.\left(\frac{xy(x^{2}-y^{2})}{(x^{2}+y^{2})^{3}}\right)\,dx\,dy\,=\,\lim_{\varepsilon_{1,2}\rightarrow
0}\int_{\varepsilon_{2}}^{1}\int_{\varepsilon_{1}}^{2}\frac{xy(x^{2}-y^{2})}{(x^{2}+y^{2})^{3}}\,dx\,dy\,=\,\frac{3}{40}\,.$
(188)
As mentioned before, the principal value is unaffected by the order of the
iterated integrations; therefore,
$J_{P.v.}\,=\,\int_{0}^{2}\int_{0}^{1}P.v.\left(\frac{xy(x^{2}-y^{2})}{(x^{2}+y^{2})^{3}}\right)\,dy\,dx\,=\,\lim_{\varepsilon_{1,2}\rightarrow
0}\int_{\varepsilon_{2}}^{2}\int_{\varepsilon_{1}}^{1}\frac{xy(x^{2}-y^{2})}{(x^{2}+y^{2})^{3}}\,dy\,dx\,=\,\frac{3}{40}\,.$
(189)
In order to tackle the remaining part, let us express $g(x,y)$ using two
representations:
$g(x,y)\,=\,\frac{1}{4}\frac{\partial}{\partial x}\frac{\partial}{\partial
y}\frac{x^{2}}{x^{2}+y^{2}}\,=\,\frac{1}{4}\frac{\partial}{\partial
y}\frac{\partial}{\partial x}\frac{-y^{2}}{x^{2}+y^{2}}\,.$ (190)
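The first representation can be checked by direct differentiation,
$\frac{\partial}{\partial y}\frac{x^{2}}{x^{2}+y^{2}}\,=\,\frac{-2x^{2}y}{(x^{2}+y^{2})^{2}}\,,\qquad\frac{\partial}{\partial x}\frac{-2x^{2}y}{(x^{2}+y^{2})^{2}}\,=\,\frac{4xy(x^{2}-y^{2})}{(x^{2}+y^{2})^{3}}\,=\,4\,g(x,y)\,,$
while the second follows from the first, since $\frac{-y^{2}}{x^{2}+y^{2}}\,=\,\frac{x^{2}}{x^{2}+y^{2}}-1$ differs from it only by a constant.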
After integration, the resulting expression involves the Kronecker (or
discrete) delta function, defined by
$\delta_{D}(x)\,\equiv\,\begin{cases}1,&x=0\\ 0,&x\neq 0\end{cases}$
(note that while $\delta_{D}(x)$ by itself is a discontinuous function that
attains finite values everywhere, its rate of change is unbounded, so its
derivative has to be regarded as a generalized object). It enters here via the
limit $\delta_{D}(x)\,=\,\lim_{y\rightarrow 0}\frac{y^{2}}{x^{2}+y^{2}}$; the
lower limit, when $x=y=\varepsilon\rightarrow 0$, is
$\lim_{\varepsilon\rightarrow
0}\frac{\varepsilon^{2}}{\varepsilon^{2}+\varepsilon^{2}}=\frac{1}{2}$, so that
$I_{\mathcal{G}}\,\equiv\,\lim_{\varepsilon_{2}\rightarrow
0}\lim_{\varepsilon_{1}\rightarrow
0}\int_{0}^{\varepsilon_{2}}\int_{0}^{\varepsilon_{1}}\frac{xy(x^{2}-y^{2})}{(x^{2}+y^{2})^{3}}\,dx\,dy\,=\,\frac{1}{4}\lim_{\varepsilon_{2}\rightarrow
0}\lim_{\varepsilon_{1}\rightarrow
0}\left.\left.\frac{x^{2}}{x^{2}+y^{2}}\right|_{x=0}^{x=\varepsilon_{1}}\right|_{y=0}^{y=\varepsilon_{2}}\,=\,-\frac{1}{8}\,,$
(191)
and,
$J_{\mathcal{G}}\,\equiv\,\lim_{\varepsilon_{2}\rightarrow
0}\lim_{\varepsilon_{1}\rightarrow
0}\int_{0}^{\varepsilon_{2}}\int_{0}^{\varepsilon_{1}}\frac{xy(x^{2}-y^{2})}{(x^{2}+y^{2})^{3}}\,dy\,dx\,=\,\frac{1}{4}\lim_{\varepsilon_{2}\rightarrow
0}\lim_{\varepsilon_{1}\rightarrow
0}\left.\left.\frac{-y^{2}}{x^{2}+y^{2}}\right|_{y=0}^{y=\varepsilon_{1}}\right|_{x=0}^{x=\varepsilon_{2}}\,=\,\frac{1}{8}\,.$
(192)
The complete result for $I$ is obtained by adding together eqs. (188) and
(191):
$I\,=\,I_{P.v.}\,+\,I_{\mathcal{G}}\,=\,\frac{3}{40}-\frac{1}{8}\,=\,-\frac{1}{20}\,,$
(193)
while the result for $J$ is obtained by adding together eqs. (189) and (192):
$J\,=\,J_{P.v.}\,+\,J_{\mathcal{G}}\,=\,\frac{3}{40}+\frac{1}{8}\,=\,\frac{1}{5}\,.$
(194)
Therefore, we showed that the contribution from the singular point $(0,0)$ has
a magnitude comparable to that of the whole remaining integration. Notably,
the contribution of the singular point of $g(x,y)$ does not exist in the
classical sense (the classical limit exists only if the two limits commute, so
that $\lim_{x\rightarrow x_{0}}\lim_{y\rightarrow y_{0}}f(x,y)\,=\,L$ implies
$\lim_{y\rightarrow y_{0}}\lim_{x\rightarrow x_{0}}f(x,y)\,=\,L$ and vice
versa), as approaching it from different directions leads to different
results. Therefore, what creates the difference between the two integrations
in (187) is a contribution that exists only in the generalized sense.
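As a numerical cross-check of (187) and of the order-dependence just described, both iterated integrals can be evaluated with standard adaptive quadrature; a minimal sketch (assuming scipy is available, and noting that the quadrature nodes never hit the singular point exactly):

```python
from scipy import integrate

def g(x, y):
    # The integrand xy(x^2 - y^2)/(x^2 + y^2)^3 of eqs. (188)-(189)
    return x * y * (x**2 - y**2) / (x**2 + y**2) ** 3

# I: inner integral over x in [0, 2], outer integral over y in [0, 1].
# dblquad integrates func(inner, outer), so the inner variable comes first.
I, _ = integrate.dblquad(lambda x, y: g(x, y), 0, 1, lambda y: 0, lambda y: 2)

# J: inner integral over y in [0, 1], outer integral over x in [0, 2].
J, _ = integrate.dblquad(lambda y, x: g(x, y), 0, 2, lambda x: 0, lambda x: 1)

print(I)  # approximately -0.05 = -1/20
print(J)  # approximately  0.20 =  1/5
```

That plain iterated quadrature recovers both order-dependent values is consistent with the discussion above: for a fixed nonzero value of the outer variable, each inner integral is perfectly regular.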
## Appendix E Can Fubini be applied for gauge theories?
"Everybody is changing, and I don’t feel the same," Keane.
In this appendix we would like to examine the mathematical justification for
using Fubini's theorem, as in eq. (13), in accordance with the necessary
condition for absolute convergence, eq. (18). Our intention is to compute the
second term of (10),
$\mathcal{\hat{O}}(t^{\prime})\,\equiv\,\hat{H}(t^{\prime})\int_{t_{0}}^{t^{\prime}}dt^{\prime\prime}\,\hat{H}(t^{\prime\prime}),$
(195)
using renormalizable QED, as previously discussed in Section 4.5. As mentioned
below, one can identify the quantity $E_{e^{-}\gamma}-E_{e^{-}}\geq 0$ in
order to write the Hamiltonian as vanishing in the asymptotic region of the
phase space,
$\hat{H}(t,\bm{p})\,=\,\lim_{\epsilon\rightarrow
0}e^{-\epsilon(E_{e^{-}\gamma}-E_{e^{-}})}\hat{H}(t,\bm{p}).$ (196)
By using the identity (120) and (154), denoting $\mathcal{V}_{e^{-}\rightarrow
e^{-}\gamma}\,\equiv\,\left\langle
e^{-}\gamma\right|\hat{H}(t,\bm{p})\left|e^{-}\right\rangle$, one can rewrite
(195) as
$\begin{split}&\mathcal{\hat{O}}(t^{\prime})\,=\,\int d\Pi_{\tilde{e}^{-}}\int
d\Pi_{e^{-}\gamma}\left|\tilde{e}^{-}\right\rangle\lim_{\epsilon_{2}\rightarrow
0}e^{-i(t^{\prime}-i\epsilon_{2})(E_{e^{-}\gamma}-E_{\tilde{e}^{-}})}\mathcal{V}_{e^{-}\gamma\rightarrow\tilde{e}^{-}}\\\
&\,\;\qquad\times\int_{t_{0}}^{t^{\prime}}dt^{\prime\prime}\int
d\Pi_{e^{-}}\,\lim_{\epsilon_{1}\rightarrow
0}e^{i(t^{\prime\prime}+i\epsilon_{1})(E_{e^{-}\gamma}-E_{e^{-}})}\mathcal{V}_{e^{-}\rightarrow
e^{-}\gamma}\left\langle e^{-}\right|.\end{split}$ (197)
Direct calculation (see (47)) shows that, up to translations, the dependence of the
vertex and the energy denominator on the momentum of the photon $\bm{p}^{i}$
goes as
$E_{e^{-}\gamma}-E_{e^{-}}\,\sim\,\bm{p}^{2},\qquad\qquad\quad\mathcal{V}_{e^{-}\rightarrow
e^{-}\gamma}\,\sim\,\mathcal{V}_{e^{-}\gamma\rightarrow\tilde{e}^{-}}\,\sim\,\bm{p}^{i}\,\delta(mom.),$
(198)
where $\delta(mom.)$ denotes momentum conservation. Performing the phase-space
integrations of (197) over the electron states then yields:
$\left\langle
e^{-}\right|\int_{t_{0}}^{t}dt^{\prime}\,\mathcal{\hat{O}}(t^{\prime})\left|e^{-}\right\rangle\,\sim\,\int_{t_{0}}^{t}dt^{\prime}\,\int_{t_{0}}^{t^{\prime}}dt^{\prime\prime}\,\lim_{\epsilon\rightarrow
0}\int\frac{d^{2}\bm{p}}{(2\pi)^{3}}\,e^{-i(t^{\prime}-t^{\prime\prime}-i\epsilon)\bm{p}^{2}}\,\bm{p}^{2}\,,$
(199)
with $a\,\equiv\,t^{\prime}-t^{\prime\prime}-i\epsilon$. The last integration
over $\bm{p}$ can be performed straightforwardly:
$\int_{0}^{\Lambda}d^{2}\bm{p}\,\bm{p}^{2}e^{-a\bm{p}^{2}}\,=\,2\pi\int_{0}^{\Lambda}d|\bm{p}|\,|\bm{p}|^{3}e^{-a|\bm{p}|^{2}}\,=\,\frac{\pi}{a^{2}}\left[1-e^{-a\Lambda^{2}}(1+a\Lambda^{2})\right],$
(200)
and after taking the case of an unbounded phase space (taking the limit
$\Lambda\rightarrow\infty$ gives
$\lim_{\Lambda\rightarrow\infty}\int_{0}^{\Lambda}d^{2}\bm{p}\,\bm{p}^{2}e^{-a\bm{p}^{2}}\,=\,\frac{\pi}{a^{2}}$):
$\left\langle
e^{-}\right|\int_{t_{0}}^{t}dt^{\prime}\,\mathcal{\hat{O}}(t^{\prime})\left|e^{-}\right\rangle\,\sim\,\int_{t_{0}}^{t}dt^{\prime}\,\int_{t_{0}}^{t^{\prime}}dt^{\prime\prime}\,\lim_{\epsilon\rightarrow
0}\frac{1}{(t^{\prime}-t^{\prime\prime}-i\epsilon)^{2}}.$ (201)
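The closed form (200) and its unbounded-phase-space limit can be verified symbolically; a minimal sketch (assuming sympy is available):

```python
import sympy as sp

p, a, Lam = sp.symbols("p a Lambda", positive=True)

# Radial form of the 2D momentum integral: 2*pi * int_0^Lambda p^3 e^{-a p^2} dp
radial = sp.integrate(2 * sp.pi * p**3 * sp.exp(-a * p**2), (p, 0, Lam))

# Closed form claimed in eq. (200)
expected = sp.pi / a**2 * (1 - sp.exp(-a * Lam**2) * (1 + a * Lam**2))

print(sp.simplify(radial - expected))  # 0: the two expressions agree
print(sp.limit(radial, Lam, sp.oo))    # pi/a**2, the unbounded phase-space case
```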
The integrand of the last result can be rewritten by using the
distributionally differentiated version of the Sokhotski–Plemelj theorem, eq.
(49), with $n=2$:
$\lim_{\epsilon\rightarrow
0}\frac{1}{(t^{\prime}-t^{\prime\prime}-i\epsilon)^{2}}\,=\,P.v.\left(\frac{1}{(t^{\prime}-t^{\prime\prime})^{2}}\right)+i\pi\delta^{\prime}(t^{\prime}-t^{\prime\prime}).$
(202)
Therefore, the main implication is that the use of the Fubini theorem, i.e.,
taking the steps (13) and (14), is unjustified for this choice of Hamiltonian.
## Declarations
Conflict of interest: The author has no relevant financial or non-financial
interests to disclose. No funding was received for conducting this study.
Data availability: Data sharing not applicable to this article as no datasets
were generated or analysed during the current study.
Open Access: This article is licensed under a Creative Commons Attribution 4.0
International License, which permits use, sharing, adaptation, distribution
and reproduction in any medium or format, as long as you give appropriate
credit to the original author(s) and the source, provide a link to the
Creative Commons licence, and indicate if changes were made. The images or
other third party material in this article are included in the article’s
Creative Commons licence, unless indicated otherwise in a credit line to the
material. If material is not included in the article’s Creative Commons
licence and your intended use is not permitted by statutory regulation or
exceeds the permitted use, you will need to obtain permission directly from
the copyright holder. To view a copy of this licence, visit
http://creativecommons.org/licenses/by/4.0/
## References
* (1) E. Schrödinger, "An undulatory theory of the mechanics of atoms and molecules," Physical review, 28(6), 1049.
* (2) Landau, L. D., & Lifshitz, E. M. (2013). Quantum mechanics: non-relativistic theory (Vol. 3). Elsevier.
Böhm, A. (2013). Quantum mechanics: foundations and applications. Springer
Science & Business Media.
Dirac, P. A. M. (1981). The principles of quantum mechanics (No. 27). Oxford
university press.
Shankar, R. (2012). Principles of quantum mechanics. Springer Science &
Business Media.
Sakurai, J. J., & Commins, E. D. (1995). Modern quantum mechanics, revised
edition.
* (3) Peskin, M. E. (2018). An introduction to quantum field theory. CRC press.
Kaiser, D. (2018). Lectures of Sidney Coleman on Quantum Field Theory:
Foreword by David Kaiser. World Scientific Publishing.
Itzykson, C., & Zuber, J. B. (2012). Quantum field theory. Courier
Corporation.
Schwartz, M. D. (2014). Quantum field theory and the standard model. Cambridge
university press.
* (4) Robinson, J. C. (2020). An introduction to functional analysis. Cambridge University Press.
Yosida, K. (2012). Functional analysis. Springer Science & Business Media.
Suhubi, E. (2013). Functional analysis. Springer Science & Business Media.
Reed, M. (2012). Methods of modern mathematical physics: Functional analysis.
Elsevier. Shalit, O. M. (2017). A first course in functional analysis. CRC
Press.
* (5) Gitman, D. M., Tyutin, I. V., & Voronov, B. L. (2012). Self-adjoint extensions in quantum mechanics: general theory and applications to Schrödinger and Dirac equations with singular potentials (Vol. 62). Springer Science & Business Media.
* (6) Gieres, F. (2000). Mathematical surprises and Dirac’s formalism in quantum mechanics. Reports on Progress in Physics, 63(12), 1893.
Bonneau, G., Faraut, J., & Valent, G. (2001). Self-adjoint extensions of
operators and the teaching of quantum mechanics. American Journal of physics,
69(3), 322-331.
* (7) A. Mariani and U. J. Wiese (2023). "Self-adjoint Momentum Operator for a Particle Confined in a Multi Dimensional Cavity".
M. H. Al-Hashimi and U.-J. Wiese (2021). "Alternative momentum concept for a
quantum mechanical particle in a box," Phys. Rev. Research, vol. 3, p.
L042008.
* (8) Dyson, Freeman J. "The radiation theories of Tomonaga, Schwinger, and Feynman." Physical Review 75.3 (1949): 486.
Dyson, Freeman J. "The $S$ matrix in quantum electrodynamics." Physical Review
75.11 (1949): 1736.
* (9) Apelian, C., & Surace, S. (2009). Real and complex analysis. CRC press.
Simon, B. (2015). Real analysis. American Mathematical Soc..
Folland, G. B. (1999). Real analysis: modern techniques and their applications
(Vol. 40). John Wiley & Sons.
Makarov, B., & Podkorytov, A. (2013). Real analysis: measures, integrals and
applications. Springer Science & Business Media.
* (10) Teschl, G. Mathematical methods in quantum mechanics. Vol. 157. American Mathematical Soc., 2014.
Dimock, J. Quantum mechanics and quantum field theory: a mathematical primer.
Cambridge University Press, 2011.
* (11) Schmüdgen, K. Unbounded self-adjoint operators on Hilbert space. Vol. 265. Springer Science Business Media, 2012.
Goldberg, S. (2006). Unbounded linear operators: Theory and applications.
Courier Corporation.
* (12) Magnus, W. (1954). On the exponential solution of differential equations for a linear operator. Communications on pure and applied mathematics, 7(4), 649-673.
* (13) Hall, B. C. (2013). Quantum theory for mathematicians. springer publication..
Neumann, J. V. (1932). Uber einen satz von herrn mh stone. Annals of
Mathematics, 567-573.
* (14) Blanes, S., Casas, F., Oteo, J. A., & Ros, J. (2009). The Magnus expansion and some of its applications. Physics reports, 470(5-6), 151-238.
* (15) Hall, B. C., & Hall, B. C. (2013). Lie groups, Lie algebras, and representations (pp. 333-366). Springer New York.
Lipschutz, S., & Lipson, M. L. (2018). Schaum’s Outline of Linear Algebra.
McGraw-Hill Education.
* (16) Lindelöf, E. (1894). "Sur l’application de la méthode des approximations successives aux équations différentielles ordinaires du premier ordre". Comptes rendus hebdomadaires des séances de l’Académie des sciences. 118: 454–7.
* (17) Seifert, C., Trostorff, S., & Waurick, M. (2022). Evolutionary Equations: Picard’s Theorem for Partial Differential Equations, and Applications (p. 317). Springer Nature.
* (18) Lax, P. D., & Richtmyer, R. D. (1956). Survey of the stability of linear finite difference equations. Communications on pure and applied mathematics, 9(2), 267-293.
* (19) Byron, Frederick W., and Robert W. Fuller. Mathematics of classical and quantum physics. Courier Corporation, 2012.
* (20) Lighthill, M.J. An Introduction to Fourier Analysis and Generalised Functions
J C. Ferreira, R. F. Hoskins, J. Sousa-Pinto, Introduction to the Theory of
Distributions.
R. F. Hoskins, Delta Functions, Introduction to Generalised Functions, 2009.
* (21) Schwartz, L. (1954). Sur l’impossibilité de la multiplication des distributions. CR Acad. Sci. Paris, 239(847-848), 6.
* (22) Kanwal, R. P. (2004). Generalized functions: theory and applications. Springer Science & Business Media.
Johnson, S. G. “When functions have no value(s)”.
* (23) Bilenky, S. M. (2013). Introduction to Feynman Diagrams: International Series of Monographs in Natural Philosophy (Vol. 65). Elsevier.
* (24) Stroock, D. W. (2013). An introduction to Markov processes (Vol. 230). Springer Science & Business Media.
Dynkin, E. B. (2012). Theory of Markov processes. Courier Corporation.
Rivas, A., Huelga, S. F., Rivas, A., & Huelga, S. F. (2012). Quantum Markov
process: mathematical structure. Open Quantum Systems: An Introduction, 33-48.
* (25) Trotter, H. F. (1958). Approximation of semi-groups of operators.
* (26) Moretti, V. (2017). Spectral theory and quantum mechanics. UNITEXT, Italy: Springer International Publishing AG.
Prugovecki, E. (1982). Quantum mechanics in Hilbert space. Academic Press.
* (27) S. Albeverio, F. Gesztesy, R. Hoegh-Krohn, H. Holden, Solvable Models in Quantum Mechanics,2012.
* (28) Flugge, S. (1999). Practical quantum mechanics (Vol. 177). Springer Science & Business Media.
Galitskii, V. M., Karnakov, B., Galitski, V., & Kogan, V. I. (2013). Exploring
quantum mechanics: A collection of 700+ solved problems for students,
lecturers, and researchers. Oxford University Press, USA.
* (29) Prugovecki, E. (1982). Quantum mechanics in Hilbert space. Academic Press.
Gallone, F. (2014). Hilbert Space and Quantum Mechanics. World Scientific
Publishing Company.
Blank, J., Exner, P., & Havlicek, M. (2008). Hilbert space operators in
quantum physics. Springer Science & Business Media.
* (30) Berezanskiĭ, I. M. (1968). Expansions in eigenfunctions of selfadjoint operators (Vol. 17). American Mathematical Soc..
* (31) Gelfand, I. M., Graev, M. I., & Vilenkin, N. Y. (1962). Generalized Functions, Vol. 1-5, Gos. Izd. Fiz. Mat. Lit., Moscow.
De la Madrid, R. (2005). The role of the rigged Hilbert space in quantum
mechanics. European journal of physics, 26(2), 287.
Roberts, J. E. (1966). Rigged Hilbert spaces in quantum mechanics.
Communications in Mathematical Physics, 3, 98-119.
* (32) Bender, C. M. (2007). Making sense of non-Hermitian Hamiltonians. Reports on Progress in Physics, 70(6), 947.
* (33) Moiseyev, N. (2011). Non-Hermitian quantum mechanics. Cambridge University Press.
Bountis, T., & Skokos, H. (2012). Complex hamiltonian dynamics (Vol. 10).
Springer Science & Business Media.
Bagarello, F., Passante, R., & Trapani, C. (2016). Non-Hermitian Hamiltonians
in quantum physics. Springer Proceedings in Physics, 184.
Berry, M V (2011) "Optical polarization evolution near a non-Hermitian
degeneracy," J. Optics 13, 11570.
Berry, M. V. (2004) "Physics of nonhermitian degeneracies," Czech.J.Phys 54
1039-1047.
* (34) Mostafazadeh, A. (2010). Pseudo-Hermitian representation of quantum mechanics. International Journal of Geometric Methods in Modern Physics, 7(07), 1191-1306.
Ashida, Y., Gong, Z., & Ueda, M. (2020). Non-hermitian physics. Advances in
Physics, 69(3), 249-435.
* (35) N. N. Bogolubov, and N. N. Bogolubov Jr. "Introduction to quantum statistical mechanics," World Scientific Publishing Company, 2009.
* (36) J. von Neumann, "Wahrscheinlichkeitstheoretischer Aufbau der Quantenmechanik," Nachr. Ges. Wiss. Göttingen, 1, 245-272, 1927.
* (37) Sylvester, J. (1884). "Sur l’equations en matrices $px=xq$". C. R. Acad. Sci. Paris. 99 (2): 67–71, 115–116.
Bartels, R. H.; Stewart, G. W. (1972). "Solution of the matrix equation
$AX+XB=C$". Comm. ACM. 15 (9): 820–826. doi:10.1145/361573.361582. S2CID
12957010.
* (38) Parks, P. C. (1992). AM Lyapunov’s stability theory—100 years on. IMA journal of Mathematical Control and Information, 9(4), 275-303.
* (39) Hannesdottir, H. S., Mizera, S. (2023). What is the $i\varepsilon$ for the S-matrix?. Springer.
Collins, J. C. (1984). Renormalization: an introduction to renormalization,
the renormalization group and the operator-product expansion. Cambridge
university press.
* (40) Halmos, P. R. (2013). Measure theory (Vol. 18). Springer.
Tao, T. (Ed.). (2011). An introduction to measure theory (Vol. 126). American
Mathematical Soc..
Swartz, C. W. (1994). Measure, integration and function spaces. World
Scientific.
R. G. Bartle, "The elements of integration and Lebesgue measure," John Wiley
Sons, 2014.
P. R. Halmos, "Measure theory," Vol. 18. Springer, 2013.
* (41) Dumitru, Adrian, and Risto Paatelainen. "Sub-femtometer scale color charge fluctuations in a proton made of three quarks and a gluon." Physical Review D 103.3 (2021): 034026.
* (42) Born, M. (1926). Quantenmechanik der stoßvorgänge. Zeitschrift für physik, 38(11-12), 803-827.
* (43) Feynman, R. P., Hibbs, A. R., & Styer, D. F. (2010). Quantum mechanics and path integrals. Courier Corporation.
Feynman, R. P. (1948). Space-time approach to non-relativistic quantum
mechanics. Reviews of modern physics, 20(2), 367.
MacKenzie, R. (2000). Path integral methods and applications. arXiv preprint
quant-ph/0004090.
Schulman, L. S. (2012). Techniques and applications of path integration.
Courier Corporation.
* (44) Souriau, J. M. (2005, May). Construction explicite de l’indice de Maslov. Applications. In Group Theoretical Methods in Physics: Fourth International Colloquium, Nijmegen 1975 (pp. 117-148). Berlin, Heidelberg: Springer Berlin Heidelberg.
Horváthy, P. A. (1979). Extended Feynman formula for harmonic oscillator (No.
CNRS-CPT–79-P-1083). Centre National de la Recherche Scientifique.
* (45) Z. Chen, and A. H. Mueller. "The dipole picture of high energy scattering, the BFKL equation and many gluon compound states," Nuclear Physics B 451.3 (1995): 579-604.
C. Marquet, "Forward inclusive dijet production and azimuthal correlations in
pA collisions." Nuclear Physics A 796.1-4 (2007): 41-60.
* (46) Kovner, A., & Wiedemann, U. A. (2001). Eikonal evolution and gluon radiation. Physical Review D, 64(11), 114002.
* (47) E. Iancu, Y. Mulian. "Forward trijet production in proton–nucleus collisions." Nuclear Physics A 985 (2019): 66-127.
E. Iancu, and Y. Mulian. "Forward dijets in proton-nucleus collisions at next-
to-leading order: the real corrections." Journal of High Energy Physics 2021.3
(2021): 1-67.
* (48) M. Lublinsky, and Y. Mulian. "High Energy QCD at NLO: from light-cone wave function to JIMWLK evolution." Journal of High Energy Physics 2017.5 (2017): 1-80.
* (49) Gomis, Jaume, and Thomas Mehen. "Space–time noncommutative field theories and unitarity." Nuclear Physics B 591.1-2 (2000): 265-276. Bahns, D. (2003). Unitary quantum field theory on the noncommutative Minkowski space. Fortschritte der Physik: Progress of Physics, 51(7-8), 658-663. Morita, K., Okumura, Y., & Umezawa, E. (2003). Lorentz invariance and the unitarity problem in non-commutative field theory. Progress of theoretical physics, 110(5), 989-1001.
* (50) Hogervorst, M., Rychkov, S., & van Rees, B. C. (2016). Unitarity violation at the Wilson-Fisher fixed point in 4-$\epsilon$ dimensions. Physical Review D, 93(12), 125025. Jin, Q., Ren, K., Yang, G., & Yu, R. (2023). Is Yang-Mills Theory Unitary in Fractional Spacetime Dimensions?. arXiv preprint arXiv:2301.01786.
* (51) Rivas, A., & Huelga, S. F. (2012). Open quantum systems (Vol. 10, pp. 978-3). Berlin: Springer.
Breuer, H. P., & Petruccione, F. (2002). The theory of open quantum systems.
Oxford University Press, USA.
Rotter, I., & Bird, J. P. (2015). A review of progress in the physics of open
quantum systems: theory and experiment. Reports on Progress in Physics,
78(11), 114001.
Isar, A., A. Sandulescu, H. Scutaru, E. Stefanescu, and W. Scheid. "Open
quantum systems." International Journal of Modern Physics E 3, no. 02 (1994):
635-714.
* (52) Higham, N. J. (2008). Functions of matrices: theory and computation. Society for Industrial and Applied Mathematics.
Horn, R. A., & Johnson, C. R. (2012). Matrix analysis. Cambridge university
press.
* (53) R. Salem, "On some singular monotonic functions which are strictly increasing." Transactions of the American Mathematical Society 53.3 (1943): 427-439.
Feder, J., & Feder, J. (1988). Cantor Sets. Fractals, 62-65.
* (54) Hardy, G. H. (1901). The Elementary Theory of Cauchy’s Principal Values. Proceedings of the London Mathematical Society, 1(1), 16-40.
* (55) Estrada, R., & Kanwal, R. P. (2000). Singular integral equations. Springer Science & Business Media.
Mandal, B. N., & Chakrabarti, A. (2016). Applied singular integral equations.
CRC press.
Nested simulation is a natural approach to tackle nested estimation problems in operations research and financial engineering.
The outer-level simulation generates outer scenarios and the inner-level simulations are run in each outer scenario to estimate the corresponding conditional expectation.
The resulting sample of conditional expectations is then used to estimate different risk measures of interest.
Despite its flexibility, nested simulation is notorious for its heavy computational burden.
We introduce a novel simulation procedure that reuses inner simulation outputs to improve the efficiency and accuracy in solving nested estimation problems. We analyze the convergence rates of the bias, variance, and MSE of the resulting estimator. In addition, central limit theorems and variance estimators are presented, which lead to asymptotically valid confidence intervals for the nested risk measure of interest.
We conduct numerical studies on two financial risk measurement problems.
Our numerical studies show results consistent with the asymptotic analysis and demonstrate that the proposed approach outperforms standard nested simulation and a state-of-the-art regression approach for nested estimation problems.
Key words: nested simulation, risk management, likelihood ratio method, central limit theorem, confidence interval
§ INTRODUCTION
Nested estimation is the problem of estimating a functional of a conditional expectation.
In this study, we propose and analyze an efficient simulation method for a class of nested estimation problems.
Specifically, the quantity to be estimated is
\begin{equation}\label{eq:rho}
\rho = \rho(\E\left[H(X,Y)|X\right]) = \E\left[g(\E\left[H(X,Y)|X\right])\right],
\end{equation}
where $X$ and $Y$ are both random vectors of fixed dimensions, $H(\cdot,\cdot)$ is a multi-variate mapping, and $g(\cdot)$ is a real-valued function.
In a nested simulation, we call $X$ the outer scenario, $Y$ the inner-level random variable, $H(\cdot,\cdot)$ the inner simulation model, and $g(\cdot)$ the risk function.
Nested estimation [Hong et al., 2017] has important applications in operations research, such as risk measurement [Lee, 1998, Gordy and Juneja, 2010] and input uncertainty quantification [Cheng and Holland, 1997, Barton, 2012, Zhu et al., 2020].
Nested simulation [Gordy and Juneja, 2010, Broadie et al., 2011], which is also known as two-level and stochastic-on-stochastic simulation, is a natural solution for the above nested estimation problems:
Consider estimating some risk measures of a portfolio of financial instruments whose values are affected by different risk factors such as equity returns, interest rates, mortality rates, etc.
In this case, $X$ represents the evolution of the underlying risk factors up to a future time (i.e., the risk horizon), say in one month, when risk measurement is required.
The outer-level simulation generates $n$ realizations of $X$, which are called the scenarios.
Given a scenario $X$, $Y|X$ denotes the risk factors' evolution between the risk horizon and the portfolio's maturity, say in one year, $H(X,Y)$ denotes the (discounted) loss of the portfolio at maturity, and $\E[H(X,Y)|X]$ denotes the portfolio's mark-to-market loss at the risk horizon.
For each scenario $X$, an inner simulation is performed where $m'$ sample paths of $Y|X$ are generated.
The discounted losses $H(X,Y)$ can then be calculated, whose sample average can be used to estimate the loss of scenario $X$, i.e., $\E[H(X,Y)|X]$.
As $X$ is stochastic, so is $\E[H(X,Y)|X]$.
Depending on the risk function $g(\cdot)$, the nested estimation problem (<ref>) can be used to estimate popular risk measures like the exceedance probability, conditional value-at-risk (CVaR), and squared tracking error of $\E[H(X,Y)|X]$.
For example, for an indicator function $g(x)=\1\{x\geq x_0\}$ and a quadratic function $g(x)=(x-x_0)^2$ for some threshold $x_0$, $\rho(\E\left[H(X,Y)|X\right])$ is the exceedance probability beyond $x_0$ and the squared tracking error, respectively.
For a hockey-stick function $g(x)= x_0 + \frac{1}{1-\alpha}\max\{x-x_0, 0\}$, where $x_0$ is the $\alpha$-Value-at-Risk (VaR) of $\E\left[H(X,Y)|X\right]$, $\rho(\E\left[H(X,Y)|X\right])$ is the $\alpha$-CVaR.
Interested readers can refer to [Broadie et al., 2015] and [Hong et al., 2017] on nested estimation for these risk measures.
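To make the role of $g(\cdot)$ concrete, the following sketch estimates all three risk measures from a sample of conditional-expectation values (the Gaussian sample is a stand-in for realizations of $\E[H(X,Y)|X]$; the threshold and level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
losses = rng.normal(size=100_000)  # stand-in sample of E[H(X,Y)|X]
x0, alpha = 1.5, 0.95

exceedance_prob = np.mean(losses >= x0)        # g(x) = 1{x >= x0}
tracking_error = np.mean((losses - x0) ** 2)   # g(x) = (x - x0)^2

var = np.quantile(losses, alpha)               # alpha-VaR, used as the threshold
cvar = var + np.mean(np.maximum(losses - var, 0.0)) / (1 - alpha)  # hockey-stick g
```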
[Figure: Schematic illustration of standard two-level nested simulation. The outer stage generates $n$ scenarios $X_1,\ldots,X_n$. Conditional on each $X_i$, $m'$ inner replications $Y_{i1},\ldots,Y_{im'}\stackrel{i.i.d.}{\sim} f(y|X_i)$ are generated to estimate $\E[H(X,Y)|X=X_i]$; these per-scenario estimates are then combined to estimate $\rho$.]
Figure <ref> is a schematic illustration of a standard nested simulation procedure.
The standard nested simulation procedure estimates $\E[H|X]$ for each scenario $X$ by considering the inner replications of that scenario only.
This exclusivity leads to the nested structure, which then requires $\Gamma = m'n$ inner replications in total, e.g., $Y_{ij}$ and $H(X_i,Y_{ij})$ for $i=1,\ldots,n$ and $j=1,\ldots,m'$; $\Gamma$ is called the simulation budget.
In theory, the risk estimator in a nested simulation procedure converges to the true risk measure as the numbers of outer and inner simulations grow.
However, depending on the complexity of the risk factor models and the derivative payoffs, every inner replication can be quite time-consuming to compute.
So, in practice, the simulation budget $\Gamma$ can be an excessive computational burden and unbearably large computations may be required to achieve satisfactory accuracy.
Alleviating the computational burden, by different means and in different ways, has attracted much research attention in the simulation literature.
Firstly, some studies focus on intelligent allocations of a fixed simulation budget $\Gamma$ so that the resulting risk measure $\rho$ is accurately estimated.
[Lee, 1998], [Lee and Glynn, 2003], and [Gordy and Juneja, 2010] analyze the nested simulation estimator and demonstrate that, under some assumptions, the asymptotic mean squared error (MSE) of the standard nested risk estimator diminishes at an optimal rate of $\Gamma^{-2/3}$; [Gordy and Juneja, 2010] shows that this optimal convergence rate is achieved when $m'=\cO(\Gamma^{1/3})$ and $n=\cO(\Gamma^{2/3})$ as $\Gamma\rightarrow \infty$.
[Broadie et al., 2011] proposes a sequential allocation scheme where different outer scenarios have different numbers of inner replications when estimating the probabilities of large portfolio losses.
The MSE of the resulting risk estimator is shown to have a rate of convergence of $\Gamma^{-4/5+\varepsilon}$ for any $\varepsilon>0$.
[Liu et al., 2010] and [Lan et al., 2010] use ranking-and-selection techniques to adaptively allocate the simulation budget to estimate CVaR and its confidence interval, respectively.
A second line of research aims to reduce the standard nested simulation's computational burden by estimating $\E[H|X]$ via regression or metamodeling techniques.
For example, least-square Monte Carlo (LSMC) [Longstaff and Schwartz, 2001, Tsitsiklis and Van Roy, 2001] is a quintessential parametric approach for pricing American options, where a regression model is used to approximate the conditional expectation $\E[H|X]$.
See also [Carriere, 1996] for a general discussion of nonparametric regression techniques in Monte Carlo simulation.
[Broadie et al., 2015] applies this LSMC approach in nested estimation of financial risk and shows that the MSE of the resulting risk estimator converges at the order of $\Gamma^{-1+\delta}$ for any $\delta>0$.
Despite the fast convergence rate, the MSE generally converges to a nonzero asymptotic squared bias that depends on the selection of basis functions.
[Liu and Staum, 2010] considers a metamodeling approach that estimates $\E[H|X]$ by a stochastic kriging model [Ankenman et al., 2010].
Besides selecting appropriate basis functions and covariance functions, the implementation of stochastic kriging is not trivial and may be prone to numerical instability [Staum, 2009].
[Hong et al., 2017] proposes a kernel smoothing approach, which estimates $\E[H|X]$ by the well-known Nadaraya-Watson kernel estimator [Nadaraya, 1964, Watson, 1964].
The MSE of the resulting risk estimator achieves a convergence rate of $\Gamma^{-\min\{1,4/(d+2)\}}$, where $d$ is the problem dimension.
These approaches use simulation outputs from different scenarios, sometimes from a pilot experiment, to calibrate the regression model or metamodel that approximates or predicts $\E[H|X]$ for different scenarios.
While the pooling of simulation outputs improves simulation efficiency, these approaches suffer from modeling errors that depend on selection of basis functions, covariance functions, or kernel bandwidth.
As a result, these approaches lead to biased estimators; sometimes this bias vanishes asymptotically, sometimes the bias persists.
In this article we study a novel simulation procedure, called the green nested simulation (GNS) procedure, that pools inner simulation outputs from different outer scenarios but avoids the difficulties in the regression- and metamodeling-based techniques.
The contributions of our study include:
* We propose an efficient simulation procedure that is non-nested in nature and recycles the same set of inner simulation outputs via the likelihood ratio method to estimate $\E[H|X]$ in different scenarios.
The proposed procedure does not require any model selection or calibration.
* We establish that the asymptotic bias, variance, and MSE of the risk estimator all converge to zero at rate $\cO(\Gamma^{-1})$.
This convergence rate is faster than that of nested simulation with optimal allocation and that of the kernel-based approach.
Most importantly, $\cO(\Gamma^{-1})$ is the same fast convergence rate as a non-nested Monte Carlo simulation.
* We establish central limit theorem (CLT) and valid variance estimates for the nested simulation estimators for different forms of $\rho$.
These results enable users to construct valid confidence intervals for nested simulation estimators without running macro replications.
The analysis is non-trivial considering that all conditional expectations are estimated using the same set of inner simulation outputs and are therefore all correlated.
In essence, the GNS procedure recycles the same set of simulation outputs, via the likelihood ratio method [Beckman and McKay, 1987, L'Ecuyer, 1990], to estimate the conditional expectation $\E[H|X]$ for different scenarios $X$.
The GNS procedure is inspired by green simulation [Feng and Staum, 2017] and likelihood ratio metamodeling [Dong et al., 2018], which improve simulation efficiency by reusing simulation outputs.
Stochastic mesh for American option pricing [Broadie et al., 2000, Broadie and Glasserman, 2004, Avramidis and Hyden, 1999, Avramidis and Matzinger, 2004] is also an application of the likelihood ratio method.
The GNS procedure and the stochastic mesh are mathematically similar but the two approaches tackle different problems, serve different purposes, and are applied in different contexts.
The former aims to solve nested estimation problems (risk measurement) while the latter solves a dynamic programming problem (American option pricing).
The rest of this paper is organized as follows.
The problem statement and general mathematical framework are given in Section <ref>.
Sections <ref> and <ref> present the main asymptotic analyses: Section <ref> analyzes the convergence of the green loss estimator to the conditional expectation random variable and Section <ref> analyzes the asymptotic bias, variance, MSE, as well as the CLT and valid confidence interval of the portfolio risk estimator.
Numerical experiments are summarized in Section <ref>, followed by conclusions in Section <ref>.
Technical proofs and auxiliary discussions are provided in the appendices.
§ A SAMPLE RECYCLING APPROACH
§.§ Standard Nested Simulation
Standard nested simulation (SNS), as illustrated in Figure <ref>, is a common approach for estimating the quantity in Equation (<ref>).
* (Outer simulation) Simulate $n$ independent and identically distributed (i.i.d.) outer scenarios, denoted by $X_1,\ldots,X_n$.
* (Inner simulation) For each scenario $X_i$, $i=1,\ldots,n$, simulate $m'$ i.i.d. inner replications, e.g., $Y_{i1},\ldots,Y_{im'} \stackrel{i.i.d.}{\sim}f(y|X=x_i)$ then estimate $L(X_i)$ by $ L^{SNS}_{m'}(X_i) = \frac{1}{m'}\sum_{j=1}^{m'} H(X_i,Y_{ij})$.
* (Risk estimation) Estimate the risk measure $\rho$ in (<ref>) by $\rho^{SNS}_{m'n} = \avgni g(L^{SNS}_{m'}(X_i))$.
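Read literally, the three steps above translate into the following sketch; the particular choices $X\sim N(0,1)$, $Y|X\sim N(X,1)$, $H(x,y)=y^{2}$, and an exceedance-probability risk function are our own stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def H(x, y):
    return y ** 2                      # stand-in inner simulation model

def g(x):
    return (x >= 2.0).astype(float)    # exceedance-probability risk function

n, m_prime = 1_000, 100                # outer scenarios, inner replications

X = rng.normal(size=n)                          # outer simulation: X_i
Y = X[:, None] + rng.normal(size=(n, m_prime))  # inner simulation: Y_ij ~ f(y | X_i)
L_sns = H(X[:, None], Y).mean(axis=1)           # L^SNS_{m'}(X_i)

rho_sns = g(L_sns).mean()              # risk estimate; total budget Gamma = m' * n
```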
In general, the risk estimation step treats $L^{SNS}_{m'}(X_1),\ldots,L^{SNS}_{m'}(X_n)$ as i.i.d. samples of $L(X)$ to estimate different risk measures.
In this study, we focus on risk measures of the form (<ref>) with different risk functions $g:\R \mapsto \R$.
For illustrative purpose, we present a financial risk measurement example.
Let $S_t$ be a vector of risk factors, which may be the values of equities, bonds, interest rates, exchange rates, etc., at any time $t\geq 0$.
Consider a portfolio of financial instruments, which may include stocks, bonds, and derivatives whose values are affected by the risk factors.
Let $t=0$ be the current time when the initial risk factor values $S_0$ are known and $T>0$ be the maximum maturity of all the instruments in the portfolio.
The portfolio manager is interested in estimating some risk measures of the portfolio's profit and loss at a fixed future time $\tau\in(0,T)$.
Specifically, let $V_\tau$ be the portfolio value at time $\tau$ so the time $\tau$ portfolio loss is given by $L_\tau = V_0-V_\tau$, which is a random variable at time $0$.
Nested simulation can be used to estimate risk measures of $L_\tau$: The risk factors up to $\tau$ are denoted by $X=\{S_t:t\in[0,\tau]\}$, which are the outer-level scenarios.
The risk factors exceeding $\tau$ are denoted by $Y = \{S_t: t\in (\tau,T]\}$, which are the inner-level sample paths.
The inner simulation model $H(X,Y)$ is the discounted portfolio payoff for the simulated path $(X,Y)$ and the risk function $g(\cdot)$ depends on the risk measure of interest.
As alluded in the introduction, important risk measures such as exceedance probability, Conditional Value-at-Risk (CVaR)[Also known as the expected shortfall (ES) and conditional tail expectation (CTE).], and squared tracking error, can all be written as (<ref>) with different risk functions like the indicator function $g(x)=\1\{x\geq x_0\}$, the hockey-stick function $g(x)=(x-x_0)^+ = \max\{x-x_0, 0\}$, and the quadratic function $g(x)=(x-x_0)^2$.
These three risk functions can also be used to approximate more general risk functions, such as those with a finite number of non-differentiable or discontinuous points.
Standard nested simulation is computationally burdensome due to its nested nature, which requires a simulation budget of $\Gamma=m'n$ inner replications.
Moreover, this nested structure leads to a wasteful use of the simulation budget because each estimator $L^{SNS}_{m'}(X_i)$ only uses the $m'$ inner simulation outputs associated with scenario $X_i$ and ignores the $m'(n-1)$ inner simulation outputs from the other scenarios.
In the next section, we propose an efficient simulation procedure that circumvents the nested structure between the outer and inner simulation by recycling all inner simulation outputs in estimating $L(X_i)$ for every scenario $X_i$.
This recycling saves computations and improves efficiency.
§.§ Sample Recycling via Likelihood Ratios
Let $\mathcal{X}\subseteq \R^d$ be the scenario space and $X\in \mathcal{X}$ be a given scenario.
For example, $\mathcal{X}$ may be the support of the random scenario $X$.
Also, let $f(y|X)$ be the conditional density of the inner random variable $Y$ given the scenario $X$.
In other words, the distribution of the inner random variable $Y$ is characterized by the outer scenario $X$.
This is a mild limitation of our method, as the majority of risk measurement problems and many nested estimation problems in operations research satisfy this condition.
Suppose there exists a sampling density $\ftilde(y)$.
We assume that one can generate samples from $\ftilde(y)$ and can calculate values for both $\ftilde(y)$ and $f(y|x)$.
Moreover, the sampling density $\ftilde$ satisfies the condition that $H(x,y) f(y|x) = 0$ whenever $\ftilde(y)=0$.
Then $L(X)=\E[H(X,Y)|X]$ can be written as
\begin{equation}\label{eq:LtauLR}
L(X) = \E[H(X,Y)|X] =\E_{\ftilde}\left[H(X,Y)\frac{f(Y|X)}{\ftilde(Y)}\right] = \E_{\ftilde}\left[\hatH(X,Y)\right],
\end{equation}
where shorthand notation $\hatH(x,y):=H(x,y)\frac{f(y|x)}{\ftilde(y)}$ denotes the likelihood-ratio-weighted simulation output and the subscript in the expectations indicates that $Y\sim \ftilde$.
The identity (<ref>) is mathematically identical to importance sampling, but we do not select the sampling density for variance reduction.
We assume that the sampling density $\ftilde$ is given and we only use the likelihood ratio as a way to recycle simulation outputs for different outer scenarios.
As we will see in the numerical experiments, practical applications usually offer a natural choice of sampling distribution $\ftilde$.
In light of (<ref>), we propose the following green nested simulation (GNS) procedure; a code sketch is given after the figure below:
* (Outer simulation) Simulate $n$ independent and identically distributed (i.i.d.) outer scenarios, denoted by $X_1,\ldots,X_n$.
* (Inner simulation) Simulate $m$ i.i.d. inner replications, e.g., $Y_{1},\ldots,Y_{m} \stackrel{i.i.d.}{\sim}\ftilde(y)$ then estimate $L(X_i)$ by
\begin{equation}\label{eq:LmXi}
L_m(X_i) = \avgmj H(X_i,Y_j) \frac{f(Y_j|X_i) }{\ftilde(Y_j)} = \avgmj \hatH(X_i,Y_j), \quad i=1,\ldots,n.
\end{equation}
* (Risk estimation) Estimate the risk measure $\rho$ in (<ref>) by
\begin{equation}\label{eq:rhomn}
\rho_{mn} = \avgni g(L_m(X_i)).
\end{equation}
Figure <ref> depicts the GNS procedure, which does not have the nested structure as in Figure <ref>.
In the GNS procedure, the outer scenarios $X_1,\ldots,X_n$ and the inner replications $Y_1,\ldots,Y_m$ are simulated separately and independently.
The same inner replications are recycled to estimate all conditional expectations $L(X_1),\ldots,L(X_n)$.
Schematic illustration of the GNS procedure. The inner simulation replications $Y_j\sim\ftilde$ are recycled for all outer scenarios by weighting the corresponding simulation outputs by appropriate likelihood ratios.
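To make the three steps concrete, the following is a minimal Python sketch of the GNS procedure. The samplers `sample_X` and `sample_Y`, the densities `pdf_f` and `pdf_ftilde`, the simulation model `H`, and the risk function `g` are hypothetical user-supplied callables, not part of any specific model:

```python
import numpy as np

def gns(sample_X, sample_Y, pdf_f, pdf_ftilde, H, g, n, m, rng):
    """Sketch of the green nested simulation (GNS) procedure.

    sample_X(k, rng) -> k i.i.d. outer scenarios; sample_Y(k, rng) -> k i.i.d.
    inner replications from ftilde; pdf_f(y, x) = f(y|x); pdf_ftilde(y) = ftilde(y);
    H(x, y) -> vector of inner simulation outputs; g -> risk function.
    """
    X = sample_X(n, rng)                      # outer simulation
    Y = sample_Y(m, rng)                      # inner simulation, shared by all scenarios
    L_m = np.empty(n)
    for i in range(n):
        lr = pdf_f(Y, X[i]) / pdf_ftilde(Y)   # likelihood ratios f(Y_j|X_i)/ftilde(Y_j)
        L_m[i] = np.mean(H(X[i], Y) * lr)     # recycled estimator of L(X_i)
    rho_mn = np.mean(g(L_m))                  # risk estimator
    return rho_mn, L_m
```

Note that the inner replications `Y` are generated once and reused in every pass of the loop; only the likelihood-ratio weights change across scenarios.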
One advantage of the GNS procedure over the standard nested simulation is the computational saving because of sample recycling.
Specifically, when $m=m'$, the GNS procedure and the standard nested simulation use the same number of inner simulation outputs, likelihood-ratio-weighted or not, to estimate each $L(X_i)$.
Then, the computational saving is significant:
* The GNS procedure generates $n$ times fewer inner samples than the standard nested simulation.
In particular, the former simulates $\{Y_{j}\sim \ftilde(y), j=1,\ldots,m\}$ while the latter simulates $\{Y_{ij}\sim f(y|X_i), j=1,\ldots,m', i=1,\ldots,n\}$.
* In many applications, the inner simulation model can be decomposed into two components, one depending on the scenario $X$ and the other on the inner replication $Y$, i.e., $H(X,Y)=H'(h_1(X),h_2(Y))$ for some functions $H'$, $h_1$, and $h_2$. For example, for Asian option payoffs, $h_1(X)$ and $h_2(Y)$ may be the averages of $X$ and $Y$, respectively.
In these cases, the standard nested simulation requires $m'n$ calculations of the second component $h_2(Y_{ij})$ while the GNS procedure only requires $m$ such computations.
This is an $n$-fold saving on the second component of the inner simulation model.
* In some applications, e.g., non-path-dependent payoffs, the inner simulation model depends solely on the inner replication, i.e., $H(X,Y)=H(Y)$.
Then, the GNS procedure only calculates $m$ inner simulation outputs, i.e., $\{H(Y_{j}), j=1,\ldots,m\}$, then recycles and reuses them in estimating $L(X_i)$ for all $n$ outer scenarios.
The standard nested simulation, in contrast, calculates $m'$ inner simulation outputs, i.e., $\{H(Y_{ij}), j=1,\ldots,m'\}$, for each of the $n$ outer scenarios.
This is an $n$-fold saving on the entire simulation output computation.
* Moreover, if the user chooses to increase the number of outer scenarios after an experiment, the GNS procedure can continue reusing the same set of inner simulation outputs while more inner replications are required for standard nested simulation.
* Admittedly, the GNS procedure requires likelihood ratio calculations to reuse the inner simulation outputs, but in most applications the computational effort of the likelihood ratio ${f(Y|X)/\ftilde(Y)}$ is small or even negligible compared to the inner simulation model $H(X,Y)$.
For example, as we see in Section <ref>, in risk management applications where the underlying asset model is Markovian, the likelihood ratio calculation can be simplified.
Thus this additional cost is worth paying for the savings in generating new inner replications and calculating additional simulation outputs.
A second advantage of the GNS procedure is its high accuracy.
When $m=m'n$, so that the GNS procedure uses the same simulation budget as the standard nested simulation, each $L(X_i)$ is estimated by $m=m'n$ inner simulation outputs in the former versus $m'$ in the latter.
Despite the likelihood ratio weight, the GNS procedure estimates each $L(X_i)$ with $n$ times more inner simulation outputs than the standard nested simulation, so the former is expected to be much more accurate than the latter.
Also, as indicated in Equation (<ref>), the likelihood ratio estimator (<ref>) is unbiased.
Compared to the LSMC [Longstaff and Schwartz, 2001] and to the kernel smoothing approach [Hong et al., 2017] for nested simulation, the GNS procedure does not have model error because it does not require the user to select any basis function, kernel function, or kernel bandwidth.
A third advantage of the GNS procedure is the strong convergence of the estimator $L_m(X)$ to $L(X)$ and the fast convergence of the risk estimator $\rho_{mn}$ to $\rho$ as $\min\{m,n\}\to \infty$, which are shown by the asymptotic analyses in Sections <ref> and <ref>.
§ ASYMPTOTIC ANALYSIS FOR CONDITIONAL EXPECTATION ESTIMATOR $L_M( X)$
For notational convenience, in subsequent discussions where no confusion will arise, we simply write $L$, $L_m$, and $\hatH$ in place of $L(X)$, $L_m(X)$, and $\hatH(X,Y)$, respectively.
For simulated samples we will use shorthand notations $\Li$, $\Lmi$, and $\hatHij$ for $L(X_i)$, $L_m(X_i)$, and $\hatH(X_i,Y_j)$, respectively.
For example, we may write $\rho_{mn} = \avgni g(\Lmi) = \avgni g(\avgmj \hatHij)$.
We impose the following assumptions.
* The support of the conditional density $f( y|X)$ is the same for every scenario $X\in\mathcal{X}$.
Moreover, the sampling density satisfies $H(X, y) f( y|X) = 0$ whenever $\ftilde(y)=0$ for all $X\in\mathcal{X}$.
* The inner sample $Y\sim \ftilde(y)$ is independent of the outer scenario $X$. The simulated $\{X_i, i=1,\ldots,n\}$ and $\{Y_{j}, j=1,\ldots,m\}$ are i.i.d. samples of $X$ and $Y$, respectively.
The absolute continuity Assumption <ref> <ref> ensures that the likelihood ratio in (<ref>) is well-defined; this is a standard assumption for analyzing importance sampling and the likelihood ratio method.
It can be satisfied if the common support of $f(y|X)$ is contained in the support of $\ftilde(y)$.
The independence Assumption <ref> <ref> enables us to use Independence Lemma <cit.> and properties of U-Statistics <cit.> in our analysis.
For any fixed scenario $X=x$, $L_m(x)$ is an unbiased estimator for $L(x)$ according to Equation (<ref>).
Our risk measurement problem is more complicated because the scenario $X$ is stochastic.
Nonetheless, we can analyze useful properties of the random variables $\hatH(X,Y)$ and $L_m(X)$.
We first state a useful result for later discussions.
If Assumption <ref> <ref> holds and $\E\left[|\hatH|^p\right]<\infty$ for some positive integer $p$, then $\E[|L|^p] < \infty$.
For any positive integer $p$, $|x|^p$ is a convex function. Then, by Jensen's inequality,
\begin{equation*}\label{eq:boundedLp}
\E[|L|^p] = \E[(\E[\hatH| X])^p] \leq \E[\E[|\hatH|^p| X]] = \E[|\hatH|^p] < \infty.
\end{equation*}
Lemma <ref> means that $\E\left[|\hatH|^p\right]<\infty$ directly implies $\E[|L|^p]<\infty$ so the latter does not need to be explicitly stated provided the former holds.
This simplifies the statements of our propositions and theorems, e.g., Proposition <ref>, whose proof is in Appendix <ref>.
If Assumption <ref> <ref> holds, then $L_m(x)$ is an unbiased estimator for $L(x)$ for any fixed scenario $x$, i.e., $\E\left[L_m(x)\right] = L(x).$
In addition, if Assumption <ref> <ref> also holds and $\E\left[|\hatH|\right]<\infty$, then $L_m( X)$ is a strongly consistent estimator for $L( X)$, i.e.,
\begin{equation*}
L_m( X)\stackrel{a.s.}{\rightarrow} L( X) \mbox{ as } m\rightarrow \infty.
\end{equation*}
The first part of Proposition <ref> is the well-known unbiasedness of the importance sampling estimator.
The second part shows the almost sure convergence of $L_m(X)$ to $L(X)$ in light of the stochasticity of $X$.
This almost sure convergence is useful for establishing asymptotic properties of the GNS risk estimator $\rho_{mn}$.
To facilitate further analysis in Section <ref>, we establish two more useful lemmas below.
Even though we attribute Lemmas <ref> and <ref> to [Avramidis and Matzinger, 2004], our lemmas are extensions of theirs to accommodate the general analysis in this article.
For completeness, we provide their detailed proofs in Appendix <ref>.
Suppose $R$ is a random variable with $\E[R^{2p}]<\infty$ for some positive integer $p$. Then, for any arbitrary $\sigma$-field $\cG$,
\begin{equation*}
\E\left[(R-\E\left[R|\cG\right])^{2p}\right] \leq 4^p\E\left[R^{2p}\right].
\end{equation*}
Consider identically distributed random variables $\{R_j\}_{j=1}^{m}$ such that $\E[R_1^{2p}]<\infty$ for some positive integer $p$.
In addition, conditional on an arbitrary $\sigma$-field $\cG$, $\{R_j\}_{j=1}^{m}$ are mutually independent and $\E\left[R_j|\cG\right]=0$ for all $1\leq j\leq m$. Then,
\begin{equation*}\label{eq:Rmoment2p}
\E\left[\left(\avgmj R_j\right)^{2p}\right]= \frac{c_1}{m^{p}} \E\left[R_1^{2p}\right] +\cO\left(\frac{1}{m^{p+1}}\right)=\cO(m^{-p}), \mbox{ as } m\rightarrow\infty,
\end{equation*}
where $c_1=\binom{2p}{2}\binom{2p-2}{2}\cdots\binom{2}{2}/{p!}$.
In particular, for $p=1$,
\begin{equation}\label{eq:Rmoment2piid}
\E\left[\left(\frac{1}{m}\sum_{j=1}^m R_j\right)^2\right]=\frac{\E \left[R_1^2\right]}{m}.
\end{equation}
If Assumption <ref> holds and $\E\left[\hatH^{2p}\right]<\infty$ for some positive integer $p$, then
\begin{equation*}
\E\left[\left(L_m-L\right)^{2p}\right]= \cO\left(m^{-p}\right) \mbox{ as } m\rightarrow\infty.
\end{equation*}
Theorem <ref> demonstrates the $\cL^{2p}$ convergence of $L_m$ to $L$ at rate $\cO(m^{-p})$.
This is also an important result to establish asymptotic properties, such as bias, variance, MSE, and CLT, of the GNS risk estimator $\rho_{mn}$.
§ ASYMPTOTIC ANALYSIS FOR RISK ESTIMATOR $\RHO_{MN}$
While sample recycling in the GNS procedure leads to computational savings and higher accuracy, as discussed in Section <ref>, it also introduces dependency among the estimators $L_{m,i}$, $i=1,\ldots,n$.
Despite this intricate dependency, we analyze the asymptotic properties for the GNS estimators in (<ref>) and (<ref>).
The asymptotic analysis for $\rho_{mn} = \avgni g(\Lmi)$ is different for different risk functions $g$.
For linear functions, i.e., $g(x)=ax+b$ for some constants $a$ and $b$, $\rho=\E\left[g(\E\left[H|X\right])\right] = a\E\left[H\right]+b$ can be estimated without nested simulation.
To make our study meaningful, we analyze three classes of nonlinear risk functions:
* Smooth function: $g:\R\mapsto\R$ is twice differentiable with a bounded second derivative, i.e., both $g'(x)$ and $g''(x)$ exist for all $x\in\R$ and there exists a nonnegative constant $C_g \in \R^+$ such that $|g''(x)|\leq C_g<\infty$.
Analysis for this class of risk functions is mainly based on the Taylor approximation
\begin{equation}\label{eq:Taylor}
g\left(L_m\right) = g\left(L\right) + g'\left(L\right)(L_m-L) + \frac{g''(\Lambda_m)}{2}(L_m-L)^2,
\end{equation}
where $\Lambda_m$ is a random variable that lies between $L_m$ and $L$.
* Hockey-stick function: $g(x): = \max\{x, 0\}$.
The hockey-stick function has a kink hence is not differentiable at $x=0$, but it is Lipschitz continuous because $|g(x)-g(y)|\leq |x-y|$.
Moreover, it is clear that $g(x)= x\cdot\1\{x\geq 0\}$ so we can define its derivative $g'(x) = \1\{x \geq 0\}$, which is valid everywhere except at $x=0$; this derivative suffices in our analysis.
The valid bounds $g(x) \leq |x|$ and $g'(x) \leq 1$ are also useful in our analysis.
* Indicator function: $g(x)=\1\{x \geq 0\}$ is neither continuous nor differentiable at $x=0$, which leads to a more complicated analysis than the other two cases.
When needed, we make additional assumptions and employ a smooth approximation to circumvent this difficulty.
Though different assumptions and mathematical tools are required to analyze the three classes of risk functions, we present a concise and coherent analysis that sheds light on their similarities and common properties.
We also note that placing the kink and the discontinuity at $x=0$ is only for simplicity; the analysis can be generalized to any constant threshold $x_0\in\R$ with a change of variable.
Let $L_m-L = Z_m/\sqrt{m}$ and suppose that $Z_m$ has a nontrivial limiting distribution as $m\rightarrow \infty$.
Assumption <ref> states some assumptions on the joint density function $p_m(\ell,z)$ for $(L,Z_m)$ that aid later analysis.
* The joint density $p_m(\ell,z)$ of $(L,Z_m)$ and its partial derivative $\frac{\partial}{\partial \ell} p_m(\ell,z)$ exist for every positive integer $m\geq 1$ and for all $(\ell,z)$.
* For every positive integer $m\geq 1$, there exist nonnegative functions $\bar{p}_{0,m}(\cdot)$ and $\bar{p}_{1,m}(\cdot)$ such that
\begin{equation*}
p_m(\ell,z) \leq \bar{p}_{0,m}(z) \mbox{ and } \left|\frac{\partial}{\partial \ell} p_m(\ell,z)\right| \leq \bar{p}_{1,m}(z),\quad \forall (\ell,z).
\end{equation*}
* For $i=0,1$ and $0\leq r \leq 2$
\begin{equation*}
\sup_m \int_{-\infty}^\infty |z|^r \bar{p}_{i,m}(z) dz <\infty.
\end{equation*}
Assumption <ref> is difficult to verify in general, but as argued in [Gordy and Juneja, 2010], it can be expected to be true if some of the instruments in the portfolio have sufficiently smooth payoffs.
Mathematically, Assumption <ref> imposes smoothness and boundedness assumptions on the joint densities $p_{m}(\ell,z)$, which are needed in our analysis to compensate for the lack of differentiability or continuity in the hockey-stick and indicator risk functions $g$.
Moreover, Assumption <ref> implies that the marginal density function of $L$, i.e., $\widetilde{p}(\ell)=\int p_m(\ell,z)dz$ exists.
Using Assumption <ref>, we can show the two identities in Lemma <ref> that are useful for later analysis.
Detailed proof for Lemma <ref> is provided in Appendix <ref>.
Suppose Assumptions <ref> and <ref> hold. Then,
\begin{align}
\E\left[\1\{L_m\geq 0\} - \1\{L\geq 0\}\right] &= \cO(m^{-1}),\mbox{ and}\label{eq:usefuleq1}\\
\E\left[|L_m\cdot(\1\{L_m\geq 0\} - \1\{L\geq 0\})|\right] &= \cO(m^{-1})\label{eq:usefuleq3}.
\end{align}
§.§ Asymptotic Bias
For any given risk function $g$, the bias of the GNS estimator $\rho_{mn}$ can be decomposed as
\begin{equation}\label{eq:bias}
\Bias[\rho_{mn}] = \E\left[g\left(L_m\right) - g\left(L\right)\right]= \E\left[g'\left(L\right)(L_m - L)\right] + \E\left[r_m\right],
\end{equation}
for an appropriately defined derivative $g'$, where the remainder term is
\begin{equation}\label{eq:remainder}
r_m = g\left(L_m\right) - g\left(L\right) - g'\left(L\right)(L_m - L).
\end{equation}
The first expectation on the RHS of (<ref>) is the bias contribution of the linear approximation of $g(\cdot)$.
For a well-defined derivative $g'$, as is the case for the smooth and hockey-stick risk functions, this contribution is zero because
\begin{align*}
&\E\left[g'\left(L\right)(L_m - L)\right] = \E\left[\E\left[g'(L(X))(L_m(X) - L(X))|X\right]\right] \nonumber\\
=& \E\left[g'(L(X))(\E\left[L_m(X)|X\right] - L(X))\right] \stackrel{(*)}{=}\E\left[g'(L(X))(L(X) - L(X))\right]=0, \label{eq:zerocontribution}
\end{align*}
where $(*)$ holds because $\E\left[L_m(X)|X\right]=L(X)$ by Proposition <ref>.
We then show that the bias (<ref>) converges to zero at the rate $\cO(m^{-1})$ for all three classes of risk functions.
Specifically, $|\E[r_m]|\leq \E[|r_m|] = \cO(m^{-1})$ for the smooth and hockey-stick risk functions, where the inequality holds by Jensen's inequality for the convex function $|x|$.
Equation (<ref>) in Lemma <ref> directly indicates that $\E\left[g\left(L_m\right) - g\left(L\right)\right]=\cO(m^{-1})$ for the indicator risk function $g(x)=\1\{x\geq 0\}$.
* For a smooth risk function $g$, using the Taylor approximation (<ref>) and Theorem <ref> with $p=1$, we have
\begin{equation}\label{eq:smoothbias}
\left|\E\left[r_m\right]\right| \leq \E\left[|r_m|\right] = \E\left[\frac{|g''(\Lambda_m)|}{2}(L_m-L)^2\right] \leq \frac{C_g}{2}\E\left[(L_m-L)^2\right]=\cO(m^{-1}).
\end{equation}
* For the hockey-stick risk function $g(x)=\max\{x,0\} = x\cdot\1\{x \geq 0\}$, we define $g'(x)=\1\{x \geq 0\}$ so
\begin{align*}
r_m = L_m\cdot\1\{L_m \geq 0\} - L\cdot\1\{L \geq 0\} - \1\{L \geq 0\} (L_m-L) = L_m\cdot(\1\{L_m\geq 0\} - \1\{L\geq 0\}).
\end{align*}
Then, using Equation (<ref>) in Lemma <ref>, we have
\begin{equation}\label{eq:hockeysticbias}
\left|\E[r_m]\right| \leq \E[|r_m|] = \E\left[|L_m\cdot(\1\{L_m\geq 0\} - \1\{L\geq 0\})|\right]=\cO(m^{-1}).
\end{equation}
* For the indicator risk function $g(x)=\1\{x\geq 0\}$, we consider the bias directly, i.e., $\Bias[\rho_{mn}] = \E\left[\1\{L_m\geq 0\} - \1\{L\geq 0\}\right]$, which, based on Equation (<ref>) in Lemma <ref>, converges at the rate $\cO(m^{-1})$.
Proposition <ref> summarizes the above discussions about asymptotic biases.
Suppose that Assumption <ref> and one of the following sets of assumptions hold:
* The risk function $g(\cdot)$ is twice differentiable with a bounded second derivative and $\E\left[\hatH^2\right]<\infty$, or
* The risk function $g(\cdot)$ is a hockey-stick function and Assumption <ref> holds, or
* The risk function $g(\cdot)$ is an indicator function and Assumption <ref> holds.
Then,
\begin{equation*}
\Bias[\rho_{mn}] = \cO(m^{-1}).
\end{equation*}
We can see the advantage of our GNS estimator $\rho_{mn}$ by comparing Proposition <ref> to analogous bias results for other nested estimators.
In the GNS procedure, the total number of inner samples is $m$.
The total number of inner samples for the standard nested simulation is $\Gamma=m'n$, where $n$ is the number of outer scenarios and $m'$ is the number of inner samples per outer scenario.
So we consider $m=\Gamma$ a fair comparison, i.e., the same simulation budget, between these two procedures.
Proposition <ref> shows that the bias of the GNS estimator $\rho_{mn}$ converges to zero at the rate of $\cO(m^{-1})=\cO(\Gamma^{-1})$, which is fast and depends only on the simulation budget.
In contrast, the bias of the standard nested simulation estimator using the optimal budget allocation scheme in [Gordy and Juneja, 2010] is $\cO(\Gamma^{-1/3})$.
The bias of the regression-based procedure in [Broadie et al., 2015] depends on the selection of the basis functions and is generally non-zero regardless of the simulation budget.
The bias of the kernel-based procedure in [Hong et al., 2017] depends not only on the simulation budget but also on the kernel bandwidth.
§.§ Asymptotic Variance
To analyze the variance of the GNS estimator $\rho_{mn}$, we first note that
\begin{align}
&\Var[\rho_{mn}] = \E\left[\left(\avgni g\left(\Lmi\right) - \E\left[g\left(L_m\right)\right]\right)^2\right]\nonumber\\
=& \E\left[\left(\avgni \left(g\left(\Lmi\right) - g\left(\Li\right)\right) + \avgni \left(g\left(\Li\right) -\E\left[g\left(L\right)\right]\right)+\left(\E\left[g\left(L\right)\right]-\E\left[g\left(L_m\right)\right]\right)\right)^2 \right]\nonumber\\
\stackrel{(*)}{\leq}& 3\E\left[\left(\avgni \left(g\left(\Lmi\right) - g\left(\Li\right)\right)\right)^2 + \left(\avgni g\left(\Li\right) -\E\left[g\left(L\right)\right]\right)^2+\left(\E\left[g\left(L\right)-g\left(L_m\right)\right]\right)^2 \right]\nonumber\\
\stackrel{(**)}{=}& 3\E\left[\left(\avgni \left(g\left(\Lmi\right) - g\left(\Li\right)\right)\right)^2 \right] + \frac{3}{n}\Var\left[g\left(L\right)\right] +3\left(\Bias[\rho_{mn}]\right)^2,\label{eq:varbound}
% \label{VarIndicator0}
\end{align}
where $(*)$ holds by inequality (<ref>) in Appendix <ref> and $(**)$ holds by applying Equation (<ref>) in Lemma <ref> to the second term.
We then analyze the three terms in Equation (<ref>) separately: The last term converges at the rate of $\cO(m^{-2})$ by Proposition <ref>. For the second term, we assume that $\E[(g(L))^2]<\infty$ so $\Var[g(L)] < \infty$.
As a result, the second term in Equation (<ref>) converges at the rate of $\cO(n^{-1})$.
Note that $\E[(g(L))^2]<\infty$ is a standard assumption, which ensures that the Monte Carlo estimator for $\rho$ has a finite variance.
For the hockey-stick function $g(x)=\max\{0,x\} \leq |x|$, $\E[(g(L))^2]<\infty$ is satisfied if $\E\left[L^2\right]<\infty$, which, by Lemma <ref>, holds if $\E[\hatH^2]<\infty$. For the indicator function $g(x)=\1\{x\geq 0\} \leq 1$, this assumption is implicitly satisfied because $\E[(g(L))^2]\leq 1$.
It remains to analyze the first term in Equation (<ref>).
Using the inequality (<ref>) in Appendix <ref>, we have
\begin{equation*}
\E\left[\left(\avgni \left(g\left(\Lmi\right) - g\left(\Li\right)\right)\right)^2 \right] \leq \E\left[(g\left(L_m\right)-g\left(L\right))^2\right].
\end{equation*}
* For smooth risk functions, using the Taylor approximation (<ref>), we have
\begin{align}
\E\left[(g\left(L_m\right)-g\left(L\right))^2\right]&= \E\left[\left(g'\left(L\right)(L_m-L)+\frac{g''(\Lambda_m)}{2}(L_m-L)^2\right)^2\right]\nonumber\\
&\stackrel{(*)}{\leq} 2 \E\left[(g'\left(L\right)(L_m-L))^2\right] + 2\E\left[\left(\frac{g''(\Lambda_m)}{2}(L_m-L)^2\right)^2\right]\nonumber\\
&\stackrel{(**)}{\leq} 2\left(\E\left[(g'\left(L\right))^4\right]\right)^{1/2}\left(\E\left[(L_m-L)^4\right]\right)^{1/2} + \frac{C_g^2}{2}\E\left[(L_m-L)^4\right] \nonumber\\
&\stackrel{(***)}{=} \cO(m^{-1}) + \cO(m^{-2}) = \cO(m^{-1})\label{eq:varsmooth}
\end{align}
where $(*)$, $(**)$, and $(***)$ hold by (<ref>), (<ref>), and Theorem <ref> with $p=2$, respectively, provided that $\E\left[(g'\left(L\right))^4\right]<\infty$ and $\E[\hatH^4]<\infty$.
* For the hockey-stick risk function, due to its Lipschitz continuity, i.e., $|g(x)-g(y)|\leq |x-y|$, we have
\begin{equation}\label{eq:varhockeystick}
\E\left[(g\left(L_m\right)-g\left(L\right))^2\right] \leq \E\left[(L_m-L)^2\right] = \cO(m^{-1}),
\end{equation}
where the equality holds by Theorem <ref> with $p=1$, provided that $\E[\hatH^2]<\infty$.
* For the indicator risk function, we consider the first term in Equation (<ref>) directly and show that it converges at the rate of $\cO(m^{-1})+\cO(n^{-1})$.
Assumption <ref> is needed for the analysis in this case.
Detailed analysis is provided in Appendix <ref>.
* For any $i,k\in\{1,...,n\}$ and $i\neq k$, the joint density $q_m(\ell_1,\ell_2,z_1,z_2)$ of $(\Li,L_k,Z_m(X_i),Z_m(X_k))$ and its partial derivatives $\frac{\partial}{\partial \ell_u} q_m(\ell_1,\ell_2,z_1,z_2)$ ($u=1,2$) exist for every $m$ and for all $(\ell_1,\ell_2,z_1,z_2)$.
* For every $m\geq 1$, there exist nonnegative functions $\bar{q}_{v,m}(z_1,z_2), (v=0,1)$ such that for $u=1,2$,
\begin{align*}
q_m(\ell_1,\ell_2,z_1,z_2) \leq \bar{q}_{0,m}(z_1,z_2) \mbox{ and }
\left|\frac{\partial}{\partial\ell_u} q_m(\ell_1,\ell_2,z_1,z_2)\right| \leq \bar{q}_{1,m}(z_1,z_2),\ \ \forall(\ell_1,\ell_2,z_1,z_2).
\end{align*}
* For $v=0,1$ and any $r_1,r_2 \geq 0$ with $r_1 + r_2 \leq 3$,
\begin{equation*}
\sup_m \int_\R |z_1|^{r_1}|z_2|^{r_2} \bar{q}_{v,m}(z_1,z_2) dz_1dz_2 <\infty.
\end{equation*}
Suppose that Assumption <ref> and one of the following sets of assumptions hold:
* The risk function $g(\cdot)$ is twice differentiable with a bounded second derivative, $\E[\left(g\left(L\right)\right)^2]<\infty$, $\E[\left(g'\left(L\right)\right)^4]<\infty$, and $\E[\hatH^4]<\infty$, or
* The risk function $g(\cdot)$ is a hockey-stick function, $\E[\hatH^2]<\infty$, and Assumption <ref> holds, or
* The risk function $g(\cdot)$ is an indicator function and Assumptions <ref> and <ref> hold.
Then,
\begin{equation*}
\Var[\rho_{mn}] = \cO(m^{-1}) +\cO(n^{-1}) = \cO(\max\{m^{-1},n^{-1}\}).
\end{equation*}
Proposition <ref> implies that the number of outer scenarios should grow in the same order as the number of inner samples for the variance to converge quickly; this is also the condition for the MSE to converge quickly.
Matching the total number of inner samples in the GNS procedure and the standard nested simulation, i.e., $m=\Gamma$, Proposition <ref> states that the former's variance converges at $\cO(m^{-1})=\cO(\Gamma^{-1})$ while the latter's variance converges at $\cO(\Gamma^{-2/3})$ <cit.>.
§.§ Asymptotic Mean Square Error
Combining Propositions <ref> and <ref>, we immediately establish the asymptotic MSE of $\rho_{mn}$, as summarized in Theorem <ref>.
Suppose the conditions in Proposition <ref> hold. Then,
\begin{equation*}
\MSE(\rho_{mn}) = \cO(m^{-1}) +\cO(n^{-1}) = \cO(\max\{m^{-1},n^{-1}\}).
\end{equation*}
Theorem <ref> shows that $n$ and $m$ should grow at the same rate for the MSE of the GNS estimator to converge quickly.
Then, matching the total number of inner samples in the GNS procedure and the standard nested simulation, i.e., $m=\Gamma$, the GNS estimator's MSE converges at $\cO(m^{-1})=\cO(\Gamma^{-1})$ but the MSE of the nested simulation with optimal simulation budget allocation in [Gordy and Juneja, 2010] converges at $\cO(\Gamma^{-2/3})$.
Clearly the GNS estimator's MSE converges faster.
Also, the GNS procedure is arguably easier to implement compared to the regression-based approach [Broadie et al., 2015] and the kernel-based approach [Hong et al., 2017] because the GNS procedure does not require basis functions, kernel function, or kernel bandwidth.
§.§ Central Limit Theorem and Variance Estimators
In this section we establish a Central Limit Theorem (CLT) for the GNS risk estimator $\rho_{mn}$ and establish a valid variance estimator for $\rho_{mn}$.
Constructing confidence intervals is a common use of a CLT, but the variance of nested estimators is usually difficult to estimate, e.g., by running macro replications to obtain multiple estimates of $\rho$ and then computing the sample variance.
We propose a variance estimator for $\rho_{mn}$ that requires only one run of the GNS procedure and that converges to the asymptotic variance.
Simply put, the CLT result and variance estimator in this section lead to asymptotically valid confidence intervals of the GNS estimator $\rho_{mn}$.
The analyses for the smooth and hockey-stick risk functions are similar, but differ from the analysis for the indicator risk function.
So, for clarity, we provide separate presentations in Sections <ref> and <ref>.
§.§.§ Analysis for Smooth and Hockey-Stick Functions
The CLT for the GNS estimator $\rho_{mn}$ with smooth and hockey-stick risk functions is based on two-sample U-statistics <cit.>, whose definition and asymptotic normality are stated below.
Let $\{X_i,i=1,\ldots,n\}$ and $\{Y_j,j=1,\ldots,m\}$ be i.i.d. samples of two independent random variables $X$ and $Y$, respectively.
For a given mapping $U(x,y)$, the average $\cU_{mn} = \frac{1}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m}U(X_i,Y_j)$ is called a two-sample U-statistic.
Let $\cU_{mn}$ be a two-sample U-statistic in Definition <ref>.
If $\E\left[(U(X,Y))^2\right]<\infty$, then $\cU_{mn}$ is asymptotically normally distributed, as $\min\{m,n\}\rightarrow\infty$, with mean $\mu =\E\left[U(X,Y)\right]$ and variance $\sigma_{mn}^2 = \frac{\sigma_1^2}{n} + \frac{\sigma_2^2}{m}$, where $\sigma_1^2 =\Var\left[\E\left[U(X,Y)|X\right]\right]$ and $\sigma_2^2 = \Var\left[\E\left[U(X,Y)|Y\right]\right]$. That is,
\begin{equation*}
\frac{\cU_{mn} - \mu}{\sigma_{mn}} \condist N(0,1), \mbox{ as } \min\{m,n\}\rightarrow \infty.
\end{equation*}
In the following, we will show that the GNS estimator $\rho_{mn}$ can be decomposed into two terms: One term is a two-sample U-statistic and the other term vanishes so quickly that it does not affect the asymptotic distribution of $\rho_{mn}$.
Recall that $\Lmi = \avgmj \hatHij$, so $\rho_{mn}$ can be decomposed as
\begin{equation}\label{eq:decomposerho}
\rho_{mn} =\avgni g\left(\Lmi\right) = \cU_{mn} + r_{mn},
\end{equation}
where
\begin{align}
\cU_{mn}&:= \frac{1}{mn}\sum_{i=1}^n\sum_{j=1}^m \left[g\left(\Li\right) + g'\left(\Li\right)\left(\hatHij - \Li\right)\right], \mbox{ and }\label{eq:ustat}\\
r_{mn} &:= \avgni \left[g\left(\Lmi\right) - g\left(\Li\right) - g'\left(\Li\right)\left(\Lmi - \Li\right)\right] .\label{eq:remainderrmn}
\end{align}
By Assumption <ref> <ref>, the outer scenarios and the inner samples are i.i.d. samples of two independent random variables.
Then $\cU_{mn}$ in (<ref>) is a two-sample U-statistic by Definition <ref> with the mapping
\begin{equation}\label{eq:Umapping}
U\left(X,Y\right) = g\left(L(X)\right) + g'\left(L\left(X\right)\right)\left(\hatH\left(X,Y\right) - L\left(X\right)\right).
\end{equation}
Next, we validate the conditions of Lemma <ref> and restate its conclusion for the mapping (<ref>).
Firstly, note that
\begin{align}
&\E\left[(U(X,Y))^2\right] \stackrel{(*)}{\leq} 3 \left(\E[(g(L))^2] + \E[(g'(L)\hatH)^2] + \E[(g'(L)L)^2]\right)\label{eq:decomposeUmap1}\\
& \stackrel{(**)}{\leq} 3 \left(\E[(g(L))^2] + (\E[(g'(L))^4])^{1/2}(\E[\hatH^4])^{1/2} + (\E[(g'(L))^4])^{1/2}(\E[L^4])^{1/2}\right) \label{eq:decomposeUmap2},
\end{align}
where $(*)$ and $(**)$ hold by inequalities (<ref>) and (<ref>) in Appendix <ref>, respectively.
Then, the moment condition in Lemma <ref>, i.e., $\E\left[(U(X,Y))^2\right]<\infty$ can be satisfied by the following:
* For the smooth risk functions, in light of (<ref>), sufficient conditions are $\E[(g(L))^2]<\infty$, $\E[(g'(L))^4]<\infty$, $\E[\hatH^4]<\infty$, and $\E[L^4]<\infty$.
Also, by Lemma <ref>, $\E[\hatH^4]<\infty$ implies $\E[L^4]<\infty$ so only the first three conditions need to be explicitly stated.
* For the hockey-stick function, in light of (<ref>), sufficient conditions are $\E[(g(L))^2]<\infty$, $\E[(g'(L)\hatH)^2]<\infty$, and $\E[(g'(L)L)^2]<\infty$.
Because $g(x)=\max\{x,0\}\leq |x|$ we have $\E[(g(L))^2]\leq \E[L^2]$.
Also, because $g'(x) = \1\{x\geq 0\}\leq 1$, we have $\E[(g'(L)\hatH)^2]\leq \E[\hatH^2]$ and $\E[(g'(L)L)^2]\leq \E[L^2]$.
So the sufficient conditions are simplified to $\E[\hatH^2]<\infty$ and $\E[L^2]<\infty$.
Lastly, by Lemma <ref>, these conditions are further simplified to $\E[\hatH^2]<\infty$.
Note that these moment conditions also ensure the existence of asymptotic variances $\sigma_{1}^2$ and $\sigma_2^2$.
Next, consider the mean $\mu$ and the two variances $\sigma_{1}^2$ and $\sigma_{2}^2$ in Lemma <ref> for the mapping (<ref>).
Note that $ \E[U(X,Y)|X] = g\left(L(X)\right) + g'\left(L\left(X\right)\right)\left(\E\left[\hatH\left(X,Y\right)|X \right] - L\left(X\right)\right) \stackrel{(*)}{=} g(L(X)),$
where $(*)$ holds because $\E\left[\hatH\left(X,Y\right)|X \right]=L(X)$ by (<ref>).
Also, by the independence of $X$ and $Y$ we have $\E\left[U(X,Y)|Y\right] = \E[g(L)] + \E\left[g'(L)\hatH|Y\right] - \E\left[g'(L)L\right]$, where the first and the last expectations are constants.
Therefore, we have
\begin{align}
\mu &=\E\left[U(X,Y)\right] = \E\left[\E\left[U(X,Y)|X\right]\right] = \E\left[g\left(L\right)\right] = \rho, \mbox{ and } \nonumber\\
\sigma_{1}^2 &= \Var\left[\E\left[U(X,Y)|X\right]\right] = \Var\left[g\left(L\right)\right] = \E[g(L)^2] - (\E[g(L)])^2, \mbox{ and } \label{eq:asymvar1}\\
\sigma_{2}^2 &= \Var\left[\E\left[g'\left(L\right)\hatH|Y\right]\right] = \E\left[\left(\E\left[g'\left(L\right)\hatH|Y\right]\right)^2\right] - \left(\E\left[g'\left(L\right)\hatH\right]\right)^2.\label{eq:asymvar2}
\end{align}
Then Lemma <ref> implies that $\sigma_{mn}^{-1}(\cU_{mn}-\rho)\condist \cN(0,1)$ as $\min\{m,n\}\rightarrow \infty$.
But, to make a conclusion about the asymptotic distribution of the GNS estimator $\rho_{mn}$, we also need to consider the remainder term $r_{mn}$ in (<ref>).
Note that $r_{mn}$ in (<ref>) is an average of $n$ identically distributed samples of $r_{m}$ as defined in (<ref>).
By (<ref>) and (<ref>) we have $\E[|r_{mn}|]\leq \E[|r_m|]=\cO(m^{-1})$ and so
\begin{equation*}
\E\left[\left|\frac{r_{mn}}{\sigma_{mn}}\right|\right] = \left( \frac{\sigma_1^2}{n} + \frac{\sigma_2^2}{m}\right)^{-1/2} \cO(m^{-1})=\cO\left(\left[m\left(\sigma_2^2+\sigma_1^2 \cdot\frac{m}{n}\right)\right]^{-\frac{1}{2}}\right)\rightarrow 0, \mbox{ as } \min\{m,n\}\rightarrow\infty.
\end{equation*}
This means that $\frac{r_{mn}}{\sigma_{mn}}\conlone 0$ and hence $\frac{r_{mn}}{\sigma_{mn}}\condist 0$ as $\min\{m,n\}\rightarrow\infty$.
Finally, applying Slutsky's theorem to (<ref>), we arrive at the desired CLT result for the GNS estimator $\rho_{mn}$, as stated in Theorem <ref>.
Suppose that Assumption <ref> and one of the following sets of assumptions hold:
* The risk function $g(\cdot)$ is twice differentiable with a bounded second derivative, $\E\left[(g\left(L\right))^2\right] < \infty$, $\E\left[(g'\left(L\right))^4\right] < \infty$, and $\E\left[\hatH^4\right]<\infty$, or
* The risk function $g(\cdot)$ is a hockey-stick function, Assumption <ref> holds, and $\E\left[\hatH^2\right]<\infty$.
Then,
\begin{equation*}\label{eq:CLTsmooth}
\frac{\rho_{mn}-\rho}{\sigma_{mn}} \condist \cN(0,1), \mbox{ as } \min\{m,n\}\rightarrow\infty,
\end{equation*}
where $\sigma_{mn}^2 = \frac{\sigma_1^2}{n} + \frac{\sigma_2^2}{m}$, $\sigma_{1}^2 = \Var\left[g\left(L\right)\right]$, and $\sigma_2^2 = \Var\left[\E\left[g'\left(L\right)\hatH|Y\right]\right]$.
Theorem <ref> demonstrates the asymptotic normality of $\rho_{mn}$ and the asymptotic variance decomposition due to the stochasticities of $X$ and $Y$ separately.
The asymptotic variance has two parts: The first part, $\sigma_{1}^2 = \Var\left[g\left(L\right)\right]$, is due to the stochasticity of the outer scenario $X$, and $\frac{\sigma_{1}^2}{n}$ would have been the asymptotic variance in a classical CLT for the sample average of $n$ i.i.d. samples of $g(L)$.
The second part, $\sigma_2^2 = \Var\left[\E\left[g'\left(L\right)\hatH|Y\right]\right]$, is due to the stochasticity of the inner sample $Y$ that affects all outer scenarios due to sample recycling.
Moreover, the derivative $g'$ in the inner conditional expectation indicates that $\sigma_{2}^2$ is also affected by the nonlinearity of the risk function $g$.
A CLT result like Theorem <ref> is useful for constructing confidence intervals, typically by replacing unknown population mean and variance by the corresponding sample estimates.
However, the variance of nested simulation estimators is typically difficult or costly to estimate.
One way is by running macro replications, i.e., independent repetitions of the entire simulation procedure, and then estimating the sample variance of the resulting i.i.d. nested simulation estimates.
However, the standard nested simulation procedure is costly to run even once, so running macro replications is prohibitively burdensome.
In contrast, we propose a variance estimator for our GNS estimator $\rho_{mn}$ that only requires running the GNS procedure once.
Specifically, $\sigma_{mn}^2$ is estimated by $\widehat{\sigma}_{mn}^2 = \frac{\widehat{\sigma}_{1,mn}^2}{n} + \frac{\widehat{\sigma}_{2,mn}^2}{m}$, where the estimators for $\sigma_{1}^2$ and $\sigma_{2}^2$ are
\begin{align}
\widehat{\sigma}_{1,mn}^2 &= \avgni \left(g\left(\Lmi\right)\right)^2 - \left(\avgni g\left(\Lmi\right)\right)^2, \mbox{ and }\label{eq:sig1hat}\\
\widehat{\sigma}_{2,mn}^2 &= \avgmj \left(\avgni g'\left(\Lmi\right)\hatHij\right)^2 - \left(\avgni g'\left(\Lmi\right)\Lmi\right)^2, \mbox{ respectively.}\label{eq:sig2hat}
\end{align}
Theorem <ref> shows that the proposed variance estimators are valid as they converge to the corresponding population variances.
The proof for Theorem <ref> is provided in Appendix <ref>.
Suppose the conditions in Theorem <ref> hold. Then,
\begin{equation*}\label{eq:CIsmooth}
\widehat{\sigma}_{1,mn}^2 \conprob \sigma_1^2\quad\mbox{, }\quad \widehat{\sigma}_{2,mn}^2 \conprob \sigma_2^2,\quad\mbox{ and }\quad \widehat{\sigma}_{mn}^2/\sigma_{mn}^2 \conprob 1, \quad\mbox{ as } \min\{m,n\}\rightarrow\infty.
\end{equation*}
A direct result of Theorems <ref> and <ref> is a valid confidence interval for $\rho$ with one run of the GNS procedure, as summarized in Corollary <ref>.
Suppose the conditions in Theorem <ref> hold. Then, the following is an asymptotically valid confidence interval for the risk measure $\rho$ with a confidence level of $1-\alpha$:
\begin{equation*}\label{eq:CI1}
(\rho_{mn}-z_{1-\alpha/2}\cdot \widehat{\sigma}_{mn}, \ \rho_{mn}+z_{1-\alpha/2}\cdot\widehat{\sigma}_{mn}),
\end{equation*}
where $\widehat{\sigma}_{mn}^2 = \frac{\widehat{\sigma}_{1,mn}^2}{n} + \frac{\widehat{\sigma}_{2,mn}^2}{m}$ and $z_{1-\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution.
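A single-run implementation of the variance estimators and of this confidence interval might look as follows. This is a sketch for the smooth and hockey-stick cases, where `hat_H` denotes the $n\times m$ matrix of likelihood-ratio-weighted outputs $\hatH(X_i,Y_j)$, and `g` and `gprime` are the hypothetical user-supplied risk function and its derivative:

```python
import numpy as np
from scipy.stats import norm

def gns_ci(hat_H, g, gprime, alpha=0.10):
    """One-run confidence interval for rho (smooth / hockey-stick risk functions)."""
    n, m = hat_H.shape
    L_m = hat_H.mean(axis=1)                      # L_m(X_i), i = 1,...,n
    gL = g(L_m)
    rho_mn = gL.mean()                            # GNS risk estimator
    sig1 = np.mean(gL ** 2) - rho_mn ** 2         # estimator of Var[g(L)]
    # per-j averages over scenarios: avg_i g'(L_m(X_i)) * hatH(X_i, Y_j)
    col = (gprime(L_m)[:, None] * hat_H).mean(axis=0)
    sig2 = np.mean(col ** 2) - np.mean(gprime(L_m) * L_m) ** 2
    se = np.sqrt(sig1 / n + sig2 / m)             # estimate of sigma_mn
    z = norm.ppf(1 - alpha / 2)
    return rho_mn, (rho_mn - z * se, rho_mn + z * se)
```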
§.§.§ Analysis for the Indicator Function
The discontinuity of the indicator risk function $g(x)=\1\{x\geq 0\}$ is a major difficulty in establishing CLT for the GNS estimator $\rho_{mn}$ in this case.
To circumvent this difficulty, we consider a sequence of smooth approximations of $g(x)$:
Let $\phi(u) = \frac{1}{4\pi} (1-\cos(u))\cdot \1\{|u| \leq 2\pi\}$, and for any $\epsilon > 0$ we define a function
\begin{equation}\label{eq:smoothapprox}
\geps(x) = \int_{-\infty}^{x/\epsilon} \phi(u) du=\begin{cases}
1,& x\geq 2\pi\epsilon,\\
\dfrac{1}{4\pi}\left[\dfrac{x}{\epsilon}-\sin\left(\dfrac{x}{\epsilon}\right)\right]+\dfrac{1}{2}, &|x|<2\pi\epsilon,\\
0,& x\leq -2\pi\epsilon.
\end{cases}
\end{equation}
One can show that $\geps(x)$ is twice differentiable for any $\epsilon > 0$.
Also, $\geps(x)$ converges pointwise to $\1\{x\geq 0\}$ as $\epsilon\to 0$ everywhere except at $x=0$.
To establish the desired CLT in this case, we use a sequence of $\epsilon_m$ that depends on the number of inner samples $m$. We carefully construct such a sequence in the proof of the CLT, although this sequence is not part of the theorem statement.
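The smooth approximation above is straightforward to implement; the short sketch below also illustrates the pointwise convergence to the indicator as $\epsilon\to 0$:

```python
import numpy as np

def g_eps(x, eps):
    """Smooth approximation of the indicator 1{x >= 0}."""
    x = np.asarray(x, dtype=float)
    u = x / eps
    mid = (u - np.sin(u)) / (4 * np.pi) + 0.5            # branch for |x| < 2*pi*eps
    return np.where(x >= 2 * np.pi * eps, 1.0,
                    np.where(x <= -2 * np.pi * eps, 0.0, mid))

# pointwise convergence at x = 0.05 (away from the discontinuity at 0):
for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, g_eps(0.05, eps))                         # tends to 1.0
```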
As $\gepsm(x)$ is twice differentiable for any $\epsilon_m>0$, we use the Taylor's theorem for $\gepsm$ to decompose the GNS estimator $\rho_{mn} = \avgni g(\Lmi)$ as follows:
\begin{align}\label{eq:decomposerho_indicator}
\rho_{mn} &=\avgni g\left(\Lmi\right) = \cU_{\epsilon_m,mn} + r_{\epsilon_m,mn}^a + r_{\epsilon_m,mn}^b + r_{\epsilon_m,mn}^c + r_{\epsilon_m,mn}^d,
\end{align}
where
\begin{align*}
\cU_{\epsilon_m,mn} &:= \avgni\left[ g(\Li)+\gepsm'(\Li)(\Lmi-\Li)\right],\\
r_{\epsilon_m,mn}^a &:= \avgni \gepsm''(\Li)(\Lmi-\Li)^2,\\
r_{\epsilon_m,mn}^b &:= \avgni \left[\gepsm(\Li)-g(\Li)\right],\\
r_{\epsilon_m,mn}^c &:= \avgni \left[g(\Lmi)-\gepsm(\Lmi)\right],
\end{align*}
and $r_{\epsilon_m,mn}^d$ is the higher-order remainder term in the Taylor's expansion of $\gepsm$.
The decomposition (<ref>) is more complicated than (<ref>) due to using $\gepsm$ and its Taylor expansion.
Nonetheless, the general strategy to analyze $\rho_{mn}$ in this case is similar to that in Section <ref>: First show that $\cU_{\epsilon_m,mn}$ is asymptotically normally distributed, then show that the remainder terms vanish quickly.
We provide some insights in this section and defer the detailed proofs to Appendix <ref>.
We assume that the joint density $\psi(x,\ell)$ of $(X,L(X))$ exists and define $\psi_0(x)=\psi(x,0)$ for notational convenience.
Assumption <ref> is useful for establishing asymptotic results for $\rho_{mn}$ with the indicator risk function.
* The partial derivative $\frac{\partial}{\partial \ell} \psi(x,\ell)$ exists for all $x$ and $\ell$ and there exists a nonnegative function $\psi_1(x)$ such that $|\frac{\partial}{\partial \ell} \psi(x,\ell)|\leq \psi_1(x)$ in any open neighborhood of $(x,0)$ for all $x$.
* For $i=0,1$, the following quantities are finite,
\begin{equation*}
\int \psi_i(x)\d x<\infty,\,\, \E\left[\int \left(\hatH(x,Y)\right)^2 \psi_i(x)\d x\right]<\infty,\, \mbox{ and } \E\left[\left(\int \left|\hatH(x,Y)\right|\psi_i(x)\d x\right)^2\right]<\infty.
\end{equation*}
Assumption <ref> <ref> is similar to Assumption <ref>, which is useful for applying Taylor's theorem to the joint density function $\psi(x,\ell)$ of $(X,L(X))$.
Assumption <ref> <ref> may seem intricate, but it is a moment condition in disguise: Similar to the moment conditions in Theorem <ref> for the smooth and hockey-stick risk functions, Assumption <ref> <ref> guarantees the existence of the asymptotic variance (<ref>) for the indicator risk function.
We note that the first two conditions in Assumption <ref> <ref> are sufficient for the third one, but we state the latter explicitly nonetheless for ease of reference.
Define the mapping $U_{\epsilon_m}(X,Y)=g(L(X))+\gepsm'(L(X))(\hatH(X,Y)-L(X))$. Then we can write $\cU_{\epsilon_m,mn}= \frac{1}{mn}\sum_{i=1}^{n}\sum_{j=1}^{m}U_{\epsilon_m}(X_i,Y_j)$.
Despite the similarity, $\cU_{\epsilon_m,mn}$ is not a two-sample U-statistic as in Definition <ref> because the mapping $U_{\epsilon_m}(X,Y)$ depends on the number of inner samples $m$, so Lemma <ref> does not apply.
Nonetheless, we show in Appendix <ref> that $\cU_{\epsilon_m,mn}$ has similar asymptotic properties as $\cU_{mn}$ in Lemma <ref>, i.e., $\frac{\cU_{\epsilon_m,mn}-\rho}{\widetilde{\sigma}_{mn}} \condist \cN(0,1)$ where $\widetilde{\sigma}_{mn}^2 = \frac{\widetilde{\sigma}_{1}^2}{n} + \frac{\widetilde{\sigma}_{2}^{2}}{m}$,
\begin{align}
\widetilde{\sigma}_{1}^{2} &= \Var[g(L)] = \E[\1\{L\geq 0 \}] - (\E[\1\{L\geq 0\}])^2,\mbox{ and }\label{eq:IndVar1}\\
\widetilde{\sigma}_{2}^{2} &= \E\left[\left(\int\hatH(x,Y)\psi(x,0) \d x\right)^2\right].\label{eq:IndVar2}
\end{align}
We also show in Appendix <ref> that the remainder terms in (<ref>) vanish quickly so that $\rho_{mn}$ has the same asymptotic distribution as $\cU_{\epsilon_m,mn}$.
Then we can establish the CLT for $\rho_{mn}$ with the indicator risk function, as stated in Theorem <ref>.
Detailed proof of Theorem <ref> is provided in Appendix <ref>.
The proof differs in some subtle ways from that in Section <ref>:
$\rho_{mn}$ is decomposed differently and $\geps(\cdot)$ is used to circumvent discontinuity of the indicator function.
One estimator in this new decomposition can be shown to have asymptotic properties similar to those of a U-statistic, even though it is not one by Definition <ref>.
The other estimators, when scaled properly, are shown to vanish quickly.
Consider the indicator risk function $g(x)=\1\{x\geq 0\}$. Suppose that Assumptions <ref>, <ref>, <ref> and <ref> hold. Then,
\begin{equation*}
\frac{\rho_{mn}-\rho}{\widetilde{\sigma}_{mn}} \stackrel{d}{\rightarrow} \cN(0,1), \mbox{ as } \min\{m,n\}\rightarrow\infty,
\end{equation*}
where $\widetilde{\sigma}_{mn}^2 = \frac{\widetilde{\sigma}_1^2}{n} + \frac{\widetilde{\sigma}_2^2}{m}$ and $\widetilde{\sigma}_{1}^2$ and $\widetilde{\sigma}_2^2$ are defined as (<ref>) and (<ref>), respectively.
Next, we propose variance estimators that require only one run of the GNS procedure.
Specifically, $\widetilde{\sigma}_{mn}^2$ is estimated by $\widehat{\widetilde{\sigma}}_{mn}^2 = \frac{\widehat{\widetilde{\sigma}}_{1,mn}^2}{n} + \frac{\widehat{\widetilde{\sigma}}_{2,mn}^2}{m}$, where the estimators for $\widetilde{\sigma}_{1}^2$ and $\widetilde{\sigma}_2^2$ are
\begin{align}
\widehat{\widetilde{\sigma}}_{1,mn}^2 &= \avgni \1\{\Lmi\geq 0\}- \left(\avgni \1\{\Lmi\geq 0\}\right)^2, \mbox{ and }\label{eq:sig1hatIndicator}\\
\widehat{\widetilde{\sigma}}_{2,mn}^2 &= \avgmj \left(\avgni g'_\epsilon\left(\Lmi\right)\hatHij\right)^2, \mbox{ respectively.} \label{eq:sig2hatIndicator}
\end{align}
These variance estimators are valid as they converge to the corresponding asymptotic population variances, as stated in Theorem <ref>; the proof is provided in Appendix <ref>.
Suppose the conditions in Theorem <ref> hold and, in addition, $\E[\hatH^4]<\infty$ and the sequence $\epsilon$ satisfies $\epsilon\rightarrow 0$, $m\epsilon^5\rightarrow \infty$, and $n\epsilon^2\rightarrow \infty$ as $\min\{m,n\}\rightarrow\infty$. Then,
\begin{equation*}\label{eq:CIsmoothindicator}
\widehat{\widetilde{\sigma}}_{1,mn}^2 \conprob \widetilde{\sigma}_1^2\quad\mbox{, }\quad \widehat{\widetilde{\sigma}}_{2,mn}^2 \conprob \widetilde{\sigma}_2^2,\quad\mbox{ and }\quad \widehat{\widetilde{\sigma}}_{mn}^2/\widetilde{\sigma}_{mn}^2 \conprob 1, \quad\mbox{ as } \min\{m,n\}\rightarrow\infty.
\end{equation*}
Note that, unlike Theorem <ref>, the sequence $\epsilon$ appears in the statement of Theorem <ref>.
This is because the variance $\widetilde{\sigma}_{2}^{2}$ in (<ref>) involves $\psi(x,0)$, the unknown density function of $(X, L(X))$. Its estimate in (<ref>) thus requires the smooth approximation function $g_\epsilon$, with $\epsilon$ satisfying the regularity conditions specified in Theorem <ref> to ensure its convergence.
A direct result of Theorems <ref> and <ref> is an asymptotically valid confidence interval for $\rho$ with one run of the GNS procedure, as summarized in Corollary <ref>.
Suppose the conditions in Theorem <ref> hold. Then, the following is an asymptotically valid confidence interval for the risk measure $\rho$ with a confidence level of $1-\alpha$:
\begin{equation*}\label{eq:CIindicator}
(\rho_{mn}-z_{1-\alpha/2}\widehat{\widetilde{\sigma}}_{mn},\ \rho_{mn}+z_{1-\alpha/2}\widehat{\widetilde{\sigma}}_{mn}),
\end{equation*}
where $\widehat{\widetilde{\sigma}}_{mn}^2 = \frac{\widehat{\widetilde{\sigma}}_{1,mn}^2}{n} + \frac{\widehat{\widetilde{\sigma}}_{2,mn}^2}{m}$ and $z_{1-\alpha/2}$ is the $1-\alpha/2$ quantile of the standard normal distribution.
In summary, for all three classes of risk functions, we establish CLTs for our GNS estimator $\rho_{mn}$, propose valid variance estimators that require a single run of the GNS procedure, and construct asymptotically valid confidence intervals.
§ NUMERICAL EXPERIMENTS
In this section, we consider two risk management examples to examine the performance of the proposed GNS procedure compared to the standard nested simulation procedure and a state-of-the-art regression-based procedure.
The first example shows that the GNS estimator's accuracy increases with the simulation budget and the convergence rate matches the asymptotic analysis in Section <ref>.
The second example is a larger example with 240 options, which demonstrates the applicability and performance of the GNS procedure in practical problems.
In the examples, we set $m=n$ as this setting leads to the fastest convergence of MSE according to the asymptotic analysis in Section <ref>.
In the examples, we consider option portfolios written on one or multiple, e.g., $d$, underlying assets, whose prices follow the Black-Scholes model.
For simplicity, we assume the same expected return $\mu$ for all underlying assets and a constant risk-free rate $r$.
That is, the price dynamics of the underlying assets $\bS_t=(S_{t}^1,...,S_{t}^d)^\top\in\R^d$ follows the stochastic differential equation
\begin{align*}
\d S_{t}^i=\mu' S_{t}^i \d t+\sum_{j=1}^{d}\sigma_{ij}S_{t}^i \d B_{t}^j,\quad i=1,...,d,
\end{align*}
where $\bm{B}_t=(B_{t}^1,...,B_{t}^d)$ is a $d$-dimensional standard Brownian motion and, without loss of generality, $\Sigma = [\sigma_{kk'}]$ is a $d\times d$ lower-triangular volatility matrix that specifies the volatilities and correlations of the underlying assets.
Then the asset prices at any time $t>0$ are
\begin{align}\label{eq:assetmodel}
S_{t}^i=S_{0}^i \exp\left\{\left(\mu'- \frac{1}{2}\sum_{j=1}^{i} \sigma_{ij}^2\right)t + \sum_{j=1}^{i} \sigma_{ij} B_{t}^j\right\},\quad i=1,...,d.
\end{align}
We note that the drift $\mu'$ equals the expected return $\mu$ under the real-world probability measure and equals the risk-free rate $r$ under the risk-neutral measure.
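A sketch of simulating paths from (<ref>) on an equally spaced time grid is given below; the function name and its interface are illustrative, with the lower-triangular volatility matrix `Sigma` and the drift `mu_prime` ($\mu$ for real-world outer scenarios, $r$ for risk-neutral inner paths) as inputs:

```python
import numpy as np

def simulate_gbm_paths(S0, mu_prime, Sigma, T, N, n_paths, rng):
    """Simulate n_paths paths of d correlated geometric Brownian motions.

    S0: (d,) initial prices; Sigma: (d, d) lower-triangular volatility matrix;
    mu_prime: drift (real-world mu or risk-free r, depending on the measure).
    """
    d = len(S0)
    dt = T / N
    drift = (mu_prime - 0.5 * np.sum(Sigma ** 2, axis=1)) * dt  # mu' - (1/2) sum_j sigma_ij^2
    S = np.empty((n_paths, N + 1, d))
    S[:, 0, :] = S0
    for k in range(N):
        dB = np.sqrt(dt) * rng.standard_normal((n_paths, d))    # Brownian increments
        S[:, k + 1, :] = S[:, k, :] * np.exp(drift + dB @ Sigma.T)
    return S
```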
We define an option portfolio's maturity as the longest maturity among all options in the portfolio, which is denoted by $T$.
In our simulation experiments, the current time is $t=0$ and asset values are simulated at discrete times $0=t_0<t_1<\cdots<t_N=T$.
We are interested in measuring the portfolio risk at a future time $t_{k^*} = \tau \in(0,T)$ for some $k^*\in\{1,\ldots,N-1\}$.
This is a nested estimation problem:
In a standard nested simulation procedure, one first simulates outer scenarios $X=\{\bS_{t_k}, k=1,...,k^*\}$ under the real-world measure then, given $X$, simulates inner sample paths $Y=\{\bS_{t_k}, k=k^*+1,...,N\}$ under the risk-neutral measure.
Denote the portfolio's current value and the payoff (discounted to time $0$) by $V_0$ and $V_T(X,Y)$, respectively.
The portfolio's loss at time $\tau$ given $X$ is
$$L(X)=\E\left[\left. V_0-V_T(X,Y)\right|X\right],$$
which is a random variable at time $0$.
We want to measure the portfolio risk $\rho = \E\left[g(L(X))\right]$, where three risk functions $g$ are considered: a quadratic function $g(x)=(x-x_0)^2$, a hockey-stick function $g(x)=(x-x_0)^+$, and an indicator function $g(x)=\1\{x > x_0\}$, all with a pre-specified threshold $x_0$.
As the Black-Scholes asset model is Markovian, the likelihood ratio calculation is simplified.
Specifically, the outer scenarios $X=\{\bS_{t_k}, k=1,...,k^*\}$ are simulated using the Black-Scholes model under the real-world measure.
Independently of the outer scenarios, we simulate $\bS_{t_{k^*+1}}\sim\ftilde_{k^*+1}(s)$, where $\ftilde_{k^*+1}$ is the marginal log-normal distribution of $\bS_{t_{k^*+1}}$ according to (<ref>) ($k^*$ steps under the real-world measure and 1 step under the risk-neutral measure).
Conditional on $\bS_{t_{k^*+1}}$, we simulate later values $\bS_{t_{k^*+2}},\ldots,\bS_{t_{N}}$ under the risk-neutral measure.
Then the likelihood ratio can be calculated very efficiently, e.g.,
\begin{align*}
\frac{f(Y|X)}{\ftilde(Y)} &= \frac{f(\bS_{t_{k^*+1}},\ldots,\bS_{t_N}|\bS_{t_{1}},\ldots,\bS_{t_{k^*}})}{\ftilde(\bS_{t_{k^*+1}},\ldots,\bS_{t_N})} =\frac{f(\bS_{t_{k^*+1}},\ldots,\bS_{t_N}|\bS_{t_{k^*}})}{\ftilde(\bS_{t_{k^*+1}},\ldots,\bS_{t_N})} \\
&=\frac{f(\bS_{t_{k^*+1}}|\bS_{t_{k^*}})f(\bS_{t_{k^*+2}}|\bS_{t_{k^*+1}})\ldots f(\bS_{t_N}|\bS_{t_{N-1}})}{\ftilde_{k^*+1}(\bS_{t_{k^*+1}})f(\bS_{t_{k^*+2}}|\bS_{t_{k^*+1}})\ldots f(\bS_{t_N}|\bS_{t_{N-1}})} = \frac{f(\bS_{t_{k^*+1}}|\bS_{t_{k^*}})}{\ftilde_{k^*+1}(\bS_{t_{k^*+1}})}.
\end{align*}
Also, calculating the likelihood ratio as a whole is faster than computing the two densities separately and then taking the ratio.
§.§ 10 Barrier Options
In this example, we consider 10 barrier options written on one underlying asset, i.e., $d=1$.
The asset model parameters are: $S_0^1=100$, $T=1$, $\tau=3/50$, $\mu=8\%$, $r=5\%$ and volatility $\sigma=20\%$.
The option portfolio includes 10 barrier options with the same strike $K=90$ but different barriers:
* 5 long up-and-out call options with barriers $U=118, 119, 120, 121, 122$, and
* 5 long down-and-out call options with barriers $D=78, 79, 80, 81, 82$.
In the implementation, when simulating the continuously monitored maximum and minimum for the barrier options, we use $N=200$ time steps and apply a Brownian bridge approximation between any two adjacent time points; see <cit.> for details of Brownian bridge approximations.
Even though there are 10 options in this example, they are all written on the same underlying asset, so we only need to calculate the likelihood ratio once to reuse the different simulation outputs.
This is an appealing feature of the GNS procedure: The likelihood ratio calculation depends only on the dimension of the underlying assets, not the number of instruments in a portfolio.
To measure the performance of our GNS procedure, we accurately estimate the true value of $\rho$ as a benchmark:
We generate a large number ($10^9$) of i.i.d. scenarios $X$ and then calculate the corresponding $L(X)$ and $g(L(X))$.
For barrier options, the loss $L(X)$ can be calculated analytically under the Black-Scholes model.
The 90th percentile of these losses $L(X)$ is used as the threshold $x_0$ in the three different risk functions.
The sample mean of $g(L(X))$ is then an accurate estimate of $\rho$, which is then used to assess the accuracy of the GNS estimator $\rho_{mn}$.
All results reported are estimated based on 1,000 independent macro replications (using the same benchmark).
Relative error measures for the GNS estimators.
Figure <ref> depicts the relative absolute biases, relative standard deviations, and the relative root mean squared error (RRMSE) with different simulation budgets.
RRMSE is the ratio between the root MSE of the GNS estimator $\rho_{mn}$ and the benchmark estimate of $\rho$.
The error measures are expressed relative to the benchmark estimate and share the same unit (the standard deviation and root MSE are square roots of the variance and MSE, respectively).
We see that all three error measures decrease as the simulation budget increases, as expected.
Moreover, we see that the relative standard deviation almost coincides with the RRMSE, as the relative bias is small.
This is consistent with our intuition that the likelihood ratio estimator $L_m(X)$ is unbiased, which leads to relatively small bias in $g(L_m(X))$ and $\rho_{mn}$.
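For reproducibility, the relative error measures reported in the figures and tables can be computed from macro-replication estimates as in the following sketch, where `rho_hats` collects the independent GNS estimates and `rho_star` is the benchmark (both names are illustrative):

```python
import numpy as np

def relative_errors(rho_hats, rho_star):
    """Relative absolute bias, relative std. dev., and RRMSE, in % of rho_star."""
    bias = np.mean(rho_hats) - rho_star
    std = np.std(rho_hats)
    rmse = np.sqrt(bias ** 2 + std ** 2)
    scale = 100.0 / abs(rho_star)
    return abs(bias) * scale, std * scale, rmse * scale
```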
Illustration of the convergence rate of MSE for the GNS estimators.
Figure <ref> depicts the relative MSE (square of RRMSE) in log scale; a dashed line with slope $-1$ is added to the figure to aid visualization.
We see that the relative MSE follows closely with the dashed line, which means that it decreases at $\cO(\Gamma^{-1})$, where $\Gamma$ is the simulation budget of the GNS procedure.
This observation is consistent with Theorem <ref>, as the simulation budget is $\Gamma=m$ and we set $m=n$ in this experiment.
Comparison of relative absolute bias, relative standard deviation, RRMSE, and 90% confidence interval's coverage probability of the GNS procedure for different simulation budgets and different risk functions. The three error measures are in % of the benchmark estimate of $\rho$.
Sim. Budget   Risk function $g$   Rel.Abs.Bias   Rel.Std.Dev.   RRMSE    90% CI Cov.Prob.
              Indicator           0.61%          44.20%         44.20%   80.50%
              Hockey-stick        11.89%         67.68%         68.72%   86.87%
              Quadratic           4.01%          21.99%         22.35%   91.20%

              Indicator           0.34%          13.93%         13.93%   88.5%
              Hockey-stick        1.12%          22.20%         22.23%   88.3%
              Quadratic           0.33%          6.67%          6.68%    90.7%

              Indicator           0.16%          4.18%          4.18%    90.8%
              Hockey-stick        0.35%          6.81%          6.82%    88.4%
              Quadratic           0.11%          2.00%          2.00%    88.8%
Table <ref> presents a quantitative summary of this experiment.
Consistent with the observations in Figures <ref> and <ref>, all three error measures decrease as the simulation budget increases.
Also, the main contribution to the RRMSE is the relative standard deviation; the relative bias is small in all configurations.
Besides the three relative error measures, the last column in Table <ref> includes the coverage probabilities of the 90% CIs.
That is, the percentage of the 1,000 macro replications in which the benchmark estimate falls within the 90% CIs constructed according to Corollaries <ref> and <ref>.
We see that the coverage probabilities presented in Table <ref> are all close to 90%.
This observation supports the proposed variance estimators for the GNS estimator.
We emphasize that these variance estimators are obtained in one run of the GNS procedure so no macro replication is needed.
In this single-asset example ($d=1$), the underlying asset's price dynamics is governed by the following geometric Brownian motion:
\begin{eqnarray*}
d S_t=\mu S_t \d t+\sigma S_t \d B_t,
\end{eqnarray*}
where $B_t$ is a standard Brownian motion process, $\mu$ is the rate of return of the underlying asset under the real-world probability measure. Under the risk-neutral pricing measure, the drift of the geometric Brownian motion is changed to $r$, the risk-free interest rate. Then,
\begin{eqnarray*}
S_t=S_0\exp\left\{\left(\mu-\frac{1}{2}\sigma^2\right)t+\sigma B_t\right\},
\end{eqnarray*}
and the transition density function is
\begin{eqnarray*}
f(x,y)=\frac{1}{\sigma\sqrt{\Delta t}y}\phi\left(\frac{\log(y/x)-(\mu-\frac{1}{2}\sigma^2)\Delta t}{\sigma\sqrt{\Delta t}}\right),
\end{eqnarray*}
where $\phi$ is the standard normal density, and $\Delta t$ is the time interval from $x$ to $y$.
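The transition density translates directly into code; the following helper is an illustrative sketch rather than the paper's implementation:
```python
import numpy as np
from scipy.stats import norm

def gbm_transition_density(x, y, mu, sigma, dt):
    """Density f(x, y) of S_{t+dt} = y given S_t = x under GBM with
    drift mu and volatility sigma, as in the displayed formula."""
    z = (np.log(y / x) - (mu - 0.5 * sigma**2) * dt) / (sigma * np.sqrt(dt))
    return norm.pdf(z) / (sigma * np.sqrt(dt) * y)
```
A likelihood ratio between the real-world dynamics (drift $\mu$) and the risk-neutral sampling dynamics (drift $r$) would then presumably be a ratio of two such density evaluations.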
§.§ A Realistic Option Portfolio
In this example, we consider an option portfolio with 240 options written on 60 different assets.
The assets are divided into three groups, each with 20 assets, and assets from different groups are assumed to be independent.
This is a more realistic risk management problem compared to the previous example.
We compare the GNS procedure's performance with standard nested simulation and a state-of-the-art regression-based approach.
The option portfolio consists of 60 European call options, 60 geometric Asian call options, and 120 barrier options.
* In Group 1, there are 20 underlying assets. Three European call options with strikes $K=90,100,110$ are written on each asset in this group.
* In Group 2, there are 20 underlying assets. Three geometric Asian call options with strikes $K=90,100,110$ are written on each asset in this group.
The payoff of a geometric Asian call option is $((\prod_{k=1}^N S_{t_k}^i)^{1/N}-K)^+$, where $K$ is the strike price.
In the implementation we use $N=50$ time steps for these Asian options.
* In Group 3, there are 20 underlying assets. Three up-and-out call options with barrier $U=120$ and three down-and-out call options with barrier $D=90$ are written on these assets.
Both types of options have three different strikes $K=90,100,110$.
In the implementation we use $N=200$ time steps for these barrier options and use a Brownian bridge approximation between any two adjacent time points to simulate the continuously monitored maximum and minimum values; a code sketch follows this list.
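Between two monitoring dates with both prices below the barrier, the continuous maximum of the log-price bridge crosses $\log U$ with probability $\exp\left\{-2\log(U/S_{t_k})\log(U/S_{t_{k+1}})/(\sigma^2\Delta t)\right\}$ (see, e.g., [Glasserman, 2013]). A hedged Python sketch of the up-and-out check follows; the down-and-out case is analogous, using the minimum and barrier $D$:
```python
import numpy as np

def survives_up_and_out(path, U, sigma, dt, rng):
    """True if an up-and-out option survives along a simulated price
    path monitored on a grid with spacing dt (illustrative sketch)."""
    if np.max(path) >= U:                    # knocked out at a grid point
        return False
    for s0, s1 in zip(path[:-1], path[1:]):
        # Brownian bridge probability that the continuous log-price
        # maximum crosses log(U) between the two grid points.
        p_cross = np.exp(-2.0 * np.log(U / s0) * np.log(U / s1)
                         / (sigma**2 * dt))
        if rng.random() < p_cross:           # sample the crossing event
            return False
    return True
```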
We compare the GNS estimator with standard nested simulation estimators and the regression estimator proposed in [Broadie et al., 2015].
We consider different budget allocations for the standard nested simulation estimators, to identify the one with the highest accuracy.
For the regression estimator, weighted Laguerre polynomials on the underlying asset price up to an order of 4 are used as the basis functions <cit.>.
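A minimal sketch of such a regression estimator, assuming one (noisy) inner output per outer scenario price and NumPy's Laguerre utilities; the rescaling of prices and other details are implementation choices not specified in the paper:
```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def laguerre_features(s, order=4):
    """Weighted Laguerre basis e^{-s/2} L_k(s), k = 0, ..., order."""
    s = np.asarray(s, dtype=float)
    coeffs = np.eye(order + 1)
    return np.exp(-s / 2)[:, None] * np.column_stack(
        [lagval(s, coeffs[k]) for k in range(order + 1)])

def fit_regression_estimator(S, V, order=4):
    """Least-squares fit of inner outputs V on basis functions of the
    outer scenario prices S; returns a callable approximation of L."""
    Phi = laguerre_features(S, order)
    beta, *_ = np.linalg.lstsq(Phi, V, rcond=None)
    return lambda s: laguerre_features(s, order) @ beta
```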
Table <ref> summarizes the RRMSEs of the three approaches.
We see that, based on the RRMSEs, the GNS estimator is significantly more accurate than the standard nested simulation estimators.
For example, for a hockey-stick risk function with $10^5$ simulation budget, the lowest RRMSE of the standard nested simulation estimator, among all allocations presented in the table, is 21.98%.
The RRMSE of the GNS estimator with the same configuration is only 2.75%, which is 8 times smaller than the former.
Therefore, if we presume that the optimal convergence rate of the standard nested simulation estimator is achieved, i.e., $\Gamma^{-1/3}$ for the RRMSE, then its simulation budget would need to be $8^3$ times that of the GNS estimator to achieve the same level of RRMSE.
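Explicitly, under the presumed $\Gamma^{-1/3}$ rate, matching an 8-fold RRMSE reduction requires growing the budget from $\Gamma$ to $\Gamma'$ with
\begin{equation*}
\left(\frac{\Gamma'}{\Gamma}\right)^{-1/3} = \frac{1}{8}
\quad\Longrightarrow\quad
\Gamma' = 8^{3}\,\Gamma = 512\,\Gamma.
\end{equation*}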
Comparison of RRMSEs (%) for the standard nested simulation estimator, regression estimator, and the GNS estimator. For the standard nested simulation, the allocation $n \times m'$ means that there are $n$ outer scenarios with $m'$ inner samples each.
Simulation budget $m=10^3$ (standard nested simulation allocations $n\times m'$):

| Risk function $g$ | $10\times100$ | $20\times50$ | $40\times25$ | $50\times20$ | Regression | GNS |
|---|---|---|---|---|---|---|
| Indicator | 100.30% | 78.88% | 73.24% | 77.61% | 123.18% | 22.75% |
| Hockey-stick | 148.50% | 127.14% | 138.66% | 153.53% | 638.17% | 29.26% |
| Quadratic | 42.12% | 32.69% | 29.71% | 31.42% | 753.40% | 13.26% |

Simulation budget $m=10^4$:

| Risk function $g$ | $50\times200$ | $100\times100$ | $200\times50$ | $400\times25$ | Regression | GNS |
|---|---|---|---|---|---|---|
| Indicator | 44.50% | 34.14% | 34.12% | 50.27% | 16.85% | 7.00% |
| Hockey-stick | 60.12% | 51.61% | 58.48% | 98.09% | 48.63% | 8.87% |
| Quadratic | 18.29% | 14.23% | 12.81% | 18.78% | 10.92% | 3.88% |

Simulation budget $m=10^5$:

| Risk function $g$ | $200\times500$ | $400\times250$ | $1,\!000\times100$ | $2,\!000\times50$ | Regression | GNS |
|---|---|---|---|---|---|---|
| Indicator | 21.27% | 15.59% | 16.38% | 26.81% | 2.82% | 2.13% |
| Hockey-stick | 28.64% | 21.98% | 27.00% | 48.02% | 5.53% | 2.75% |
| Quadratic | 8.84% | 6.52% | 6.01% | 9.08% | 1.39% | 1.16% |
Table <ref> also shows that the GNS estimator outperforms the regression estimator, sometimes significantly so, e.g., when the simulation budget is small.
In all experiments presented in Table <ref>, the GNS estimator has smaller RRMSEs than the regression estimator, although the difference becomes smaller as the simulation budget increases. It should be pointed out that the bias of the regression estimator may persist no matter how large the simulation budget is, due to the model error in selecting basis functions. By contrast, convergence of the GNS estimator to $\rho$ is theoretically guaranteed as the simulation budget increases.
§ CONCLUSIONS
We have proposed a green nested simulation (GNS) procedure that pools inner simulation outputs from different outer scenarios for solving nested estimation problems. Inner simulation outputs are weighted by likelihood ratios to ensure the unbiasedness of the conditional expectation estimates, which helps produce a convergent GNS estimator. The MSE of the GNS estimator is shown to converge at a rate of $\Gamma^{-1}$, the fastest rate that can be achieved by a typical simulation estimator, where $\Gamma$ is the simulation budget. This rate is achieved by simply recycling the inner simulation outputs weighted by likelihood ratios, without introducing the modeling errors that are common in existing regression-based and metamodeling-based methods when selecting basis functions, covariance functions, or kernel bandwidths. A CLT and variance estimators for the GNS procedure have been established, enabling the construction of asymptotically valid confidence intervals. Numerical examples on the portfolio risk measurement application have shown that the proposed GNS procedure works quite well.
[Ankenman et al., 2010] Bruce Ankenman, Barry L. Nelson, and Jeremy Staum. Stochastic kriging for simulation metamodeling. Operations Research, 58(2):371–382, 2010.
[Avramidis and Hyden, 1999] Athanassios N. Avramidis and Paul Hyden. Efficiency improvements for pricing American options with a stochastic mesh. In Proceedings of the 31st Winter Simulation Conference, pages 344–350. ACM, 1999.
[Avramidis and Matzinger, 2004] Athanassios N. Avramidis and Heinrich Matzinger. Convergence of the stochastic mesh estimator for pricing Bermudan options. Journal of Computational Finance, 7(4):73–91, 2004.
[Barton, 2012] Russell R. Barton. Tutorial: Input uncertainty in output analysis. In C. Laroque et al., editors, Proceedings of the 2012 Winter Simulation Conference, pages 1–12. IEEE, 2012.
[Beckman and McKay, 1987] Richard J. Beckman and Michael D. McKay. Monte Carlo estimation under different distributions using the same simulation. Technometrics, 29(2):153–160, 1987.
[Broadie and Glasserman, 2004] Mark Broadie and Paul Glasserman. A stochastic mesh method for pricing high-dimensional American options. Journal of Computational Finance, 7:35–72, 2004.
[Broadie et al., 2000] Mark Broadie, Paul Glasserman, and Zachary Ha. Pricing American options by simulation using a stochastic mesh with optimized weights. In Probabilistic Constrained Optimization, pages 26–44. Springer, 2000.
[Broadie et al., 2011] Mark Broadie, Yiping Du, and Ciamac C. Moallemi. Efficient risk estimation via nested sequential simulation. Management Science, 57(6):1172–1194, 2011.
[Broadie et al., 2015] Mark Broadie, Yiping Du, and Ciamac C. Moallemi. Risk estimation via regression. Operations Research, 63(5):1077–1097, 2015.
[Carriere, 1996] Jacques F. Carriere. Valuation of the early-exercise price for options using simulations and nonparametric regression. Insurance: Mathematics and Economics, 19(1):19–30, 1996.
[Cheng and Holland, 1997] Russell C. H. Cheng and Wayne Holland. Sensitivity of computer simulation experiments to errors in input distributions. Journal of Statistical Computation and Simulation, 57(1-4):219–241, 1997.
[Dong et al., 2018] Jing Dong, M. Ben Feng, and Barry L. Nelson. Unbiased metamodeling via likelihood ratios. In 2018 Winter Simulation Conference (WSC), pages 1778–1789. IEEE, 2018.
[Feng and Staum, 2017] Mingbin Feng and Jeremy Staum. Green simulation: Reusing the output of repeated experiments. ACM Transactions on Modeling and Computer Simulation (TOMACS), 27(4):23, 2017.
[Glasserman, 2013] Paul Glasserman. Monte Carlo Methods in Financial Engineering, volume 53. Springer Science & Business Media, 2013.
[Gordy and Juneja, 2010] Michael B. Gordy and Sandeep Juneja. Nested simulation in portfolio risk measurement. Management Science, 56(10):1833–1848, 2010.
[Hong et al., 2017] L. Jeff Hong, Sandeep Juneja, and Guangwu Liu. Kernel smoothing for nested estimation with application to portfolio risk measurement. Operations Research, 65(3):657–673, 2017.
[Lan et al., 2010] Hai Lan, Barry L. Nelson, and Jeremy Staum. A confidence interval procedure for expected shortfall risk measurement via two-level simulation. Operations Research, 58(5):1481–1490, 2010.
[L'Ecuyer, 1990] Pierre L'Ecuyer. A unified view of the IPA, SF, and LR gradient estimation techniques. Management Science, 36(11):1364–1383, 1990.
[Lee, 1998] Shing-Hoi Lee. Monte Carlo Computation of Conditional Expectation Quantiles. PhD thesis, Stanford University, 1998.
[Lee and Glynn, 2003] Shing-Hoi Lee and Peter W. Glynn. Computing the distribution function of a conditional expectation via Monte Carlo: Discrete conditioning spaces. ACM Transactions on Modeling and Computer Simulation (TOMACS), 13(3):238–258, 2003.
[Liu and Staum, 2010] Ming Liu and Jeremy Staum. Stochastic kriging for efficient nested simulation of expected shortfall. Journal of Risk, 12(3):3, 2010.
[Liu et al., 2010] Ming Liu, Barry L. Nelson, and Jeremy Staum. An efficient simulation procedure for point estimation of expected shortfall. In Proceedings of the 2010 Winter Simulation Conference (WSC), pages 2821–2831. IEEE, 2010.
[Longstaff and Schwartz, 2001] Francis A. Longstaff and Eduardo S. Schwartz. Valuing American options by simulation: A simple least-squares approach. The Review of Financial Studies, 14(1):113–147, 2001.
[Nadaraya, 1964] Elizbar A. Nadaraya. On estimating regression. Theory of Probability & Its Applications, 9(1):141–142, 1964.
[Serfling, 2009] Robert J. Serfling. Approximation Theorems of Mathematical Statistics. John Wiley & Sons, 2009.
[Shreve, 2004] Steven E. Shreve. Stochastic Calculus for Finance II: Continuous-Time Models, volume 11. Springer Science & Business Media, 2004.
[Staum, 2009] Jeremy Staum. Better simulation metamodeling: The why, what, and how of stochastic kriging. In Proceedings of the 2009 Winter Simulation Conference (WSC), pages 119–133. IEEE, 2009.
[Tsitsiklis and Van Roy, 2001] John N. Tsitsiklis and Benjamin Van Roy. Regression methods for pricing complex American-style options. IEEE Transactions on Neural Networks, 12(4):694–703, 2001.
[Watson, 1964] Geoffrey S. Watson. Smooth regression analysis. Sankhyā: The Indian Journal of Statistics, Series A, pages 359–372, 1964.
[Zhu et al., 2020] Helin Zhu, Tianyi Liu, and Enlu Zhou. Risk quantification in stochastic simulation under input uncertainty. ACM Transactions on Modeling and Computer Simulation (TOMACS), 30(1):1–24, 2020.
§ AUXILIARY PROOFS FOR RESULTS IN SECTION <REF>
Assumption <ref> <ref> ensures that the likelihood ratio is well-defined, so for any fixed scenario $x$ we have, by Equation (<ref>),
\begin{equation*}
\E\left[L_m(x)\right]= \E\left[\avgmj\hatH(x,Y_j)\right] = \E\left[\hatH(x,Y)\right] = L(x).
\end{equation*}
Also, since $\E\left[|\hatH|\right]<\infty$ and $Y_j$, $j=1,\ldots,m$ are i.i.d., by the strong law of large numbers we have $L_m(x)\stackrel{a.s.}{\to} L(x) \mbox{ as } m\to\infty$.
This means that $\P\left(\lim\limits_{m\to\infty} L_m(x)=L(x)\right)=1$ for any fixed $x$.
Because $X$ and $Y$ are independent by Assumption <ref> <ref>, the Independence Lemma <cit.> implies that
\begin{align*}
&\P\left(\lim_{m\to\infty}L_m(X)=L(X)\right)=\E\left[\1\left\lbrace \lim\limits_{m\to\infty}L_m(X)=L(X)\right\rbrace \right]\\
=&\E\left[ \E\left[\1\left\lbrace \lim\limits_{m\to\infty}L_m(X)=L(X)\right\rbrace|X \right] \right]
=\E\left[\left. \P\left(\lim\limits_{m\to\infty}L_m(x)=L(x)\right)\right|_{x=X}\right]=1.
\end{align*}
This means that $L_m(X)\stackrel{a.s.}{\to} L(X) \mbox{ as } m\to\infty$ and the proof is complete.
Note that
\begin{align*}\everymath{\displaystyle}
\E\left[(R-\E\left[R|\cG\right])^{2p}\right]\leq&\E\left[(|R|+|\E\left[R|\cG\right]|)^{2p}\right]=\E\left[\sum_{k=0}^{2p}\binom{2p}{k} |R|^{2p-k}|\E\left[R|\cG\right]|^{k}\right]\\
=&\E [R^{2p}]+\E\left[\E\left[R|\cG\right]^{2p}\right]+\sum_{k=1}^{2p-1}\binom{2p}{k}\E \left[|R|^{2p-k}|\E\left[R|\cG\right]|^{k}\right]\\
\stackrel{(*)}{\leq}&\E [R^{2p}]+\E\left[\E\left[R|\cG\right]^{2p}\right]+\sum_{k=1}^{2p-1}\binom{2p}{k} (\E [R^{2p}])^\frac{2p-k}{2p}\left(\E\left[\left( \E\left[R|\cG\right]\right)^{2p}\right]\right)^\frac{k}{2p}\\
\stackrel{(**)}{\leq}&\E [R^{2p}]+\E [R^{2p}]+\sum_{k=1}^{2p-1}\binom{2p}{k} (\E\left[R^{2p}\right])^\frac{2p-k}{2p}(\E [R^{2p}])^\frac{k}{2p}\\
=&\sum_{k=0}^{2p}\binom{2p}{k}\E [R^{2p}]=2^{2p}\E [R^{2p}],
\end{align*}
where inequalities $(*)$ and $(**)$ follow from Hölder's and Jensen's inequalities, respectively. The proof is complete.
According to the multinomial theorem and the conditional independence of $R_j$'s, we have
\begin{eqnarray*}
\E\left[\left(\avgmj R_j\right)^{2p}\right] &=& \frac{1}{m^{2p}}\sum_{i_1+\cdots+i_k=2p}\frac{(2p)!}{i_1!i_2!\cdots i_k!}\E\left[R_{j_1}^{i_1}\cdots R_{j_k}^{i_k}\right]\\
&=& \frac{1}{m^{2p}}\sum_{i_1+\cdots+i_k=2p}\frac{(2p)!}{i_1!i_2!\cdots i_k!}\E\left[\E\left[R_{j_1}^{i_1}\cdots R_{j_k}^{i_k}|\cG\right]\right]\\
&=& \frac{1}{m^{2p}}\sum_{i_1+\cdots+i_k=2p}\frac{(2p)!}{i_1!i_2!\cdots i_k!}\E\left[\E\left[R_{j_1}^{i_1}|\cG\right]\cdots \E\left[R_{j_k}^{i_k}|\cG\right]\right].
\end{eqnarray*}
We will next bound the value and the number of summands. Since $i_1+\cdots+i_k=2p$, one can show that
\begin{equation*}
\E\left[R_{j_1}^{i_1}\cdots R_{j_k}^{i_k}\right] \leq \E\left[\left|R_{j_1}^{i_1} R_{j_2}^{i_2}\cdots R_{j_k}^{i_k}\right|\right]\stackrel{(*)}{\leq} \left(\E\left[ R_{j_1}^{2p}\right]\right)^\frac{i_1}{2p}\cdots\left(\E\left[ R_{j_k}^{2p}\right]\right)^\frac{i_k}{2p}=\E\left[ R_1^{2p}\right] < \infty,
\end{equation*}
where $(*)$ follows from the generalized Hölder inequality.
Since $\E\left[R_j|\cG\right]=0$ for all $1\leq j\leq m$, for a summand to be non-zero it must have all $i_1,\ldots,i_k \geq 2$.
Combining this with $i_1+\cdots +i_k = 2 p$, we have $k\leq p$.
Table <ref> summarizes the multinomial coefficients and the number of summands of the form $\E\left[R_{j_1}^{i_1}\cdots R_{j_k}^{i_k}\right]$ for fixed numbers $k=1,\ldots,p$; the special case where $k=p$ is given in the second row.
| Summand expression | Multinomial coefficient | # of different $\{i_1,\ldots,i_k\}$ | # of different $\{j_1,\ldots,j_k\}$ | Product |
|---|---|---|---|---|
| $\E\left[R_{j_1}^{i_1}\cdots R_{j_k}^{i_k}\right]$ | $\displaystyle\frac{(2p)!}{i_1!i_2!\cdots i_k!}$ | # of integer solutions satisfying $i_1,\ldots, i_k \geq 2$ and $i_1+\cdots+i_k=2p$; does not depend on $m$ | $\displaystyle\binom{m}{k} = \cO(m^k)$ | $\cO\left(m^k\right)\leq \cO(m^{p-1})$ for $k\leq p-1$ |
| $\E\left[R_{j_1}^{2}\cdots R_{j_p}^{2}\right]$ | $\displaystyle\frac{(2p)!}{2^p}$ | 1 | $\displaystyle\binom{m}{p} = \frac{m^p}{p!} + \cO(m^{p-1})$ | $\displaystyle c_p m^p + \cO(m^{p-1})$, where $c_p=\frac{(2p)!}{2^p(p!)}$ |

A breakdown of the number of summands with $k$ unique $R_{j}$'s, for $k=1,\ldots,p$. The binomial coefficient is $\binom{n}{k} =\frac{n!}{(n-k)!\,k!}$.
For sufficiently large $m$, we have $\binom{m}{k} \leq \binom{m}{p}$ for $k\leq p$.
Therefore, as $m \to\infty$,
\begin{equation*}
\E\left[\left(\avgmj R_j\right)^{2p}\right] = \frac{1}{m^{2p}}\left(c_pm^p + \cO(m^{p-1}) \right) \E\left[ R_1^{2p}\right] = \cO\left(m^{-p}\right).
\end{equation*}
The proof is complete.
Let $L_m(X)-L(X)=\avgmj R_j$, where $R_j=H(X, Y_j)-L(X)$ for $j=1,\ldots,m$, and let $\cG = \sigma(X)$; then it suffices to verify that the conditions of Lemma <ref> hold.
Firstly, since the $Y_j$ are i.i.d., the $R_j$'s are identically distributed and conditionally independent given $X$.
Moreover, by Equation (<ref>) we have $\E\left[H(X, Y_j)|X\right] = L(X)$, so $\E\left[R_j|\cG\right]=0$ for $j=1,\ldots,m$.
Lastly, the $2p$-th moment of $R_1$ is bounded because
\begin{equation*}
\E\left[|R_1|^{2p}\right] = \E\left[\left(H(X, Y_1)-\E\left[H(X, Y_1)|X\right]\right)^{2p}\right]\stackrel{(*)}{\leq} 4^{p}\E\left[\left|H(X, Y_1)\right|^{2p}\right]<\infty,
\end{equation*}
where the inequality $(*)$ holds due to Lemma <ref> with $R=H(X, Y_1)$, and $\cG=\sigma(X)$.
The proof is complete.
§ SUPPLEMENTARY DETAILS FOR ASYMPTOTIC BIAS, VARIANCE, AND MSE
A few special instances of the Cauchy–Schwarz inequality are frequently used in our analysis, so we summarize them in Lemma <ref> for ease of reference.
For all vectors $\bm{x}$ and $\bm{y}$ of an inner product space, the Cauchy–Schwarz inequality asserts that $|\left\langle \bm{x},\bm{y}\right\rangle |^2 \leq \left\langle \bm{x},\bm{x}\right\rangle\cdot \left\langle \bm{y},\bm{y}\right\rangle$, where $\left\langle \cdot,\cdot\right\rangle$ denotes the inner product.
In particular, if $\bm{x}=(x_1,\ldots,x_n)$ and $\bm{y}$ is a vector of ones with compatible dimension, then
\begin{equation}\label{eq:CauthySchwarzIneq1}
\left(\sum_{i=1}^n x_i\right)^2 \leq n\sum_{i=1}^n x_i^2.
\end{equation}
Also, if $X, X_1,\ldots,X_n$ are identically distributed random variables, then
\begin{equation}\label{eq:CauthySchwarzIneq2}
\E\left[\left(\avgni X_i\right)^2\right] = \frac{1}{n^2} \E\left[\left(\sum_{i=1}^n X_i\right)^2\right] \leq \frac{1}{n} \left(\sum_{i=1}^n\E\left[ X_i^2\right]\right) = \E[X^2].
\end{equation}
Moreover, define the inner product of two arbitrary random variables $X$ and $Y$ as the expectation of their product, then
\begin{equation}\label{eq:CauthySchwarzIneq3}
\E\left[|XY|\right] \leq \left(\E\left[|X|^2\right]\right)^{1/2}\left(\E\left[|Y|^2\right]\right)^{1/2}.
\end{equation}
Lastly, (<ref>) implies that the following inequality holds for arbitrary random variables $X$ and $Y$,
\begin{align}
&\E\left[X^2-Y^2\right] \leq \E\left[\left|X^2-Y^2\right|\right] = \E\left[\left|(X-Y)^2 + 2Y(X-Y)\right|\right]\nonumber\\
\leq &\E\left[(X-Y)^2\right] + 2\left(\E\left[Y^2\right]\right)^{1/2}\left(\E\left[(X-Y)^2\right]\right)^{1/2}.\label{eq:CauthySchwarzIneq4}
\end{align}
Proposition <ref>, Proposition <ref>, and Theorem <ref> are analyzed in Sections <ref>, <ref>, and <ref>, respectively.
This section provides additional details for the unproven parts of the above results, such as the proof of Lemma <ref> and the asymptotic variance for the indicator risk function.
For Equation (<ref>), note that
\begin{align}
\E\left[\1\{L_m\geq 0\} - \1\{L\geq 0\}\right] &= \int\int_{-z/\sqrt{m}}^{\infty} p_m(\ell,z)\d\ell\d z - \int\int_{0}^{\infty} p_m(\ell,z)\d\ell\d z\nonumber\\
&= \int\int_{-z/\sqrt{m}}^{0} p_m(\ell,z)\d\ell\d z \nonumber\\
&\stackrel{(*)}{=} \int\int_{-z/\sqrt{m}}^{0} \left[p_m(0,z) + \ell\cdot\frac{\partial}{\partial \ell} p_m(u_\ell,z)\right] \d\ell\d z \nonumber\\
&= \int \frac{z}{\sqrt{m}}p_m(0,z)\d z + \int\int_{-z/\sqrt{m}}^{0} \ell \frac{\partial}{\partial \ell} p_m(u_\ell,z)\d\ell\d z.\label{eq:auxeq1}
\end{align}
where $(*)$ holds by Assumption <ref>.
The first term in (<ref>) can be written as $\frac{\widetilde{p}(0)}{\sqrt{m}}\E[Z_m|L=0]$, which equals 0 because, by Proposition <ref>,
\[
\frac{1}{\sqrt{m}}\E[Z_m|L=0] = \E[\E[L_m(X)-L(X)|X]|L(X)=0] = \E[L(X)-L(X)|L(X)=0]= 0.
\]
The second term of (<ref>) is of order $\cO(m^{-1})$ because, by Assumption <ref> <ref>, it is bounded by
\[
\int\int_{-z/\sqrt{m}}^{0} |\ell|\cdot\bar{p}_{1,m}(z)\d\ell\d z = \frac{1}{2m}\int z^2 \bar{p}_{1,m}(z)\d z = \cO(m^{-1}).
\]
For Equation (<ref>), note that
\begin{align*}
&\E\left[|L_m\cdot(\1\{L_m\geq 0\} - \1\{L\geq 0\})|\right]\\
\leq& \E\left[|L_m|\cdot\1\{L_m\geq 0 > L\}\right] +\E\left[|L_m|\cdot\1\{L\geq 0 > L_m\}\right]\\
=&\int^{\infty}_{0}\int_{-z/\sqrt{m}}^{0} \left|\ell+\frac{z}{\sqrt{m}}\right| p_m(\ell,z)\d\ell\d z + \int_{-\infty}^{0}\int^{-z/\sqrt{m}}_{0} \left|\ell+\frac{z}{\sqrt{m}}\right| p_m(\ell,z)\d\ell\d z\\
\leq&\int^{\infty}_{0}\int_{-z/\sqrt{m}}^{0} \left(|\ell|+\frac{|z|}{\sqrt{m}}\right) \bar{p}_{0,m}(z)\d\ell\d z +\int_{-\infty}^{0}\int^{-z/\sqrt{m}}_{0} \left(|\ell|+\frac{|z|}{\sqrt{m}}\right) \bar{p}_{0,m}(z)\d\ell\d z\\
=&\int^{\infty}_{0}\left(\frac{z^2}{2m} + \frac{z^2}{m}\right) \bar{p}_{0,m}(z)\d z +\int_{-\infty}^{0}\left(\frac{z^2}{2m} + \frac{z^2}{m}\right) \bar{p}_{0,m}(z)\d z\\
=&\frac{3}{2m}\int z^2 \bar{p}_{0,m}(z)\d z= \cO(m^{-1}),
\end{align*}
where the last equality holds by Assumption <ref> <ref>.
The proof is complete.
The discussion in Section <ref> establishes Proposition <ref> for smooth and hockey-stick risk functions.
For the indicator risk function, it remains to prove that the first term in (<ref>) is of order $\cO(m^{-1}) + \cO(n^{-1})$.
Note that $L_{m,i}$, $i=1,\ldots,n$ are identically distributed (so are $\Li$, $i=1,\ldots,n$); then
\begin{align}
&\E\left[\left(\avgni \left(g\left(\Lmi\right) - g\left(\Li\right)\right)\right)^2 \right]\nonumber\\
=&\frac{1}{n^2}\E\left[\sum_{i=1}^n\left(g\left(\Lmi\right) - g\left(\Li\right)\right)^2 + \sum_{i=1}^n\sum_{\substack{k=1\\ k\neq i}}^n\left(g\left(\Lmi\right) - g\left(\Li\right)\right)\left(g\left(L_{m,k}\right) - g\left(L_{k}\right)\right)\right]\nonumber\\
=&\frac{1}{n}\E\left[\left(g\left(L_{m,1}\right) - g\left(L_{1}\right)\right)^2 \right]+\frac{n-1}{n}\E\left[\left(g\left(L_{m,1}\right) - g\left(L_1\right)\right)\left(g\left(L_{m,2}\right) - g\left(L_2\right)\right)\right]\nonumber\\
\leq&\frac{1}{n}+\frac{n-1}{n}\E\left[\left(g\left(L_{m,1}\right) - g\left(L_{1}\right)\right)\left(g\left(L_{m,2}\right) - g\left(L_{2}\right)\right)\right], \label{VarIndicator01}
\end{align}
where the inequality holds because $g(x)=\1\{x\geq 0\}\leq 1$ and so $(g(x)-g(y))^2 \leq 1$.
The first term in (<ref>) is of order $\cO(n^{-1})$.
For the second term in (<ref>), note that
\begin{align}
&\E\left[\left(g\left(L_{m,1}\right) - g\left(L_1\right)\right)\left(g\left(L_{m,2}\right) - g\left(L_2\right)\right)\right]\nonumber\\
=&\E\left[\left(\1\{L_{m,1}\geq 0 > L_1\} - \1\{L_1\geq 0>L_{m,1}\}\right)\left(\1\{L_{m,2}\geq 0 > L_2\} - \1\{L_2 \geq 0 > L_{m,2}\}\right)\right]\nonumber\\
=&\P\left(L_{m,1} \geq 0 > L_1,L_{m,2} \geq 0 > L_2 \right) - \P\left(L_1\geq 0 > L_{m,1},L_{m,2} \geq 0 >L_2\right) \nonumber\\
& - \P\left(L_{m,1} \geq 0 > L_1, L_2 \geq 0 > L_{m,2}\right) + \P\left(L_1 \geq 0 >L_{m,1},L_2 \geq 0 > L_{m,2}\right).\label{eq:auxeq2}
\end{align}
We examine the convergence rate of the first term in (<ref>); the analysis is the same for all four terms.
By Assumption <ref>, we can apply Taylor's theorem to the joint density $q_m(\ell_1,\ell_2,z_1,z_2)$, so
\begin{align}
q_m(\ell_1,\ell_2,z_1,z_2)&=q_m(0,0,z_1,z_2) + \ell_1\frac{\partial}{\partial \ell_1}q_m(\bar{\ell}_1,\bar{\ell}_2,z_1,z_2)+ \ell_2\frac{\partial}{\partial \ell_2}q_m(\bar{\ell}_1,\bar{\ell}_2,z_1,z_2)\label{indVar Taylor q}\\
&\leq q_m(0,0,z_1,z_2) + (|\ell_1|+|\ell_2|)\cdot \bar{q}_{1,m}(z_1,z_2)\label{indVar Taylor q_ineq},
\end{align}
where $\bar{\ell}_1\in(\ell_1,0)$, $\bar{\ell}_2\in(\ell_2,0)$, and the inequality holds by Assumption <ref> <ref>.
Then we have
\begin{align*}
&\P\left(L_{m,1} \geq 0 > L_1,\,L_{m,2} \geq 0 > L_2\right)\\
=&\int_0^{\infty}\int_0^{\infty}\int_{-\frac{z_1}{\sqrt{m}}}^0\int_{-\frac{z_2}{\sqrt{m}}}^0 q_m(\ell_1,\ell_2,z_1,z_2)\d\ell_1 \d\ell_2 \d z_1 \d z_2\\
\stackrel{\eqref{indVar Taylor q_ineq}}{\leq}&\frac{1}{m}\int_0^{\infty}\int_0^{\infty} z_1z_2 \bar{q}_{0,m}(z_1,z_2)\d z_1 \d z_2 + \int_0^{\infty}\int_0^{\infty}\int_{-\frac{z_1}{\sqrt{m}}}^0\int_{-\frac{z_2}{\sqrt{m}}}^0 (|\ell_1| + |\ell_2|)\bar{q}_{1,m}(z_1,z_2)\d\ell_1 \d\ell_2 \d z_1 \d z_2\\
=&\cO(m^{-1}) + \frac{1}{2m^{3/2}}\int_0^{\infty}\int_0^{\infty}(z_1^2z_2+z_1z_2^2)\bar{q}_{1,m}(z_1,z_2)\d z_1 \d z_2\\
=&\cO(m^{-1}) + \cO(m^{-3/2})=\cO(m^{-1}).
\end{align*}
This means that the first term in (<ref>), and indeed all four terms, converge at the rate $\cO(m^{-1})$.
So (<ref>) is of order $\cO(m^{-1})+\cO(n^{-1})$.
Combining this with the latter two terms in (<ref>), which are of order $\cO(n^{-1})$ and $\cO(m^{-2})$, we see that $\Var[\rho_{mn}]=\cO(m^{-1})+\cO(n^{-1})$, as desired.
§ PROOF FOR THEOREM <REF>
We will use a few lemmas below to help prove Theorem <ref>.
Specifically, Lemmas <ref> and <ref> show that $\widehat{\sigma}_{1,mn}^2\conprob\sigma_{1}^2$ and Lemmas <ref> and <ref> show that $\widehat{\sigma}_{2,mn}^2\conprob\sigma_{2}^2$.
Then $\widehat{\sigma}_{mn}^2/\sigma_{mn}^2$ converges to 1 in probability by the continuous mapping theorem.
Suppose the conditions for Theorem <ref> hold; then the following convergences hold for any positive integer $n$:
\begin{align}
&\avgni \left[g(\Lmi)-g(\Li)\right] \conlone 0 \mbox{ as } m\to \infty, \label{eq:aux1}\\
&\avgni \left[(g(\Lmi))^2-(g(\Li))^2\right] \conlone 0 \mbox{ as } m\to \infty.\label{eq:aux2}
\end{align}
Recall that $\Lmi$, $i=1,\ldots,n$ are identically distributed, and so are $\Li$, $i=1,\ldots,n$.
\begin{align*}
\E\left[\left|\avgni \left[g(\Lmi)-g(\Li)\right]\right|\right] \leq \E[|g(L_m)-g(L)|]
\stackrel{\eqref{eq:CauthySchwarzIneq3}}{\leq} \left(\E[(g(L_m)-g(L))^2]\right)^{1/2} \stackrel{(*)}{=} \cO(m^{-1/2})
\end{align*}
where $(*)$ holds by Equations (<ref>) and (<ref>).
This means that (<ref>) holds.
\begin{align*}
&\E\left[\left|\avgni \left[(g(\Lmi))^2-(g(\Li))^2\right]\right|\right]\leq \E[|(g(L_m))^2-(g(L))^2|]\\
\stackrel{\eqref{eq:CauthySchwarzIneq4}}{\leq} &\E\left[\left(g(L_m)-g(L)\right)^2\right] + 2\left(\E\left[(g(L))^2\right]\right)^{1/2}\left(\E\left[\left(g(L_m)-g(L)\right)^2\right]\right)^{1/2}\\
=&\cO(m^{-1}) + \cO(m^{-1/2})=\cO(m^{-1/2}),
\end{align*}
where the last equality holds due to Equations (<ref>) and (<ref>).
This means that (<ref>) holds.
The proof is complete.
If the conditions for Theorem <ref> hold, then $\widehat{\sigma}_{1,mn}^2\conprob\sigma_{1}^2$ as $\min\{m,n\}\to \infty$.
By (<ref>) and (<ref>), we have
\begin{equation}\label{eq:diff_sig1hat}
\widehat{\sigma}_{1,mn}^2 - \sigma_{1}^2 = \left[\avgni (g(\Lmi))^2 - \E\left[(g\left(L\right))^2\right]\right] - \left[\left(\avgni g(\Lmi)\right)^2 - \left(\E\left[g\left(L\right)\right]\right)^2\right].
\end{equation}
We then show that both terms on the RHS converge to 0 in probability as $\min\{m,n\}\to \infty$.
For the first term in (<ref>), note that
\begin{align}
&\avgni (g(\Lmi))^2 - \E\left[(g\left(L\right))^2\right] \nonumber\\
= &\avgni \left[(g(\Lmi))^2-(g(\Li))^2\right] +\left(\avgni (g(\Li))^2- \E\left[(g\left(L\right))^2\right]\right).\label{eq:diff1}
\end{align}
The first term in (<ref>) converges to 0 in probability by (<ref>) in Lemma <ref>.
The second term in (<ref>) converges to zero in probability as $n\to\infty$ by the weak law of large numbers because $(g(\Li))^2$, $i=1,\ldots,n$ are i.i.d. samples with the common expectation $\E[(g\left(L\right))^2]$.
For the second term in (<ref>), by the continuous mapping theorem it suffices to show that $\avgni g(\Lmi) \conprob \E\left[g\left(L\right)\right]$.
Note that
\begin{align}
\avgni g(\Lmi) - \E\left[g\left(L\right)\right] =\avgni \left[g(\Lmi)-g(\Li)\right] +\left(\avgni g(\Li)- \E\left[g\left(L\right)\right]\right).\label{eq:diff2}
\end{align}
The first term in the RHS of (<ref>) converges to 0 in probability by (<ref>) in Lemma <ref>.
The second term on the RHS of (<ref>) converges to 0 in probability as $n\to\infty$ by the weak law of large numbers because $g(\Li)$, $i=1,\ldots,n$ are i.i.d. samples with the common expectation $\E[g\left(L\right)]$.
Therefore, by Slutsky's theorem, $\avgni g(\Lmi) \conprob \E\left[g\left(L\right)\right]$, as desired.
In summary, both terms in (<ref>) converge to 0 in probability.
The proof is complete.
The next two lemmas show $\widehat{\sigma}_{2,mn}^2\conprob\sigma_{2}^2$.
We define additional notation for convenience in stating and proving the lemmas:
For any $j=1,\ldots,m$,
\begin{equation}\label{eq:notations}
R_{j} := \E[g'(L)\hatH|Y=Y_j],\
\hatR_{j} := \avgni g'(\Li)\hatHij, \mbox{ and }
\hatR_{m,j} := \avgni g'(\Lmi)\hatHij.
\end{equation}
Note that $R_j$, $j=1,\ldots,m$, are identically distributed, and so are $\hatR_{j}$, $j=1,\ldots,m$, and $\hatR_{m,j}$, $j=1,\ldots,m$.
When no confusion arises, the subscript $j$ is omitted to denote a generic index $j=1,\ldots,m$.
If the conditions for Theorem <ref> hold, then the following convergences hold:
\begin{align}
&\avgmj \left[\hatR_{m,j}^2-\hatR_{j}^2\right] \conlone 0, \label{eq:aux4}\\
&\avgmj \left[\hatR_{j}^2-R_{j}^2\right] \conlone 0,\label{eq:aux5} \\
& \avgni \left[g'(\Lmi)\Lmi - g'(\Li)\Li\right] \conlone 0.\label{eq:aux6}
\end{align}
Firstly, note that because $\hatR_{m,j}$, $j=1,\ldots,m$, are identically distributed (so are $\hatR_{j}$, $j=1,\ldots,m$), we have
\begin{align}
&\E\left[\left|\avgmj \left[\hatR_{m,j}^2-\hatR_{j}^2\right]\right|\right]\nonumber\leq \E\left[\left|\hatR_{mn}^2-\hatR_{n}^2\right|\right]\nonumber\\
\stackrel{\eqref{eq:CauthySchwarzIneq4}}{\leq}& \E[\left(\hatR_{mn} - \hatR_n\right)^2] + 2\left(\E\left[\hatR_n^2\right]\right)^{1/2}\left(\E[(\hatR_{mn} - \hatR_n)^2]\right)^{1/2}\nonumber\\
\stackrel{\eqref{eq:CauthySchwarzIneq2}}{\leq}& \E\left[\left(g'(L_m)\hatH - g'(L)\hatH\right)^2\right] + 2\left(\E\left[\left(g'(L)\hatH\right)^2\right]\right)^{1/2}\left(\E\left[\left(g'(L_m)\hatH - g'(L)\hatH\right)^2\right]\right)^{1/2}\label{eq:aux7}
\end{align}
* For smooth risk functions that have a bounded second derivative, $|g''(x)|\leq C_g<\infty$, by Taylor's theorem we have
\begin{align*}
&\E[((g'(L_m)-g'(L))\hatH)^2] = \E[(g''(\Lambda_m)(L_m-L)\hatH)^2] \\
\stackrel{\eqref{eq:CauthySchwarzIneq3}}{\leq}& C_g^2 (\E\left[(L_m-L)^4\right])^{1/2}\left(\E\left[\hatH^4\right]\right)^{1/2} \stackrel{(*)}{=} \cO(m^{-1}),
\end{align*}
where $\Lambda_m$ is a random variable between $L$ and $L_m$ and $(*)$ holds because $\E\left[(L_m-L)^4\right] = \cO(m^{-2})$ by Theorem <ref> with $p=2$ and $\E[\hatH^4]<\infty$ by assumption.
Also, because $\E\left[\left(g'(L)\right)^4\right]<\infty$ and $\E\left[\hatH^4\right]<\infty$ by assumption, we have
\begin{align*}
\E\left[\left(g'(L)\hatH\right)^2\right] \leq \left(\E\left[\left(g'(L)\right)^4\right]\right)^{1/2}\left(\E\left[\hatH^4\right]\right)^{1/2} < \infty.
\end{align*}
Therefore (<ref>) is of order $\cO(m^{-1})$ so it converges to zero as $m\to\infty$ for smooth risk functions.
* For the hockey-stick risk function $g(x)=\max\{x,0\}$ we have $g'(x)=\1\{x\geq 0\}\leq 1<\infty$, so $\left(\left(g'(L_m)-g'(L)\right)\hatH\right)^2\leq \hatH^2$ with $\E\left[\hatH^2\right]<\infty$ by assumption.
So, by the dominated convergence theorem,
\begin{align*}
\lim\limits_{m\to\infty} \E\left[\left(g'(L_m)\hatH - g'(L)\hatH\right)^2\right] &= \lim\limits_{m\to\infty}\E[((\1\{L_m\geq 0\}-\1\{L\geq 0\})\hatH)^2]\\
&=\E\left[\lim\limits_{m\to\infty}((\1\{L_m\geq 0\}-\1\{L\geq 0\})\hatH)^2\right] = 0,
\end{align*}
where the last equality holds because $L_m\stackrel{a.s.}{\to}L$ as $m\to\infty$ according to Proposition <ref>.
Also, $\E\left[\left(g'(L)\hatH\right)^2\right]\leq \E\left[\hatH^2\right] < \infty$ where the finiteness holds by assumption.
Therefore (<ref>) converges to zero as $m\to\infty$ for the hockey-stick risk function.
In summary, we have shown that $\E\left[\left|\avgmj \left[\hatR_{m,j}^2-\hatR_{j}^2\right]\right|\right]\to 0$ as $m\to\infty$, which proves the $\cL^1$-convergence in (<ref>).
Secondly, because $\hatR_{j}$, $j=1,\ldots,m$ are identically distributed (so are $R_j$, $j=1,\ldots,m$), we have
\begin{align}
&\E\left[\left|\avgmj \left[\hatR_{j}^2-R_{j}^2\right]\right|\right] \leq \E\left[\left|\hatR_{n,1}^2-R_1^2\right|\right]\nonumber\\
\stackrel{\eqref{eq:CauthySchwarzIneq4}}{\leq}& \E\left[\left(\hatR_{n,1} - R_1\right)^2\right] + 2\left(\E\left[R_1^2\right]\right)^{1/2}\left(\E\left[\left(\hatR_{n,1} - R_1\right)^2\right]\right)^{1/2}. \label{eq:aux8}
\end{align}
We note that, by Jensen's inequality,
\begin{equation*}
\E\left[R_1^2\right] = \E\left[\left(\E\left[g'(L)\hatH|Y=Y_1\right]\right)^2\right] \leq \E\left[\left(g'(L)\hatH\right)^2\right] \stackrel{(*)}{<} \infty,
\end{equation*}
where $(*)$ holds by the respective assumptions for the smooth and hockey-stick risk functions.
Next, define $R_{n,ij}^* = g'(\Li)\hatH_{ij} -\E[g'(L)\hatH|Y=Y_j]$ so $\hatR_{n,j} - R_j =\avgni R_{n,ij}^*$.
Given $Y_j$, $R_{n,ij}^*$, $i=1,\ldots,n$, are conditionally independent and identically distributed with mean
\begin{align*}
\E\left[R_{n,ij}^*|Y_j\right] = \E\left[g'(\Li)\hatH_{ij}|Y_j\right] - \E\left[\E[g'(L)\hatH|Y=Y_j]|Y_j\right] = 0.
\end{align*}
In addition,
\begin{align*}
\E[(R_{n}^*)^2] = \E\left[\Var\left[g'(L)\hatH|Y\right]\right]=\E\left[\left(g'(L)\hatH\right)^2\right] - \E\left[\left(\E\left[g'(L)\hatH|Y\right]\right)^2\right]\leq \E\left[\left(g'(L)\hatH\right)^2\right]\stackrel{(*)}{<}\infty,
\end{align*}
where $(*)$ holds by the respective assumptions for the smooth and hockey-stick risk functions.
Then, using Lemma <ref> with $p=1$ (applied to $R_{n}^*$), we have
$\E\left[\left(\hatR_{n}-R\right)^2\right]=\frac{\E\left[(R_{n}^*)^2\right]}{n} = \cO(n^{-1}).$
Therefore, (<ref>) converges to zero at the rate $\cO(n^{-1}) + \cO(n^{-1/2})=\cO(n^{-1/2})$ as $n\to\infty$.
This proves the $\cL^1$ convergence in (<ref>).
Lastly, because $g'(\Lmi)\Lmi$, $i=1,\ldots,n$, are identically distributed (so are $g'(\Li)\Li$, $i=1,\ldots,n$), we have
\begin{align}
&\E\left[\left|\avgni \left[g'(\Lmi)\Lmi - g'(\Li)\Li\right]\right|\right] \leq \E\left[\left|g'(L_m)L_m - g'(L)L\right|\right]\nonumber\\
=&\E\left[\left|(g'(L_m)-g'(L))L_m - g'(L)(L-L_m)\right|\right]\nonumber\\
\leq& \E[|(g'(L_m)-g'(L))L_m |] + \E[|g'(L)(L-L_m)|] \stackrel{(*)}{=} \cO(m^{-1/2})\label{eq:aux9}
\end{align}
where $(*)$ holds because of the following:
* For smooth functions $g$ with bounded second derivative, i.e., $|g''(x)|\leq C_g<\infty$, (<ref>) equals
\begin{align*}
&\E[|g''(\Lambda_m)(L_m-L)(L_m-L+L) |] + \E[|g'(L)(L-L_m)|]\\
\leq& C_g\left(\E\left[(L_m-L)^2\right] + \E[|(L_m-L)L|]\right) + \E[|g'(L)(L-L_m)|]\\
\leq& C_g\left(\E\left[(L_m-L)^2\right] + (\E\left[(L_m-L)^2\right])^{1/2}(\E\left[L^2\right])^{1/2}\right) + (\E\left[(g'(L))^2\right])^{1/2}(\E\left[(L_m-L)^2\right])^{1/2}\\
\stackrel{(*)}{=}& C_g\left(\cO(m^{-1}) + \cO(m^{-1/2})\right) + \cO(m^{-1/2}) = \cO(m^{-1/2}),
\end{align*}
where $(*)$ holds because $\E\left[(L_m-L)^2\right]=\cO(m^{-1})$ by Theorem <ref> with $p=1$, and $\E\left[L^2\right]<\infty$ and $\E\left[(g'(L))^2\right]<\infty$ by assumption.
* For the hockey-stick function, $g'(x)=\1\{x\geq 0\}$, (<ref>) equals
\begin{align*}
&\E[|L_m\cdot(\1\{L_m\geq 0\}-\1\{L\geq 0\})|] + \E[|\1\{L \geq 0\}(L_m-L)|]\\
\leq & \E[|L_m\cdot(\1\{L_m\geq 0\}-\1\{L\geq 0\})|]+ (\E\left[(L_m-L)^2\right])^{1/2} = \cO(m^{-1}) + \cO(m^{-1/2}),
\end{align*}
where the last equality holds by (<ref>) in Lemma <ref> and Theorem <ref> with $p=1$.
In short, we have shown that (<ref>)$\to 0$ as $\min\{m,n\}\to\infty$, which proves the $\cL^1$-convergence in (<ref>).
The proof is complete.
If the conditions for Theorem <ref> hold, then $\widehat{\sigma}_{2,mn}^2\conprob\sigma_{2}^2$ as $\min\{m,n\}\to \infty$.
By Equations (<ref>), (<ref>) and the notations in (<ref>), we have
\begin{equation}\label{eq:diff_sig2hat}
\widehat{\sigma}_{2,mn}^2 - \sigma_{2}^2 = \left[\avgmj \hatR_{m,j}^2 - \E\left[R^2\right]\right] - \left[\left(\avgni g'(\Lmi)\Lmi\right)^2 - \left(\E\left[g'\left(L\right)L\right]\right)^2\right].
\end{equation}
We then consider each of the two differences above and show that both converge to zero in probability as $\min\{m,n\}\to \infty$.
For the first term in (<ref>), note that
\begin{align}
\avgmj \hatR_{m,j}^2 - \E\left[R^2\right] =& \avgmj \left[\hatR_{m,j}^2-\hatR_{j}^2\right] +\avgmj \left[\hatR_{j}^2-R_{j}^2\right] +\avgmj R_{j}^2- \E\left[R^2\right].\label{eq:diff6}
\end{align}
By (<ref>) and (<ref>) in Lemma <ref>, the first two terms on the RHS of (<ref>) converge, in $\cL^1$ and hence in probability, to zero as $\min\{m,n\}\to\infty$.
Also, because $R_{j}^2$, $j=1,\ldots,m$ are i.i.d. samples of $R^2$, the last term converges to zero in probability as $m\to\infty$ by the weak law of large numbers.
For the second term in (<ref>), note that
\begin{align}
&\avgni g'(\Lmi)\Lmi - \E\left[g'\left(L\right)L\right]\nonumber\\
=&\avgni \left[g'(\Lmi)\Lmi - g'(\Li)\Li\right] + \left[\avgni g'(\Li)\Li- \E\left[g'\left(L\right)L\right]\right].\label{eq:diff8}
\end{align}
The first term on the RHS of (<ref>) converges in probability to zero as $m\to\infty$ by the $\cL^1$ convergence (<ref>) in Lemma <ref>.
The second term on the RHS of (<ref>) converges in probability to zero as $n\to\infty$ by the weak law of large numbers because $g'(\Li)\Li$, $i=1,\ldots,n$ are i.i.d. samples with the common expectation $\E\left[g'\left(L\right)L\right]$.
Therefore $\avgni g'(\Lmi)\Lmi \conprob \E\left[g'\left(L\right)L\right]$ and so $\left(\avgni g'(\Lmi)\Lmi\right)^2 \conprob \left(\E\left[g'\left(L\right)L\right]\right)^2$ by the continuous mapping theorem.
In summary, both terms in (<ref>) converge to 0 in probability, as desired.
The proof is complete.
§ PROOFS FOR RESULTS IN SECTION <REF>
Consider the decomposition (<ref>); in this appendix we show that $\widetilde{\sigma}_{mn}^{-1}(\cU_{\epsilon_m,mn}-\rho)\condist \cN(0,1)$ and that $r_{\epsilon_m,mn}^a$, $r_{\epsilon_m,mn}^b$ and $r_{\epsilon_m,mn}^c$ all converge to zero sufficiently fast, in Lemmas <ref>, <ref>, <ref> and <ref>, respectively.
We omit the lengthy discussion of the technical assumptions needed to ensure that the remainder term in $r_{\epsilon_m,mn}^d$ is negligible and focus on analyzing the other terms.
Then, applying Slutsky's theorem to the decomposition (<ref>), we have $\widetilde{\sigma}_{mn}^{-1}(\rho_{mn}-\rho)\condist \cN(0,1)$, and so the proof of Theorem <ref> is complete.
In addition, in Lemmas <ref> and <ref>, we show that $\widehat{\widetilde{\sigma}}_{1,mn}^2$ and $\widehat{\widetilde{\sigma}}_{2,mn}^2$ converge to $\widetilde{\sigma}_1^2$ and $\widetilde{\sigma}_2^2$, respectively.
Then, applying the continuous mapping theorem to $\widehat{\widetilde{\sigma}}_{mn}^2 = \frac{\widehat{\widetilde{\sigma}}_{1,mn}^2}{n} + \frac{\widehat{\widetilde{\sigma}}_{2,mn}^2}{m}$, the proof of Theorem <ref> is complete.
Before proceeding, we recall the function $\gepsm(x) = \int_{-\infty}^{x/\epsilon_m} \phi(u) \d u$ where $\phi(u) = \frac{1}{4\pi} (1-\cos(u))\cdot \1\{|u| \leq 2\pi\}$, as defined in (<ref>).
Then, by construction, $\gepsm'(x) = \frac{1}{4\pi \epsilon_m} \left(1-\cos\left(x/\epsilon_m\right)\right)\cdot \1\left\lbrace\left|x\right| \leq 2\pi\epsilon_m\right\rbrace$, $\gepsm''(x)= \frac{1}{4\pi \epsilon_m^2} \sin \left( x/\epsilon_m\right)\cdot \1\left\lbrace\left|x\right| \leq 2\pi\epsilon_m\right\rbrace$,
\begin{align}
\int_{-\infty}^{\infty} \phi(u) \d u = \int_{-2\pi}^{2\pi} \frac{1}{4\pi} (1-\cos(u))\d u =1,\,\int_{-\infty}^{\infty} u \cdot \phi(u) \d u =0,\mbox{ and }\label{eq:aux12}\\
\int_{-\infty}^{\infty} |u|^{r_1} \cdot [\phi(u)]^{r_2} \d u < \infty, \,\, r_1=0,1,2,3,\ r_2=1,2.\label{eq:aux13}
\end{align}
Suppose the conditions for Theorem <ref> hold. Then
$$\frac{\cU_{\epsilon_m,mn}-\rho}{\widetilde{\sigma}_{mn}}\condist \cN(0,1), \mbox{ as } \min\{m,n\}\to\infty,$$
where $\widetilde{\sigma}_{mn}^2 = \frac{\widetilde{\sigma}_1^2}{n} + \frac{\widetilde{\sigma}_2^2}{m}$ and $\widetilde{\sigma}_{1}^2$ and $\widetilde{\sigma}_2^2$ are defined as (<ref>) and (<ref>), respectively.
Let $\cV_{\epsilon_m,ij} = g'_{\epsilon_m}(\Li)(\hatHij-\Li)$, then $\cU_{\epsilon_m,mn}=\frac{1}{mn}\sum_{i=1}^n\sum_{j=1}^m [g(\Li)+\cV_{\epsilon_m,ij}]$.
Note that $\cV_{\epsilon_m,ij}$ are identically distributed for all $i=1,\ldots,n$ and $j=1,\ldots,m$ so we can write a generic $\cV_{\epsilon_m, ij}$ simply as $\cV_{\epsilon_m}$ for notational convenience.
For any $\epsilon_m>0$, we have that
\begin{align}\label{Lambda}
\E[\cV_{\epsilon_m}|X] = g'_{\epsilon_m}(L(X))\left(\E[\hatH(X,Y)|X]-L(X)\right)= g'_{\epsilon_m}(L(X))\left(L(X)-L(X)\right) = 0,
\end{align}
which also means that $\E[\cV_{\epsilon_m}] = \E[\E[\cV_{\epsilon_m}|X]]=0$.
Moreover, we see that $\E[\cU_{\epsilon_m,mn}]=\E[g(L(X))] +\E[\E[\cV_{\epsilon_m}|X]] =\E[g(L(X))] + 0= \rho$, i.e., $\cU_{\epsilon_m,mn}$ is an unbiased estimator of $\rho$.
Consider the following random variables (Hoeffding decomposition):
\begin{equation*}\label{eq:Hoeffding}
\widetilde{\cU}_{\epsilon_m,mn} =\widetilde{\cU}_{\epsilon_m,n} + \widetilde{\cU}_{\epsilon_m,m}:=\sum_{i=1}^n \E[\cU_{\epsilon_m,mn}-\rho|X_i] + \sum_{j=1}^m \E[\cU_{\epsilon_m,mn}-\rho|Y_j] .
\end{equation*}
We then consider the following decomposition:
\begin{equation}\label{eq:decomposition1}
\frac{\cU_{\epsilon_m,mn}-\rho}{\widetilde{\sigma}_{mn}} = \frac{\widetilde{\cU}_{\epsilon_m,mn}}{\widetilde{\sigma}_{mn}} + \frac{\cU_{\epsilon_m,mn}-\rho-\widetilde{\cU}_{\epsilon_m,mn}}{\widetilde{\sigma}_{mn}}.
\end{equation}
To establish $\widetilde{\sigma}_{mn}^{-1}(\cU_{\epsilon_m,mn}-\rho)\condist \cN(0,1)$ it suffices to show that $\widetilde{\sigma}_{mn}^{-1}\widetilde{\cU}_{\epsilon_m,mn}\condist \cN(0,1)$ and that $\widetilde{\sigma}_{mn}^{-1}(\cU_{\epsilon_m,mn}-\rho-\widetilde{\cU}_{\epsilon_m,mn})\stackrel{d}{\to} 0$.
To show $\widetilde{\sigma}_{mn}^{-1}\widetilde{\cU}_{\epsilon_m,mn}\condist \cN(0,1)$, we consider the convergences of $\widetilde{\cU}_{\epsilon_m,n}$ and $\widetilde{\cU}_{\epsilon_m,m}$ separately.
Firstly, for any $i=1,\ldots,n$, because $X_k$ is independent of $X_i$ for any $k\neq i$, we have
\begin{align}
&\E[\cU_{\epsilon_m,mn}-\rho|X_i] = \E\left[\left.\frac{1}{mn}\sum_{i=1}^n\sum_{j=1}^m g(\Li)\right|X_i\right] + \E\left[\left.\frac{1}{mn}\sum_{i=1}^n\sum_{j=1}^m \cV_{\epsilon_m,ij}\right|X_i\right] - \rho\nonumber\\
\stackrel{\eqref{Lambda}}{=}& \frac{1}{n}\left(g(\Li)+\sum_{\substack{k=1 \\ k\neq i}}^n\E\left[\left. g(L_k)\right|X_i\right]\right) + 0 - \rho= \frac{1}{n}g(\Li) + \frac{n-1}{n}\rho - \rho \nonumber\\
=& \frac{1}{n} g(\Li) - \frac{1}{n}\rho.\label{eq:aux14}
\end{align}
Therefore $\widetilde{\cU}_{\epsilon_m,n}=\sum_{i=1}^n \E[\cU_{\epsilon_m,mn}-\rho|X_i]=\avgni g(\Li)-\rho$.
Because $g(\Li)$, $i=1,\ldots,n$ are i.i.d. random variables with common expectation $\E[g(L)]=\rho$, the classical CLT gives
\begin{align}\label{CLT Un}
\sqrt{n}\widetilde{\cU}_{\epsilon_m,n}\stackrel{d}{\to} \cN(0,\widetilde{\sigma}_{1}^2) \mbox{ as } n\to\infty,
\end{align}
where $\widetilde{\sigma}_{1}^2 = \Var[g(L)] = \E[\1\{L\geq 0 \}] - (\E[\1\{L\geq 0\}])^2$.
Secondly, for any $j=1,\ldots,m$, because all $X_i$, $i=1,\ldots,n$ are independent of $Y_j$ and $Y_k$ is independent of $Y_j$ for any $k\neq j$, we have
\begin{align}
&\E[\cU_{\epsilon_m,mn}-\rho|Y_j] = \E\left[\left.\frac{1}{mn}\sum_{i=1}^n\sum_{j=1}^m g(\Li)\right|Y_j\right] + \E\left[\left.\frac{1}{mn}\sum_{i=1}^n\sum_{j=1}^m \cV_{\epsilon_m,ij}\right|Y_j\right] - \rho\nonumber\\
=& \E\left[g(L)\right] + \frac{1}{m}\left(\E\left[\cV_{\epsilon_m, 1j}|Y_j\right] + \sum_{\substack{k=1 \\ k\neq j}}^m \E\left[\left.\cV_{\epsilon_m, 1k}\right|Y_j\right]\right) - \rho\stackrel{\eqref{Lambda}}{=} \rho + \frac{1}{m}\left(\E\left[\cV_{\epsilon_m, 1j}|Y_j\right] + 0\right) - \rho \nonumber\\
=& \frac{1}{m}\E\left[\cV_{\epsilon_m, 1j}|Y_j\right] =: \frac{1}{m}\widetilde{Y}_{\epsilon_m,j}.\label{eq:aux15}
\end{align}
Therefore $\widetilde{\cU}_{\epsilon_m,m}=\sum_{j=1}^m \E[\cU_{\epsilon_m,mn}-\rho|Y_j]=\avgmj \E\left[\cV_{\epsilon_m, 1j}|Y_j\right] =\avgmj \widetilde{Y}_{\epsilon_m,j}$.
Assumption <ref> implies that $\psi(x,\epsilon_m u)=\psi_0(x) +\epsilon_m u\cdot \frac{\partial}{\partial \ell}\psi(x,\bar{u})$
where $\psi_0(x)=\psi(x,0)$ and $\bar{u}$ is between 0 and $\epsilon_m u$.
Denote a generic $\widetilde{Y}_{\epsilon_m} = \widetilde{Y}_{\epsilon_m,j}$ for notational convenience.
Then it follows that
\begin{align}
\widetilde{Y}_{\epsilon_m}=&\int \int\phi(u)(\hatH(x,Y)-\epsilon_m u)\psi(x,\epsilon_m u) \d x\d u\nonumber\\
=&\int \int\phi(u)(\hatH(x,Y)-\epsilon_m u)\left[\psi_0(x) +\epsilon_m u \frac{\partial}{\partial \ell}\psi(x,\bar{u}) \right] \d x\d u\nonumber\\
=&\int\phi(u)\d u \int\hatH(x,Y)\psi_0(x) \d x -\epsilon_m\int u\cdot\phi(u) \d u \int\psi_0(x) \d x \label{eq:doubleintegral1}\\
&+ \epsilon_m\int \int \phi(u)u\left(\hatH(x,Y)-\epsilon_m u\right) \frac{\partial}{\partial \ell}\psi(x,\bar{u}) \d x\d u.\label{eq:doubleintegral}
\end{align}
By (<ref>), the first term in (<ref>) equals $\int\hatH(x,Y)\psi_0(x) \d x$ and
the second term equals $0$.
Moreover, recall $|\frac{\partial}{\partial \ell} \psi(x,\ell)|\leq \psi_1(x)$ in Assumption <ref> <ref>, so
\begin{align*}
\eqref{eq:doubleintegral}\leq&\epsilon_m\int \int \phi(u)|u|\left(|\hatH(x,Y)|+\epsilon_m |u|\right) \psi_1(x) \d x\d u\\
\leq &\epsilon_m\int |u| \cdot \phi(u) \d u \int |\hatH(x,Y)| \psi_1(x) \d x + \epsilon_m^2\int u^2\cdot\phi(u) \d u \int \psi_1(x) \d x \\
\stackrel{(*)}{=} &\cO(\epsilon_m) + \cO(\epsilon_m^2) = \cO(\epsilon_m),
\end{align*}
where $(*)$ holds because of (<ref>) and Assumption <ref> <ref>.
Therefore,
\begin{align}
\E\left[\widetilde{Y}_{\epsilon_m}^2\right] =& \E\left[\left(\int\hatH(x,Y)\psi_0(x)\d x+\cO(\epsilon_m)\right)^2\right]\nonumber\\
=& \E\left[\left(\int\hatH(x,Y)\psi_0(x)\d x\right)^2\right]+\cO(\epsilon_m)=\widetilde{\sigma}_2^2+\cO(\epsilon_m).\label{ind CLT Y}
\end{align}
Since $\widetilde{Y}_{\epsilon_m,j}$, $j=1,\ldots,m$ are i.i.d. samples, the characteristic function for $\sqrt{m}\widetilde{\cU}_{\epsilon_m,m}$ is given by
\begin{align*}
\varphi_{\epsilon_m,m}(t) = \E\left[\exp\left(it \sum_{j=1}^m\frac{\widetilde{Y}_{\epsilon_m,j}}{\sqrt{m}}\right)\right] = \left(\E\left[\exp\left(it \frac{\widetilde{Y}_{\epsilon_m}}{\sqrt{m}}\right)\right]\right)^m.
\end{align*}
Using Taylor's theorem, $\E\left[\widetilde{Y}_{\epsilon_m}\right]=0$, and $\E\left[\widetilde{Y}_{\epsilon_m}^2\right]=\widetilde{\sigma}_2^2+\cO(\epsilon_m)$, we have
\begin{align*}%\label{charac1}
\E\left[\exp\left(it \frac{\widetilde{Y}_{\epsilon_m}}{\sqrt{m}}\right)\right]=&1-\frac{t^2}{2m}\E\left[\widetilde{Y}_{\epsilon_m}^2\right]+o\left(\frac{t^2}{m}\right)\\
=&1-\frac{t^2}{2m}\widetilde{\sigma}_2^2+\frac{t^2}{2m}\cO(\epsilon_m)+o\left(\frac{t^2}{m}\right),\mbox{ as } \frac{t}{\sqrt{m}}\to 0.
\end{align*}
So the characteristic function $\varphi_{\epsilon_m,m}(t) \to \exp\left(-\frac{t^2}{2}\widetilde{\sigma}_2^2\right)$ as $m\to\infty$ and $\epsilon_m\to 0$.
By Lévy's continuity theorem, this means that
\begin{align}\label{CLT Um}
\sqrt{m}\widetilde{\cU}_{\epsilon_m,m} \stackrel{d}{\to} \cN(0,\widetilde{\sigma}_2^2).
\end{align}
Recall that $\widetilde{\cU}_{\epsilon_m,mn} = \widetilde{\cU}_{\epsilon_m,n} + \widetilde{\cU}_{\epsilon_m,m}$, where $\widetilde{\cU}_{\epsilon_m,n}$ and $\widetilde{\cU}_{\epsilon_m,m}$ are independent.
It then follows from (<ref>) and (<ref>) that
\begin{equation}\label{eq:result1}
\frac{\widetilde{\cU}_{\epsilon_m,mn}}{\widetilde{\sigma}_{mn}} \stackrel{d}{\to} \cN(0,1),\quad \mbox{ as }\min\{m,n\}\to\infty,
\end{equation}
where $\widetilde{\sigma}_{mn}^2 = \frac{\widetilde{\sigma}_{1}^2}{n} + \frac{\widetilde{\sigma}_2^2}{m}$, $\widetilde{\sigma}_{1}^2 = \Var[g(L)]$, and $\widetilde{\sigma}_2^2 = \E\left[\left(\int\hatH(x,Y)\psi_0(x) \d x\right)^2\right]$.
Next, we show $
\E\left[\widetilde{\sigma}_{mn}^{-2}(\cU_{\epsilon_m,mn}-\rho-\widetilde{\cU}_{\epsilon_m,mn})^2\right]\to 0,\ \text{as}\ \min\{m,n\}\to\infty$,
which implies $\widetilde{\sigma}_{mn}^{-1}(\cU_{\epsilon_m,mn}-\rho-\widetilde{\cU}_{\epsilon_m,mn})\stackrel{d}{\to} 0,\ \text{as}\ \min\{m,n\}\to\infty$.
Note that
\begin{align}\label{CLT indi dif}
\E\left[(\cU_{\epsilon_m,mn}-\rho-\widetilde{\cU}_{\epsilon_m,mn})^2\right]=\E\left[(\cU_{\epsilon_m,mn}-\rho)^2\right]+\E\left[\widetilde{\cU}_{\epsilon_m,mn}^2\right] -2\E\left[\left(\cU_{\epsilon_m,mn}-\rho\right)\widetilde{\cU}_{\epsilon_m,mn}\right].
\end{align}
We investigate the three terms on the RHS of (<ref>) as follows:
* For $ \E\left[(\cU_{\epsilon_m,mn}-\rho)^2\right]$, it follows from the definition of $\cU_{\epsilon_m,mn}$ that
\begin{align*}
&\E\left[(\cU_{\epsilon_m,mn}-\rho)^2\right]=\E\left[\left(\frac{1}{mn}\sum_{i=1}^n \sum_{j=1}^m \left[g(\Li) + \cV_{\epsilon_m,ij}\right]-\rho\right)^2\right]\nonumber\\
=&\E\left[\left(\frac{1}{mn}\sum_{i=1}^n \sum_{j=1}^m \cV_{\epsilon_m,ij}\right)^2\right]+\E\left[\left(\frac{1}{n}\sum_{i=1}^n g(\Li)-\rho\right)^2\right]+2\E\left[\frac{1}{mn}\sum_{i=1}^n \sum_{j=1}^m \cV_{\epsilon_m,ij}\left(\frac{1}{n}\sum_{i=1}^n g(\Li)-\rho\right)\right].
\end{align*}
We define $\cG=\sigma\left(X_1,...,X_n\right)$ and analyze the three terms on the RHS one by one.
Firstly, it follows that
\begin{align*}
&\E\left[\left(\frac{1}{mn}\sum_{i=1}^n \sum_{j=1}^m \cV_{\epsilon_m,ij}\right)^2\right]=\E\left[\E\left[\left.\left(\frac{1}{mn}\sum_{i=1}^n \sum_{j=1}^m \cV_{\epsilon_m,ij}\right)^2\right|\cG \right]\right]\\
\stackrel{(*)}{=}&\E\left[\frac{1}{m}\E\left[\left.\left(\frac{1}{n}\sum_{i=1}^n \cV_{\epsilon_m,i1}\right)^2\right|\cG \right]\right]=\frac{1}{m}\E\left[\left(\frac{1}{n}\sum_{i=1}^n \cV_{\epsilon_m,i1}\right)^2\right]\\
=&\frac{1}{mn^2}\left( \sum_{i=1}^n\E\left[\cV_{\epsilon_m,i1}^2\right]+\sum_{i=1}^n\sum_{\substack{k=1 \\ k\neq i}}^n \E\left[\cV_{\epsilon_m,i1}\cdot\cV_{\epsilon_m,k1}\right]\right)=\frac{1}{mn} \E\left[\cV_{\epsilon_m,11}^2\right]+\frac{n-1}{mn} \E\left[\cV_{\epsilon_m,11}\cdot\cV_{\epsilon_m,21}\right]\\
\stackrel{(**)}{=}&\frac{1}{mn} \E\left[\cV_{\epsilon_m,11}^2\right]+\frac{n-1}{mn} \E\left[\E\left[\left.\cV_{\epsilon_m,11}\right|Y_1\right]\E\left[\left.\cV_{\epsilon_m,21}\right|Y_1\right]\right]=\frac{1}{mn} \E\left[\cV_{\epsilon_m}^2\right]+\frac{n-1}{mn} \E\left[\left(\E\left[\left.\cV_{\epsilon_m}\right|Y\right]\right)^2\right],
\end{align*}
where $(*)$ holds by Lemma <ref> because, given $\cG$, $\frac{1}{n}\sum_{i=1}^n \cV_{\epsilon_m,ij}$ ($j=1,\ldots,m$) are i.i.d. samples with mean 0, and $(**)$ holds because $\cV_{\epsilon_m,11}$ and $\cV_{\epsilon_m,21}$ are conditionally independent given $Y_1$.
# Survival functions versus conditional aggregation-based survival functions
on discrete space
Basarik Stanislav, Borzová Jana, Halčinová Lenka
Institute of Mathematics, P. J. Šafárik University in Košice, Jesenná 5, 040 01 Košice, Slovakia
###### Abstract
In this paper we deal with conditional aggregation-based survival functions,
recently introduced by Boczek et al. (2020). The concept is worth studying
because of its possible applications in real-life situations as well as in
mathematical theory. The aim of this paper is to compare this new notion with
the standard survival function. We state sufficient and necessary conditions
under which the generalized and the standard survival function coincide. The
main result is a characterization of the family of conditional aggregation
operators (on a discrete space) for which these functions coincide.
###### keywords:
aggregation, survival function, nonadditive measure, visualization, size
###### MSC:
[2010] 28A12
Journal: Information Sciences. Supported by the grants APVV-16-0337, VEGA 1/0657/22, the bilateral Slovak-Poland grant scheme No. SK-PL-18-0032, and the grant scheme VVGS-PF-2021-1782.
## 1 Introduction
We continue to study the novel survival functions introduced in [1] as a
generalization of the size-based level measure developed for use in
nonadditive analysis in [3, 12, 13]. The concept appeared initially in
time-frequency analysis [8]. As the main result, in Theorem 4.7 we show that
the generalized survival function equals the original notion (for any
monotone measure and any input vector) only in a very particular case. The
concept of the novel survival function is useful in many real-life situations
and pure theory as well. In fact, the standard survival function (also known
in the literature as the standard level measure [13], strict level measure [5]
or decumulative distribution function [10]) is the crucial ingredient of many
definitions in mathematical analysis. Many well-known integrals are based on
the survival function, e.g. the Choquet integral, the Sugeno integral, the
Shilkret integral, the seminormed integral [5], universal integrals [14], etc.
Also, the convergence of a sequence of functions in measure is based on the
same concept. Hence a reasonable generalization of the survival function leads
to the generalizations of all mentioned concepts. For more on applications of
the generalized survival function, see [1, 8].
Due to the number of factors needed in the definition of the generalized
survival function, it is quite difficult to understand this concept. In order
to understand it more deeply, in the following we shall focus on the graphical
visualization of inputs, see [4]. Moreover, the graphical representation will
help us to formulate basic results of this paper. In the whole paper, we
restrict ourselves to discrete settings. We consider finite basic set
$\displaystyle[n]:=\\{1,2,\dots,n\\},\,\,n\geq 1$
and a monotone measure $\displaystyle\mu$ on $\displaystyle 2^{[n]}$. If
$\displaystyle\mathbf{x}=(x_{1},\dots,x_{n})$ is a nonnegative real-valued
function on $\displaystyle[n]$, i.e., a vector, then the survival function (or
standard survival function) of the vector $\displaystyle\mathbf{x}$ with
respect to $\displaystyle\mu$, see [1, 9], is defined by
$\displaystyle\mu(\\{\mathbf{x}>\alpha\\}):=\mu\left(\\{i\in[n]:x_{i}>\alpha\\}\right),\quad\alpha\in[0,\infty).$
For a thorough exposition see Preliminaries. To avoid a too abstract setting
in the following visual representations, let us consider the input vector
$\displaystyle\mathbf{x}=(2,3,4)$ and the monotone measure $\displaystyle\mu$
on $\displaystyle 2^{[3]}$ defined in Table 1.
$\displaystyle E$ | $\displaystyle\\{1,2,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{1\\}$ | $\displaystyle\emptyset$
---|---|---|---|---|---|---|---|---
$\displaystyle E^{c}$ | $\displaystyle\emptyset$ | $\displaystyle\\{1\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,2,3\\}$
$\displaystyle\mu(E^{c})$ | $\displaystyle 0$ | $\displaystyle 0.25$ | $\displaystyle 0.25$ | $\displaystyle 0.4$ | $\displaystyle 0.75$ | $\displaystyle 0.75$ | $\displaystyle 0.75$ | $\displaystyle 1$
$\displaystyle\max_{i\in E}x_{i}$ | $\displaystyle 4$ | $\displaystyle 4$ | $\displaystyle 4$ | $\displaystyle 3$ | $\displaystyle 4$ | $\displaystyle 3$ | $\displaystyle 2$ | $\displaystyle 0$
$\displaystyle\sum_{i\in E}x_{i}$ | $\displaystyle 9$ | $\displaystyle 7$ | $\displaystyle 6$ | $\displaystyle 5$ | $\displaystyle 4$ | $\displaystyle 3$ | $\displaystyle 2$ | $\displaystyle 0$
Table 1: Sample measure $\displaystyle\mu$ and two conditional aggregation
operators for vector $\displaystyle\mathbf{x}=(2,3,4)$
#### The survival function's visual representation
We begin with a nonstandard representation of the standard survival function,
as a stepping stone to its generalization. First, let us introduce the
following equivalent representation of the survival function:
$\displaystyle\begin{split}\mu(\\{\mathbf{x}>\alpha\\})=\mu([n]\setminus\\{i\in[n]:x_{i}\leq\alpha\\})&=\min\big\\{\mu(E^{c}):(\forall i\in E)\,\,x_{i}\leq\alpha,\,E\in 2^{[n]}\big\\}\\\ &=\min\big\\{\mu(E^{c}):\max_{i\in E}x_{i}\leq\alpha,\,E\in 2^{[n]}\big\\},\end{split}$ (1)
where $\displaystyle E^{c}=[n]\setminus E$, see motivation problem 1 in [1].
Let us start the visualization with inputs from Table 1.
Figure 1: The survival function visualization for
$\displaystyle\mathbf{x}=(2,3,4)$ and $\displaystyle\mu$ given in Table 1.
On the lower axis of the left image of Figure 1 we depict, in decreasing
order, the maximal value of $\displaystyle\mathbf{x}$ on $\displaystyle E$ for
each set $\displaystyle E\in 2^{[3]}$, and on the upper axis the corresponding
values of the monotone measure of the complement, i.e. $\displaystyle\mu(E^{c})$.
In this picture a number on the lower axis is linked with a number on the
upper one via a straight line whenever they correspond to the same set, i.e.,
$\displaystyle a$ is linked with $\displaystyle b$ if there is
$\displaystyle E\in 2^{[3]}$ such that
$\displaystyle a=\max\limits_{i\in E}x_{i}\hskip 14.22636pt\text{ and }\hskip 14.22636ptb=\mu(E^{c}).$
Finally, the value $\displaystyle\mu(\\{\mathbf{x}>\alpha\\})$ at some
$\displaystyle\alpha\in[0,\infty)$ can be read from the left image of Figure 1
as the minimal value on the upper axis that is linked to a value on the lower
axis not exceeding $\displaystyle\alpha$ (i.e., lying to the right of
$\displaystyle\alpha$). Thus, in the illustrative example in the left image of
Figure 1, the value of the survival function at $\displaystyle 2{.}5$ is
$\displaystyle 0{.}75$. Indeed, there are just two values to the right of
$\displaystyle 2{.}5$, namely the numbers 2 and 0; these are linked to
$\displaystyle 0{.}75$ and 1, respectively, and $\displaystyle 0{.}75$ is the
smaller one. The graph of the survival function is shown in the right image of
Figure 1.
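The reading procedure just described is easy to machine-check. The following minimal Python sketch (our own illustration, not part of [1]; sets are encoded as frozensets and the measure of Table 1 as a dict keyed by $\displaystyle E$ itself) computes the survival function of $\displaystyle\mathbf{x}=(2,3,4)$ both directly from the definition and via the min-representation (1), confirming the value $\displaystyle 0{.}75$ at $\displaystyle\alpha=2{.}5$:

```python
from itertools import chain, combinations

def subsets(universe):
    """All subsets of `universe`, as frozensets."""
    s = list(universe)
    return [frozenset(c) for c in
            chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

# Monotone measure of Table 1 (the table lists mu(E^c); here mu is keyed by E itself).
N = frozenset({1, 2, 3})
mu = {frozenset(): 0, frozenset({1}): 0.25, frozenset({2}): 0.25,
      frozenset({3}): 0.4, frozenset({1, 2}): 0.75, frozenset({1, 3}): 0.75,
      frozenset({2, 3}): 0.75, N: 1}
x = {1: 2, 2: 3, 3: 4}

def survival_direct(x, mu, universe, alpha):
    """mu({x > alpha}) straight from the definition."""
    return mu[frozenset(i for i in universe if x[i] > alpha)]

def survival_min_form(x, mu, universe, alpha):
    """The equivalent min-representation (1)."""
    return min(mu[universe - E] for E in subsets(universe)
               if max((x[i] for i in E), default=0) <= alpha)

assert survival_direct(x, mu, N, 2.5) == survival_min_form(x, mu, N, 2.5) == 0.75
```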
#### The generalized survival function's visual representation
In the modification of the survival function the previously described
computational procedure stays the same; however, we allow any conditional
aggregation operator, not just the maximum operator. A standard example of
conditional aggregation is the sum of the components of
$\displaystyle\mathbf{x}$, see the last line in Table 1 and the corresponding
visualization in Figure 2. Applying the described computational procedure we
obtain the sum-based survival function of the vector $\displaystyle\mathbf{x}$,
i.e., the generalized survival function of $\displaystyle\mathbf{x}$ studied in
[1, 3, 12, 13]. The formula linked to this procedure is the following:
$\displaystyle\displaystyle\mu_{\mathscr{A}^{\mathrm{sum}}}(\mathbf{x},\alpha)=\min\left\\{\mu(E^{c}):\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|E)\leq\alpha,\,E\in\mathscr{E}\right\\}$
with $\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|E)=\sum\limits_{i\in
E}x_{i}$ and $\displaystyle\\{\emptyset\\}\subseteq\,\mathscr{E}\subseteq
2^{[n]}$ (in the illustrative example $\displaystyle\mathscr{E}=2^{[3]}$).
The corresponding graph is shown in Figure 2. In the discrete setting, the
computation of the generalized survival function studied in [1, 3, 12, 13] can
always be represented via diagrams similar to those in Figures 1 and 2.
Figure 2: Generalized survival function visualization for
$\displaystyle\mathbf{x}=(2,3,4)$, $\displaystyle\mu$ given in Table 1 and
$\displaystyle\mathsf{A}=\mathsf{A}^{\mathrm{sum}}$
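The computational procedure above translates directly into code. A short sketch (continuing the previous one, so `subsets`, `mu`, `N` and `x` are assumed to be in scope; the function names are ours) implements the generalized survival function for an arbitrary conditional aggregation operator and collection $\displaystyle\mathscr{E}$, and evaluates the sum-based variant drawn in Figure 2:

```python
def gen_survival(x, mu, universe, agg, alpha, collection=None):
    """min{ mu(E^c) : A(x|E) <= alpha, E in collection }; collection defaults to 2^[n]."""
    if collection is None:
        collection = subsets(universe)
    return min(mu[universe - E] for E in collection if agg(x, E) <= alpha)

def A_sum(x, E):
    """A^sum(x|E): the sum of the components of x over E (0 on the empty set)."""
    return sum(x[i] for i in E)

# Sum-based survival function of x = (2, 3, 4) under mu from Table 1:
assert gen_survival(x, mu, N, A_sum, 0) == 1      # only E = {} qualifies
assert gen_survival(x, mu, N, A_sum, 2) == 0.75   # E = {1} enters; mu({2,3}) = 0.75
assert gen_survival(x, mu, N, A_sum, 5) == 0.4    # E = {1,2} enters; mu({3}) = 0.4
```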
Besides providing a better understanding of survival functions, the visual
representation may help us to answer the problem of their indistinguishability.
With the introduction of the novel survival function a natural question arises:
when does the generalized survival function coincide with the survival
function? The motivation for answering this question is not only to know the
relationship between the mentioned concepts for given inputs; it will also
help us to compare the corresponding integrals based on them, see
[1, Definition 5.1, Definition 5.4]. In the literature, several families of
conditional aggregation operators are known, together with collections
$\displaystyle\mathscr{E}$, for which the generalized survival function equals
the survival function. We list them in the following:
* 1.
(cf. [13, Corollary 4.15])
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\rm{size}}$ with the size
$\displaystyle\mathsf{s}$ being the weighted sum, namely
$\displaystyle{\mathsf{s}}_{{\\#},p}(\mathbf{x})(E)=\left(\frac{1}{{\\#}(E)}\cdot\sum\limits_{i\in E}x_{i}^{p}\right)^{\frac{1}{p}}$ for $\displaystyle E\neq\emptyset$,
$\displaystyle{\mathsf{s}}_{{\\#},p}(\mathbf{x})(\emptyset)=0$ and
$\displaystyle p>0$; with $\displaystyle\mathscr{D}$ containing all singletons of
$\displaystyle[n]$ and $\displaystyle\mathscr{E}=2^{[n]}$;
* 2.
(cf. [1, Example 4.2] or [13, Section 5])
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\rm{max}}$ with
$\displaystyle\mathscr{E}=2^{[n]}$;
* 3.
(cf. [1, Proposition 4.6])
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\mu-\mathrm{ess}}$ with
$\displaystyle\mathscr{E}=2^{[n]}$.
Although the first two items appear to be different, under the above
conditions they are in fact equal:
$\displaystyle{\mathscr{A}}^{\rm{size}}={\mathscr{A}}^{\rm{max}}$. The settings
of the above-mentioned examples lead to the survival function regardless of the
choice of the monotone measure $\displaystyle\mu$. However, the identity between
the generalized survival function and the survival function may also occur for
other families of conditional aggregation operators (FCA for short), but only
with specific monotone measures. E.g. $\displaystyle{\mathscr{A}}^{\mathrm{sum}}$
with the weakest monotone measure $\displaystyle\mu_{*}\colon
2^{[n]}\to[0,\infty)$, given by
$\displaystyle\mu_{*}(E)=\begin{cases}\mu([n]),&E=[n],\\\ 0,&\textrm{otherwise},\end{cases}$
reduces to the survival function for any input vector
$\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$ and
$\displaystyle\mathscr{E}=2^{[n]}$. In this paper we shall treat the following
problems:
Problem 1: Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu$ be a monotone measure on $\displaystyle 2^{[n]}$, and
$\displaystyle{\mathscr{A}}$ be an FCA. What are sufficient and necessary
conditions on $\displaystyle\mathbf{x}$, $\displaystyle\mu$ and
$\displaystyle{\mathscr{A}}$ for
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
to hold?
Problem 2: Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$ and
$\displaystyle{\mathscr{A}}$ be an FCA. What are sufficient and necessary
conditions on $\displaystyle\mathbf{x}$ and $\displaystyle{\mathscr{A}}$ for
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
to hold for any monotone measure $\displaystyle\mu$?
Problem 3: Let $\displaystyle{\mathscr{A}}$ be an FCA. What are sufficient and
necessary conditions on $\displaystyle{\mathscr{A}}$ for
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
to hold for any monotone measure $\displaystyle\mu$ and any
$\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$?
The paper is organized as follows. We continue with a preliminary section
containing the needed definitions and notations. In Section 3 we solve
Problem 1, see e.g. Corollary 3.7, Corollary 3.11, Remark 3.12, Proposition
3.15 and Theorem 3.17. In Section 4 we provide a quite surprising result,
Theorem 4.7, which characterizes the families of conditional aggregation
operators (in the discrete setting) for which the generalized survival function
coincides with the standard survival function; thus we answer Problem 3. In
Section 4 we also treat Problem 2, see Theorem 4.2 and Theorem 4.6. Many of
our results are supported by appropriate examples.
## 2 Background and preliminaries
In order to be as self-contained as possible, we recall in this section the
necessary definitions and all basic notations. Throughout the paper we
restrict ourselves to the discrete setting. As already mentioned, we consider
a finite set
$\displaystyle X=[n]:=\\{1,2,\dots,n\\},\,\,n\geq 1.$
We denote by $\displaystyle 2^{[n]}$ the power set of $\displaystyle[n]$. A
monotone or nonadditive measure on $\displaystyle 2^{[n]}$ is a nondecreasing
set function $\displaystyle\mu\colon 2^{[n]}\to{{[0,\infty)}}$, i.e.,
$\displaystyle\mu(E)\leq\mu(F)$ whenever $\displaystyle E\subseteq F$, with
$\displaystyle\mu(\emptyset)=0.$ Moreover, we shall suppose
$\displaystyle\mu([n])>0$. The set of monotone measures on
$\displaystyle 2^{[n]}$ will be denoted by $\displaystyle\mathbf{M}$. A
monotone measure satisfying the equality $\displaystyle\mu([n])=1$ will be
called a normalized monotone measure (also known as a capacity, see [15]). In
this paper we shall always work with monotone measures defined on
$\displaystyle 2^{[n]}$, although in several places the domain of
$\displaystyle\mu$ could be smaller. Also, we shall need special properties of
$\displaystyle\mu$ on a system $\displaystyle\mathscr{S}\subseteq 2^{[n]}$. A
monotone measure $\displaystyle\mu\in\mathbf{M}$ with the property
$\displaystyle\mu(E)\neq\mu(F)$ for any $\displaystyle
E,F\in\mathscr{S}\subseteq 2^{[n]}$, $\displaystyle E\neq F$, will be called a
strictly monotone measure on $\displaystyle\mathscr{S}$. The counting measure
will be denoted by $\displaystyle{\\#}$. Further, we put
$\displaystyle\max\emptyset=0$ and $\displaystyle\sum_{i\in\emptyset}x_{i}=0$.
We shall work with nonnegative real-valued vectors, using the notation
$\displaystyle\mathbf{x}=(x_{1},\dots,x_{n})$, $\displaystyle
x_{i}\in[0,\infty)$, $\displaystyle i=1,2,\dots,n$. The set
$\displaystyle[0,\infty)^{[n]}$ is the family of all nonnegative real-valued
functions on $\displaystyle[n]$, i.e. vectors. For any
$\displaystyle\mathbf{x}=(x_{1},\dots,x_{n})\in[0,\infty)^{[n]}$ we denote by
$\displaystyle(\cdot)$ a permutation $\displaystyle(\cdot)\colon[n]\to[n]$
such that $\displaystyle x_{(1)}\leq x_{(2)}\leq\dots\leq x_{(n)}$, with
$\displaystyle x_{(0)}=0$ and $\displaystyle x_{(n+1)}=\infty$ by convention.
Let us remark that the permutation $\displaystyle(\cdot)$ need not be unique
(this happens if there are ties in the sample $\displaystyle(x_{1},...,x_{n})$,
see [7]). For a fixed input vector $\displaystyle\mathbf{x}$ and a fixed
permutation $\displaystyle(\cdot)$ we denote by $\displaystyle E_{(i)}$ the set
$\displaystyle E_{(i)}=\\{(i),\dots,(n)\\}$ for any $\displaystyle i\in[n]$,
with the convention $\displaystyle E_{(n+1)}=\emptyset$. By
$\displaystyle\mathbf{1}_{E}$ we denote the indicator function of a set
$\displaystyle E\subseteq Y$, $\displaystyle Y\subseteq[0,\infty)$, i.e.,
$\displaystyle\mathbf{1}_{E}(x)=1$ if $\displaystyle x\in E$ and
$\displaystyle\mathbf{1}_{E}(x)=0$ if $\displaystyle x\notin E$. In particular,
$\displaystyle\mathbf{1}_{\emptyset}(x)=0$ for each $\displaystyle x\in Y$. We
shall work with indicator functions with respect to two different base sets:
$\displaystyle Y=[n]$ when dealing with vectors (i.e.
$\displaystyle\mathbf{1}_{E}$ is the characteristic vector of $\displaystyle
E\subseteq[n]$ in $\displaystyle\\{0,1\\}^{{[n]}}$) and $\displaystyle
Y=[0,\infty)$ when dealing with survival functions.
In the following we list several definitions (adapted to the discrete
setting). Firstly, the concept of the conditional aggregation operator is
presented. Its crucial feature is that the validity of its properties is
required only on the conditioning set, not on the whole set. The inspiration
for its introduction came from the conditional expectation, a fundamental
notion of probability theory. Let us also remark that this operator
generalizes the aggregation operator introduced earlier by Calvo et al. in
[6, Definition 1], and it is the crucial ingredient in the definition of the
generalized survival function.
###### Definition 2.1
(cf. [1, Definition 3.1]) A map
$\displaystyle\mathsf{A}(\cdot|B)\colon[0,\infty)^{[n]}\to[0,\infty)$ is said
to be a conditional aggregation operator with respect to a set $\displaystyle
B\in 2^{[n]}\setminus\\{\emptyset\\}$ if it satisfies the following conditions:
* i)
$\displaystyle\mathsf{A}(\mathbf{x}|B)\leq\mathsf{A}(\mathbf{y}|B)$ for any
$\displaystyle\mathbf{x},\mathbf{y}\in[0,\infty)^{[n]}$ such that
$\displaystyle x_{i}\leq y_{i}$ for any $\displaystyle i\in B$;
* ii)
$\displaystyle\mathsf{A}(\mathbf{1}_{B^{c}}|B)=0.$
Let us compare the setting of the previous definition with that of the
original definition, see [1, Definition 3.1]. We consider the greatest
$\displaystyle\sigma$-algebra as the domain of $\displaystyle\mu$, in contrast
with the original arbitrary $\displaystyle\sigma$-algebra
$\displaystyle\Sigma$. Then all vectors are measurable, and this assumption may
be omitted from the definition. The measurability of each vector is a desired
property mainly from the application point of view. Because of the property
$\displaystyle\mathsf{A}(\mathbf{x}|B)=\mathsf{A}(\mathbf{x}\mathbf{1}_{B}|B)$
for any $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$ with fixed $\displaystyle
B\in 2^{[n]}\setminus\\{\emptyset\\}$, the value
$\displaystyle\mathsf{A}(\mathbf{x}|B)$ can be interpreted as an aggregated
value of $\displaystyle\mathbf{x}$ on $\displaystyle B$, see [1]. In the
following we list several examples of conditional aggregation operators that
we shall use in this paper. For further examples and some properties of
conditional aggregation operators we recommend [1, Section 3].
###### Example 2.2
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$, $\displaystyle B\in
2^{[n]}\setminus\\{\emptyset\\}$ and $\displaystyle m\in\mathbf{M}$.
1. i)
$\displaystyle\mathsf{A}^{m-\mathrm{ess}}(\mathbf{x}|B)=\mathrm{ess}\sup_{m}(\mathbf{x}\mathbf{1}_{B})$,
where $\displaystyle\mathrm{ess}\sup_{m}(\mathbf{x})=\min\\{\alpha\geq
0:\,\\{\mathbf{x}>\alpha\\}\in\mathscr{N}_{m}\\}$. (A set $\displaystyle N\in
2^{[n]}$ is said to be a null set with respect to a monotone measure
$\displaystyle m$ if $\displaystyle m(E\cup N)=m(E)$ for all $\displaystyle
E\in 2^{[n]}$; by $\displaystyle{{\mathscr{N}}_{m}}$ we denote the family of
null sets with respect to $\displaystyle m$.)
2. ii)
$\displaystyle\mathsf{A}(\mathbf{x}|B)=\mathrm{J}(\mathbf{x}\mathbf{1}_{B},m)$
(the multiplication of vectors is meant componentwise), where
$\displaystyle\mathrm{J}$ is an integral defined in [2, Definition 2.2].
Namely,
* a)
$\displaystyle\mathsf{A}^{\mathrm{Ch}_{m}}(\mathbf{x}|B)=\sum\limits_{i=1}^{n}x_{(i)}\left(m(E_{(i)}{\cap
B})-m(E_{(i+1)}{\cap B})\right)$;
* b)
$\displaystyle\mathsf{A}^{\mathrm{Sh}_{m}}(\mathbf{x}|B)=\max\limits_{i\in[n]}\left\\{x_{(i)}\cdot
m(E_{(i)}{\cap B})\right\\}$;
* c)
$\displaystyle\mathsf{A}^{\mathrm{Su}_{m}}(\mathbf{x}|B)=\max\limits_{i\in[n]}\left\\{\min\\{x_{(i)},m(E_{(i)}{\cap
B})\\}\right\\}$.
3. iii)
$\displaystyle\mathsf{A}(\mathbf{x}|B)=\frac{\max_{i\in B}(x_{i}\cdot
w_{i})}{\max_{i\in B}z_{i}},$ where $\displaystyle\mathbf{w}\in[0,1]^{[n]}$ is
a fixed weight vector and $\displaystyle\mathbf{z}\in(0,1]^{[n]}$ is a fixed
vector such that $\displaystyle\max_{i\in[n]}z_{i}=1$. We note that for
$\displaystyle\mathbf{w}=\mathbf{z}=\mathbf{1}_{[n]}$ we get
$\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|B)=\max_{i\in B}x_{i}$.
4. iv)
$\displaystyle\mathsf{A}^{p-\mathrm{mean}}(\mathbf{x}|B)=\left(\frac{1}{{\\#}(B)}\cdot\sum\limits_{i\in
B}(x_{i})^{p}\right)^{\frac{1}{p}}$ with $\displaystyle p\in(0,\infty)$. For
$\displaystyle p=1$ we get the arithmetic mean.
5. v)
$\displaystyle\mathsf{A}^{\mathrm{size}}(\mathbf{x}|B)=\max\limits_{D\in\mathscr{D}}\mathsf{s}(\mathbf{x}\mathbf{1}_{B})(D)$,
with $\displaystyle\mathsf{s}$ being a size, see [3, 12, 13], is the outer
essential supremum of $\displaystyle\mathbf{x}$ over $\displaystyle B$ with
respect to the size $\displaystyle\mathsf{s}$ and a collection
$\displaystyle\mathscr{D}\subseteq 2^{[n]}$. In particular, for the sum as a
size, i.e.,
$\displaystyle\mathsf{s}_{\mathrm{sum}}(\mathbf{x})(G)=\sum\limits_{i\in
G}x_{i}$ for any $\displaystyle G\in 2^{[n]}$, and for
$\displaystyle\mathscr{D}$ such that there is a set $\displaystyle C\supseteq
B$, $\displaystyle C\in\mathscr{D}$, we get
$\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|B)=\sum\limits_{i\in
B}x_{i}$.
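To make items ii) and iv) concrete, here is a short Python transcription of the operators $\displaystyle\mathsf{A}^{\mathrm{Ch}_{m}}$ and $\displaystyle\mathsf{A}^{p-\mathrm{mean}}$ (a sketch under the same dict-based encoding as in the earlier snippets; `m` is a monotone measure given as a dict over frozensets, and all identifiers are ours):

```python
def A_choquet(x, B, m, universe):
    """A^{Ch_m}(x|B) = sum_i x_(i) * ( m(E_(i) & B) - m(E_(i+1) & B) )."""
    order = sorted(universe, key=lambda i: x[i])    # a nondecreasing rearrangement (.)
    total = 0.0
    for idx, i in enumerate(order):
        E_i   = frozenset(order[idx:])              # E_(i)   = {(i), ..., (n)}
        E_ip1 = frozenset(order[idx + 1:])          # E_(i+1); E_(n+1) = empty set
        total += x[i] * (m[E_i & B] - m[E_ip1 & B])
    return total

def A_pmean(x, B, p):
    """A^{p-mean}(x|B) for nonempty B; p = 1 gives the arithmetic mean over B."""
    return (sum(x[i] ** p for i in B) / len(B)) ** (1 / p)
```

For $\displaystyle B=[n]$ and an additive $\displaystyle m$ the telescoping sum in `A_choquet` collapses to $\displaystyle\sum_{i}x_{i}\cdot m(\\{i\\})$, the usual weighted sum.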
Observe that the empty set is not included in Definition 2.1. The reason is
that the empty set does not provide any additional information for
aggregation. However, in order to introduce the concept of the generalized
survival function correctly, it is necessary to add the assumption
$\displaystyle\mathsf{A}(\cdot|\emptyset)=0$, see [1, Section 4]. From now on
we shall consider only such conditional aggregation operators. Let us remark
that all mappings from Example 2.2 with the convention
$\displaystyle 0/0=0$ satisfy this property. In the following we provide the
definition of the generalized survival function, see [1, Definition 4.1]. Let
us consider a collection $\displaystyle\mathscr{E}$,
$\displaystyle\\{\emptyset\\}\subseteq\mathscr{E}\subseteq 2^{[n]}$, and
conditional aggregation operators on sets from $\displaystyle\mathscr{E}$ with
$\displaystyle\mathsf{A}(\cdot|\emptyset)=0$. The set of such aggregation
operators will be denoted by
$\displaystyle{\mathscr{A}}=\\{\mathsf{A}(\cdot|E):E\in\mathscr{E}\\}$
and we shall call it a family of conditional aggregation operators (FCA for
short). For example,
$\displaystyle{\mathscr{A}}^{\mathrm{sum}}=\\{\mathsf{A}^{\mathrm{sum}}(\cdot|E):E\in
2^{[n]}\\}$,
$\displaystyle{\mathscr{A}}^{\mathrm{max}}=\\{\mathsf{A}^{\mathrm{max}}(\cdot|E):E\in\\{\emptyset,\\{1\\},\\{2\\},\dots,\\{n\\}\\}\\}$,
$\displaystyle\widehat{{\mathscr{A}}}^{\mathrm{max}}=\\{\mathsf{A}^{\mathrm{max}}(\cdot|E):E\in\\{\emptyset\\}\\}$
or $\displaystyle{\mathscr{A}}=\\{\mathsf{A}(\cdot|E):E\in 2^{[n]}\\}$,
$\displaystyle n\geq 2$, where
$\displaystyle\mathsf{A}(\mathbf{x}|E)=\begin{cases}\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E),&E\in\\{\\{1\\},\\{2\\},\dots,\\{n\\}\\},\\\
0,&E=\emptyset,\\\
\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|E),&\text{otherwise}\end{cases}$
for any $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$.
###### Definition 2.3
(cf. [1, Definition 4.1]) Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in\mathbf{M}$. The generalized survival function with
respect to $\displaystyle{\mathscr{A}}$ is defined as
$\displaystyle\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\min\left\\{\mu(E^{c}):\mathsf{A}(\mathbf{x}|E)\leq\alpha,\,E\in\mathscr{E}\right\\}$
for any $\displaystyle\alpha\in[0,\infty)$.
The presented definition is correct. Indeed, for any $\displaystyle
E\in\mathscr{E}$ the set $\displaystyle E^{c}\in 2^{[n]}$ is measurable.
Moreover, the set
$\displaystyle\\{E\in\mathscr{E}:\mathsf{A}(\mathbf{x}|E)\leq\alpha\\}$ is
nonempty for all $\displaystyle\alpha\in[0,\infty)$, because
$\displaystyle\mathsf{A}(\cdot|\emptyset)=0$ by convention and
$\displaystyle\emptyset\in\mathscr{E}$. It is immediately seen that for
$\displaystyle\mathscr{E}=2^{[n]}$ and
$\displaystyle{\mathscr{A}}^{\mathrm{max}}$ we get the standard survival
function, compare with (1). When necessary, we shall emphasize the collection
$\displaystyle\mathscr{E}$ in the notation of the generalized survival
function, i.e. we shall write $\displaystyle{\mathscr{A}}^{\mathscr{E}}$.
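This last observation can also be stress-tested numerically. A small randomized check (reusing `gen_survival`, `survival_direct`, `mu` and `N` from the earlier sketches; again our own illustration):

```python
import random

def A_max(x, E):
    """A^max(x|E), with the maximum over the empty set equal to 0 by convention."""
    return max((x[i] for i in E), default=0)

random.seed(1)
for _ in range(100):
    xr = {i: random.randint(0, 5) for i in N}       # a random nonnegative vector
    for alpha in [0, 0.5, 1, 2.5, 4, 6]:
        assert gen_survival(xr, mu, N, A_max, alpha) == survival_direct(xr, mu, N, alpha)
```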
In several places in this paper we shall work with FCAs that are nondecreasing
w.r.t. sets, i.e. such that the map $\displaystyle
E\mapsto\mathsf{A}(\cdot|E)$ is nondecreasing. Many FCAs satisfy this
property, e.g.
$\displaystyle{\mathscr{A}}^{m-\mathrm{ess}}=\\{\mathsf{A}^{m-\mathrm{ess}}(\cdot|E):E\in\mathscr{E}\\}$,
$\displaystyle{\mathscr{A}}^{\mathrm{Ch}_{m}}=\\{\mathsf{A}^{\mathrm{Ch}_{m}}(\cdot|E):E\in\mathscr{E}\\}$,
$\displaystyle{\mathscr{A}}^{\mathrm{Su}_{m}}=\\{\mathsf{A}^{\mathrm{Su}_{m}}(\cdot|E):E\in\mathscr{E}\\}$,
$\displaystyle{\mathscr{A}}^{\mathrm{Sh}_{m}}=\\{\mathsf{A}^{\mathrm{Sh}_{m}}(\cdot|E):E\in\mathscr{E}\\}$,
$\displaystyle{\mathscr{A}}^{\mathrm{max}}=\\{\mathsf{A}^{\mathrm{max}}(\cdot|E):E\in\mathscr{E}\\}$,
see Example 2.2 i), ii), iii).
## 3 Equality and inequalities of the generalized and standard survival
function
In this section we treat Problem 1. We provide sufficient and necessary
conditions on $\displaystyle\mathbf{x}$, $\displaystyle\mu$ and
$\displaystyle{\mathscr{A}}$ under which the generalized survival function and
the survival function coincide. The key fact we use is the formula for the
standard survival function: in what follows we shall work with the expression
of the survival function on a finite set in the form
$\mu(\\{\mathbf{x}>\alpha\\})=\sum_{i=0}^{n-1}\mu\left(E_{(i+1)}\right)\cdot\mathbf{1}_{[x_{(i)},x_{(i+1)})}(\alpha)$ (2)
with the permutation $\displaystyle(\cdot)$ such that $\displaystyle
0=x_{(0)}\leq x_{(1)}\leq x_{(2)}\leq\dots\leq x_{(n)}$ and $\displaystyle
E_{(i)}=\\{(i),\dots,(n)\\}$ for $\displaystyle i\in[n]$. However, one can
easily see that some summands in formula (2) may be redundant. For example,
for vectors with the property $\displaystyle x_{(i)}=x_{(i+1)}$ for some
$\displaystyle i\in[n-1]\cup\\{0\\}$ we have
$\displaystyle\mu\left(E_{(i+1)}\right)\cdot\mathbf{1}_{[x_{(i)},x_{(i+1)})}(\alpha)=0$
for any $\displaystyle\alpha\in[0,\infty)$, i.e., such a summand does not
change the values of the survival function and can be omitted.
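Formula (2) says that the survival function is the step function sitting at the level $\displaystyle\mu(E_{(i+1)})$ on the interval $\displaystyle[x_{(i)},x_{(i+1)})$. A direct Python transcription (ours, with the conventions $\displaystyle x_{(0)}=0$ and $\displaystyle\mu(\emptyset)=0$, and reusing the helpers from the sketches of Section 1):

```python
def survival_step_form(x, mu, universe, alpha):
    """Formula (2): locate alpha in [x_(i), x_(i+1)) and return mu(E_(i+1))."""
    order = sorted(universe, key=lambda i: x[i])    # (1), ..., (n)
    vals = [0] + [x[i] for i in order]              # x_(0), x_(1), ..., x_(n)
    for i in range(len(order)):                     # i = 0, ..., n-1
        if vals[i] <= alpha < vals[i + 1]:
            return mu[frozenset(order[i:])]         # E_(i+1) = {(i+1), ..., (n)}
    return mu[frozenset()]                          # alpha >= x_(n): mu(emptyset) = 0

# Agrees with the direct definition on the running example:
assert all(survival_step_form(x, mu, N, a) == survival_direct(x, mu, N, a)
           for a in [0, 1, 2, 2.5, 3, 3.5, 4, 7])
```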
Let us consider an arbitrary (fixed) input vector $\displaystyle\mathbf{x}$
together with a permutation $\displaystyle(\cdot)$ such that $\displaystyle
0=x_{(0)}\leq x_{(1)}\leq x_{(2)}\leq\dots\leq x_{(n)}$. Let us denote
$\displaystyle\displaystyle\Psi_{\mathbf{x}}:=\\{i\in[n-1]\cup\\{0\\}:x_{(i)}<x_{(i+1)}\\}\cup\\{n\\}.$
(3)
For example, for the input vector $\displaystyle\mathbf{x}=(3,2,3,1)$ and the
permutation $\displaystyle(\cdot)$ such that $\displaystyle x_{(0)}=0$,
$\displaystyle x_{(1)}=1$, $\displaystyle x_{(2)}=2$, $\displaystyle
x_{(3)}=3$, $\displaystyle x_{(4)}=3$, we get
$\displaystyle\Psi_{\mathbf{x}}=\\{0,1,2,4\\}$. The following proposition
collects the basic properties of the system $\displaystyle\Psi_{\mathbf{x}}$
needed for further results.
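The system $\displaystyle\Psi_{\mathbf{x}}$ is also easy to compute; a quick sketch (our helper, checked against the example $\displaystyle\mathbf{x}=(3,2,3,1)$ above):

```python
def psi(x, universe):
    """Psi_x from (3): indices i in [n-1] u {0} with x_(i) < x_(i+1), together with n."""
    order = sorted(universe, key=lambda i: x[i])
    vals = [0] + [x[i] for i in order]              # x_(0) = 0
    n = len(order)
    return {i for i in range(n) if vals[i] < vals[i + 1]} | {n}

assert psi({1: 3, 2: 2, 3: 3, 4: 1}, {1, 2, 3, 4}) == {0, 1, 2, 4}
```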
###### Proposition 3.1
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$.
1. i)
$\displaystyle\Psi_{\mathbf{x}}$ is independent of the permutation
$\displaystyle(\cdot)$ of $\displaystyle\mathbf{x}$, i.e.,
$\displaystyle\Psi_{\mathbf{x}}$ contains the same values for any permutation
$\displaystyle(\cdot)$ of $\displaystyle\mathbf{x}$ such that $\displaystyle
0=x_{(0)}\leq x_{(1)}\leq x_{(2)}\leq\dots\leq x_{(n)}$.
2. ii)
For any $\displaystyle i\in[n]$ there exists $\displaystyle
k_{i}\in\Psi_{\mathbf{x}}\setminus\\{0\\}$ such that $\displaystyle
x_{i}=x_{(k_{i})}$, i.e.
$\displaystyle\\{x_{(k_{i})}:k_{i}\in\Psi_{\mathbf{x}}\setminus\\{0\\}\\}$
contains all different values of $\displaystyle\mathbf{x}$.
3. iii)
$\displaystyle x_{(\min\Psi_{\mathbf{x}})}=0$.
4. iv)
$\displaystyle\left\\{[x_{(k)},x_{(k+1)}):k\in\Psi_{\mathbf{x}}\right\\}$ is a
decomposition of interval $\displaystyle[0,\infty)$ into nonempty pairwise
disjoint sets.
Proof.
1. i)
Let us consider two different permutations of $\displaystyle\mathbf{x}$ (if
they exist) $\displaystyle(\cdot)_{1}$ and $\displaystyle(\cdot)_{2}$ with the
required property. Let us denote
$\displaystyle\displaystyle\Psi_{\mathbf{x}}$
$\displaystyle\displaystyle:=\\{i\in[n-1]\cup\\{0\\}:x_{(i)_{1}}<x_{(i+1)_{1}}\\}\cup\\{n\\},$
$\displaystyle\displaystyle\Phi_{\mathbf{x}}$
$\displaystyle\displaystyle:=\\{i\in[n-1]\cup\\{0\\}:x_{(i)_{2}}<x_{(i+1)_{2}}\\}\cup\\{n\\}.$
We show that $\displaystyle\Psi_{\mathbf{x}}=\Phi_{\mathbf{x}}$. Indeed,
$\displaystyle n\in\Psi_{\mathbf{x}},n\in\Phi_{\mathbf{x}}$. If $\displaystyle
i\in\Psi_{\mathbf{x}}\setminus\\{n\\}$, then $\displaystyle
x_{(i)_{1}}<x_{(i+1)_{1}}$. Because of the nondecreasing rearrangement of
$\displaystyle\mathbf{x}$ with respect to $\displaystyle(\cdot)_{1}$,
$\displaystyle(\cdot)_{2}$ we get $\displaystyle x_{(i)_{2}}<x_{(i+1)_{2}}$,
therefore $\displaystyle i\in\Phi_{\mathbf{x}}$ and
$\displaystyle\Psi_{\mathbf{x}}\subseteq\Phi_{\mathbf{x}}$. By analogy,
$\displaystyle\Phi_{\mathbf{x}}\subseteq\Psi_{\mathbf{x}}$.
2. ii)
Since any $\displaystyle i\in[n]$ can be represented via the permutation as
$\displaystyle i={(j_{i})}$, $\displaystyle j_{i}\in[n]$, let us set
$\displaystyle k_{i}=\max\\{j\in[n]:\,x_{i}=x_{(j)}\\}.$
For $\displaystyle k_{i}<n$ the maximality of $\displaystyle k_{i}$ yields
$\displaystyle x_{(k_{i})}<x_{(k_{i}+1)}$, hence $\displaystyle
k_{i}\in\Psi_{\mathbf{x}}\setminus\\{0\\}$. If $\displaystyle k_{i}=n$, then
$\displaystyle k_{i}\in\Psi_{\mathbf{x}}$ directly by the definition of
$\displaystyle\Psi_{\mathbf{x}}$, see (3).
3. iii)
It follows immediately from the fact that
$\displaystyle\min\Psi_{\mathbf{x}}=\max\\{i\in[n]\cup\\{0\\}:x_{(i)}=x_{(0)}=0\\}$.
4. iv)
It follows from parts ii), iii) and from the definition of the system
$\displaystyle\Psi_{\mathbf{x}}$, since $\displaystyle x_{(k)}<x_{(k+1)}$ for
any $\displaystyle k\in\Psi_{\mathbf{x}}$ (with the convention $\displaystyle
x_{(n+1)}=\infty$) and $\displaystyle x_{(k_{1})}\neq x_{(k_{2})}$ for any
distinct $\displaystyle k_{1},k_{2}\in\Psi_{\mathbf{x}}$.
Since we have shown that the system $\displaystyle\Psi_{\mathbf{x}}$ is
independent of the chosen permutation, henceforth we shall not explicitly
mention the permutation in the assumptions of the presented results. The
following proposition states that formula (2) can be rewritten via the system
$\displaystyle\Psi_{\mathbf{x}}$ in a simpler form, see part
$\displaystyle{\rm{i)}}$. Moreover, in the second part of the proposition we
show that for a fixed $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$ the equality
$\displaystyle\mu_{\mathscr{A}^{\mathrm{max},{{\mathscr{E}}}}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
holds with a smaller collection $\displaystyle\mathscr{E}$ than the whole
power set $\displaystyle 2^{[n]}$ (compare with the known result [1, Example
4.2] or see (1)). The collection $\displaystyle\mathscr{E}$ depends on
$\displaystyle\mathbf{x}$ (equivalently, on $\displaystyle\Psi_{\mathbf{x}}$).
###### Proposition 3.2
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in\mathbf{M}$.
1. i)
Then
$\mu(\\{\mathbf{x}>\alpha\\})=\sum_{k\in\Psi_{\mathbf{x}}}\mu\left(E_{(k+1)}\right)\cdot\mathbf{1}_{[x_{(k)},x_{(k+1)})}(\alpha)$
(4)
for any $\displaystyle\alpha\in[0,\infty)$ with the convention $\displaystyle
x_{(n+1)}=\infty$.
2. ii)
If
$\displaystyle\mathscr{E}\supseteq\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}$,
then
$\displaystyle\mu_{\mathscr{A}^{\mathrm{max}}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof.
1. i)
For $\displaystyle k\in[n-1]\cup\\{0\\},\,k\notin\Psi_{\mathbf{x}}$, we have
$\displaystyle x_{(k)}=x_{(k+1)}$. This leads to the fact that
$\displaystyle\mu\left(E_{(k+1)}\right)\cdot\mathbf{1}_{[x_{(k)},x_{(k+1)})}(\alpha)=0$
for any $\displaystyle\alpha\in[0,\infty)$. Using Proposition 3.1 iv) we
obtain the required assertion.
2. ii)
According to Proposition 3.1 (iv) let us divide interval
$\displaystyle[0,\infty)$ into disjoint sets
$\displaystyle[0,\infty)=\bigcup_{k\in\Psi_{\mathbf{x}}}[x_{(k)},x_{(k+1)}).$
Let us consider an arbitrary (fixed) $\displaystyle k\in\Psi_{\mathbf{x}}$.
Then, since $\displaystyle E_{(k+1)}^{c}\in\mathscr{E}$ and
$\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E_{(k+1)}^{c})=x_{(k)}$, we
have $\displaystyle
E_{(k+1)}^{c}\in\\{E:\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)\leq\alpha\\}$ for
any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$. Therefore we get
$\displaystyle\mu_{\mathscr{A}^{\mathrm{max},{{\mathscr{E}}}}}(\mathbf{x},\alpha)=\min\\{\mu(E^{c}):\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)\leq\alpha,E\in\mathscr{E}\\}\leq\mu(E_{(k+1)})=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$, where the last equality
follows from part i). On the other hand, as $\displaystyle\mathscr{E}\subseteq
2^{[n]}$ from properties of minimum we have
$\displaystyle\mu_{\mathscr{A}^{\mathrm{max},{{\mathscr{E}}}}}(\mathbf{x},\alpha)\geq\mu_{\mathscr{A}^{\mathrm{max},{{2^{[n]}}}}}(\mathbf{x},\alpha)=\mu(E_{(k+1)})=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$. To sum it up,
$\displaystyle\mu_{\mathscr{A}^{\mathrm{max},{{\mathscr{E}}}}}(\mathbf{x},\alpha)=\mu(E_{(k+1)})=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$.
###### Remark 3.3
Let us remark that in the whole paper we suppose $\displaystyle\mu$ is defined
on $\displaystyle 2^{[n]}$. However, in fact, it is enough to have a smaller
$\displaystyle\mathtt{Dom}{(\mu)}$. For example, in part ii) of the previous
proposition it is enough to consider the domain of $\displaystyle\mu$ being
$\displaystyle\\{E^{c}:E\in\mathscr{E}\\}$.
Let us note that in (4) the last summand is always equal to $\displaystyle 0$,
because $\displaystyle\mu\left(E_{(n+1)}\right)=\mu(\emptyset)=0$. However, it
is useful to consider the form of the survival function in (4) with the sum
taken over the whole set $\displaystyle\Psi_{\mathbf{x}}$ rather than over
$\displaystyle\Psi_{\mathbf{x}}\setminus\\{n\\}$, because of some technical
details in the proofs presented in this paper.
###### Example 3.4
Let us take
$\displaystyle{\mathscr{A}}^{\mathrm{max}}=\\{\mathsf{A}^{\mathrm{max}}(\cdot|E):E\in\mathscr{E}\\}$
and a normalized monotone measure $\displaystyle\mu$ on $\displaystyle 2^{[3]}$
given in the following table:
$\displaystyle E$ | $\displaystyle\emptyset$ | $\displaystyle\\{1\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,2,3\\}$
---|---|---|---|---|---|---|---|---
$\displaystyle\mu(E)$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0.5$ | $\displaystyle 0$ | $\displaystyle 0.5$ | $\displaystyle 0.5$ | $\displaystyle 0.5$ | $\displaystyle 1$
Further, let us take the input vector $\displaystyle\mathbf{x}=(1,2,1)$ with
the permutation $\displaystyle(1)\\!=\\!1$, $\displaystyle(2)\\!=\\!3$,
$\displaystyle(3)\\!=\\!2$.
Then $\displaystyle\Psi_{\mathbf{x}}=\\{0,2,3\\}$ and a collection
guaranteeing the equality between the survival function and the generalized
survival function (of the input $\displaystyle\mathbf{x}$) is, according to
Proposition 3.2 ii), e.g.
$\displaystyle\displaystyle\mathscr{E}$
$\displaystyle\displaystyle=\\{E_{(k+1)}^{c}:\,k\in\Psi_{\mathbf{x}}\\}=\\{E_{(1)}^{c},E_{(3)}^{c},E_{(4)}^{c}\\}=\\{\emptyset,\\{(1),(2)\\},\\{(1),(2),(3)\\}\\}$
$\displaystyle\displaystyle=\\{\emptyset,\\{1,3\\},\\{1,2,3\\}\\}.$
Indeed,
$\displaystyle\mu_{\mathscr{A}^{\mathrm{max}}}(\mathbf{x},\alpha)=1\cdot\mathbf{1}_{[0,1)}(\alpha)+0.5\cdot\mathbf{1}_{[1,2)}(\alpha)=\mu(\\{\mathbf{x}>\alpha\\}).$
Figure 3: Visualization of the (generalized) survival function
$\displaystyle\mu_{{\mathscr{A}}^{\mathrm{max}}}$ with
$\displaystyle{\mathscr{A}}=\\{\mathsf{A}^{\mathrm{max}}(\cdot|E):E\in\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}\\}$
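Example 3.4 can be replayed with the helpers from the earlier sketches (`gen_survival`, `survival_direct`, `A_max` and `N`): restricting the collection to $\displaystyle\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}$ already reproduces the standard survival function.

```python
mu34 = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0.5, frozenset({3}): 0,
        frozenset({1, 2}): 0.5, frozenset({1, 3}): 0.5, frozenset({2, 3}): 0.5, N: 1}
x34 = {1: 1, 2: 2, 3: 1}
E34 = [frozenset(), frozenset({1, 3}), N]   # {E_(k+1)^c : k in Psi_x = {0, 2, 3}}

for alpha in [0, 0.5, 1, 1.5, 2, 3]:
    assert gen_survival(x34, mu34, N, A_max, alpha, collection=E34) \
           == survival_direct(x34, mu34, N, alpha)
```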
From the previous result it follows that the standard survival function can be
represented by the formula
$\displaystyle\mu(\\{\mathbf{x}>\alpha\\})=\min\big\\{\mu(E^{c}):\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)\leq\alpha,\,E\in\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}\big\\}$ (5)
with the system $\displaystyle\Psi_{\mathbf{x}}$ given by the input vector
$\displaystyle\mathbf{x}$. This formula is visualized in Figure 3. Let us
remark that since
$\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E_{(k+1)}^{c})=x_{(k)}$, on
the upper line we measure the sets $\displaystyle(E_{(k+1)}^{c})^{c}$. The
calculation of the (generalized) survival function proceeds as described in
the Introduction. Let us remark that the essence of the following results is
the pointwise comparison of the generalized survival function with the
standard survival function, having in mind the representation (5) together
with its visualization, see Figure 3.
It is obvious that the equality of the survival functions (standard and
generalized) means that they have to achieve the same values, i.e.,
$\displaystyle\mu\left(E_{(k+1)}\right)$, $\displaystyle
k\in\Psi_{\mathbf{x}}$, on the same corresponding intervals
$\displaystyle[x_{(k)},x_{(k+1)})$, $\displaystyle k\in\Psi_{\mathbf{x}}$.
Having in mind formula (4), the survival function representation given by (5)
and the visualization in Figure 3, we can formulate the following sufficient
conditions. While (C1) ensures that the generalized survival function is able
to achieve the same values as the survival function (it yields the inequality
$\displaystyle\leq$, see Proposition 3.6 i)), (C2) guarantees the reverse
inequality. Let $\displaystyle{\mathscr{A}}$ be an FCA.
1. (C1)
For any $\displaystyle k\in\Psi_{\mathbf{x}}$ there exists $\displaystyle
G_{k}\in\mathscr{E}$ such that
$\displaystyle\mathsf{A}(\mathbf{x}|G_{k})=x_{(k)}\quad\text{and}\quad\mu(G_{k}^{c})=\mu(E_{(k+1)}).$
2. (C2)
For any $\displaystyle k\in\Psi_{\mathbf{x}}$ and for any $\displaystyle
E\in\mathscr{E}$ it holds:
$\displaystyle\mathsf{A}(\mathbf{x}|E)<x_{(k+1)}\Rightarrow\mu(E^{c})\geq\mu(E_{(k+1)}).$
Figure 4: The visualization of conditions $\displaystyle\mathrm{(C1)}$ and
$\displaystyle\mathrm{(C2)}$
The visualization of conditions $\displaystyle\mathrm{(C1)}$,
$\displaystyle\mathrm{(C2)}$ via two parallel lines is drawn in Figure 4. Let
us remark that for $\displaystyle k=n$ (C2) holds trivially. Also, for
$\displaystyle k=\min\Psi_{\mathbf{x}}$ (C1) holds trivially with
$\displaystyle G_{\min\Psi_{\mathbf{x}}}=\emptyset$.
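Being finitary, conditions (C1) and (C2) can be checked mechanically. Below is a sketch of such checkers (our code, reusing `psi` and the dict-based encoding from the earlier snippets); the final assert confirms both conditions for the inputs of Example 3.4 with the max operator:

```python
import math

def ordered_data(x, universe):
    """Nondecreasing rearrangement, padded values x_(0), ..., x_(n+1), and k -> E_(k)."""
    order = sorted(universe, key=lambda i: x[i])
    vals = [0] + [x[i] for i in order] + [math.inf]      # x_(n+1) = infinity
    E = lambda k: frozenset(order[k - 1:]) if k <= len(order) else frozenset()
    return order, vals, E

def holds_C1(x, mu, universe, agg, collection):
    _, vals, E = ordered_data(x, universe)
    return all(any(agg(x, G) == vals[k] and mu[universe - G] == mu[E(k + 1)]
                   for G in collection)
               for k in psi(x, universe))

def holds_C2(x, mu, universe, agg, collection):
    _, vals, E = ordered_data(x, universe)
    return all(mu[universe - F] >= mu[E(k + 1)]
               for k in psi(x, universe)
               for F in collection if agg(x, F) < vals[k + 1])

assert holds_C1(x34, mu34, N, A_max, E34) and holds_C2(x34, mu34, N, A_max, E34)
```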
###### Remark 3.5
In accordance with the above, it can easily be seen that for
$\displaystyle{\mathscr{A}}^{\mathrm{max}}$ with
$\displaystyle\mathscr{E}\supseteq\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}$,
condition $\displaystyle\mathrm{(C1)}$ holds with $\displaystyle
G_{k}=E_{(k+1)}^{c}$ for any $\displaystyle k\in\Psi_{\mathbf{x}}$, regardless
of the choice of $\displaystyle\mu$. Of course, for specific classes of
monotone measures $\displaystyle\mu$ other sets $\displaystyle G_{k}$ can
satisfy $\displaystyle\rm{(C1)}$ as well. Similarly, the validity of
$\displaystyle\rm{(C2)}$ is clear. Indeed, if
$\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)<x_{(k+1)}$, then we have
$\displaystyle E\subseteq E_{(k+1)}^{c}$, i.e., $\displaystyle E^{c}\supseteq
E_{(k+1)}$. From the monotonicity of $\displaystyle\mu$ we get
$\displaystyle\mu(E^{c})\geq\mu(E_{(k+1)})$ for any $\displaystyle
E\in\mathscr{E}$.
Conditions $\displaystyle\mathrm{(C1)}$ and $\displaystyle\mathrm{(C2)}$
guarantee opposite inequalities between the survival functions; their equality
is thus a consequence.
###### Proposition 3.6
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be an
FCA.
1. i)
If $\displaystyle\mathrm{(C1)}$ holds, then
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
2. ii)
$\displaystyle\mathrm{(C2)}$ holds if and only if
$\displaystyle\mu(\\{\mathbf{x}>\alpha\\})\leq\mu_{\mathscr{A}}(\mathbf{x},\alpha)$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. According to Proposition 3.1 (iv) let us divide interval
$\displaystyle[0,\infty)$ into disjoint sets
$\displaystyle[0,\infty)=\bigcup_{k\in\Psi_{\mathbf{x}}}[x_{(k)},x_{(k+1)}).$
Let us consider an arbitrary (fixed) $\displaystyle k\in\Psi_{\mathbf{x}}$.
Let us prove part i). According to $\displaystyle\mathrm{(C1)}$ there exists
the set $\displaystyle G_{k}\in\mathscr{E}$ such that
$\displaystyle\mathsf{A}(\mathbf{x}|G_{k})=x_{(k)}$ and
$\displaystyle\mu(G_{k}^{c})=\mu(E_{(k+1)})$. From the fact that
$\displaystyle\mu(E_{(k+1)})=\mu(G_{k}^{c})\in\left\\{\mu(E^{c}):\,\mathsf{A}(\mathbf{x}|E)\leq
x_{(k)}\right\\}$ and since
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)$ is nonincreasing (see [1,
Proposition 4.3 (a)]) we have
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu_{\mathscr{A}}(\mathbf{x},x_{(k)})\leq\mu(E_{(k+1)})=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$, where the last equality
follows from (4).
Let us prove part ii). From $\displaystyle\mathrm{(C2)}$ it follows that for
any $\displaystyle E\in\mathscr{E}$, if
$\displaystyle\mathsf{A}(\mathbf{x}|E)<x_{(k+1)}$, then
$\displaystyle\mu(E^{c})\geq\mu(E_{(k+1)})$. Therefore,
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\geq\mu(E_{(k+1)})=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$, where the last equality
follows from (4). It remains to prove the implication
$\displaystyle\Leftarrow$. Since
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\min\left\\{\mu(E^{c}):\mathsf{A}(\mathbf{x}|E)\leq\alpha<x_{(k+1)},E\in\mathscr{E}\right\\}\geq\mu(E_{(k+1)})=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$, every $\displaystyle
E\in\mathscr{E}$ with $\displaystyle\mathsf{A}(\mathbf{x}|E)<x_{(k+1)}$ enters
this minimum for a suitable $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$, hence
it has to satisfy $\displaystyle\mu(E^{c})\geq\mu(E_{(k+1)})$.
###### Corollary 3.7
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be an
FCA. If $\displaystyle\mathrm{(C1)}$ and $\displaystyle\mathrm{(C2)}$ are
satisfied, then
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
The application of the previous result is illustrated in the first example
below. The second example shows that $\displaystyle\mathrm{(C1)}$ together
with $\displaystyle\mathrm{(C2)}$ is only sufficient, not necessary.
Figure 5: Generalized survival function and visualization from Example 3.8
###### Example 3.8
Let us consider
$\displaystyle{\mathscr{A}}^{\mathrm{sum}}=\\{\mathsf{A}^{\mathrm{sum}}(\cdot|E):E\in
2^{[3]}\\}$ and a normalized monotone measure $\displaystyle\mu$ on
$\displaystyle 2^{[3]}$ with the following values:
$\displaystyle E$ | $\displaystyle\emptyset$ | $\displaystyle\\{1\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,2,3\\}$
---|---|---|---|---|---|---|---|---
$\displaystyle\mu(E)$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0.5$ | $\displaystyle 0$ | $\displaystyle 0.5$ | $\displaystyle 0$ | $\displaystyle 0.7$ | $\displaystyle 1$
$\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|E)$ | $\displaystyle 0$ | $\displaystyle 1$ | $\displaystyle 3$ | $\displaystyle 1$ | $\displaystyle 4$ | $\displaystyle 2$ | $\displaystyle 4$ | $\displaystyle 5$
Further, let us take the input vector $\displaystyle\mathbf{x}=(1,3,1)$ with
the permutation $\displaystyle(1)=1$, $\displaystyle(2)=3$,
$\displaystyle(3)=2$. Then $\displaystyle x_{(0)}=0$, $\displaystyle
x_{(1)}=1$, $\displaystyle x_{(2)}=1$, $\displaystyle x_{(3)}=3$, therefore
$\displaystyle\Psi_{\mathbf{x}}=\\{0,2,3\\}$ and
$\displaystyle E_{(1)}=\\{(1),(2),(3)\\}=\\{1,2,3\\},\quad
E_{(3)}=\\{(3)\\}=\\{2\\},\quad E_{(4)}=\emptyset.$
We can see that condition $\displaystyle\mathrm{(C1)}$ of Corollary 3.7 is
satisfied with
$\displaystyle G_{0}=\emptyset$, $\displaystyle G_{2}=\\{3\\}$, $\displaystyle
G_{3}=\\{2\\}$.
Indeed, $\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|G_{0})=0=x_{(0)}$
and $\displaystyle\mu(G_{0}^{c})=\mu(E_{(1)})$. Further,
$\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|G_{2})=1=x_{(2)}$ and
$\displaystyle\mu(G_{2}^{c})=\mu(\\{1,2\\})=\mu(E_{(3)})$. Finally,
$\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|G_{3})=3=x_{(3)}$ and
$\displaystyle\mu(G_{3}^{c})=\mu(\\{1,3\\})=\mu(E_{(4)})$. Condition
$\displaystyle\mathrm{(C2)}$ is also satisfied, see the visualization of the
generalized survival function via parallel lines in Figure 5. The discussed
survival functions coincide and take the form
$\displaystyle\mu(\\{\mathbf{x}>\alpha\\})=\mu_{\mathscr{A}^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,1)}(\alpha)+0{.}5\cdot\mathbf{1}_{[1,3)}(\alpha)$
for $\displaystyle\alpha\in[0,\infty)$. The plot of (generalized) survival
function is in Figure 5.
###### Example 3.9
Let us consider
$\displaystyle{\mathscr{A}}^{\mathrm{sum}}=\\{\mathsf{A}^{\mathrm{sum}}(\cdot|E):E\in
2^{[3]}\\}$ and a normalized monotone measure $\displaystyle\mu$ on
$\displaystyle 2^{[3]}$ with the following values:
$\displaystyle E$ | $\displaystyle\emptyset$ | $\displaystyle\\{1\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,2,3\\}$
---|---|---|---|---|---|---|---|---
$\displaystyle\mu(E)$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0.7$ | $\displaystyle 0$ | $\displaystyle 0.8$ | $\displaystyle 0.7$ | $\displaystyle 1$
$\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|E)$ | $\displaystyle 0$ | $\displaystyle 2$ | $\displaystyle 3$ | $\displaystyle 4$ | $\displaystyle 5$ | $\displaystyle 6$ | $\displaystyle 7$ | $\displaystyle 9$
Further, let us take the input vector $\displaystyle\mathbf{x}=(2,3,4)$ with
the permutation being the identity. Then the survival functions coincide:
$\displaystyle\mu(\\{\mathbf{x}>\alpha\\})=\mu_{\mathscr{A}^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,2)}(\alpha)+0{.}7\cdot\mathbf{1}_{[2,4)}(\alpha).$
Here, $\displaystyle G_{0}=\emptyset$, $\displaystyle G_{1}=\\{1\\}$,
$\displaystyle G_{2}=\\{2\\}$, $\displaystyle G_{3}=\\{3\\}$ are the only sets
that satisfy the equality
$\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|G_{k})=x_{(k)}$ for
$\displaystyle k\in\Psi_{\mathbf{x}}=\\{0,1,2,3\\}$. However,
$\displaystyle 0{.}8=\mu(G_{2}^{c})\neq\mu(E_{(3)})=0{.}7.$
Thus, the sufficient condition of Corollary 3.7 is not a necessary one.
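Machine-checking Example 3.9 with the helpers above (`gen_survival`, `survival_direct`, `A_sum`, `subsets`, `holds_C1` and `N`) confirms that (C1) indeed fails while both survival functions agree everywhere:

```python
mu39 = {frozenset(): 0, frozenset({1}): 0, frozenset({2}): 0, frozenset({3}): 0.7,
        frozenset({1, 2}): 0, frozenset({1, 3}): 0.8, frozenset({2, 3}): 0.7, N: 1}
x39 = {1: 2, 2: 3, 3: 4}

assert not holds_C1(x39, mu39, N, A_sum, subsets(N))   # (C1) fails at k = 2
for alpha in [0, 1, 2, 2.5, 3, 3.5, 4, 5]:
    assert gen_survival(x39, mu39, N, A_sum, alpha) == survival_direct(x39, mu39, N, alpha)
```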
Let us return to Proposition 3.6. While $\displaystyle\mathrm{(C2)}$ is a
necessary and sufficient condition under which the generalized survival
function is greater than or equal to the survival function,
$\displaystyle\mathrm{(C1)}$ is only sufficient for the reverse inequality.
Since this condition seems too strict, let us define conditions
$\displaystyle\mathrm{(C3)}$ and $\displaystyle\mathrm{(C4)}$ as follows:
1. (C3)
For any $\displaystyle k\in\Psi_{\mathbf{x}}$ there exists $\displaystyle
F_{k}\in\mathscr{E}$ such that $\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\leq
x_{(k)}$ and $\displaystyle\mu(F_{k}^{c})\leq\mu(E_{(k+1)})$.
2. (C4)
For any $\displaystyle k\in\Psi_{\mathbf{x}}$ there exists $\displaystyle
F_{k}\in\mathscr{E}$ such that $\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\leq
x_{(k)}$ and $\displaystyle\mu(F_{k}^{c})=\mu(E_{(k+1)})$.
The visualization of condition $\displaystyle\mathrm{(C3)}$ is drawn in Figure
6. In the following we show that it is exactly $\displaystyle\mathrm{(C3)}$
that improves Proposition 3.6 i). As a consequence we also get an improvement
of Corollary 3.7: replacing $\displaystyle\mathrm{(C1)}$ with
$\displaystyle\mathrm{(C3)}$, we obtain a sufficient and necessary condition
for the equality of the survival functions. Moreover, it will turn out that
under assumption $\displaystyle\mathrm{(C2)}$, condition
$\displaystyle\mathrm{(C3)}$ reduces to $\displaystyle\mathrm{(C4)}$.
Figure 6: The visualization of condition $\displaystyle\mathrm{(C3)}$ from
Proposition 3.10
###### Proposition 3.10
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be an
FCA. Then $\displaystyle\mathrm{(C3)}$ holds if and only if
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. Let us prove the implication $\displaystyle\Rightarrow$. According to
Proposition 3.1 (iv) let us divide interval $\displaystyle[0,\infty)$ into
disjoint sets
$\displaystyle[0,\infty)=\bigcup_{k\in\Psi_{\mathbf{x}}}[x_{(k)},x_{(k+1)}).$
Let us consider an arbitrary (fixed) $\displaystyle k\in\Psi_{\mathbf{x}}$.
Then by assumptions, there is $\displaystyle F_{k}\in\mathscr{E}$ such that
$\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\leq x_{(k)}$ and
$\displaystyle\mu(F_{k}^{c})\leq\mu(E_{(k+1)})$. Thus
$\displaystyle\mu(F_{k}^{c})\in\\{\mu(E^{c}):\,\mathsf{A}(\mathbf{x}|E)\leq\alpha,\,E\in\mathscr{E}\\}$
for any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$. Hence,
$\displaystyle\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\min\left\\{\mu(E^{c}):\mathsf{A}(\mathbf{x}|E)\leq\alpha,E\in\mathscr{E}\right\\}\leq\mu(F_{k}^{c})\leq\mu(E_{(k+1)})=\mu(\\{\mathbf{x}>\alpha\\})\text{}$
for any $\displaystyle\alpha\in[x_{(k)},x_{(k+1)})$.
Let us prove the reverse implication $\displaystyle\Leftarrow$. Let
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$. Then from this fact and from (4)
it follows:
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},x_{(k)})\leq\mu(\\{\mathbf{x}>x_{(k)}\\})=\mu(E_{(k+1)})$
for any $\displaystyle k\in\Psi_{\mathbf{x}}$. As
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},x_{(k)})=\min\left\\{\mu(E^{c}):\mathsf{A}(\mathbf{x}|E)\leq
x_{(k)},E\in\mathscr{E}\right\\}$, there exists $\displaystyle
F_{k}\in\mathscr{E}$ such that $\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\leq
x_{(k)}$ and $\displaystyle\mu(F_{k}^{c})\leq\mu(E_{(k+1)})$.
###### Corollary 3.11
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be an
FCA.
1. i)
If $\displaystyle\mathrm{(C2)}$ holds, then $\displaystyle\mathrm{(C3)}$ is
equivalent to $\displaystyle\mathrm{(C4)}$.
2. ii)
$\displaystyle\mathrm{(C2)}$ and $\displaystyle\mathrm{(C3)}$ hold if and only
if
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
3. iii)
$\displaystyle\mathrm{(C2)}$ and $\displaystyle\mathrm{(C4)}$ hold if and only
if
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. It is enough to prove part i), more precisely, the implication
$\displaystyle\mathrm{(C3)}\Rightarrow\mathrm{(C4)}$. Let
$\displaystyle\mathrm{(C3)}$ be satisfied; we show that
$\displaystyle\mu(F_{k}^{c})=\mu(E_{(k+1)})$ holds for any $\displaystyle
k\in\Psi_{\mathbf{x}}$. Since for any $\displaystyle F_{k}\in\mathscr{E}$,
$\displaystyle k\in\Psi_{\mathbf{x}}$, we have
$\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\leq x_{(k)}<x_{(k+1)}$, from
$\displaystyle\mathrm{(C2)}$ we get
$\displaystyle\mu(F_{k}^{c})\geq\mu(E_{(k+1)})$. On the other hand, from
$\displaystyle\mathrm{(C3)}$ we have
$\displaystyle\mu(F_{k}^{c})\leq\mu(E_{(k+1)})$.
###### Remark 3.12
At the end of this main part let us remark that some above mentioned results
are true also without constructing $\displaystyle\Psi_{\mathbf{x}}$ system.
Let us denote:
1. $\displaystyle(\widetilde{\mathrm{C}}1)$
For any $\displaystyle i\in[n]\cup\\{0\\}$ there exists $\displaystyle
G_{i}\in\mathscr{E}$ such that
$\displaystyle\mathsf{A}(\mathbf{x}|G_{i})=x_{(i)}$ and
$\displaystyle\mu(G_{i}^{c})=\mu(E_{(i+1)})$.
2. $\displaystyle(\widetilde{\mathrm{C}}2)$
For any $\displaystyle i\in[n]\cup\\{0\\}$ and for any $\displaystyle
E\in\mathscr{E}$ it holds:
$\displaystyle\mathsf{A}(\mathbf{x}|E)<x_{(i+1)}\Rightarrow\mu(E^{c})\geq\mu(E_{(i+1)}).$
3. $\displaystyle(\widetilde{\mathrm{C}}3)$
For any $\displaystyle i\in[n]\cup\\{0\\}$ there exists $\displaystyle
F_{i}\in\mathscr{E}$ such that $\displaystyle\mathsf{A}(\mathbf{x}|F_{i})\leq
x_{(i)}$ and $\displaystyle\mu(F_{i}^{c})\leq\mu(E_{(i+1)})$.
Then Proposition 3.6 and Corollary 3.11 (ii) remain true, although the
requirements in $\displaystyle(\widetilde{\mathrm{C}}1)$,
$\displaystyle(\widetilde{\mathrm{C}}2)$,
$\displaystyle(\widetilde{\mathrm{C}}3)$ will be redundant for some
$\displaystyle i\in[n]\cup\\{0\\}$ (namely for those $\displaystyle
i\in[n]\cup\\{0\\}$ with $\displaystyle x_{(i)}=x_{(i+1)}$; compare with the
motivation for introducing the system $\displaystyle\Psi_{\mathbf{x}}$). On
the other hand, Corollary 3.11 (i), (iii) need not be satisfied in general.
Inequalities: Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be an
FCA.
1. i)
If $\displaystyle(\widetilde{\mathrm{C}}1)$ holds, then
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
2. ii)
$\displaystyle(\widetilde{\mathrm{C}}2)$ holds if and only if
$\displaystyle\mu(\\{\mathbf{x}>\alpha\\})\leq\mu_{\mathscr{A}}(\mathbf{x},\alpha)$
for any $\displaystyle\alpha\in[0,\infty)$.
Sufficient and necessary condition: Let
$\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be an
FCA. $\displaystyle(\widetilde{\mathrm{C}}2)$ and
$\displaystyle(\widetilde{\mathrm{C}}3)$ hold if and only if
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
### 3.1 Equality of the generalized and the standard survival function: further results
In this subsection we provide further results on the indistinguishability of
survival functions. Considering the formula (4) of the standard survival
function, one can observe that the same value of the monotone measure may be
achieved on several intervals. These intervals can be joined together, and
thus we obtain an even shorter formula for the survival function, see
Proposition 3.14 i), which allows us to formulate further results. Let us
define the system
$\displaystyle\Psi_{\mathbf{x}}^{*}\subseteq\Psi_{\mathbf{x}}$ as follows:
$\displaystyle\displaystyle\Psi_{\mathbf{x}}^{*}:=\\{k\in\Psi_{\mathbf{x}}\setminus\\{\min\Psi_{\mathbf{x}}\\}:\,\mu(E_{(j+1)})>\mu(E_{(k+1)}),j<k,j\in\Psi_{\mathbf{x}}\\}\cup\\{\min\Psi_{\mathbf{x}}\\}$
(6)
(compare with the definition of the system $\displaystyle\Psi_{\mathbf{x}}$,
which is analogous, but with the main condition concentrated on the components
of $\displaystyle\mathbf{x}$ instead of the values of $\displaystyle\mu$). Let
us give an example of the calculation of the system
$\displaystyle\Psi_{\mathbf{x}}^{*}$, considering the inputs from Example 3.9,
for which $\displaystyle\Psi_{\mathbf{x}}=\\{0,1,2,3\\}$. By the definition of
$\displaystyle\Psi_{\mathbf{x}}^{*}$ we have
$\displaystyle\min\Psi_{\mathbf{x}}=0\in\Psi_{\mathbf{x}}^{*}$. For
$\displaystyle k=1,3$ the inequality
$\displaystyle\mu(E_{(j+1)})>\mu(E_{(k+1)})$ holds for all $\displaystyle
j<k$, $\displaystyle j\in\Psi_{\mathbf{x}}$; however, for $\displaystyle k=2$
we have $\displaystyle\mu(E_{(2)})=\mu(E_{(3)})$. Thus $\displaystyle
2\notin\Psi_{\mathbf{x}}^{*}$. In summary,
$\displaystyle\Psi_{\mathbf{x}}^{*}=\\{0,1,3\\}$.
For the purposes of this subsection, for any $\displaystyle
k\in\Psi_{\mathbf{x}}^{*}$ let us denote
$\displaystyle\displaystyle
l_{k}:=\max\\{j\in\Psi_{\mathbf{x}}:\,\mu(E_{(j+1)})=\mu(E_{(k+1)})\\}.$ (7)
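The systems (6) and (7) are again straightforward to compute. A sketch reusing `psi` and `ordered_data` from the earlier snippets, verified on the inputs of Example 3.9 (`x39`, `mu39`):

```python
def psi_star_and_l(x, mu, universe):
    """Psi*_x from (6) together with the indices l_k from (7)."""
    _, _, E = ordered_data(x, universe)
    P = psi(x, universe)
    mE = {k: mu[E(k + 1)] for k in P}                 # k -> mu(E_(k+1))
    m0 = min(P)
    P_star = {k for k in P
              if k == m0 or all(mE[j] > mE[k] for j in P if j < k)}
    l = {k: max(j for j in P if mE[j] == mE[k]) for k in P_star}
    return P_star, l

P_star, l = psi_star_and_l(x39, mu39, N)
assert P_star == {0, 1, 3} and l == {0: 0, 1: 2, 3: 3}
```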
###### Proposition 3.13
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in\mathbf{M}$.
1. i)
$\displaystyle x_{(\min\Psi_{\mathbf{x}}^{*})}=0$.
2. ii)
$\displaystyle\left\\{[x_{(k)},x_{(l_{k}+1)}):k\in\Psi_{\mathbf{x}}^{*}\right\\}$
with $\displaystyle l_{k}$ given by (7) and with the convention $\displaystyle
x_{(n+1)}=\infty$ is a decomposition of interval $\displaystyle[0,\infty)$
into nonempty pairwise disjoint sets.
3. iii)
$\displaystyle(\forall
k\in\Psi_{\mathbf{x}}^{*}\setminus\\{\min\Psi_{\mathbf{x}}^{*}\\})$
$\displaystyle(\exists r\in\Psi_{\mathbf{x}}^{*},r<k)$ $\displaystyle
x_{(k)}=x_{(l_{r}+1)}$. Moreover,
$\displaystyle\mu(E_{(k+1)})<\mu(E_{(l_{r}+1)})$.
4. iv)
If $\displaystyle\mu$ is such that it is strictly monotone on
$\displaystyle\\{E_{(k+1)}:k\in\Psi_{\mathbf{x}}\\}$, then
$\displaystyle\Psi_{\mathbf{x}}=\Psi_{\mathbf{x}}^{*}$.
Proof. Part i) follows from Proposition 3.1 iii): since
$\displaystyle\Psi_{\mathbf{x}}^{*}\subseteq\Psi_{\mathbf{x}}$ and
$\displaystyle\min\Psi_{\mathbf{x}}\in\Psi_{\mathbf{x}}^{*}$, we have
$\displaystyle\min\Psi_{\mathbf{x}}=\min\Psi_{\mathbf{x}}^{*}$. The proof of
ii) follows from Proposition 3.1 part iv) and from the fact that each partial
interval $\displaystyle[x_{(k)},x_{(l_{k}+1)})$, $\displaystyle
k\in\Psi_{\mathbf{x}}^{*}$, can be rewritten as
$\displaystyle[x_{(k)},x_{(l_{k}+1)})=\bigcup_{j=k,j\in\Psi_{\mathbf{x}}}^{l_{k}}[x_{(j)},x_{(j+1)}).$
The equality $\displaystyle x_{(k)}=x_{(l_{r}+1)}$ in part iii) follows from
ii) with $\displaystyle
r=\max\\{j\in\Psi_{\mathbf{x}}^{*}:x_{(j)}<x_{(k)}\\}$. Moreover, it holds
that $\displaystyle\mu(E_{(l_{r}+1)})=\mu(E_{(r+1)})>\mu(E_{(k+1)})$, where
the equality holds because of (7) and the inequality is true due to
$\displaystyle r<k$, $\displaystyle r,k\in\Psi_{\mathbf{x}}^{*}$. Part iv)
follows from (6).
###### Proposition 3.14
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in\mathbf{M}$.
1. i)
Then
$\mu(\\{\mathbf{x}>\alpha\\})=\sum_{k\in\Psi_{\mathbf{x}}^{*}}\mu\left(E_{(k+1)}\right)\cdot\mathbf{1}_{[x_{(k)},x_{(l_{k}+1)})}(\alpha)$
(8)
for any $\displaystyle\alpha\in[0,\infty)$ with $\displaystyle l_{k}$ given by
(7) and with the convention $\displaystyle x_{(n+1)}=\infty$.
2. ii)
If
$\displaystyle\mathscr{E}\supseteq\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}^{*}\\}$,
then
$\displaystyle\mu_{\mathscr{A}^{\mathrm{max}}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. Part ii) can be proved analogously to Proposition 3.2 part ii). Part i)
follows from the fact that each partial interval
$\displaystyle[x_{(k)},x_{(l_{k}+1)})$, $\displaystyle
k\in\Psi_{\mathbf{x}}^{*}$, can be rewritten as
$\displaystyle[x_{(k)},x_{(l_{k}+1)})=\bigcup_{j=k,j\in\Psi_{\mathbf{x}}}^{l_{k}}[x_{(j)},x_{(j+1)}).$
From formula (4) and from the definition of $\displaystyle l_{k}$ we get
$\displaystyle\mu(\\{\mathbf{x}>\alpha\\})=\mu(E_{(j+1)})=\mu(E_{(k+1)})$
for any $\displaystyle\alpha\in[x_{(j)},x_{(j+1)})$ with $\displaystyle
j\in\Psi_{\mathbf{x}}$, $\displaystyle k\leq j\leq l_{k}$.
All results from the main part of this section remain true under the following
slight modification of conditions (C1), (C2), (C3) and (C4):
1. (C1∗)
For any $\displaystyle k\in\Psi_{\mathbf{x}}^{*}$ there exists $\displaystyle
G_{k}\in\mathscr{E}$ such that
$\displaystyle\mathsf{A}(\mathbf{x}|G_{k})=x_{(k)}$ and
$\displaystyle\mu(G_{k}^{c})=\mu(E_{(k+1)}).$
2. (C2∗)
For any $\displaystyle k\in\Psi_{\mathbf{x}}^{*}$ and for any $\displaystyle
E\in\mathscr{E}$ it holds:
$\displaystyle\mathsf{A}(\mathbf{x}|E)<x_{(l_{k}+1)}\Rightarrow\mu(E^{c})\geq\mu(E_{(l_{k}+1)}).$
3. (C3∗)
For any $\displaystyle k\in\Psi_{\mathbf{x}}^{*}$ there exists $\displaystyle
F_{k}\in\mathscr{E}$ such that $\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\leq
x_{(k)}$ and $\displaystyle\mu(F_{k}^{c})\leq\mu(E_{(k+1)})$.
4. (C4∗)
For any $\displaystyle k\in\Psi_{\mathbf{x}}^{*}$ there exists $\displaystyle
F_{k}\in\mathscr{E}$ such that $\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\leq
x_{(k)}$ and $\displaystyle\mu(F_{k}^{c})=\mu(E_{(k+1)})$.
In the following we summarize all modifications of the results from the main
part of this section. Since the proofs of parts i) – vii) are based on the
same ideas, we omit them. A comparison of these results with those obtained in
the main part can be found in Remark 3.16.
###### Proposition 3.15
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be an
FCA.
1. i)
If $\displaystyle\mathrm{(C1^{*})}$ holds, then
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
2. ii)
$\displaystyle\mathrm{(C2^{*})}$ holds if and only if
$\displaystyle\mu(\\{\mathbf{x}>\alpha\\})\leq\mu_{\mathscr{A}}(\mathbf{x},\alpha)$
for any $\displaystyle\alpha\in[0,\infty)$.
3. iii)
If $\displaystyle\mathrm{(C1^{*})}$ and $\displaystyle\mathrm{(C2^{*})}$ are
satisfied, then
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
4. iv)
$\displaystyle\mathrm{(C3^{*})}$ holds if and only if
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
5. v)
$\displaystyle\mathrm{(C2^{*})}$ and $\displaystyle\mathrm{(C3^{*})}$ hold if
and only if
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
6. vi)
If $\displaystyle\mathrm{(C2^{*})}$ holds, then
$\displaystyle\mathrm{(C3^{*})}$ is equivalent to
$\displaystyle\mathrm{(C4^{*})}$.
7. vii)
$\displaystyle\mathrm{(C2^{*})}$ and $\displaystyle\mathrm{(C4^{*})}$ hold if
and only if
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
8. viii)
$\displaystyle\mathrm{(C2)}$ holds if and only if
$\displaystyle\mathrm{(C2^{*})}$ holds.
Proof. The implication (C2) $\displaystyle\Rightarrow$ (C2∗) of part viii) is
clear. We prove the reverse implication. Let us consider any set
$\displaystyle E\in\mathscr{E}$ such that
$\displaystyle\mathsf{A}(\mathbf{x}|E)<x_{(k+1)}$ for some $\displaystyle
k\in\Psi_{\mathbf{x}}$. Let us define
$\displaystyle
j_{k}=\min\\{j\in\Psi_{\mathbf{x}}:\mu(E_{(j+1)})=\mu(E_{(k+1)})\\}.$
It is easy to see that $\displaystyle j_{k}\in\Psi_{\mathbf{x}}^{*}$,
$\displaystyle l_{j_{k}}\geq k\geq j_{k}$. Moreover,
$\displaystyle\mu(E_{(l_{j_{k}}+1)})=\mu(E_{(k+1)})=\mu(E_{(j_{k}+1)})$ and
$\displaystyle x_{(k+1)}\leq x_{(l_{j_{k}}+1)}$. Then from (C2∗) we have
$\displaystyle\mu(E^{c})\geq\mu(E_{(l_{j_{k}}+1)})=\mu(E_{(k+1)})$.
###### Remark 3.16
In comparison with the results in the main part of this section, the advantage
of the previous statements lies in their efficiency for testing equality or
inequality of survival functions. In particular, Proposition 3.15 vii)
requires the same properties as Corollary 3.11 iii), however only for a
smaller number of sets, $\displaystyle
k\in\Psi_{\mathbf{x}}^{*}\subseteq\Psi_{\mathbf{x}}$. On the other hand, the
equality (inequality) of survival functions yields more information than is
included in Proposition 3.15: the corresponding results are true for any
$\displaystyle k\in\Psi_{\mathbf{x}}$, not only for $\displaystyle
k\in\Psi_{\mathbf{x}}^{*}$. Moreover, the system
$\displaystyle\Psi_{\mathbf{x}}$ is also simpler to define.
We have seen in the main part of this section that
$\displaystyle\mathrm{(C1)}$ and $\displaystyle\mathrm{(C2)}$ are not
necessary for equality between survival functions in general, see Corollary
3.7 and Example 3.9. We improved this result by replacing
$\displaystyle\mathrm{(C1)}$ with $\displaystyle\mathrm{(C4)}$. Corollary 3.7
can also be improved as follows.
###### Theorem 3.17
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be
FCA. Then the following assertions are equivalent:
1. i)
$\displaystyle\mathrm{(C1^{*})}$, $\displaystyle\mathrm{(C2^{*})}$ are
satisfied.
2. ii)
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. The implication $\displaystyle\text{i)}\Rightarrow\text{ii)}$ follows
from Proposition 3.15 iii). In order to prove the reverse implication, assume
that survival functions are equal. Then (C2∗) follows from Proposition 3.15
vii). It is enough to prove (C1∗). From Proposition 3.15 vii) we have:
For any $\displaystyle k\in\Psi_{\mathbf{x}}^{*}$ there exists $\displaystyle
F_{k}\in\mathscr{E}$ such that $\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\leq
x_{(k)}$ and $\displaystyle\mu(F_{k}^{c})=\mu(E_{(k+1)})$.
We show that $\displaystyle\mathsf{A}(\mathbf{x}|F_{k})=x_{(k)}$. Indeed, for
$\displaystyle k=\min\Psi_{\mathbf{x}}^{*}$ the result is immediate, since
$\displaystyle
0\leq\mathsf{A}(\mathbf{x}|F_{\min\Psi_{\mathbf{x}}^{*}})\leq
x_{(\min\Psi_{\mathbf{x}}^{*})}=0$, where the last equality follows from
Proposition 3.13 i). Let $\displaystyle k>\min\Psi_{\mathbf{x}}^{*}$,
$\displaystyle k\in\Psi_{\mathbf{x}}^{*}$. From Proposition 3.13 iii) there
exists $\displaystyle r\in\Psi_{\mathbf{x}}^{*},r<k$ such that $\displaystyle
x_{(l_{r}+1)}=x_{(k)}$ and
$\displaystyle\mu(F_{k}^{c})=\mu(E_{(k+1)})<\mu(E_{(l_{r}+1)})$. From
contraposition to (C2∗) we have $\displaystyle\mathsf{A}(\mathbf{x}|F_{k})\geq
x_{(l_{r}+1)}=x_{(k)}$.
###### Corollary 3.18
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$ that is strictly monotone on
$\displaystyle\\{E_{(k+1)}:k\in\Psi_{\mathbf{x}}\\}$, and let
$\displaystyle{\mathscr{A}}$ be FCA. Then the following assertions are
equivalent:
1. i)
$\displaystyle\mathrm{(C1)}$, $\displaystyle\mathrm{(C2)}$ are satisfied.
2. ii)
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. It follows from Proposition 3.13 iv) and Theorem 3.17.
A summary of relationships among some conditions, as well as a summary of
sufficient and necessary conditions under which survival functions coincide or
under which they are pointwise comparable with respect to $\displaystyle\leq$,
$\displaystyle\geq$, can be found in the Appendix, see Table 2.
## 4 Equality characterization
The results of the previous section stated conditions, depending on the FCA
$\displaystyle{\mathscr{A}}$, the input vector $\displaystyle\mathbf{x}$ and
the monotone measure $\displaystyle\mu$, under which equality between survival
functions holds. Of course, when one changes the monotone measure while the
other inputs stay the same, the equality can fail, as the following example
shows.
###### Example 4.1
Let us consider
$\displaystyle{\mathscr{A}}^{\mathrm{sum}}=\\{\mathsf{A}^{\mathrm{sum}}(\cdot|E):E\in
2^{[3]}\\}$, and normalized monotone measure $\displaystyle\mu$ on
$\displaystyle 2^{[3]}$ with the following values:
$\displaystyle E$ | $\displaystyle\\{1,2,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{1\\}$ | $\displaystyle\emptyset$
---|---|---|---|---|---|---|---|---
$\displaystyle E^{c}$ | $\displaystyle\emptyset$ | $\displaystyle\\{1\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,2,3\\}$
$\displaystyle\mu(E^{c})$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0.5$ | $\displaystyle 0.5$ | $\displaystyle 1$
$\displaystyle\nu(E^{c})$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0$ | $\displaystyle 0.5$ | $\displaystyle 0.5$ | $\displaystyle 0.5$ | $\displaystyle 1$
$\displaystyle\mathsf{A}^{\mathrm{sum}}(\mathbf{x}|E)$ | $\displaystyle 4$ | $\displaystyle 3$ | $\displaystyle 2$ | $\displaystyle 3$ | $\displaystyle 1$ | $\displaystyle 2$ | $\displaystyle 1$ | $\displaystyle 0$
$\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$ | $\displaystyle 2$ | $\displaystyle 2$ | $\displaystyle 1$ | $\displaystyle 2$ | $\displaystyle 1$ | $\displaystyle 2$ | $\displaystyle 1$ | $\displaystyle 0$
Further, let us take the input vector $\displaystyle\mathbf{x}=(1,2,1)$. Then
we can see
$\displaystyle\mu_{\mathscr{A}^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,1)}(\alpha)=\mu(\\{\mathbf{x}>\alpha\\}),\quad\alpha\in[0,\infty),$
but
$\displaystyle\nu_{{\mathscr{A}}^{\mathrm{sum}}}(\mathbf{x},\alpha)=\mathbf{1}_{[0,1)}(\alpha)+0{.}5\,\mathbf{1}_{[1,2)}(\alpha)\not=\mathbf{1}_{[0,1)}(\alpha)=\nu(\\{\mathbf{x}>\alpha\\}),\quad\alpha\in[0,\infty).$
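To make this example concrete, the following minimal Python sketch recomputes
both survival functions. It assumes the definition of the generalized survival
function from [1],
$\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\min\\{\mu(E^{c}):E\in\mathscr{E},\,\mathsf{A}(\mathbf{x}|E)\leq\alpha\\}$,
here with $\mathscr{E}=2^{[3]}$; all function names are ours.
```python
from itertools import chain, combinations

def subsets(base):
    """All subsets of a finite set, as frozensets."""
    items = list(base)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

X = frozenset({1, 2, 3})
x = {1: 1, 2: 2, 3: 1}                        # the input vector x = (1, 2, 1)

A_sum = {E: sum(x[i] for i in E) for E in subsets(X)}   # A^sum(x|E)

# the two monotone measures of Example 4.1, tabulated above via mu(E^c)
mu = {frozenset(): 0.0, frozenset({1}): 0.0, frozenset({2}): 0.0,
      frozenset({3}): 0.0, frozenset({1, 2}): 0.0, frozenset({1, 3}): 0.5,
      frozenset({2, 3}): 0.5, X: 1.0}
nu = dict(mu)
nu[frozenset({1, 2})] = 0.5                   # nu differs from mu on {1,2} only

def gen_survival(m, alpha):
    """Generalized survival function (assumed definition from [1]):
    m_A(x, alpha) = min{ m(E^c) : A(x|E) <= alpha }."""
    return min(m[X - E] for E in subsets(X) if A_sum[E] <= alpha)

def survival(m, alpha):
    """Standard survival function m({x > alpha})."""
    return m[frozenset(i for i in X if x[i] > alpha)]

# for alpha in [1, 2): mu agrees (0 == 0) while nu does not (0.5 != 0)
for alpha in (0.0, 0.5, 1.0, 1.5, 2.0):
    print(alpha, gen_survival(mu, alpha), survival(mu, alpha),
          gen_survival(nu, alpha), survival(nu, alpha))
```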
In the following we shall find sufficient and necessary conditions on
$\displaystyle{\mathscr{A}}$ and $\displaystyle\mathbf{x}$ under which the
survival functions are equal for any monotone measure; thus we answer Problem
2, see Theorem 4.2 and Theorem 4.6. In the second step we characterize the FCA
for which the survival functions are equal for any monotone measure and any
input vector, answering Problem 3.
###### Theorem 4.2
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$, and
$\displaystyle{\mathscr{A}}$ be FCA. Then the following assertions are
equivalent:
1. i)
$\displaystyle\mathscr{E}\supseteq\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}$
and
$\displaystyle\mathsf{A}(\mathbf{x}|E)=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
for any $\displaystyle E=E_{(k+1)}^{c}$ with $\displaystyle
k\in\Psi_{\mathbf{x}}$,
$\displaystyle\mathsf{A}(\mathbf{x}|E)\geq\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
otherwise.
2. ii)
For each $\displaystyle\mu\in\mathbf{M}$ that is strictly monotone on
$\displaystyle\\{E_{(k+1)}:k\in\Psi_{\mathbf{x}}\\}$ it holds
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
3. iii)
For each $\displaystyle\mu\in\mathbf{M}$ it holds
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. The implication $\displaystyle\text{i)}\Rightarrow\text{iii)}$ follows
easily from Corollary 3.7. Indeed, for any $\displaystyle
k\in\Psi_{\mathbf{x}}$, (C1) is satisfied with $\displaystyle
G_{k}=E_{(k+1)}^{c}$. If $\displaystyle\mathsf{A}(\mathbf{x}|E)<x_{(k+1)}$,
$\displaystyle k\in\Psi_{\mathbf{x}}$ and $\displaystyle E\in\mathscr{E}$,
then from assumptions we have
$\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)\leq\mathsf{A}(\mathbf{x}|E)<x_{(k+1)}.$
Then we get $\displaystyle E\subseteq E_{(k+1)}^{c}$, i.e., $\displaystyle
E^{c}\supseteq E_{(k+1)}$ and for each monotone measure $\displaystyle\mu$ we
have $\displaystyle\mu(E^{c})\geq\mu(E_{(k+1)})$. Thus (C2) is also satisfied.
Let us prove the implication $\displaystyle\text{ii)}\Rightarrow\text{i)}$.
Since the assumption holds for any $\displaystyle\mu\colon
2^{[n]}\to[0,\infty)$ that is a strictly monotone measure on
$\displaystyle\\{E_{(k+1)}:k\in\Psi_{\mathbf{x}}\\}$, it holds in particular
for any $\displaystyle\mu$ that is strictly monotone on the whole of
$\displaystyle 2^{[n]}$.
From Corollary 3.18 (C1) holds. Moreover, since sets $\displaystyle E_{(k+1)}$
are the only sets with value equal to $\displaystyle\mu(E_{(k+1)})$, we get
$\displaystyle G_{k}=E_{(k+1)}^{c}$ and
$\displaystyle\mathscr{E}\supseteq\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}$.
So, from (C1) we have
$\displaystyle\mathsf{A}(\mathbf{x}|E_{(k+1)}^{c})=x_{(k)}=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E_{(k+1)}^{c})$
for any $\displaystyle k\in\Psi_{\mathbf{x}}$. Let us prove the second part of
i). Again, if the equality between survival functions holds for any strictly
monotone measure $\displaystyle\mu$ on
$\displaystyle\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}$, then it holds for
$\displaystyle\mu\colon 2^{[n]}\to[0,\infty)$ being strictly monotone on the
above mentioned collection with values:
$\displaystyle\mu(E)=\mu(E_{(k+1)})\,\,\text{for any set}\,\,E\,\,\text{such
that}\,\,\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E^{c})=x_{(k)},\text{
}k\in\Psi_{\mathbf{x}}.$
Let $\displaystyle E\in\mathscr{E}$. Then according to Proposition 3.1 ii)
there exists $\displaystyle k\in\Psi_{\mathbf{x}}\setminus\\{0\\}$ such that
$\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)=x_{(k)}.$
Since $\displaystyle\mu$ is strictly monotone on
$\displaystyle\\{E_{(k+1)}:k\in\Psi_{\mathbf{x}}\\}$, then from Proposition
3.13 iv) we have $\displaystyle\Psi_{\mathbf{x}}=\Psi_{\mathbf{x}}^{*}$.
Further, from Proposition 3.13 i), if
$\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)=x_{(\min\Psi_{\mathbf{x}}^{*})}=0$
the result is trivial. Let $\displaystyle k>\min\Psi_{\mathbf{x}}^{*}$. Then
from Proposition 3.13 iii) there exists $\displaystyle
r\in\Psi_{\mathbf{x}}^{*},r<k$ such that $\displaystyle x_{(l_{r}+1)}=x_{(k)}$
and $\displaystyle\mu(E_{(k+1)})<\mu(E_{(l_{r}+1)})$. Therefore
$\displaystyle\mu(E^{c})=\mu(E_{(k+1)})<\mu(E_{(l_{r}+1)})$ and from
contraposition to (C2∗) we have $\displaystyle\mathsf{A}(\mathbf{x}|E)\geq
x_{(l_{r}+1)}=x_{(k)}=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$.
###### Remark 4.3
From the previous theorem one can see another sufficient condition under
which the standard and generalized survival functions coincide, namely
condition i). Let us remark that this sufficient condition is stricter than
$\displaystyle\mathrm{(C1)}$ and $\displaystyle\mathrm{(C2)}$: if i) is
satisfied, then $\displaystyle\mathrm{(C1)}$ and $\displaystyle\mathrm{(C2)}$
hold true; however, the reverse implication need not be true in general, see
Example 3.8.
According to the previous result, there are vectors for which the equality
between survival functions (for any $\displaystyle\mu$) does not force
$\displaystyle{\mathscr{A}}$ to coincide with
$\displaystyle{\mathscr{A}}^{\text{max}}$.
###### Example 4.4
Let us consider $\displaystyle{\mathscr{A}}=\\{\mathsf{A}(\cdot|E):E\in
2^{[3]}\\}$ with conditional aggregation operator from Example 2.2 iii) with
$\displaystyle\mathbf{w}=(0.5,0.5,1)$, $\displaystyle\mathbf{z}=(0.5,0.25,1)$.
Let us take the input vector $\displaystyle\mathbf{x}=(2,3,4)$. The values of
$\displaystyle\mathsf{A}(\mathbf{x}|E)$, $\displaystyle E\in 2^{[3]}$, are
summarized in the following table:
$\displaystyle E$ | $\displaystyle\\{1,2,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{1\\}$ | $\displaystyle\emptyset$
---|---|---|---|---|---|---|---|---
$\displaystyle E^{c}$ | $\displaystyle\emptyset$ | $\displaystyle\\{1\\}$ | $\displaystyle\\{2\\}$ | $\displaystyle\\{3\\}$ | $\displaystyle\\{1,2\\}$ | $\displaystyle\\{1,3\\}$ | $\displaystyle\\{2,3\\}$ | $\displaystyle\\{1,2,3\\}$
$\displaystyle\mathsf{A}(\mathbf{x}|E)$ | $\displaystyle 4$ | $\displaystyle 4$ | $\displaystyle 4$ | $\displaystyle 3$ | $\displaystyle 4$ | $\displaystyle 6$ | $\displaystyle 2$ | $\displaystyle 0$
Then $\displaystyle\Psi_{\mathbf{x}}=\\{0,1,2,3\\}$ and it holds
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
$\displaystyle=\mu(\\{1,2,3\\})\cdot\mathbf{1}_{[0,2)}(\alpha)+\mu(\\{2,3\\})\cdot\mathbf{1}_{[2,3)}(\alpha)+\mu(\\{3\\})\cdot\mathbf{1}_{[3,4)}(\alpha)$
$\displaystyle=\mu(E_{(1)})\cdot\mathbf{1}_{[0,2)}(\alpha)+\mu(E_{(2)})\cdot\mathbf{1}_{[2,3)}(\alpha)+\mu(E_{(3)})\cdot\mathbf{1}_{[3,4)}(\alpha)$
for any $\displaystyle\alpha\in[0,\infty)$ and any monotone measure
$\displaystyle\mu$. So, we have shown that there is a vector
$\displaystyle\mathbf{x}$ and an FCA
$\displaystyle{\mathscr{A}}\neq{\mathscr{A}}^{\mathrm{max}}$ such that
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any monotone measure $\displaystyle\mu$. Indeed, $\displaystyle
6=\mathsf{A}(\mathbf{x}|\\{2\\})>\mathsf{A}^{\mathrm{max}}(\mathbf{x}|\\{2\\})=3$
(while $\displaystyle\mathsf{A}(\mathbf{x}|E)=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
for any $\displaystyle E\in 2^{[3]}\setminus\\{\\{2\\}\\}$).
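The claim that the equality holds for every monotone measure can also be
checked numerically. The sketch below is our own illustration: it reuses the
tabulated values of $\mathsf{A}(\mathbf{x}|E)$ above and the same assumed
definition of $\mu_{\mathscr{A}}$ from [1], and tests the equality against
randomly generated monotone measures.
```python
from itertools import chain, combinations
import random

def subsets(base):
    items = list(base)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

X = frozenset({1, 2, 3})
x = {1: 2, 2: 3, 3: 4}                        # the input vector x = (2, 3, 4)

# conditional aggregation values A(x|E) taken from the table above
A = {frozenset({1, 2, 3}): 4, frozenset({2, 3}): 4, frozenset({1, 3}): 4,
     frozenset({1, 2}): 3, frozenset({3}): 4, frozenset({2}): 6,
     frozenset({1}): 2, frozenset(): 0}

def random_monotone_measure(rng):
    """A random monotone measure: mu(E) = max of i.i.d. uniform weights over
    subsets of E; monotone by construction, with mu(emptyset) = 0."""
    w = {E: rng.random() for E in subsets(X)}
    w[frozenset()] = 0.0
    return {E: max(w[F] for F in subsets(E)) for E in subsets(X)}

def gen_survival(mu, alpha):
    # assumed definition from [1]: min{ mu(E^c) : A(x|E) <= alpha }
    return min(mu[X - E] for E in subsets(X) if A[E] <= alpha)

def survival(mu, alpha):
    return mu[frozenset(i for i in X if x[i] > alpha)]

rng = random.Random(1)
alphas = (0, 1, 2, 2.5, 3, 3.5, 4, 5, 6, 7)
for _ in range(1000):
    mu = random_monotone_measure(rng)
    assert all(gen_survival(mu, a) == survival(mu, a) for a in alphas)
print("equality holds for all sampled monotone measures")
```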
Aggregation functions that are bounded from below by the maximum are called
disjunctive in the literature, see [11]. For an FCA
$\displaystyle{\mathscr{A}}$ nondecreasing w.r.t. sets we get an interesting
consequence.
###### Lemma 4.5
Let $\displaystyle{\mathscr{A}}=\\{\mathsf{A}(\cdot|E):E\in 2^{[n]}\\}$ be FCA
nondecreasing w.r.t. sets. If for $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$
it holds that
$\displaystyle\mathsf{A}(\mathbf{x}|E)=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
for any $\displaystyle E=E_{(k+1)}^{c}$ with $\displaystyle
k\in\Psi_{\mathbf{x}}$ and
$\displaystyle\mathsf{A}(\mathbf{x}|E)\geq\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
otherwise, then
$\displaystyle\mathsf{A}(\mathbf{x}|E)=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
for any $\displaystyle E\in 2^{[n]}$.
Proof. Let us consider an arbitrary set $\displaystyle E\in\mathscr{E}$ and
denote $\displaystyle\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E):=x_{s}$.
According to Proposition 3.1 there exists $\displaystyle
k_{s}\in\Psi_{\mathbf{x}}$ such that $\displaystyle x_{s}=x_{(k_{s})}$, and
then $\displaystyle E\subseteq E_{(k_{s}+1)}^{c}$. From the above and from
Theorem 4.2 we have
$\displaystyle
x_{(k_{s})}=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E_{(k_{s}+1)}^{c})=\mathsf{A}(\mathbf{x}|E_{(k_{s}+1)}^{c})\geq\mathsf{A}(\mathbf{x}|E)\geq\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)=x_{(k_{s})}.$
###### Theorem 4.6
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$, and
$\displaystyle{\mathscr{A}}$ be FCA nondecreasing w.r.t. sets. Then the
following assertions are equivalent:
1. i)
$\displaystyle\mathscr{E}\supseteq\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}$
and
$\displaystyle\mathsf{A}(\mathbf{x}|E)=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
for any set $\displaystyle E\in\mathscr{E}$.
2. ii)
For each $\displaystyle\mu\in\mathbf{M}$ it holds
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. The implication $\displaystyle\text{i)}\Rightarrow\text{ii)}$ follows
from Proposition 3.2 ii). The reverse implication follows from Theorem 4.2 and
Lemma 4.5.
Let us return to Example 4.4. We have shown that for the input vector
$\displaystyle\mathbf{x}=(2,3,4)$ and $\displaystyle{\mathscr{A}}$ given in
that example,
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\mu$. However, for another vector, say
$\displaystyle\mathbf{y}=(2,5,4)$, the equality can fail:
$\displaystyle\mu_{\mathscr{A}}(\mathbf{y},\alpha)$
$\displaystyle=\mu(\\{1,2,3\\})\cdot\mathbf{1}_{[0,2)}(\alpha)+\mu(\\{2,3\\})\cdot\mathbf{1}_{[2,4)}(\alpha),$
$\displaystyle\mu(\\{\mathbf{y}>\alpha\\})$
$\displaystyle=\mu(\\{1,2,3\\})\cdot\mathbf{1}_{[0,2)}(\alpha)+\mu(\\{2,3\\})\cdot\mathbf{1}_{[2,4)}(\alpha)+\mu(\\{2\\})\cdot\mathbf{1}_{[4,5)}(\alpha),$
i.e.,
$\displaystyle\mu_{\mathscr{A}}(\mathbf{y},\alpha)=\mu(\\{\mathbf{y}>\alpha\\})$
does not hold for every $\displaystyle\mu$. In the following we look for the
FCA $\displaystyle{\mathscr{A}}$ for which the equality holds for any
$\displaystyle\mu$ and any $\displaystyle\mathbf{x}$; thus, as the last step,
we solve Problem 3.
###### Theorem 4.7
Let $\displaystyle{\mathscr{A}}$ be FCA. The following assertions are
equivalent:
1. i)
$\displaystyle\mathscr{A}=\\{\mathsf{A}^{\max}(\cdot|E):\,\,E\in 2^{[n]}\\}$.
2. ii)
For each $\displaystyle\mu\in\mathbf{M}$, and for each
$\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$ it holds
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$
for any $\displaystyle\alpha\in[0,\infty)$.
Proof. The implication $\displaystyle\text{i)}\Rightarrow\text{ii)}$ is
immediate. We prove $\displaystyle\text{ii)}\Rightarrow\text{i)}$. Since the
equality holds for any $\displaystyle\mathbf{x}$, according to Theorem 4.2 we
get
$\displaystyle\mathscr{E}=\bigcup_{\mathbf{x}\in[0,\infty)^{[n]}}\mathscr{E}^{\Psi_{\mathbf{x}}-\text{chain}}=2^{[n]}$
with
$\displaystyle\mathscr{E}^{\Psi_{\mathbf{x}}-\text{chain}}:=\\{E_{(k+1)}^{c}:k\in\Psi_{\mathbf{x}}\\}.$
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$ be an arbitrary fixed vector.
From Theorem 4.2 we have
$\displaystyle\mathsf{A}(\mathbf{x}|E)=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
for any $\displaystyle E\in\mathscr{E}^{\Psi_{\mathbf{x}}-\text{chain}}$ and
$\displaystyle\mathsf{A}(\mathbf{x}|E)\geq\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
for any $\displaystyle E\in
2^{[n]}\setminus\mathscr{E}^{\Psi_{\mathbf{x}}-\text{chain}}$. However, we
show that
$\displaystyle\mathsf{A}(\mathbf{x}|E)=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E)$
for any $\displaystyle E\in 2^{[n]}$. Let us consider an arbitrary fixed
$\displaystyle E\in
2^{[n]}\setminus\mathscr{E}^{\Psi_{\mathbf{x}}-\text{chain}}$ and vector
$\displaystyle\widehat{\mathbf{x}}=\mathbf{x}\mathbf{1}_{E}+a\mathbf{1}_{E^{c}},\,\,a>\max_{i\in
E}x_{i}.$
The set $\displaystyle[n]$ belongs to
$\displaystyle\mathscr{E}^{\Psi_{\mathbf{x}}-\text{chain}}$ by the definition
of $\displaystyle\Psi_{\mathbf{x}}$; therefore
$\displaystyle\widehat{\mathbf{x}}\neq\mathbf{x}$. Moreover, there exists a
permutation $\displaystyle(\cdot)$ such that $\displaystyle
0=\widehat{x}_{(0)}\leq\widehat{x}_{(1)}\leq\widehat{x}_{(2)}\leq\dots=\widehat{x}_{(\widehat{k})}<\widehat{x}_{(\widehat{k}+1)}=\dots=\widehat{x}_{(n)}=a$
with $\displaystyle\widehat{k}=|E|$. Therefore
$\displaystyle\widehat{k}\in\Psi_{\widehat{\mathbf{x}}}$, and $\displaystyle
E=\\{(1),\dots,(\widehat{k})\\}=E^{c}_{(\widehat{k}+1)}\in\mathscr{E}^{\Psi_{\widehat{\mathbf{x}}}-\text{chain}}$.
Finally, from Theorem 4.2, and because of the property
$\displaystyle\mathsf{A}(\mathbf{y}|E)=\mathsf{A}(\mathbf{y}\mathbf{1}_{E}|E)$
for any $\displaystyle\mathbf{y}\in[0,\infty)^{[n]}$, see [1], we have:
$\displaystyle\mathsf{A}(\mathbf{x}|E)$
$\displaystyle=\mathsf{A}(\mathbf{x}\mathbf{1}_{E}|E)=\mathsf{A}(\widehat{\mathbf{x}}\mathbf{1}_{E}|E)=\mathsf{A}(\widehat{\mathbf{x}}|E)=\mathsf{A}^{\mathrm{max}}(\widehat{\mathbf{x}}|E)=\mathsf{A}^{\mathrm{max}}(\widehat{\mathbf{x}}\mathbf{1}_{E}|E)$
$\displaystyle=\mathsf{A}^{\mathrm{max}}(\mathbf{x}\mathbf{1}_{E}|E)=\mathsf{A}^{\mathrm{max}}(\mathbf{x}|E).$
## 5 Conclusion
In this paper we have solved three problems dealing with the question of
equality between the survival function and the generalized survival function
based on conditional aggregation operators introduced originally in [1] (the
generalization of concepts of papers [8], [13]). We have restricted ourselves
to discrete settings. The most interesting results are Corollary 3.7,
Corollary 3.11, Proposition 3.15 and Theorem 3.17 (solutions of Problem 1),
Theorem 4.2 and Theorem 4.6 (solution of Problem 2). Results were derived from
the well-known formula of the standard survival function with a permutation
$\displaystyle(\cdot)$ playing a crucial role. As the main result, we have
determined the family of conditional aggregation operators with respect to
which the novel survival function is identical to the standard survival
function regardless of the monotone measure and input vector, see Theorem 4.7.
We expect the future extension of our results into the area of integrals
introduced with respect to novel survival functions, see [1, Definition 5.1].
The relationship between the studied survival functions (in the sense of
equalities or inequalities) also determines the relationship between the
corresponding integrals (based on the standard and generalized survival
functions). An interesting question for future work is: is the family
$\displaystyle{\mathscr{A}}^{\mathrm{sup}}$ of conditional aggregation
operators also the only one that generates the standard survival function in
the case of an arbitrary basic set $\displaystyle X$ instead of
$\displaystyle[n]$, i.e., is it true that
$\displaystyle\mu_{\mathscr{A}}(f,\alpha)=\mu(\\{f>\alpha\\})$,
$\displaystyle\alpha\in[0,\infty)$, for any $\displaystyle\mu$ and any
$\displaystyle f$ if and only if
$\displaystyle{\mathscr{A}}={\mathscr{A}}^{\mathrm{sup}}$?
Up to now, no families other than
$\displaystyle{\mathscr{A}}^{\mathrm{sup}}$ are known that generate a
generalized survival function indistinguishable from the survival function
(for any $\displaystyle\mu$, $\displaystyle f$). We believe that the new
results will be beneficial in applications, e.g. in the theory of decision
making. The equality between the survival functions of a given alternative
$\displaystyle\mathbf{x}$ means that its overall score with respect to the
Choquet integral and the $\displaystyle{\mathscr{A}}$-Choquet integral is the
same. In the context of decision making, the question of
$\displaystyle(\mu,{\mathscr{A}})$-indistinguishability also arises
immediately, i.e. under which conditions on $\displaystyle\mu$,
$\displaystyle{\mathscr{A}}$ it holds that
$\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu_{\mathscr{A}}(\mathbf{y},\alpha)$
for $\displaystyle\mathbf{x},\mathbf{y}\in[0,\infty)^{[n]}$. Then the
alternatives $\displaystyle\mathbf{x},\mathbf{y}$ are
$\displaystyle{\mathscr{A}}$-Choquet integral indistinguishable, i.e. they
achieve the same overall score.
## Appendix
In this appendix we summarize all sufficient and necessary conditions for
equality or inequality between survival functions, see Table 2.
$\displaystyle(\widetilde{\mathrm{C}}1)$ and $\displaystyle(\widetilde{\mathrm{C}}2)$ | $\displaystyle\Rightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Rem. 3.12
---|---|---|---|---
$\displaystyle(\widetilde{\mathrm{C}}2)$ and $\displaystyle(\widetilde{\mathrm{C}}3)$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Rem. 3.12
$\displaystyle\mathrm{(C1)}$ and $\displaystyle\mathrm{(C2)}$ | $\displaystyle\Rightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Cor. 3.7
$\displaystyle\mathrm{(C2)}$ and $\displaystyle\mathrm{(C3)}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Cor. 3.11
$\displaystyle\mathrm{(C2)}$ and $\displaystyle\mathrm{(C4)}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Cor. 3.11
$\displaystyle\mathrm{(C1)}$ and $\displaystyle\mathrm{(C2)}$, with $\displaystyle\mu\colon 2^{[n]}\to[0,\infty)$ strictly monotone on $\displaystyle\\{E_{(k+1)}:k\in\Psi_{\mathbf{x}}\\}$ | $\displaystyle\Rightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Cor. 3.18
$\displaystyle\mathrm{(C2^{*})}$ and $\displaystyle\mathrm{(C3^{*})}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Prop. 3.15
$\displaystyle\mathrm{(C2^{*})}$ and $\displaystyle\mathrm{(C4^{*})}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Prop. 3.15
$\displaystyle\mathrm{(C1^{*})}$ and $\displaystyle\mathrm{(C2^{*})}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)=\mu(\\{\mathbf{x}>\alpha\\})$ | | Th. 3.17
$\displaystyle\mathrm{(C1)}$ | $\displaystyle\Rightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$ | | Prop. 3.6
$\displaystyle\mathrm{(C1^{*})}$ | $\displaystyle\Rightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$ | | Prop. 3.15
$\displaystyle\mathrm{(C3)}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$ | | Prop. 3.10
$\displaystyle\mathrm{(C3^{*})}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\leq\mu(\\{\mathbf{x}>\alpha\\})$ | | Prop. 3.15
$\displaystyle\mathrm{(C2)}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\geq\mu(\\{\mathbf{x}>\alpha\\})$ | | Prop. 3.6
$\displaystyle\mathrm{(C2^{*})}$ | $\displaystyle\Leftrightarrow$ | $\displaystyle\mu_{\mathscr{A}}(\mathbf{x},\alpha)\geq\mu(\\{\mathbf{x}>\alpha\\})$ | | Prop. 3.15
Table 2: Sufficient and necessary conditions for pointwise comparison of
survival functions
From Table 2, the following relationships between conditions (C1), (C2),
(C3), (C4) and their ∗ versions hold.
###### Corollary 5.1
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be
FCA. Then it holds:
$\displaystyle\big{(}\mathrm{(C1)}\wedge\mathrm{(C2)}\big{)}$
$\displaystyle\Rightarrow\big{(}\mathrm{(C1^{*})}\wedge\mathrm{(C2^{*})}\big{)}\Leftrightarrow\big{(}\mathrm{(C2)}\wedge\mathrm{(C3)}\big{)}\Leftrightarrow\big{(}\mathrm{(C2)}\wedge\mathrm{(C4)}\big{)}\Leftrightarrow\big{(}(\widetilde{\mathrm{C}}2)\wedge(\widetilde{\mathrm{C}}3)\big{)}$
$\displaystyle\Leftrightarrow\big{(}\mathrm{(C2^{*})}\wedge\mathrm{(C3^{*})}\big{)}\Leftrightarrow\big{(}\mathrm{(C2^{*})}\wedge\mathrm{(C4^{*})}\big{)}\Leftrightarrow\big{(}\mathrm{(C1^{*})}\wedge\mathrm{(C2)}\big{)}.$
###### Corollary 5.2
Let $\displaystyle\mathbf{x}\in[0,\infty)^{[n]}$,
$\displaystyle\mu\in{\mathbf{M}}$, and let $\displaystyle{\mathscr{A}}$ be
FCA. If $\displaystyle\mathrm{(C2^{*})}$ holds, then
$\displaystyle\mathrm{(C1^{*})}\Leftrightarrow\mathrm{(C3^{*})}\Leftrightarrow\mathrm{(C4^{*})}.$
## References
* [1] M. Boczek, L. Halčinová, O. Hutník, and M. Kaluszka, Novel survival functions based on conditional aggregation operators, Inform. Sciences, (https://doi.org/10.1016/j.ins.2020.12.049).
* [2] M. Boczek, A. Hovana, O. Hutník, and M. Kaluszka, New monotone measure-based integrals inspired by scientific impact problem, European Journal of Operational Research, 290 (2021), pp. 346–357.
* [3] J. Borzová, L. Halčinová, and J. Šupina, Size-based super level measures on discrete space, Medina J. et al. (eds) Information Processing and Management of Uncertainty in Knowledge-Based Systems. Theory and Foundations. IPMU 2018. Communications in Computer and Information Science, (2018), pp. 219–230.
* [4] J. Borzová, L. Halčinová, and J. Šupina, Conditional aggregation-based Choquet integral on discrete space, (submitted).
* [5] J. Borzová, L. Halčinová, and O. Hutník, The smallest semicopula-based universal integrals: Remarks and improvements, Fuzzy Sets and Systems, 393 (2020), pp. 29–52. Copulas and Related Topics.
* [6] T. Calvo, A. Kolesárová, M. Komorníková, and R. Mesiar, Aggregation operators: properties, classes and construction methods, in: Aggregation Operators. New Trends and Applications, Physica, Heidelberg, 2002, pp. 3–104.
* [7] T. Chen, R. Mesiar, J. Li, and A. Stupňanová, Possibility and necessity measures and integral equivalence, International Journal of Approximate Reasoning, 86 (2017), pp. 62–72.
* [8] Y. Do and C. Thiele, $\displaystyle L^{p}$ theory for outer measures and two themes of Lennart Carleson united, Bulletin of the American Mathematical Society, 52 (2015), pp. 249–296.
* [9] F. Durante and C. Sempi, Principles of Copula Theory, CRC Press, 2015.
* [10] M. Grabisch, Set Functions, Games and Capacities in Decision Making, Theory and Decision Library C, Springer International Publishing, 2016.
* [11] M. Grabisch, J. Marichal, R. Mesiar, and E. Pap, Aggregation Functions, Encyclopedia of Mathematics and its Applications, Cambridge University Press, 2009.
* [12] L. Halčinová, Sizes, super level measures and integrals, in Aggregation Functions in Theory and in Practice, 9th International Summer School on Aggregation Functions, Skövde, Sweden, 19-22 June 2017, V. Torra, R. Mesiar, and B. D. Baets, eds., vol. 581 of Advances in Intelligent Systems and Computing, Springer, 2017, pp. 181–188.
* [13] L. Halčinová, O. Hutník, J. Kiseľák, and J. Šupina, Beyond the scope of super level measures, Fuzzy Sets and Systems, 364 (2019), pp. 36–63.
* [14] E. P. Klement, R. Mesiar, and E. Pap, A universal integral as common frame for Choquet and Sugeno integral, IEEE Transactions on Fuzzy Systems, 18 (2010), pp. 178–187.
* [15] S. Weber, Two integrals and some modified versions - critical remarks, Fuzzy Sets and Systems, 20 (1986), pp. 97–105.
1 Department of Physics & ITCP, University of Crete, GR-70013, Heraklion, Greece
2 Institute of Astrophysics, Foundation for Research and Technology-Hellas, Vasilika Vouton, GR-70013 Heraklion, Greece
3 Laboratoire d’Astrophysique, EPFL, CH-1290 Sauverny, Switzerland
4 Scuola Normale Superiore di Pisa, Piazza dei Cavalieri 7, 56126 Pisa, Italy
5 Max Planck Institute for Astrophysics, Karl-Schwarzschild-Straße 1, 85748 Garching, Germany
6 Ludwig Maximilian University of Munich, Geschwister-Scholl-Platz 1, 80539 Munich, Germany
7 University of Vienna, Department of Astrophysics, Türkenschanzstrasse 17, 1180 Vienna, Austria
# Non-parametric Bayesian reconstruction of Galactic magnetic fields using
Information Field Theory
The inclusion of line-of-sight information in ultra-high energy cosmic ray
backtracking
Alexandros Tsouros <EMAIL_ADDRESS>, Abhijit B. Bendre 3,4, Gordian Edenhofer 5,6,7, Torsten Enßlin 5,6, Philipp Frank 5, Michalis Mastorakis 1,2, Vasiliki Pavlidou 1,2
(Received ; accepted )
###### Abstract
Context. Ultra-high energy cosmic rays (UHECRs) are extremely energetic
charged particles with energies surpassing $10^{18}$ eV. Their sources remain
elusive, obscured by deflections caused by the Galactic magnetic field (GMF).
This challenge is further complicated by our limited understanding of the
three-dimensional structure of the GMF, as current GMF observations consist
primarily of quantities integrated along the line-of-sight (LOS).
Nevertheless, data from upcoming stellar polarisation surveys along with
Gaia’s stellar parallax data are expected to yield local GMF measurements.
Aims. This study is the second entry in our exploration of a Bayesian
inference approach to the local GMF that uses these forthcoming local GMF
measurements, by attempting to reconstruct its $3$D structure. The ultimate
aim is to backtrack observed UHECRs, thereby updating our knowledge about
their possible origin.
Methods. We employ methods of Bayesian statistical inference in order to
sample the posterior distribution of the GMF within part of the Galaxy. By
assuming a known energy, charge, and arrival direction of an UHECR, we
backtrack its trajectory through various GMF configurations drawn from the
posterior distribution. Our objective is to rigorously evaluate our
algorithm’s performance in scenarios that closely mirror the setting of
expected future applications. In pursuit of this, we condition the posterior
to synthetic integrated LOS measurements of the GMF, in addition to synthetic
local POS-component measurements. In this proof of concept work, we assume the
ground truth to be a magnetic field produced by a dynamo simulation of the
Galactic ISM.
Results. Our results demonstrate that for all locations of the observed
arrival direction on the POS, our algorithm is able to substantially update
our knowledge on the original arrival direction of UHECRs with rigidity
$E/Z=5\times 10^{19}$ eV, even in the case of complete absence of LOS
information. If integrated data are included in the inference, then the
regions of the celestial sphere where the maximum error occurs shrink greatly.
Even in those regions the maximum error is reduced by a factor of about $3$
in the specific setting studied. Additionally, we are able to identify the
regions where the largest error is expected to occur.
###### Key Words.:
Galactic magnetic field – Ultra high energy cosmic ray sources – Interstellar
turbulence
## 1 Introduction
Determining the origins of ultra-high-energy cosmic rays (UHECRs) is a crucial
challenge in the field of high-energy astrophysics. Successfully addressing
this challenge could offer insights with regard to astrophysical processes
responsible for generating UHECRs, as well as their composition. Additionally,
knowledge of UHECR sources would be a crucial ingredient in multi-messenger
studies of high-energy systems (e.g. Fang & Murase 2018; Murase 2019).
Although numerous theoretical models have been proposed to explain the sources
of UHECRs (e.g. Bhattacharjee & Sigl 2000; Torres & Anchordoqui 2004; Kotera &
Olinto 2011), pinpointing these sources has proven to be a complicated task.
The main challenge arises from the fact that UHECRs are charged particles, and
are deflected by both the Galactic magnetic field (GMF) and the intergalactic
magnetic field. As a result, even if multiple UHECRs were emitted from a
single, intense, and proximate cosmic ray source (di Matteo et al. 2023),
their trajectories would be dispersed across the plane of the sky (POS).
Consequently, any UHECR hotspot would not align with the source. Rather, it
would be displaced away from it due to systematic deflections by the ordered
component of the GMF, in addition to being spread out due to the random
deflections due to the turbulent component of the GMF. This situation
contrasts with that of photons or neutrinos, where establishing a connection
between observed events and their probable sources is more straightforward,
even in the limit of low statistics and poor angular resolution of their
detectors.
The primary challenge in understanding the GMF lies in the difficulty of
obtaining three-dimensional tomographic reconstruction of the intervening GMF,
as the majority of the currently accessible observations are integrated along
the LOS. This limitation has guided the predominant approach in GMF modelling
to rely on parametric models. This is typically achieved by fitting parameters
to distinct analytic components, e.g. a toroidal component, a poloidal
component, and a turbulent component. For modelling the latter, a Gaussian
random field is employed (Sun et al. 2008; Sun & Reich 2010; Takami & Sato
2010; Jansson & Farrar 2012a; Jansson & Farrar 2012b).
However, direct insights into the three-dimensional structure of the
interstellar medium of the Milky Way are attainable. The Gaia mission, by
accurately measuring stellar parallaxes, has mapped the positions of over a
billion stars in the Galaxy (Gaia Collaboration et al. 2016; Gaia
Collaboration et al. 2021; Bailer-Jones et al. 2021). This dataset, combined
with other spectroscopic data, has enabled the construction of three-
dimensional tomographic maps showing the dust density distribution in certain
regions of the Galaxy (Lallement et al. 2018; Green et al. 2019; Lallement et
al. 2019; Leike & Enßlin 2019; Leike et al. 2020; Lallement et al. 2022; Leike
et al. 2022; Edenhofer et al. 2023). Nevertheless, these maps primarily focus
on dust density and do not directly constrain the magnetic field.
Yet observational methods that probe the three-dimensional structure of the
GMF do exist. A notable example is the linear polarization of
starlight. Typically, starlight originates from its source as unpolarized
light, but can become linearly polarized due to the dichroic absorption by
interstellar dust particles, which align themselves with the surrounding
magnetic field (Andersson et al. 2015).
Future optopolarimetric surveys, like PASIPHAE and SouthPol, are poised to
deliver high-quality stellar polarization measurements for millions of stars
(Magalhães 2012; Tassis et al. 2018; Maharana et al. 2021; Maharana et al.
2022). When combined with the stellar distance data obtained from the Gaia
survey, these measurements will enable direct tomographic measurements of the
GMF’s POS component in regions where dust clouds are present (Davis 1951;
Chandrasekhar & Fermi 1953; Panopoulou et al. 2017; Skalidis et al. 2021;
Skalidis & Tassis 2021; Pelgrims et al. 2022). Additionally, local information
can be obtained through the study of HI gas in different velocity bins, which
also provide local GMF information (Tritsis et al. 2018; Tritsis et al. 2019;
Clark & Hensley 2019). This information, in conjunction with available LOS
data (see, for example, Tahani et al. 2022a; Tahani et al. 2022b), promises to
provide localized and sparse GMF data in the future. This will be instrumental
in creating three-dimensional tomographic maps of specific areas of interest.
With such maps it becomes feasible to backtrack the paths of UHECRs through
these regions, improving source localization on the sky (the contribution of
the intergalactic magnetic field, however, is still not accounted for).
Specifically, there is an intense interest in mapping the GMF in the direction
of UHECR ‘hotspots’, as well as in parts of the Galaxy likely to have been
traversed by particles comprising these hotspots (Abbasi et al. 2014; Pierre
Auger Collaboration et al. 2017; Kawata et al. 2019).
This study is the second entry in our effort to reconstruct the GMF non-
parametrically in $3$D in a Bayesian setting. It directly follows Tsouros et
al. 2024, hereafter Paper I. Essentially, we address an inverse problem within
a Bayesian framework, where the goal is to sample the posterior distribution
of GMF configurations in a specific part of the Galaxy, using a combination of
local and LOS-integrated information. In this work, local measurements only
provide information for the POS component of the magnetic field. This
corresponds to the information content of tomographic measurements of
interstellar magnetized dust through optopolarimetry of starlight. On the
other hand, LOS-integrated measurements provide information for the LOS
component of the magnetic field as derived for instance from Faraday rotation
measurements (Pandhi et al. 2022; Hutschenreuter et al. 2023). We will tackle
this problem within the context of Information Field Theory, which was
developed specifically for Bayesian inference for fields and has been applied
successfully in various contexts (Enßlin et al. 2009; Enßlin 2019; Enßlin
2022). By reconstructing the posterior distribution of GMF realizations, we
aim to accurately recover the true arrival directions of UHECRs given the
observed arrival directions, accounting for the influence of the GMF.
In section 2, we briefly describe the methodology, the forward models used,
and how the posterior is sampled. In section 3 we present the main results of
the algorithm for the considered scenarios, and in section 4 we discuss the
results further.
## 2 Methodology
In general, we are interested in inferring the configuration of the GMF,
$\bm{B}(\mathbf{x})$ with $\mathbf{x}\in\mathcal{V}$ over a domain
$\mathcal{V}\subset\mathbb{R}^{3}$, given some observed data set $d$. In the
context of Bayesian inference for continuous signals, the task is to determine
the posterior probability distribution of $\bm{B}(\mathbf{x})$ conditional to
$d$:
$P(\bm{B}|d)=\frac{1}{Z}P(d|\bm{B})P(\bm{B}).$ (1)
Here, $P(d|\bm{B})$ is the likelihood, representing the probability of
observing magnetic field measurements $d$ given a specific configuration
$\bm{B}(\mathbf{x})$. The prior, $P(\bm{B})$, encapsulates pre-existing
information about $\bm{B}(\mathbf{x})$ while $Z=P(d)$ is the normalisation
factor.
In this work, the field that serves as a ground truth (the ‘true’ field) is
generated from a dynamo MHD simulation discussed in Appendix A. The original
simulation domain extended to $\sim 1$ kpc in the $x-y$ direction, and $\sim
2$ kpc above the Galactic plane. The GMF is rescaled so that its root-mean-
square (RMS) value is $5\mu$G.
### 2.1 Likelihood
Tomography of the magnetized ISM from stellar polarisation measurements is a
highly nontrivial problem and its full discussion is beyond the scope of this
work (Pelgrims et al. 2022). However, the reader should be aware that through
the combination of Gaia data as well as stellar polarization data for stars of
known distance from the Sun, it is possible to acquire information on the
Stokes parameters that each intervening dust cloud imposes on the observed
starlight. This can then be translated into local information on the
orientation of the POS component of the GMF at that cloud, through the
connection to grain alignment, as referenced briefly in the previous section
and thoroughly in Tassis et al. 2018. Information on the POS component of GMF
in clouds can also be acquired by the use of $21$ cm neutral hydrogen (HI)
emission measurements (Clark & Hensley 2019). In this work, we assume that the
task of determining the locations to which the measurements correspond has
been carried out.
Thus, for the $i$-th datapoint, we assume a forward model of the form
$\displaystyle\mathbf{d}_{\text{local}}^{(i)}=\int
R_{\text{local}}(\mathbf{x},\mathbf{x}_{i})\bm{B}(\mathbf{x})d^{3}x+\mathbf{n}_{\text{local}}^{(i)},$
(2) $\displaystyle
R_{\text{local}}(\mathbf{x},\mathbf{x}_{i})\equiv\delta^{(3)}(\mathbf{x}-\mathbf{x}_{i})P_{\text{POS}},$
(3)
where $\mathbf{B}(\mathbf{x})$ is the magnetic field, and
$\mathbf{n}_{\text{local}}^{(i)}$ are the observational uncertainties that
contaminate our measurements. The vector $\mathbf{x}_{i}$ is the location of
the $i$-th cloud where the magnetic field is measured, $P_{\text{POS}}$
signifies a projection operator on the POS, which reflects that (mainly) the
POS component of the magnetic field is measured via dust polarization,
$P_{\text{POS}}=\mathbb{1}-\hat{\mathbf{x}}_{i}\hat{\mathbf{x}}_{i}^{\text{T}}$ with
$\hat{\mathbf{x}}_{i}=\mathbf{x}_{i}/||\mathbf{x}_{i}||$ (assuming the observer to be at the origin). The
Dirac delta function localizes the measurements at specific known locations
$\mathbf{x}_{i}$.
The option to include the operator $P_{\text{POS}}$ into the considered
scenario is central to this work, as it consists one of the main additions
compared to Paper I. A complete projection on the POS is a pessimistic
scenario, as LOS information can become available by incorporating Zeeman or
Faraday rotation data (Tahani et al. 2022a; Tahani et al. 2022b). A complete
projection on the POS should therefore be seen as an extreme benchmarking
scenario.
We note that this forward model is quite simplistic, in that it assumes that
accurate 3D locations are measured. Formally, this is captured by the Dirac
delta function and that the locations $\mathbf{x}_{i}$ are to be assumed
known. However, as we will see in section 2.4, the resolution of our
reconstruction is of the order of tens of parsecs, corresponding to the
uncertainty of cloud localisation (Pelgrims et al. 2022).
The vector $\mathbf{n}_{\text{local}}^{(i)}$ is assumed to be a random
variable drawn from a Gaussian distribution with a known covariance
$N_{\text{local}}$. Note that once specific measurement techniques are
identified, other more appropriate error distributions will be chosen.
Marginalizing over the noise, the likelihood becomes
$P(\bm{d}|\mathbf{B})=\mathcal{G}(\bm{d}_{\text{local}}-R_{\text{local}}\bm{B},N_{\text{local}}).$
(4)
The covariance $N_{\text{local}}$ is chosen to be a multiple of the identity,
$(N_{\text{local}})_{ij}=\sigma^{2}\delta_{ij}$, where we choose
$\sigma=\frac{\mathbf{|B|}_{\text{RMS}}}{2},$ (5)
where $\mathbf{|B|}_{\text{RMS}}=5\mu$G is the RMS value of the magnitude of
the ground truth. It should be noted that this does not imply that the noise
is correlated with the GMF; the value is merely chosen so as to ensure a
signal-to-noise ratio (SNR) of about 2.
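A minimal NumPy sketch of the local forward model of Eqs. (2)-(5) (our own
illustration; `B_func`, standing for any position-to-field map, is a
hypothetical placeholder):
```python
import numpy as np

rng = np.random.default_rng(42)

def pos_project(B_vec, x_cloud):
    """POS projector of Eq. (3): P = 1 - x_hat x_hat^T removes the LOS
    component of B at a cloud at position x_cloud (observer at the origin)."""
    x_hat = np.asarray(x_cloud, dtype=float) / np.linalg.norm(x_cloud)
    return np.asarray(B_vec, dtype=float) - np.dot(B_vec, x_hat) * x_hat

def local_datum(B_func, x_cloud, sigma=2.5):
    """One synthetic local datum, Eq. (2): the POS component of the field at
    the cloud plus Gaussian noise with covariance sigma^2 times the identity;
    sigma = |B|_RMS / 2 = 2.5 muG corresponds to Eq. (5)."""
    return pos_project(B_func(x_cloud), x_cloud) + rng.normal(0.0, sigma, 3)

# example with a hypothetical constant field of 5 muG along z
datum = local_datum(lambda x: np.array([0.0, 0.0, 5.0]), x_cloud=[0.3, 0.1, 0.2])
```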
In addition to local data, in this work we explore the possibility of
integrated LOS data, as inferred for instance from Faraday measurements
(Hutschenreuter et al. 2023). In this case, the forward model takes the form
$\displaystyle
d_{\text{int}}^{(i)}=(\overline{P_{\text{LOS}}\bm{B}})_{L_{i}}+n_{\text{int}}^{(i)},$
(6)
$\displaystyle(\overline{P_{\text{LOS}}\bm{B}})_{L_{i}}\equiv\frac{1}{|L_{i}|}\int_{0}^{|L_{i}|}B_{||}(\bm{x})d\ell,$
(7)
where $P_{\text{LOS}}$ projects a vector onto the LOS component ($B_{||}$),
and $L_{i}$ is the specific LOS under consideration. Further, $|L_{i}|$
denotes the limit up to which we integrate; in this application $|L_{i}|$
coincides with the distance between the Earth and the intersection of $L_{i}$
with the boundary of $\mathcal{V}$. Essentially, the above is equivalent to
assuming that the electron density is roughly constant and known up to
$|L_{i}|$ and then falls to zero. While this is not a valid assumption for low
Galactic latitudes, we will maintain it in this proof-of-concept work.
Finally, $n_{\text{int}}^{(i)}$ is a random noise term with covariance
$N_{\text{int}}$.
The likelihood in this case is given by
$P(\bm{d}|\mathbf{B})=\mathcal{G}(\bm{d}_{\text{local}}-R_{\text{local}}\bm{B},N_{\text{local}})\mathcal{G}(d_{\text{int}}-(\overline{P_{\text{LOS}}\mathcal{P}\bm{B}})_{L_{i}},N_{\text{int}}).$
(8)
Similarly, we define the covariance for the noise of the integrated
measurements as $(N_{\text{int}})_{ij}=\sigma_{\text{int}}^{2}\delta_{ij}$,
where
$\sigma_{\text{int}}=\frac{1}{2}\mu\text{G}.$ (9)
While Faraday data are significantly more accurate than this value suggests,
we use this pessimistic noise covariance to compensate for the unknown $3$D
electron density distribution.
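The integrated forward model of Eqs. (6)-(7) can be sketched analogously
(again a minimal illustration with a hypothetical `B_func`):
```python
import numpy as np

def los_average(B_func, n_hat, L, n_steps=256):
    """Average LOS component along a ray from the origin, Eq. (7):
    (1/|L|) * integral_0^|L| B_parallel dl, via the midpoint rule."""
    n_hat = np.asarray(n_hat, dtype=float) / np.linalg.norm(n_hat)
    ell = (np.arange(n_steps) + 0.5) * (L / n_steps)  # midpoints along the LOS
    B_par = np.array([np.dot(B_func(l * n_hat), n_hat) for l in ell])
    return B_par.mean()

# noisy integrated datum, Eq. (6), with sigma_int = 0.5 muG as in Eq. (9)
rng = np.random.default_rng(0)
d_int = los_average(lambda x: np.array([0.0, 0.0, 5.0]), [0, 0, 1], L=1.0) \
        + rng.normal(0.0, 0.5)
```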
Finally, the operator $R_{\text{local}}$, which sparsely samples the GMF, is
defined as follows. After discretising our domain to voxels (see section 4.1),
we apply a Bernoulli trial to each voxel to determine whether it is observed
or not with probability $p$ and $1-p$ respectively. The probability $p$ is
given by the expression
$p=\begin{cases}3\times 10^{-3},&\text{if }T\geq 10^{4}\text{ K}\\\ 3\times
10^{-2},&\text{if }T<10^{4}\text{ K}\end{cases}$ (10)
where $T$ is that voxel’s corresponding gas temperature, acquired from the
same simulation that produced our ground truth. This choice of $p$ reflects
the decay of the number of dust clouds as a function of distance from the
Galactic plane, which directly correlates with the expected number of
measurements with respect to the position above the Galactic plane, as the
local measurements of the GMF will ultimately exist where dust clouds are
located, after polarized-starlight tomography has been carried out. The
specific values chosen are such that the resulting density of points within
the domain is roughly $100$ measurements per kpc${}^{3}$ on average.
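A minimal sketch of this masking step, with an arbitrary synthetic temperature
cube standing in for the simulation output:
```python
import numpy as np

rng = np.random.default_rng(1)

def observation_mask(T):
    """Bernoulli mask of Eq. (10): cool voxels (T < 1e4 K), which host dust
    clouds, are observed with probability 3e-2, warm ones with 3e-3."""
    p = np.where(T < 1.0e4, 3.0e-2, 3.0e-3)
    return rng.random(T.shape) < p

T = rng.uniform(1e3, 1e6, size=(48, 48, 64))   # synthetic temperature cube [K]
mask = observation_mask(T)                     # True where the GMF is observed
```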
### 2.2 Prior
As in Paper I, the only hard constraint that needs to be imposed is that
whatever candidate magnetic field configuration $\bm{B}$ we consider must
satisfy $\nabla\cdot\bm{B}=0$ in order to be viable. To ensure
that the magnetic field is divergence free, we assume it is related to a non-
divergence-free random field $\bm{\varphi}$ by a divergence cleaning operator
$\mathcal{P}$. This transverse projection operator, defined in Fourier space
as
$\mathcal{P}_{ij}(\mathbf{k})=\delta_{ij}-\hat{k}_{i}\hat{k}^{\text{T}}_{j},$
(11)
projects out the degrees of freedom of the Gaussian random vector field that
violate the divergence-free condition. Said differently, it connects a latent
field $\bm{\varphi}(\mathbf{x})$ to the true magnetic field by the harmonic
space relation
$\hat{B}_{i}(\mathbf{k})=\frac{3}{2}\mathcal{P}_{ij}(\mathbf{k})\hat{\varphi}_{j}(\mathbf{k}),$
(12)
where $\mathbf{k}$ are Fourier modes. Eq. 12 ensures that
$\nabla\cdot\mathbf{B}=0$, while the factor $3/2$ accounts for power loss due
to reduced degrees of freedom, aligned with the original assumption of
statistical isotropy for $\bm{\varphi}$ (Jaffe et al. 2012). Our aim is
reconstructing the local GMF $\mathbf{B}$ by inferring the latent field
$\bm{\varphi}$ which is related to the latter by Eq. (12). For $\bm{\varphi}$
we will assume a Gaussian prior of the form:
$\mathcal{P}(\bm{\varphi})=\frac{1}{|2\pi\Phi|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}\int
d^{3}xd^{3}x^{\prime}\sum_{ij}\varphi_{i}(\mathbf{x})\Phi_{ij}^{-1}(\mathbf{x},\mathbf{x}^{\prime})\varphi_{j}(\mathbf{x}^{\prime})\right].$
(13)
The quantity $\Phi_{ij}$ is the covariance matrix, defined as
$\Phi_{ij}(\mathbf{x},\mathbf{x^{\prime}})=\langle\varphi_{i}(\mathbf{x})\varphi_{j}^{*}(\mathbf{x^{\prime}})\rangle,$
(14)
where the symbol $\langle\cdots\rangle$ signifies an average over the
distribution $P(\bm{\varphi})$. That is, if $\mathcal{O}(\mathbf{x})$ is some
quantity of interest, then
$\langle\mathcal{O}(\mathbf{x})\rangle\equiv\int
d\bm{\varphi}P(\bm{\varphi})\mathcal{O}(\mathbf{x}).$
Notice that the average is taken over field configurations.
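For concreteness, the transverse projection of Eqs. (11)-(12) can be written
in a few lines of NumPy for a periodic box. The sketch below is our own
minimal implementation; the actual inference uses NIFTy's operators (see
section 2.3).
```python
import numpy as np

def divergence_clean(phi, box_size):
    """Transverse projection of Eqs. (11)-(12): remove the divergence of a
    random vector field phi of shape (3, Nx, Ny, Nz) on a periodic box and
    rescale by 3/2 to compensate for the removed degrees of freedom."""
    freqs = [np.fft.fftfreq(n, d=box_size / n) for n in phi.shape[1:]]
    k = np.stack(np.meshgrid(*freqs, indexing="ij"))  # (3, Nx, Ny, Nz)
    k2 = (k**2).sum(axis=0)
    k2[k2 == 0.0] = np.inf                # leave the k = 0 mode untouched
    phi_k = np.fft.fftn(phi, axes=(1, 2, 3))
    # B_i(k) = 3/2 * (delta_ij - k_i k_j / k^2) phi_j(k)
    B_k = 1.5 * (phi_k - k * (k * phi_k).sum(axis=0) / k2)
    return np.real(np.fft.ifftn(B_k, axes=(1, 2, 3)))

# self-check on white noise: mode by mode, k . B(k) = 0 by construction,
# so the resulting field is divergence-free on the periodic grid
rng = np.random.default_rng(0)
B = divergence_clean(rng.standard_normal((3, 48, 48, 64)), box_size=2.0)
```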
In our analysis, we chose not to integrate any prior knowledge about the GMF
geometry and statistics, so we use a prior distribution exhibiting statistical
isotropy, homogeneity, and mirror symmetry. This is formally encapsulated by
writing the Fourier space covariance in the form
$\langle\hat{\varphi}_{i}(\mathbf{k})\hat{\varphi}^{*}_{j}(\mathbf{k}^{\prime})\rangle=(2\pi)^{3}\delta_{ij}\delta^{(3)}(\mathbf{k}-\mathbf{k}^{\prime})P(k).$
(15)
A crucial point is that the $3$D prior power spectrum $P(k)$ is not known, and
is to be inferred as well. It is modeled as a sum of a power law and an
integrated Wiener component (Arras et al. 2022). The defining hyperparameters
and their prior PDFs (typically called hyperpriors) are summarised in Table 1,
and they are also briefly discussed in Paper I.
Table 1: Hyperparameters of the prior used in this work Parameter | Distribution | Mean | Standard deviation
---|---|---|---
Total offset ($\mathbf{B_{0}}$) | Not-applicable | $0$ | Not-applicable
Total offset st. dev. | Log-normal | $3$ $\mu$G | $1$ $\mu$G
Total spectral energy | Log-normal | $1$ $\mu$G | $1$ $\mu$G
Spectral index | Normal | $-\frac{11}{3}$ | $1$
Int. Wiener process amplitude | Log-normal | $1.5$ | $1$
### 2.3 Sampling the posterior
Equipped with the likelihood and prior, the posterior in terms of the magnetic
field $\bm{\mathbf{B}}$ is given by Eq. 1. Due to the fact that the power
spectrum $P(k)$ needs to be inferred along with the configuration of the GMF,
this inference problem is non-linear, and cannot be solved by a generalised
Wiener filter (Pratt 1972). For this reason, a non-perturbative scheme, called
geometrical variational inference (geoVI) developed by Frank et al. 2021 is
used. A brief exposition on geoVI can be found in Appendix A of Paper I. For
the purposes of this work it suffices to state that we do not sample magnetic
field configurations from the true posterior directly, but rather from an
approximate posterior, as is usually the strategy in variational methods. For
this task, we employ the Numerical Information Field Theory (NIFTy333The
documentation can be found in ift.pages.mpcdf.de/nifty/index.html.) package in
Python (Selig et al. 2013; Steininger et al. 2017; Arras et al. 2019,
Edenhofer et al. 2024). The input that is required is the likelihood and the
prior of the original physical model, as described in sections 2.1 and 2.2
respectively.
### 2.4 Procedure
The following is a summary of the specific setting probed in this work and how
the synthetic data on which the method is verified is generated.
* •
Spatial domain: The modeled space is assumed to be periodic due to
implementation details of the ground truth; we also pad our space by a factor
of two, so that the $x$ and $y$ directions reach an extent of $\sim 2$ kpc.
The resulting cube is partitioned uniformly into $N_{x}$, $N_{y}$, and
$N_{z}$ segments per axis, where $N_{x}=N_{y}=48$ and $N_{z}=64$, with
padding. In this setting, every voxel has a linear dimension of approximately
$30$ pc. This can accommodate the expected size of the dust clouds, as well as
the uncertainty of the measurements’ positions, at least as an order of
magnitude (Pelgrims et al. 2022).
* •
Data masking: We apply $R_{\text{local}}$ (see section 2.1) to the ground
truth field, in order to acquire the noiseless data.
* •
Adding noise to local data: Gaussian noise with covariance matrix
$N_{\text{local}}$ ( Eq. 5) is added to each observed data vector.
* •
Integrated data: Optionally (see section 3.3), the likelihood is supplemented
by an additional term for the integrated LOS measurements, as in Eq. 8. In
practice, the magnetic field is transformed from a Cartesian coordinate system
to a spherical polar coordinate system with the Earth at the origin. Then, the
radial component of the GMF (which is equivalent to the LOS component) is
integrated along individual LOSs, resulting in a set of $2$D integrated
measurements that inform the model further.
* •
Adding noise to integrated data: Gaussian noise with covariance
$N_{\text{int}}$ (Eq. 9) is added to each pixel on the celestial sphere, to
contaminate the data acquired from the previous step.
* •
Sampling the approximated posterior: Finally, the geoVI method is applied to
the true posterior distribution, resulting in samples from the approximate
distribution. To all the latent fields sampled, the projection operator (Eq.
11) is applied once more, in order to obtain posterior samples of the
divergence-free GMF.
* •
Application to UHECR backtracking: Through each of the GMF samples drawn from
(1) in the previous step, we backtrack a UHECR of known observed arrival
direction $\theta_{\text{obs}}$ and rigidity $r_{*}\equiv E/Z$. Recording the
final velocity of the particles, in particular their original directions
$\theta$ when they leave $\mathcal{V}$, essentially provides samples from the
distribution $P(\theta|D)$ of the particles’ original arrival directions
before entering the GMF, conditional to the data
$D\equiv\\{d,r_{*},\theta_{\text{obs}}\\}.$ (16)
To keep the discussion simple, in this work we only consider UHECRs of fixed
rigidity $r_{*}=5\times 10^{19}$ eV (equivalently, protons of energy
$E=5\times 10^{19}$ eV). As a way to benchmark the quality of our
reconstructions in the context of UHECR physics, we will compare the angular
separation $\delta\theta$ between the true arrival direction
$\theta_{\text{true}}$ and that of the backpropagated UHECR, ending up with a
distribution over $\delta\theta$. In this context, the ‘true arrival
direction’ always refers to the UHECR’s direction right where it entered
$\mathcal{V}$. In Fig. 1, we provide a visual representation of the quantities
defined in this section.
Figure 1: Illustration of relevant angles on the sky. A UHECR of known
rigidity $r_{*}$ enters the Galaxy with an arrival direction
$\theta_{\text{true}}$ (red dot). Because of the GMF, it is deflected and is
observed on Earth as arriving from $\theta_{\text{obs}}$ (black dot). The
angular distance between $\theta_{\text{obs}}$ and $\theta_{\text{true}}$ is
$\alpha$, and it is the error that the GMF induces on the observed arrival
direction. We backtrack the particle through each GMF configuration sampled
using NIFTy, thus ending up with a distribution of arrival directions
$P(\theta|D)$, with $D$ defined in Eq. 16. From the posterior samples drawn,
we calculate the mean angular distances
$\langle\delta\theta\rangle_{\theta|D}$ and
$\langle\delta\phi\rangle_{\theta|D}$ to the true and observed arrival
directions, respectively, as well as the standard deviations for the former.
Note that the scales in this artificial example are exaggerated for visual
clarity, and do not correspond to an application of the method.
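To make the backtracking step of section 2.4 concrete, the following is a minimal sketch of propagating an ultra-relativistic particle backwards through a sampled magnetic field. It is not the analysis code of this work: the plain Euler integrator, the `B_field` callable, the cubic exit criterion, and all names are illustrative assumptions, with units in SI.

```python
import numpy as np

KPC = 3.0857e19   # one kpc in metres
C = 2.998e8       # speed of light in m/s

def backtrack_uhecr(n_obs, B_field, rigidity_volts=5e19,
                    step=0.005 * KPC, box_half=1.0 * KPC):
    """Backtrack an ultra-relativistic UHECR through a static GMF sample.

    n_obs          : unit vector of the observed arrival direction on the sky
    B_field        : assumed callable, position [m] -> field vector [Tesla]
    rigidity_volts : r_* = E/Z expressed in volts

    Returns the unit direction of motion where the particle exits the
    reconstructed volume, i.e. one sample contributing to P(theta|D).
    """
    n = np.asarray(n_obs, dtype=float)  # backtracked ray leaves Earth towards n_obs
    x = np.zeros(3)                     # Earth at the origin
    while np.max(np.abs(x)) < box_half:
        # Forward motion of a positive charge obeys dn/ds = (c / r_*) n x B;
        # reversing time is equivalent to flipping the sign of the charge,
        # hence the minus sign below.
        n = n - step * (C / rigidity_volts) * np.cross(n, B_field(x))
        n /= np.linalg.norm(n)          # the field does no work: |n| stays 1
        x = x + step * n
    return n
```

Applying this to each posterior GMF sample and recording the exit directions yields the empirical distribution $P(\theta|D)$; the great-circle distance of each exit direction to $\theta_{\text{true}}$ gives the $\delta\theta$ of Fig. 1.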
## 3 Results
In this section, we use NIFTy in order to sample the posterior distribution
for three different scenarios: In scenario A, the observed data consist of
local measurements only, and at each location only the components of the GMF
that are parallel to the POS are probed. In scenario B, all three components
of the GMF (including the LOS) are probed on equal footing, for comparison.
Finally, in scenario C, we use the same dataset as in scenario A, but
additionally use integrated LOS information over the whole sky.
For each of these scenarios, we will benchmark the success of the
reconstruction by using it in order to infer the true arrival direction of a
UHECR with fiducial rigidity of $r_{*}=5\times 10^{19}$ eV for all possible
observed arrival directions on the northern sky, as described in the previous
section.
### 3.1 Scenario A: Local measurements with POS information only
The local GMF information that one can acquire through starlight polarization-
based tomography alone is confined to the celestial sphere (Panopoulou et al.
2019; Pelgrims et al. 2022). For that reason, in this section, we will sample
the posterior Eq. 1 conditional to local GMF data $d$ that are completely
blind to the LOS dimension, as is the case for polarization measurements.
To that end, we will work on a spherical polar coordinate system with the Sun
at the origin. The magnetic field is expressed as
$\mathbf{B}(\mathbf{x})=(B_{r},B_{\theta},B_{\phi})$ in that coordinate system.
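To illustrate the masking this implies, here is a minimal sketch of projecting out the LOS (radial) component of a field vector at a measurement position; the function name and the plain NumPy setting are our own illustrative assumptions, not the code of the pipeline.

```python
import numpy as np

def project_onto_pos(x, B):
    """Drop the line-of-sight (radial) component of B at position x,
    mimicking local data that are blind to B_r, as polarisation data are.
    x, B : Cartesian 3-vectors, with the Sun at the origin."""
    r_hat = x / np.linalg.norm(x)         # LOS unit vector towards x
    return B - np.dot(B, r_hat) * r_hat   # keep only the plane-of-sky part
```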
In Fig. 2 we perform the reconstruction of the simulated GMF described in
Appendix A. In Fig. 2(a) the ground truth is shown. Fig. 2(b) depicts the
synthetic local GMF data obtained from the ground truth for this scenario. The
result of the reconstruction algorithm is a set of $100$ posterior samples of
Eq. 1, given the data of Fig. 2(b). In Fig. 2(c), the mean of the posterior
samples is shown.
In Figs. 5(a) and 6(a) we show the mean and standard deviation of the angular
distance error ($\langle\delta\theta\rangle_{\theta|D}$ and
$\sigma_{\theta|D}$ respectively) obtained through the use of the GMF
reconstructions shown in Fig. 2. Observe that
$\langle\delta\theta\rangle_{\theta|D}$ and $\sigma_{\theta|D}$ vary across
the celestial sphere, and the specific structure of these functions depends on
the specific ground truth GMF chosen. That said, the greatest error of the
reconstruction for this setting is approximately $14^{\circ}$. In order to
judge the performance, in Fig. 4(a) we depict the angular error in the arrival
direction assuming the observed ones were true - that is, ignoring the
correction using the recovered GMF. Comparing Fig. 4(a) to Fig. 5(a), we
observe that reconstructing the local GMF conditional to $d$ yields a
significant improvement in our ability to recover UHECR arrival directions.
The comparison also suggests that $\langle\delta\theta\rangle_{\theta|D}$ is greater
for UHECRs observed to arrive from directions where the influence of the GMF
is greater (Fig. 4(a)), in this case at small longitudes. This correlation
will be explored further in section 4.1.
(a) The ground truth.
(b) The local and sparse data, confined to the POS.
(c) Mean of the posterior distribution conditional to the data of Fig. 2(b).
(d) Mean of the posterior distribution conditional to the data of Figs 2(b)
and 3(b).
Figure 2: Reconstruction of the simulated 3D magnetic field with the use of
local data that lack LOS field component information. The blue sphere
represents the celestial sphere. Top Left: The ground truth; the GMF obtained
as described in Appendix A. The field is rescaled so that it has an RMS norm of
$5$ $\mu$G. Top Right: Synthetic data based on the ground truth of Fig. 2(a).
Note that the radial component of the magnetic field is not measured. Bottom
Left: The mean of the approximating posterior distribution attained via the
geoVI algorithm based on the data provided in Fig. 2(b). Bottom Right: The mean
of the approximating posterior distribution attained conditional to the local
data of Fig. 2(b) as well as integrated measurements of the radial component
(Fig. 3(b)).
(a) The integrated LOS component of the ground truth field, shown in Fig.
2(a).
(b) As in Fig. 3(a), with Gaussian noise contamination.
(c) The integrated LOS component of the posterior mean, conditional to the
data of Figs. 2(b) and Fig. 3(b).
Figure 3: Top: Averaged LOS component of the test magnetic field, shown in
Fig. 2(a). Middle: Noisy integrated data that is used along with the sparse
and local data shown in Fig. 2(b) in order to define the LOS-informed
posterior distribution. The noise covariance is set to $0.5$ $\mu$G$^{2}$, while
the density of integrated measurements is $0.1$ deg$^{-2}$. Bottom: Averaged LOS
component of the mean $3$D configuration of the approximating posterior
distribution conditional to the data of Figs. 2(b) and 3(b).
(a) Deflection map for the ground truth.
(b) Mean deflection for scenario A.
(c) Mean deflection for scenario B.
(d) Mean deflection for scenario C.
Figure 4: Amount by which a UHECR of rigidity $r_{*}=5\times 10^{19}$ eV is
deflected by different GMF configurations as a function of its observed
arrival direction on Earth (the deflection map - see Fig. 1 for the definition
of the relevant angles). Top left: True deflection map. Top right: The mean
deflection over the posterior samples for scenario A. Bottom left: As in 4(b),
but the local measurements of the GMF now contain information on the LOS
component as well as the POS component (scenario B). The additional
information in this case causes a greater resemblance of the posterior mean to
the true field, and so the deflection map is closer to Fig. 4(a). Bottom
right: As in 4(b), but the posterior is additionally constrained by the
integrated data seen in Fig. 3(b) (scenario C). The colorbar scale is kept up
to $30$ degrees to aid visual comparison. The red line on the colorbar
indicates the maximum deflection for each case. Notice that the dominant
central feature of Fig. 4(a) is recovered in Figs. 4(b)-4(d), since it is
caused by the largest scale features of the magnetic field, which we are able
to infer in every case.
(a) Scenario A
(b) Scenario B
(c) Scenario C
Figure 5: Mean angular error of the reconstruction (see Fig. 1) as a function
of all possible arrival directions on the Northern hemisphere, for the case of
a UHECR of rigidity $r_{*}=5\times 10^{19}$ eV. Top left: The magnetic field
data consist of local information with the LOS component projected out
(scenario A). Top right: The magnetic field data consist of local information
with the LOS component measured (scenario B). Bottom: As in top left, but the
data are supplemented by integrated LOS data (scenario C; see Fig. 3). The
colorbar scale is kept up to $30$ degrees to aid visual comparison with Fig.
4(a). The red and orange lines on the colorbar indicate the maximum and mean
values of the map, respectively.
(a) Scenario A
(b) Scenario B
(c) Scenario C
Figure 6: As in Fig. 5, but for the corresponding angular error standard
deviations as a function of observed arrival direction.
### 3.2 Scenario B: Local measurements with full $3$D information at each
measured location
In this section we examine the impact that a complete lack of observation of
the LOS (scenario A) has on the UHECR arrival direction reconstruction. For
that purpose, we perform the same inference as in section 3.1 with the
difference that now the LOS component is also probed locally, just like the
POS components. In Figs. 5(b) and 6(b) we plot the mean angular error
$\langle\delta\theta\rangle_{\theta|D}$ and the respective standard deviation,
for this scenario. Comparing to the results of scenario A (see Figs. 5(a) and
6(a)), we observe that the quality of the reconstruction greatly improves when
local LOS information is included. While the maximum mean angular
error drops by a few degrees, in general the improvement is dramatic in that
the total area of the sky where the maximum bias occurs is substantially
reduced. This observation also holds for the variance.
While we consider $\theta_{\text{obs}}$ over the whole northern hemisphere for
benchmarking purposes, in real applications only sufficiently high Galactic
latitudes are relevant. The reason for this is that we aim to reconstruct the
GMF at a scale of up to a couple of kpcs, and therefore we must choose UHECRs
that have traveled through the part of the Galaxy whose GMF we reconstruct.
That said, especially in the physically relevant case of high Galactic
latitudes, the inclusion of local LOS information dramatically improves the
backtracking results.
We have shown that knowledge of local LOS information would yield a
substantial improvement over our ability to reconstruct the GMF, at least as
far as UHECR backtracking is concerned. As stellar polarization data alone
cannot probe the LOS dimension, this information would have to be supplied
by additional methods (e.g. Zeeman measurements). However, measurement of the
LOS GMF component locally is a notoriously difficult task, and so in what
follows, we will attempt to mitigate this by including integrated LOS
information in our likelihood.
### 3.3 Scenario C: Local measurements with POS information supplemented by
integrated LOS measurements for the whole sky
In this section we consider the inclusion of integrated constraints on the LOS
component of the GMF as shown in Fig. 3(b), while the local measurements at
the dust clouds, simulating those obtained through polarised starlight
tomography, are still projected on the celestial sphere as in Fig. 2(b).
Therefore, the likelihood used now has the full form of Eq. 8.
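As a toy illustration of how such an integrated constraint relates to a field configuration, the following sketch averages the radial component along one LOS. The `B_field` callable and the midpoint-rule discretisation are illustrative assumptions, and, as discussed in section 4.2, a real measurement would additionally be weighted by the thermal electron density.

```python
import numpy as np

def los_average(B_field, n_hat, r_max, n_steps=64):
    """Average the radial (LOS) component of the field along the unit
    direction n_hat, out to distance r_max, using a midpoint rule."""
    rs = (np.arange(n_steps) + 0.5) * (r_max / n_steps)  # sample radii
    return np.mean([np.dot(B_field(r * n_hat), n_hat) for r in rs])
```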
In Figs. 5(c) and 6(c) we show the mean and standard deviation of the angular
distance error of the inferred UHECR arrival direction using the samples that
were produced through the updated posterior, conditional to both local POS
data, as well as integrated LOS data. We observe that in comparison to
scenario A, shown in Figs. 5(a) and 6(a), the improvement in the ability to
reconstruct the UHECR arrival direction is substantial in that the maximum
mean angular error is reduced by a factor of about $1.5$, the part of the POS
where the maximum mean angular error occurs is greatly reduced, and the
variance of the posterior is diminished by a factor of about $1.2$. Thus, for
the setting considered, we have shown that inclusion of integrated LOS data of
the GMF - which is a much more realistic expectation compared to full 3D local
measurements of scenario B - does also lead to significantly better results
with regards to recovering the arrival directions of UHECRs with rigidity
$r_{*}$.
## 4 Discussion
### 4.1 Identification of a systematic bias
In Fig. 2(a) we observe that the ordered component of the field primarily lies
(anti)parallel to the $\pm\hat{y}$ direction, which corresponds to $l=\pm
90^{\circ}$ longitude. In Fig. 4(a) this is reflected by the fact that the
observed arrival directions parallel to the ordered component,
$(l,b)\simeq(\pm 90^{\circ},0^{\circ})$, are minimally deflected, while the
maximal deflection occurs at the arrival directions perpendicular to the
ordered component of the field. We call the map of Fig. 4(a) the ‘deflection
map’ of the GMF, for a UHECR with rigidity $r_{*}$. If the deflection map of
the GMF for a given rigidity were available, then we would be able to
identify the regions of the celestial sphere where observed UHECRs with that
rigidity are deflected most.
A comparison of Fig. 4(a) with Fig. 5 reveals a direct correlation between
the regions of the deflection map and the mean angular error of our inferred
arrival directions as a function of observed arrival direction, for the same
rigidity. In qualitative terms, this correlation suggests that for observed
arrival directions perpendicular to the GMF zero mode, where the particles
must have been deflected the most, our inference of their true arrival direction is
more prone to a systematic bias. This ‘bias’ is to be understood as the
angular distance of the mean of our posterior distribution with respect to the
true value.
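Concretely, this bias can be computed from the backtracked samples as follows (a minimal sketch with illustrative names):

```python
import numpy as np

def posterior_bias(sample_dirs, true_dir):
    """Angular distance between the (normalised) mean of the posterior
    arrival-direction samples and the true arrival direction.
    sample_dirs : (N, 3) array of unit vectors; true_dir : unit 3-vector."""
    mean_dir = sample_dirs.mean(axis=0)
    mean_dir /= np.linalg.norm(mean_dir)   # project the mean back onto the sphere
    return np.arccos(np.clip(np.dot(mean_dir, true_dir), -1.0, 1.0))
```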
Even though we might not be able to correct for this bias using our available
data, knowledge of how severely the GMF alters the UHECR trajectories can help
characterise the regions of the POS where our reconstructions are expected to
suffer from it. While the corresponding deflection of the true GMF for a value
of the UHECR rigidity will not be known a priori (its derivation requires
knowledge of the full $3$D structure of the GMF, which is unknown), its
structure is largely dictated by the field’s dominating mean value which is
generally well captured by our algorithm as shown in Paper I. Indeed, as shown
in Figs. 4(b) through 4(d), we are able to recover the large-scale features of
the deflection map accurately for all three considered scenarios, thus
providing a charting of the parts of the POS where the GMF will most influence
the UHECR trajectories, and by extension the regions where our arrival
direction posterior might be shifted with respect to the true value.
### 4.2 Caveats
While tomography using starlight polarisation and Gaia data can provide the
location of dust clouds in the local Galaxy as well as the POS orientation of
the GMF at each cloud’s location, the POS direction of the GMF is generally
not known, as this inference makes use of the properties of grain alignment
which cannot infer the POS directionality of the GMF (Tassis et al. 2018).
Further, the integrated measurements used here assume that the integrated
Galactic LOS component has been measured or inferred. In practice, the
observables that need to be measured in order to estimate these integrals are
the Faraday rotation measure and the dispersion measure. That means that even
if the Galactic component is separated, it will still provide an average
weighted over the thermal electron density. Therefore, in our study we
effectively made the simplifying assumption that the thermal electron density
is constant or known. In applications to the real GMF, the electron density
will be treated as an additional degree of freedom to be inferred
(Hutschenreuter et al. 2023). However, it must be noted that recent research
suggests the possibility that local LOS data can be available, at least in
part of the dataset (Tahani et al. 2022a; Tahani et al. 2022b).
In this analysis we studied only the case of UHECRs with a fixed rigidity of
$r_{*}=5\times 10^{19}$ eV. This is equivalent to assuming that the UHECR
particles are protons of $E=5\times 10^{19}$ eV. In general the composition of
UHECRs is unknown, and is most likely mixed - especially if some of the
sources have Galactic origin (Calvez et al. 2010; Kusenko 2011; Jiang et al.
2021). The closer examination of different composition scenarios will be the
subject of future work.
### 4.3 Conclusions & Outlook
In this paper we extended the analysis of Paper I to the case of more
realistic LOS information and local data distribution. This is motivated by
the fact that in real applications, the local GMF data obtained through
stellar polarisation tomography will not contain LOS information, and the
distribution of these measurements will follow the distribution of dust
clouds, which, unlike what was assumed in Paper I, is not homogeneous.
Additionally, the ground-truth GMF that was used in order to benchmark the
performance of our inference algorithm was taken from an MHD simulation, with
the aim of studying the behaviour of our Gaussian approach when applied to magnetic field
configurations whose statistical properties more closely resemble those of the
real GMF. Furthermore, we supplemented the existing framework in order to
include LOS-integrated information as well.
Our results show that while the complete absence of LOS information in the
local data diminishes the accuracy of our inferred UHECR arrival directions,
even in this case we are able to significantly correct for the effect of the
GMF on the observed arrival directions, at least for the rigidity considered
here. Yet, the inclusion of integrated LOS data for the GMF - which can be
realistically expected to be part of our available information - is sufficient
to provide accurate results.
Even in directions where the angular distance between the inferred arrival
direction and the true one is maximal, we are still able to correct for the
effect of the GMF by a factor of $3$, in the setting considered. Additionally,
through our ability to reconstruct the large-scale features of the field,
which dominate UHECR deflection, we are able to identify the regions of the
POS where our reconstructions are most likely to have suffered from maximal error.
###### Acknowledgements.
A.T. and V.P. acknowledge support from the Foundation of Research and
Technology - Hellas Synergy Grants Program through project MagMASim, jointly
implemented by the Institute of Astrophysics and the Institute of Applied and
Computational Mathematics. A.T. acknowledges support by the Hellenic
Foundation for Research and Innovation (H.F.R.I.) under the “Third Call for
H.F.R.I. Scholarships for PhD Candidates” (Project 5332). V.P. acknowledges
support by the Hellenic Foundation for Research and Innovation (H.F.R.I.)
under the “First Call for H.F.R.I. Research Projects to support Faculty
members and Researchers and the procurement of high-cost research equipment
grant” (Project 1552 CIRCE). The research leading to these results has
received funding from the European Union’s Horizon 2020 research and
innovation programme under the Marie Skłodowska-Curie RISE action, Grant
Agreement n. 873089 (ASTROSTAT-II). This work also benefited greatly from
discussions during the program ”Towards a Comprehensive Model of the Galactic
Magnetic Field” at Nordita in April 2023, which is partly supported by
NordForsk and the Royal Astronomical Society. A.T. would like to thank Vincent
Pelgrims, Raphael Skalidis, Georgia V. Panopoulou, and Konstantinos Tassis for
helpful tips and stimulating discussions. G.E. acknowledges the support of the
German Academic Scholarship Foundation in the form of a PhD scholarship
(”Promotionsstipendium der Studienstiftung des Deutschen Volkes”). P.F.
acknowledges funding through the German Federal Ministry of Education and
Research for the project ErUM-IFT: Informationsfeldtheorie fuer Experimente an
Großforschungsanlagen (Foerderkennzeichen: 05D23EO1).
## Appendix A Simulated Magnetic Field
We briefly summarize the setup and results of the Galactic dynamo simulations
that have been analyzed here. A detailed description of the numerical setup is
presented in Bendre et al. (2015).
These are Magnetohydrodynamic (MHD) simulations of the Galactic interstellar
medium (ISM). The simulation domain is an elongated box, located roughly at
the solar neighbourhood of the Milky Way. It has dimensions of approximately
$1\times 1$ kpc in the radial ($x$) and azimuthal ($y$) directions and ranges
from approximately $-2$ to $+2$ kpc in the $z$ direction, on either side of
the Galactic mid-plane. It is split into a uniform Cartesian grid with a
resolution of approximately $8.3$ pc, and a set of non-ideal MHD equations is
solved in this domain using the NIRVANA code (Ziegler, 2004) (see Eq. 1 from
Bendre et al. (2015) for the set of equations we have solved). Periodic
boundary conditions were used in the $y$ direction to incorporate the
axisymmetry of the Galactic disc. The flat rotation curve is incorporated by
allowing the angular velocity to scale inversely with the Galactic radius as
$\Omega\propto 1/R$, with $\Omega_{0}=100$ km s$^{-1}$ kpc$^{-1}$ at the centre of the
box. Shearing periodic boundary conditions are used in the radial $x$
direction to accommodate the aforementioned radial dependence of angular
velocity. The initial density distribution of the ISM is in hydrostatic
balance with the vertical gravity pointing towards the mid-plane, such that
the vertical scale-height of the initial density was approximately $300$ pc,
with its value in the mid-plane of approximately $10^{-24}$ g cm$^{-3}$. A vertical
profile of gravitational acceleration is adapted from Gilmore et al. (1989).
The ISM in this box is stirred by supernovae (SN) explosions, which inject the
thermal energy at random locations, at a rate of approximately $7.5$ kpc$^{-2}$
Myr$^{-1}$. The vertical distribution of the explosions scales with the mass
density. A piece-wise power law, similar to Sánchez-Salcedo et al. (2002), is
used to model the temperature-dependent rate of radiative heat transfer,
which, along with SN explosions, roughly captures the observed multi-phase morphology
of the ISM. We started the simulations with negligible initial magnetic fields
of strength of the order of nG, which grew exponentially to strengths of
the order of $\mu$G, with an e-folding time of about $200$ Myr, such that the
final energy density of the magnetic fields reached equipartition with
the kinetic energy density of the ISM turbulence (shown in the right-hand
panel of Fig. 7). The exponential amplification of the magnetic energy
saturated after about a Gyr, and coherent magnetic fields of scale-height
close to $500$ pc were sustained in the box, consistent with the typical scale-
height of GMFs (shown in the left-hand panel of Fig. 7). The growth and
saturation of these large-scale fields are understood in terms of a self-
consistent large-scale dynamo mechanism, governed by the SN-driven stratified
helical turbulence and the Galactic differential rotation (Bendre et al.,
2015).
Figure 7: Left: Time evolution of the vertical ($z$) profile of the azimuthal
component of the magnetic field averaged over $x-y$ plane. The color code is
normalized by an exponential factor to compensate for an exponential growth of
magnetic fields. The mean magnetic field eventually grows to a large-scale mode
symmetric with respect to the Galactic mid-plane. Right: Time evolution of
various contributions to magnetic energy, normalized to the turbulent kinetic
energy (which stays roughly constant in time). The black solid line
corresponds to the total magnetic energy contribution, the red dashed line
corresponds to the magnetic energy of mean magnetic fields (averaged over the
horizontal $x-y$ planes) and the blue dot-dashed line to the magnetic
energy in the RMS magnetic fields. The magnetic energy is amplified
exponentially for about a Gyr and eventually reaches an equipartition with
turbulent kinetic energy.
## References
* Abbasi et al. (2014) Abbasi, R. U., Abe, M., Abu-Zayyad, T., et al. 2014, ApJ, 790, L21
* Andersson et al. (2015) Andersson, B. G., Lazarian, A., & Vaillancourt, J. E. 2015, ARA&A, 53, 501
* Arras et al. (2019) Arras, P., Baltac, M., Ensslin, T. A., et al. 2019, Astrophysics Source Code Library
* Arras et al. (2022) Arras, P., Frank, P., Haim, P., et al. 2022, Nature Astronomy, 6, 259
* Bailer-Jones et al. (2021) Bailer-Jones, C. A. L., Rybizki, J., Fouesneau, M., Demleitner, M., & Andrae, R. 2021, AJ, 161, 147
* Bendre et al. (2015) Bendre, A., Gressel, O., & Elstner, D. 2015, Astronomische Nachrichten, 336, 991
* Bhattacharjee & Sigl (2000) Bhattacharjee, P. & Sigl, G. 2000, Phys. Rep, 327, 109
* Calvez et al. (2010) Calvez, A., Kusenko, A., & Nagataki, S. 2010, Phys. Rev. Lett., 105, 091101
* Chandrasekhar & Fermi (1953) Chandrasekhar, S. & Fermi, E. 1953, ApJ, 118, 113
* Clark & Hensley (2019) Clark, S. E. & Hensley, B. S. 2019, ApJ, 887, 136
* Davis (1951) Davis, L. 1951, Physical Review, 81, 890
* di Matteo et al. (2023) di Matteo, A., Anchordoqui, L., Bister, T., et al. 2023, arXiv e-prints, arXiv:2302.04502
* Edenhofer et al. (2024) Edenhofer, G., Frank, P., Roth, J., et al. 2024, arXiv e-prints, arXiv:2402.16683
* Edenhofer et al. (2023) Edenhofer, G., Zucker, C., Frank, P., et al. 2023, arXiv e-prints, arXiv:2308.01295
* Enßlin (2022) Enßlin, T. 2022, Entropy, 24, 374
* Enßlin (2019) Enßlin, T. A. 2019, Annalen der Physik, 531, 1970017
* Enßlin et al. (2009) Enßlin, T. A., Frommert, M., & Kitaura, F. S. 2009, Phys. Rev. D, 80, 105005
* Fang & Murase (2018) Fang, K. & Murase, K. 2018, Nature Physics, 14, 396
* Frank et al. (2021) Frank, P., Leike, R., & Enßlin, T. A. 2021, Entropy, 23
* Gaia Collaboration et al. (2021) Gaia Collaboration, Brown, A. G. A., Vallenari, A., et al. 2021, A&A, 649, A1
* Gaia Collaboration et al. (2016) Gaia Collaboration, Prusti, T., de Bruijne, J. H. J., et al. 2016, A&A, 595, A1
* Gilmore et al. (1989) Gilmore, G., Wyse, R. F. G., & Kuijken, K. 1989, ARA&A, 27, 555
* Green et al. (2019) Green, G. M., Schlafly, E., Zucker, C., Speagle, J. S., & Finkbeiner, D. 2019, ApJ, 887, 93
* Hutschenreuter et al. (2023) Hutschenreuter, S., Haverkorn, M., Frank, P., Raycheva, N. C., & Enßlin, T. A. 2023, arXiv e-prints, arXiv:2304.12350
* Jaffe et al. (2012) Jaffe, T., Waelkens, A., Reinecke, M., Kitaura, F. S., & Ensslin, T. A. 2012, Hammurabi: Simulating polarized Galactic synchrotron emission, Astrophysics Source Code Library, record ascl:1201.014
* Jansson & Farrar (2012a) Jansson, R. & Farrar, G. R. 2012a, ApJ, 757, 14
* Jansson & Farrar (2012b) Jansson, R. & Farrar, G. R. 2012b, ApJ, 761, L11
* Jiang et al. (2021) Jiang, Y., Zhang, B. T., & Murase, K. 2021, Phys. Rev. D, 104, 043017
* Kawata et al. (2019) Kawata, K., di Matteo, A., Fujii, T., et al. 2019, in International Cosmic Ray Conference, Vol. 36, 36th International Cosmic Ray Conference (ICRC2019), 310
* Kotera & Olinto (2011) Kotera, K. & Olinto, A. V. 2011, ARA&A, 49, 119
* Kusenko (2011) Kusenko, A. 2011, Nuclear Physics B - Proceedings Supplements, 212-213, 194, proceedings of the Cosmic Ray International Seminars (CRIS 2010)
* Lallement et al. (2019) Lallement, R., Babusiaux, C., Vergely, J. L., et al. 2019, A&A, 625, A135
* Lallement et al. (2018) Lallement, R., Capitanio, L., Ruiz-Dern, L., et al. 2018, A&A, 616, A132
* Lallement et al. (2022) Lallement, R., Vergely, J. L., Babusiaux, C., & Cox, N. L. J. 2022, A&A, 661, A147
* Leike et al. (2022) Leike, R. H., Edenhofer, G., Knollmüller, J., et al. 2022, arXiv e-prints, arXiv:2204.11715
* Leike & Enßlin (2019) Leike, R. H. & Enßlin, T. A. 2019, A&A, 631, A32
* Leike et al. (2020) Leike, R. H., Glatzle, M., & Enßlin, T. A. 2020, A&A, 639, A138
* Magalhães (2012) Magalhães, A. M. 2012, in Science from the Next Generation Imaging and Spectroscopic Surveys, 7
* Maharana et al. (2022) Maharana, S., Anche, R. M., Ramaprakash, A. N., et al. 2022, Journal of Astronomical Telescopes, Instruments, and Systems, 8, 038004
* Maharana et al. (2021) Maharana, S., Kypriotakis, J. A., Ramaprakash, A. N., et al. 2021, Journal of Astronomical Telescopes, Instruments, and Systems, 7, 014004
* Murase (2019) Murase, K. 2019, in International Cosmic Ray Conference, Vol. 36, 36th International Cosmic Ray Conference (ICRC2019), 965
* Pandhi et al. (2022) Pandhi, A., Hutschenreuter, S., West, J. L., Gaensler, B. M., & Stock, A. 2022, MNRAS, 516, 4739
* Panopoulou et al. (2017) Panopoulou, G. V., Psaradaki, I., Skalidis, R., Tassis, K., & Andrews, J. J. 2017, MNRAS, 466, 2529
* Panopoulou et al. (2019) Panopoulou, G. V., Tassis, K., Skalidis, R., et al. 2019, ApJ, 872, 56
* Pelgrims et al. (2022) Pelgrims, V., Panopoulou, G. V., Tassis, K., et al. 2022, arXiv e-prints, arXiv:2208.02278
* Pierre Auger Collaboration et al. (2017) Pierre Auger Collaboration, Aab, A., Abreu, P., et al. 2017, Science, 357, 1266
* Pratt (1972) Pratt, W. 1972, IEEE Transactions on Computers, C-21, 636
* Sánchez-Salcedo et al. (2002) Sánchez-Salcedo, F. J., Vázquez-Semadeni, E., & Gazol, A. 2002, ApJ, 577, 768
* Selig et al. (2013) Selig, M., Bell, M. R., Junklewitz, H., et al. 2013, A&A, 554, A26
* Skalidis et al. (2021) Skalidis, R., Sternberg, J., Beattie, J. R., Pavlidou, V., & Tassis, K. 2021, A&A, 656, A118
* Skalidis & Tassis (2021) Skalidis, R. & Tassis, K. 2021, A&A, 647, A186
* Steininger et al. (2017) Steininger, T., Dixit, J., Frank, P., et al. 2017, ArXiv e-prints [arXiv:1708.01073]
* Sun & Reich (2010) Sun, X.-H. & Reich, W. 2010, Research in Astronomy and Astrophysics, 10, 1287
* Sun et al. (2008) Sun, X. H., Reich, W., Waelkens, A., & Enßlin, T. A. 2008, A&A, 477, 573
* Tahani et al. (2022a) Tahani, M., Glover, J., Lupypciw, W., et al. 2022a, A&A, 660, L7
* Tahani et al. (2022b) Tahani, M., Lupypciw, W., Glover, J., et al. 2022b, A&A, 660, A97
* Takami & Sato (2010) Takami, H. & Sato, K. 2010, ApJ, 724, 1456
* Tassis et al. (2018) Tassis, K., Ramaprakash, A. N., Readhead, A. C. S., et al. 2018, arXiv e-prints, arXiv:1810.05652
* Torres & Anchordoqui (2004) Torres, D. F. & Anchordoqui, L. A. 2004, Reports on Progress in Physics, 67, 1663
* Tritsis et al. (2019) Tritsis, A., Federrath, C., & Pavlidou, V. 2019, ApJ, 873, 38
* Tritsis et al. (2018) Tritsis, A., Federrath, C., Schneider, N., & Tassis, K. 2018, MNRAS, 481, 5275
* Tsouros et al. (2024) Tsouros, A., Edenhofer, G., Enßlin, T., Mastorakis, M., & Pavlidou, V. 2024, A&A, 681, A111
* Ziegler (2004) Ziegler, U. 2004, Computer Physics Communications, 157, 207
In the dynamic landscape of digital information, the rise of misinformation and fake news presents a pressing challenge. This paper takes a completely new approach to verifying news, inspired by how quantum actors can reach agreement even when they are spatially spread out. We propose what is, to the best of our knowledge, a radically new algorithm that uses quantum “entanglement” (think of it as a special connection) to help news aggregators sniff out bad actors, whether they be other news sources or even fact-checkers trying to spread misinformation. This algorithm doesn't rely on quantum signatures; it just uses basic quantum technology we already have, in particular, special pairs of particles called “EPR pairs” that are much easier to create than other options. More complex entangled states are like juggling too many balls: they're hard to make and slow things down, especially when many players are involved. For instance, bigger, more complex states like “GHZ states” work for small groups, but they become messy with larger numbers. So, we stick with Bell states, the simplest form of entanglement, which are easy to generate no matter how many players are in the game. This means our algorithm is faster to set up, works for any number of participants, and is more practical for real-world use. Bonus points: it finishes in a fixed number of steps, regardless of how many players are involved, making it even more scalable. This new approach may lead to a powerful and efficient way to fight misinformation in the digital age, using the weird and wonderful world of quantum mechanics.
Keywords: Fake news, quantum algorithms, quantum entanglement, Bell states, quantum games.
§ INTRODUCTION
In the contemporary digital era, the proliferation of fake news, defined as deliberately false information masquerading as legitimate news, has emerged as a pervasive challenge across online and social media platforms. The rapid dissemination of misinformation poses serious repercussions, eroding trust in institutions, inciting violence, and undermining democratic processes. The urgent need for robust fake news detection mechanisms is more pronounced than ever. Fake news flourishes in the online realm due to several contributing factors. The accessibility of content creation and sharing, coupled with the absence of stringent oversight and anonymity, empowers malicious actors to disseminate false narratives with impunity. Furthermore, social media algorithms, often designed to prioritize sensational and engaging content, inadvertently amplify the reach of fake news.
The prevalence of fake news underscores the necessity for the development and deployment of effective detection techniques. One strategy involves leveraging fact-checking organizations to manually verify the veracity of information. However, this approach is labor-intensive and unable to cope with the sheer volume of content produced daily. Alternatively, employing artificial intelligence (AI) and machine learning (ML) methodologies offers promise in automatically identifying fake news. These algorithms can scrutinize various factors, including language usage, writing style, and source reliability, to flag potentially misleading content. Despite the potential of AI-driven detection methods, they encounter challenges. Fake news purveyors continuously adapt their strategies to evade detection, and AI models may exhibit biases or inaccuracies if trained on inadequate or skewed datasets. A paradox emerges as AI, while possessing the capability to identify and mitigate false news, simultaneously facilitates the proliferation of such deceptive online content. Notwithstanding these obstacles, the pursuit of effective fake news detection remains imperative. By combating the dissemination of misinformation, we safeguard individuals and society against its deleterious effects, nurturing a more informed, civil, and democratic online discourse.
Recent research ([1]) underscores the necessity for social media platforms to integrate diverse content verification techniques alongside existing algorithms and AI approaches to combat false news effectively. Additionally, taxonomy frameworks like the one proposed in [2, 3], which categorizes false news into distinct types, can aid social media platforms in alerting users to potentially misleading content, contingent upon agreed-upon standards for content analysis. The endeavor to automate the detection and prevention of false news presents formidable challenges, particularly concerning the assessment of content legitimacy ([4, 5]). Contemporary efforts predominantly rely on machine learning techniques to identify and mitigate fake news articles, as evidenced by numerous recent scholarly works ([6, 7, 8, 9, 10, 11, 12, 13]). The fusion of AI with blockchain technology emerges as a promising avenue for combating fake news ([14]). This approach offers a decentralized and trustworthy platform for validating consent, authenticity, integrity, and perspectives on truth, thereby mitigating the spread of false narratives.
The quest for quantum computers that dethrone their classical counterparts continues. While we haven't reached the promised land yet, recent landmarks like IBM's 127-qubit Eagle [15], 433-qubit Osprey [16], and the colossal 1,121-qubit Condor [17] show the path forward is accelerating. Perhaps we are closer than we think to the quantum revolution. Given this broader context, it becomes evident that quantum technology has reached a level of maturity where it merits serious consideration for inclusion in a comprehensive framework aimed at combating misinformation, especially considering the potential of quantum computers to enhance the speed and efficiency of Machine Learning algorithms. Researchers are actively pursuing the development of algorithmic methods that could effectively detect fake news and deepfakes by integrating Quantum Machine Learning techniques [18]. Quantum Machine Learning seeks to merge the principles of quantum computing with those of Machine Learning, offering tangible advantages such as improved Deep Fake detection [19]. Tian et al. [20] proposed a fake news detection system utilizing quantum K-Nearest Neighbors. Furthermore, Google has introduced an open-source library for Quantum Machine Learning, suggesting the potential for quantum computing to address fake and deepfake challenges in the near term [21].
However, this paper explores a distinct quantum approach. It does not rely on Quantum Machine Learning, but, instead, draws inspiration from successful quantum protocols that achieve distributed consensus and detectable Byzantine Agreement in distributed environments (refer to recent work by Andronikos et al. [22] and related literature). We acknowledge the prevalent practice in contemporary social media platforms, wherein independent third-party fact-checking organizations, certified by impartial authorities, are employed to identify, assess, and take action on content. This fact-checking methodology primarily targets viral misinformation, particularly blatant falsehoods lacking factual basis. Ideally, fact-checking entities prioritize verifying demonstrably false claims that are both timely and impactful. Naturally, fact-checkers themselves should be subject to scrutiny and ongoing evaluation. The algorithm proposed herein envisions a decentralized setting where numerous news aggregators are overseen by multiple news verifiers responsible for content authentication. Described as a quantum game, our algorithm involves familiar figures such as Alice and Bob, alongside their numerous counterparts. Employing games in our presentation aims to facilitate comprehension of technical concepts. Quantum games, since their inception in 1999 [23, 24], have garnered significant attention. The main reason for this trend is the potential superiority of quantum strategies over classical ones [25, 26, 27, 28]. Notably, the renowned prisoners' dilemma game exemplifies this phenomenon, extending to other abstract quantum games as well [24, 29]. Moreover, the quantization of various classical systems can be applied to political structures, as demonstrated recently in [30]. In the realm of quantum cryptography, the presentation of protocols often takes the form of games, a common practice evident in recent works such as [31, 32, 33, 34, 22, 35]. In the broader context of game-theoretic applications, unconventional environments, such as biological systems, have garnered significant attention [36, 37, 38]. It's intriguing to note that biological systems may give rise to biostrategies that outperform classical ones, even in iconic games like the Prisoners' Dilemma [39, 40, 41, 42, 43].
Contribution. This paper introduces a novel perspective on the pressing issue of news verification by drawing inspiration from quantum protocols achieving distributed consensus, diverging from the more conventional Quantum Machine Learning approach. We present the first entanglement-based algorithm, to the best of our knowledge, designed for news aggregators to identify potential malicious actors. These actors could include other news aggregators or, even more concerning, fact-checkers intentionally disseminating misinformation.
The key advantage of our algorithm, which does not rely on a quantum signature scheme, lies in its compatibility with current quantum technology, as it solely depends on EPR pairs. While more complex multi-particle entangled states are possible, they are challenging to produce with existing quantum systems, leading to extended preparation times and complexity, particularly in scenarios involving numerous participants. For example, while contemporary quantum computers can easily generate $\ket{GHZ_n}$ states for small values of $n$, the preparation and distribution of these states become increasingly difficult as $n$ grows. Therefore, we exclusively employ Bell states, the simplest among maximally entangled states, in our algorithm.
Utilizing only EPR pairs, specifically $\ket{\Phi^+}$ pairs, regardless of the number of news aggregators and verifiers, results in reduced preparation time, improved scalability, and enhanced practicality. Additionally, our algorithm completes in a constant number of steps, irrespective of the number of participants, further enhancing its scalability and efficiency.
§.§ Organization
This article is organized as follows. The Introduction (Section <ref>) presents the subject matter, accompanied by bibliographic references to related works. A concise overview of the essential concepts is provided in Section <ref>, laying the foundation for understanding our protocol. A detailed explanation of the hypotheses underlying the Quantum News Verification Algorithm (QNVA) is given in Section <ref>. The QNVA is formally presented in Section <ref>, where its inner workings are explained in detail.
The paper concludes with a summary and discussion of the finer points of the algorithm in Section <ref>.
§ BACKGROUND & NOTATION
§.§ EPR pairs
Quantum entanglement is a phenomenon where two or more particles become linked in such a way that they share the same fate, despite being separated by vast distances. This connection is so powerful that measuring the properties of one particle instantly determines the corresponding properties of its entangled partner, regardless of the separation between them. This instantaneous correlation defies our classical understanding of the universe and has profound implications for quantum mechanics and its potential applications. Mathematically, a single product state is not sufficient to describe entangled states of composite systems. So, they must be described as a linear combination of two or more product states of their subsystems. The famous Bell states are special quantum states of two qubits, also called EPR pairs, that represent the simplest form of maximal entanglement. These states can be compactly described by the following formula from [44].
\begin{align} \label{eq: Bell States General Equation}
\ket{ \beta_{ x, y } } = \frac { \ket{ 0 } \ket{ y } + (-1)^x \ket{ 1 } \ket{ \overline{ y } } } { \sqrt{ 2 } } \ ,
\end{align}
where $\ket{ \overline{ y } }$ is the negation of $\ket{ y }$.
There are four Bell states and their specific mathematical expression is given below. The subscripts $A$ and $B$ are used to emphasize the subsystem to which the corresponding qubit belongs, that is, qubits $\ket{ \cdot }_{ A }$ belong to Alice and qubits $\ket{ \cdot }_{ B }$ belong to Bob.
\begin{align} \label{eq: Bell State Phi +}
\ket{ \Phi^{ + } } = \ket{ \beta_{ 00 } } = \frac { \ket{ 0 }_{ A } \ket{ 0 }_{ B } + \ket{ 1 }_{ A } \ket{ 1 }_{ B } } { \sqrt{ 2 } }
\end{align}
\begin{align} \label{eq: Bell State Phi -}
\ket{ \Phi^{ - } } = \ket{ \beta_{ 10 } } = \frac { \ket{ 0 }_{ A } \ket{ 0 }_{ B } - \ket{ 1 }_{ A } \ket{ 1 }_{ B } } { \sqrt{ 2 } }
\end{align}
\begin{align} \label{eq: Bell State Psi +}
\ket{ \Psi^{ + } } = \ket{ \beta_{ 01 } } = \frac { \ket{ 0 }_{ A } \ket{ 1 }_{ B } + \ket{ 1 }_{ A } \ket{ 0 }_{ B } } { \sqrt{ 2 } }
\end{align}
\begin{align} \label{eq: Bell State Psi -}
\ket{ \Psi^{ - } } = \ket{ \beta_{ 11 } } = \frac { \ket{ 0 }_{ A } \ket{ 1 }_{ B } - \ket{ 1 }_{ A } \ket{ 0 }_{ B } } { \sqrt{ 2 } }
\end{align}
For existing quantum computers based on the circuit model, it is quite trivial to produce Bell states. The proposed algorithm relies on $\ket{ \Phi^{ + } } = \frac { \ket{ 0 }_{ A } \ket{ 0 }_{ B } + \ket{ 1 }_{ A } \ket{ 1 }_{ B } } { \sqrt{ 2 } }$ pairs.
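For instance, a minimal Qiskit sketch of the textbook two-gate circuit preparing $\ket{ \Phi^{ + } }$ is given below (shown only for illustration; the algorithm assumes a trusted quantum source rather than any particular device):

```python
from qiskit import QuantumCircuit

# |Phi+> = (|00> + |11>)/sqrt(2): the Hadamard puts qubit 0 into
# (|0> + |1>)/sqrt(2), and the CNOT copies that superposition onto
# qubit 1, entangling the pair.
bell = QuantumCircuit(2)
bell.h(0)
bell.cx(0, 1)
```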
§.§ The state $\ket{ + }$
Apart from $\ket{ \Phi^{ + } }$ pairs, our scheme makes use of another signature state, namely $\ket{ + }$. For completeness, we recall the definition of $\ket{ + }$ below
\begin{align} \label{eq: Ket +}
\ket{ + } = H \ket{ 0 } = \frac { \ket{ 0 } + \ket{ 1 } } { \sqrt{ 2 } }
\ .
\end{align}
State $\ket{ + }$ can be readily produced by applying the Hadamard transform on $\ket{ 0 }$. In the rest of this paper, the set of bit values $\{ 0, 1 \}$ is denoted by $\mathbb { B }$. As a final note, let us clarify that measurements are always performed with respect to the computational basis $\{ \ket{ 0 }, \ket{ 1 } \}$.
§ HYPOTHESES & SETTING
Before we proceed with the comprehensive presentation of the proposed algorithm, it is useful to explicitly state the envisioned setting and the hypotheses that underlie the execution of our algorithm.
§.§ The protagonists
As we have previously mentioned, we follow what can, almost, be considered a tradition and describe the proposed algorithm as a game. The players in this game are the spatially distributed news verifiers and news aggregators, collectively called Alice and Bob. First, we state the actors that appear in our settings and clarify the roles they are supposed to play.
* A trusted quantum source. There exists a trusted quantum source that generates single qubits in the $\ket{ + }$ state and EPR pairs in the $\ket{ \Phi^{ + } }$ state. The source also distributes these qubits to all other players through appropriate quantum channels, according to the entanglement distribution scheme outlined in the forthcoming Definition <ref>.
* News verifiers. There exist $m$ special players that are called news verifiers. Their mission is to fact-check every piece of news and classify it as true or fake. In our game this role is undertaken by Alice and her clones, who are denoted by Alice$_{ 1 }$, …, Alice$_{ m }$. The news verifiers work independently of each other, and no communication, classical or quantum, takes place between any two of them.
* News aggregators. There are also $n$ players that are called news aggregators, and whose purpose is to gather and disseminate news that have been certified as true. This role is assumed by Bob and his clones that are denoted by Bob$_{ 1 }$, …, Bob$_{ n }$, where, typically, $n > m$.
§.§ The connections among the players
Besides the players listed above, there is a network of quantum and classical channels that enables the exchange of information among the players. In particular, we assume the existence of the following communication channels.
* It is more realistic to consider that each Alice clone is not responsible for all news aggregators, but only for a specific group of news aggregators that are under her supervision. Henceforth, we shall assume that each Alice$_{ i }$, $1 \leq i \leq m$, is connected via pairwise authenticated classical channels to a specific subset of news aggregators who constitute her active network, and Alice$_{ i }$ is their coordinator. These aggregators are Alice$_{ i }$'s receivers, their cardinality is denoted by $n_{ i }$ and they are designated by Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$.
* Each Alice$_{ i }$, $1 \leq i \leq m$, sends two things to every Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, in her active network:
$\Diamond$ The result of the verification check, denoted by $c_{ k }^{ i }$.
$\Diamond$ A proof sequence, denoted by $\mathbf { p }_{ k }^{ i }$, which is intended to convince Bob$_{ k }^{ i }$ that she is honest.
The situation is visually depicted in Figure <ref>.
[Figure: Alice$_{ i }$ at the centre of her active network, sending the pair $c_{ k }^{ i }$, $\mathbf { p }_{ k }^{ i }$ along a channel to each of Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$.]
The above figure illustrates the fact that Alice$_{ i }$, $1 \leq i \leq m$, sends the verification outcome $c_{ k }^{ i }$ and the proof sequence $\mathbf { p }_{ k }^{ i }$ to every Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, in her active network.
* All news aggregators that belong to the same active network are connected via pairwise authenticated classical channels. This enables them to exchange, whenever they deem necessary, the verification outcomes and the proof sequences they received from their coordinator. This action can be considered as an extra layer of verification and an indirect way in which aggregators can assess the honesty of other aggregators and also of the coordinator. This topology is shown in Figure <ref>. We clarify that aggregators that have no coordinator in common do not communicate in any way.
[Figure: The aggregators Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$ of the same active network, exchanging the pairs $c_{ k }^{ i }$, $\mathbf { p }_{ k }^{ i }$ over pairwise channels.]
The above figure illustrates the fact that all news aggregators with the same coordinator Alice$_{ i }$, i.e., Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$, can exchange the verification outcomes and the proof sequences they received from Alice$_{ i }$ through pairwise authenticated classical channels.
* Every news aggregator is responsible for maintaining the reputation system outlined below, independently, and in parallel with every other news aggregator.
$\Diamond$ A news ranking system that characterizes news as either true or fake.
$\Diamond$ A reputation catalog that reflects the personal assessment of the aggregator regarding every other player (both verifier and aggregator) involved in information exchange.
News deemed as fake must be appropriately flagged as such, so that the public is made aware of this. The reputation catalog takes the form of two lists containing the unreliable verifiers and aggregators.
The intuition behind the latter hypothesis is quite straightforward. It is expedient to record, and consider unreliable, those players that have exhibited contradictory and/or malicious behavior, and to distinguish them from those players that have demonstrated consistent adherence to the rules and have a history of accurate and truthful reporting. By maintaining these records, each aggregator plays an important role in ensuring the integrity and efficiency of the news network. By identifying and isolating unreliable entities, he helps to protect the network from malicious actors and maintain trust among participants.
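A minimal sketch of the bookkeeping implied by this hypothesis is given below; all names and the choice of plain Python containers are our own illustrative assumptions, not part of the algorithm's specification.

```python
from dataclasses import dataclass, field

@dataclass
class AggregatorRecords:
    """Per-aggregator state: a news ranking and a reputation catalog
    consisting of two lists of unreliable players."""
    news_ranking: dict = field(default_factory=dict)           # news id -> True (true) / False (fake)
    unreliable_verifiers: set = field(default_factory=set)     # reputation catalog, list 1
    unreliable_aggregators: set = field(default_factory=set)   # reputation catalog, list 2

    def flag_fake(self, news_id):
        # News deemed fake must be flagged so the public is made aware of it.
        self.news_ranking[news_id] = False
```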
§ THE QUANTUM NEWS VERIFICATION ALGORITHM
In this section we present the quantum news verification algorithm, or QNVA for short. Every Alice is tasked with verifying important news, and sending the result of her verification check to all her agents.
$\Diamond$ If the news in question passed the verification check, then Alice sends via the classical channel the bit $1$ to every Bob in her active network to signify its credibility. Additionally, she sends a personalized proof, which is a sequence of bits, to each of her agents. The important thing here is that for each Bob the proof is different because it is constructed specifically for him.
$\Diamond$ Symmetrically, if the news failed to pass the check, Alice sends via the classical channel the bit $0$ to every agent in her active network to indicate that it is fake, together with a personalized proof.
The QNVA is presented from the point of view of the individual Bob, where, of course, we assume that all Bobs implement the same algorithm consistently. The algorithm itself can be conceptually organized into $3$ distinct phases.
§.§ The entanglement distribution phase
The first is the entanglement distribution phase, which refers to the generation and distribution of entangled $\ket{ \Phi^{ + } }$ pairs and qubits in the $\ket{ + }$ state. As we have explained in hypothesis ($\mathbf { H }_{ 1 }$), we assume the existence of a trusted quantum source that undertakes this task. In view of the capabilities of modern quantum apparatus, the trusted quantum source should have no difficulty in accomplishing this task. In terms of notation, we use the small Latin letters $q$ and $r$, with appropriate subscripts and superscripts, to designate the first and the second qubit, respectively, of the same $\ket{ \Phi^{ + } }$ pair.
The trusted source creates two types of sequences, one that is intended for verifiers and one that is intended for aggregators. Both are distributed through quantum channels to their intended recipients. Specifically, for each piece of news that must be checked, and for each Alice$_{ i }$, $1 \leq i \leq m$, the source creates
$\Diamond$ one verification sequence $\mathbf { q }^{ i }$ that is sent to Alice$_{ i }$ and has the form
\begin{align}
\mathbf { q }^{ i }
=
\underbrace { \colorbox {WordAquaLighter40} { $q_{ n_{ i }, d }^{ i } \dots q_{ k, d }^{ i } \dots q_{ 1, d }^{ i }$ } }_{ \text{ tuple } d }
\cdots
\underbrace { \colorbox {WordAquaLighter60} { $q_{ n_{ i }, 2 }^{ i } \dots q_{ k, 2 }^{ i } \dots q_{ 1, 2 }^{ i }$ } }_{ \text{ tuple } 2 }
\underbrace { \colorbox {WordAquaLighter80} { $q_{ n_{ i }, 1 }^{ i } \dots q_{ k, 1 }^{ i } \dots q_{ 1, 1 }^{ i }$ } }_{ \text{ tuple } 1 }
\ , \text{ and }
\label{eq: Alice's Verification Sequence}
\end{align}
$\Diamond$ $n_{ i }$ verification sequences $\mathbf { r }_{ 1 }^{ i }$, $\mathbf { r }_{ 2 }^{ i }$, …, $\mathbf { r }_{ n_{ i } }^{ i }$ sent to Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$, respectively, that have the form
\begin{align}
\mathbf { r }_{ k }^{ i }
=
\underbrace { \colorbox {WordLightGreen} { $\ket{ + } \dots r_{ k, d }^{ i } \dots \ket{ + }$ } }_{ \text{ tuple } d }
\cdots
\underbrace { \colorbox {WordLightGreen!66} { $\ket{ + } \dots r_{ k, 2 }^{ i } \dots \ket{ + }$ } }_{ \text{ tuple } 2 } \
\underbrace { \colorbox {WordLightGreen!33} { $\ket{ + } \dots r_{ k, 1 }^{ i } \dots \ket{ + }$ } }_{ \text{ tuple } 1 }
\ , \ 1 \leq k \leq n_{ i }
\ .
\label{eq: Bob's Verification Sequence}
\end{align}
In the $\mathbf { q }^{ i }$ and $\mathbf { r }_{ 1 }^{ i }$, $\mathbf { r }_{ 2 }^{ i }$, …, $\mathbf { r }_{ n_{ i } }^{ i }$ sequences, the subscript $d$ is a positive integer called the degree of accuracy of the verification. Furthermore, according to our convention, $q_{ k, l }^{ i }$ and $r_{ k, l }^{ i }$ denote the first and second qubits of the same $\ket{ \Phi^{ + } }$ pair that is used in the formation of the $l^{ th }$ tuple. Obviously, $\ket{ + }$ designates qubits that are in the $\ket{ + }$ state.
The situation regarding the sequences of qubits distributed to Alice$_{ i }$ and the Bobs in her active network is visualized in Figure <ref>.
[Figure <ref>. The entangled sequences of qubits distributed to Alice$_{ i }$ and the news aggregators in her active network. Qubits that belong to the same $\ket{ \Phi^{ + } }$ pair are indicated by the same color and a wavy line that connects them.]
The visual convention is to draw qubits that belong to the same $\ket{ \Phi^{ + } }$ pair with the same color. Blue is used to indicate the EPR pairs shared between Alice$_{ i }$ and Bob$_{ 1 }^{ i }$, which occupy position $1$ in each $n_{ i }$-tuple of the $\mathbf { q }^{ i }$ and $\mathbf { r }_{ 1 }^{ i }$ sequences. Analogously, green is used for the EPR pairs shared between Alice$_{ i }$ and Bob$_{ 2 }^{ i }$ located in the second position of every $n_{ i }$-tuple of the $\mathbf { q }^{ i }$ and $\mathbf { r }_{ 2 }^{ i }$ sequences, and red signifies EPR pairs linking Alice$_{ i }$ and Bob$_{ n_{ i } }^{ i }$ occupying the last position of every $n_{ i }$-tuple of the $\mathbf { q }^{ i }$ and $\mathbf { r }_{ n_{ i } }^{ i }$ sequences. The silver qubits designate those in the $\ket{ + }$ state that fill the remaining positions of the sequences $\mathbf { r }_{ 1 }^{ i }$, $\mathbf { r }_{ 2 }^{ i }$, …, $\mathbf { r }_{ n_{ i } }^{ i }$. The intuition behind the construction of the above quantum sequences is outlined below.
* Alice$_{ i }$, $1 \leq i \leq m$, is linked to each one of her agents Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$ because her verification sequence $\mathbf { q }^{ i }$ is entangled with their verification sequences $\mathbf { r }_{ 1 }^{ i }$, $\mathbf { r }_{ 2 }^{ i }$, …, $\mathbf { r }_{ n_{ i } }^{ i }$.
* All these quantum sequences are made up of $d$ $n_{ i }$-tuples of qubits in total.
* Sequence $\mathbf { q }^{ i }$ is made up exclusively of entangled qubits.
* In $\mathbf { q }^{ i }$ the qubits in position $1$, namely $q_{ 1, 1 }^{ i }$, $q_{ 1, 2 }^{ i }$, …, $q_{ 1, d }^{ i }$, are entangled with the corresponding qubits $r_{ 1, 1 }^{ i }$, $r_{ 1, 2 }^{ i }$, …, $r_{ 1, d }^{ i }$ of the sequence $\mathbf { r }_{ 1 }^{ i }$ that belongs to Bob$_{ 1 }^{ i }$. This is because $q_{ 1, l }^{ i }$ and $r_{ 1, l }^{ i }$, $1 \leq l \leq d$, belong to the same $\ket{ \Phi^{ + } }$ pair by construction.
* For precisely the same reason, the qubits in position $k, \ k = 2, \dots, n_{ i }$, i.e., $q_{ k, 1 }^{ i }$, $q_{ k, 2 }^{ i }$, …, $q_{ k, d }^{ i }$, are entangled with the corresponding qubits $r_{ k, 1 }^{ i }$, $r_{ k, 2 }^{ i }$, …, $r_{ k, d }^{ i }$ of the sequence $\mathbf { r }_{ k }^{ i }$ owned by Bob$_{ k }^{ i }$.
* In every sequence $\mathbf { r }_{ k }^{ i }$, $k = 1, \dots, n_{ i }$, the qubits $r_{ k, l }^{ i }$, $l = 1, \dots, d$, that occupy the $k^{ th }$ position in each $n_{ i }$-tuple, are entangled with the corresponding qubits $q_{ k, l }^{ i }$ of $\mathbf { q }^{ i }$. All other qubits are in the $\ket{ + }$ state.
In Section <ref>, where we discuss the effect of the degree of accuracy of the QNVA, we shall suggest appropriate values for $d$.
In view of the structural form of the sequences defined by formulae (<ref>) and (<ref>), we also refer to them as $( d, n_{ i } )$ quantum sequences because they are constructed by $d$ repetitions of structurally similar tuples of the same length, namely $n_{ i }$. These $d$ tuples are enumerated as shown in (<ref>) and (<ref>); that is, tuple $1$ is the rightmost tuple and tuple $d$ is the leftmost tuple.
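To make these indexing conventions concrete, we include a minimal Python sketch (an illustration of ours, not part of the protocol, with hypothetical function names): it lays out the $( d, n_{ i } )$ sequences symbolically, marking which slot of each $n_{ i }$-tuple of $\mathbf { r }_{ k }^{ i }$ is entangled with $\mathbf { q }^{ i }$ and which holds a $\ket{ + }$ qubit.

```python
# Symbolic layout of the (d, n_i) sequences. 'E' marks a qubit entangled
# with Alice's q^i, '+' marks a qubit prepared in the |+> state.
# Position 1 of each tuple is stored first; tuple 1 is stored first as well,
# matching the enumeration "tuple 1 rightmost, tuple d leftmost" above.
def bob_layout(k: int, d: int, n_i: int) -> list[list[str]]:
    """Tuples of r_k^i: only position k of every tuple is entangled."""
    return [['E' if pos == k else '+' for pos in range(1, n_i + 1)]
            for _ in range(d)]

def alice_layout(d: int, n_i: int) -> list[list[str]]:
    """Every position of every tuple of q^i is entangled with some Bob."""
    return [['E'] * n_i for _ in range(d)]

if __name__ == "__main__":
    d, n_i = 4, 3
    for k in range(1, n_i + 1):
        print(f"r_{k}:", bob_layout(k, d, n_i))
```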
§.§ Entanglement validation phase
Undoubtedly, this is a most crucial phase, as the entire algorithm hinges upon the existence of entanglement. Without guaranteed entanglement, the algorithm's functionality is compromised. The validation procedure can result in two distinct outcomes. If entanglement is successfully ascertained, the QNVA can proceed to confidently verify the information at hand. Failure to validate entanglement, which could stem from either noisy quantum channels or malicious interference from an adversary, means that the necessary entanglement is absent. Regardless of the cause, the only viable solution is to halt the ongoing execution of the algorithm and commence the entire procedure anew, after implementing corrective measures.
Given its utmost significance, this phase has undergone thorough scrutiny in the existing literature. Our algorithm adheres to the sophisticated methodologies outlined in prior works, including [45, 46, 47, 48, 49, 50]. Hence, to preclude redundant exposition, we direct the reader to the previously mentioned bibliography for all the details essential for the successful implementation of this phase.
§.§ The news verification phase
Our algorithm classifies news as true or fake during the third and last phase, aptly named the news verification phase. To initiate this phase, Alice$_{ i }$ and the agents in her active network, Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$, measure their quantum sequences to obtain the classical bit sequences denoted by the lowercase bold letters $\mathbf { a }^{ i }$ and $\mathbf { b }_{ 1 }^{ i }$, $\mathbf { b }_{ 2 }^{ i }$, …, $\mathbf { b }_{ n_{ i } }^{ i }$, respectively. Taking into account the entanglement distribution scheme of Definition <ref>, we see that the measurement reveals some important correlations among these sequences. These correlations are depicted in Figure <ref>.
[Figure <ref>. The classical bit sequences resulting after the measurement of the quantum sequences and their correlations.]
This figure shows the classical bit sequences that result after Alice and the news aggregators measure their quantum sequences. The correlations among pairs of bits in these sequences are visualized by drawing correlated pairs with the same color.
Upon measuring their qubit sequences, news verifier Alice$_{ i }$ and news aggregators Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$, obtain the classical bit sequences $\mathbf { a }^{ i }$ and $\mathbf { b }_{ 1 }^{ i }$, $\mathbf { b }_{ 2 }^{ i }$, …, $\mathbf { b }_{ n_{ i } }^{ i }$, respectively, that can be written explicitly as follows.
\begin{align}
\mathbf { a }^{ i }
& =
\underbrace { \colorbox {WordAquaLighter40} { $a_{ n_{ i }, d }^{ i } \dots a_{ k, d }^{ i } \dots a_{ 1, d }^{ i }$ } }_{ \text{ tuple } d }
\cdots
\underbrace { \colorbox {WordAquaLighter60} { $a_{ n_{ i }, 2 }^{ i } \dots a_{ k, 2 }^{ i } \dots a_{ 1, 2 }^{ i }$ } }_{ \text{ tuple } 2 }
\underbrace { \colorbox {WordAquaLighter80} { $a_{ n_{ i }, 1 }^{ i } \dots a_{ k, 1 }^{ i } \dots a_{ 1, 1 }^{ i }$ } }_{ \text{ tuple } 1 }
\ , \text{ and }
\label{eq: Alice's Classical Bit Sequence}
\\
\mathbf { b }_{ k }^{ i }
& =
\underbrace { \colorbox {WordLightGreen} { $b_{ n_{ i }, d }^{ i } \dots b_{ k, d }^{ i } \dots b_{ 1, d }^{ i }$ } }_{ \text{ tuple } d }
\cdots
\underbrace { \colorbox {WordLightGreen!50} { $b_{ n_{ i }, 2 }^{ i } \dots b_{ k, 2 }^{ i } \dots b_{ 1, 2 }^{ i }$ } }_{ \text{ tuple } 2 } \
\underbrace { \colorbox {WordLightGreen!25} { $b_{ n_{ i }, 1 }^{ i } \dots b_{ k, 1 }^{ i } \dots b_{ 1, 1 }^{ i }$ } }_{ \text{ tuple } 1 }
\label{eq: Bob's Classical Bit Sequence}
\ ,
\end{align}
where $k = 1, \dots, n_{ i }$.
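As a quick sanity check of these correlations, the following small simulation (our own sketch with hypothetical names; it models the measurement statistics classically) reproduces the structure of Figure <ref>: position $k$ of every tuple of $\mathbf { b }_{ k }^{ i }$ equals the corresponding bit of $\mathbf { a }^{ i }$, while the remaining positions, coming from $\ket{ + }$ qubits, are independent unbiased bits.

```python
import random

def measure(d: int, n_i: int, seed: int = 0):
    """Simulate measuring the (d, n_i) sequences: a[l][pos] is Alice's bit
    in position pos + 1 of tuple l + 1, and b[k][l][pos] is Bob_k's bit.
    A shared Phi+ pair yields equal bits; a |+> qubit yields an
    independent unbiased bit."""
    rng = random.Random(seed)
    a = [[rng.randint(0, 1) for _ in range(n_i)] for _ in range(d)]
    b = {k: [[a[l][k - 1] if pos == k - 1 else rng.randint(0, 1)
              for pos in range(n_i)]
             for l in range(d)]
         for k in range(1, n_i + 1)}
    return a, b

a, b = measure(d=8, n_i=3)
# Position k of every tuple of b_k reproduces Alice's bit, as in Figure <ref>.
assert all(b[k][l][k - 1] == a[l][k - 1] for k in b for l in range(8))
```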
Although the sequences defined by formulae (<ref>) and (<ref>) consist of classical bits, their structural form is identical to that of the original qubit sequences. So, we will also refer to them as $( d, n_{ i } )$ classical sequences because they are constructed by $d$ repetitions of structurally similar tuples of length $n_{ i }$. Following this line of thought, we may consider an arbitrary $( d, n_{ i } )$ sequence $\mathbf { s }$ made of symbols from some fixed alphabet, and express it in an alternative but equivalent way, emphasizing its composition in terms of $n_{ i }$-tuples, as follows.
\begin{align}
\mathbf { s }
=
\underbrace { \colorbox {WordAquaLighter40} { $s_{ n_{ i }, d } \dots s_{ 2, d } s_{ 1, d }$ } }_{ \text{ tuple } d }
\cdots
\underbrace { \colorbox {WordAquaLighter60} { $s_{ n_{ i }, 2 } \dots s_{ 2, 2 } s_{ 1, 2 }$ } }_{ \text{ tuple } 2 }
\underbrace { \colorbox {WordAquaLighter80} { $s_{ n_{ i }, 1 } \dots s_{ 2, 1 } s_{ 1, 1 }$ } }_{ \text{ tuple } 1 }
\nonumber \\
\mathbf { s }
=
\hspace{ 1.05 cm }
\overset { \downarrow } { \colorbox {MagentaVeryLight} { $\mathbf { s }_{ d }$ } }
\hspace{ 1.00 cm }
\cdots
\hspace{ 1.05 cm }
\overset { \downarrow } { \colorbox {MagentaVeryLight!40!MyLightRed} { $\mathbf { s }_{ 2 }$ } }
\hspace{ 2.00 cm }
\overset { \downarrow } { \colorbox {MyLightRed} { $\mathbf { s }_{ 1 }$ } }
\label{eq: Classical Bit Sequence Tuple Form}
\ .
\end{align}
Let us also clarify that by writing $\mathbf { s } = \mathbf { s }_{ d } \cdots \mathbf { s }_{ 2 } \mathbf { s }_{ 1 }$, where $\mathbf { s }_{ l } = s_{ n_{ i }, l } \dots s_{ 2, l } s_{ 1, l }$ and $1 \leq l \leq d$, we have effectively enumerated the $d$ tuples of $\mathbf { s }$ so that $1$ is the rightmost and $d$ is the leftmost tuple. In the rest of our exposition, we will also need a special $n_{ i }$-tuple that is constructed by using a new symbol $\ast$, different from $0$ and $1$. This tuple is denoted by $\mathbf { s }_{ \ast }$ and is referred to as the cryptic tuple.
\begin{align}
\mathbf { s }_{ \ast }
=
\underbrace { \colorbox {orange!25} { $\ast \ \dots \ \ast \ \ast$ } }_{ n_{ i } \text{ occurrences} }
\label{eq: The Special Tuple}
\ .
\end{align}
With the above convention, we may write Alice$_{ i }$'s bit sequence $\mathbf { a }^{ i }$ in the following form, emphasizing the fact that it is composed of $d$ tuples.
\begin{align}
\mathbf { a }^{ i }
=
\colorbox {WordAquaLighter40} { $\mathbf { a }_{ d }$ }
\cdots
\colorbox {WordAquaLighter60} { $\mathbf { a }_{ 2 }$ }
\colorbox {WordAquaLighter80} { $\mathbf { a }_{ 1 }$ }
\label{eq: Alice's Classical Bit Sequence Tuple Form}
% \ .
\end{align}
News verifier Alice$_{ i }$, $1 \leq i \leq m$, sends two things to all the news aggregators in her active network, namely Bob$_{ 1 }^{ i }$, Bob$_{ 2 }^{ i }$, …, Bob$_{ n_{ i } }^{ i }$:
$\Diamond$ The result of the verification check, denoted by $c_{ k }^{ i } \in \mathbb { B }$, which is a single bit. If the news is true, then $c_{ k }^{ i }$ is the bit $1$, whereas if the news is fake, $c_{ k }^{ i }$ is the bit $0$.
$\Diamond$ A proof sequence, denoted by $\mathbf { p }_{ k }^{ i }$, which is intended to convince Bob$_{ k }^{ i }$ that she is honest. Each proof sequence $\mathbf { p }_{ k }^{ i }$ is a $( d, n_{ i } )$ sequence of symbols from $\mathbb { B } \cup \{ \ast \}$, i.e., $\mathbf { p }_{ k }^{ i } = \mathbf { p }_{ d } \ \dots \ \mathbf { p }_{ 2 } \ \mathbf { p }_{ 1 }$. It is critical that these proof sequences be personalized, which effectively means they must be different for every news aggregator. Their construction is described below.
* If $c_{ k }^{ i } = 1$, the proof sequence $\mathbf { p }_{ k }^{ i }$ sent to news aggregator Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, also designated by $\mathds{ 1 }_{ k }^{ i }$ for emphasis, has the explicit form shown below.
\begin{align}
\mathds{ 1 }_{ k }^{ i }
=
\mathbf { p }_{ d } \ \dots \ \mathbf { p }_{ 2 } \ \mathbf { p }_{ 1 }
\ ,
\ \text{ where } \
\mathbf { p }_{ l }
=
\left\{
\begin{matrix*}[l]
\ \mathbf { a }_{ l } & \text{ if } a_{ k, l }^{ i } = 1 \\
\ \mathbf { s }_{ \ast } & \text{ if } a_{ k, l }^{ i } = 0
\end{matrix*}
\right.
\ , \ 1 \leq l \leq d \ .
\label{eq: Bob's k Proof Sequence for True}
\end{align}
* Symmetrically, if $c_{ k }^{ i } = 0$, the proof sequence sent to Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, denoted by $\vmathbb{ 0 }_{ k }^{ i }$ for emphasis, has the following explicit form.
\begin{align}
\vmathbb{ 0 }_{ k }^{ i }
=
\mathbf { p }_{ d } \ \dots \ \mathbf { p }_{ 2 } \ \mathbf { p }_{ 1 }
\ ,
\ \text{ where } \
\mathbf { p }_{ l }
=
\left\{
\begin{matrix*}[l]
\ \mathbf { a }_{ l } & \text{ if } a_{ k, l }^{ i } = 0 \\
\ \mathbf { s }_{ \ast } & \text{ if } a_{ k, l }^{ i } = 1
\end{matrix*}
\right.
\ , \ 1 \leq l \leq d \ .
\label{eq: Bob's k Proof Sequence for Fake}
\end{align}
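The construction above admits a compact classical model. The following Python sketch (illustrative only; tuples are stored as lists, the cryptic symbol as `'*'`, and the helper name is our own) covers both cases $c_{ k }^{ i } = 1$ and $c_{ k }^{ i } = 0$ at once: a tuple of $\mathbf { a }^{ i }$ is revealed exactly when its bit in position $k$ equals the announced outcome.

```python
CRYPTIC = '*'

def build_proof(a: list, k: int, c: int) -> list:
    """Proof sequence p_k^i for outcome c: tuple l of a^i is revealed when
    Alice's bit in position k equals c, and is replaced by the cryptic
    tuple otherwise (cf. the two cases of the definition above)."""
    n_i = len(a[0])
    return [list(t) if t[k - 1] == c else [CRYPTIC] * n_i for t in a]
```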
A proof sequence for a verification check $c_{ k }^{ i }$ that is faithfully constructed according to Definition <ref> is said to be consistent with $c_{ k }^{ i }$. The previous Definition <ref> guarantees that, no matter what the verification outcome is, Bob$_{ k }^{ i }$ receives a different proof sequence from every other Bob$_{ k^{ \prime } }^{ i }$ when $k^{ \prime } \neq k$. The other crucial property that characterizes the proof sequences is the fact that besides tuples comprised entirely of $0$ and $1$ bits, they also contain a statistically equal number of cryptic tuples consisting of the special symbol $\ast$. Probabilistic analysis allows Bob$_{ k }^{ i }$ to assess whether or not Alice$_{ i }$ and the other Bobs act honestly and consistently. To this end, it is necessary to examine the positions of the $n_{ i }$-tuples that contain specific combinations of bits.
Let
\begin{align*}
\mathbf { s }
=
\underbrace { s_{ n_{ i }, d } \dots s_{ 2, d } s_{ 1, d } }_{ \text{ tuple } d }
\cdots
\underbrace { s_{ n_{ i }, 2 } \dots s_{ 2, 2 } s_{ 1, 2 } }_{ \text{ tuple } 2 }
\underbrace { s_{ n_{ i }, 1 } \dots s_{ 2, 1 } s_{ 1, 1 } }_{ \text{ tuple } 1 }
\end{align*}
be a $( d, n_{ i } )$ sequence. If $k$ and $k^{ \prime }$, $1 \leq k \neq k^{ \prime } \leq n_{ i }$, are two given indices, and $x, y \in \mathbb { B }$, we define the following sets
\begin{align}
P_{ x } ( \mathbf { s }, k )
& =
\{ l \ \vert \ s_{ k, l } = x \}
\ ,
\label{eq: P_x Definition}
\\
P_{ x, y } ( \mathbf { s }, k, k^{ \prime } )
& =
\{ l \ \vert \ s_{ k, l } = x \wedge s_{ k^{ \prime }, l } = y \}
\ , \text{ and }
\label{eq: P_xy Definition}
\\
P_{ \ast } ( \mathbf { s } )
& =
\{ l \ \vert \ \mathbf { s }_{ l } = \mathbf { s }_{ \ast } \}
\ .
\label{eq: P_* Definition}
\end{align}
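Under the same classical model, the three index sets translate directly into Python (a sketch with hypothetical names; tuples are indexed from $1$ to $d$ and positions from $1$ to $n_{ i }$):

```python
def P_x(s: list, k: int, x) -> set:
    """Indices l with s_{k,l} = x; tuple l is s[l-1], position k is [k-1]."""
    return {l for l in range(1, len(s) + 1) if s[l - 1][k - 1] == x}

def P_xy(s: list, k: int, kp: int, x, y) -> set:
    """Indices l with s_{k,l} = x and s_{k',l} = y."""
    return {l for l in range(1, len(s) + 1)
            if s[l - 1][k - 1] == x and s[l - 1][kp - 1] == y}

def P_star(s: list) -> set:
    """Indices of the cryptic tuples, i.e., tuples made entirely of '*'."""
    return {l for l in range(1, len(s) + 1)
            if all(sym == '*' for sym in s[l - 1])}
```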
The previous definition completes the necessary machinery and notation for the presentation of the QNVA. We may now proceed to explain the QNVA in detail and, at the same time, prove its correctness. In the rest of this section, we present the algorithm from the point of view of the typical news aggregator Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$. In what follows, we use the notation $\lvert S \rvert$ to designate the number of elements of a given set $S$.
In today's complex news environment, malicious intent can manifest in many subtle ways. One may easily envision the following critical scenarios.
* An unreliable and dishonest Alice$_{ i }$ sends to Bob$_{ k }^{ i }$ the verification outcome $c_{ k }^{ i }$, but the latter is accompanied with the wrong proof sequence $\mathbf { p }_{ k }^{ i }$.
* A malicious news aggregator, say Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$), falsely claims that he received from Alice$_{ i }$ the opposite verification outcome accompanied by a consistent proof sequence.
* An insidious Alice$_{ i }$ deliberately spreads disinformation and confusion by sending opposite verification outcomes $c_{ k }^{ i }$ and $\overline { c_{ k }^{ i } }$ to Bob$_{ k }^{ i }$ and Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$), using consistent proof sequences $\mathbf { p }_{ k }^{ i }$ and $\mathbf { p }_{ k^{ \prime } }^{ i }$ in each case.
The first scenario (S$_{ 1 }$) can be easily detected and countered by the QNVA. The second scenario (S$_{ 2 }$) can also be countered with additional effort. Our algorithm can also deal with the third scenario (S$_{ 3 }$), which reveals the existence of a truly malicious Alice, with some additional inference on the part of Bob. QNVA owes its ability to cope with each one of the above scenarios to the structural properties of the proof sequences. These properties are a direct result of the entanglement distribution scheme explained in Definition <ref>. The next Proposition <ref> lays the groundwork for the subsequent analysis of our verification procedures.
Let us assume that news aggregator Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, has received from his coordinator Alice$_{ i }$, $1 \leq i \leq m$, the verification outcome $c_{ k }^{ i } \in \mathbb { B }$, and the proof sequence $\mathbf { p }_{ k }^{ i }$. If the proof sequence $\mathbf { p }_{ k }^{ i }$ is consistent with $c_{ k }^{ i }$, then it must satisfy the following properties.
\begin{align}
\mathbb { E } \left[ \ \lvert \ P_{ c_{ k }^{ i } } ( \mathbf { p }_{ k }^{ i }, k ) \ \rvert \ \right]
& =
\mathbb { E } \left[ \ \lvert \ P_{ \ast } ( \mathbf { p }_{ k }^{ i } ) \ \rvert \ \right]
=
\frac { d } { 2 }
\ , \ \text{and}
\label{eq: Expected Number of Tuples with Specific Value in Position k}
\\
\mathbb { E } \left[ \ \lvert \ P_{ c_{ k }^{ i }, c_{ k }^{ i } } ( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } ) \ \rvert \ \right]
& =
\mathbb { E } \left[ \ \lvert \ P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } ) \ \rvert \ \right]
=
\frac { d } { 4 }
\ , \ \forall k^{ \prime } \ , \ 1 \leq k^{ \prime } \neq k \leq n_{ i } \ .
\label{eq: Expected Number of Tuples with Specific Values in Positions k & k'}
\end{align}
The proof is quite straightforward because it is based on the entanglement distribution scheme of Definition <ref>, which stipulates that each $n_{ i }$-tuple in the original qubit sequence of Alice$_{ i }$ shares one $\ket{ \Phi^{ + } } = \frac { \ket{ 0 }_{ A } \ket{ 0 }_{ k } + \ket{ 1 }_{ A } \ket{ 1 }_{ k } } { \sqrt{ 2 } }$ pair with every Bob$_{ k }^{ i }$, $k = 1, \dots, n_{ i }$. Therefore, there are $d$ bits in total, namely the bits $a_{ k, l }^{ i }$ that occupy the $k^{ th }$ position in every tuple $l$ of $\mathbf { a }^{ i }$, $1 \leq l \leq d$, that are equal to the corresponding bits $b_{ k, l }^{ i }$ of $\mathbf { b }_{ k }^{ i }$. This is captured by the next formula:
\begin{align}
a_{ k, l }^{ i } = b_{ k, l }^{ i }
\ , \ 1 \leq l \leq d \ .
\label{eq: Alice & Bob's k Equal Bits}
\end{align}
The remaining bits of $\mathbf { b }_{ k }^{ i }$ result from measuring qubits in the $\ket{ + }$ state. Hence, we expect half of them to end up $0$, and the remaining half to end up $1$. More importantly though, these bits are not correlated with the corresponding bits of $\mathbf { a }^{ i }$. Consequently, we can easily draw the following conclusions.
* Measuring a pair of qubits in the $\ket{ \Phi^{ + } }$ state will result in both qubits collapsing to the state $\ket{ 0 }$ with probability $0.5$, or to the state $\ket{ 1 }$ with probability $0.5$. This implies that the expected number of the $a_{ k, l }^{ i }$ and $b_{ k, l }^{ i }$ bits with value $1$ $( 0 )$ is $\frac { d } { 2 }$. Consequently, the expected number of tuples in $\mathbf { a }^{ i }$ (and in $\mathbf { b }_{ k }^{ i }$) in which the bit in the $k^{ th }$ position has value $1$ $( 0 )$ is $\frac { d } { 2 }$. Thus, irrespective of whether the verification outcome $c_{ k }^{ i }$ is $1$ or $0$, the expected number of tuples in $\mathbf { p }_{ k }^{ i }$ in which the bit in the $k^{ th }$ position has value $c_{ k }^{ i }$ is $\frac { d } { 2 }$, which proves property (<ref>). This also means that the expected number of the remaining tuples in $\mathbf { p }_{ k }^{ i }$, which are cryptic tuples according to Definition <ref>, is also $\frac { d } { 2 }$.
* Measuring two pairs of qubits, both in the $\ket{ \Phi^{ + } }$ state, will result in both qubits of the first pair collapsing to the state $\ket{ 0 }$ with probability $0.5$, or to the state $\ket{ 1 }$ with probability $0.5$, and, independently, both qubits of the second pair collapsing to the state $\ket{ 0 }$ with probability $0.5$, or to the state $\ket{ 1 }$ with probability $0.5$. This means that the expected number of the $a_{ k, l }^{ i }$ and $a_{ k^{ \prime }, l }^{ i }$ bits with values “$00$”, “$01$”, “$10$”, and “$11$” is $\frac { d } { 4 }$. Consequently, the expected number of tuples in $\mathbf { a }^{ i }$ in which the bits in positions $k$ and $k^{ \prime }$ contain any one of the aforementioned combinations is $\frac { d } { 4 }$. Thus, irrespective of whether the verification outcome $c_{ k }^{ i }$ is $1$ or $0$, the expected number of tuples in $\mathbf { p }_{ k }^{ i }$ in which the bits in positions $k$ and $k^{ \prime }$ are $c_{ k }^{ i } c_{ k }^{ i }$ or $c_{ k }^{ i } \overline { c_{ k }^{ i } }$ is $\frac { d } { 4 }$, which proves property (<ref>).
This completes the proof of this proposition.
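The proposition can also be checked empirically. The following self-contained simulation (our illustration, not part of the protocol; the reported counts fluctuate around the stated expectations) draws a random $\mathbf { a }^{ i }$, builds a consistent proof, and counts the relevant tuples:

```python
import random

def expected_counts(d=10_000, n_i=3, k=1, kp=2, c=1, seed=1):
    """Empirical check of the d/2 and d/4 expectations."""
    rng = random.Random(seed)
    a = [[rng.randint(0, 1) for _ in range(n_i)] for _ in range(d)]
    # Consistent proof: reveal tuple l iff Alice's bit in position k is c.
    proof = [t if t[k - 1] == c else ['*'] * n_i for t in a]
    n_ck = sum(1 for t in proof if t[k - 1] == c)                      # ~ d/2
    n_ast = sum(1 for t in proof if t[k - 1] == '*')                   # ~ d/2
    n_cc = sum(1 for t in proof if t[k - 1] == c and t[kp - 1] == c)   # ~ d/4
    n_ccbar = sum(1 for t in proof
                  if t[k - 1] == c and t[kp - 1] == 1 - c)             # ~ d/4
    return n_ck, n_ast, n_cc, n_ccbar

print(expected_counts())  # roughly (5000, 5000, 2500, 2500) for d = 10000
```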
The properties outlined in Proposition <ref> are instrumental in the design of the verification tests that are used as subroutines for the QNVA. These tests, which are performed by every news aggregator Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, in order to assess whether or not the coordinator Alice$_{ i }$ and the other news aggregators are honest, are based on the verification outcome $c_{ k }^{ i }$ and the proof sequence $\mathbf { p }_{ k }^{ i }$ that Bob$_{ k }^{ i }$ has received from Alice$_{ i }$.
As we have emphasized, our algorithm can handle all three scenarios mentioned above. For the first scenario (S$_{ 1 }$), the verification test IsAlice'sProofConsistent contained in Figure <ref> can decide whether or not $\mathbf { p }_{ k }^{ i }$ is consistent with $c_{ k }^{ i }$ by checking if it satisfies Proposition <ref>. It relies on the auxiliary test IsProofBalanced shown below. It is essential to point out that in a real implementation of these tests one must take into account the possible imperfections of the quantum channel, and the probabilistic outcome of the measurements. That means that the stringent equality requirement of the expected values as expressed in the propositions should be relaxed and one should instead check for approximate equality $\approx$ or approximate inequality $\napprox$. In the presentation of the pseudocode, we adopt the following conventions.
* $i$, $1 \leq i \leq m$, is the index of Alice$_{ i }$
* $k$, $1 \leq k \leq n_{ i }$, is the index of Bob$_{ k }^{ i }$
* $c_{ k }^{ i }$ is the verification outcome that Alice$_{ i }$ sends to Bob$_{ k }^{ i }$
* $\mathbf { p }_{ k }^{ i }$ is the proof sequence that Alice$_{ i }$ sends to Bob$_{ k }^{ i }$
* $\mathbf { b }_{ k }^{ i }$ is the classical bit sequence of Bob$_{ k }^{ i }$
Auxiliary Test
IsProofBalanced$( i, k, c_{ k }^{ i }, \mathbf { p }_{ k }^{ i } )$
for $r = 1$ to $n_{ i }$, $r \neq k$
$N_{ 1 } \leftarrow \lvert \ P_{ c_{ k }^{ i }, c_{ k }^{ i } } ( \mathbf { p }_{ k }^{ i }, k, r ) \ \rvert$
$N_{ 2 } \leftarrow \lvert \ P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k }^{ i }, k, r ) \ \rvert$
if $( N_{ 1 } \neq \frac { d } { 4 } \ \mathbf{ OR } \ N_{ 2 } \neq \frac { d } { 4 } )$ then return FALSE
end for
return TRUE
This auxiliary algorithm is invoked by Bob$_{ k }^{ i }$ during the main verification tests to ascertain whether property (<ref>) of Proposition <ref> holds.
Verification Test
IsAlice'sProofConsistent$( i, k, c_{ k }^{ i }, \mathbf { p }_{ k }^{ i }, \mathbf { b }_{ k }^{ i } )$
$N \leftarrow \lvert \ P_{ c_{ k }^{ i } } ( \mathbf { p }_{ k }^{ i }, k ) \ \rvert$
if $N \neq \frac { d } { 2 }$ then return FALSE
for each $l \in P_{ c_{ k }^{ i } } ( \mathbf { p }_{ k }^{ i }, k )$
if $( p_{ k, l }^{ i } \neq b_{ k, l }^{ i } )$ then return FALSE
end for
return IsProofBalanced$( i, k, c_{ k }^{ i }, \mathbf { p }_{ k }^{ i } )$
Bob$_{ k }^{ i }$ uses the above algorithm to check if the proof sequence $\mathbf { p }_{ k }^{ i }$ is consistent with the verification outcome $c_{ k }^{ i }$.
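A possible Python rendering of these two tests is sketched below, under the classical model used earlier. Following the remark on channel imperfections, the exact equalities are relaxed to approximate ones; the tolerance parameter `tol` is our own assumption, not something the protocol prescribes.

```python
def approx(n: float, target: float, tol: float = 0.05) -> bool:
    """Relaxed equality |n - target| <= tol * target, as suggested for
    noisy channels and probabilistic measurement outcomes."""
    return abs(n - target) <= tol * target

def is_proof_balanced(p: list, k: int, c: int, d: int, n_i: int) -> bool:
    """Check property (<ref>): ~d/4 tuples for each combination in
    positions k and r, for every r != k."""
    for r in range(1, n_i + 1):
        if r == k:
            continue
        n1 = sum(1 for t in p if t[k - 1] == c and t[r - 1] == c)
        n2 = sum(1 for t in p if t[k - 1] == c and t[r - 1] == 1 - c)
        if not (approx(n1, d / 4) and approx(n2, d / 4)):
            return False
    return True

def is_alices_proof_consistent(p: list, b: list, k: int, c: int,
                               d: int, n_i: int) -> bool:
    """Bob_k's check that the proof p is consistent with the outcome c."""
    revealed = [l for l in range(1, d + 1) if p[l - 1][k - 1] == c]
    if not approx(len(revealed), d / 2):
        return False
    # Every revealed bit in position k must match Bob_k's own measured bit.
    if any(p[l - 1][k - 1] != b[l - 1][k - 1] for l in revealed):
        return False
    return is_proof_balanced(p, k, c, d, n_i)
```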
To cope with the second scenario (S$_{ 2 }$), the verification test IsBob'sProofConsistent contained in Figure <ref> can decide whether or not $\mathbf { p }_{ k }^{ i }$ is consistent with $c_{ k }^{ i }$ by virtue of the results proved in the next proposition.
Suppose that Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, has received from Alice$_{ i }$, $1 \leq i \leq m$, the verification outcome $c_{ k }^{ i } = 1$ $( c_{ k }^{ i } = 0 )$, and the consistent proof sequence $\mathds{ 1 }_{ k }^{ i }$ $( \vmathbb{ 0 }_{ k }^{ i } )$.
Let us further assume that Bob$_{ k }^{ i }$ has also received from Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$) the opposite verification outcome $c_{ k^{ \prime } }^{ i } = 0$ $( c_{ k^{ \prime } }^{ i } = 1 )$ and the sequence $\vmathbb{ 0 }_{ k^{ \prime } }^{ i }$ $( \mathds{ 1 }_{ k^{ \prime } }^{ i } )$ as proof. If $\vmathbb{ 0 }_{ k^{ \prime } }^{ i }$ $( \mathds{ 1 }_{ k^{ \prime } }^{ i } )$ is consistent with $0$ $( 1 )$, then, in addition to the properties listed in Proposition <ref>, it must also satisfy the following property.
\begin{align}
\left\{
\
\begin{matrix*}[l]
P_{ 1, 0 }
( \mathds{ 1 }_{ k }^{ i }, k, k^{ \prime } )
=
P_{ 1, 0 }
( \vmathbb{ 0 }_{ k^{ \prime } }^{ i }, k, k^{ \prime } )
\\
\\
P_{ 0, 1 }
( \vmathbb{ 0 }_{ k }^{ i }, k, k^{ \prime } )
=
P_{ 0, 1 }
( \mathds{ 1 }_{ k^{ \prime } }^{ i }, k, k^{ \prime } )
\end{matrix*}
\
\right\}
\label{eq: The Explicit Equality of Tuples with Opposite Values in Positions k & k'}
\end{align}
Without loss of generality we consider the situation that has evolved as follows.
* Initially, Bob$_{ k }^{ i }$ received from Alice$_{ i }$ the verification outcome $c_{ k }^{ i } = 1$ and the consistent proof sequence $\mathds{ 1 }_{ k }^{ i }$, and
* subsequently, Bob$_{ k }^{ i }$ received from Bob$_{ k^{ \prime } }^{ i }$ the opposite verification outcome $c_{ k^{ \prime } }^{ i } = 0$ and the sequence $\vmathbb{ 0 }_{ k^{ \prime } }^{ i }$ as proof.
We shall prove that if $\vmathbb{ 0 }_{ k^{ \prime } }^{ i }$ is consistent with the outcome $0$, then, in addition to the properties listed in Proposition <ref>, it must also satisfy the properties outlined above.
The proof is an immediate consequence of the manner in which proof sequences are constructed. If we recall Definition <ref>, we see that the proof sequence $\mathds{ 1 }_{ k }^{ i }$ $( \vmathbb{ 0 }_{ k }^{ i } )$, which is consistent with the verification outcome $c_{ k }^{ i } = 1$ $( c_{ k }^{ i } = 0 )$, contains all the $n_{ i }$-tuples of Alice$_{ i }$'s bit sequence $\mathbf { a }^{ i }$ in which the bit in the $k^{ th }$ position has value $1$ $( 0 )$, including all those in which the bit in position $k^{ \prime }$ has the value $1$, and all those in which the bit in position $k^{ \prime }$ has the value $0$.
Symmetrically, if the proof sequence $\vmathbb{ 0 }_{ k^{ \prime } }^{ i }$ $( \mathds{ 1 }_{ k^{ \prime } }^{ i } )$ is consistent with the opposite verification outcome $c_{ k^{ \prime } }^{ i } = 0$ $( c_{ k^{ \prime } }^{ i } = 1 )$, then it must contain all the $n_{ i }$-tuples of $\mathbf { a }^{ i }$ in which the bit in position $k^{ \prime }$ has value $0$ $( 1 )$, including all those in which the bit in position $k$ has the value $0$, and all those in which the bit in position $k$ has the opposite value $1$.
Therefore, if both proof sequences $\mathds{ 1 }_{ k }^{ i }$ and $\vmathbb{ 0 }_{ k^{ \prime } }^{ i }$ ($\vmathbb{ 0 }_{ k }^{ i }$ and $\mathds{ 1 }_{ k^{ \prime } }^{ i }$) are consistent with the verification checks $c_{ k }^{ i } = 1$ and $c_{ k^{ \prime } }^{ i } = 0$ ($c_{ k }^{ i } = 0$ and $c_{ k^{ \prime } }^{ i } = 1$), respectively, then they must contain all the $n_{ i }$-tuples of $\mathbf { a }^{ i }$ in which the bit in the $k^{ th }$ position has the value $1$ ($0$) and the bit in position $k^{ \prime }$ has the opposite value $0$ ($1$). Formally, we can express this fact as
\begin{align}
\left\{
\
\begin{matrix*}[l]
P_{ 1, 0 }
( \mathds{ 1 }_{ k }^{ i }, k, k^{ \prime } )
=
P_{ 1, 0 }
( \vmathbb{ 0 }_{ k^{ \prime } }^{ i }, k, k^{ \prime } )
\\
\\
P_{ 0, 1 }
( \vmathbb{ 0 }_{ k }^{ i }, k, k^{ \prime } )
=
P_{ 0, 1 }
( \mathds{ 1 }_{ k^{ \prime } }^{ i }, k, k^{ \prime } )
\end{matrix*}
\
\right\}
\ ,
\label{eq: Explicit Consistent Proofs Contain the Same Tuples with Opposite Values in Positions k & k'}
\end{align}
which concludes this proof.
The opposite case is completely symmetrical and will be omitted.
The previous proposition can be cast into its most general form as the following Corollary.
Let us assume that Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, has received
$\Diamond$ from Alice$_{ i }$, $1 \leq i \leq m$, the verification outcome $c_{ k }^{ i }$ and the sequence $\mathbf { p }_{ k }^{ i }$ as proof, and
$\Diamond$ from Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$) the opposite verification outcome $c_{ k^{ \prime } }^{ i } = \overline { c_{ k }^{ i } }$ and the sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ as proof.
Then, if both $\mathbf { p }_{ k }^{ i }$ and $\mathbf { p }_{ k^{ \prime } }^{ i }$ are consistent with $c_{ k }^{ i }$ and $\overline { c_{ k }^{ i } }$, respectively, they must also satisfy the following property.
\begin{align}
P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } }
( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } )
=
P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } }
( \mathbf { p }_{ k^{ \prime } }^{ i }, k, k^{ \prime } )
\label{eq: The Equality of Tuples with Opposite Values in Positions k & k'}
\end{align}
If we recall Definition <ref> again, we see that if the proof sequence $\mathbf { p }_{ k }^{ i }$ is consistent with $c_{ k }^{ i }$, then it contains all the $n_{ i }$-tuples of Alice$_{ i }$'s bit sequence $\mathbf { a }^{ i }$ in which the bit in the $k^{ th }$ position has value $c_{ k }^{ i }$, including all those in which the bit in position $k^{ \prime }$ has the same value $c_{ k }^{ i }$, and all those in which the bit in position $k^{ \prime }$ has the opposite value $\overline { c_{ k }^{ i } }$. Symmetrically, if the proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ is consistent with $\overline { c_{ k }^{ i } }$, then it contains all the $n_{ i }$-tuples of $\mathbf { a }^{ i }$ in which the bit in position $k^{ \prime }$ has value $\overline { c_{ k }^{ i } }$, including all those in which the bit in position $k$ has the same value $\overline { c_{ k }^{ i } }$, and all those in which the bit in position $k$ has the opposite value $c_{ k }^{ i }$. Therefore, if they are consistent, both proof sequences $\mathbf { p }_{ k }^{ i }$ and $\mathbf { p }_{ k^{ \prime } }^{ i }$ contain all the $n_{ i }$-tuples of $\mathbf { a }^{ i }$ in which the bit in the $k^{ th }$ position has value $c_{ k }^{ i }$ and the bit in position $k^{ \prime }$ has the opposite value $\overline { c_{ k }^{ i } }$. This is simply written as
\begin{align}
P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } }
( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } )
=
P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } }
( \mathbf { p }_{ k^{ \prime } }^{ i }, k, k^{ \prime } )
\ ,
\label{eq: Consistent Proofs Conatian the Same Tuples with Opposite Values in Positions k & k'}
\end{align}
which completes the proof of this corollary.
To sum up, the property expressed by relation (<ref>) asserts that if two proof sequences $\mathbf { p }_{ k }^{ i }$ and $\mathbf { p }_{ k^{ \prime } }^{ i }$ that correspond to opposite outcomes $c_{ k }^{ i }$ and $\overline { c_{ k }^{ i } }$ are both consistent, then they must contain precisely the same tuples of $\mathbf { a }^{ i }$ in which the bit in the $k^{ th }$ position is $c_{ k }^{ i }$ and the bit in position $k^{ \prime }$ is $\overline { c_{ k }^{ i } }$. This property can be employed by Bob$_{ k }^{ i }$ to detect whether Bob$_{ k^{ \prime } }^{ i }$ deliberately spreads misinformation, as formalized by the next Theorem <ref>.
Suppose that Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, has received from Alice$_{ i }$, $1 \leq i \leq m$, the verification outcome $c_{ k }^{ i }$, and the consistent proof sequence $\mathbf { p }_{ k }^{ i }$.
Any attempt by another news aggregator Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$) to falsely claim that he received $\overline { c_{ k }^{ i } }$ from Alice$_{ i }$, despite the fact that in reality he received $c_{ k }^{ i }$, and forge a proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ consistent with $\overline { c_{ k }^{ i } }$ will be detected by Bob$_{ k }^{ i }$.
The present situation concerns how a malicious Bob$_{ k^{ \prime } }^{ i }$ may try to deceive Bob$_{ k }^{ i }$ ($k^{ \prime } \neq k$). Bob$_{ k^{ \prime } }^{ i }$ has received from Alice$_{ i }$ the verification outcome $c_{ k }^{ i }$ together with a proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ consistent with $c_{ k }^{ i }$. Nevertheless, Bob$_{ k^{ \prime } }^{ i }$ intends to falsely claim that he has received $\overline { c_{ k }^{ i } }$. The question is: can Bob$_{ k^{ \prime } }^{ i }$ construct a convincing proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ consistent with $\overline { c_{ k }^{ i } }$? We proceed to show that this is probabilistically impossible.
Having received and validated $\mathbf { p }_{ k }^{ i }$, Bob$_{ k }^{ i }$ knows the set $P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } )$ of the positions of all the $n_{ i }$-tuples of $\mathbf { a }^{ i }$ that contain $c_{ k }^{ i }$ and $\overline { c_{ k }^{ i } }$ in positions $k$ and $k^{ \prime }$ respectively.
On the other hand, Bob$_{ k^{ \prime } }^{ i }$ has received the proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ that is also consistent with $c_{ k }^{ i }$. Accordingly, Bob$_{ k^{ \prime } }^{ i }$ knows the following two facts.
* The indices of the $n_{ i }$-tuples of $\mathbf { a }^{ i }$ that contain $c_{ k }^{ i }$ in position $k^{ \prime }$, which includes those that also contain $c_{ k }^{ i }$ in position $k$, and those that also contain $\overline { c_{ k }^{ i } }$ in position $k$, i.e., the set
\begin{align}
P_{ c_{ k }^{ i } }
( \mathbf { p }_{ k^{ \prime } }^{ i }, k^{ \prime } )
=
P_{ c_{ k }^{ i }, c_{ k }^{ i } }
( \mathbf { p }_{ k^{ \prime } }^{ i }, k^{ \prime }, k )
\cup
P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } }
( \mathbf { p }_{ k^{ \prime } }^{ i }, k^{ \prime }, k )
\ .
\end{align}
* The indices of the cryptic tuples $\mathbf { s }_{ \ast }$ of $\mathbf { a }^{ i }$, i.e., the set $P_{ \ast } ( \mathbf { p }_{ k^{ \prime } }^{ i } )$.
By knowing the indices of the cryptic tuples, Bob$_{ k^{ \prime } }^{ i }$ is able to infer with certainty, i.e., with probability $1$, that these indices correspond to $n_{ i }$-tuples of $\mathbf { a }^{ i }$ that contain $\overline { c_{ k }^{ i } }$ in position $k^{ \prime }$. In his effort to forge a proof sequence consistent with $\overline { c_{ k }^{ i } }$, Bob$_{ k^{ \prime } }^{ i }$ will correctly place all the tuples of $\mathbf { a }^{ i }$ that contain $\overline { c_{ k }^{ i } }$ in position $k^{ \prime }$. According to Proposition <ref>, their expected number is $\frac { d } { 2 }$, so in reality they would be $\approx \frac { d } { 2 }$. Hence, Bob$_{ k^{ \prime } }^{ i }$ will avoid trivial mistakes, such as
* including a tuple where the bit in the $k^{ \prime }$ position has the wrong value, or
* using fewer than expected tuples with $\overline { c_{ k }^{ i } }$ in position $k^{ \prime }$.
Bob$_{ k^{ \prime } }^{ i }$'s real weakness stems from the fact that each tuple he must include in his forged proof sequence contains in the $k^{ th }$ position either $c_{ k }^{ i }$, with probability $0.5$, or $\overline { c_{ k }^{ i } }$, with equal probability $0.5$; Bob$_{ k^{ \prime } }^{ i }$ does not know with certainty, even for a single tuple, whether it contains $c_{ k }^{ i }$ or $\overline { c_{ k }^{ i } }$ in position $k$. Therefore, when forging a proof sequence consistent with $\overline { c_{ k }^{ i } }$, Bob$_{ k^{ \prime } }^{ i }$ has to guess for every tuple whether to place $c_{ k }^{ i }$ or $\overline { c_{ k }^{ i } }$ in position $k$. Thus, he is prone to make two types of mistakes.
* Place $c_{ k }^{ i }$ in the $k^{ th }$ position of a wrong $n_{ i }$-tuple not contained in $P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } )$.
* Place $\overline { c_{ k }^{ i } }$ in the $k^{ th }$ position of a wrong $n_{ i }$-tuple that does appear in $P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } )$.
In other words, the question now becomes: how probable is it for Bob$_{ k^{ \prime } }^{ i }$ to construct a proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ such that the set $P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k^{ \prime } }^{ i }, k, k^{ \prime } )$ is equal to the set $P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } )$?
The probability that Bob$_{ k^{ \prime } }^{ i }$ succeeds in doing so equals the probability of picking the one correct configuration out of many. The total number of configurations is the number of ways to choose which $\approx \frac { d } { 4 }$ of the $\approx \frac { d } { 2 }$ cryptic $n_{ i }$-tuples receive the value $c_{ k }^{ i }$ in position $k$. Hence, the probability that Bob$_{ k^{ \prime } }^{ i }$ places all the $\approx \frac { d } { 4 }$ values $c_{ k }^{ i }$ correctly in the $\approx \frac { d } { 2 }$ cryptic $n_{ i }$-tuples is
\begin{align}
\Pr
\left(
\text{Bob$_{ k^{ \prime } }^{ i }$ places all $c_{ k }^{ i }$ correctly}
\right)
\approx
\frac { 1 }
{ \binom { \ d / 2 \ } { \ d / 4 \ } }
\ ,
\label{eq: Probability Bob k' Deceives Bob k}
\end{align}
which is practically zero for appropriately chosen values of $d$. Thus, the end result will violate property (<ref>) of Corollary <ref>.
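For a concrete feel of how fast this probability vanishes, one can evaluate the bound with Python's standard `math.comb`; for instance, $d = 64$ already gives $1 / \binom{32}{16} = 1 / 601{,}080{,}390 \approx 1.7 \times 10^{-9}$.

```python
from math import comb

# Forging probability ~ 1 / C(d/2, d/4) for a few sample accuracy degrees d.
for d in (16, 32, 64, 128):
    print(d, 1 / comb(d // 2, d // 4))
# d = 64 gives 1 / comb(32, 16) = 1 / 601080390, about 1.7e-9.
```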
Ergo, when Bob$_{ k }^{ i }$ checks the consistency of the proof sequence sent by Bob$_{ k^{ \prime } }^{ i }$ he will easily detect inconsistencies and infer that Bob$_{ k^{ \prime } }^{ i }$ deliberately spreads disinformation.
So, to cope with the second scenario (S$_{ 2 }$), one may rely on the verification test IsBob'sProofConsistent shown in Figure <ref>, which can decide whether or not $\mathbf { p }_{ k^{ \prime } }^{ i }$ is consistent with $\overline { c_{ k }^{ i } }$, by checking if it satisfies property (<ref>) of Corollary <ref>.
We again note that in a real implementation of the next test we must take into account the possible imperfections of the quantum channel, and the probabilistic outcome of the measurements, which implies that the strict inequality requirement should be relaxed and we should test for approximate inequality $\napprox$. In the pseudocode, we use the following conventions.
* $i$, $1 \leq i \leq m$, is the index of Alice$_{ i }$
* $k$, $1 \leq k \leq n_{ i }$, is the index of Bob$_{ k }^{ i }$
* $k^{ \prime }$, $1 \leq k^{ \prime } \neq k \leq n_{ i }$, is the index of Bob$_{ k^{ \prime } }^{ i }$
* $c_{ k }^{ i }$ is the verification outcome that Alice$_{ i }$ has sent to Bob$_{ k }^{ i }$
* $\mathbf { p }_{ k }^{ i }$ is the proof sequence that Alice$_{ i }$ has sent to Bob$_{ k }^{ i }$
* $\overline { c_{ k }^{ i } }$ is the verification outcome that Bob$_{ k^{ \prime } }^{ i }$ claims he received from Alice$_{ i }$
* $\mathbf { p }_{ k^{ \prime } }^{ i }$ is the proof sequence that Bob$_{ k^{ \prime } }^{ i }$ claims he received from Alice$_{ i }$
Verification Test
IsBob'sProofConsistent$( i, k, k^{ \prime }, c_{ k }^{ i }, \mathbf { p }_{ k }^{ i }, \overline { c_{ k }^{ i } }, \mathbf { p }_{ k^{ \prime } }^{ i } )$
$N \leftarrow \lvert \ P_{ \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k^{ \prime } }^{ i }, k^{ \prime } ) \ \rvert$
if $N \neq \frac { d } { 2 }$ then return FALSE
if $P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k }^{ i }, k, k^{ \prime } ) \neq P_{ c_{ k }^{ i }, \overline { c_{ k }^{ i } } } ( \mathbf { p }_{ k^{ \prime } }^{ i }, k, k^{ \prime } )$ then return FALSE
return IsProofBalanced$( i, k^{ \prime }, \overline { c_{ k }^{ i } }, \mathbf { p }_{ k^{ \prime } }^{ i } )$
Bob$_{ k }^{ i }$ uses the above algorithm to check if $\mathbf { p }_{ k^{ \prime } }^{ i }$ is consistent with $\overline { c_{ k }^{ i } }$ that Bob$_{ k^{ \prime } }^{ i }$ claims to have received from Alice$_{ i }$.
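Continuing the illustrative Python rendering under the same conventions and tolerance assumption, Bob$_{ k }^{ i }$'s check of a rival claim might look as follows (the final balance check mirrors the auxiliary test above):

```python
def approx(n: float, target: float, tol: float = 0.05) -> bool:
    """Relaxed equality, as before."""
    return abs(n - target) <= tol * target

def is_proof_balanced(p: list, k: int, c: int, d: int, n_i: int) -> bool:
    """~d/4 tuples for each bit combination in positions k and r, r != k."""
    for r in range(1, n_i + 1):
        if r == k:
            continue
        n1 = sum(1 for t in p if t[k - 1] == c and t[r - 1] == c)
        n2 = sum(1 for t in p if t[k - 1] == c and t[r - 1] == 1 - c)
        if not (approx(n1, d / 4) and approx(n2, d / 4)):
            return False
    return True

def is_bobs_proof_consistent(p_k: list, p_kp: list, k: int, kp: int,
                             c: int, d: int, n_i: int) -> bool:
    """Bob_k checks the rival claim of Bob_k' (outcome 1 - c, proof p_kp)."""
    c_bar = 1 - c
    # Bob_k' must reveal about d/2 tuples with c_bar in position k'.
    if not approx(sum(1 for t in p_kp if t[kp - 1] == c_bar), d / 2):
        return False
    # Property (<ref>): both proofs must expose exactly the same tuples
    # carrying c in position k and c_bar in position k'.
    own = {l for l in range(1, d + 1)
           if p_k[l - 1][k - 1] == c and p_k[l - 1][kp - 1] == c_bar}
    rival = {l for l in range(1, d + 1)
             if p_kp[l - 1][k - 1] == c and p_kp[l - 1][kp - 1] == c_bar}
    if own != rival:
        return False
    return is_proof_balanced(p_kp, kp, c_bar, d, n_i)
```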
By combining both verification checks, it is possible to detect an insidious Alice$_{ i }$ who deliberately spreads disinformation and confusion by sending opposite verification outcomes $c_{ k }^{ i }$ and $\overline { c_{ k }^{ i } }$ to Bob$_{ k }^{ i }$ and Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$), using the correct proof sequences $\mathbf { p }_{ k }^{ i }$ and $\mathbf { p }_{ k^{ \prime } }^{ i }$ in each case. This is analyzed in the following Theorem <ref>.
Suppose that Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, has received from Alice$_{ i }$, $1 \leq i \leq m$, the verification outcome $c_{ k }^{ i }$ and the consistent proof sequence $\mathbf { p }_{ k }^{ i }$.
Bob$_{ k }^{ i }$ infers that Alice$_{ i }$ is a malicious actor who deliberately spreads disinformation if he also receives a proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ consistent with the opposite outcome $\overline { c_{ k }^{ i } }$ from another news aggregator Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$).
The present situation examines how a news aggregator can uncover an insidious news verifier Alice$_{ i }$ who deliberately spreads disinformation and confusion by sending the verification outcome $c_{ k }^{ i }$ to Bob$_{ k }^{ i }$ and, at the same time, sending the opposite verification outcome $\overline { c_{ k }^{ i } }$ to Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$), using consistent proof sequences $\mathbf { p }_{ k }^{ i }$ and $\mathbf { p }_{ k^{ \prime } }^{ i }$ in each case.
According to Theorem <ref>, the probability that another news aggregator Bob$_{ k^{ \prime } }^{ i }$ ($k^{ \prime } \neq k$) will manage to construct on his own a proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ consistent with $\overline { c_{ k }^{ i } }$ is negligible. Hence, if the verification test IsBob'sProofConsistent shown in Figure <ref> returns TRUE, the logical conclusion is that Alice$_{ i }$ herself must have sent the consistent proof sequence $\mathbf { p }_{ k^{ \prime } }^{ i }$ to Bob$_{ k^{ \prime } }^{ i }$.
Thus, Alice$_{ i }$ deliberately sends contradictory verification outcomes to create confusion and spread disinformation.
At this point, considering all the previous analysis, we present the proposed quantum news verification algorithm (QNVA) below. For every piece of news that must be checked, the QNVA is employed by each news aggregator Bob$_{ k }^{ i }$, $1 \leq k \leq n_{ i }$, independently and in parallel with every other news aggregator. In the presentation, we use the following notation.
* $i$, $1 \leq i \leq m$, is the index of Alice$_{ i }$
* $k$, $1 \leq k \leq n_{ i }$, is the index of Bob$_{ k }^{ i }$
* QNVA$( k )$ is the instance of QNVA executed by Bob$_{ k }^{ i }$
* $M_{ A }$ and $M_{ V }$ are the lists of malicious news aggregators and news verifiers, respectively, as surmised by Bob$_{ k }^{ i }$. The purpose of the reputation lists is to identify insidious agents and ignore any further communication originating from them.
* $k^{ \prime }$, $1 \leq k^{ \prime } \neq k \leq n_{ i }$, is the index of Bob$_{ k^{ \prime } }^{ i }$
* $c_{ k }^{ i }$ is the verification outcome that Alice$_{ i }$ has sent to Bob$_{ k }^{ i }$
* $\mathbf { p }_{ k }^{ i }$ is the proof sequence that Alice$_{ i }$ has sent to Bob$_{ k }^{ i }$
* $c_{ k^{ \prime } }^{ i }$ is the verification outcome that Bob$_{ k^{ \prime } }^{ i }$ claims he received from Alice$_{ i }$
* $\mathbf { p }_{ k^{ \prime } }^{ i }$ is the proof sequence that Bob$_{ k^{ \prime } }^{ i }$ claims he received from Alice$_{ i }$
QNVA$( k )$
* Initialize $\triangleright$ $M_{ A } = M_{ V } = \emptyset$
* Receive $\triangleright$ Bob$_{ k }^{ i }$ receives Alice$_{ i }$'s verification outcome $c_{ k }^{ i }$ and proof $\mathbf { p }_{ k }^{ i }$.
* Test $\triangleright$ Bob$_{ k }^{ i }$ calls the verification test IsAlice'sProofConsistent (Figure <ref>) to check whether $\mathbf { p }_{ k }^{ i }$ is consistent with $c_{ k }^{ i }$.
$\star$ If the test returns TRUE, then Bob$_{ k }^{ i }$ accepts Alice$_{ i }$'s assessment.
$\star$ If the test returns FALSE, then Bob$_{ k }^{ i }$ rejects the news in question as fake, adds Alice$_{ i }$ to his $M_{ V }$ list, and terminates the algorithm.
* Send $\triangleright$ Upon the successful completion of the previous verification check, Bob$_{ k }^{ i }$ sends to every other Bob$_{ k^{ \prime } }^{ i }$ ($1 \leq k^{ \prime } \neq k \leq n_{ i }$) not contained in his $M_{ A }$ list the verification outcome $c_{ k }^{ i }$ and the accompanying proof $\mathbf { p }_{ k }^{ i }$ received from Alice$_{ i }$.
* Receive $\triangleright$ Bob$_{ k }^{ i }$ receives from every other Bob$_{ k^{ \prime } }^{ i }$ ($1 \leq k^{ \prime } \neq k \leq n_{ i }$) not contained in his $M_{ A }$ list the verification outcome $c_{ k^{ \prime } }^{ i }$ and proof $\mathbf { p }_{ k^{ \prime } }^{ i }$ that Bob$_{ k^{ \prime } }^{ i }$ claims he received from Alice$_{ i }$.
* Compare $\triangleright$ Bob$_{ k }^{ i }$ compares his $c_{ k }^{ i }$ to all other $c_{ k^{ \prime } }^{ i }$.
$\star$ If all $c_{ k^{ \prime } }^{ i }$ coincide with $c_{ k }^{ i }$, then Bob$_{ k }^{ i }$ sticks to his preliminary decision, and terminates the algorithm.
$\star$ If there is at least one $c_{ k^{ \prime } }^{ i }$ such that $c_{ k^{ \prime } }^{ i } = \overline { c_{ k }^{ i } }$, Bob$_{ k }^{ i }$ calls the verification test IsBob'sProofConsistent (Figure <ref>) to check whether $\mathbf { p }_{ k^{ \prime } }^{ i }$ is consistent with $\overline { c_{ k }^{ i } }$.
$\square$ If the test returns FALSE, then Bob$_{ k }^{ i }$ adds Bob$_{ k^{ \prime } }^{ i }$ to his $M_{ A }$ list, and repeats the same procedure for the next opposite verification outcome, if any.
$\square$ If the test returns TRUE, then Bob$_{ k }^{ i }$ rejects the news in question as fake, adds Alice$_{ i }$ to his $M_{ V }$ list, and terminates the algorithm.
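As a compact summary of the steps above, a minimal control-flow sketch of QNVA$(k)$ follows. The messaging primitives (recv_alice, send_bob, recv_bob), the callables standing in for the two verification tests, and the encoding of $c_{k}^{i}=1$ as an authentic verdict are all assumptions about the surrounding infrastructure.

```python
def qnva(k, i, peers, d, recv_alice, send_bob, recv_bob, alice_ok, bob_ok):
    """Control-flow sketch of QNVA(k) as run by Bob_k^i."""
    M_A, M_V = set(), set()              # reputation lists, initially empty
    c_k, p_k = recv_alice(i, k)          # Alice_i's outcome and proof
    if not alice_ok(i, k, c_k, p_k):     # IsAlice'sProofConsistent
        M_V.add(i)
        return "fake", M_A, M_V
    for kp in peers:                     # broadcast own outcome and proof
        if kp not in M_A:
            send_bob(kp, c_k, p_k)
    for kp in peers:
        if kp in M_A:
            continue
        c_kp, p_kp = recv_bob(kp)
        if c_kp != c_k:                  # opposite verification outcome
            if bob_ok(i, k, kp, c_k, p_k, c_kp, p_kp, d):  # IsBob'sProofConsistent
                M_V.add(i)               # Alice_i sent contradictory outcomes
                return "fake", M_A, M_V
            M_A.add(kp)                  # Bob_kp fabricated his proof
    return ("authentic" if c_k == 1 else "fake"), M_A, M_V  # assumed encoding
```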
In real life, the existence of opposite, conflicting verification outcomes increases the odds of confusion and the unchecked spread of misinformation. The quantum news verification algorithm, by taking advantage of the phenomenon of entanglement and its unique ramifications, can eliminate the risks in certain critical situations, such as those outlined in the preceding scenarios (S$_{ 1 }$) – (S$_{ 3 }$).
§ DISCUSSION AND CONCLUSIONS
In the era of social media, the proliferation of fake news has emerged as a pressing issue. Particularly in economically developed countries, users tend to encounter more false information than accurate content. The impact of fake news on major social media platforms extends beyond the digital realm, influencing people’s opinions and actions in the real world. Researchers have been driven to seek practical solutions to address this undesirable situation.
This research paper introduces a fresh perspective on the critical topic of news verification. Departing from the conventional Quantum Machine Learning approach, our approach explores an alternative quantum avenue. Drawing inspiration from successful quantum protocols that achieve distributed and detectable Byzantine Agreement in massively distributed environments, we propose the entanglement-based quantum algorithm QNVA.
The QNVA offers several advantages:
* Generality: It can handle any number of news aggregators and verifiers.
* Efficiency: The algorithm completes in a constant number of steps, regardless of the participant count.
* Simplicity: It relies solely on EPR (specifically $\ket{ \Phi^{ + } }$) pairs. EPR pairs are the easiest maximally entangled states to produce, unlike more complex states such as $\ket{ GHZ_{ n } }$, which do not scale well as the number of players increases.
The aforementioned attributes underscore its scalability and practical applicability. To reinforce this assertion, we examine in Table <ref> how the chosen value of the accuracy degree $d$ influences the likelihood of a malicious aggregator successfully fabricating a consistent proof sequence. Notably, the accuracy degree $d$ remains independent of the number of participants, further enhancing the algorithm's scalability. Naturally, selecting an appropriate value for $d$ is crucial to ensure a negligible probability of a malicious actor successfully forging a consistent proof sequence. The rationale behind $d$ not scaling with the number of aggregators and verifiers lies in the protocol's consistent utilization of EPR pairs, i.e., bipartite entanglement. As per the protocol, each consistency check involves a comparison between two bit vectors. Consequently, irrespective of the participant count, each comparison entails only two bit strings. Furthermore, even in the most general scenario, this comparison involves just two bits, denoted as $i$ and $j$, in each tuple. Thus, the probabilistic situation is unchanged as the system scales. In essence, the probability of a malicious aggregator deceiving an honest aggregator hinges on the likelihood of selecting the correct configuration from many possibilities. The total number of configurations equals the number of ways to distribute approximately $\frac{d}{4}$ identical objects (either $0$ or $1$) into approximately $\frac{d}{2}$ distinguishable boxes (representing the uncertain tuples). The probability of a cheater correctly placing all of the approximately $\frac{d}{4}$ bits within the approximately $\frac{d}{2}$ cryptic tuples is
\begin{align}
P( \text{ malicious aggregator cheats } )
\approx
\frac { 1 }
{ \binom { \ d / 2 \ } { \ d / 4 \ } }
\ , \label{eq: Malicious Aggregator Cheats}
\end{align}
which tends to zero as $d$ increases.
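For a quick numerical check of this estimate, a few lines of Python suffice; the values reproduce those tabulated below.

```python
from math import comb

def cheat_probability(d):
    """P(malicious aggregator cheats) ~ 1 / C(d/2, d/4)."""
    return 1 / comb(d // 2, d // 4)

for d in (4, 8, 16, 32, 64):
    print(f"d = {d:2d}: P = {cheat_probability(d):.3e}")
# d = 64 already gives P of order 1e-9, negligible in practice.
```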
This table shows how the chosen value of the degree of accuracy $d$ affects the probability that a malicious aggregator succeeds in forging a consistent proof sequence.
How the degree of accuracy $d$ affects the probability $P$

$d$ | $d/2$ | $d/4$ | $P( \text{ malicious aggregator cheats } )$
$4$ | $2$ | $1$ | $1/\binom{2}{1} = 5.0 \times 10^{-1}$
$8$ | $4$ | $2$ | $1/\binom{4}{2} \approx 1.67 \times 10^{-1}$
$16$ | $8$ | $4$ | $1/\binom{8}{4} \approx 1.43 \times 10^{-2}$
$32$ | $16$ | $8$ | $1/\binom{16}{8} \approx 7.77 \times 10^{-5}$
$64$ | $32$ | $16$ | $1/\binom{32}{16} \approx 1.66 \times 10^{-9}$
|
# Direct observation of the exchange anisotropy in the helimagnetic insulator
Cu2OSeO3
Priya R. Baral Crystal
Growth Facility, Institute of Physics, École Polytechnique Fédérale de
Lausanne (EPFL), CH-1015 Lausanne, Switzerland Chair of Computational
Condensed Matter Physics, Institute of Physics, École Polytechnique Fédérale
de Lausanne (EPFL), CH-1015 Lausanne, Switzerland Laboratory for Neutron
Scattering and Imaging (LNS), Paul Scherrer Institute (PSI), CH-5232 Villigen,
Switzerland Oleg I. Utesov Center for Theoretical Physics of Complex
Systems, Institute for Basic Science, Daejeon 34126, Republic of Korea Chen
Luo Helmholtz-Zentrum Berlin für Materialien und Energie, D-14109 Berlin,
Germany Florin Radu Helmholtz-Zentrum Berlin für Materialien und Energie,
D-14109 Berlin, Germany Arnaud Magrez Crystal Growth Facility, Institute of
Physics, École Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne,
Switzerland Jonathan S. White Laboratory for Neutron Scattering and Imaging
(LNS), Paul Scherrer Institute (PSI), CH-5232 Villigen, Switzerland Victor
Ukleev Helmholtz-Zentrum Berlin für Materialien und Energie, D-14109 Berlin,
Germany Swiss Light Source, Paul Scherrer Institute (PSI), CH-5232 Villigen,
Switzerland
###### Abstract
The helical magnetic structures of cubic chiral systems are well-explained by
the competition among Heisenberg exchange, Dzyaloshinskii-Moriya interaction,
cubic anisotropy, and anisotropic exchange interaction (AEI). Recently, the
role of the latter has been argued theoretically to be crucial for the low-
temperature phase diagram of the cubic chiral magnet Cu2OSeO3, which features
tilted conical and disordered skyrmion states for a specific orientation of
the applied magnetic field ($\mu_{0}\vec{\mathrm{H}}\parallel[001]$). In this
study, we exploit transmission resonant x-ray scattering ($t-$REXS) in vector
magnetic fields to directly quantify the strength of the AEI in Cu2OSeO3, and
measure its temperature dependence. We find that the AEI continuously
increases below 50 K, resulting in a conical spiral pitch variation of $10\%$
in the (001) plane. Our results contribute to establishing the interaction
space that supports tilted cone and low-temperature skyrmion state formation,
facilitating the goals for both a quantitative description and eventual design
of the diverse spiral states existing amongst chiral magnets.
exchange anisotropy, helimagnetism, chiral magnets, skyrmions
In recent years, skyrmions in magnetic materials have attracted significant
interest due to their potential spintronic functionalities that promise a
paradigm shift in magnetic random access memory, data storage technologies,
energy saving, and unconventional computing [1, 2, 3]. Skyrmions are typically
found in thin films with asymmetric interfaces [4] and bulk noncentrosymmetric
crystals, such as chiral and polar helimagnets [5, 6].
The ground-state helical magnetic structures of cubic chiral systems are well-
described by the Bak-Jensen model, which considers the interplay between
Heisenberg exchange interaction, Dzyaloshinskii-Moriya interaction (DMI),
anisotropic exchange interaction (AEI), and cubic anisotropy (CA) [7, 8, 9].
The orientation of the helix axis is determined by a subtle interplay among
DMI, AEI, and CA. The AEI has been broadly neglected due to its weak impact on
experimental observations. However, both cubic and exchange anisotropies play
a crucial role in determining the propagation direction of the helix [9], and
ultimately, the orientation of any field-induced skyrmion lattice (SkL) in
these materials [10, 11, 12]. Moreover, in centrosymmetric materials the
competition between AEI and single-ion anisotropy can stabilize a SkL even
without DMI [13].
Figure 1: (a) Illustration of the spiral modulation vector $Q$ dependence on
azimuthal angle in (110) plane for positive and negative signs of the exchange
anisotropy constant, $F_{\textrm{AEI}}$. (b) Sketch of the geometry of the
$t-$REXS experiment at VEKMAG [14]. The magnetic field was vectorially varied
in $x-y$ plane. (c) $t-$REXS patterns measured in conical states for different
azimuthal angles, $\psi=3^{\circ},72^{\circ},123^{\circ}$ at $T=14$ K. Sum of
the $t-$REXS patterns over all measured $\psi$ angles from $0^{\circ}$ to $180^{\circ}$ at (d) 14
K, (e) 25 K and (f) 50 K. Figure 2: Polar plots of the extracted conical
spiral wavevector $Q$ as a function of angle $\psi$ at (a) 14 K, (b) 20 K, (c)
25 K, (d) 30 K (e) 40 K and (f) 50 K. Solid lines correspond to the fit
according to the Eq. 1 including the offset of $17^{\circ}$ between
$\psi=0^{\circ}$ and [100] axis due to imperfect sample mounting (see
Supplementary information for more details on the sample orientation [15]).
The radial scale for $Q$ is given in the panel (f) and is the same for all
panels (a–f).
In cubic chiral magnets, the nontrivial temperature evolution of anisotropic
interactions has been demonstrated in $B20$s [16, 17], $\beta$-Mn alloys [18,
19] and Zn-doped Cu2OSeO3 [20]. Often, the unambiguous experimental distinction
between the effects of cubic and exchange anisotropies is challenging since
they both affect macroscopic parameters, such as the transition fields between
helical and conical states, and conical and field-polarized states [9, 21].
Even neutron scattering techniques that are sensitive to microscopic material parameters are often unable to discriminate these two interactions without an additional theoretical model. According to phenomenological models, for a fixed sign of the cubic anisotropy constant $K_{c}>0$, ground-state helical spirals in cubic chiral magnets propagate along $[100]$-axes in the case of a positive AEI constant $F_{\textrm{AEI}}>0$ (e.g. in Fe0.85Co0.15Si [22]) and along $[111]$-axes if $F_{\textrm{AEI}}<0$ (e.g. in MnSi [17]).
Notable examples from earlier work where the role of the AEI shows up clearly include FeGe, where the spiral propagation vector reorients from $[100]$ to $[111]$ due to a sign change of the AEI [16], and Zn-doped Cu2OSeO3, where a sign change of the AEI is also argued, albeit with no reorientation of the helix due to the predominance of CA [20].
Here, we focus on pristine Cu2OSeO3, a magnetoelectric chiral magnet with
$T_{\textrm{C}}$=58 K [23, 24] which, in addition to conventional helical,
conical, and SkL phases, also features several exotic metastable states, such
as square and elongated SkL phases [25, 26]. Recently, the competition between
the cubic and exchange anisotropies was argued to be crucial for the
manifestation of unusual yet thermodynamically stable magnetic phases in
Cu2OSeO3: tilted conical spiral and disordered skyrmions that emerge at low
temperatures when a magnetic field is applied along one of the cubic axes [27,
28, 29]. Due to the magnetoelectric coupling of Cu2OSeO3 [24], its versatile magnetic phase diagram, and the ability to train the low-temperature skyrmion phase [26], the material is particularly interesting for exploring chiral-magnet-based application paradigms [30]. Furthermore, developing the
understanding of the fundamental mechanism of skyrmion stabilization through
anisotropy engineering paves the way for magnetic phase manipulation amongst
the known skyrmion hosts, and which can be particularly relevant for room-
temperature topological magnetic textures among noncentrosymmetric materials
with high magnetic ordering temperatures such as chiral $\beta$-Mn-type alloys
($T_{\textrm{C}}$ up to 400 K) [19] and LiFe5O8 ($T_{\textrm{C}}$$\sim$900 K)
[31]. Therefore, the unambiguous microscopic, quantitative determination of
anisotropic interactions such as the AEI in model chiral magnets such as
pristine Cu2OSeO3 is highly desirable.
Here we exploit the high momentum-space resolution of transmission resonant
elastic x-ray scattering ($t$-REXS) in vector magnetic fields [32] to quantify
directly the AEI in Cu2OSeO3. We obtain the following key results. First, the
angular variation of the conical spiral pitch in the (100) plane observed by
$t$-REXS agrees with a theory allowing the quantitative extraction of the AEI.
Second, in contrast to both FeGe [16] and lightly Zn-doped Cu2OSeO3 [20], the
sign of the AEI always remains negative across the entire temperature range
below $T_{c}$. Third, the magnitude of the AEI increases continuously below 50
K, correlating with the stability window of the tilted cone and disordered
skyrmion phases. Taken together, our results implicate the thermal evolution
of the AEI and its competition with CA as determining the structure of the
phase diagram, and contribute quantitatively towards the foundation of the
theoretical modeling and manipulation of spin textures in Cu2OSeO3 and other
anisotropic chiral magnets.
In the isotropic case, the spiral propagation vector is proportional to the ratio of the DMI and exchange, $Q_{0}\sim D/J$. When the anisotropic interactions come into play, the conical structure becomes distorted and, in general, contains an infinite number of harmonics, so the exact solution for the spin structure can hardly be found. However, if the characteristic helical energy is much larger than the anisotropic contributions, one can use a perturbative approach. In our case, in order to obtain corrections to the spiral vector, we obtain an approximate solution of the sine-Gordon equation describing the in-plane magnetization component with the anisotropy-induced terms.
The latter are due to AEI, CA and easy-plane anisotropy originating from the
tensile strain of the lamella. Importantly, the leading-order approximation allows us to neglect small local variations of the conical angle. Details of the derivation of the following equation are given in the
Supplementary Information [15].
At high temperatures the cubic anisotropy is small [33] (it is of fourth order in the magnetization modulus), so we consider only the effects of the AEI and the easy-plane anisotropy. The result for the spiral vector reads
$$Q = Q_{0}\left\{1-\dfrac{F_{\textrm{AEI}}\sin^{2}2\psi}{4J}\right\}-\dfrac{JZ^{2}\cot^{2}\alpha}{2D^{3}}\sin^{2}2(\psi-\phi)-\dfrac{JZ^{2}}{8D^{3}}\sin^{4}(\psi-\phi).\qquad(1)$$
Here $\psi$ is the azimuth angle of the conical helicoid propagation vector in
the $(001)$ plane, $\alpha$ is the conical angle ($\alpha=0$ in the fully
polarized phase), $Z$ is the easy-plane anisotropy constant and $\phi$
indicates the corresponding axis direction. At low temperatures the AEI-induced correction in the first term of Eq. (1) dominates; the other terms, including the one stemming from CA (see [15]), are less prominent. In addition, we have tried to fit the experimental data considering a higher-order exchange anisotropy term, and found that the result is the same within the error bar. Therefore, it is excluded from the analysis.
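For reference, a minimal fitting sketch in Python is given below. Lumping the strain terms of Eq. (1) into two fit amplitudes $A_2$ and $A_4$ (absorbing the $JZ^{2}/D^{3}$ prefactors and the $\cot^{2}\alpha$ factor), and the synthetic data used to exercise the fit, are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_model(psi, Q0, f_aei_over_j, A2, A4, phi):
    """Q(psi) of Eq. (1); f_aei_over_j plays the role of F_AEI / J."""
    return (Q0 * (1 - f_aei_over_j * np.sin(2 * psi) ** 2 / 4)
            - A2 * np.sin(2 * (psi - phi)) ** 2
            - A4 * np.sin(psi - phi) ** 4)

# psi (rad) and Q (nm^-1) would come from the t-REXS peak positions;
# here synthetic data stands in for the measured Q(psi).
psi = np.deg2rad(np.arange(0.0, 180.0, 3.0))
Q = q_model(psi, 0.089, -0.07, 1e-4, 5e-5, np.deg2rad(17.0))
popt, pcov = curve_fit(q_model, psi, Q,
                       p0=[0.09, 0.0, 0.0, 0.0, np.deg2rad(17.0)])
```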
A polar plot of the spiral wavevector $Q$ depending on its orientation in the
$(001)$ is shown in Fig. 1a for positive and negative $F_{\textrm{AEI}}$
constants as an example. For $F_{\textrm{AEI}}>0$ the propagation vector of
the spiral in the ground state is favored by AEI along $\langle 001\rangle$,
while for negative $F_{\textrm{AEI}}$ spirals propagate along diagonals of the
cubic lattice $\langle 111\rangle$. Experimentally, the conical propagation
vector can be oriented at will azimuthally in the (001) crystal plane by a
finite vector magnetic field (Fig. 1b). The detailed description of the
samples and the measurement setup is given in the Supplementary Information
[15]. Each conical state was prepared using the following procedure. First,
the helical state was achieved by cooling the sample down to the target
temperature at zero field ($T=14$ K), followed by ramping up the magnetic
field to $70$ mT to force the sample into the field-polarized state. The field
was then ramped down to $30$ mT in order to remain inside the conical phase
for a particular $\psi$. After each acquisition, the magnetic field was again
ramped up to $70$ mT, followed by changing its in-plane direction. This
protocol was repeated for each $\psi$ ranging between $0^{\circ}$ and $180^{\circ}$ with a $3^{\circ}$
step.
At each particular sample temperature, the intensity corresponding to each of
the Friedel pair of conical peaks for the measured $\psi$ was extracted and
summed up. The resulting patterns measured at the lowest (14 K), intermediate
(25 K) and highest (50 K) temperatures are shown in Figs. 1d–f. At $T=50$ K,
the intensity profile appears almost circular, but with a slight elliptical
distortion (Fig. 1f). The observed ellipticity in the intensity profile is an
indication of the uniaxial anisotropy induced by strain arising from the
contacts made on the lamella sample during FIB milling [34, 35, 36]. As the
temperature decreases, the profile starts to develop subtle features along the
marked crystallographic axes. On cooling, the azimuthal scattering intensity
distribution profile deviates from the ellipticity seen at 50 K and develops
extra humps along the in-plane $[110]$ directions. This is most strongly
pronounced at the base temperature of 14 K, as shown in Fig. 1d. Concomitantly, $|Q|$ is found to be minimal along the $[100]$ directions and maximal for $Q \parallel [110]$. Interestingly, in contrast to FeGe [32], the helical spiral was
observed to always revert the orientation of propagation back to the $[100]$
direction upon leaving the in-plane conical phase through reduction of the
field. This is another manifestation of the strong CA in Cu2OSeO3.
Figure 3: Temperature dependence of the exchange anisotropy constant
$F_{\textrm{AEI}}$ extracted from the fit of $Q(\psi)$ according to Eq. 1. The
black dashed line is a guide to the eye.
In the next step, both the radial and azimuthal profiles of the diffracted
intensity at each $\psi$ were examined. In order to only contain a single
Bragg peak, a sector box of $3^{\circ}$ angular width was chosen around each peak. Also,
both peaks from the Friedel pair were analyzed separately, using mirror
sectors, providing us with information on $|Q|$ in all four quadrants
simultaneously. Polar plots of the extracted peak position
$Q_{\textrm{c}}(\psi)$ in Figs. 2a–f directly show the anisotropic nature of
the conical spirals in Cu2OSeO3, and how this develops on cooling. The direct
influence of temperature dependence of the AEI on $|Q_{\textrm{c}}|$ can be
seen clearly in Fig. 2. At the lowest $T=14$ K (Fig. 2a), $Q_{\textrm{c}}$
varies from 0.086 nm$^{-1}$ along [100] to 0.092 nm$^{-1}$ along [110] in a monotonic fashion, as expected according to Eq. 1. This shows that in
Cu2OSeO3 the AEI is most pronounced at low temperatures, resulting in the
conical spiral pitch variation up to 10% between the conical spirals oriented
along [100] and [110].
In order to quantify the AEI constant, $Q(\psi)$ dependencies were fitted
according to Eq. (1) (solid lines in Fig. 2). The result is shown in Fig. 3,
where $|F_{\textrm{AEI}}|$ clearly tends to monotonically increase towards low
temperatures and reaches $F_{\textrm{AEI}}=-0.163\pm 0.012$ pJ m$^{-1}$ at 14 K,
and practically vanishes at 50 K. The strain-induced anisotropy terms
containing $Z$ in Eq. 1 do not show significant variation as a function of
temperature (see Supplementary Information [15]).
The sign of the AEI constant $F_{\textrm{AEI}}$ in Cu2OSeO3 is negative in the
whole temperature range, in contrast to previous results on Zn-doped Cu2OSeO3
(Fig. 3) [20]. As shown before, a few percent Zn doping can modify the
microscopic properties of pristine Cu2OSeO3 significantly [37], and our data
supports this conclusion. Moreover, this suggests that one can tune the
microscopic parameter with a small doping, and hence finely tailor the helical
(skyrmion lattice) pitch and stability windows of anisotropy-driven phases.
Importantly, at low temperatures the two systems consistently demonstrate the
strong contribution of the magnetocrystalline anisotropy that pins the spiral
wavevector along $[100]$. Nonetheless, the competition between the AEI and
cubic anisotropies favoring different orientations of magnetic spirals is known to stabilize more unusual magnetic spiral superstructures such as tilted
conical and disordered skyrmion phases [27, 29, 10]. A fine balance between
AEI and CA is required to theoretically reproduce low-temperature magnetic
phases in Cu2OSeO3 [29]. The strong enhancement of the AEI at low temperatures
is evident from our data and provides a much needed quantitative basis for the
stability of tilted conical and disordered skyrmion states proposed by the
theory. Therefore, a chemical tuning of the AEI would be a promising approach
to stabilize new phases far below $T_{\textrm{C}}$ in other known cubic chiral
magnets.
In summary, the study of the anisotropic exchange interaction (AEI) in the cubic chiral magnet Cu2OSeO3 using transmission resonant x-ray scattering in vector magnetic fields has revealed that the AEI energy constant is negative in the whole temperature range below $T_{\textrm{C}}$ and that its magnitude continuously increases below 50 K, reaching $F_{\textrm{AEI}}=-0.163$ pJ m$^{-1}$ at our lowest temperature of 14 K. The negative sign of the AEI constant points to a stronger contribution of cubic anisotropy that pins the spiral propagation vector along $[001]$. The
magnitude of $F_{\textrm{AEI}}$ is of the same order as in FeGe but with an
opposite sign. Our measurements of the strong enhancement of the AEI at low
temperatures provide a quantitative basis for phenomenological theories that
describe how competing anisotropies in chiral magnets can stabilize novel
complex spiral magnetic states such as tilted conical and disordered skyrmion
phases. Additionally, we have presented a theoretical and experimental framework for quantifying the AEI in cubic chiral magnets and distinguishing it from CA, which is valuable for comparison with $ab$-$initio$ theories and for understanding the role of the AEI in the emergence of skyrmions and other exotic magnetic states. A similar approach can be further developed for a broader class of anisotropic magnets with long-periodic spin modulations stabilized by other mechanisms, such as frustrated interactions [38, 39, 40].
## Acknowledgements
The authors thank A. Leonov for fruitful discussions, E. Deckardt, M. Bednarzik
and Th. Jung for their help in preparation of the membranes at PSI, B. Bartova
for the assistance in FIB nano-fabrication at EPFL CIME, and K. Schwarzburg
for the help with the scanning electron microscopy measurement in the Corelab
Correlative Microscopy and Spectroscopy at Helmholtz-Zentrum Berlin. The
$t-$REXS experiment was carried out at the beamline PM-2 VEKMAG at BESSY II
synchrotron as a part of the proposal 212-10682 ST. P.R.B., J.S.W., A.M., V.U.
acknowledge funding from the SNSF Project Sinergia CRSII5_171003
NanoSkyrmionics. P.R.B. also acknowledges SNSF grant no. 200020_182536
(Frustration in structures and dynamics). We acknowledge financial support for
the VEKMAG project and for the PM2-VEKMAG beamline by the German Federal
Ministry for Education and Research (BMBF 05K2010, 05K2013, 05K2016, 05K2019)
and by HZB. F.R. acknowledges funding by the German Research Foundation via
Project No. SPP2137/RA 3570. V.U. thanks the SNSF National Centers of
Competence in Research in Molecular Ultrafast Science and Technology (NCCR
MUST-No. 51NF40-183615) for the financial support.
## References
* Fert _et al._ [2017] A. Fert, N. Reyren, and V. Cros, Magnetic skyrmions: advances in physics and potential applications, Nature Reviews Materials 2, 1 (2017).
* Everschor-Sitte _et al._ [2018] K. Everschor-Sitte, J. Masell, R. M. Reeve, and M. Kläui, Perspective: Magnetic skyrmions—overview of recent progress in an active research field, Journal of Applied Physics 124, 240901 (2018).
* Song _et al._ [2020] K. M. Song, J.-S. Jeong, B. Pan, X. Zhang, J. Xia, S. Cha, T.-E. Park, K. Kim, S. Finizio, J. Raabe, _et al._ , Skyrmion-based artificial synapses for neuromorphic computing, Nature Electronics 3, 148 (2020).
* Sampaio _et al._ [2013] J. Sampaio, V. Cros, S. Rohart, A. Thiaville, and A. Fert, Nucleation, stability and current-induced motion of isolated magnetic skyrmions in nanostructures, Nature Nanotechnology 8, 839 (2013).
* Bogdanov and Hubert [1994] A. Bogdanov and A. Hubert, Thermodynamically stable magnetic vortex states in magnetic crystals, Journal of Magnetism and Magnetic Materials 138, 255 (1994).
* Tokura and Kanazawa [2020] Y. Tokura and N. Kanazawa, Magnetic skyrmion materials, Chemical Reviews 121, 2857 (2020).
* Bak and Jensen [1980] P. Bak and M. H. Jensen, Theory of helical magnetic structures and phase transitions in MnSi and FeGe, Journal of Physics C: Solid State Physics 13, L881 (1980).
* Nakanishi _et al._ [1980] O. Nakanishi, A. Yanase, A. Hasegawa, and M. Kataoka, The origin of the helical spin density wave in MnSi, Solid State Communications 35, 995 (1980).
* Maleyev [2006] S.V. Maleyev, Cubic magnets with Dzyaloshinskii-Moriya interaction at low temperature, Physical Review B 73, 174402 (2006).
* Leonov _et al._ [2020] A. O. Leonov, C. Pappas, and I. Kézsmárki, Field and anisotropy driven transformations of spin spirals in cubic skyrmion hosts, Physical Review Research 2, 043386 (2020).
* Adams _et al._ [2018] T. Adams, M. Garst, A. Bauer, R. Georgii, and C. Pfleiderer, Response of the skyrmion lattice in MnSi to cubic magnetocrystalline anisotropies, Physical Review Letters 121, 187205 (2018).
* Kindervater _et al._ [2020] J. Kindervater, T. Adams, A. Bauer, F. Haslbeck, A. Chacon, S. Mühlbauer, F. Jonietz, A. Neubauer, U. Gasser, G. Nagy, _et al._ , Evolution of magnetocrystalline anisotropies in Mn1-xFexSi and Mn1-xCoxSi as inferred from small-angle neutron scattering and bulk properties, Physical Review B 101, 104406 (2020).
* Hirschberger _et al._ [2021] M. Hirschberger, S. Hayami, and Y. Tokura, Nanometric skyrmion lattice from anisotropic exchange interactions in a centrosymmetric host, New Journal of Physics 23, 023039 (2021).
* Noll and Radu [2016] T. Noll and F. Radu, The mechanics of the VEKMAG experiment, Proc. of MEDSI2016, Barcelona, Spain , 370 (2016).
* sup [2023] See Supplementary Information at https://… for more experimental details and theory. (2023).
* Lebech _et al._ [1989] B. Lebech, J. Bernhard, and T. Freltoft, Magnetic structures of cubic FeGe studied by small-angle neutron scattering, Journal of Physics: Condensed Matter 1, 6105 (1989).
* Grigoriev _et al._ [2009] S. Grigoriev, D. Chernyshov, V. Dyadkin, V. Dmitriev, S. Maleyev, E. Moskvin, D. Menzel, J. Schoenes, and H. Eckerlebe, Crystal handedness and spin helix chirality in Fe1-xCoxSi, Physical Review Letters 102, 037204 (2009).
* Preißinger _et al._ [2021] M. Preißinger, K. Karube, D. Ehlers, B. Szigeti, H.-A. Krug von Nidda, J. White, V. Ukleev, H. Rønnow, Y. Tokunaga, A. Kikkawa, _et al._ , Vital role of magnetocrystalline anisotropy in cubic chiral skyrmion hosts, npj Quantum Materials 6, 1 (2021).
* Karube _et al._ [2020] K. Karube, J. White, V. Ukleev, C. Dewhurst, R. Cubitt, A. Kikkawa, Y. Tokunaga, H. Rønnow, Y. Tokura, and Y. Taguchi, Metastable skyrmion lattices governed by magnetic disorder and anisotropy in $\beta$-Mn-type chiral magnets, Physical Review B 102, 064408 (2020).
* Moody _et al._ [2021] S. Moody, P. Nielsen, M. Wilson, D. A. Venero, A. Štefančič, G. Balakrishnan, and P. Hatton, Experimental evidence of a change of exchange anisotropy sign with temperature in Zn-substituted Cu2OSeO3, Physical Review Research 3, 043149 (2021).
* Grigoriev _et al._ [2015] S. Grigoriev, A. Sukhanov, and S. Maleyev, From spiral to ferromagnetic structure in B20 compounds: Role of cubic anisotropy, Physical Review B 91, 224429 (2015).
* Grigoriev _et al._ [2006] S. Grigoriev, S. Maleyev, A. Okorokov, Y. O. Chetverikov, P. Böni, R. Georgii, D. Lamago, H. Eckerlebe, and K. Pranzas, Magnetic structure of MnSi under an applied field probed by polarized small-angle neutron scattering, Physical Review B 74, 214414 (2006).
* Seki _et al._ [2012] S. Seki, X. Yu, S. Ishiwata, and Y. Tokura, Observation of skyrmions in a multiferroic material, Science 336, 198 (2012).
* White _et al._ [2014] J. White, K. Prša, P. Huang, A. Omrani, I. Živković, M. Bartkowiak, H. Berger, A. Magrez, J. Gavilano, G. Nagy, _et al._ , Electric-field-induced skyrmion distortion and giant lattice rotation in the magnetoelectric insulator Cu2OSeO3, Physical Review Letters 113, 107203 (2014).
* Takagi _et al._ [2020] R. Takagi, Y. Yamasaki, T. Yokouchi, V. Ukleev, Y. Yokoyama, H. Nakao, T. Arima, Y. Tokura, and S. Seki, Particle-size dependent structural transformation of skyrmion lattice, Nature Communications 11, 1 (2020).
* Aqeel _et al._ [2021] A. Aqeel, J. Sahliger, T. Taniguchi, S. Mändl, D. Mettus, H. Berger, A. Bauer, M. Garst, C. Pfleiderer, and C. H. Back, Microwave spectroscopy of the low-temperature skyrmion state in Cu2OSeO3, Physical Review Letters 126, 017202 (2021).
* Qian _et al._ [2018] F. Qian, L. J. Bannenberg, H. Wilhelm, G. Chaboussant, L. M. Debeer-Schmitt, M. P. Schmidt, A. Aqeel, T. T. Palstra, E. Brück, A. J. Lefering, _et al._ , New magnetic phase of the chiral skyrmion material Cu2OSeO3, Science Advances 4, eaat7323 (2018).
* Chacon _et al._ [2018] A. Chacon, L. Heinen, M. Halder, A. Bauer, W. Simeth, S. Mühlbauer, H. Berger, M. Garst, A. Rosch, and C. Pfleiderer, Observation of two independent skyrmion phases in a chiral magnetic material, Nature Physics 14, 936 (2018).
* Bannenberg _et al._ [2019] L. J. Bannenberg, H. Wilhelm, R. Cubitt, A. Labh, M. P. Schmidt, E. Lelièvre-Berna, C. Pappas, M. Mostovoy, and A. O. Leonov, Multiple low-temperature skyrmionic states in a bulk chiral magnet, npj Quantum Materials 4, 1 (2019).
* Lee _et al._ [2022] O. Lee, T. Wei, K. D. Stenning, J. C. Gartside, S. Seki, A. Aqeel, C. Back, Y. Tokura, W. R. Branford, and H. Kurebayashi, Task-adaptive physical reservoir computing (2022).
* Iguchi _et al._ [2015] Y. Iguchi, S. Uemura, K. Ueno, and Y. Onose, Nonreciprocal magnon propagation in a noncentrosymmetric ferromagnet LiFe5O8, Physical Review B 92, 184419 (2015).
* Ukleev _et al._ [2021] V. Ukleev, O. Utesov, L. Yu, C. Luo, K. Chen, F. Radu, Y. Yamasaki, N. Kanazawa, Y. Tokura, T.-h. Arima, _et al._ , Signature of anisotropic exchange interaction revealed by vector-field control of the helical order in a FeGe thin plate, Physical Review Research 3, 013094 (2021).
* Grigoriev _et al._ [2022] S. Grigoriev, N. Chubova, L. Azarova, and O. Utesov, Transition from spiral to ferromagnetic structure in Fe1-xCoxSi compounds: Small-angle neutron scattering study, Annals of Physics 447, 169132 (2022).
* Shibata _et al._ [2015] K. Shibata, J. Iwasaki, N. Kanazawa, S. Aizawa, T. Tanigaki, M. Shirai, T. Nakajima, M. Kubota, M. Kawasaki, H. Park, _et al._ , Large anisotropic deformation of skyrmions in strained crystal, Nature Nanotechnology 10, 589 (2015).
* Okamura _et al._ [2017] Y. Okamura, Y. Yamasaki, D. Morikawa, T. Honda, V. Ukleev, H. Nakao, Y. Murakami, K. Shibata, F. Kagawa, S. Seki, _et al._ , Directional electric-field induced transformation from skyrmion lattice to distinct helices in multiferroic Cu2OSeO3, Physical Review B 95, 184411 (2017).
* Ukleev _et al._ [2020] V. Ukleev, Y. Yamasaki, O. Utesov, K. Shibata, N. Kanazawa, N. Jaouen, H. Nakao, Y. Tokura, and T.-h. Arima, Metastable solitonic states in the strained itinerant helimagnet FeGe, Physical Review B 102, 014416 (2020).
* Štefančič _et al._ [2018] A. Štefančič, S. Moody, T. Hicken, M. Birch, G. Balakrishnan, S. Barnett, M. Crisanti, J. Evans, S. Holt, K. Franke, _et al._ , Origin of skyrmion lattice phase splitting in Zn-substituted Cu2OSeO3, Physical Review Materials 2, 111402 (2018).
* Ballou _et al._ [1987] R. Ballou, J. Deportes, R. Lemaire, Y. Nakamura, and B. Ouladdiaf, Helimagnetism in the cubic laves phase YMn2, Journal of Magnetism and Magnetic Materials 70, 129 (1987).
* Yu _et al._ [2012] X. Yu, M. Mostovoy, Y. Tokunaga, W. Zhang, K. Kimoto, Y. Matsui, Y. Kaneko, N. Nagaosa, and Y. Tokura, Magnetic stripes and skyrmions with helicity reversals, Proceedings of the National Academy of Sciences 109, 8856 (2012).
* Lin and Hayami [2016] S.-Z. Lin and S. Hayami, Ginzburg-Landau theory for skyrmions in inversion-symmetric magnets with competing interactions, Physical Review B 93, 064430 (2016).
|
# Switch-based Hybrid Beamforming for Wideband Multi-carrier Communications
This work has been supported in part by Academy of Finland under 6Genesis Flagship (grant 318927) and EERA Project (grant 332362).
Mengyuan Ma, Nhan Thanh Nguyen and Markku Juntti, Centre for Wireless Communications (CWC), University of Oulu, P.O. Box 4500, FI-90014, Finland
###### Abstract
Switch-based hybrid beamforming (SW-HBF) architectures are promising for
realizing massive multiple-input multiple-output (MIMO) communications systems
because of their low cost and low power consumption. In this paper, we study
the performance of SW-HBF in a wideband multi-carrier MIMO communication
system considering the beam squint effect. We aim at designing the switch-
based combiner that maximizes the system spectral efficiency (SE). However,
the design problem is challenging because the analog combing matrix elements
are binary variables. To overcome this, we propose tabu search-based (TS) SW-
HBF schemes that can attain near-optimal performance with reasonable
computational complexity. Furthermore, we compare the total power consumption
and energy efficiency (EE) of the SW-HBF architecture to those of the phase-
shifter-based hybrid beamforming (PS-HBF) architecture. Numerical simulations
show that the proposed algorithms can efficiently find near-optimal solutions.
Moreover, the SW-HBF scheme can significantly mitigate the beam squint effect
and is less affected by the number of subcarriers than PS-HBF. It also
provides improved SE and EE performance compared to PS-HBF schemes.
###### Index Terms:
Switch-based hybrid beamforming, wideband communications, multi-carrier
systems, beam squint effect, spectral efficiency, energy efficiency.
## I Introduction
Wideband communications systems, with their large utilizable spectrum, are
promising to meet the ever-escalating demand for ultra-high-speed data rates of future 6G wireless networks [1, 2]. However, the large numbers of antennas in millimeter wave (mmWave) and Terahertz (THz) communications systems require large numbers of excessively power-hungry radio frequency
(RF) chains. As a result, there could be prohibitive power consumption and
cost. Therefore, hybrid beamforming (HBF) is envisioned as a critical
technique to realize mmWave and THz communications. It can considerably reduce
the number of costly radio frequency chains and maintain the spatial
multiplexing gain [3].
In HBF, the analog beamformer can be implemented by either soft antenna
selection with variable phase-shifters or hard antenna selection using switch
networks [4] (see Fig. 1). Nevertheless, the practical realization of the
phase-shifters for high frequencies is not a trivial task [5]. Moreover, the
large number of phase-shifters may require high power consumption, degrading
the system energy efficiency (EE). Furthermore, the beam squint effect cannot
be neglected in systems employing large bandwidth and large-sized antenna
arrays, especially in phase-shifter-based HBF (PS-HBF) transceivers [6, 7]. It
can significantly degrade the spectral efficiency (SE) in wideband multi-
carrier systems. To mitigate the beam squint effect, the true-time-delay (TTD)
structure can be embedded into the RF front-end [8], which, however,
inevitably causes increased power consumption to the system. In contrast, the
frequency-independent switch-based HBF (SW-HBF) is capable of alleviating the
beam squint effect without any increase in power consumption. Compared to
phase-shifters, switch networks are simple to implement, low-power, and quick
to adapt to the channel variations [5]. Nonetheless, most of the studies on
SW-HBF focus on narrowband channel models [9, 10, 11, 5]. To the best of the
authors’ knowledge, SW-HBF in frequency-selective wideband multi-carrier
systems has not been thoroughly considered in the literature.
(a) Phase shifter-based hybrid beamforming.
(b) Switch-based hybrid beamforming.
Figure 1: Illustration of PS-based and SW-based hybrid combining structures.
In this paper, we investigate the potentials of SW-HBF in overcoming the beam
squint effect of a wideband multiple-input multiple-output orthogonal
frequency-division multiplexing (MIMO-OFDM) system. Specifically, we aim at
designing the SW-based combiner that maximizes the system SE. The design
problem is challenging due to the binary variable constraints and the rank
constraint of the analog beamformer. To tackle the challenges, we first
propose a tabu search (TS) algorithm that is demonstrated to find near-optimal
solutions with reasonable complexity. Then, to further refine the solution, we
employ a projected gradient ascending (PGA) algorithm to obtain a better
initial point for the TS algorithm. Furthermore, we introduce power
consumption models for the PS-HBF and the SW-HBF architectures. Finally,
intensive simulations are provided to compare the SE and EE of the SW-HBF and
PS-HBF schemes. The results show that the former is less affected by the beam
squint effect and the number of subcarriers than the latter. Moreover, the
proposed TS-based SW-HBF schemes achieve better SE and EE performance than the
PS-HBF.
## II System Model and Problem Formulation
### II-A Signal Model
We consider a point-to-point MIMO-OFDM system where the transmitter is
equipped with $N_{t}$ antennas and performs fully digital beamforming. The
receiver employs either PS-HBF or SW-HBF architecture with $N_{r}$ antennas
and $N_{RF}$ RF chains, as illustrated in Figs 1(a) and 1(b). Let $K$ be the
number of subcarriers, and let ${\bm{s}}_{k}\in{\mathbb{C}}^{N_{s}\times 1}$
$(N_{s}\leq N_{RF})$ be the transmitted symbol vector at the $k$th subcarrier,
${\mathbb{E}}\left[{\bm{s}}_{k}{\bm{s}}^{H}_{k}\right]={\bm{I}}_{N_{s}},\;k=1,\cdots,K$,
where ${\bm{I}}_{N_{s}}$ denotes the $N_{s}\times N_{s}$ identity matrix. The
transmitted signal vector ${\bm{x}}_{k}\in{\mathbb{C}}^{N_{t}\times 1}$ for
each subcarrier is given as
${\bm{x}}_{k}={\bm{F}}_{k}{\bm{s}}_{k},$ (1)
where ${\bm{F}}_{k}\in{\mathbb{C}}^{N_{t}\times N_{s}}$ is the digital
precoding matrix.
At the receiver, the signal vector is first combined by the analog combiner,
represented by ${\bm{W}}_{RF}\in{\mathbb{C}}^{N_{r}\times N_{RF}}$. After
discarding the cyclic prefix (CP) and performing $N_{RF}$ $K$-point fast Fourier transforms (FFTs), the combined signal is further processed in the frequency domain by the low-
dimensional baseband combiner ${\bm{W}}_{BB}[k]\in{\mathbb{C}}^{N_{RF}\times
N_{s}}$ for each subcarrier. Finally, the combined signal at the $k$th
subcarrier through channel ${\bm{H}}_{k}\in{\mathbb{C}}^{N_{r}\times N_{t}}$
is given as
${\bm{y}}_{k}={\bm{W}}^{H}_{k}{\bm{H}}_{k}{\bm{F}}_{k}{\bm{s}}_{k}+{\bm{W}}^{H}_{k}{\bm{n}}_{k},$
(2)
where ${\bm{W}}_{k}={\bm{W}}_{RF}{\bm{W}}_{BB}[k]$, and
${\bm{n}}_{k}\sim\mathcal{N}(\bm{0},\sigma_{n}^{2}{\bm{I}}_{N_{r}})$ is the
additive white Gaussian noise vector at the $k$th subcarrier with
$\sigma_{n}^{2}$ being the noise variance.
### II-B Beam Squint Effect and Channel Model
#### II-B1 Beam Squint Effect
In the conventional narrowband communications systems with analog beamforming,
the phase values of variable phase-shifters are generally optimized for the
carrier frequency. This frequency-dependent design incurs the beam squint
effect when it is applied to wideband multi-carrier systems [6]. Specifically,
there may be considerable performance loss for frequencies other than the
carrier frequency because the beam patterns of analog beamformers vary with frequency. Fig. 2 illustrates the beam patterns as a function of beam focus
angle $\phi$ for different frequencies. It can be observed that when the
beamformer points to angle $\phi_{0}=\pi/6$ at carrier frequency
$f_{c}=60\rm{GHz}$, the beamforming gain at other frequencies suffers a significant loss, as their beam focus angles squint away from
$\pi/6$.
Figure 2: Illustration of beam squint effect in multi-carrier systems with
beam focus $\phi_{0}=\pi/6$, $N=64$ antennas, uniform linear array with
antenna spacing $d_{s}=\lambda_{c}/2$, carrier frequency $f_{c}=60\rm{GHz}$,
bandwidth $B=4\rm{GHz}$.
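A short numerical sketch reproduces the effect for the parameters in Fig. 2, using the frequency-dependent ULA steering vector defined in the next subsection (Eq. (4)); the specific gain normalization is our own choice for illustration.

```python
import numpy as np

def steering(theta, f, fc, N, ds_over_lam=0.5):
    """ULA steering vector a(theta, f) of Eq. (4)."""
    psi = ds_over_lam * np.sin(theta)
    return np.exp(-2j * np.pi * np.arange(N) * psi * f / fc)

N, fc, B = 64, 60e9, 4e9
w = steering(np.pi / 6, fc, fc, N) / np.sqrt(N)  # beamformer tuned to fc
for f in (fc - B / 2, fc, fc + B / 2):
    gain = np.abs(w.conj() @ steering(np.pi / 6, f, fc, N)) ** 2 / N
    print(f"f = {f / 1e9:5.1f} GHz: normalized gain at pi/6 = {gain:.3f}")
# The gain is 1.0 at fc but drops sharply at the band edges: the beam squints.
```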
#### II-B2 Wideband Channel Model with Beam Squint Effect
Since the beam squint effect is essentially induced by the frequency-dependent
beam steering vectors, we adopt the modified channel model [12], which
incorporates the frequency dependency into the classical geometric channel
model [13, 14, 15]. Assuming a uniform linear array (ULA) is utilized, the
$d$-th tap of the channel at frequency $f$ can be modeled as [12]
${\bm{H}}_{f}[d]=\sum_{l=1}^{L}\alpha_{l}p\left(dT_{s}-\tau_{l}\right){\bm{a}}_{r}\left(\theta_{l}^{r},f\right){\bm{a}}_{t}^{H}\left(\theta_{l}^{t},f\right),$
(3)
where $L$ is the number of distinct scattering clusters,
$\alpha_{l}\sim\mathcal{C}\mathcal{N}(0,1)$ and $\tau_{l}$ are the complex
gain and the delay of the $l$th cluster, respectively, $\theta_{l}^{r}$ and
$\theta_{l}^{t}$ represent the angle of arrival (AoA) and angle of departure
(AoD) of the $l$th cluster, respectively, and finally $p(\tau)$ denotes the
pulse-shaping filter for $T_{s}$-spaced signaling evaluated at $\tau$ seconds
[13]. The transmit/receive steering vector at frequency $f$ is given by
${\bm{a}}(\theta,f)=\left[1,e^{-j2\pi\psi(\theta)\frac{f}{f_{c}}},\cdots,e^{-j2\pi(N-1)\psi(\theta)\frac{f}{f_{c}}}\right]^{T},$
(4)
where $N\in\\{N_{t},N_{r}\\}$ is the number of antennas,
$\psi(x)\triangleq\frac{d_{s}\sin(x)}{\lambda_{c}}$ with $\lambda_{c}$ being
the wavelength of carrier frequency, and $d_{s}$ being the antenna spacing
distance. The frequency-domain channel at the $k$th subcarrier,
$k=1,\cdots,K$, can be expressed as [13]
${\bm{H}}_{k}=\sum_{d=0}^{D-1}{\bm{H}}_{f_{k}}[d]e^{-j\frac{2\pi k}{K}d},$ (5)
where $f_{k}$ denotes the central frequency of the $k$th subcarrier with
bandwidth $B$, which can be described as [12]
$f_{k}=f_{c}+\left(k-\frac{K+1}{2}\right)\frac{B}{K},\quad\forall k.$ (6)
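As an illustration of Eqs. (3)-(6), the following Python sketch (ours; a normalized sinc pulse stands in for the raised-cosine filter of Eq. (22), and all names are illustrative) assembles the frequency-domain channel ${\bm{H}}_{k}$ from the cluster parameters:

```python
import numpy as np

def ula(theta, f, N, f_c, d_over_lam=0.5):
    # Frequency-dependent ULA steering vector, Eq. (4).
    n = np.arange(N)
    return np.exp(-1j * 2 * np.pi * n * d_over_lam * np.sin(theta) * f / f_c)

def channel_at_subcarrier(k, K, D, T_s, f_c, B, alphas, taus, aoas, aods,
                          N_r, N_t):
    # H_k per Eqs. (3) and (5)-(6); np.sinc is a stand-in for p(.).
    f_k = f_c + (k - (K + 1) / 2) * B / K          # Eq. (6), k = 1, ..., K
    H_k = np.zeros((N_r, N_t), dtype=complex)
    for d in range(D):
        H_d = np.zeros((N_r, N_t), dtype=complex)
        for a, tau, th_r, th_t in zip(alphas, taus, aoas, aods):   # Eq. (3)
            p = np.sinc((d * T_s - tau) / T_s)
            H_d += a * p * np.outer(ula(th_r, f_k, N_r, f_c),
                                    ula(th_t, f_k, N_t, f_c).conj())
        H_k += H_d * np.exp(-1j * 2 * np.pi * k * d / K)           # Eq. (5)
    return H_k
```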
### II-C Problem Formulation
Assuming the availability of full channel state information, we aim at
designing the hybrid combiner that maximizes the SE of the considered
wideband MIMO-OFDM system. For the PS-HBF (see Fig. 1(a)), the set
of feasible analog combining vectors (i.e., the columns of ${\bm{W}}_{RF}$) is
given by $\mathcal{U}_{1}=\left\\{{\bm{u}}\in{\mathbb{C}}^{N_{r}\times
1}\left||u_{j}|=1,j=1,\cdots,N_{r}\right.\right\\}$, where $u_{j}$ denotes the
$j$th element of vector ${\bm{u}}$, $|x|$ denotes the modulus of a complex
number $x$. In contrast, the feasible set of the SW-based analog combiner in the
SW-HBF scheme (see Fig. 1(b)) is given by
$\mathcal{U}_{2}=\left\\{{\bm{u}}\in\mathcal{B}^{N_{r}\times 1}\right\\}$,
where $\mathcal{B}=\\{0,1\\}$. Assuming that the transmit symbol at each
subcarrier follows a Gaussian distribution, the problem of designing the
combiner in the PS-HBF and SW-HBF schemes can be formulated as
$\displaystyle\underset{{\bm{W}}_{RF},\atop\left\\{{\bm{F}}_{k},{\bm{W}}_{BB}[k]\right\\}_{k=1}^{K}}{\max}$
$\displaystyle\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left|{\bm{I}}_{N_{s}}+\frac{1}{\sigma^{2}_{n}}{\bm{W}}_{k}^{\dagger}{\bm{H}}_{k}{\bm{F}}_{k}{\bm{F}}^{H}_{k}{\bm{H}}^{H}_{k}{\bm{W}}_{k}\right|$
(7a) $\displaystyle\rm{s.t.}\qquad$
$\displaystyle\quad\sum_{k=1}^{K}\|{\bm{F}}_{k}\|^{2}_{F}\leq P_{b},$ (7b)
$\displaystyle\quad{\bm{W}}_{RF}(:,j)\in\mathcal{U},\,j=1,\cdots,N_{RF},$ (7c)
$\displaystyle\quad{\rm rank}({\bm{W}}_{RF})\geq N_{s},$ (7d)
where $\dagger$ denotes the Moore-Penrose pseudo-inverse,
${\bm{W}}_{RF}(:,j)$ denotes the $j$th column of ${\bm{W}}_{RF}$, and $P_{b}$
is the transmit power budget. Note that $\mathcal{U}=\mathcal{U}_{1}$ for
PS-HBF, whereas $\mathcal{U}=\mathcal{U}_{2}$ for SW-HBF.
## III SW-HBF Design
The PS-HBF design problem can be solved by the methods proposed in [16, 17].
In contrast, the SW-HBF design is more challenging, and its solution is
unavailable in the literature. Problem (7) is non-convex due to the binary
variable constraints (7c) and the rank constraint (7d) on the analog combiner.
The optimal solution can be found by exhaustive search, which, however, is
computationally prohibitive. To overcome this, we
develop a computationally efficient solution to the SE maximization problem by
decoupling the design of $\left\\{{\bm{F}}_{k}\right\\}_{k=1}^{K}$ and
$\left\\{{\bm{W}}_{k}\right\\}_{k=1}^{K}$. Specifically, for the design of
transmit beamformers $\left\\{{\bm{F}}_{k}\right\\}_{k=1}^{K}$, we assume that
the optimal receiver is used. Then, the receive beamformers
$\left\\{{\bm{W}}_{k}\right\\}_{k=1}^{K}$ are obtained given the transmit
beamformers $\left\\{{\bm{F}}_{k}\right\\}_{k=1}^{K}$. The solutions to these
subproblems are presented in the following subsections.
### III-A Transmit Beamformer Design
Given the receive beamforming matrices
$\left\\{{\bm{W}}_{k}\right\\}_{k=1}^{K}$, the transmit beamforming design
problem is expressed as
$\displaystyle\max\limits_{\\{{\bm{F}}_{k}\\}_{k=1}^{K}}$
$\displaystyle\quad\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left|{\bm{I}}_{N_{r}}+\frac{1}{\sigma_{n}^{2}}{\bm{H}}_{k}{\bm{F}}_{k}{\bm{F}}^{H}_{k}{\bm{H}}^{H}_{k}\right|$
(8) s.t. $\displaystyle\quad\sum_{k=1}^{K}\|{\bm{F}}_{k}\|^{2}_{F}\leq P_{b}.$
Let ${\bm{H}}_{k}={\bm{U}}_{k}{\bm{\Sigma}}_{k}{\bm{V}}_{k}^{H}$ be the
singular value decomposition (SVD) of ${\bm{H}}_{k}$, where
${\bm{U}}_{k}\in{\mathbb{C}}^{N_{r}\times
N_{s}},{\bm{\Sigma}}_{k}\in{\mathbb{C}}^{N_{s}\times
N_{s}},{\bm{V}}_{k}\in{\mathbb{C}}^{N_{t}\times N_{s}}$. The optimal solution
for ${\bm{F}}_{k}$ can be given as [18]
${\bm{F}}_{k}={\bm{V}}_{k}{\bm{\Gamma}}_{k}^{\frac{1}{2}}{\bm{B}}_{k},$ (9)
where ${\bm{B}}_{k}$ is any $N_{s}\times N_{s}$ unitary matrix.
${\bm{\Gamma}}_{k}={\rm diag}(p_{k,1},\cdots,p_{k,N_{s}})$ (satisfying
$\sum_{k=1}^{K}{\rm Tr}({\bm{\Gamma}}_{k})=P_{b}$) is the diagonal matrix obtained
by the water-filling approach, i.e.,
$p_{k,i}=\left[\mu-\frac{\sigma_{n}^{2}}{\lambda_{{\bm{H}}_{k},i}}\right]^{+},$
(10)
where $\lambda_{{\bm{H}}_{k},i}$ denotes the $i$th largest eigenvalue of
${\bm{H}}_{k}{\bm{H}}_{k}^{H}$ (i.e., the squared $i$th singular value of
${\bm{H}}_{k}$), $\mu$ is the water level, and in (10),
$[x]^{+}\triangleq\max(x,0)$, $\forall x\in{\mathbb{R}}$.
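A minimal sketch of the water-filling step of Eq. (10) (ours; a simple bisection on the water level $\mu$ is used, and `gains` collects the eigenvalues $\lambda_{{\bm{H}}_{k},i}$ over all subcarriers and streams):

```python
import numpy as np

def water_filling(gains, P_b, sigma2):
    # Bisection on mu so that the total power sum_{k,i} p_{k,i} equals P_b,
    # with p = [mu - sigma2 / gain]^+ as in Eq. (10).
    gains = np.asarray(gains, dtype=float)
    lo, hi = 0.0, P_b + np.max(sigma2 / gains)   # hi guarantees total >= P_b
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        total = np.maximum(mu - sigma2 / gains, 0.0).sum()
        lo, hi = (lo, mu) if total > P_b else (mu, hi)
    return np.maximum(mu - sigma2 / gains, 0.0)

# Example: power allocation over 4 parallel channels with budget 10.
print(water_filling([2.0, 1.0, 0.5, 0.1], P_b=10.0, sigma2=1.0))
```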
### III-B Receive Beamformer Design
For a fixed analog combiner ${\bm{W}}_{RF}$, the optimal digital combiner of
each subcarrier is the MMSE solution [16],
${\bm{W}}_{BB}[k]=\left({\bm{J}}_{k}{\bm{J}}^{H}_{k}+\sigma_{n}^{2}{\bm{W}}_{RF}^{H}{\bm{W}}_{RF}\right)^{-1}{\bm{J}}_{k},$
(11)
where ${\bm{J}}_{k}\triangleq{\bm{W}}_{RF}^{H}{\bm{H}}_{k}{\bm{F}}_{k}$. Since
the optimal MMSE digital combiner achieves the maximum SE, the problem of
designing the analog combiner is expressed as [19]
$\displaystyle\underset{{\bm{W}}_{RF}}{\max}$
$\displaystyle\quad\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left|{\bm{I}}_{N_{RF}}+\frac{1}{\sigma^{2}_{n}}{\bm{W}}_{RF}^{\dagger}\tilde{{\bm{F}}}_{k}{\bm{W}}_{RF}\right|$
(12a) $\displaystyle\rm{s.t.}$
$\displaystyle\quad{\bm{W}}_{RF}(:,j)\in\mathcal{U}_{2},\,j=1,\cdots,N_{RF},$
(12b) $\displaystyle\quad{\rm rank}({\bm{W}}_{RF})\geq N_{s},$ (12c)
where
$\tilde{{\bm{F}}}_{k}\triangleq{\bm{H}}_{k}{\bm{F}}_{k}{\bm{F}}^{H}_{k}{\bm{H}}^{H}_{k}$.
Problem (12) is still challenging due to the binary variable constraints (12b)
and the rank constraint (12c). The optimum can be found by exhaustive search,
which is prohibitively complex. To solve the problem efficiently, we develop a
low-complexity algorithm based on the tabu search (TS) method [20].
The TS algorithm begins by searching the neighbors of an initial point
${\bm{w}}_{0}$ and recording the neighbor with the largest objective value as
the best candidate ${\bm{w}}_{b}$. It then iteratively searches the neighbors
of the best candidate ${\bm{w}}_{b}$ and updates ${\bm{w}}_{b}$ until the
stopping criteria are met. During the process, a tabu list $\mathcal{L}$ of
length $L$ records the visited points to avoid cycling. The output of the TS
algorithm is the best solution ${\bm{w}}_{b}^{*}$ found over the iterations,
i.e., the one achieving the largest value of the objective function.
For problem (12), the objective function can be defined as
$f({\bm{W}}_{RF})=\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left|{\bm{I}}_{N_{RF}}+\frac{1}{\sigma^{2}_{n}}{\bm{W}}_{RF}^{\dagger}\tilde{{\bm{F}}}_{k}{\bm{W}}_{RF}\right|.$
(13)
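The following sketch (ours) evaluates this objective numerically, using the pseudo-inverse exactly as written; `F_tilde_list` is assumed to hold the $K$ matrices $\tilde{{\bm{F}}}_{k}={\bm{H}}_{k}{\bm{F}}_{k}{\bm{F}}^{H}_{k}{\bm{H}}^{H}_{k}$ of size $N_{r}\times N_{r}$:

```python
import numpy as np

def se_objective(W_RF, F_tilde_list, sigma2):
    # Average spectral efficiency of Eq. (13).
    W = W_RF.astype(complex)
    W_pinv = np.linalg.pinv(W)                 # the dagger in Eq. (13)
    I = np.eye(W.shape[1])
    total = 0.0
    for F_t in F_tilde_list:
        M = I + (W_pinv @ F_t @ W) / sigma2
        _, logdet = np.linalg.slogdet(M)       # numerically stable log|det|
        total += logdet / np.log(2.0)
    return total / len(F_tilde_list)
```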
For notational convenience, we use ${\bm{W}}$ in the following to represent
${\bm{W}}_{RF}$. Let
${\bm{e}}=[e_{1},\cdots,e_{N_{r}N_{RF}}]^{T}\in\mathcal{B}^{N_{r}N_{RF}}$, and
let ${\bm{e}}_{i}$ be the vector whose $i$th element is $1$ while all other
elements are $0$, i.e., $e_{i}=1$, $e_{j}=0$, $\forall j\neq i$. Let
${\bm{W}}_{b}$ be the best candidate found so far. Furthermore, let
${\bm{w}}_{b}\triangleq{\rm vec}({\bm{W}}_{b})$ be the vectorization of
${\bm{W}}_{b}$. The neighbor set
of ${\bm{w}}_{b}$ is given by
$\mathcal{N}({\bm{w}}_{b})=\left\\{{\bm{w}}\in\mathcal{B}^{N_{r}N_{RF}}\left|\,|{\bm{w}}-{\bm{w}}_{b}|={\bm{e}}_{i},\;{\rm rank}({\bm{W}})\geq N_{s},\;i=1,\cdots,N_{r}N_{RF}\right.\right\\},$ (14)
where ${\bm{W}}\triangleq{\rm vec}^{-1}({\bm{w}})$ and $|{\bm{x}}|$
(${\bm{x}}\in\mathcal{B}^{N_{r}N_{RF}\times 1}$) denotes the element-wise
absolute value of ${\bm{x}}$. The TS algorithm solving problem (12) is
summarized in Algorithm 1, where ${\bm{W}}_{0}$ is the initial candidate and
$N_{iter}$ is the maximum number of iterations.
Input: $L$, ${\bm{W}}_{0}$, $N_{iter}$, $i=0$
Output: ${\bm{W}}_{b}^{*}={\rm vec}^{-1}({\bm{w}}_{b}^{*})$
1 ${\bm{w}}_{0}={\rm vec}({\bm{W}}_{0})$, ${\bm{w}}_{b}\leftarrow{\bm{w}}_{0}$, ${\bm{W}}_{b}\leftarrow{\bm{W}}_{0}$, ${\bm{w}}_{b}^{*}\leftarrow{\bm{w}}_{0}$;
2 $\mathcal{L}=\varnothing$, $\mathcal{L}\leftarrow\mathcal{L}\cup\\{{\bm{w}}_{0}\\}$;
3 while $i\leq N_{iter}$ and not converged do
4  for ${\bm{w}}\in\mathcal{N}({\bm{w}}_{b})$ do
5   ${\bm{W}}={\rm vec}^{-1}({\bm{w}})$;
6   if ${\bm{w}}\notin\mathcal{L}$ and $f({\bm{W}})>f({\bm{W}}_{b})$ then
7    ${\bm{w}}_{b}\leftarrow{\bm{w}}$;
8   end if
9  end for
10 ${\bm{W}}_{b}={\rm vec}^{-1}({\bm{w}}_{b})$, ${\bm{W}}_{b}^{*}={\rm vec}^{-1}({\bm{w}}_{b}^{*})$;
11 if $f({\bm{W}}_{b})>f({\bm{W}}_{b}^{*})$ then
12  ${\bm{w}}_{b}^{*}\leftarrow{\bm{w}}_{b}$;
13 end if
14 $\mathcal{L}\leftarrow\mathcal{L}\cup\\{{\bm{w}}_{b}\\}$;
15 if $|\mathcal{L}|>L$ then
16  Remove the earliest entry from $\mathcal{L}$;
17 end if
18 $i\leftarrow i+1$;
19 end while
Algorithm 1 TS algorithm for solving problem (12)
###### Remark 1
The convergence of the TS algorithm is guaranteed because each iteration moves
from the current point to the best neighbor with an equal or larger objective
value, so the sequence of objective values is non-decreasing and bounded.
Therefore, it has the potential to find a near-optimal solution, which is
corroborated by the results shown in Section V.
In Algorithm 1, line 1 initializes the best candidate as the chosen point
${\bm{W}}_{0}$, which is then added to the tabu list (line 2). Based on the
initial candidate ${\bm{W}}_{0}$, the procedure iteratively searches for the
best neighbor and updates the best candidate until convergence (lines 3-19).
In each iteration, the TS algorithm first collects the neighbor set
$\mathcal{N}({\bm{w}}_{b})$ and treats each neighbor
${\bm{w}}\in\mathcal{N}({\bm{w}}_{b})$ as the potential candidate. By
comparing the objective value of all potential candidates to that of the
previous best candidate, the new best candidate, which is not in the tabu
list, is found (see lines 4-9). Afterward, the procedure updates the best
solution ${\bm{w}}_{b}^{*}$ by comparing it with the best candidate
${\bm{w}}_{b}$ (lines 10-13). Finally, the best candidate ${\bm{w}}_{b}$ is
added to the tabu list to avoid cycling in future searches (lines 14-17).
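For concreteness, a compact Python rendering of Algorithm 1 (ours; row-major flattening stands in for ${\rm vec}(\cdot)$, and only strictly improving non-tabu moves are accepted, matching lines 6-7) might look as follows:

```python
import numpy as np

def tabu_search(W0, f, N_iter=100, L_tabu=50, N_s=2):
    # W0: feasible 0/1 matrix of shape (N_r, N_RF); f: objective of Eq. (13).
    N_r, N_RF = W0.shape
    w_b = W0.reshape(-1).copy()
    w_star, f_star = w_b.copy(), f(W0)
    tabu = [w_b.tobytes()]                     # FIFO tabu list
    for _ in range(N_iter):
        best_w, best_val = None, f(w_b.reshape(N_r, N_RF))
        for i in range(w_b.size):              # neighborhood of Eq. (14)
            w = w_b.copy()
            w[i] ^= 1                          # flip one binary entry
            W = w.reshape(N_r, N_RF)
            if w.tobytes() in tabu or np.linalg.matrix_rank(W) < N_s:
                continue
            val = f(W)
            if val > best_val:
                best_w, best_val = w, val
        if best_w is None:                     # no improving move: converged
            break
        w_b = best_w
        if best_val > f_star:
            w_star, f_star = w_b.copy(), best_val
        tabu.append(w_b.tobytes())
        if len(tabu) > L_tabu:
            tabu.pop(0)                        # drop the earliest entry
    return w_star.reshape(N_r, N_RF), f_star
```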
Since the TS algorithm is a local search procedure, the initial search point
can greatly impact its performance and computational complexity. Thus, we
further develop a heuristic algorithm to improve the quality of the initial
point of the TS algorithm. By removing the rank constraint (12c) and relaxing
the binary variable constraints (12b), the problem is recast as
$\displaystyle\underset{{\bm{W}}}{\max}$ $\displaystyle\quad
f({\bm{W}})=\frac{1}{K}\sum_{k=1}^{K}\log_{2}\left|{\bm{I}}+\frac{1}{\sigma^{2}_{n}}{\bm{W}}^{\dagger}\tilde{{\bm{F}}}_{k}{\bm{W}}\right|$
(15) $\displaystyle\rm{s.t.}$ $\displaystyle\quad{\bm{W}}\in\mathcal{W},$
where $\mathcal{W}=\\{{\bm{W}}|{\bm{W}}(i,j)\in[0,1],\forall i,j\\}$ and
${\bm{W}}(i,j)$ denotes the element of ${\bm{W}}$ in the $i$th row and $j$th
column. Given an arbitrary initial point, problem (15) can be efficiently
solved by the projected gradient ascent (PGA) algorithm, which is summarized
in Algorithm 2, where $[\cdot]_{\mathcal{W}}$ denotes the projection onto
$\mathcal{W}$.
Algorithm 2: PGA algorithm for problem (15)
1. Initialize: $i=1$, ${\bm{W}}^{i}\in\mathcal{W}$, $c=1$.
2. Repeat:
3. $\alpha=\frac{c}{\sqrt{i+1}}$.
4. ${\bm{W}}^{i+1}\leftarrow[{\bm{W}}^{i}+\alpha\nabla_{{\bm{W}}^{i}}f({\bm{W}}^{i})]_{\mathcal{W}}$.
5. $i=i+1$.
6. Until convergence.
7. Output: ${\bm{W}}_{pga}$.
Let ${\bm{W}}_{pga}$ denote the output of the PGA algorithm. The initial
search point of the TS algorithm can be obtained by rounding ${\bm{W}}_{pga}$
to the nearest solution in the feasible space of problem (12), i.e.,
${\bm{W}}_{0}=r({\bm{W}}_{pga}),$ (16)
where $r(\cdot)$ is the element-wise rounding function. By using the output of
Algorithm 2 to initialize Algorithm 1, we obtain an improved TS algorithm,
termed the PGA-aided TS algorithm.
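A minimal sketch of Algorithm 2 together with the rounding step of Eq. (16) (ours; the convergence test is replaced by a fixed number of steps, and `grad_f` is an assumed handle to the gradient of the objective in (15)):

```python
import numpy as np

def pga_initial_point(W0, grad_f, n_steps=200, c=1.0):
    # Projected gradient ascent on the box relaxation W(i,j) in [0,1].
    W = np.clip(W0.astype(float), 0.0, 1.0)    # projection [.]_W
    for i in range(1, n_steps + 1):
        alpha = c / np.sqrt(i + 1)             # diminishing step size
        W = np.clip(W + alpha * grad_f(W), 0.0, 1.0)
    return np.rint(W).astype(int)              # the rounding r(.) of Eq. (16)
```

Note that rounding may occasionally yield a rank-deficient matrix; in that case a fallback (e.g., a random feasible point) would be needed before starting the TS.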
## IV Power Consumption Model
Based on the architectures illustrated in Figs. 1(a) and 1(b), the total power
consumptions of the fully digital beamforming (DBF), PS-HBF, and SW-HBF
schemes are given as
$\displaystyle P^{\rm DBF}_{\rm total}$
$\displaystyle=N_{r}(P_{LNA}+P_{RF}+2P_{ADC}),$ (17) $\displaystyle P^{\rm PS-
HBF}_{\rm total}$ $\displaystyle=N_{r}(P_{LNA}+P_{SP}+N_{RF}P_{PS})$
$\displaystyle\qquad\qquad+N_{RF}(P_{RF}+P_{C}+2P_{ADC}),$ (18) $\displaystyle
P^{\rm SW-HBF}_{\rm total}$ $\displaystyle=N_{r}(P_{LNA}+P_{SP}+N_{RF}P_{SW})$
$\displaystyle\qquad\qquad+N_{RF}(P_{RF}+P_{C}+2P_{ADC}),$ (19)
respectively, where $P_{RF}$ represents the power consumption of an RF chain,
which can be given as
$P_{RF}=P_{M}+P_{LO}+P_{LPF}+P_{BBamp},$ (20)
where $P_{M}$, $P_{LO}$, $P_{LPF}$, and $P_{BBamp}$ are the power consumption
of the mixer, the local oscillator, the low pass filter, and the baseband
amplifier, respectively. As a result, the system EE is defined as
$EE=\frac{SE}{P_{\rm total}},$ (21)
where $EE$ and $SE$ represent the energy efficiency and spectral efficiency,
respectively.
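With the component values of Table I below, Eqs. (17)-(21) reduce to simple arithmetic; the following sketch (ours) evaluates the three power models:

```python
# Component powers in mW (Table I).
P_LNA, P_SP, P_C, P_PS, P_SW = 39, 19.5, 19.5, 30, 5
P_M, P_LO, P_LPF, P_BBamp, P_ADC = 19, 5, 14, 5, 240

P_RF = P_M + P_LO + P_LPF + P_BBamp                    # Eq. (20)

def p_dbf(N_r):                                        # Eq. (17)
    return N_r * (P_LNA + P_RF + 2 * P_ADC)

def p_hbf(N_r, N_RF, p_elem):                          # Eqs. (18)-(19)
    return (N_r * (P_LNA + P_SP + N_RF * p_elem)
            + N_RF * (P_RF + P_C + 2 * P_ADC))

N_r, N_RF = 8, 2
for name, p in [("DBF", p_dbf(N_r)),
                ("PS-HBF", p_hbf(N_r, N_RF, P_PS)),
                ("SW-HBF", p_hbf(N_r, N_RF, P_SW))]:
    print(f"{name:6s}: {p:7.1f} mW")                   # EE = SE / P_total
```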
## V Simulation Results
In this section, we present the numerical results to evaluate the performance
and computational complexity of the proposed SW-HBF schemes in the considered
system. In the simulations, we use the channel model given in Section II-B
with $L=10,d_{s}=\frac{\lambda_{c}}{2},f_{c}=60{\rm GHz},B=1{\rm GHz},K=64$.
The AoA/AoDs are uniformly distributed over $[0,2\pi)$, and the pulse shaping
filter is modeled as [13]
$\displaystyle
p(t)=\begin{cases}\frac{\pi}{4}\operatorname{sinc}\left(\frac{1}{2\beta}\right),&\text{if~{}}t=\pm\frac{T_{\mathrm{s}}}{2\beta}\\\
\operatorname{sinc}\left(\frac{t}{T_{\mathrm{s}}}\right)\frac{\cos\left(\frac{\pi\beta
t}{T_{\mathrm{s}}}\right)}{1-\left(\frac{2\beta
t}{T_{\mathrm{s}}}\right)^{2}},&\text{otherwise},\end{cases}$ (22)
with $T_{s}$ the sampling period and roll-off factor $\beta=1$. The path
delays are uniformly distributed in $[0,(D-1)T_{s}]$, where $D$ is the cyclic
prefix length, given by $D=K/4$ according to IEEE 802.11ad. The SNR is defined
as SNR$\,\triangleq\frac{P_{b}}{K\sigma_{n}^{2}}$. The assumed component power
consumptions are given in Table I [10, 21]. All reported results are
averaged over $10^{3}$ channel realizations.
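For reference, a direct transcription of the raised-cosine pulse of Eq. (22) (ours; `np.sinc` is the normalized sinc, consistent with the $t=\pm T_{s}/(2\beta)$ limit in (22)):

```python
import numpy as np

def rc_pulse(t, T_s, beta=1.0):
    # Raised-cosine pulse of Eq. (22); t is an array of time instants.
    t = np.asarray(t, dtype=float)
    out = np.empty_like(t)
    sing = np.isclose(np.abs(t), T_s / (2 * beta))     # removable singularity
    out[sing] = (np.pi / 4) * np.sinc(1 / (2 * beta))
    x = t[~sing] / T_s
    out[~sing] = np.sinc(x) * np.cos(np.pi * beta * x) / (1 - (2 * beta * x) ** 2)
    return out
```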
TABLE I: Power consumption of each device

Device | Notation | Value
---|---|---
Low Noise Amplifier (LNA) | $P_{LNA}$ | 39 mW
Splitter | $P_{SP}$ | 19.5 mW
Combiner | $P_{C}$ | 19.5 mW
Phase shifter | $P_{PS}$ | 30 mW
Switch | $P_{SW}$ | 5 mW
Mixer | $P_{M}$ | 19 mW
Local oscillator | $P_{LO}$ | 5 mW
Low pass filter | $P_{LPF}$ | 14 mW
Baseband amplifier | $P_{BBamp}$ | 5 mW
ADC | $P_{ADC}$ | 240 mW
### V-A Performance Evaluation
Figs. 3 and 4 show the average SE and EE of different algorithms versus the
SNR. For comparison, we include the PS-HBF algorithm for large-scale antenna
arrays (PS-HBF-LSAA) from [17] and the PS-HBF with closed-form solutions
(PS-HBF-CS) from [16]. For SW-HBF, we present the optimal solution obtained by
the exhaustive search (ES) method and a randomly generated solution. Moreover,
the performance of optimal digital beamforming (DBF) via the water-filling
algorithm is also shown. It can be
observed from Figs. 3 and 4 that the optimal SW-HBF can achieve the best
performance in terms of SE and EE. Furthermore, the TS and PGA-aided TS
algorithms can obtain near-optimal solutions and perform better than the PS-
HBF-LSAA and PS-HBF-CS algorithms. The random solution performs considerably
worse than the TS algorithms, which demonstrates their effectiveness. Finally,
we observe that the PGA-aided TS algorithm performs slightly better than the
vanilla TS algorithm. Based on the results shown in Figs. 3 and 4, we conclude
that the SW-HBF scheme provides better SE and EE than the PS-HBF schemes in
wideband multi-carrier systems.
Figs. 5(a) and 5(b) show the system SE versus the system bandwidth and the
number of subcarriers, respectively. We observe from Fig. 5(a) that as the
bandwidth increases, the system suffers from a more severe beam squint effect,
causing a significant loss in SE. With the proposed TS algorithms, the SW-HBF
is less affected by the beam squint effect and achieves higher SE than PS-HBF.
Moreover, Fig. 5(b) shows that with more subcarriers, sharing a common analog
beamformer across subcarriers induces a larger SE loss; at the same time, the
correlation between subcarrier channels grows, which makes the system SE less
sensitive to the analog beamformer. The net result is the slight decrease of
the SE with an increasing number of subcarriers shown in Fig. 5(b). In
summary, the TS-based SW-HBF schemes are less affected by the beam squint
effect and the number of subcarriers, and they achieve higher SE than the
PS-HBF-LSAA and PS-HBF-CS schemes. Moreover, the PGA-aided TS algorithm
performs slightly better than the TS algorithm without the PGA initialization.
Figure 3: SE of the considered HBF algorithms vs. SNR with $N_{t}=16,N_{r}=8,N_{s}=N_{RF}=2,K=64$.
Figure 4: EE of the considered HBF algorithms vs. SNR with $N_{t}=16,N_{r}=8,N_{s}=N_{RF}=2,K=64$.
### V-B Complexity Analysis
The complexity of the ES method and of the proposed TS algorithms stems from
the computation of the objective function (12a) and of the rank of the
potential solution ${\bm{W}}$. Since the TS procedure dominates the complexity
of the PGA-aided TS algorithm, the complexities of the two considered TS
algorithms are approximately the same. The complexities of the ES method and
the TS algorithm are $\mathcal{O}(KN_{r}^{2}N_{RF}2^{N_{r}N_{RF}})$ and
$\mathcal{O}(N_{iter}KN_{r}^{3}N_{RF}^{2})$, respectively. As shown in Figs.
6(a) and 6(b), the proposed TS algorithms have much lower computational
complexity than the optimal ES method.
(a) SE vs bandwidth.
(b) SE vs the number of subcarriers.
Figure 5: SE versus system bandwidth and number of subcarriers with
$N_{t}=64,N_{r}=64,N_{s}=N_{RF}=4,f_{c}=60{\rm GHz}$.
(a) Comparison of computational complexity.
(b) Computational complexity of TS algorithm.
Figure 6: Complexity analysis.
## VI Conclusion
In this paper, we study the performance of SW-HBF in a MIMO-OFDM system over a
frequency-selective wideband channel. A near-optimal solution for the analog
combiner that maximizes the system SE is obtained via the two proposed TS
algorithms. Furthermore, we present the power consumption models of the SW-HBF
and PS-HBF architectures. Numerical simulations compare the SE and EE achieved
by SW-HBF to those of the PS-HBF schemes. They demonstrate that the former
obtains better SE and EE and is less affected by the beam squint effect than
the latter. The results show that employing SW-HBF can reap more benefits than
using PS-HBF in a wideband multi-carrier system.
## References
* [1] Y. Niu, Y. Li, D. Jin, L. Su, and A. V. Vasilakos, “A survey of millimeter wave communications (mmWave) for 5G: opportunities and challenges,” _Wireless Networks, Springer_ , vol. 21, no. 8, pp. 2657–2676, 2015.
* [2] W. Jiang, B. Han, M. A. Habibi, and H. D. Schotten, “The road towards 6G: A comprehensive survey,” _IEEE Open Journal of the Communications Society_ , vol. 2, pp. 334–366, 2021.
* [3] F. Gao, B. Wang, C. Xing, J. An, and G. Y. Li, “Wideband beamforming for hybrid massive MIMO terahertz communications,” _IEEE J. Sel. Areas Commun._ , vol. 39, no. 6, pp. 1725–1740, 2021.
* [4] S. Payami, M. Ghoraishi, M. Dianati, and M. Sellathurai, “Hybrid beamforming with a reduced number of phase shifters for massive MIMO systems,” _IEEE Trans. Veh. Technol._ , vol. 67, no. 6, pp. 4843–4851, 2018.
* [5] H. Nosrati, E. Aboutanios, X. Wang, and D. Smith, “Switch-based hybrid beamforming for massive MIMO communications in mmWave bands,” _arXiv preprint arXiv:1908.10500_ , 2019.
* [6] M. Cai, K. Gao, D. Nie, B. Hochwald, J. N. Laneman, H. Huang, and K. Liu, “Effect of wideband beam squint on codebook design in phased-array wireless systems,” in _Proc. IEEE Global Telecommun. Conf._ IEEE, 2016, pp. 1–6.
* [7] L. Dai, J. Tan, and H. V. Poor, “Delay-phase precoding for wideband THz massive MIMO,” _arXiv preprint arXiv:2102.05211_ , 2021.
* [8] K. Spoof, V. Unnikrishnan, M. Zahra, K. Stadius, M. Kosunen, and J. Ryynänen, “True-time-delay beamforming receiver with RF re-sampling,” _IEEE Trans. Circuits Syst. I_ , vol. 67, no. 12, pp. 4457–4469, 2020.
* [9] R. Méndez-Rial, C. Rusu, A. Alkhateeb, N. González-Prelcic, and R. W. Heath, “Channel estimation and hybrid combining for mmWave: Phase shifters or switches?” in _2015 Information Theory and Applications Workshop (ITA)_. IEEE, 2015, pp. 90–97.
* [10] R. Méndez-Rial, C. Rusu, N. González-Prelcic, A. Alkhateeb, and R. W. Heath, “Hybrid MIMO architectures for millimeter wave communications: Phase shifters or switches?” _IEEE Access_ , vol. 4, pp. 247–267, 2016.
* [11] Y. Jiang, Y. Feng, and M. K. Varanasi, “Hybrid beamforming for massive MIMO: A unified solution for both phase shifter and switch networks,” in _Proc. Int. Conf. on Wireless Commun. and Sign. Proc._ IEEE, 2018, pp. 1–5.
* [12] H. Li, M. Li, Q. Liu, and A. L. Swindlehurst, “Dynamic hybrid beamforming with low-resolution PSs for wideband mmWave MIMO-OFDM systems,” _IEEE J. Sel. Areas Commun._ , vol. 38, no. 9, pp. 2168–2181, Jun 2020.
* [13] A. Alkhateeb and R. W. Heath, “Frequency selective hybrid precoding for limited feedback millimeter wave systems,” _IEEE Trans. Commun._ , vol. 64, no. 5, pp. 1801–1818, 2016.
* [14] S. Park, A. Alkhateeb, and R. W. Heath, “Dynamic subarrays for hybrid precoding in wideband mmWave MIMO systems,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 5, pp. 2907–2920, 2017.
* [15] E. Vlachos, G. C. Alexandropoulos, and J. Thompson, “Wideband MIMO channel estimation for hybrid beamforming millimeter wave systems via random spatial sampling,” _IEEE J. Sel. Topics Signal Process._ , vol. 13, no. 5, pp. 1136–1150, Aug 2019.
* [16] M. Ma, N. T. Nguyen, and M. Juntti, “Closed-form hybrid beamforming solution for spectral efficiency upper bound maximization in mmWave MIMO-OFDM systems,” _arXiv preprint arXiv:2108.06691_ , 2021.
* [17] F. Sohrabi and W. Yu, “Hybrid analog and digital beamforming for mmWave OFDM large-scale antenna arrays,” _IEEE J. Sel. Areas Commun._ , vol. 35, no. 7, pp. 1432–1443, Apr 2017.
* [18] X. Zhang, A. F. Molisch, and S.-Y. Kung, “Variable-phase-shift-based RF-baseband codesign for MIMO antenna selection,” _IEEE Trans. Signal Process._ , vol. 53, no. 11, pp. 4091–4103, Oct 2005.
* [19] Q. Shi and M. Hong, “Spectral efficiency optimization for millimeter wave multiuser mimo systems,” _IEEE J. Sel. Topics Signal Process._ , vol. 12, no. 3, pp. 455–468, 2018.
* [20] N. T. Nguyen, K. Lee, and H. Dai, “QR-decomposition-aided tabu search detection for large MIMO systems,” _IEEE Trans. Veh. Technol._ , vol. 68, no. 5, pp. 4857–4870, 2019.
* [21] W. B. Abbas, F. Gomez-Cuba, and M. Zorzi, “Millimeter wave receiver efficiency: A comprehensive comparison of beamforming schemes with low resolution ADCs,” _IEEE Trans. Wireless Commun._ , vol. 16, no. 12, pp. 8131–8146, 2017.
# Phenomenology of the inflation-inspired NMSSM at the electroweak scale
arXiv:1809.07371, DESY-17-075, KA-TP-26-2018, TTP18–035
Wolfgang Gregor Hollika,b,c Stefan Lieblerd Gudrid Moortgat-Picka,e
Sebastian Paßehrf Georg Weigleina
aDESY, Notkestraße 85, D-22607 Hamburg, Germany
bInstitute for Nuclear Physics (IKP),
Karlsruhe Institute of Technology, D-76021 Karlsruhe, Germany
cInstitute for Theoretical Particle Physics (TTP),
Karlsruhe Institute of Technology, D-76128 Karlsruhe, Germany
dInstitute for Theoretical Physics (ITP),
Karlsruhe Institute of Technology, D-76131 Karlsruhe, Germany
eII. Institut für Theoretische Physik, Universität Hamburg,
Luruper Chaussee 149, D-22761 Hamburg, Germany
fSorbonne Université, CNRS,
Laboratoire de Physique Théorique et Hautes Énergies (LPTHE),
4 Place Jussieu, F–75252 Paris CEDEX 05, France
###### Abstract
The concept of Higgs inflation can be elegantly incorporated in the Next-to-
Minimal Supersymmetric Standard Model (NMSSM). A linear combination of the two
Higgs-doublet fields plays the role of the inflaton which is non-minimally
coupled to gravity. This non-minimal coupling appears in the low-energy
effective superpotential and changes the phenomenology at the electroweak
scale. While the field content of the inflation-inspired model is the same as
in the NMSSM, there is another contribution to the $\mu$ term in addition to
the vacuum expectation value of the singlet. We explore this extended
parameter space and point out scenarios with phenomenological differences
compared to the pure NMSSM. A special focus is set on the electroweak vacuum
stability and the parameter dependence of the Higgs and neutralino sectors. We
highlight regions which yield a SM-like $125$ GeV Higgs boson compatible with
the experimental observations and are in accordance with the limits from
searches for additional Higgs bosons. Finally, we study the impact of the non-
minimal coupling to gravity on the Higgs mixing and in turn on the decays of
the Higgs bosons in this model.
###### Contents
1. 1 Introduction
2. 2 Theoretical framework
1. 2.1 Model description
2. 2.2 Higgs potential
3. 2.3 Vacuum structure and vacuum stability bounds
4. 2.4 Higher-order corrections to Higgs-boson masses and mixing
5. 2.5 Trilinear Higgs-boson self-couplings
6. 2.6 Neutralino and chargino masses
7. 2.7 Sfermion masses
3. 3 Phenomenological analysis
1. 3.1 Viable parameter space compatible with theoretical and experimental bounds
2. 3.2 Higgs-boson and neutralino mass spectra
3. 3.3 Parameter scan
4. 3.4 Higgs-boson and electroweakino production
5. 3.5 Higgs-boson mixing and decays
4. 4 Conclusions
5. A Beta functions
6. References
## 1 Introduction
In the history of our universe, there was a period in which the size of the
universe increased exponentially. This short period is known as the
inflationary epoch, and many models have been developed in order to explain
the inflation of the early universe. Unfortunately, most of these models of
inflation cannot be tested directly in the laboratory; observation of the
universe is the only discriminator to disfavor or support such models.
Therefore, testing the phenomenology of a particle physics model of inflation
at the electroweak scale with colliders is of interest from the points of view
of both particle physics and cosmology.
One possibility to describe inflation is the extension of a particle physics
model by additional scalar fields which drive inflation but are removed from
the theory afterwards. A more economical approach is the idea of using the
Higgs field of the Standard Model (SM) as inflaton [1, 2, 3]. The simplest
version, however, is under tension as it suffers from a fine-tuning and
becomes unnatural [4]. A less minimal version of Higgs-portal inflation with
an additional complex scalar field can in addition solve further problems of
the SM, see Refs. [5, 6]. Also the concept of critical Higgs inflation can
raise the range of perturbativity to the Planck scale and solve further
problems of the SM, see Refs. [7, 8, 9]. Other solutions are offered by scale-
free extensions of the SM. A natural way of such an implementation can be
realized in canonical superconformal supergravity (CSS) models as proposed by
Refs. [10, 11] based on earlier work by Ref. [12].
The Higgs inflation in the supergravity framework is triggered by a non-
minimal coupling to Einstein gravity. For the supergravity Lagrangian this can
be achieved with an additional term $X(\hat{\Phi})\,R$ of chiral superfields
$\hat{\Phi}$ and the curvature multiplet $R$ (the supersymmetrized field
version of the Ricci scalar which contains the scalar curvature in the
Grassmannian coordinate $\theta^{2}$), following the notation of Ref. [12].
The Lagrangian then reads
$\mathcal{L}_{X}=-6\int\operatorname{d^{2}\theta}\mathcal{E}\left[R+X(\hat{\Phi})\,R-\frac{1}{4}\left({\bar{\mathcal{D}}}^{2}-8\,R\right)\hat{\Phi}^{\dagger}\,\hat{\Phi}+\mathcal{W}(\hat{\Phi})\right]+\text{h.\,c.}+\ldots,$
(1)
where $X(\hat{\Phi})$ as well as the superpotential $\mathcal{W}(\hat{\Phi})$
are holomorphic functions of the (left) chiral superfields $\hat{\Phi}$,
$\mathcal{E}$ is the vierbein multiplet and $\bar{\mathcal{D}}$ a covariant
derivative. The ellipses encode further gauge terms. The only possible choice
of such a non-minimal coupling suitable for inflation is given by [12]
$X=\chi\,\hat{H}_{u}\cdot\hat{H}_{d},$ (2)
where $\chi$ is a dimensionless coupling and $\hat{H}_{d,u}$ contain the two
$SU(2)_{\text{L}}$ Higgs doublets of the Next-to-Minimal Supersymmetric
Standard Model (NMSSM).111The field content of the MSSM alone (without the
Higgs singlet) is not sufficient to describe inflation successfully as pointed
out in Ref. [12]. The extension by an additional scalar singlet like in the
NMSSM has been shown to be a viable model for inflation, although this version
suffers from a tachyonic instability [13]. In order to avoid this instability,
a stabilizer term has been introduced in Refs. [13, 11] that is suppressed at
low energies. The stabilizer term can be avoided in a model with minimal
supergravity couplings where the Kähler potential has a shift symmetry in the
doublet fields [14]; however, cosmological phenomenology and observations have
meanwhile ruled out this possibility [15].
The simplest implementation of a superconformal model which can accommodate
the non-minimal coupling term $\chi\,\hat{H}_{u}\cdot\hat{H}_{d}$ is the well-
known $\mathbb{Z}_{3}$-invariant NMSSM augmented by an additional $\mu$ term,
which we call $\mu$-extended NMSSM ($\mu$NMSSM) in the following. We neglect
all additional $\mathbb{Z}_{3}$-violating parameters in the superpotential at
the tree level (see the discussion below). These terms are not relevant for
the physics of inflation: the function $X$ could potentially also contain an
$\hat{S}^{2}$ term, since it has the same structure as
$\hat{H}_{u}\cdot\hat{H}_{d}$ and is allowed by gauge symmetries. However,
inflation driven by this term does not lead to the desired properties as
pointed out in Ref. [12]. The other term, which is not present in the NMSSM,
is a singlet tadpole proportional to $\hat{S}$ that is not quadratic or
bilinear in the chiral superfields and thus would need a dimensionful coupling
to supergravity instead of the dimensionless $\chi$.
In this work, we are going to study the low-energy electroweak phenomenology
of the model outlined in Refs. [10, 11] and Ref. [13], where previously the
focus was put on the description of inflation and the superconformal embedding
of the NMSSM into supergravity. We have generated a model file for FeynArts
[16, 17], where SARAH [18, 19, 20, 21] has been used to generate the tree-
level couplings of the $\mu$NMSSM, and we have implemented the one-loop
counterterms. The loop calculations have been carried out with the help of
FormCalc [22] and LoopTools [22]. In order to predict the Higgs-boson masses,
we have performed a one-loop renormalization of the Higgs sector of the
$\mu$NMSSM which is compatible with the renormalization schemes that have been
employed in Refs. [23, 24] for the cases of the MSSM and NMSSM, respectively.
This allowed us to add the leading MSSM-like two-loop corrections which are
implemented in FeynHiggs [25, 26, 27, 28, 29, 30, 31, 32] in order to achieve
a state-of-the-art prediction for the Higgs masses and mixing. The parameter
space is checked for compatibility with the experimental searches for
additional Higgs bosons using HiggsBounds version 5.1.0beta [33, 34, 35, 36,
37] and with the experimental observation of the SM-like Higgs boson via
HiggsSignals version 2.1.0beta [38]. In addition, we check the electroweak
vacuum for its stability under quantum tunneling to a non-standard global
minimum and for tachyonic Higgs states in the tree-level spectrum. Finally, we
investigate some typical scenarios and study their collider phenomenology at
the Large Hadron Collider (LHC) and a future electron-positron collider. For
this purpose in some analyses we use SusHi [39, 40] for the calculation of
neutral Higgs-boson production cross-sections. We emphasize the possibility of
light $\mathcal{CP}$-even singlets in the spectrum with masses below
$100\,\textrm{GeV}$ that could be of interest in view of slight excesses
observed in the existing data of the Large Electron–Positron collider (LEP)
[41] and the Compact Muon Solenoid (CMS) [42] which are compatible with bounds
from A Toroidal LHC ApparatuS (ATLAS) [43]. For one scenario that differs
substantially from the usual NMSSM, we exemplarily discuss the total decay
widths and branching ratios of the three lightest Higgs bosons and their
dependence on the additional parameters of the $\mu$NMSSM.
The paper is organized as follows: we start with a description of our model
and the theoretical framework in Section 2 by discussing analytically the
phenomenological differences of the Higgs potential in the $\mu$NMSSM compared
to the $\mathbb{Z}_{3}$-invariant NMSSM. We study vacuum stability and the
incorporation of higher-order corrections for the Higgs boson masses. Then, we
derive the trilinear self-couplings of the Higgs bosons and comment on the
remaining sectors of the model which are affected by the additional $\mu$
term. In Section 3, we focus on the parameter space of interest and
investigate the Higgs-boson masses as well as the stability of the electroweak
vacuum numerically and also show the neutralino spectrum. Furthermore, we
study the effect of the additional $\mu$ parameter on Higgs-boson production
and decays. Lastly, we conclude in Section 4. In the Appendix we present the
beta functions for the superpotential and some soft-breaking parameters of the
general NMSSM (GNMSSM) [44, 45, 46] including all $\mathbb{Z}_{3}$-breaking
terms.
## 2 Theoretical framework
In this section we introduce the model under consideration, the $\mu$NMSSM,
which differs by an additional $\mu$ term from the scale-invariant NMSSM. We
derive the Higgs potential and investigate vacuum stability and the prediction
for the Higgs-boson masses of the model. Furthermore, we discuss the trilinear
self-couplings of the Higgs bosons and comment on the electroweakinos—i. e.
charginos and neutralinos—as well as on the sfermion sector. We restrict our
analytical investigations in this section mostly to tree-level relations.
Higher-order contributions, e. g. for the Higgs-boson masses, are explained
generically and are evaluated numerically in the subsequent phenomenological
section.
### 2.1 Model description
For the Higgs sector of the NMSSM the superpotential is of the form222Compared
to Refs. [10, 11], we flip the sign of $\lambda$ to follow the conventions of
the NMSSM literature—see e. g. Ref. [44]—and thus have $\lambda>0$. As shown
in Ref. [10], the product of $\kappa$ and $\lambda$ needs to be positive for
that convention.
$\displaystyle\mathcal{W}_{\text{Higgs}}$
$\displaystyle=\lambda\,\hat{S}\,\hat{H}_{u}\cdot\hat{H}_{d}+\tfrac{1}{3}\,\kappa\,\hat{S}^{3}\,,$
(3)
where $\hat{H}_{u}$ and $\hat{H}_{d}$ are the well-known $SU(2)_{\text{L}}$
doublets of the MSSM, and $\hat{S}$ is the additional $SU(2)_{\text{L}}$
singlet. The $SU(2)_{\text{L}}$-invariant product
$\hat{H}_{u}\cdot\hat{H}_{d}$ is defined through
$\hat{H}_{u}\cdot\hat{H}_{d}=\sum_{a,b}\epsilon_{ab}\,\hat{H}_{d}^{a}\,\hat{H}_{u}^{b}$
with $\epsilon_{21}=1$, $\epsilon_{12}=-1$ and $\epsilon_{aa}=0$ with
$a,b\in\\{1,2\\}$. As outlined in Ref. [11], a Kähler transformation starting
from Jordan-frame supergravity introduces a correction in the superpotential,
which is of the form
$\displaystyle\mathcal{W}_{\text{Higgs}}$
$\displaystyle\rightarrow\mathcal{W}_{\text{Higgs}}+\tfrac{3}{2}\,m_{3/2}\,\chi\,\hat{H}_{u}\cdot\hat{H}_{d}\,.$
(4)
The parameter $m_{3/2}$ denotes the gravitino mass, and $\chi$ is the coupling
of Eq. (2). The scalar Higgs fields are denoted by $H_{u}$, $H_{d}$ and $S$ in
the following. During electroweak symmetry breaking, they receive the vacuum
expectation values (vevs) $v_{u}$, $v_{d}$ and $v_{s}$, respectively.
Expanding around the vevs, we decompose the fields as follows:
$\displaystyle H_{u}$ $\displaystyle\equiv\begin{pmatrix}h_{u}^{+}\\\
h_{u}\end{pmatrix}=\begin{pmatrix}\eta_{u}^{+}\\\
v_{u}+\tfrac{1}{\sqrt{2}}\left(\sigma_{u}+i\,\phi_{u}\right)\end{pmatrix},\qquad
H_{d}\equiv\begin{pmatrix}h_{d}\\\
h_{d}^{-}\end{pmatrix}=\begin{pmatrix}v_{d}+\tfrac{1}{\sqrt{2}}\left(\sigma_{d}+i\,\phi_{d}\right)\\\
\eta_{d}^{-}\end{pmatrix},$ (5a) $\displaystyle S$ $\displaystyle\equiv
v_{s}+\tfrac{1}{\sqrt{2}}\left(\sigma_{s}+i\,\phi_{s}\right)\,.$ (5b)
The additional bilinear contribution to the superpotential in Eq. (4)
generates a term which is analogous to the $\mu$ term of the MSSM, but with
$\displaystyle\mu$ $\displaystyle=\tfrac{3}{2}\,m_{3/2}\,\chi\,.$ (6)
When the singlet $S$ acquires its vev, an effective
$\mu_{\text{eff}}=\lambda\,v_{s}$ is dynamically generated. Often, the sum
$\left(\mu+\mu_{\text{eff}}\right)$ is the phenomenologically more relevant
parameter of the model. It takes the form
$\displaystyle\mu+\mu_{\text{eff}}$
$\displaystyle=\tfrac{3}{2}\,m_{3/2}\,\chi\,+\lambda\,v_{s}\,$ (7)
and corresponds to the MSSM-like higgsino mass term. In the following, we
consider both quantities $\mu$ and $\mu_{\text{eff}}$ as independent input
parameters, where $\mu$ is linearly dependent on the gravitino mass $m_{3/2}$.
In order to be a viable dark-matter candidate, the gravitino mass can range
from a few eV to multiple TeV, see e. g. Ref. [47]. The value of $\chi$ is a
priori not fixed; for cosmological reasons we adopt
$\displaystyle\chi\simeq 10^{5}\;\lambda$ (8)
according to Refs. [13, 11]. The additional contribution to the superpotential
in the $\mu$NMSSM is thus mainly steered by the gravitino mass, whereas
$v_{s}$ can be traded for $\mu_{\text{eff}}$. If we require a $\mu$ parameter
above the electroweak scale, $\mu\gtrsim 1$ TeV, and in addition a sizable
coupling $\lambda\gtrsim 0.1$, the typical gravitino mass turns out to be much
below the electroweak scale at $m_{3/2}\gtrsim 10$ MeV. However, if we allow
for very small values of $\lambda\ll 10^{-2}$ and very large values of $\mu\gg
1\,\textrm{TeV}$, the gravitino mass could as well be above the TeV scale. In
the latter case, the phenomenology of the $\mu$NMSSM is not necessarily
similar to the MSSM: the singlets only decouple for $\lambda\to 0$ with
$\kappa\propto\lambda$ and therefore $v_{s}\to\infty$. If the constraint
$\kappa\propto\lambda$ is dropped, interesting effects can occur; e. g. we
will discuss a scenario with small $\lambda$ and small $\mu_{\text{eff}}$ in
our numerical studies. In contrast to the NMSSM, the higgsino mass can be
generated by $\mu$ alone and thus even a vanishing $v_{s}$ is not in conflict
with experimental bounds.
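These estimates follow directly from Eq. (6) with $\chi\simeq 10^{5}\,\lambda$ from Eq. (8), i.e. $m_{3/2}=2\mu/(3\chi)$; a quick numeric check (ours, with illustrative inputs) reproduces both regimes discussed above:

```python
# m_{3/2} = 2 mu / (3 chi) with chi = 1e5 * lambda (Eqs. (6) and (8)).
for mu_GeV, lam in [(1e3, 0.1),     # mu ~ 1 TeV, sizable lambda
                    (1e5, 1e-4)]:   # mu >> 1 TeV, tiny lambda
    chi = 1e5 * lam
    m32 = 2 * mu_GeV / (3 * chi)
    print(f"mu = {mu_GeV:.0e} GeV, lambda = {lam:.0e} -> m_3/2 = {m32:.2e} GeV")
# First case: m_3/2 of order tens of MeV; second case: several TeV.
```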
In order to avoid the cosmological gravitino problem [48], where the light
gravitino dark matter overcloses the universe [49, 50], one has to control the
reheating temperature in order to keep the production rate of the light
gravitinos low [51]. This potential problem may affect the model under
consideration for gravitino masses in the range from MeV to GeV; it disappears
for much heavier gravitinos ($\mathord{\gtrsim}\,10\,\textrm{TeV}$). In the
latter case the inflationary $\mu$ term would dominate over the NMSSM-like
$\mu_{\text{eff}}$ and drive the higgsino masses to very high values (unless
$\mu_{\text{eff}}$ is tuned such that the sum $(\mu+\mu_{\text{eff}})$ remains
small). For gravitino masses $m_{3/2}>1\,\textrm{GeV}$ it affects Big Bang
Nucleosynthesis via photo-dissociation of light elements, see Ref. [48]. As
discussed in Ref. [11], in the $\mu$NMSSM there is no strict constraint on the
reheating temperature $T_{R}$. We note that a reheating temperature below
$T_{R}\lesssim 10^{8}$–$10^{9}\,\textrm{GeV}$, as advocated in Ref. [52],
avoids the gravitino problem. The rough estimate of $m_{3/2}\sim
10\,\textrm{MeV}$ even needs $T_{R}\lesssim 10^{5}\,\textrm{GeV}$ in order to
not overclose the universe with thermally produced gravitinos after inflation
[53, 54, 55, 56]. Interestingly, such low reheating temperatures preserve
high-scale global minima after inflation, see Ref. [57], and disfavor the
preparation of the universe in a meta-stable state after the end of inflation
[58]. In any case, the reheating temperature at the end of inflation is very
model dependent and rather concerns the inflationary physics. A study to
estimate the reheating temperature $T_{R}$ is given in Ref. [59]. Therein, a
relation is drawn between the decay width of the inflaton and $T_{R}$.
Interestingly, if we naïvely assume that this width at the end of inflation is
equal to the SM-like Higgs width $\Gamma_{h}\approx 4\times
10^{-3}\,\textrm{GeV}$, we can estimate a rather low reheating temperature
$T_{R}\sim\sqrt{\Gamma_{h}M_{\text{Pl}}}\approx 10^{7}\,\textrm{GeV}$ with the
Planck mass $M_{\text{Pl}}\approx 2.4\times 10^{18}\,\textrm{GeV}$. For our
studies below we assume that a reheating temperature as low as $T_{R}\lesssim
10^{9}\,\textrm{GeV}$ can be achieved even with large couplings.
Since the bilinear $\mu$ term breaks the $\mathbb{Z}_{3}$ symmetry, additional
parameters are allowed compared to the NMSSM. In the general NMSSM
(GNMSSM)—including the bilinear singlet mass parameter $\nu$ and the singlet
tadpole coefficient $\xi$—the Higgs sector of the superpotential is given by
$\displaystyle\mathcal{W}_{\text{Higgs}}$
$\displaystyle=\lambda\,\hat{S}\,\hat{H}_{u}\cdot\hat{H}_{d}+\tfrac{1}{3}\,\kappa\,\hat{S}^{3}+\mu\,\hat{H}_{u}\cdot\hat{H}_{d}+\tfrac{1}{2}\,\nu\,\hat{S}^{2}+\xi\,\hat{S}\,.$
(9)
However, we assume that the non-minimal coupling of the Higgs doublets to
supergravity is the only source of superconformal and thus $\mathbb{Z}_{3}$
symmetry breaking—as outlined in Section 5 of Ref. [11]. In this case, all
other superpotential parameters that are forbidden by $\mathbb{Z}_{3}$
symmetry remain exactly zero at all scales: the beta functions for the
parameters of the superpotential are proportional to the respective parameter
itself and thus they cannot be generated radiatively.
Because the $\mathbb{Z}_{3}$ symmetry is broken (which avoids the typical
domain-wall problem of the NMSSM [60]), another symmetry at the high scale is
required in order to solve the tadpole problem [61, 62, 63, 64, 65, 66]:
without such a symmetry, Planck-scale corrections could possibly induce large
contributions to the tadpole term [67]. The superconformal embedding of the
$\mu$NMSSM, where the $\mu$ term is generated from the Kähler potential,
serves as this symmetry. As pointed out in Ref. [67], other possibilities
consist of discrete or continuous non-gauge symmetries, so-called $R$
symmetries. Imposing discrete $\mathbb{Z}_{4}$ or $\mathbb{Z}_{8}$ $R$
symmetries as proposed in Refs. [68, 69, 45] provide a viable solution, since
dimensionful linear and bilinear terms are forbidden as long as the symmetry
is not broken.333There is an interplay between discrete $R$ symmetries, SUSY
breaking and hence the gravitino mass in supergravity, which favors the
$\mathbb{Z}_{4}$ $R$ symmetry [70]. Note, however, that our model at hand is
fundamentally different from Ref. [70] as the inflaton is related to the Higgs
fields of the NMSSM.
Furthermore, each parameter in the superpotential induces a corresponding
soft-breaking term; additional mass terms are allowed:
$\displaystyle\begin{split}-\mathcal{L}_{\text{soft}}&=\left[A_{\lambda}\,\lambda\,S\,H_{u}\cdot
H_{d}+\tfrac{1}{3}\,A_{\kappa}\,\kappa\,S^{3}+B_{\mu}\,\mu\,H_{u}\cdot
H_{d}+\tfrac{1}{2}\,B_{\nu}\,\nu\,S^{2}+C_{\xi}\,\xi\,S+\text{h.\,c.}\right]\\\
&\quad+m_{H_{d}}^{2}\,\lvert H_{d}\rvert^{2}+m_{H_{u}}^{2}\,\lvert
H_{u}\rvert^{2}+m_{s}^{2}\,\lvert S\rvert^{2}\,.\end{split}$ (10)
It should be noted that the beta functions for soft-breaking parameters are
not only proportional to themselves, but also receive contributions from the
other soft-breaking parameters. Thus, in contrast to the terms in the
superpotential, finite contributions may emerge even if a soft-breaking
parameter is set to zero at the tree level. The beta functions for the
parameters of the superpotential in Eq. (9) and its corresponding soft-
breaking parameters in Eq. (10) can be found in Refs. [71, 72, 44]; however,
since we employ different conventions we list them in Appendix A.
Contrary to studies in the GNMSSM (see Refs. [44, 45, 46, 73]), where the
MSSM-like $\mu$ term can be easily shifted away and absorbed in a redefinition
of the other parameters—especially the tadpole contribution—we cannot do so in
the inflation-inspired $\mu$NMSSM. First of all, the $\mu$ term is introduced
via the $R$ symmetry-breaking non-minimal coupling to supergravity only. The
other parameters in the singlet sector are not supposed to be generated by
this breaking. Secondly, by redefining the parameters, we would introduce a
tadpole term and simply shift the effect there. Note that the authors of Ref.
[45] perform this shift in order to eliminate the linear (i. e. tadpole) term
in the superpotential and keep $\mu$, while others (e. g. Ref. [74]) shift the
$\mu$ term to zero and keep the tadpole and bilinear terms for the singlet in
the superpotential. As discussed above, in the $\mu$NMSSM considered in this
paper due to the superconformal symmetry breaking at the Planck scale solely
the $\mathbb{Z}_{3}$-breaking $\mu$ term is present.
### 2.2 Higgs potential
With the superpotential of Eq. (9) and the soft-breaking Lagrangian of Eq.
(10), we derive the following Higgs potential, where we stick to real
parameters:
$\displaystyle\begin{split}V&=\left[m_{H_{d}}^{2}+\left(\mu+\lambda\,S\right)^{2}\right]\lvert
H_{d}\rvert^{2}+\left[m_{H_{u}}^{2}+\left(\mu+\lambda\,S\right)^{2}\right]\lvert
H_{u}\rvert^{2}+\left(m_{S}^{2}+B_{\nu}\,\nu\right)S^{2}\\\
&\quad+2\,C_{\xi}\,\xi\,S+\tfrac{2}{3}\,\kappa\,A_{\kappa}\,S^{3}+\left[\xi+\nu\,S+\kappa\,S^{2}+\lambda\,H_{u}\cdot
H_{d}\right]^{2}+2\left(B_{\mu}\,\mu+\lambda\,A_{\lambda}\,S\right)H_{u}\cdot
H_{d}\\\ &\quad+\tfrac{1}{8}\left(g_{1}^{2}+g_{2}^{2}\right)\left(\lvert
H_{d}\rvert^{2}-\lvert
H_{u}\rvert^{2}\right)^{2}+\tfrac{1}{2}\,g_{2}^{2}\,\lvert
H_{d}^{\dagger}\,H_{u}\rvert^{2}\,.\end{split}$ (11)
This potential can be expanded in the components of the Higgs fields in Eq.
(5). Defining the vectors in field space
$\mathcal{S}^{\text{T}}=\left(\sigma_{d},\sigma_{u},\sigma_{s}\right)$,
$\mathcal{P}^{\text{T}}=\left(\phi_{d},\phi_{u},\phi_{s}\right)$ and
$\mathcal{C}^{\text{T}}=\left(\phi_{d}^{-},\phi_{u}^{-}\right)=\left(\eta_{d}^{+},\eta_{u}^{+}\right)^{*}$,
it reads
$\displaystyle\begin{split}V&=\text{const}-\mathcal{T}_{S}^{\text{T}}\,\mathcal{S}-\mathcal{T}_{P}^{\text{T}}\,\mathcal{P}+\tfrac{1}{2}\,\mathcal{S}^{\text{T}}\,{\mathcal{M}}_{S}^{2}\,\mathcal{S}+\tfrac{1}{2}\,\mathcal{P}^{\text{T}}\,{\mathcal{M}}_{P}^{2}\,\mathcal{P}+\mathcal{C}^{\text{T}}\,{\mathcal{M}}_{C}^{2}\,\mathcal{C}^{*}\\\
&\quad+\sum\limits_{ijk\,=\,1}^{6}\tfrac{1}{\sqrt{2}}\,\lambda_{ijk}^{\prime}\left(\mathcal{S},\mathcal{P}\right)_{i}\left(\mathcal{S},\mathcal{P}\right)_{j}\left(\mathcal{S},\mathcal{P}\right)_{k}+\sum\limits_{i\,=\,1}^{6}\sum_{jk\,=\,1}^{2}\tfrac{1}{\sqrt{2}}\,\tilde{\lambda}_{ijk}^{\prime}\left(\mathcal{S},\mathcal{P}\right)_{i}\left(\mathcal{C}\right)_{j}\left(\mathcal{C}^{*}\right)_{k}+\cdots\,,\end{split}$
(12)
where the $\mathcal{CP}$-even and $\mathcal{CP}$-odd tadpole coefficients
$\mathcal{T}_{S}$ and $\mathcal{T}_{P}$, the $\mathcal{CP}$-even,
$\mathcal{CP}$-odd and charged squared mass matrices $\mathcal{M}_{S}^{2}$,
$\mathcal{M}_{P}^{2}$ and $\mathcal{M}_{C}^{2}$ are given below, and the
trilinear couplings $\lambda_{ijk}^{\prime}$ and
$\tilde{\lambda}_{ijk}^{\prime}$ are specified in Section 2.5, though in a
basis where the Goldstone mode corresponds to a mass eigenstate and does not
mix with the other states at lowest order. The ellipses denote quadrilinear
terms which are immaterial for the following.
We substitute the electroweak vevs $v_{u}$ and $v_{d}$ by their ratio
$\tan\beta=v_{u}/v_{d}$ and the sum of their squares $v^{2}\equiv
v_{u}^{2}+v_{d}^{2}=(174\,\textrm{GeV})^{2}$. The symbols $t_{\beta}$,
$c_{\beta}$ and $s_{\beta}$ denote $\tan\beta$, $\cos\beta$ and $\sin\beta$,
respectively. Furthermore, $g_{1}$ and $g_{2}$ are substituted by the $W$ and
$Z$ gauge-boson masses,
$\displaystyle m_{W}^{2}$ $\displaystyle=\tfrac{1}{2}\,g_{2}^{2}\,v^{2}\,,$
$\displaystyle m_{Z}^{2}$
$\displaystyle=\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}\right)v^{2}\,.$ (13)
Using the abbreviations
$\displaystyle a_{1}$
$\displaystyle=B_{\mu}\,\mu+\xi\,\lambda+\mu_{\text{eff}}\left(\nu+\frac{\kappa}{\lambda}\,\mu_{\text{eff}}+A_{\lambda}\right),$
(14a) $\displaystyle a_{2}$
$\displaystyle=2\,v\,\lambda\left(\mu+\mu_{\text{eff}}\right),$ (14b)
$\displaystyle a_{3}$
$\displaystyle=v\,\lambda\left(\nu+2\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}+A_{\lambda}\right),$
(14c) $\displaystyle a_{4}$
$\displaystyle=\frac{1}{\mu_{\text{eff}}}\left[v^{2}\,\lambda^{2}\,c_{\beta}\,s_{\beta}\left(\nu+\frac{\kappa}{\lambda}\,\mu_{\text{eff}}+A_{\lambda}\right)-v^{2}\,\lambda^{2}\,\mu-\xi\,\lambda\left(\nu+C_{\xi}\right)\right],$
(14d) $\displaystyle a_{5}$
$\displaystyle=4\left(\frac{\kappa}{\lambda}\right)^{2}\mu_{\text{eff}}^{2}+\frac{\kappa}{\lambda}\left[\mu_{\text{eff}}\left(A_{\kappa}+3\,\nu\right)-v^{2}\,\lambda^{2}\,c_{\beta}\,s_{\beta}\right],$
(14e) $\displaystyle a_{6}$
$\displaystyle=v\,\lambda\left(\nu+2\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}-A_{\lambda}\right),$
(14f) $\displaystyle a_{7}$
$\displaystyle=-6\left(\frac{\kappa}{\lambda}\right)^{2}\mu_{\text{eff}}^{2}+2\,\frac{\kappa}{\lambda}\left(\xi\,\lambda-4\,\nu^{2}\right)+B_{\nu}\,\nu\,,$
(14g)
we can write the explicit expressions for the tadpole coefficients
$\mathcal{T}_{S,P}$ as
$\displaystyle\mathcal{T}_{S}$
$\displaystyle=\begin{pmatrix}\sqrt{2}\,v\left\\{s_{\beta}\,a_{1}-c_{\beta}\left[m_{H_{d}}^{2}+\left(\mu+\mu_{\text{eff}}\right)^{2}+v^{2}\,\lambda^{2}\,s_{\beta}^{2}+\tfrac{1}{2}\,m_{Z}^{2}\,c_{2\beta}\right]\right\\}\\\\[6.45831pt]
\sqrt{2}\,v\left\\{c_{\beta}\,a_{1}-s_{\beta}\left[m_{H_{u}}^{2}+\left(\mu+\mu_{\text{eff}}\right)^{2}+v^{2}\,\lambda^{2}\,c_{\beta}^{2}-\tfrac{1}{2}\,m_{Z}^{2}\,c_{2\beta}\right]\right\\}\\\\[6.45831pt]
\sqrt{2}\,\frac{\mu_{\text{eff}}}{\lambda}\left[a_{4}-m_{S}^{2}-a_{5}-a_{7}-v^{2}\,\lambda^{2}-\left(\nu+2\,\mu_{\text{eff}}\,\frac{\kappa}{\lambda}\right)^{2}\right]\end{pmatrix},$
$\displaystyle\mathcal{T}_{P}$ $\displaystyle=\begin{pmatrix}0\\\ 0\\\
0\end{pmatrix}\equiv\mathbf{0}\,.$ (15)
The minimization of the Higgs potential requires all tadpole coefficients in
Eq. (15) to be equal to zero. With the conditions $\mathcal{T}_{S}=\mathbf{0}$
we choose to eliminate $m_{H_{d}}^{2}$, $m_{H_{u}}^{2}$ and $m_{S}^{2}$
according to
$\displaystyle m_{H_{d}}^{2}$
$\displaystyle=-\left(\mu+\mu_{\text{eff}}\right)^{2}-v^{2}\,\lambda^{2}\,s_{\beta}^{2}-\tfrac{1}{2}\,m_{Z}^{2}\,c_{2\beta}+a_{1}\,t_{\beta}\,,$
(16a) $\displaystyle m_{H_{u}}^{2}$
$\displaystyle=-\left(\mu+\mu_{\text{eff}}\right)^{2}-v^{2}\,\lambda^{2}\,c_{\beta}^{2}+\tfrac{1}{2}\,m_{Z}^{2}\,c_{2\beta}+\frac{a_{1}}{t_{\beta}}\,,$
(16b) $\displaystyle m_{S}^{2}$
$\displaystyle=a_{4}-a_{5}-a_{7}-v^{2}\,\lambda^{2}-\left(\nu+2\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}\right)^{2}\,.$
(16c)
Substituting these expressions into the symmetric mass matrices
${\mathcal{M}}_{S,P,C}^{2}$ we find
$\displaystyle\mathcal{M}_{S}^{2}$
$\displaystyle=\begin{pmatrix}m_{Z}^{2}\,c_{\beta}^{2}+a_{1}\,t_{\beta}&\left(2\,v^{2}\,\lambda^{2}-m_{Z}^{2}\right)c_{\beta}\,s_{\beta}-a_{1}&a_{2}\,c_{\beta}-a_{3}\,s_{\beta}\\\
\cdot&m_{Z}^{2}\,s_{\beta}^{2}+a_{1}/t_{\beta}&a_{2}\,s_{\beta}-a_{3}\,c_{\beta}\\\
\cdot&\cdot&a_{4}+a_{5}\end{pmatrix},$ (17a)
$\displaystyle\mathcal{M}_{P}^{2}$
$\displaystyle=\begin{pmatrix}a_{1}\,t_{\beta}&a_{1}&-a_{6}\,s_{\beta}\\\
\cdot&a_{1}/t_{\beta}&-a_{6}\,c_{\beta}\\\
\cdot&\cdot&a_{4}-3\,a_{5}-2\,a_{7}\end{pmatrix},$ (17b)
$\displaystyle\mathcal{M}_{C}^{2}$
$\displaystyle=\left[\left(m_{W}^{2}-v^{2}\,\lambda^{2}\right)\,c_{\beta}\,s_{\beta}+a_{1}\right]\begin{pmatrix}t_{\beta}&1\\\
\cdot&1/t_{\beta}\end{pmatrix}.$ (17c)
Diagonalizing Eq. (17c) yields zero for the massless charged Goldstone boson,
and the charged Higgs-boson mass $m_{H^{\pm}}$ at the tree level is given by
$\displaystyle m_{H^{\pm}}^{2}$
$\displaystyle=m_{W}^{2}-v^{2}\,\lambda^{2}+\frac{a_{1}}{c_{\beta}\,s_{\beta}}\,,$
(18)
which we employ as an input parameter. Inserting Eq. (14a) we can then
eliminate $A_{\lambda}$ via
$\displaystyle A_{\lambda}$
$\displaystyle=\frac{c_{\beta}\,s_{\beta}}{\mu_{\text{eff}}}\left(m_{H^{\pm}}^{2}-m_{W}^{2}+v^{2}\,\lambda^{2}\right)-\frac{1}{\mu_{\text{eff}}}\left(B_{\mu}\,\mu+\xi\,\lambda\right)-\left(\nu+\frac{\kappa}{\lambda}\,\mu_{\text{eff}}\right).$
(19)
Substituting $A_{\lambda}$ in the abbreviations of Eq. (14) yields ($a_{2}$,
$a_{5}$ and $a_{7}$ are not changed)
$\displaystyle a_{1}^{\prime}$
$\displaystyle=c_{\beta}\,s_{\beta}\left(m_{H^{\pm}}^{2}-m_{W}^{2}+v^{2}\,\lambda^{2}\right),$
(20a) $\displaystyle a_{3}^{\prime}$
$\displaystyle=v\,\lambda\left[\frac{\kappa}{\lambda}\,\mu_{\text{eff}}+\frac{1}{\mu_{\text{eff}}}\left(a_{1}^{\prime}-B_{\mu}\,\mu-\xi\,\lambda\right)\right],$
(20b) $\displaystyle a_{4}^{\prime}$
$\displaystyle=c_{\beta}\,s_{\beta}\left(\frac{v\,\lambda}{\mu_{\text{eff}}}\right)^{2}\left(a_{1}^{\prime}-B_{\mu}\,\mu-\xi\,\lambda\right)-\frac{1}{\mu_{\text{eff}}}\left[\mu\,v^{2}\,\lambda^{2}+\xi\,\lambda\left(\nu+C_{\xi}\right)\right],$
(20c) $\displaystyle a_{6}^{\prime}$
$\displaystyle=v\,\lambda\left[3\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}+2\,\nu-\frac{1}{\mu_{\text{eff}}}\left(a_{1}^{\prime}-B_{\mu}\,\mu-\xi\,\lambda\right)\right]=-a_{3}^{\prime}+2\,v\,\lambda\left(2\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}+\nu\right).$
(20d)
The tree-level masses of the three neutral $\mathcal{CP}$-even Higgs bosons
$m_{h_{1,2,3}}^{2}$ are determined by diagonalizing Eq. (17a). Analogously,
diagonalizing Eq. (17b) yields the masses $m_{a_{1,2}}^{2}$ of the
$\mathcal{CP}$-odd Higgs bosons at the tree level; the third eigenvalue is
equal to zero and belongs to the neutral Goldstone boson.
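To illustrate how the tree-level spectrum follows from these expressions, the sketch below (ours; illustrative inputs, all $\mathbb{Z}_{3}$-violating parameters except $\mu$ and $B_{\mu}$ set to zero as in the later analysis, and no higher-order corrections) builds $\mathcal{M}_{S}^{2}$ from Eqs. (14), (17a) and (20) and diagonalizes it:

```python
import numpy as np

v, mZ, mW = 174.0, 91.19, 80.38                  # GeV
lam, kap, tb = 0.6, 0.3, 3.0
mu, mu_eff, B_mu = 200.0, 200.0, 0.0             # GeV
A_kap, m_Hpm = -100.0, 800.0                     # GeV

cb, sb = 1 / np.sqrt(1 + tb**2), tb / np.sqrt(1 + tb**2)
a1p = cb * sb * (m_Hpm**2 - mW**2 + v**2 * lam**2)                   # Eq. (20a)
a2 = 2 * v * lam * (mu + mu_eff)                                     # Eq. (14b)
a3p = v * lam * ((kap / lam) * mu_eff + (a1p - B_mu * mu) / mu_eff)  # Eq. (20b)
a4p = (cb * sb * (v * lam / mu_eff)**2 * (a1p - B_mu * mu)
       - mu * v**2 * lam**2 / mu_eff)                                # Eq. (20c)
a5 = (4 * (kap / lam)**2 * mu_eff**2
      + (kap / lam) * (mu_eff * A_kap - v**2 * lam**2 * cb * sb))    # Eq. (14e)

# CP-even mass matrix, Eq. (17a): fill the upper triangle, then symmetrize.
M2 = np.array([
    [mZ**2 * cb**2 + a1p * tb,
     (2 * v**2 * lam**2 - mZ**2) * cb * sb - a1p,
     a2 * cb - a3p * sb],
    [0.0, mZ**2 * sb**2 + a1p / tb, a2 * sb - a3p * cb],
    [0.0, 0.0, a4p + a5],
])
M2 = np.triu(M2) + np.triu(M2, 1).T
ev = np.linalg.eigvalsh(M2)
print("tree-level m_h1,2,3 =", np.sqrt(np.maximum(ev, 0.0)), "GeV")
# A negative eigenvalue would signal a tachyonic state at the tree level.
```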
Higgs doublets:
The mass-matrix elements of the doublet fields in the upper-left
$\left(2\times 2\right)$ block matrices of Eqs. (17a)–(17b) contain the
abbreviation $a_{1}^{\prime}$. From Eq. (20a) it is apparent that they are
determined by SM parameters and $m_{H^{\pm}}$, $\lambda$ and $t_{\beta}$ like
in the NMSSM. Neglecting the mixing between the doublet and singlet sector,
the mass of the light $\mathcal{CP}$-even doublet state has an upper bound of
$m_{Z}^{2}\,c^{2}_{2\beta}+\lambda^{2}\,v^{2}\,s^{2}_{2\beta}$. In the limit
$m_{H^{\pm}}\gg m_{Z}$, the other two doublet fields decouple and obtain a
mass close to $m_{H^{\pm}}$. Smaller values of $m_{H^{\pm}}$ increase the
mixing of both $\mathcal{CP}$-even doublet fields. Also $t_{\beta}$ needs to
be close to one for large doublet mixing.
Higgs singlets:
The $\left(3,3\right)$ elements of $\mathcal{M}_{S}$ and $\mathcal{M}_{P}$ in
Eqs. (17a) and (17b) set the mass scale of the Higgs singlets. They contain
the terms $a_{4}^{\prime}$ from Eq. (20c), $a_{5}$ from Eq. (14e), and $a_{7}$
from Eq. (14g). All $\mathbb{Z}_{3}$-violating parameters besides $\mu$ and
$B_{\mu}$ appear in these terms; in our later analysis we set these additional
parameters to zero, but for completeness we keep them in the discussion of
this section.
The parameter $A_{\kappa}$ appears only in the term $a_{5}$, whereas $B_{\nu}$
only appears in $a_{7}$. Thus it is obvious that the diagonal mass-matrix
elements for the singlet fields—and therefore their masses—can be controlled
by these two quantities, without changing any other matrix element. If all
$\mathbb{Z}_{3}$-violating parameters except $\mu$ and $B_{\mu}$ were set to
zero, we would rediscover the NMSSM-specific feature that $A_{\kappa}$ is
bound from below and above to avoid tachyonic singlet states at the tree
level.
The ratio $\kappa/\lambda$, which appears in both terms $a_{5}$ and $a_{7}$,
has a sizable impact on the mass scale of the singlets. If $\kappa\ll\lambda$
the $\mathcal{CP}$-even singlet entry is purely controlled by
$a_{4}^{\prime}$, which in turn is proportional to $1/\mu_{\text{eff}}$; in
the same limit, the $\mathcal{CP}$-odd singlet entry is controlled by
$a_{4}^{\prime}$ and the remainder of $a_{7}$ which is $B_{\nu}\,\nu$. Also
note that $a_{4}^{\prime}$ contains a term which is linear in $\mu$. In the
opposite case $\kappa\gtrsim\lambda$, the term $a_{5}$ is likely to dominate
the $\left(3,3\right)$ matrix element for the $\mathcal{CP}$-even singlet due
to the suppression of $a_{4}^{\prime}$ by $\mu_{\text{eff}}$ if the latter is
of the order of a few hundred GeV. The term $a_{5}$ is proportional to
$(\kappa/\lambda)^{2}\,\mu_{\text{eff}}^{2}$, such that the
$\mathcal{CP}$-even singlet exhibits a strong dependence on
$\mu_{\text{eff}}$. On the other hand for $\mu\gtrsim\mu_{\text{eff}}$, the
term $a_{4}^{\prime}$ can balance the large $\kappa$-enhanced contribution in
$a_{5}$; thus, possible upper bounds on $\kappa$ as derived in Ref. [75] might
be evaded.
For the case of the $\mathcal{CP}$-odd singlet, the terms in $a_{5}$ and
$a_{7}$ that are quadratic in $\mu_{\text{eff}}$ cancel each other. Then the
size of the other parameters (especially $A_{\kappa}$, $\mu$ and
$\mu_{\text{eff}}$) determines which contribution is dominant. For moderate
values of $\kappa\approx\lambda\gtrsim 0.1$ together with small $A_{\kappa}$
the $\mathcal{CP}$-odd singlet develops a dependence on
$\mu/\mu_{\text{eff}}$, as we will discuss later. Lastly, we note that in the
case of $\kappa\gg\lambda$ and $A_{\kappa}\neq 0$ GeV the $\mathcal{CP}$-even
and $\mathcal{CP}$-odd singlet masses are controlled through
$(\kappa/\lambda)^{2}\,\mu_{\text{eff}}^{2}$ and
$(\kappa/\lambda)\,\mu_{\text{eff}}\,A_{\kappa}$, respectively. Later, this
will allow us to present a rescaling procedure that keeps both singlet masses
constant over a large parameter range.
Doublet–singlet mixing:
The masses of the doublet-like and the singlet-like Higgs states can be
significantly shifted by mixing between both sectors. The relevant matrix
elements are the ones in the third columns of Eqs. (17a) and (17b). They
contain the abbreviations $a_{2}$, $a_{3}^{\prime}$ and $a_{6}^{\prime}$, see
Eqs. (14b), (20b) and (20d), respectively. The mixing vanishes in the limit
$\lambda\to 0$ with constant $\kappa/\lambda$, and it is enhanced for larger
values of $\lambda$. For fixed $\lambda$ it is also strongly enhanced in the
limit $\mu_{\text{eff}}\to 0$ GeV.
In the $\mathcal{CP}$-even sector, two terms contribute to the doublet–singlet
mixing: $a_{2}$ which depends on the sum $(\mu+\mu_{\text{eff}})$, and
$a_{3}^{\prime}$ which does not directly depend on $\mu$, but only on the
soft-breaking term $B_{\mu}\,\mu$. In the case of large $\mu$ and
$\mu_{\text{eff}}$ of the same sign, $a_{2}$ often dominates the mixing with
the lighter doublet, eventually yielding a tachyonic singlet or doublet Higgs;
this behavior can be avoided by choosing a proper value for $B_{\mu}$ (or
$\xi$) to cancel the large effect in $a_{2}$ by $a_{3}^{\prime}$. In the case
of similar $\mu$ and $\mu_{\text{eff}}$ of opposite signs, $a_{3}^{\prime}$
will always dominate the mixing. Again, the mixing strength can be adjusted by
setting $B_{\mu}$ (or $\xi$).
The doublet–singlet mixing in the $\mathcal{CP}$-odd sector contains only one
term $a_{6}^{\prime}$ which is similar to $a_{3}^{\prime}$ with opposite sign.
Furthermore, the $\mathcal{CP}$-odd mixing elements can be modified by non-
zero $\xi$ and $\nu$. As indicated above, due to the dependences of
$a_{3}^{\prime}$ and $a_{6}^{\prime}$ on $1/\mu_{\text{eff}}$, a small
$\mu_{\text{eff}}\ll 100$ GeV yields a strong mixing between singlets and
doublets.
We subsequently discuss vacuum structure and vacuum stability bounds in the
$\mu$NMSSM around the electroweak scale. We do not discuss tachyonic
instabilities during inflation or the stabilization of the inflationary
direction, since they are not of relevance for our study (see e. g. Refs. [11,
13]).
### 2.3 Vacuum structure and vacuum stability bounds
The space of model parameters can be constrained using experimental exclusion
limits and theoretical bounds. Those constraints can be applied to rule out
certain parts of the parameter space. In this context, constraints from the
stability of the electroweak vacuum appear to be very robust and theoretically
well motivated. It has already been noticed in the early times of
supersymmetry that constraints from the electroweak vacuum stability on the
trilinear soft SUSY-breaking parameters can be important [76, 77, 78, 79, 80,
81, 82, 83, 84]. Recently they have been rediscussed in light of the Higgs
discovery [85, 86, 87, 88, 89]. These constraints are usually associated with
non-vanishing vacuum expectation values of sfermion fields (e. g. staus or
stops) and thus known under the phrase “charge- and color-breaking minima”.
Such minima can invalidate the electroweak vacuum and therefore lead to
unphysical parameter configurations (see below).
However, the existence of charge- and color-breaking minima is only a
necessary condition for the destabilization of the electroweak vacuum. Clearly
one has to compare the value of the potential at this new minimum with the
desired electroweak one; only if the non-standard vacuum is deeper is the
corresponding scenario potentially excluded. In fact, some of the points
with a deeper non-standard vacuum may be valid when accepting meta-stable
vacua under the condition that the transition time from the local electroweak
vacuum to the global true vacuum is longer than the age of the
universe [90]. However, the possibility of the existence of meta-stable vacua
is of limited practical relevance for our analysis: typically only parameter
points in close neighborhood to the stable region are affected by such
considerations; well beyond the boundary region, the false vacua become rather
short-lived and thus are strictly excluded. In addition, there are thermal
corrections in the early universe which give a sizable and positive
contribution to the effective potential as the one-loop corrections are
proportional to $m^{2}(\phi)\,T^{2}$ for the field-dependent masses $m(\phi)$.
For finite temperature, they shift the ground state to the symmetric phase
around $\phi=0\,\textrm{GeV}$ [91, 92]. We presume, however, that our
inflationary scenario preselects a vacuum at field values different from zero
and that, thanks to the relatively low reheating temperatures in our scenario,
the universe gets caught in it, see Ref. [57]. Following the inflationary scenario of Ref. [11],
the trajectory in field space lies at $\beta=\pi/4$ with
$h_{u}^{2}=h_{d}^{2}=h^{2}$ and $s=0$ GeV; the presence of the singlet field
$S$ is needed for the stabilization of the inflationary trajectory in order to
not fall into the tachyonic direction as pointed out by Refs. [13, 11].
Inflation ends at field values $h=\mathcal{O}(0.01)$ in units of the Planck
mass. For small $\lambda\sim 10^{-2}$, the $D$-flat trajectory remains stable
after inflation ends according to Ref. [11], and will change to
$\beta\neq\pi/4$ and $s\neq 0$ GeV when the SUSY-breaking terms become
important. NMSSM-specific effects like the relevance of singlet Higgs bosons
and the additional contribution to the $125\,\textrm{GeV}$ Higgs boson are
usually connected to a large value of $\lambda$. This is not necessarily the
case in the $\mu$NMSSM, where striking differences also appear for small
values of $\mu_{\text{eff}}$. Moreover, we will take it as a working
assumption that after inflation ends, even for larger values of $\lambda$ the
universe will remain in the state with the inflationary field direction until
it settles down in a minimum closest to this direction. If it is the global
minimum of the zero-temperature potential, reheating may not be sufficient to
overcome the barrier and to select a false (and maybe meta-stable) vacuum. The
thermal history of the universe then plays no role for the choice of the
vacuum, and in this case the universe remains in the global minimum.
Accordingly, we adopt the prescription to exclude all points with a global
minimum that does not coincide with the electroweak vacuum. This means that we
do not consider meta-stable electroweak vacua as they are excluded by the
selection rule. A similar discussion and argument has been given in Ref. [93],
where a selection of the vacuum with the largest expectation values was
promoted, irrespective of whether or not it is the global minimum of the theory.
We will see that in most cases scenarios are actually excluded because of a
tachyonic Higgs mass. Tachyonic masses are related to the fact that the
electroweak point—around which the potential is expanded—is not a local
minimum of the scalar potential, but rather resembles a saddle point or even a
local maximum, and the true vacuum lies at a deeper point along this tachyonic
direction. Thus, the true vacuum has vevs different from the input values, and
the electroweak breaking condition $\mathcal{T}_{S}=\mathbf{0}$ in Eq. (15)
does not select a minimum.
We briefly sketch how to get constraints on the relevant model parameters in
the (neutral) Higgs sector of the $\mu$NMSSM. Similar observations for the
NMSSM have been intensively discussed in the literature [94, 95]. Already the
presence of an additional Higgs singlet (see e. g. Refs. [96, 97, 98])
invalidates the well-known results that no charge-breaking Higgs vevs exist at
lowest order in the MSSM (see e. g. Refs. [99, 82]) and in two-Higgs-doublet
models (see e. g. Refs. [100, 101]). On the other hand, in the NMSSM the
inclusion of such charge-breaking minima has rather little impact on the
overall vacuum stability and gives no further information, see Ref. [102]. In
a similar manner, we neglect non-vanishing squark vevs (see discussion below)
and therefore we only have to deal with the following potential:
$\displaystyle\begin{split}V&=\kappa^{2}\,s^{4}+\tfrac{1}{8}\left(g_{1}^{2}+g_{2}^{2}\right)\left(h_{u}^{2}-h_{d}^{2}\right)^{2}+\left(\lambda^{2}\,s^{2}+2\,\lambda\,\mu\,s\right)\left(h_{u}^{2}+h_{d}^{2}\right)-2\,\lambda\left(\kappa\,s^{2}+A_{\lambda}\,s\right)h_{u}\,h_{d}\\\
&\quad+\lambda^{2}\,h_{u}^{2}\,h_{d}^{2}+\tfrac{2}{3}\,\kappa\,A_{\kappa}\,s^{3}+\left(m_{H_{u}}^{2}+\mu^{2}\right)h_{u}^{2}+\left(m_{H_{d}}^{2}+\mu^{2}\right)h_{d}^{2}+m_{S}^{2}\,s^{2}-2\,B_{\mu}\,\mu\,h_{u}\,h_{d}\,,\end{split}$
(21)
where we only present the real fields since we do not consider spontaneous
$\mathcal{CP}$ violation. (We treat the fields as “classical field values” in
the sense of vacuum-expectation values; to avoid confusion with the true and
desired electroweak vevs, we always keep the fields as commuting variables
$h_{u}$, $h_{d}$ and $s$ and interpret them as vacuum-expectation values only
at the minima.) Notice also that we do not consider the shifted theory with all
fields $\phi\to\phi-v_{\phi}$ expanded around the electroweak point
$h_{u}=v_{u}$, $h_{d}=v_{d}$, $s=\mu_{\text{eff}}/\lambda$. For our
stability analysis, the potential vanishes at the origin, and the electroweak
minimum is one of the minima not located at the origin; it is not necessarily
the global one. Furthermore, compared to Eq. (11), we neglect all
additional $\mathbb{Z}_{3}$-breaking terms besides the contributions of $\mu$
and $B_{\mu}\,\mu$ of the $\mu$NMSSM (see the discussion above).
The “desired” electroweak vacuum can be constructed by fulfilling the
minimization conditions at the tree level, $\mathcal{T}_{S}=\mathbf{0}$, with
$\mathcal{T}_{S}$ given by Eq. (15). The vevs of the doublet fields are taken
as fixed input parameters, whereas the value of $\mu_{\text{eff}}$ is treated
as variable similar to $\mu$. These equations can be solved for the soft-
breaking masses $m_{H_{u}}^{2}$, $m_{H_{d}}^{2}$ and $m_{S}^{2}$ according to
Eqs. (16).
The masses of the Higgs sector are determined in such a way that the desired
vacuum with $\langle h_{u}\rangle=v_{u}$, $\langle h_{d}\rangle=v_{d}$ and
$\langle s\rangle=\mu_{\text{eff}}/\lambda$ is a viable vacuum of the
potential $V$ in Eq. (21). However, one has to ensure that there is no deeper
minimum of $V$. This can only be achieved reasonably well through a numerical
evaluation. For that purpose, we determine the stationary points of the
potential $V$ and then compare the corresponding values of $V$ at these points
with the desired minimum given by
$\displaystyle\begin{split}V_{\text{min}}^{\text{des}}&=-\frac{1}{8}\left(g_{1}^{2}+g_{2}^{2}\right)v^{4}\,c^{2}_{2\beta}-\frac{1}{4}\,\lambda^{2}\,v^{4}\,s^{2}_{2\beta}-v^{2}\,\mu_{\text{eff}}^{2}\left[1-\frac{\kappa^{2}}{\lambda^{2}}\,s_{2\beta}\right]\\\
&\quad-\frac{\kappa^{2}}{\lambda^{4}}\,\mu_{\text{eff}}^{4}-v^{2}\,\mu\,\mu_{\text{eff}}-\frac{1}{3}\frac{\kappa\,A_{\kappa}}{\lambda^{3}}\,\mu_{\text{eff}}^{3}+\frac{1}{2}\,v^{2}\,A_{\lambda}\,\mu_{\text{eff}}\,s_{2\beta}-B_{\mu}\,\mu\,v^{2}\,s_{2\beta}\,.\end{split}$
(22)
From the expression in Eq. (22), one can derive a few general results: (a) for
small values of $\lambda$ the desired minimum gets deeper and—as the singlet
contribution decouples from the rest of the potential—it becomes more
difficult for a non-standard vacuum to appear and to be deeper than the
desired minimum; (b) the ($\mu$)NMSSM potential at the desired minimum is
usually deeper than in the case of the MSSM (compare Eq. (22) with the
desired minimum of the MSSM in Eq. (25), which is solely determined by the $D$
term and $M_{A}^{2}$) and is mainly driven by $\mu_{\text{eff}}$; (c) the
contribution of $A_{\lambda}$ plays a subdominant role compared to
$A_{\kappa}$ whose impact is strongly influenced by $\mu_{\text{eff}}$ and
$\lambda$; (d) parameter points with $V_{\text{min}}^{\text{des}}>0$ have to
be excluded because the trivial minimum at $\langle h_{u}\rangle=\langle
h_{d}\rangle=\langle s\rangle=0$ GeV is obviously deeper.
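As an illustration of the numerical procedure, the following sketch assumes the tree-level potential of Eq. (21) with purely illustrative parameter values, fixes the soft masses by the minimization conditions, scans for stationary points from random starting configurations, and compares the deepest value found with the potential at the desired vevs:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inputs; v normalization and all values are placeholder assumptions.
g1, g2, lam, kap = 0.36, 0.65, 0.2, 0.04
v, t_beta = 174.0, 3.5
mu, B_mu, A_lam, A_kap = 200.0, 0.0, 500.0, 100.0
mu_eff = -400.0
beta = np.arctan(t_beta)
vu, vd, vs = v*np.sin(beta), v*np.cos(beta), mu_eff/lam

def V0(f, mHu2=0.0, mHd2=0.0, mS2=0.0):
    """Tree-level potential of Eq. (21)."""
    hu, hd, s = f
    return (kap**2*s**4 + (g1**2 + g2**2)/8*(hu**2 - hd**2)**2
            + (lam**2*s**2 + 2*lam*mu*s)*(hu**2 + hd**2)
            - 2*lam*(kap*s**2 + A_lam*s)*hu*hd
            + lam**2*hu**2*hd**2 + 2/3*kap*A_kap*s**3
            + (mHu2 + mu**2)*hu**2 + (mHd2 + mu**2)*hd**2 + mS2*s**2
            - 2*B_mu*mu*hu*hd)

# Fix the soft masses via T_S = 0 at the desired vevs: V is linear in them,
# so each follows from one (numerical) gradient component of V0.
eps, des = 1e-3, np.array([vu, vd, vs])
def dV0(i):
    e = np.zeros(3); e[i] = eps
    return (V0(des + e) - V0(des - e))/(2*eps)
mHu2, mHd2, mS2 = -dV0(0)/(2*vu), -dV0(1)/(2*vd), -dV0(2)/(2*vs)

V = lambda f: V0(f, mHu2, mHd2, mS2)
V_des = V(des)

# Random-start scan for deeper stationary points.
rng = np.random.default_rng(1)
deepest = V_des
for _ in range(200):
    res = minimize(V, rng.uniform(-2000, 2000, size=3), method="Nelder-Mead")
    deepest = min(deepest, res.fun)
print("V at desired vevs:", V_des, "  deepest value found:", deepest)
print("electroweak vacuum is global" if deepest >= V_des - 1e-6*abs(V_des)
      else "deeper non-standard vacuum found -> point excluded")
```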
In our analysis, we focus for clarity on constraints from the tree-level
potential, considering the appearance of global non-standard minima and, as
discussed above, disregarding the possibility of meta-stable false vacua.
Employing higher-order (i. e. one-loop) corrections does not necessarily give
more accurate predictions of vacuum stability, see Ref. [103]. An approach to
include one-loop effects using a certain numerical procedure has been
implemented in the public code collection of Vevacious, see Ref. [104],
including a tunneling calculation also at finite temperature using
CosmoTransitions [105]. The tree-level evaluation is much faster and
numerically more stable; moreover, it has been argued that the one-loop
effective potential is problematic for tunneling rate calculations [106].
#### Constraints on the NMSSM parameters:
There are two main constraints known for the trilinear soft SUSY-breaking
parameters $A_{\kappa}$ and $A_{\lambda}$. The first constraint relies on the
existence of a non-vanishing singlet vev to generate $\mu_{\text{eff}}\neq 0$
GeV. This can be easily derived from the Higgs potential with only $s\neq 0$
GeV and is given by the requirement [75]
$A_{\kappa}^{2}>9\,m_{S}^{2}\,.$ (23)
This lower bound on $A_{\kappa}^{2}$ is not mandatory in the $\mu$NMSSM, as there
always exists a non-vanishing higgsino mass term from
$\mu=\tfrac{3}{2}\,m_{3/2}\,\chi$. As shown in Section 3, this constraint has
hardly any impact on our analyses; we keep it merely for illustrative purposes.
The second constraint, on $A_{\lambda}$, follows from a non-tachyonic charged
Higgs mass, since a tachyonic mass ($m^{2}<0\,\textrm{GeV}^{2}$) means that
the potential has negative curvature at this stationary point derived by the
minimization conditions. Thus, the true vacuum would have some non-zero vev
for a charged Higgs component. Configurations like this are possible in the
NMSSM, whereas they do not exist as global or local minima in the MSSM [82].
From the (tree-level) charged Higgs mass in Eq. (18), we get an indirect bound
on $A_{\lambda}$. Taking $m_{H^{\pm}}$ as input value, we can eliminate
$A_{\lambda}$ as free parameter, see Eq. (19). Hence, we can ensure that
$m_{H^{\pm}}^{2}$ is always positive. Still, it is worth noticing that by this
procedure $A_{\lambda}$ gets strongly enhanced for small $\mu_{\text{eff}}$
(compared to $m_{H^{\pm}}$) and thus drives tachyonic neutral Higgs bosons.
#### Charge and color breaking:
There exist quite strong constraints in the MSSM from the formation of non-
standard minima which break the electric and color charges, known as charge-
and color-breaking (CCB) minima. The famous “$A$-parameter bounds” read
traditionally [76, 80, 82, 107]
$\displaystyle A_{t}^{2}$
$\displaystyle<3\,\big{(}m_{H_{u}}^{2}+\mu^{2}+m_{\tilde{Q}}^{2}+m_{\tilde{t}}^{2}\big{)}\,,$
(24a) $\displaystyle A_{b}^{2}$
$\displaystyle<3\,\big{(}m_{H_{d}}^{2}+\mu^{2}+m_{\tilde{Q}}^{2}+m_{\tilde{b}}^{2}\big{)}\,,$
(24b)
where $m_{\tilde{Q}}^{2}$ and $m_{\tilde{t},\tilde{b}}^{2}$ are the soft SUSY-
breaking masses for the superpartners of the left-handed $SU(2)_{\text{L}}$
quark doublet, $\tilde{Q}$, and of the right-handed quark singlets,
$\tilde{t}$ and $\tilde{b}$. Several modifications and improvements of Ineqs.
(24) are present in the literature, see e. g. Refs. [82, 84, 90]. These
constraints follow from the “$D$-flat” directions in the scalar potential of
the MSSM, i. e. $h_{u}=\tilde{t}_{L}=\tilde{t}_{R}$ and
$h_{d}=\tilde{b}_{L}=\tilde{b}_{R}$, respectively. Thus the quartic terms
associated with squared gauge couplings vanish. In addition, one should keep
in mind that Ineqs. (24) are only necessary conditions for the formation of a
non-trivial minimum with non-vanishing squark vevs in that specific direction.
In the case of a violation of Ineqs. (24), one has to check that the generated
CCB vacuum is actually deeper than the electroweak minimum. In the MSSM the
desired minimum takes on a comparably small numerical value, only depending on
$c_{2\beta}$ (and the $B_{\mu}$ term which can be replaced by the
$\mathcal{CP}$-odd Higgs mass $M_{A}$):
$\displaystyle V_{\text{min}}^{\text{{MSSM}}}$
$\displaystyle=-\tfrac{1}{8}\left(g_{1}^{2}+g_{2}^{2}\right)v^{4}\,c^{2}_{2\beta}-\tfrac{1}{2}\,M_{A}^{2}\,v^{2}\,s^{2}_{2\beta}\,.$
(25)
In principle, the $A$-parameter bounds (24) can be simply transferred to the
$\mu$NMSSM, where $\mu$ has to be replaced by $(\mu+\mu_{\text{eff}})$, as
they can be transferred to the NMSSM [108]. The net effect is roughly the same
in the MSSM, NMSSM and $\mu$NMSSM; if $A_{t}$ fulfills Ineq. (24a), no CCB
will appear. Constraints on $\mu_{\text{eff}}$ alone may get weakened, because
the desired minimum also gets deeper for larger $\mu_{\text{eff}}$. Moreover,
the additional singlet direction stabilizes the potential with respect to CCB
minima since the $\mu_{\text{eff}}$ term originates from a quadrilinear scalar
coupling, and the vacuum with non-vanishing $\mu_{\text{eff}}$ or $v_{s}$ is
typically deeper than a CCB vacuum. Generically, constraints from the coupling
to the wrong Higgs doublet relating down-type sfermion vevs to the up-type
Higgs and vice versa, see Refs. [109, 110], are expected to be valid for
$(\mu+\mu_{\text{eff}})$ and not weakened if the singlet is fixed at its vev.
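A hypothetical helper illustrating the transferred bound of Ineq. (24a) with $\mu\to(\mu+\mu_{\text{eff}})$; all mass inputs are illustrative placeholders:

```python
# Necessary condition against a stop CCB minimum along the D-flat direction,
# Ineq. (24a) with mu -> (mu + mu_eff); inputs in GeV (and GeV^2) are placeholders.
def ccb_bound_At(A_t, m_Hu2, mu, mu_eff, m_Q2, m_t2):
    return A_t**2 < 3.0*(m_Hu2 + (mu + mu_eff)**2 + m_Q2 + m_t2)

print(ccb_bound_At(A_t=4000.0, m_Hu2=-200.0**2, mu=200.0, mu_eff=-400.0,
                   m_Q2=2000.0**2, m_t2=2000.0**2))   # True: bound fulfilled
```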
Similarly, there are bounds on $A_{t,b}$ not related to $D$-flat directions as
discussed in Ref. [111]. These can be determined reasonably well only
numerically. Generically speaking, for the $\mu$NMSSM the risk of generating a
CCB vacuum is reduced because (a) the dependence of the desired minimum on
$\mu_{\text{eff}}$ drives the electroweak vevs to be more stable, and (b) not
as large values of $A_{t}$ are needed to raise the SM-like Higgs mass because
of the additional NMSSM-specific tree-level contribution.
Constraints from CCB minima as given in Ineqs. (24) are less important, in
comparison to the MSSM, for both the NMSSM and the $\mu$NMSSM, even if large
stop corrections are needed to shift the SM-like Higgs mass (as in the case
for small $\lambda$). If the singlet-field direction were neglected and the
stop $D$-flat direction $\tilde{t}_{R}=\tilde{t}_{L}=\tilde{t}$ defined, one
could directly apply Ineqs. (24) for the $\mu$NMSSM, keeping $v_{s}\neq 0$ GeV
and replacing $\mu\to\mu+\mu_{\text{eff}}$. However, with the singlet as
dynamical degree of freedom, the stability of the electroweak vacuum is
improved as the only singlet–stop contribution is actually a quadrilinear term
$\lambda\,h_{d}\,s\,{\tilde{t}}^{2}$ and the occurrence of a true vacuum with
$\langle h_{u,d}\rangle\neq v_{u,d}$, $\langle s\rangle\neq v_{s}$ and
$\langle\tilde{t}\rangle\neq 0$ GeV is disfavored.
#### Meta-stability and tunneling rates:
Lastly, we comment on vacuum-to-vacuum transitions in the case of a local
electroweak vacuum. It is in general of interest how long such a meta-stable
state survives compared with the lifetime of the universe. We
have outlined some arguments why—in view of the inflationary history of the
universe—we disregard meta-stable long-lived vacua. We will see in Section 3.3
that totally stable points survive in a wide range of the parameter space.
For an estimate of the bounce action of the unstable configuration [112], we
define an effectively single-field scalar potential that interpolates linearly
between the local electroweak minimum and the true vacuum found by numerical
minimization of the scalar potential, and we apply the exact solution for a
quartic potential given in Ref. [113]. See also
Ref. [114] for the application of this method to the $\mu$NMSSM.
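A standalone toy sketch of this single-field reduction; the two-minima potential below is a stand-in, while in practice $V$, the local electroweak minimum and the true vacuum would come from the scan described in this section:

```python
import numpy as np
from scipy.optimize import minimize

# Toy potential with a false and a true vacuum; placeholder for the full V.
V = lambda x: float(np.sum((x**2 - 1.0)**2) + 0.1*np.sum(x))
x_false = minimize(V, np.ones(3), method="Nelder-Mead").x    # local minimum
x_true = minimize(V, -np.ones(3), method="Nelder-Mead").x    # deeper minimum

# Effectively single-field potential along the straight line between the two
# minima; V_line(t) is the input to the quartic bounce solution of Ref. [113].
ts = np.linspace(0.0, 1.0, 101)
V_line = np.array([V((1 - t)*x_false + t*x_true) for t in ts])
print(V_line[0], V_line.max(), V_line[-1])   # false vacuum, barrier, true vacuum
```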
### 2.4 Higher-order corrections to Higgs-boson masses and mixing
It is well-known that perturbative corrections beyond the tree level alter the
Higgs masses and mixing significantly in supersymmetric models. For instance,
in the MSSM such large corrections are needed to lift the lightest
$\mathcal{CP}$-even Higgs mass beyond the $Z$-boson mass. On the other hand,
in the NMSSM and similarly the $\mu$NMSSM there are scenarios where an
additional tree-level term lowers the tension between the tree-level SM-like
Higgs mass and the measured value of the SM-like Higgs boson at $125$ GeV.
Still, since loop corrections to the Higgs spectrum have a large impact, in
our phenomenological analysis we take into account contributions of higher
order as described in the following.
The masses of the Higgs bosons are obtained from the complex poles of the full
propagator matrix. The inverse propagator matrix is a $(6\times 6)$ matrix
that reads
$\displaystyle\mathbf{\hat{\Delta}}^{-1}{\left(k^{2}\right)}=i\left[k^{2}\mathbf{1}-\begin{pmatrix}\mathcal{M}_{S}^{2}&0\\\
0&\mathcal{M}_{P}^{2}\end{pmatrix}+\begin{pmatrix}\mathbf{\hat{\Sigma}}_{S}{\left(k^{2}\right)}&0\\\
0&\mathbf{\hat{\Sigma}}_{P}{\left(k^{2}\right)}\end{pmatrix}\right].$ (26)
Here $\mathbf{\hat{\Sigma}}_{S}$ and $\mathbf{\hat{\Sigma}}_{P}$ denote the
matrices of the renormalized self-energy corrections to the neutral
$\mathcal{CP}$-even and $\mathcal{CP}$-odd Higgs fields. In the
$\mathcal{CP}$-conserving limit there are no transition elements between
$\mathcal{CP}$-even and $\mathcal{CP}$-odd degrees of freedom, which is why
Eq. (26) is block diagonal.
In principle, contributions from mixing with the longitudinal $Z$ boson have
to be considered as well. However, these contributions as well as those from
mixing with the Goldstone mode enter the mass predictions only at subleading
two-loop level [115, 116]. Since these contributions are numerically small
[117] we neglect them in the following and use a $(5\times 5)$ propagator
matrix. The $(5\times 5)$ matrices are denoted by the symbols
$\mathbf{\hat{\Delta}}_{hh}$ for the propagators and
$\mathbf{\hat{\Sigma}}_{hh}$ for the renormalized self-energies in the
following. The complex poles of the propagator are given by the values of the
squared external momentum $k^{2}$ for which the determinant of
$\mathbf{\hat{\Delta}}_{hh}^{-1}$ vanishes,
$\displaystyle\det{\big{[}\mathbf{\hat{\Delta}}^{-1}_{hh}{\left(k^{2}\right)}\big{]}_{k^{2}\;=\;M_{h_{i}}^{2}\;+\;i\,\Gamma_{h_{i}}\,M_{h_{i}}}}$
$\displaystyle\overset{!}{=}0\,.$ (27)
The real part, $M_{h_{i}}^{2}$, of each pole yields the loop-corrected mass of
the corresponding Higgs boson $h_{i}$.
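Schematically, the pole determination amounts to iterating a momentum-dependent eigenvalue problem. The following toy sketch uses an invented $2\times 2$ self-energy purely for illustration and neglects the imaginary parts of the poles:

```python
import numpy as np

# Toy pole search for Eq. (27): iterate k^2 = eig_i[ M^2 - Sigma(k^2) ].
M2 = np.diag([125.0**2, 500.0**2])          # placeholder tree-level masses (GeV^2)

def Sigma(k2):                               # invented self-energy, not a real model
    return np.array([[2000.0, 300.0],
                     [300.0, 5000.0]]) * np.log(1.0 + abs(k2)/125.0**2)

def pole(i, tol=1e-6):
    k2 = M2[i, i]                            # start from the tree-level mass
    for _ in range(100):
        k2_new = np.linalg.eigvalsh(M2 - Sigma(k2))[i]
        if abs(k2_new - k2) < tol:
            break
        k2 = k2_new
    return k2

print([float(np.sqrt(pole(i))) for i in range(2)])   # loop-corrected masses
```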
In this work, a model file for FeynArts [16, 17] of the GNMSSM at the tree
level has been generated with the help of SARAH [18, 19, 20, 21]. In addition,
the one-loop counterterms for all vertices and propagators have been
implemented, and a renormalization scheme which is consistent with Refs. [23,
24] for the cases of the MSSM and NMSSM has been set up. All
$\mathbb{Z}_{3}$-violating parameters are renormalized in the
$\overline{\text{DR}}$ scheme, see Appendix A for a list of the respective
beta functions. The numerical input values of all
$\overline{\text{DR}}$-renormalized parameters are understood to be given at a
renormalization scale which equals the top-quark pole mass. The renormalized
self-energies of the Higgs bosons $\mathbf{\hat{\Sigma}}_{hh}$ are evaluated
with the help of FormCalc [22] and LoopTools [22] by taking into account the
full contributions from the GNMSSM at the one-loop order. For other variations
of the NMSSM, similar calculations of Higgs-mass contributions up to the two-
loop order have been performed in Refs. [118, 119, 120, 121, 122, 123, 124,
125, 126]. A comparison of results from public codes using different
renormalization schemes can be found in Refs. [127, 128].
As an approximation, we have added the leading two-loop contributions in the
MSSM of $\mathcal{O}{\left(\alpha_{t}\alpha_{s}\right)}$ [129] and
$\mathcal{O}{\left(\alpha_{t}^{2}\right)}$ [130, 131] at vanishing external
momentum to their MSSM-like counterparts in the $\mu$NMSSM (for a discussion
of this approximation in the NMSSM see Ref. [124]). They are taken from their
current implementation in FeynHiggs [25, 26, 27, 28, 29, 30, 31, 32].
(Additional contributions from the MSSM at the two-loop order or
beyond—e. g. further fixed-order results [132, 133] or resummation of large
logarithms for heavy sfermions [134, 30, 31]—are available; however, we
confine our discussion in this paper to the leading two-loop contributions.)
We thus have
$\displaystyle\mathbf{\hat{\Sigma}}_{hh}{\left(k^{2}\right)}\approx\left.\mathbf{\hat{\Sigma}}^{(\text{1L})}_{hh}{\left(k^{2}\right)}\right|^{\text{{GNMSSM}{}}}+\left.\mathbf{\hat{\Sigma}}^{(\text{2L})}_{hh}{\left(k^{2}\right)}\right|_{k^{2}=0}^{\text{{MSSM}{},
leading}}.$ (28)
We note that the two-loop contributions of
$\mathcal{O}{\left(\alpha_{b}\alpha_{s}\right)}$ to the MSSM-like Higgs self-
energies are not included in our calculation. However, in the definition of
the bottom-Yukawa coupling we employ a running $\overline{\text{DR}}$ bottom
mass at the scale $m_{t}$ [116] which enters
$\mathbf{\hat{\Sigma}}^{(\text{1L})}_{hh}{\left(k^{2}\right)}\big{|}^{\text{{GNMSSM}{}}}$,
and we take into account large $t_{\beta}$-enhanced contributions to the
bottom mass as discussed in Refs. [135, 136, 137, 138, 139, 140, 116]. We
expect that the missing two-loop piece of
$\mathcal{O}{\left(\alpha_{b}\alpha_{s}\right)}$ is numerically subleading
(for a discussion in the MSSM see [141, 142]).
Higher-order propagator-type corrections are not only needed for predicting
the Higgs-boson masses, but also for the correct normalization of $S$-matrix
elements involving Higgs bosons as external particles. The wave-function
normalization factors incorporating the effects of the mixing between the
different Higgs bosons can be written as a non-unitary matrix
$Z_{ij}^{\mbox{\tiny mix}}$. It is constructed from the Higgs self-energies
and their derivatives with respect to $k^{2}$, evaluated at the various
physical poles; for details we refer the reader to Refs. [143, 144, 145, 146,
24]. A recent application in the framework of the NMSSM can be found in Ref.
[147]. Here, we follow the setup outlined in Section $2.6$ of Ref. [24] and
determine the matrix elements of $Z_{ij}^{\text{\tiny mix}}$ from the
eigenvalue equation
$\displaystyle\Big{[}\text{diag}{\left(m_{h_{1}}^{2},m_{h_{2}}^{2},m_{h_{3}}^{2},m_{a_{1}}^{2},m_{a_{2}}^{2}\right)}-\mathbf{\hat{\Sigma}}_{hh}\big{|}_{k^{2}\;=\;M_{h_{i}}^{2}\;+\;i\,\Gamma_{h_{i}}\,M_{h_{i}}}\Big{]}_{kl}\,Z^{\mbox{\tiny
mix}}_{il}=\left(M_{h_{i}}^{2}+i\,\Gamma_{h_{i}}\,M_{h_{i}}\right)Z^{\mbox{\tiny
mix}}_{ik}\,.$ (29) The normalization of each eigenvector is fixed by
$\displaystyle\Bigg{[}\frac{\mathrm{d}\mathbf{\hat{\Delta}}^{-1}_{hh}}{\mathrm{d}k^{2}}\bigg{|}_{k^{2}\;=\;M_{h_{i}}^{2}\;+\;i\,\Gamma_{h_{i}}\,M_{h_{i}}}\Bigg{]}_{kl}\,Z^{\mbox{\tiny
mix}}_{ik}\,Z^{\mbox{\tiny
mix}}_{il}=\Bigg{[}\mathbf{1}+\frac{\mathrm{d}\mathbf{\hat{\Sigma}}_{hh}}{\mathrm{d}k^{2}}\bigg{|}_{k^{2}\;=\;M_{h_{i}}^{2}\;+\;i\,\Gamma_{h_{i}}\,M_{h_{i}}}\Bigg{]}_{kl}\,Z^{\mbox{\tiny
mix}}_{ik}\,Z^{\mbox{\tiny mix}}_{il}=1\,.$ (30)
In our numerical analysis we denote the three $\mathcal{CP}$-even mass
eigenstates $h_{i}$ as $h^{0}$, $H^{0}$ and $s^{0}$, and the two
$\mathcal{CP}$-odd mass eigenstates $a_{i}$ as $A^{0}$ and $a_{s}$. These
assignments become ambiguous as soon as loop corrections are included. In our
analysis we use the largest admixture to a loop-corrected mass state in order
to define the assignment. For this purpose we employ the previously discussed
loop-corrected mixing matrix $Z^{\text{\tiny mix}}_{ij}$. In this way $s^{0}$
denotes the dominantly singlet-like state. The light doublet-like state is
named $h^{0}$ and the heavy doublet-like state is $H^{0}$. The
$\mathcal{CP}$-odd Higgs bosons are the predominantly singlet-like state
$a_{s}$ and the doublet-like state $A^{0}$.
### 2.5 Trilinear Higgs-boson self-couplings
In order to discuss possible distinctions between the NMSSM and the
$\mu$NMSSM, the Higgs-boson self-couplings are particularly relevant.
Experimentally these self-couplings can be probed through Higgs pair
production or through decays of a heavier Higgs boson to two lighter ones.
Through electroweak symmetry breaking there is also a strong correlation with
Higgs-boson decays into Higgs bosons and gauge bosons, e. g. $A^{0}\to Zh^{0}$
or $H^{0}\to Za_{s}$. For both, the Higgs mixing between singlets and doublets
is essential. We take both types of decays into account when checking against
experimental limits from Higgs boson searches, but only exemplify the
parameter dependence for the decays involving only Higgs bosons in our
numerical analysis below.
The Higgs self-couplings are introduced in Eq. (12). In order to simplify
their presentation in the neutral sector we define $\phi_{i}$ to be the $i$-th
component of $\Phi=\left(\sigma_{d},\sigma_{u},\sigma_{s},A,\phi_{s}\right)$,
where in the $\mathcal{CP}$-odd sector the Goldstone boson is in a basis where
it does not mix with the other Higgs bosons at lowest order (see the
discussion in Section 2.2; note that the state $A$ differs from the mass
eigenstate $A^{0}$ defined in the previous section). We denote the couplings as $\lambda_{ijk}$
for the interactions among three Higgs bosons $\phi_{i}\phi_{j}\phi_{k}$ in
the basis $\Phi$. For the couplings among the $\mathcal{CP}$-even
components—expressed in gauge couplings (see Eq. (13) for the relation to the
gauge-boson masses)—we obtain at the tree level
$\displaystyle\lambda_{111}$
$\displaystyle=-\tfrac{3}{2}\left(g_{1}^{2}+g_{2}^{2}\right)c_{\beta}\,v\,,$
$\displaystyle\lambda_{112}$
$\displaystyle=\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}-4\,\lambda^{2}\right)s_{\beta}\,v\,,$
(31a) $\displaystyle\lambda_{113}$
$\displaystyle=-2\,\lambda\left(\mu+\mu_{\text{eff}}\right)\,,$
$\displaystyle\lambda_{122}$
$\displaystyle=\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}-4\,\lambda^{2}\right)c_{\beta}\,v\,,$
(31b) $\displaystyle\lambda_{123}$
$\displaystyle=\lambda\left(\nu+A_{\lambda}+2\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}\right),$
$\displaystyle\lambda_{133}$
$\displaystyle=2\,\lambda\left(\kappa\,s_{\beta}\,v-\lambda\,c_{\beta}\,v\right),$
(31c) $\displaystyle\lambda_{222}$
$\displaystyle=-\tfrac{3}{2}\left(g_{1}^{2}+g_{2}^{2}\right)s_{\beta}\,v\,,$
$\displaystyle\lambda_{223}$
$\displaystyle=-2\,\lambda\left(\mu+\mu_{\text{eff}}\right),$ (31d)
$\displaystyle\lambda_{233}$
$\displaystyle=2\,\lambda\left(\kappa\,c_{\beta}\,v-\lambda\,s_{\beta}\,v\right),$
$\displaystyle\lambda_{333}$
$\displaystyle=-2\,\kappa\left(A_{\kappa}+3\,\nu\right)-12\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}.$
(31e) The couplings of $\mathcal{CP}$-even components to $\mathcal{CP}$-odd
components are given by $\displaystyle\lambda_{144}$
$\displaystyle=-\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}\right)c_{\beta}\,v\,,$
$\displaystyle\lambda_{244}$
$\displaystyle=\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}-4\,\lambda^{2}\right)s_{\beta}\,v\,,$
(31f) $\displaystyle\lambda_{344}$
$\displaystyle=-2\,\lambda\left(\mu+\mu_{\text{eff}}\right)\,,$
$\displaystyle\lambda_{345}$
$\displaystyle=-\lambda\left(\nu+A_{\lambda}+2\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}\right),$
(31g) $\displaystyle\lambda_{155}$
$\displaystyle=\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}-4\,\lambda^{2}\right)c_{\beta}\,v\,,$
$\displaystyle\lambda_{255}$
$\displaystyle=-\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}\right)s_{\beta}\,v,$
(31h) $\displaystyle\lambda_{355}$
$\displaystyle=-2\,\lambda\left(\mu+\mu_{\text{eff}}\right)\,.$ (31i)
Similarly we can write down the couplings $\tilde{\lambda}_{i}$ for the
interaction $\phi_{i}H^{+}H^{-}$ of the neutral Higgs bosons in the basis
$\Phi$ to the physical charged Higgs bosons (the Goldstone bosons are again in
a basis where they do not mix) as follows:
$\displaystyle\tilde{\lambda}_{1}$
$\displaystyle=\lambda^{2}\,s_{\beta}\,s_{2\beta}\,v+\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}\right)c_{\beta}\,c_{2\beta}\,v-g_{2}^{2}\,c_{\beta}\,v,$
(32a) $\displaystyle\tilde{\lambda}_{2}$
$\displaystyle=\lambda^{2}\,c_{\beta}\,s_{2\beta}\,v-\tfrac{1}{2}\left(g_{1}^{2}+g_{2}^{2}\right)s_{\beta}\,c_{2\beta}\,v-g_{2}^{2}\,s_{\beta}\,v,$
(32b) $\displaystyle\tilde{\lambda}_{3}$
$\displaystyle=-\lambda\left[2\left(\mu+\mu_{\text{eff}}\right)+\left(\nu+2\,\frac{\kappa}{\lambda}\,\mu_{\text{eff}}+A_{\lambda}\right)s_{2\beta}\right].$
(32c)
All remaining couplings vanish. Again
$s_{x}$ and $c_{x}$ are defined as $s_{x}=\sin(x)$ and $c_{x}=\cos(x)$. In
most cases where $\mu$ or $\mu_{\text{eff}}$ appears, the coupling
depends on the sum $\left(\mu+\mu_{\text{eff}}\right)$. For the interactions
of the neutral Higgs bosons, only a few couplings carry an (additional)
proportionality to $\mu_{\text{eff}}$ itself, see $\lambda_{123}$,
$\lambda_{345}$ and $\lambda_{333}$ which all involve the singlet state. This
dependence manifests itself for the former two couplings in the Higgs-to-Higgs
decays $s^{0}\to h^{0}\,h^{0}$, $H^{0}\to s^{0}\,h^{0}$ and $A^{0}\to
s^{0}\,a_{s}$. In the charged Higgs sector, the decay $s^{0}\to H^{+}\,H^{-}$
has a direct dependence on $\mu_{\text{eff}}$ at the tree level in addition to
$(\mu+\mu_{\text{eff}})$ for a dominantly singlet-like state $s^{0}$, as can
be seen in $\tilde{\lambda}_{3}$. For both cases a very pronounced mixing of
the singlet states with the Higgs doublets is essential; an individual dependence on
$\mu_{\text{eff}}$ and on the sum $(\mu+\mu_{\text{eff}})$ can also occur in
other Higgs-to-Higgs decays. We will emphasize later that Higgs mixing is
crucial for the observed dependences on $\mu_{\text{eff}}$ and $\mu$. We
consider the decays at the tree level, however including the loop corrections
to the masses and mixing of the external Higgs bosons as discussed in
Section 2.4. We emphasize, though, that higher-order contributions to
Higgs-boson self-couplings and Higgs-boson decays can be large; see
Refs. [148, 149, 150, 147] for corresponding calculations in the NMSSM.
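For orientation, a short numerical evaluation of selected couplings from Eqs. (31) and (32); all parameter values are illustrative placeholders:

```python
import math

# Selected tree-level Higgs self-couplings from Eqs. (31) and (32);
# inputs (GeV) are placeholders, not benchmark points; v is assumed.
lam, kap = 0.2, 0.04
v, t_beta = 174.0, 3.5
mu, mu_eff, A_lam, A_kap, nu = 200.0, -400.0, 500.0, 100.0, 0.0
beta = math.atan(t_beta)

lam_113 = -2*lam*(mu + mu_eff)                           # doublet-doublet-singlet
lam_123 = lam*(nu + A_lam + 2*(kap/lam)*mu_eff)          # doublet-doublet-singlet
lam_333 = -2*kap*(A_kap + 3*nu) - 12*(kap/lam)*mu_eff    # pure singlet
lt_3 = -lam*(2*(mu + mu_eff)
             + (nu + 2*(kap/lam)*mu_eff + A_lam)*math.sin(2*beta))  # s0-H+H-
print(lam_113, lam_123, lam_333, lt_3)
```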
### 2.6 Neutralino and chargino masses
We write the neutralino and chargino sector in the gauge-eigenstate bases
$\displaystyle\big{(}\psi^{0}\big{)}^{\text{T}}$
$\displaystyle=\big{(}\tilde{B}^{0},\tilde{W}_{3}^{0},\tilde{h}_{d}^{0},\tilde{h}_{u}^{0},\tilde{s}\big{)}\,,$
$\displaystyle\big{(}\psi^{+}\big{)}^{\text{T}}$
$\displaystyle=\big{(}\tilde{W}^{+},\tilde{h}_{u}^{+}\big{)}$ and
$\displaystyle\big{(}\psi^{-}\big{)}^{\text{T}}$
$\displaystyle=\big{(}\tilde{W}^{-},\tilde{h}_{d}^{-}\big{)}\,,$ (33)
which includes the bino component $\tilde{B}^{0}$, the neutral and charged
wino components $\tilde{W}_{3}^{0}$ and $\tilde{W}^{\pm}$, the neutral and
charged higgsino components $\tilde{h}_{u,d}^{0}$ and $\tilde{h}_{u,d}^{\pm}$,
and the singlino component $\tilde{s}$ in the form of Weyl spinors. Their
mass terms in the Lagrangian can be written in the form
$\displaystyle-\mathcal{L}_{\chi\text{-masses}}$
$\displaystyle=\tfrac{1}{2}\big{(}\psi^{0}\big{)}^{\text{T}}{\mathcal{M}}_{\chi}\,\psi^{0}+\tfrac{1}{2}\big{[}\big{(}\psi^{-}\big{)}^{\text{T}}{\mathcal{M}}_{\chi^{\pm}}\,\psi^{+}+\big{(}\psi^{+}\big{)}^{\text{T}}{\mathcal{M}}_{\chi^{\pm}}^{\text{T}}\,\psi^{-}\big{]}+\text{h.\,c.}\,.$
(34)
The symmetric mass matrix of the neutralinos and the mass matrix of the
charginos are given by
$\displaystyle{\mathcal{M}}_{\chi}$
$\displaystyle=\begin{pmatrix}M_{1}&0&-m_{Z}\,s_{\text{w}}\,c_{\beta}&m_{Z}\,s_{\text{w}}\,s_{\beta}&0\\\
\cdot&M_{2}&m_{Z}\,c_{\text{w}}\,c_{\beta}&-m_{Z}\,c_{\text{w}}\,s_{\beta}&0\\\
\cdot&\cdot&0&-\left(\mu+\mu_{\text{eff}}\right)&-\lambda\,v\,s_{\beta}\\\
\cdot&\cdot&\cdot&0&-\lambda\,v\,c_{\beta}\\\
\cdot&\cdot&\cdot&\cdot&2\,\tfrac{\kappa}{\lambda}\,\mu_{\text{eff}}+\nu\end{pmatrix},$
(35a) $\displaystyle{\mathcal{M}}_{\chi^{\pm}}$
$\displaystyle=\begin{pmatrix}M_{2}&\sqrt{2}\,m_{W}\,s_{\beta}\\\
\sqrt{2}\,m_{W}\,c_{\beta}&\mu+\mu_{\text{eff}}\end{pmatrix}.$ (35b)
The abbreviations $s_{\text{w}}=g_{1}/\sqrt{g_{1}^{2}+g_{2}^{2}}$ and
$c_{\text{w}}=g_{2}/\sqrt{g_{1}^{2}+g_{2}^{2}}$ denote the sine and cosine of
the weak-mixing angle, respectively. We see that the mass scale of the MSSM-
like higgsinos is given by the sum $(\mu+\mu_{\text{eff}})$, and the mass
scale of the singlino is controlled by
$(2\,\tfrac{\kappa}{\lambda}\,\mu_{\text{eff}}+\nu)$. If only the electroweakinos were
taken into account at the tree level, it is apparent that the $\mu$NMSSM would
be indistinguishable from the NMSSM, since any shift in masses and mixing
induced through $\mu$ could be compensated through shifts in
$\mu_{\text{eff}}$. However, such shifts will induce differences in the Higgs
sector.
Including the singlino elements (with $\nu=0$ GeV as discussed in Section
2.1), an NMSSM-like neutralino spectrum can be generated, where
$(\mu+\mu_{\text{eff}})$ serves as the NMSSM-like $\mu_{\text{eff}}$ term and
$\kappa$ is rescaled as
$\displaystyle\kappa$
$\displaystyle\to\tilde{\kappa}=\kappa\,\frac{\mu+\mu_{\text{eff}}}{\mu_{\text{eff}}}\,.$
(36)
This rescaling on the other hand affects the Higgs spectrum, thus giving a
possible handle to distinguish the $\mu$NMSSM from the NMSSM.
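This interplay can be made explicit by diagonalizing Eq. (35a) numerically at fixed $(\mu+\mu_{\text{eff}})=-200\,\textrm{GeV}$, mirroring the setup of Fig. 1 below; $s_{\text{w}}^{2}$, $v$ and the couplings are assumed illustrative values:

```python
import numpy as np

# Tree-level neutralino spectrum from Eq. (35a) at fixed mu + mu_eff = -200 GeV.
M1, M2g, m_Z, v = 100.0, 300.0, 91.1876, 174.0     # gaugino masses as in Fig. 1
lam, kap, t_beta, nu = 0.2, 0.04, 3.5, 0.0
sw = np.sqrt(0.231); cw = np.sqrt(1 - 0.231)
beta = np.arctan(t_beta)
cb, sb = np.cos(beta), np.sin(beta)
mu_sum = -200.0                                    # (mu + mu_eff), kept fixed

def neutralino_masses(mu):
    mu_eff = mu_sum - mu
    M = np.array([
        [M1, 0, -m_Z*sw*cb, m_Z*sw*sb, 0],
        [0, M2g, m_Z*cw*cb, -m_Z*cw*sb, 0],
        [-m_Z*sw*cb, m_Z*cw*cb, 0, -mu_sum, -lam*v*sb],
        [m_Z*sw*sb, -m_Z*cw*sb, -mu_sum, 0, -lam*v*cb],
        [0, 0, -lam*v*sb, -lam*v*cb, 2*(kap/lam)*mu_eff + nu],
    ])
    return np.sort(np.abs(np.linalg.eigvalsh(M)))  # masses = |eigenvalues|

for mu in (0.0, 200.0, 1000.0):                    # the three cases of Fig. 1
    print(f"mu = {mu:6.0f} GeV:", np.round(neutralino_masses(mu), 1))
```

The singlino-like eigenvalue grows roughly linearly with $\mu$, as discussed below.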
Figure 1: The masses of the neutralinos and charginos are shown for different
values of $\mu$. The effective higgsino mass parameter is fixed at
$\mu+\mu_{\text{eff}}=-200\,\textrm{GeV}$, and the mass parameters for the
gauginos are set to $M_{1}=100\,\textrm{GeV}$ and $M_{2}=300\,\textrm{GeV}$.
The other relevant parameters are given in the legend of the figure. The
mostly bino- and wino-like states $\tilde{B}^{0}$ (purple) and $\tilde{W}^{0}$
(red) as well as the charginos $\tilde{\chi}^{\pm}$ (rose) have (nearly)
constant masses. The masses of the two mostly higgsino-like states
$\tilde{H}^{0}$ (orange) and the mostly singlino-like state $\tilde{S}^{0}$
(blue) vary visibly.
For the case where $\kappa$ and $\lambda$ are kept fixed, an interesting
behavior can be observed for light higgsinos. For small
$(\mu+\mu_{\text{eff}})$ huge cancellations may occur between the two
contributions with large $\mu>0$ GeV and $\mu_{\text{eff}}$ of the same size
but opposite sign. As a consequence, the singlino state becomes much heavier
compared to the case of the NMSSM (of the order of $\mu_{\text{eff}}$). Such a
scenario is displayed in Fig. 1 where the neutralino–chargino spectrum is
shown for the cases $\mu\in\{0,200,1000\}$ GeV ($\nu$ is set equal to zero).
The left column with $\mu=0$ GeV corresponds to the case of the NMSSM. The
masses are obtained by diagonalizing the tree-level mass matrices in Eq. (35).
With respect to the $\mathbb{Z}_{3}$-invariant NMSSM, the most significant
alteration is visible in the singlino component (blue): the mass shows an
approximately linear increase with $\mu$, since the sum $(\mu+\mu_{\text{eff}})$ is
kept fixed. Due to the varying mixing, some influence on the masses of the
other two neutral higgsino states (orange) can be seen despite a constant
higgsino mass parameter $(\mu+\mu_{\text{eff}})$; the impact on the gaugino
states (red and purple) remains negligible. The chargino masses (rose) are not
influenced by the different choices.
In a scenario as discussed above, with light higgsinos as well as large $\mu$
and $\mu_{\text{eff}}$ of opposite signs, the lightest neutralino is typically
not the singlino state as the singlino mass is pushed up, see Fig. 1. The
lightest supersymmetric particle (LSP), however, tends to be the gravitino,
which risks overclosing the universe as a dark-matter candidate. In this
case, the inflationary scenario has to be such that the reheating temperature
stays below a certain value and gravitinos are not overproduced in the early
universe, see our discussion in Section 2.1.
### 2.7 Sfermion masses
The mass term for each charged sfermion—for which we distinguish the
superpartners of the left- and right-handed components by the notation
$\tilde{f}_{\text{L}}$ and $\tilde{f}_{\text{R}}$, respectively—takes the
following form in the Lagrangian
$\displaystyle-\mathcal{L}_{\tilde{f}\text{-masses}}=\left(\tilde{f}_{\text{L}}^{\dagger},\tilde{f}_{\text{R}}^{\dagger}\right){\mathcal{M}}^{2}_{\tilde{f}}\begin{pmatrix}\tilde{f}_{\text{L}}\\\
\tilde{f}_{\text{R}}\end{pmatrix},$ (37)
where the squared mass matrix reads
$\displaystyle{\mathcal{M}}^{2}_{\tilde{f}}$
$\displaystyle=\begin{pmatrix}m_{f}^{2}+m_{\tilde{f}_{\text{L}}}^{2}+m_{Z}^{2}\,c_{2\beta}\left(T^{(3)}_{f}-Q_{f}\,s_{\text{w}}^{2}\right)&m_{f}\left[A_{f}-\theta_{f}\left(\mu+\mu_{\text{eff}}\right)\right]\\\
m_{f}\left[A_{f}-\theta_{f}\left(\mu+\mu_{\text{eff}}\right)\right]&m_{f}^{2}+m_{\tilde{f}_{\text{R}}}^{2}+m_{Z}^{2}\,c_{2\beta}\,Q_{f}\,s_{\text{w}}^{2}\end{pmatrix},$
(38a) $\displaystyle\theta_{f}$
$\displaystyle=\begin{cases}t_{\beta}\,,&f\in\{e,\mu,\tau,d,s,b\}\,,\\ \tfrac{1}{t_{\beta}}\,,&f\in\{u,c,t\}\,.\end{cases}$ (38b)
Therein we denote the fermion mass by $m_{f}$, the bilinear soft-breaking
parameters by $m_{\tilde{f}_{\text{L,R}}}$, the trilinear soft-breaking
parameter by $A_{f}$, and the electric and weak charges by $Q_{f}$ and
$T^{(3)}_{f}$.
In this sector we encounter the sum $(\mu+\mu_{\text{eff}})$ in the off-
diagonal elements of the sfermion mass matrices as the only difference
compared to the NMSSM or MSSM. If this sum becomes large, $A_{f}/\theta_{f}$
needs to be adjusted in order to avoid tachyonic sfermions in particular for
the third generation squarks. In that case, bounds from vacuum stability (see
e. g. Ineqs. (24)) can also constrain the viable size of
$\left(\mu+\mu_{\text{eff}}\right)$.
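A small sketch of Eq. (38) for the lighter stau illustrates how a large, $t_{\beta}$-enhanced $(\mu+\mu_{\text{eff}})$ can drive the lighter eigenvalue tachyonic; the inputs (large $t_{\beta}$, light soft masses) are deliberately extreme placeholders:

```python
import numpy as np

# Stau mass matrix of Eq. (38); theta_f = t_beta for down-type sfermions.
m_Z, t_beta, sw2 = 91.1876, 50.0, 0.231
c2b = np.cos(2*np.arctan(t_beta))

def stau_masses2(m_soft, A_tau, mu_tot):
    m_tau, T3, Q = 1.777, -0.5, -1.0
    X = m_tau*(A_tau - t_beta*mu_tot)          # off-diagonal mixing entry
    M2 = np.array([[m_tau**2 + m_soft**2 + m_Z**2*c2b*(T3 - Q*sw2), X],
                   [X, m_tau**2 + m_soft**2 + m_Z**2*c2b*Q*sw2]])
    return np.linalg.eigvalsh(M2)

for mu_tot in (500.0, 5000.0):                 # (mu + mu_eff) in GeV
    print(f"mu+mu_eff = {mu_tot:6.0f} GeV -> m^2 eigenvalues:",
          np.round(stau_masses2(500.0, 0.0, mu_tot), 0))
```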
## 3 Phenomenological analysis
In this section we investigate various scenarios of the $\mu$NMSSM with a
particular focus on the $\mu$ parameter. We will point out differences between
the $\mu$NMSSM and the ordinary $\mathbb{Z}_{3}$-preserving NMSSM, where the
latter corresponds to the limit $\mu=0$ GeV of the $\mu$NMSSM. At first we
qualitatively define the investigated scenarios, before we numerically analyze
them.
### 3.1 Viable parameter space compatible with theoretical and experimental
bounds
In the previous sections we have analytically discussed the relevant sectors
of the $\mu$NMSSM with respect to effects of the inflation-inspired $\mu$
parameter. Before we provide a phenomenological analysis—including the higher-
order effects specified in Section 2.4—we discuss the viability of various
parameter regions. As discussed in Section 2.1 we focus on scenarios with non-
zero $\mu$ and $B_{\mu}$, but set all other $\mathbb{Z}_{3}$-violating
parameters in the superpotential (9) and soft-breaking Lagrangian (10), i. e.
$\xi$, $C_{\xi}$, $\nu$ and $B_{\nu}$, equal to zero.
The $\mu$ parameter is positive by construction in the inflation-inspired
model, see Eqs. (6) and (8). Furthermore, we only investigate
scenarios with $\mu\lesssim 2$ TeV to stay in the phenomenologically
interesting region for the collider studies. Still, we point out that also
much larger scales are viable from the inflationary point of view. As
discussed in Section 2.1, $\mu\simeq\frac{3}{2}\,m_{3/2}\,10^{5}\,\lambda$
implies that much larger values of $\mu$ are possible depending on $\lambda$
and $m_{3/2}$. However, large values of $\mu$ can cause tachyonic states as
discussed in Section 2.2.
We characterize the scenarios in the following parameter regions: small values
of $\mu\simeq 1\,\textrm{GeV}$ (we do not set $\mu$ exactly to zero for purely
technical reasons: the MSSM-like two-loop contributions to the Higgs masses
are taken from FeynHiggs, where the $\mu$ parameter of the $\mu$NMSSM is
identified with the $\mu$ parameter of the MSSM, and in the limit $\mu\to 0$
GeV numerical instabilities appear); large values of $\mu\gtrsim
1\,\textrm{TeV}$ with $\mu_{\text{eff}}\simeq-\mu$; and values of
$\mu\gtrsim 100\,\textrm{GeV}$ with moderate or small
$\lvert\mu_{\text{eff}}\rvert\lesssim 100\,\textrm{GeV}$.
small $\mu\simeq 1\,\textrm{GeV}$:
in the case of small $\mu$, the soft-breaking term $B_{\mu}\,\mu$ also becomes
small. Since in addition we set all other $\mathbb{Z}_{3}$-violating
parameters to zero, we recover the standard NMSSM in this limit (see the
discussion in Fig. 1). Thus, differences between the NMSSM and the $\mu$NMSSM
can directly be deduced by comparing scenarios with zero and non-zero $\mu$
parameter.
large $\mu\gtrsim 1\,\textrm{TeV}$ with $\mu_{\text{eff}}\simeq-\mu$:
as discussed in Section 2.6, the higgsino masses depend only on the sum
$\left(\mu+\mu_{\text{eff}}\right)$ at the tree level. The same combination
contributes to the sfermion mixing in combination with the trilinear soft
SUSY-breaking terms. In order to keep these quantities small at a large value
of $\mu$, one can assign the same value with opposite sign to
$\mu_{\text{eff}}$; note, however, that the region
$\lvert\mu+\mu_{\text{eff}}\rvert\lesssim 100\,\textrm{GeV}$ is experimentally
excluded by direct searches for charginos [151, 152]. An immediate consequence
of large, opposite sign $\mu_{\text{eff}}$ and $\mu$ is that the singlino and
the singlet-like Higgs states receive large masses of the order of
$\lvert\mu_{\text{eff}}\rvert$ [see the $(5,5)$ entry in Eq. (35a) and the
$(3,3)$ elements in Eqs. (17a) and (17b)], which provides a potential
distinction from the standard NMSSM. Similar to the increase of the singlino
mass, fixing $\left(\mu+\mu_{\text{eff}}\right)$ together with an increase in
$\mu$—and thus an increase in the absolute value of $\mu_{\text{eff}}$—lifts
the masses of the singlet states also in the Higgs sector. In the neutralino
sector these contributions can be absorbed by a rescaling of $\kappa$, see
Section 2.6; however, in the Higgs sector $\mu_{\text{eff}}$ also appears in
other combinations, thus leaving traces which can potentially distinguish the
$\mu$NMSSM from the NMSSM.
$\mu\gtrsim 100\,\textrm{GeV}$ with $\lvert\mu_{\text{eff}}\rvert\lesssim 100\,\textrm{GeV}$:
if we allow for a large $\mu$ parameter without constraining the sum
$\left(\mu+\mu_{\text{eff}}\right)$, the spectra of higgsinos, sfermions and
Higgs bosons are changed at the same time. A large sum
$\left(\mu+\mu_{\text{eff}}\right)$ causes very large mixing between the
singlet and doublet sectors (see discussion in Section 2.2), eventually
driving one Higgs state tachyonic. In some part of the parameter space this
can be avoided by tuning $B_{\mu}$ accordingly. Another constraint arises from
the sfermion sector, most notably the sbottoms and staus: a large
$\left(\mu+\mu_{\text{eff}}\right)$ induces large terms in the off-diagonal
elements of the sfermion mass matrices (enhanced by $\tan\beta$ for the case
of down-type sfermions) which can potentially cause tachyons, also depending
on the values of the trilinear soft-breaking parameters $A_{f}$. As discussed
in Section 2.3, constraints from charge- and color-breaking minima induced by
too large soft-breaking trilinear parameters (see Ineqs. (24) with $\mu$
promoted to $\left(\mu+\mu_{\text{eff}}\right)$), have a much smaller impact
in the $\mu$NMSSM as compared to the MSSM [108].
A special case of this scenario is the possibility of having $\mu$ at the
electroweak scale in combination with an almost vanishing
$\lvert\mu_{\text{eff}}\rvert\ll\mu$. This implies that
$\left(\mu+\mu_{\text{eff}}\right)$ remains at the electroweak scale. In
contrast to the standard NMSSM this scenario allows the occurrence of both
$\kappa\gg\lambda$ and a light singlet sector. As discussed in Section 2.2,
the mixing between singlets and doublets is in this case dominated by terms
proportional to $\mu_{\text{eff}}^{-1}$. We will explicitly discuss such a
scenario in Section 3.5.
Table 1: The input parameters which are fixed throughout our numerical
analysis (interpreted as on-shell parameters if not specified otherwise). The
gaugino mass parameters are denoted as $M_{i}$ with $i=1,2,3$. The trilinear
soft-breaking terms for the sfermions $A_{f_{g}}$ carry the generation index
$g=1,2,3$. The charged Higgs mass $m_{H^{\pm}}$ is fixed to the shown value,
if not mentioned otherwise.
$m_{H^{\pm}}=800\,\textrm{GeV}$,  $m_{t}=173.2\,\textrm{GeV}$,  $\alpha_{s}(m_{Z})=0.118$,  $G_{F}=1.16637\cdot 10^{-5}\,\textrm{GeV}^{-2}$,
$m_{Z}=91.1876\,\textrm{GeV}$,  $m_{W}=80.385\,\textrm{GeV}$,  $M_{3}=2.5\,\textrm{TeV}$,  $M_{2}=0.5\,\textrm{TeV}$,  $M_{1}=\frac{5}{3}\frac{g_{1}^{2}}{g_{2}^{2}}\,M_{2}$,
$m_{\tilde{f}_{\text{L}}}=2\,\textrm{TeV}$,  $m_{\tilde{f}_{\text{R}}}=2\,\textrm{TeV}$,  $A_{f_{3}}=4\,\textrm{TeV}$,  $A_{f_{1,2}}=0\,\textrm{TeV}$.
There are more parameters that are relevant for the following phenomenological
studies. We keep those fixed which behave similarly to the MSSM and NMSSM.
The choice of our constant input values is given in Tab. 1. Furthermore, we
specify the values of $t_{\beta}$, $\kappa$, $\lambda$, and $A_{\kappa}$
directly at the respective places. Besides the analyses where we explicitly
study the dependence on $B_{\mu}$, we use $B_{\mu}=0\,\textrm{GeV}$ as default
value.
As our analysis is focused on the impact of the inflation model, we are not
going to discuss the influence of the sfermion parameters. If not mentioned
otherwise, we use $m_{\tilde{f}}\equiv m_{\tilde{f}_{L}}=m_{\tilde{f}_{R}}$
and $A_{f_{3}}/m_{\tilde{f}}=2$, which maximizes the prediction for the SM-
like Higgs-boson mass at $\mu+\mu_{\text{eff}}=0\,\textrm{GeV}$. The gluino
mass parameter $M_{3}$ is set well above the squark masses of the third
generation which is in accordance with the existing LHC bounds. For
completeness, we also give the parameters of the SM which are most relevant
for our numerical study in Tab. 1.
The gaugino-mass parameters $M_{1}$ and $M_{2}$ do not play a big role in the
following analysis, but are necessary input parameters for the mass matrices
of the charginos and neutralinos in Eqs. (35). We set
$M_{2}=500\,\textrm{GeV}$ and fix $M_{1}$ via the usual GUT relation, see Tab.
1. Our phenomenological analysis is most sensitive to the neutralino and
chargino spectrum if a Higgs boson can decay into them. This is in particular
the case if the particle spectrum contains light higgsinos, whose masses are
controlled through $\left(\mu+\mu_{\text{eff}}\right)$. For a scenario with
light higgsinos and a light singlino we will later also discuss the
electroweakino phenomenology at a linear collider, see Section 3.4.
As we use $\mu\simeq\frac{3}{2}\,m_{3/2}\,10^{5}\,\lambda$ and focus on
$\mu\lesssim 2\,\textrm{TeV}$, we are considering scenarios where the
gravitino typically is the LSP. We do not specify the mediator mechanism of
SUSY breaking; however, we assume that such a light gravitino is always
possible. Although the gravitino is the dark-matter candidate, traditional
collider searches for a neutralino LSP do apply in our case: for instance, if
the next-to LSP (NLSP) is gaugino-like, it can decay into a photon and the
gravitino, where the NLSP lifetime is typically so large that it can escape
the detector [153]. We roughly estimate the NLSP phenomenology via the
approximate partial decay width of the neutralino NLSP into a photon or $Z$
boson and gravitino $\psi_{3/2}$ according to Refs. [154, 155, 156]
$\displaystyle\Gamma_{\tilde{\chi}^{0}_{1}\to\gamma\psi_{3/2}}\simeq\frac{\left|N_{11}\,c_{\text{w}}+N_{12}\,s_{\text{w}}\right|^{2}}{48\,\pi\,M_{\text{Pl}}^{2}}\frac{m_{\tilde{\chi}_{1}^{0}}^{5}}{m_{3/2}^{2}}\,,\quad\Gamma_{\tilde{\chi}^{0}_{1}\to
Z\psi_{3/2}}\simeq\frac{\left|-N_{11}\,s_{\text{w}}+N_{12}\,c_{\text{w}}\right|^{2}}{48\,\pi\,M_{\text{Pl}}^{2}}\frac{m_{\tilde{\chi}_{1}^{0}}^{5}}{m_{3/2}^{2}}\left(1-\frac{m_{Z}^{2}}{m_{\tilde{\chi}_{1}^{0}}^{2}}\right)^{4},$
(39)
where we expanded in a small gravitino mass $m_{3/2}$ and use $s_{\text{w}}$
and $c_{\text{w}}$ for the sine and cosine of the weak mixing angle,
respectively. The neutralino mixing matrix elements $N_{ij}$ follow from the
diagonalization of Eq. (35a). As an example for the decay of the NLSP with
$m_{\tilde{\chi}_{1}^{0}}\simeq 100\,\textrm{GeV}$ and $m_{3/2}\simeq
10\,\textrm{MeV}$, we find a lifetime of $\tau\equiv
1/\Gamma=\mathcal{O}(1\,\mathrm{s})$. Thus, the NLSP decays outside of the
detector and is counted as missing energy. Nevertheless, such decays might be
of certain interest with respect to future experimental searches for long-
lived particles like the MATHUSLA experiment [157]. Note that for a higgsino-
like NLSP the decay into a $Z$ boson and the gravitino is obtained by
replacing the mixing factor in Eq. (39) by
$\lvert{-}N_{13}c_{\beta}+N_{14}s_{\beta}\rvert^{2}$. If kinematically open,
also the decay into a (singlet-like) $\mathcal{CP}$-even or $\mathcal{CP}$-odd
Higgs boson and the gravitino can occur (see Ref. [155]), but this decay mode
does not change the qualitative features described above.
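The quoted lifetime estimate can be checked directly from Eq. (39). A minimal sketch, assuming a bino-like NLSP with a mixing factor close to one and taking $M_{\text{Pl}}$ as the reduced Planck mass (an assumption about the normalization used here):

```python
import math

# Photon channel of Eq. (39) with |N11 cw + N12 sw|^2 ~ 1 (bino-like NLSP).
M_Pl = 2.435e18             # reduced Planck mass in GeV (assumed normalization)
hbar = 6.582e-25            # GeV * s
m_chi, m_32 = 100.0, 0.01   # NLSP and gravitino masses in GeV (10 MeV gravitino)

Gamma = m_chi**5 / (48*math.pi*M_Pl**2*m_32**2)
print(f"tau = {hbar/Gamma:.1f} s")   # O(1 s): the NLSP decays outside the detector
```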
We have chosen $m_{H^{\pm}}$ as an input parameter and adjust $A_{\lambda}$
according to Eq. (19). If not denoted otherwise, we set
$m_{H^{\pm}}=800\,\textrm{GeV}$. We use HiggsBounds version 5.1.0beta [33, 34,
35, 36, 37] in order to implement the constraints on the parameter space of
each of our scenarios resulting from the search limits for additional Higgs
bosons. In this context, the exclusion limits from $H,A\to\tau\tau$ decays are
particularly important. For relatively low values of $\tan\beta$ the choice of
$m_{H^{\pm}}=800\,\textrm{GeV}$ is well compatible with these bounds. The code
HiggsBounds determines for each parameter point the most sensitive channel and
evaluates whether the parameter point is excluded at the $95\%$ confidence
level (C.L.). We use those exclusion bounds as a hard cut in the parameter
spaces of our analyses.
We also indicate the regions of the parameter space which provide a Higgs
boson that is compatible with the observed state at $125$ GeV. These regions
are obtained with the help of HiggsSignals version 2.1.0beta [38]. The code
HiggsSignals evaluates a total $\chi^{2}$ value, obtained as a sum of the
$\chi^{2}$ values for each of the $85$ implemented observables. Four more
observables are added, which test the compatibility of the predicted Higgs-
boson mass with the observed value of $125$ GeV. This latter test includes a
theoretical uncertainty on the predicted Higgs-boson mass of about $3$ GeV,
such that a certain deviation from the four measured mass values (from the two
channels with either a $\gamma\gamma$ or a $ZZ^{(*)}$ final state from both
experiments ATLAS and CMS) is acceptable. Thus, in total HiggsSignals tests
$89$ observables.
Since all our two-dimensional figures include a region with a SM-like Higgs boson (the minimal $\chi^{2}$ value obtained in our numerical analysis is $\chi_{m}^{2}=74.6$; all subsequently discussed benchmark planes include a parameter region with $\chi_{m}^{2}<80$, and further details are provided below), we classify the compatibility with the observed state as follows: we determine
the minimal value of $\chi^{2}$, denoted by $\chi_{m}^{2}$, in the two-
dimensional plane and then calculate the deviation
$\Delta\chi^{2}=\chi^{2}-\chi_{m}^{2}$ from the minimal value in each
parameter point. We allow for a maximal deviation of $\Delta\chi^{2}<5.99$,
which corresponds to the $95\%$ C.L. region in the Gaussian limit. All
parameter points that fall in this region $\Delta\chi^{2}<5.99$ are considered
to successfully describe the observed SM-like Higgs boson.
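The classification just described amounts to a simple cut on the $\chi^{2}$ map of a benchmark plane. A minimal sketch is given below; in practice the $\chi^{2}$ values per parameter point come from an actual HiggsSignals run, and the small grid used here is an illustrative toy:

# Classify parameter points in a 2D plane: a point is compatible with the
# observed Higgs boson if its chi^2 lies within 5.99 of the plane's minimum
# (95% C.L. for two fitted parameters in the Gaussian limit).
import numpy as np

def compatible_region(chi2_plane, delta_max=5.99):
    """Boolean mask of parameter points passing the Delta-chi^2 cut."""
    chi2_min = np.nanmin(chi2_plane)
    return (chi2_plane - chi2_min) < delta_max

# toy example: 3x3 grid of chi^2 values
chi2 = np.array([[74.6, 76.0, 85.0],
                 [75.3, 79.9, 90.2],
                 [80.4, 88.1, 95.0]])
print(compatible_region(chi2))   # True where Delta-chi^2 < 5.99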
Lastly, we note that HiggsBounds and HiggsSignals are operated through an
effective-coupling input. We will comment on the results of the two codes
where appropriate.
For our implementation of the constraints from the electroweak vacuum
stability we refer to Section 2.3. For informative reasons, we distinguish
long-lived vacua from short-lived ones in the numerical analysis. We do not
explicitly enforce a perturbativity bound on $\kappa$ and $\lambda$, but
discuss this issue below.
### 3.2 Higgs-boson and neutralino mass spectra
In this section, we point out the differences of the Higgs-boson and
neutralino mass spectra in the $\mu$NMSSM with respect to the NMSSM. Similar
to the case of the MSSM, the charged and the $\mathcal{CP}$-even heavy doublet
as well as the MSSM-like $\mathcal{CP}$-odd Higgs bosons are (for sufficiently
large $m_{H^{\pm}}\gg M_{Z}$) quasi-degenerate.
In Fig. 2, we show the masses of the Higgs bosons for vanishing $A_{\kappa}$ in the left frame, $A_{\kappa}=100\,\textrm{GeV}$ in the middle frame, and the
masses of the neutralinos in the right frame. Each frame contains three
different scenarios which are characterized by the three values
$\mu\in\\{0,200,1000\\}\,\textrm{GeV}$ while keeping all other parameters
fixed: $\mu+\mu_{\text{eff}}=-200\,\textrm{GeV}$, $t_{\beta}=3.5$,
$\lambda=0.2$, $\kappa=0.2\,\lambda$, and the other parameters as given in
Tab. 1. The additional $\mu$ term has the biggest influence on the singlet-
like states $s^{0}$ and $a_{s}$, as well as the singlino-like state
$\tilde{S}^{0}$. In analogy to the discussion in Fig. 1, the reason for this
behavior is the fixed sum $\left(\mu+\mu_{\text{eff}}\right)$: an increase in
$\mu$ causes a larger negative $\mu_{\text{eff}}$ which primarily drives the
singlet-mass terms in the $(3,3)$ elements of Eqs. (17a) and (17b), and the
singlino-mass term in the $(5,5)$ element of Eq. (35a) to large values. In the
investigated parameter region, the mass of the $\mathcal{CP}$-odd singlet is
also very sensitive to $A_{\kappa}$: in order to avoid a tachyonic state
$a_{s}$ over a large fraction of the parameter space, it is essential to keep
$A_{\kappa}$ sufficiently large. However, in the left frame a scenario is
shown where even a vanishing $A_{\kappa}$ is possible. It generates a rather
light $\mathcal{CP}$-odd singlet-like state, whereas a sizable
$A_{\kappa}=100\,\textrm{GeV}$ (middle) lifts this mass up. There is thus the
potential for a distinction between the NMSSM-limit for $\mu=0\,\textrm{GeV}$
and the $\mu$NMSSM with a large $\mu=1\,\textrm{TeV}$. Note that in the middle
frame for $\mu=200\,\textrm{GeV}$, the purple and blue lines are on top of
each other.
The masses of the neutralino sector do not depend on $A_{\kappa}$ at the tree
level. Concerning the Higgs sector, only the two cases in Fig. 2 with $\mu=0$
GeV and $A_{\kappa}\in\\{0,100\\}\,\textrm{GeV}$ yield a SM-like Higgs boson
that is compatible with the experimental data with $\chi^{2}$ values of
maximal $77$. These two cases are also compatible with searches for additional
Higgs bosons probed by HiggsBounds. The two cases with $\mu=1$ TeV and
$A_{\kappa}\in\\{0,100\\}\,\textrm{GeV}$ yield minimal $\chi^{2}$ values of
$82.6$ and $84.0$, respectively. The larger values of $\chi^{2}$ mainly arise
because the SM-like Higgs-boson mass is slightly below $122$ GeV. The large
variation with $\mu$ for the mass prediction of the mostly SM-like Higgs boson
is mainly induced by a large mixing with the $\mathcal{CP}$-even singlet. The
mixing for $\mu=200\,\textrm{GeV}$ in this scenario becomes very large for
both values of $A_{\kappa}$ such that these cases are outside the parameter
region that is compatible with the constraints by HiggsSignals. Note that the
apparent preference for $\mu=0\,\textrm{GeV}$ over
$\mu\in\\{200,1000\\}\,\textrm{GeV}$ in this scenario is purely accidental and
could be reversed by a slight shift in the input parameters, see the
discussion below.
Figure 2: The loop-corrected Higgs-boson spectrum and the tree-level
neutralino spectrum are shown in the $\mu$NMSSM for scenarios with
$\mu\in\\{0,200,1000\\}\,\textrm{GeV}$ and
$\mu+\mu_{\text{eff}}=-200\,\textrm{GeV}$ fixed. The parameters are chosen
such that the state $h^{0}$ (black) that is mostly SM-like has a mass around
$125\,\textrm{GeV}$; the gray band shows a $3\,\textrm{GeV}$ interval around
the experimentally measured Higgs mass. Furthermore, the masses of the
$\mathcal{CP}$-even singlet-like state $s^{0}$ (blue), the $\mathcal{CP}$-odd
singlet-like state $a_{s}$ (purple), and the heavy $\mathcal{CP}$-even Higgs
doublet and MSSM-like $\mathcal{CP}$-odd components $H^{0},A^{0}$ with values
close to the input $m_{H^{\pm}}\sim 800\,\textrm{GeV}$ (red) are shown, where
the assignments are made according to the loop-corrected mixing matrix
$Z^{\text{\tiny mix}}_{ij}$ for the Higgs sector, see Section 2.4. For the
neutralino sector on the right, yellow lines show the dominantly bino-like
state $\tilde{B}^{0}$, and green lines the wino-like state $\tilde{W}^{0}$.
The singlino $\tilde{S}^{0}$ is shown in rose and the two (doublet) higgsinos
$\tilde{H}^{0}$ appear in orange. The assignments are determined by the tree-
level mixing matrix. The parameter values are given in the plot and in Tab. 1.
Figure 3: In a similar manner as in Fig. 2, the spectra of Higgs bosons and
neutralinos are shown in the $\mu$NMSSM. The neutralino masses are invariant
under changes in $\mu$ by identifying the sum
$\left(\mu+\mu_{\text{eff}}\right)$ of the $\mu$NMSSM with the
$\mu_{\text{eff}}$ term of the NMSSM, and by rescaling $\kappa$ according to
Eq. (36). We set $\kappa=0.8\,\lambda$, and for $\mu=0$ GeV we assign
$\mu_{\text{eff}}=-200\,\textrm{GeV}$. The Higgs mass spectra are slightly
affected by the rescaling.
As already mentioned in Section 2.6, the electroweakino sector alone, at least
at the tree level, does not allow one to distinguish the $\mu$NMSSM from the
NMSSM: one can keep the neutralino–chargino spectrum at the tree level
invariant by identifying the sum $\left(\mu+\mu_{\text{eff}}\right)$ with the
$\mu_{\text{eff}}$ term of the NMSSM, and rescaling $\kappa$ according to Eq.
(36). However, as pointed out above, the rescaling does have an impact on the
Higgs spectrum. We show in Fig. 3 spectra for
$\mu\in\\{0,200,1000\\}\,\textrm{GeV}$ and
$A_{\kappa}\in\\{0,100\\}\,\textrm{GeV}$ with fixed
$\mu+\mu_{\text{eff}}=-200\,\textrm{GeV}$. The neutralino spectrum is shown in only one column in the rightmost frame. In analogy to Fig. 2, the left and
middle frames show the Higgs-boson masses for the two values of $A_{\kappa}$
where one still can see the effect of a varying $\mu$ term. While
contributions to the mass matrices in Eqs. (17) which are proportional to
$\left(\mu+\mu_{\text{eff}}\right)$ or $\kappa\,\mu_{\text{eff}}$ are kept
constant, other terms ${\propto}\,\mu_{\text{eff}}^{-1},\mu_{\text{eff}}^{-2}$
induce variations. Accordingly, the singlet-like Higgs masses in Fig. 3 are
only slightly sensitive to $\mu$, much less than the changes observed in Fig. 2. A rising $\mu$ slightly increases the mass splitting between the singlet-
like and the SM-like Higgs state.
Still, while the Higgs masses remain almost constant for not too small
$\mu_{\text{eff}}$, the doublet–singlet mixing can be strongly affected by
varying $\mu$ and $\mu_{\text{eff}}$ (but keeping their sum constant), in
particular if the doublet–singlet mixing almost vanishes at a certain choice
of $\mu$ and $\mu_{\text{eff}}$. In general, the mixing between the singlet
and doublet states is affected by a large $\lvert\mu_{\text{eff}}\rvert$.
However, by rescaling $\kappa$ according to Eq. (36) all contributions linear
in $\mu_{\text{eff}}$ are absorbed, while the contributions
${\propto}\,\mu_{\text{eff}}^{-1}$ depend on the values of $t_{\beta}$,
$M_{H^{\pm}}$ and $B_{\mu}$, see Eqs. (20a) and (20b). (In the GNMSSM, there are further possibilities of absorbing shifts in $\mu_{\text{eff}}$ through a redefinition of other $\mathbb{Z}_{3}$-violating parameters.) In
Section 3.5 we will further investigate scenarios with very small
$\mu_{\text{eff}}$ and enhanced Higgs-boson mixing.
In Fig. 3 only the case $A_{\kappa}=100$ GeV in combination with $\mu=0$ GeV
is allowed by HiggsBounds and HiggsSignals ($\chi^{2}=80.1$), since the other
scenarios are either ruled out by the decay of the SM-like Higgs into a pair
of light $\mathcal{CP}$-odd singlets or by a too large deviation of the SM-
like Higgs-boson mass from $125$ GeV. In addition to our discussion above, we
emphasize that in particular the latter exclusion can be easily avoided
through a slight adjustment of the input parameters.
### 3.3 Parameter scan
We have discussed above the dependence of the Higgs masses and of the
condition for the stability of the electroweak vacuum on the model parameters.
Apart from the fixed parameters in Tab. 1, we choose seven “free” parameters
that we vary in the following regimes for our analyses:
$\displaystyle\begin{aligned}
\mu_{\text{eff}}&\in[-2,2]\,\textrm{TeV}\,,&\mu&\in[0,2]\,\textrm{TeV}\,,&B_{\mu}&\in[-3,3]\,\textrm{TeV}\,,\\\\
\lambda&\in[10^{-4},1]\,,&\kappa&\in[10^{-4},1]\,,&A_{\kappa}&\in\\{0,100\\}\,\textrm{GeV}\,,&\tan\beta&\in[1.5,3.5]\,,\end{aligned}$
(40)
where the largest values of $\lambda$ and $\kappa$ in the specified range of
(40) violate the approximate perturbativity bound
$\lambda^{2}+\kappa^{2}\lesssim 0.5$. (This perturbativity bound was explicitly derived for the NMSSM in Ref. [158]; according to the beta functions for $\lambda$ and $\kappa$ (see Appendix A), no additional scale-dependent contribution is introduced by the $\mu$NMSSM at the one-loop order.)
For the results presented in the following, this bound is always fulfilled and
lies outside the plot ranges. Values of $\tan\beta\gtrsim 4$ push the model
into the MSSM-like regime and are of less interest for studying the $\mu$NMSSM
effects.
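A minimal sketch of how a scan over the ranges in (40) could be set up is given below, with the perturbativity check applied a posteriori. Whether the actual scan uses random or grid sampling, and linear or logarithmic spacing, is not specified here, so uniform random sampling is assumed purely for illustration:

# Illustrative sampling of the scan ranges in (40), with the approximate
# perturbativity check lambda^2 + kappa^2 <~ 0.5 from the text applied.
import random

def draw_point():
    return {
        "mu_eff": random.uniform(-2000.0, 2000.0),   # GeV
        "mu":     random.uniform(0.0, 2000.0),       # GeV
        "B_mu":   random.uniform(-3000.0, 3000.0),   # GeV
        "lam":    random.uniform(1e-4, 1.0),
        "kap":    random.uniform(1e-4, 1.0),
        "A_kap":  random.choice([0.0, 100.0]),       # GeV
        "tan_b":  random.uniform(1.5, 3.5),
    }

def perturbative(p):
    return p["lam"]**2 + p["kap"]**2 <= 0.5

points = [p for p in (draw_point() for _ in range(10000)) if perturbative(p)]
print(f"{len(points)} of 10000 sampled points satisfy the bound")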
We have performed a scan over the parameter space defined in (40) and
identified regions which are compatible with current observations concerning
the properties of the SM-like Higgs boson at $125\,\textrm{GeV}$ and the
limits from searches for additional Higgs bosons with HiggsBounds and
HiggsSignals as described above. In the following, we present a selection of
results from this scan; different regions of vacuum stability are illustrated,
and the experimental constraints from Higgs physics are indicated. While we
display some typical examples, it should be noted that similar observations
hold for other regions in the parameter space as well.
In Figs. 4–7, we present a selection of parameter regions. Before we discuss
them individually, their common features are explained. The colored dots in
the background display different states of the electroweak vacuum: we
distinguish stable (blue), long-lived meta-stable (purple), short-lived meta-
stable (red), and tachyonic (rose). As discussed above, we regard not only
tachyonic but also meta-stable regions as excluded in the context of this
inflationary scenario, but nevertheless display long- and short-lived meta-
stable regions for illustration. Furthermore, we indicate those points that do
not fulfill Ineq. (23) and thus have no singlet vev (orange), although, as
explained in Section 2.3, this constraint is not relevant for the $\mu$NMSSM.
We overlay mass contours for the SM-like Higgs $h^{0}$ (black), the
$\mathcal{CP}$-even singlet-like Higgs $s^{0}$ (blue), and the
$\mathcal{CP}$-odd singlet-like Higgs $a_{s}$ (red). The spectrum is
calculated taking into account the full one-loop and the known MSSM-like two-
loop contributions as described in Section 2.4. The assignment of the labels
$h^{0}$, $s^{0}$ and $a_{s}$ to the loop-corrected states is determined by the
largest respective contribution in the mixing matrix $Z^{\text{\tiny
mix}}_{ij}$. We emphasize again that the parameters of the stop sector
specified in Tab. 1 for the given scale of SUSY masses maximize the SM-like
Higgs mass for $\mu+\mu_{\text{eff}}=0\,\textrm{GeV}$; therefore, lower values
for the SM-like Higgs mass could easily be obtained by reducing the mixing in
the stop sector. Finally, we also indicate a naïve exclusion bound from direct
searches for charginos by the gray-shaded band: Ref. [152] reports a lower
bound on the chargino mass of $94\,\textrm{GeV}$ which translates into the
requirement that $\lvert\mu+\mu_{\text{eff}}\rvert$ must be above that value
in the $\mu$NMSSM. Lastly, all Figs. 4–7 show the region of parameter points
that successfully passed HiggsBounds and HiggsSignals and thus, in particular,
yield a SM-like Higgs boson compatible with the observed state at
$125\,\textrm{GeV}$. This region is represented through the larger, light
green dots in the background. We refer to Section 3.1 for our statistical
interpretation of the results obtained from the two codes.
A large part of the parameter region that is consistent with the measured SM-
like Higgs mass is also in concordance with an absolutely stable electroweak
vacuum. Small intersections between stable regions and regimes with tachyonic
Higgs states exist, where there are meta-stable non-standard vacua. The
strongest constraints arise from the existence of tachyonic masses for one of
the physical Higgs states at the tree level. In the remaining region only a
small fraction of points has a global minimum which does not coincide with the
electroweak vacuum whereas the majority has a true electroweak vacuum. For the
long-lived meta-stable regions, the vacuum lifetime is longer than the age of the universe.
Figure 4: Contours for the SM-like Higgs mass (black) and the masses of the
two singlet-like states ($\mathcal{CP}$-even in blue and $\mathcal{CP}$-odd in
red) in the plane $\kappa/\lambda$ versus $(\mu+\mu_{\text{eff}})$, where
$\lambda=0.6$ and $\mu=500\,\textrm{GeV}$ are kept fixed and $\kappa$ and
$\mu_{\text{eff}}$ vary. In the left plot $A_{\kappa}=0$ GeV is used; in the
right one $A_{\kappa}=100\,\textrm{GeV}$. Furthermore, $\tan\beta=2.5$ is set
in both plots. The other relevant parameters are listed in Tab. 1. The few red
and purple points have a short- and long-lived meta-stable electroweak vacuum,
respectively, whereas blue points have a stable electroweak vacuum. Rose
points are excluded because of tachyonic tree-level masses. The orange points
cannot reproduce a non-vanishing $\mu_{\text{eff}}$ at the electroweak vacuum
via the constraint of Ineq. (23). With the gray vertical band we mark a naïve
direct experimental exclusion bound from the chargino mass
$m_{\chi^{\pm}}>94\,\textrm{GeV}$. Green areas are allowed by HiggsBounds and
HiggsSignals (indicated as “HBHS” in the legend).
Figure 5: The same as Fig. 4, except that $A_{\kappa}=100\,\textrm{GeV}$ is
used in both plots, and the parameter $\mu$ is set to $\mu=1000\,\textrm{GeV}$
(left) and $1500\,\textrm{GeV}$ (right).
Figure 6: The same as Fig. 4 but for $\mu=1000\,\textrm{GeV}$, $\tan\beta=3.5$
and $\lambda=0.3$.
In Fig. 4 we indicate the Higgs-mass contours and the constraints from vacuum
stability in the plane of $\left(\mu+\mu_{\text{eff}}\right)$ and
$\kappa/\lambda$ with fixed $\mu$ and $\lambda$. Note that for this choice of
variables the tree-level doublet sector in Eqs. (17a), (17b) and (20a) remains
constant; any structure visible in the prediction of the SM-like Higgs mass is
thus induced by mixing with the singlet state, or by loop corrections. The
chosen parameter values are indicated in the legends of the figures and in
Tab. 1; in the left plot $A_{\kappa}=0\,\textrm{GeV}$ is used, while in the
right plot $A_{\kappa}=100\,\textrm{GeV}$. The value of $A_{\kappa}$ has an
impact in particular on the mass scale of the $\mathcal{CP}$-odd singlet-like
Higgs which is much lighter on the left-hand side. In fact, for a light
$\mathcal{CP}$-odd singlet-like Higgs a parameter region opens up where decays
of the SM-like Higgs into a pair of them become kinematically allowed. The
$\mathcal{CP}$-even singlet-like Higgs is also somewhat lighter for
$A_{\kappa}=0$ GeV, while the SM-like Higgs is scarcely affected. The contour
lines of the Higgs masses stop when one Higgs becomes tachyonic. The reason why this does not exactly coincide with the border between the blue and rose dotted regions is the loop corrections to the Higgs spectrum, while the
constraints from vacuum stability were investigated at the tree level. It can
be seen that the boundaries at the left of the stable region are parallel to
one of the displayed Higgs-mass contours—the corresponding particle becomes
tachyonic at this boundary. The boundary to the right of the stable region can
be understood when comparing the right plots of Fig. 4 and Fig. 5, which differ from each other by the value of $\mu$: in the right plot of Fig. 5 a contour for the SM-like Higgs mass which is parallel to the tachyonic border appears around $\mu+\mu_{\text{eff}}=250\,\textrm{GeV}$ and $\kappa/\lambda=0.5$. In Fig. 4 such a contour is not visible as this
particular parameter region is excluded by a tachyonic SM-like state at the
tree level. Note that the NMSSM\- and $\mu$NMSSM-specific one-loop
contributions to the Higgs spectrum are particularly large in that region
(about $60\,\textrm{GeV}$ additional shift compared to the same scenario in
the MSSM-limit with $\lambda\to 0$ and $\kappa/\lambda$ constant), see also
Ref. [124]; a dedicated analysis taking into account two-loop effects beyond
the MSSM-limit might be necessary for a robust prediction of the Higgs mass
close to the right border of the stable region, see e. g. Ref. [123]. It
should be noted that in Fig. 4 the region where the Higgs mass is close to the
right border of the stable region is disfavored by the limits from chargino
searches at LEP.
As expected, the region allowed by HiggsBounds and HiggsSignals is a subset of
the region where the SM-like Higgs has a mass in the vicinity of $125$ GeV. In
the green-marked region, $\Delta\chi^{2}$ is at maximum $5.99$. The minimal
value $\chi_{m}^{2}$ from HiggsSignals is $74.6$ in both figures. One can see
on the left-hand side of Fig. 4 that this region is split into two: in between
the two regions the SM-like Higgs can decay into a pair of $\mathcal{CP}$-odd
singlet-like Higgs bosons $h^{0}\to a_{s}a_{s}$ with a branching ratio of up
to $90\,\%$; this behavior is not compatible with the observed signal
strengths implying a limit on decays of the state at $125\,\textrm{GeV}$ into
non-SM particles. For a very light $\mathcal{CP}$-odd singlet, the admixture
between the SM-like Higgs and the $\mathcal{CP}$-even singlet component is
reduced, since the latter becomes heavier in this region. In the scenario
under consideration, the decay $h^{0}\to a_{s}a_{s}$ is dominated by the
coupling among the two singlet states, $\lambda_{355}$ in Eq. (31i), such that
a reduced admixture between $h^{0}$ and $s^{0}$ also closes the decay
$h^{0}\to a_{s}a_{s}$. This is why—despite the very light $\mathcal{CP}$-odd
Higgs $a_{s}$—the region at $\mu+\mu_{\text{eff}}\simeq-300$ GeV and
$\kappa/\lambda\simeq 0.4$ is allowed by the constraints from both
HiggsSignals and HiggsBounds.
In Fig. 5 we present scenarios similar to the right-hand side of Fig. 4 with
$A_{\kappa}=100\,\textrm{GeV}$, but with different values of $\mu$ (note the
larger scale at the $x$-axis). Thus, the influence of this parameter that
distinguishes the $\mu$NMSSM from the NMSSM can be seen directly. Obviously,
the parameter region with a stable vacuum is enlarged: for a given value
$(\mu+\mu_{\text{eff}})$ the tachyonic border moves to smaller ratios of
$\kappa/\lambda$ as $\mu$ increases. Concerning the Higgs spectrum, the most
notable difference is seen for the SM-like Higgs mass: for
$\mu=1\,\textrm{TeV}$ a turning point at about
$\mu+\mu_{\text{eff}}=-800\,\textrm{GeV}$ is visible, which moves to smaller
values of $\kappa/\lambda$ for $\mu=1.5\,\textrm{TeV}$. For the larger value
of $\mu$ one can see that the possibility emerges for scenarios with the
correct SM-like Higgs mass but positive $(\mu+\mu_{\text{eff}})$. Again all
tested points which yield a SM-like Higgs boson close to $125$ GeV
successfully pass the constraints implemented in HiggsBounds and HiggsSignals.
The minimal values of $\chi_{m}^{2}$ from HiggsSignals are $74.9$ and $74.6$
on the left-hand and on the right-hand side of Fig. 5, respectively.
Fig. 6 shows scenarios with larger $\tan\beta$ and smaller $\lambda$ compared
to the previous figures. Like in Fig. 4 we set $A_{\kappa}=0$ GeV on the left,
and $A_{\kappa}=100$ GeV on the right-hand side, but $\mu=1\,\textrm{TeV}$ is
used. We observe again that a larger value of $A_{\kappa}$ widens the allowed
parameter region, because the mass of the $\mathcal{CP}$-odd singlet is lifted
up, giving rise to a drastic effect in this case. In fact, for $A_{\kappa}=0$
GeV only a rather small area in the plane of $(\mu+\mu_{\text{eff}})$ and
$\kappa/\lambda$ is allowed, while the allowed region is very significantly
enhanced for $A_{\kappa}=100$ GeV. In the plot on the right-hand side one can
see a (nearly) closed $125\,\textrm{GeV}$ contour for the mass of the SM-like
Higgs with even larger values in the enclosed area. Adjusting the parameters
of the stop sector in order to obtain a smaller contribution to the SM-like
Higgs mass can render a SM-like Higgs with a mass of about $125\,\textrm{GeV}$
in the whole enclosed region. Close to the tachyonic borders we find larger
regions with a long-lived meta-stable vacuum (purple) than in Figs. 4 and 5.
However, in this part of the plot the prediction for the mass of the SM-like
Higgs is below the experimental value. On the right-hand side of Fig. 6 a
large region is allowed by the constraints from HiggsBounds and HiggsSignals.
Only low values of $\lvert\mu+\mu_{\text{eff}}\rvert<m_{h}/2$ are excluded by
HiggsSignals due to the decay of the SM-like Higgs boson into a pair of
higgsinos. However, this region is anyhow not compatible with the LEP bound on
light charginos. The minimal values of $\chi_{m}^{2}$ from HiggsSignals are
$74.7$ in both plots.
In Fig. 7 we change the parameter on the $y$-axis: $B_{\mu}$ is varied and
$\kappa$ is kept fixed. We set $A_{\kappa}=0\,\textrm{GeV}$ on the left-hand
side, and $A_{\kappa}=100$ GeV on the right-hand side. One can see that non-
zero values for $B_{\mu}$ can have a significant impact on the predicted Higgs
masses and might determine whether or not a scenario is excluded. For larger
negative values of $B_{\mu}$, one can see an area where the electroweak vacuum
is meta-stable and long-lived, while the area in the lower left corner of the
plots indicates that the electroweak vacuum is unstable and short-lived. The
effect of a larger $A_{\kappa}$ mainly lifts the tachyonic boundary at the top
so that values of $B_{\mu}=1\,\textrm{TeV}$ are allowed for
$A_{\kappa}=100\,\textrm{GeV}$ and leaves the other regions invariant.
However, towards the upper limit of $B_{\mu}$, there is a small short-lived
area. As a new feature, we find large regions with a meta-stable vacuum but a
SM-like Higgs with a mass of $125\,\textrm{GeV}$ for both values of
$A_{\kappa}$. Accordingly, scenarios with too large negative values of
$B_{\mu}$ are excluded due to a rapidly decaying vacuum despite providing a
SM-like Higgs boson close to the observed mass. The constraints from
HiggsBounds and HiggsSignals indicate that a large part of the region with the
correct Higgs mass is compatible with the experimental data. For both plots
HiggsSignals yields a minimal value of $\chi_{m}^{2}=74.9$. Only in those
scenarios where the decay channel $h^{0}\to a_{s}a_{s}$ is kinematically
allowed—which happens in the plot for $A_{\kappa}=0\,\textrm{GeV}$ for
$\mu+\mu_{\text{eff}}\gtrsim-300\,\textrm{GeV}$ and
$\mu+\mu_{\text{eff}}\lesssim-700\,\textrm{GeV}$—the parameter region is
incompatible with the data on the detected Higgs boson.
Figure 7: Dependence of the mass contours and vacuum stability on the $\mathbb{Z}_{3}$-breaking soft SUSY-breaking $B_{\mu}$ term and $(\mu+\mu_{\text{eff}})$ for $\lambda=0.5$; see Fig. 4 for an explanation of the color code. On the
left-hand side, the value $A_{\kappa}=0\,\textrm{GeV}$ was chosen, while on
the right $A_{\kappa}=100\,\textrm{GeV}$.
We briefly summarize the observed features and give an outlook for the
phenomenological studies in the following. The allowed parameter region is
mainly constrained by configurations where one Higgs field is tachyonic at the
tree level. It can be seen that the tachyonic boundaries follow the Higgs mass
contours in Figs. 4–7; in addition, there are effects from
$\mu_{\text{eff}}^{-1}$ terms as discussed in Section 2.2 which enhance the
doublet–singlet mixing and eventually cause tachyons. This feature can be
observed towards the right end of Figs. 4–6. The experimental limits and
constraints confine the allowed regions further around the region where the
SM-like Higgs has a mass of about $125\,\textrm{GeV}$ and exclude parameter
regions where for instance the decay of the SM-like Higgs into a pair of light
$\mathcal{CP}$-odd singlets has a large branching ratio. In this context, the
singlet sector has a significant impact on the features discussed in Figs. 4–7.
In the NMSSM, one usually expects to find the phenomenologically most
interesting regions (accommodating a $125\,\textrm{GeV}$ Higgs) for rather
large values of $\lambda\gtrsim 0.1$, since the NMSSM contribution to the SM-
like Higgs mass at the tree level is enhanced. In addition, large $\lambda$
enhances the doublet–singlet mixing. However, in the $\mu$NMSSM, there is
another way to obtain a large doublet–singlet mixing also for small values of
$\lambda$: this is the region of low $\mu_{\text{eff}}$ where terms
proportional to $\mu_{\text{eff}}^{-1}$ become large, as discussed in Section
3.1. We will investigate this class of scenarios, which are not possible in
the NMSSM but generic to the $\mu$NMSSM, in Section 3.5 in more detail.
Similar to the NMSSM, the chosen value of $A_{\kappa}$ has a strong influence
on the singlet-like Higgs masses, which is relevant for the tachyonic regions.
In a large part of the viable parameter space the relation
$\operatorname{sign}{(A_{\kappa})}=-\operatorname{sign}{(\mu_{\text{eff}})}$
applies, where for $A_{\kappa}=0\,\textrm{GeV}$ both signs of
$\mu_{\text{eff}}$ are allowed in general. This dependence on the relative
signs of $A_{\kappa}$ and $\mu_{\text{eff}}$ can be derived from the
discussion in Section 2.2 about the Higgs singlets and especially the
functional dependence of $a_{5}$ in Eq. (14e) versus $a_{4}^{\prime}$ in Eq.
(20c): large negative values of the sum $(a_{4}^{\prime}+a_{5})$ drive the
$\mathcal{CP}$-even singlet tachyonic. In the investigated scenarios above,
which have either $A_{\kappa}=0\,\textrm{GeV}$ or
$A_{\kappa}=100\,\textrm{GeV}$, the sign of $\mu_{\text{eff}}$ is negative in
most of the viable parameter space. Accordingly, there is a preference for
negative $(\mu+\mu_{\text{eff}})$. The allowed region with small positive
values occurs where the negative value of $\mu_{\text{eff}}$ is
overcompensated by the positive value of $\mu$. In Section 3.5 we will
investigate a scenario where we keep $(\mu+\mu_{\text{eff}})$ fixed at a
positive value, while for $A_{\kappa}$ small negative and small positive
values are used for $\mu_{\text{eff}}>0$ GeV and $\mu_{\text{eff}}<0$ GeV,
respectively. There we will also discuss the dependence of the singlet masses
on $\mu$ and $\mu_{\text{eff}}$ in more detail.
### 3.4 Higgs-boson and electroweakino production
In this and the next section we discuss phenomenological features of Higgs-
boson mixing and thus consequences on Higgs-boson production and decays due to
the $\mu$ parameter of the $\mu$NMSSM. For vanishing $\mu$ the phenomenology
of the Higgs bosons equals the one of the NMSSM, for which typical benchmark
scenarios can be found in Ref. [159] (see also Ref. [160]). Naturally they
differ from MSSM-type benchmark scenarios through singlet states modifying the
phenomenology: since the singlet states $s^{0}$ and $a_{s}$ neither directly
couple to fermions nor to gauge bosons, but only through their admixture with
the doublet states, their direct production—both at a hadron collider and a
lepton collider—is negligible in many scenarios. However, besides their direct
production light singlet states can also be potentially observable via their
production in cascade decays of heavier Higgs bosons, as we will discuss in
the following.
In most parts of our numerical study, we make use of the approximation of SM-
normalized effective couplings of a Higgs boson to gluons—calculated at
leading order—which we insert into HiggsBounds for the evaluation of the
Higgs-production cross-sections for the neutral Higgs bosons at the LHC. This
treatment should be sufficiently accurate for determining the allowed regions
in our scans over the parameter space. In the following, however, we will
investigate to what extent the $\mu$NMSSM can accommodate the slight excesses
in the data over the background expectation at a mass around
$95$–$98\,\textrm{GeV}$ that have been reported recently by CMS [42] in the
$\gamma\gamma$ channel (the results of ATLAS [43] are presented in a fiducial region and are compatible with both the SM expectation and the excess reported by CMS) and earlier at LEP [41] in the $b\bar{b}$ channel. For this
purpose we use more sophisticated predictions for the Higgs-production cross-
sections in order to compare with the experimental results. We obtain those
predictions from SusHi [161, 162, 163, 39, 40, 164, 165, 166, 167], for which
a dedicated version for the NMSSM exists [168]. The predictions include N3LO
QCD corrections for the top-quark contribution of the light
$\mathcal{CP}$-even Higgs bosons, while we have neglected contributions from
heavy squarks and gluinos beyond the resummed contributions in the bottom-
Yukawa coupling.
In the NMSSM, the observed excesses in the data around $95$–$98\,\textrm{GeV}$
can be interpreted in terms of a singlet-like state $s^{0}$, see Ref. [74] for
a discussion of the LEP result, and Ref. [169] for a discussion of the CMS
data. At first sight it seems to be non-trivial to describe both excesses
simultaneously, since accommodating the LEP excess would require a rather
large rate $s^{0}\to b\bar{b}$, which in turn would suppress the channel
$s^{0}\to\gamma\gamma$ that is employed in the interpretation of the CMS
excess. As was pointed out in Ref. [147] based on a detailed analysis of
the Higgs mixing properties, this is nevertheless possible—albeit in a
relatively narrow region of the parameter space, which is somewhat enlarged if
the possibility of non-vanishing phases giving rise to
$\mathcal{CP}$-violating effects is taken into account. We investigate in the
following to which extent the additional freedom that is present in the
$\mu$NMSSM with respect to the possible values of the masses in combination
with the mixing properties has an impact regarding a possible interpretation
of the observed excesses. In Tab. 2 we present four scenarios with $s^{0}$
masses in the range $95$–$98$ GeV that have a phenomenology addressing the
excesses observed both at LEP and CMS. Scenarios 1 and 3 have a small value of
$\mu$ and are NMSSM-like (inspired by the scenarios investigated in Ref.
[147]), while Scenarios 2 and 4 both have $\mu$ values that significantly
differ from zero, and Scenario 4 furthermore has a non-zero value of
$B_{\mu}$. These two $\mu$NMSSM scenarios are intrinsically different from the
NMSSM. Similar scenarios could also be obtained by changing the signs of
$(\mu+\mu_{\text{eff}})$ and $A_{\kappa}$ simultaneously.
Table 2: Scenarios that yield a light $\mathcal{CP}$-even singlet-like Higgs
boson. The Higgs boson at about $125$ GeV is SM-like. All other parameters are
chosen in accordance to Tab. 1.
Scenario | 1 | 2 | 3 | 4
---|---|---|---|---
$\lambda$ | $0.08$ | $0.08$ | $0.28$ | $0.08$
$\kappa$ | $0.04$ | $0.023$ | $0.08$ | $0.0085$
$\tan\beta$ | $12$ | $12$ | $2.5$ | $2$
$(\mu+\mu_{\text{eff}})$ [GeV] | $-140$ | $-140$ | $-300$ | $-400$
$\mu$ [GeV] | $5$ | $195$ | $5$ | $150$
$B_{\mu}$ [GeV] | $0$ | $0$ | $0$ | $-300$
$m_{H^{\pm}}$ [GeV] | $800$ | $800$ | $800$ | $1000$
$A_{\kappa}$ [GeV] | $130$ | $265$ | $250$ | $32$
$A_{f}$ [GeV] | $400$ | $450$ | $3200$ | $4000$
$m_{s^{0}}$ [GeV] | $97.6$ | $95.7$ | $97.2$ | $97.1$
$m_{h^{0}}$ [GeV] | $124.7$ | $126.8$ | $124.6$ | $125.0$
$m_{a_{s}}$ [GeV] | $168.2$ | $277.0$ | $257.2$ | $75.6$
$\frac{\sigma{\left(e^{+}e^{-}\to Zs^{0}\right)}\cdot\text{BR}{\left(s^{0}\to b\bar{b}\right)}}{\sigma^{\text{{SM}{}}}{\left(e^{+}e^{-}\to ZH\right)}\cdot\text{BR}^{\text{{SM}{}}}{\left(H\to b\bar{b}\right)}}$ | $0.28$ | $0.31$ | $0.14$ | $0.35$
$\sigma{\left(gg\to s^{0}\right)}$ [pb] | $25.3$ | $28.1$ | $14.4$ | $31.5$
BR${\left(s^{0}\to\gamma\gamma\right)}$ | $0.0020$ | $0.0016$ | $0.0024$ | $0.0005$
$\chi^{2}(\text{{HiggsSignals}})$ | $97$ | $96$ | $82$ | $101$
Table 3: Cross-sections for electroweakinos at an electron–positron collider
for Scenario 1 defined in Tab. 2.
Scenario 1 | $\tilde{\chi}^{0}_{1}$ | $\tilde{\chi}^{0}_{2}$ | $\tilde{\chi}^{0}_{3}$ | $\tilde{\chi}_{1}^{\pm}$ |
---|---|---|---|---|---
Masses [GeV] | $127.3$ | $138.3$ | $155.9$ | $138.4$ |
$\sigma(e^{+}e^{-}\to\tilde{\chi}_{i}\tilde{\chi}_{j})$ [fb] for $\sqrt{s}=350$ GeV | $\tilde{\chi}^{0}_{1}\tilde{\chi}^{0}_{2}$ | $\tilde{\chi}^{0}_{1}\tilde{\chi}^{0}_{3}$ | $\tilde{\chi}^{0}_{2}\tilde{\chi}^{0}_{3}$ | $\tilde{\chi}^{0}_{2}\tilde{\chi}^{0}_{2}$ | $\tilde{\chi}^{+}_{1}\tilde{\chi}^{-}_{1}$
Unpolarized | $141$ | $195$ | $0.08$ | $0.19$ | $795$
Pol($e^{+},e^{-})=(+30\%,-80\%)$ | $208$ | $287$ | $0.12$ | $0.28$ | $1620$
Pol($e^{+},e^{-})=(-30\%,+80\%)$ | $142$ | $196$ | $0.08$ | $0.19$ | $352$
$\sigma(e^{+}e^{-}\to\tilde{\chi}_{i}\tilde{\chi}_{j})$ [fb] for $\sqrt{s}=500$ GeV | $\tilde{\chi}^{0}_{1}\tilde{\chi}^{0}_{2}$ | $\tilde{\chi}^{0}_{1}\tilde{\chi}^{0}_{3}$ | $\tilde{\chi}^{0}_{2}\tilde{\chi}^{0}_{3}$ | $\tilde{\chi}^{0}_{2}\tilde{\chi}^{0}_{2}$ | $\tilde{\chi}^{+}_{1}\tilde{\chi}^{-}_{1}$
Unpolarized | $74$ | $109$ | $0.12$ | $0.22$ | $459$
Pol($e^{+},e^{-})=(+30\%,-80\%)$ | $110$ | $161$ | $0.19$ | $0.32$ | $926$
Pol($e^{+},e^{-})=(-30\%,+80\%)$ | $75$ | $110$ | $0.13$ | $0.22$ | $212$
Interpreting the LEP excess as the contribution of a singlet-like state
$s^{0}$ in the considered mass range yields a “signal strength” of
$\displaystyle\frac{\sigma{\left(e^{+}e^{-}\to
Zs^{0}\right)}\cdot\text{BR}{\left(s^{0}\to
b\bar{b}\right)}}{\sigma^{\text{{SM}{}}}{\left(e^{+}e^{-}\to
ZH\right)}\cdot\text{BR}^{\text{{SM}{}}}{\left(H\to b\bar{b}\right)}}$
$\displaystyle\simeq\text{$0.2$--$0.3$}\,,$ (41)
while a “signal rate” of $\sigma(pp\to s^{0}\to\gamma\gamma)\simeq 0.1$ pb
would be compatible with the CMS observation. As mentioned above, the cross-
section $gg\to s^{0}$ in our analysis is obtained from SusHi [161, 162] for
the $13$ TeV LHC at N3LO QCD. The renormalization- and factorization-scale
uncertainties amount to about $\pm 5\%$. Sizable values for the cross-sections
$gg\to s^{0}$ and $e^{+}e^{-}\to Zs^{0}$ as well as the branching ratio
BR$(s^{0}\to b\bar{b})$ arise if the admixture of $s^{0}$ with the SM-like
Higgs boson is sufficiently large. A sizable BR$(s^{0}\to\gamma\gamma)$ can
occur as a consequence of a significant $H_{u}$ component of the singlet state
$s^{0}$, whereas a small $H_{d}$ component suppresses the decay into
$b\bar{b}$. In all the listed scenarios the $\mathcal{CP}$-odd singlet-like
Higgs boson $a_{s}$ has a mass below $300\,\textrm{GeV}$. It should be noted
that the occurrence of the state $s^{0}$ at low masses in combination with a
very heavy $a_{s}$ state through a large value of $A_{\kappa}$ would usually
yield a meta-stable (long-lived) vacuum. The listed scenarios involve a
certain amount of tuning in the choice of $A_{\kappa}$ since an increase in
$A_{\kappa}$ by a few GeV yields a tachyonic $s^{0}$ state. It is well-known
from the NMSSM that a too large $A_{\kappa}$ yields a tachyonic
$\mathcal{CP}$-even singlet-like Higgs boson $s^{0}$, see Eq. (37) in Ref.
[158] or Eq. (26) in Ref. [170] for lower and upper bounds on $A_{\kappa}$.
Similarly, we have noted a very pronounced dependence of the masses of both
states, $s^{0}$ and $a_{s}$, on $A_{\kappa}$ for the $\mu$NMSSM scenarios
investigated here.
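For orientation, the rate entering the comparison with the CMS observation can be assembled directly from the Table 2 entries by multiplying the quoted gluon-fusion cross-section with the diphoton branching ratio. This is a purely arithmetic sketch and covers the gluon-fusion channel only, so it is a lower bound on the full $pp$ rate:

# Combine the Table 2 entries into sigma(gg -> s0) * BR(s0 -> gamma gamma)
# for each scenario (values copied verbatim from Table 2).
scenarios = {                  # sigma(gg -> s0) [pb], BR(s0 -> gamma gamma)
    1: (25.3, 0.0020),
    2: (28.1, 0.0016),
    3: (14.4, 0.0024),
    4: (31.5, 0.0005),
}
for n, (sigma_gg, br_aa) in scenarios.items():
    print(f"Scenario {n}: sigma x BR = {sigma_gg * br_aa:.3f} pb")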
Multi-Modal and Multi-Factor Branching Time Active Inference (BTAI_3MF).
Théophile Champion <EMAIL_ADDRESS>
University of Kent, School of Computing
Canterbury CT2 7NZ, United Kingdom
Marek Grześ <EMAIL_ADDRESS>
University of Kent, School of Computing
Canterbury CT2 7NZ, United Kingdom
Howard Bowman <EMAIL_ADDRESS>
University of Birmingham, School of Psychology,
Birmingham B15 2TT, United Kingdom
University of Kent, School of Computing
Canterbury CT2 7NZ, United Kingdom
Active inference is a state-of-the-art framework for modelling the brain that explains a wide range of mechanisms such as habit formation, dopaminergic discharge and curiosity. Recently, two versions of branching time active inference (BTAI) based on Monte-Carlo tree search have been developed to handle the exponential (space and time) complexity class that occurs when computing the prior over all possible policies up to the time horizon. However, those two versions of BTAI still suffer from an exponential complexity class w.r.t. the number of observed and latent variables being modelled. In the present paper, we resolve this limitation by first allowing the modelling of several observations, each of them having its own likelihood mapping. Similarly, we allow each latent state to have its own transition mapping. The inference algorithm then exploits the factorisation of the likelihood and transition mappings to accelerate the computation of the posterior. Those two optimisations were tested on the dSprites environment in which the metadata of the dSprites dataset was used as input to the model instead of the dSprites images. On this task, $BTAI_{VMP}$ [Champion et al., 2022, Champion et al., 2022] was able to solve 96.9% of the task in 5.1 seconds, and $BTAI_{BF}$ [Champion et al., 2021] was able to solve 98.6% of the task in 17.5 seconds. Our new approach ($BTAI_{3MF}$) outperformed both of its predecessors by solving the task completely (100%) in only 2.559 seconds. Finally, $BTAI_{3MF}$ has been implemented in a flexible and easy-to-use (Python) package, and we developed a graphical user interface to enable the inspection of the model's beliefs, planning process and behaviour.
Branching Time Active Inference, Monte-Carlo Tree Search, Belief Propagation, Bayesian Prediction, Temporal Slice
§ INTRODUCTION
Active inference extends the free energy principle to generative models with actions [Friston et al., 2016, Costa et al., 2020, Champion et al., 2021] and can be regarded as a form of planning as inference [Botvinick and Toussaint, 2012]. This framework has successfully explained a wide range of neuro-cognitive phenomena, such as habit formation [Friston et al., 2016], Bayesian surprise [Itti and Baldi, 2009], curiosity [Schwartenbeck et al., 2018], and dopaminergic discharges [FitzGerald et al., 2015]. It has also been applied to a variety of tasks, such as animal navigation [Fountas et al., 2020], robotic control [Pezzato et al., 2020, Sancaktar et al., 2020], the mountain car problem [Çatal et al., 2020], the game DOOM [Cullen et al., 2018] and the cart pole problem [Millidge, 2019].
However, active inference suffers from an exponential (space and time) complexity class that occurs when computing the prior over all possible policies up to the time horizon. Recently, two versions of branching time active inference (BTAI) based on Monte-Carlo tree search [Browne et al., 2012] have been developed to handle this exponential growth. In the original formulation of the framework [Champion et al., 2022, Champion et al., 2022], inference was performed using the variational message passing (VMP) algorithm [Winn and Bishop, 2005, Champion et al., 2021]. In a follow-up paper, VMP was then replaced by a Bayesian filtering [Fox et al., 2003] scheme leading to a faster inference process [Champion et al., 2021].
In this paper, we develop an extension of Branching Time Active Inference (BTAI) that allows the modelling of several modalities as well as several latent states. Indeed, even if the Bayesian filtering version of Branching Time Active Inference ($BTAI_{BF}$) is fast, its modelling capacity is limited to one observation and one hidden state. Consequently, if one wanted to model $n$ latent states $S_t^1, \hdots, S_t^n$, then those $n$ latent states would have to be encoded into one latent state $X$ representing all possible configurations of the $n$ latent states $S_t^1, \hdots, S_t^n$. Unfortunately, the total number of configurations is given by:
\begin{align*}
\nb{X} = \prod_{i=1}^n \nb{S_t^i} \geq 2^n,
\end{align*}
where $\nb{X}$ is the number of possible values taken by $X$, and similarly $\nb{S_t^i}$ is the number of possible values taken by $S_t^i$. The above inequality is obtained by realizing that $\nb{S_t^i} \geq 2$, and is problematic in practice because $\nb{X}$ grows exponentially with the number of latent states $n$ being modelled. Also, note that in practice this exponential growth may be much worse than $2^n$. For example,
if one were to model the five modalities of the dSprites environment (c.f. Section <ref>), the total number of configurations would be:
$$\nb{S^y_t} \times \nb{S^{x}_t} \times \nb{S^{shape}_t} \times \nb{S^{orientation}_t} \times \nb{S^{scale}_t} = 33 \times 32 \times 3 \times 40 \times 6 = 760,320 \gg 2^5 = 32.$$
A similar exponential explosion also appears when trying to model several modalities $O_t^1, \hdots, O_t^m$ using a single one $Y$, i.e.
\begin{align*}
\nb{Y} = \prod_{i=1}^m \nb{O_t^i} \geq 2^m,
\end{align*}
where $\nb{Y}$ is the number of possible values taken by $Y$, and similarly $\nb{O_t^i}$ is the number of possible values taken by $O_t^i$. Note, throughout this paper, we will use the term states to refer to the latent states of the model at a specific time step, e.g., $S_t^1, \hdots, S_t^n$ for time step $t$. Additionally, we will use the terms state configurations or values to refer to particular values taken by the latent variables.
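To make the size of this blow-up concrete, the following minimal sketch evaluates the dSprites example from above (factor cardinalities as in the displayed equation); the sum of the cardinalities, in contrast, is the size of what a factorised representation has to store:

# Encoding all latent factors into one variable X multiplies the
# cardinalities, while a factorised model keeps them separate.
from math import prod

cardinalities = {"y": 33, "x": 32, "shape": 3, "orientation": 40, "scale": 6}
print(prod(cardinalities.values()))   # 760320 possible values of X
print(sum(cardinalities.values()))    # 114 entries suffice if factorised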
The present paper aims to remove those two exponential growths by allowing the modelling of several observations and latent states, while providing an easy-to-use framework based on a high-level notation, which allows the user to create models by simply declaring the variables they contain, and the dependencies between those variables. Then, the framework performs the inference process automatically. Appendix A shows an example of how to implement a custom $BTAI_{3MF}$ agent using our framework. In Section <ref>, we describe the theory underlying our approach. Importantly, $BTAI_{3MF}$ takes advantage of the generative model structure to perform inference efficiently using a mixture of belief propagation [Yedidia, 2011, Friston et al., 2017, Kschischang et al., 2001] and forward predictions as will be explained in Section <ref>. The name $BTAI_{3MF}$ is an abbreviation for $BTAI_{MMMF}$ that stands for: Multi-Modal and Multi-Factor Branching Time Active Inference. Next, in Section <ref>, we provide the definition of the expected free energy in the context of our new approach, and in Section <ref>, we describe the planning algorithm used to expand the generative model dynamically. Then, in Section <ref>, we compare $BTAI_{3MF}$ to $BTAI_{VMP}$ and $BTAI_{BF}$, and demonstrate empirically that $BTAI_{3MF}$ outperformed both $BTAI_{VMP}$ and $BTAI_{BF}$ on the dSprites environment, which requires the modelling of many latent states and modalities. Finally, Section <ref> concludes this paper by summarizing our approach and results.
§ THEORY OF $BTAI_{3MF}$
In this section, we introduce the mathematical foundation of $BTAI_{3MF}$. To simplify the graphical representation of our generative model, we first introduce a notion of “temporal slice". Then, we build on this idea to describe the generative model of $BTAI_{3MF}$. Next, we explain how belief updates are performed using a mixture of belief propagation and forward predictions. Afterwards, we provide the definition of the expected free energy for this new generative model. Finally, we describe the planning algorithm used to dynamically expand the generative model, and the action selection process.
§.§ Temporal slice
A temporal slice $TS_J = \{O_J^1, \hdots, O_J^{\nb{O}}, S_J^1, \hdots, S_J^{\nb{S}}\}$ is a set of random variables indexed by a sequence of actions $J$. Each random variable of the temporal slice represents either an observation $O_J^o$ or a latent state $S_J^s$. The index of the temporal slice corresponds to the sequence of actions that leads to this temporal slice. By definition, if $J$ is an empty sequence, i.e., $J = \emptyset$, then $TS_J$ is the temporal slice of the present time step $t$, also denoted $TS_t$. Within a temporal slice $TS_J$, an observation $O_J^o$ depends on a number of latent states $\rho_J^o \subseteq \{S_J^s \mid s = 1, \hdots, \nb{S}\}$, such that $P(O_J^o|\rho_J^o)$ is a factor in the generative model. Given an action $\bm{a}$ and a sequence of actions $J$, we let $I = J{::}\bm{a}$ be the sequence of actions obtained by appending the action $\bm{a}$ at the end of the sequence of actions $J$. If $I = J{::}\bm{a}$, then the temporal slice $TS_J$ can be the parent of $TS_I$. This means that a latent state $S^s_I$ in $TS_I$ can depend on the latent states $\rho_I^s \subseteq \{S_J^s \mid s = 1, \hdots, \nb{S}\}$ in $TS_J$, such that $P(S_I^s|\rho_I^s)$ is a factor in the generative model. The concept of temporal slice is illustrated in Figure <ref>, and Figure <ref> depicts a more compact representation of the content of Figure <ref>.
[Figure: graphical model showing two temporal slices $TS_t$ and $TS_I$ as thick-bordered rectangles, each containing a plate over the latent states $S^s$ ($s = 1, \hdots, \nb{S}$) and a plate over the observations $O^o$ ($o = 1, \hdots, \nb{O}$), with dashed arrows from states to observations and from the states of $TS_t$ to the states of $TS_I$.]
This figure illustrates two temporal slices $TS_t$ and $TS_I$, which are depicted by rectangles with thick borders. Within each temporal slice, plate notation is used to generate $\nb{S}$ latent states and $\nb{O}$ observations. The dashed lines that connect two random variables from two different plates are new to this paper, and represent an arbitrary connectivity between the two sets of random variables generated by the plates. For example, the dashed line from $S_t^s$ to $O_t^o$ means that for each observation $O_t^o$, the set of parents of $O_t^o$, denoted $\rho_t^o$, is a subset of $\{S_t^s \mid s = 1, \hdots, \nb{S}\}$, i.e., the generative model contains the factor $P(O_t^o | \rho_t^o)$ where $\rho_t^o \subseteq \{S_t^s \mid s = 1, \hdots, \nb{S}\}$.
[Figure: compact representation of the two temporal slices, $TS_t$ drawn as a gray square (observed) with a dashed arrow to $TS_I$ drawn as a white square (latent).]
This figure illustrates the two temporal slices $TS_t$ and $TS_I$ from Figure <ref> in a more compact fashion. Since $O_t^o$ is an observed variable for all $o \in \{1, \hdots, \nb{O}\}$, the square representing $TS_t$ has a gray background. In contrast, the square representing $TS_I$ has a white background because $O_I^o$ is a latent variable for all $o \in \{1, \hdots, \nb{O}\}$.
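As an illustration of the bookkeeping implied by this definition, a temporal slice can be represented as a small tree node indexed by its action sequence. The following is a minimal sketch whose names and fields are illustrative rather than the actual API of our package:

# A temporal slice indexed by the action sequence J that leads to it,
# storing one categorical belief per latent state and per observation.
from dataclasses import dataclass, field

@dataclass
class TemporalSlice:
    actions: tuple = ()                                 # the index J; () is TS_t
    state_beliefs: dict = field(default_factory=dict)   # s -> vector over S_J^s
    obs_beliefs: dict = field(default_factory=dict)     # o -> vector over O_J^o
    children: dict = field(default_factory=dict)        # action a -> TS_{J::a}

    def child(self, a):
        """Return (creating if needed) the slice reached by appending a."""
        if a not in self.children:
            self.children[a] = TemporalSlice(actions=self.actions + (a,))
        return self.children[a]

root = TemporalSlice()            # TS_t, the present time step
ts_12 = root.child(1).child(2)    # TS_{(1,2)}, reached by actions 1 then 2
print(ts_12.actions)              # (1, 2)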
§.§ Generative model
In this section, we build upon the notion of temporal slice to describe the full generative model. Intuitively, the probability of the entire generative model is the product of the probabilities of the temporal slices within the model. This includes the current temporal slice $TS_t$ and the future temporal slices $TS_I$ for all $I \in \mathbb{I}$, where $\mathbb{I}$ is the set of all multi-indices expanded during the tree search (c.f., Section <ref>). Within each temporal slice, there are $\nb{O}$ observations and $\nb{S}$ latent states. Each observation depends on a subset of the latent states. Moreover, each latent state depends on a subset of the latent states of the parent temporal slice. Note, the current temporal slice $TS_t$ does not have any parents, therefore its latent states do not depend on any other states. In other words, the model makes the Markov assumption, i.e., each state only depends on the states at the previous time step. More formally, the generative model is defined as:
\begin{align*}
P(O_t,S_t,O_\mathbb{I},S_\mathbb{I}) &= P(TS_t) \prod_{I\in\mathbb{I}} P(TS_I)\\
&= \underbrace{\prod_{o=1}^{\nb{O}} P(O_t^o|\rho_t^o)\prod_{s=1}^{\nb{S}} P(S_t^s)}_{\text{current temporal slice }TS_t} \prod_{I\in\mathbb{I}} \Bigg[ \underbrace{\prod_{o=1}^{\nb{O}} P(O_I^o|\rho_I^o)\prod_{s=1}^{\nb{S}} P(S_I^s|\rho_I^s)}_{\text{future temporal slice }TS_I} \Bigg]
\end{align*}
where $t$ is the current time step, $\rho_\tau^x$ is the set of parents of $X^x_\tau$, $O_t = \{O_t^o \mid o = 1, \hdots, \nb{O}\}$ is the set of all observations at time $t$, $O_I = \{O_I^o \mid o = 1, \hdots, \nb{O}\}$ is the set of all future observations that would be observed after performing the sequence of actions $I$, $O_\mathbb{I} = \cup_{I \in \mathbb{I}} O_I$ is the set of all future observations contained in the temporal slices expanded during the tree search (c.f., Section <ref>), $S_t = \{S_t^s \mid s = 1, \hdots, \nb{S}\}$ is the set of all latent states at time $t$, $S_I = \{S_I^s \mid s = 1, \hdots, \nb{S}\}$ is the set of random variables describing the future latent states after performing the sequence of actions $I$, $S_\mathbb{I} = \cup_{I \in \mathbb{I}} S_I$ is the set of latent variables representing all future states contained in the temporal slices expanded during the tree search (c.f., Section <ref>). Importantly, the above generative model has to satisfy:
* $\forall I \in \mathbb{I}, \forall o \in \{1, \hdots, \nb{O}\}, \rho_I^o \subseteq S_I$;
* $\forall I{::}\bm{a} \in \mathbb{I}, \forall s \in \{1, \hdots, \nb{S}\}, \rho_{I{::}\bm{a}}^s \subseteq S_I$, also, if $I = \emptyset$ then by definition $S_I \delequal S_t$.
Additionally, we define the factors of the generative model as:
\begin{align*}
P(O_t^o|\rho_t^o) = \text{Cat}(\bm{A}^o), & \qquad P(S_t^s) = \text{Cat}(\bm{D}^s_t),\\
P(O_I^o|\rho_I^o) = \text{Cat}(\bm{A}^o), & \qquad P(S_I^s|\rho_I^s) = \text{Cat}(\bm{B}^s_I),
\end{align*}
where $\bm{A}^o$ is the tensor modelling the likelihood mapping of the $o$-th observation, $\bm{D}^s_t$ is the vector modelling the prior over the $s$-th latent state at time $t$ (see below for details), $\bm{B}^s$ is the tensor modelling the transition mapping of the $s$-th latent state under each possible action, $\bm{B}^s_I$ is the tensor modelling the transition mapping of the $s$-th latent state under the last action $I_\text{last}$ of the sequence $I$, i.e., $\bm{B}^s_I = \bm{B}^s(\, \bigcdot \,, \hdots, \bigcdot\, , I_\text{last})$. Also, note that at the beginning of a trial, i.e., when $t=0$, $\bm{D}^s_t$ is a vector that encodes the modeller's understanding of the task. Afterwards, when $t > 0$, $\bm{D}^s_t$ is a vector containing the parameters of the posterior over hidden states according to the observations made and actions taken so far, i.e., $P(S_t^s) \delequal P(S_t^s|O_{0:t-1}, A_{0:t-1}) = \text{Cat}(\bm{D}^s_t)$ for all $s \in \{1, \hdots, \nb{S}\}$. Finally, Figure <ref> illustrates the full generative model using the notion of temporal slices.
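A minimal sketch of the tensors behind these factors is given below; the shapes and the single-parent structure are illustrative, whereas the model defined above allows arbitrary parent sets:

# Each likelihood A^o has one axis for O^o and one per parent state, each
# prior D^s is a vector, and each transition B^s has one axis for S^s, one
# per parent state, and one for the action.
import numpy as np

def normalise(t, axis=0):
    """Make a non-negative tensor a categorical distribution along `axis`."""
    return t / t.sum(axis=axis, keepdims=True)

n_obs, n_state, n_actions = 5, 4, 3   # |O^o|, |S^s|, number of actions
A = normalise(np.random.rand(n_obs, n_state))               # P(O^o | parent)
D = normalise(np.random.rand(n_state))                      # P(S_t^s)
B = normalise(np.random.rand(n_state, n_state, n_actions))  # P(S_I^s | parent, a)

# Forward prediction for one state under action a (P-step-style update):
a = 1
prior_next = B[:, :, a] @ D       # marginalise the parent state
print(prior_next.sum())           # ~1.0, a valid categorical distribution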
[Figure: the full generative model drawn as a tree of temporal slices: the root $TS_t$ (gray) with children $TS_{(1)}$ and $TS_{(2)}$, and grandchildren $TS_{(11)}$, $TS_{(12)}$ (expanded) as well as $TS_{(21)}$, $TS_{(22)}$ (grayed out, not yet expanded), with $\mathbb{I} = \Big\{(1), (2), (11), (12)\Big\}$.]
This figure illustrates the full generative model of $BTAI_{3MF}$. The temporal slices depicted in light gray correspond to temporal slices that have not yet been explored by the planning algorithm, c.f., Section <ref>. The numbers between parentheses correspond to the sequence of actions performed to reach the temporal slice.
§.§ Belief updates: the inference and prediction (IP) algorithm
The IP algorithm is composed of two steps, i.e., the inference step (or I-step) and the prediction step (or P-step). The goal of the I-step is to compute the posterior beliefs over all the latent variables at time $t$. In other words, the goal of the I-step is to compute: $P(S_t^s|O_t), \forall s \in \{1, \hdots, \nb{S}\}$. The P-step takes as inputs the posterior beliefs over all the latent variables corresponding to the states of the system after performing a sequence of actions $I$, and an action $\bm{a}$ to be performed next. The goal of the P-step is to compute the posterior beliefs over all the latent variables corresponding to the future states and observations after performing the sequence of actions $I{::}\bm{a}$, where $I{::}\bm{a}$ is the sequence of actions obtained by adding the action $\bm{a}$ at the end of the sequence of actions $I$. In other words, given $P(S_I^s|O_t), \forall s \in \{1, \hdots, \nb{S}\}$ and an action $\bm{a}$, the goal of the P-step is to compute: $P(S_{I{::}\bm{a}}^s|O_t), \forall s \in \{1, \hdots, \nb{S}\}$ and $P(O_{I{::}\bm{a}}^o|O_t), \forall o \in \{1, \hdots, \nb{O}\}$. Note that by definition, we let $P(S_I^m|O_t) \delequal P(S_t^m|O_t)$ if $I = \emptyset$. To derive the inference and prediction steps, the following sections make use of the sum-rule, product-rule, and d-separation criterion (c.f., Appendix C for details about those properties).
§.§.§ Inference step
As just stated, the goal of the I-step is to compute $P(S_t^m|O_t), \forall m \in \{1, \hdots, \nb{S}\}$. First, we re-write the posterior computation to fit the kind of problem that belief propagation — also known as the sum-product algorithm — can solve:
\begin{align*}
P(S_t^m|O_t) &\propto P(S^m_t, O_t) \tag{Bayes theorem}\\
&= \sum_{\sim S^m_t} P(S_t, O_t) \tag{sum rule}\\
&= \sum_{\sim S^m_t} \prod_{o=1}^{\nb{O}} P(O_t^o|\rho_t^o)\prod_{s=1}^{\nb{S}} P(S_t^s) \tag{product rule \& d-separation}
\end{align*}
where $S_t = \{S_t^s \mid s = 1, \hdots, \nb{S}\}$ is the set of all latent states at time $t$, ${\sim}S_t^m = S_t \setminus S_t^m$ is the set of all latent states at time $t$ except $S^m_t$, and the summation is over all possible configurations of ${\sim}S_t^m$, i.e., we are marginalizing out all states, apart from one; thus $P(S_t, O_t)$ has $\nb{S} + \nb{O}$ dimensions, while $P(S^m_t, O_t)$ has $1 + \nb{O}$ dimensions. Since $\rho_t^o \subseteq S_t$, the expression inside the summation is a function $g(S_t)$ that factorizes as follows:
\begin{align*}
g(S_t) &= \prod_{o=1}^{\nb{O}} P(O_t^o|\rho_t^o) \prod_{s=1}^{\nb{S}} P(S_t^s)\\
&\delequal \prod_{i=1}^{N} f_i(X_i),
\end{align*}
where $X_i \subseteq S_t$ for all $i \in \{1, \hdots, \nb{O} + \nb{S}\}$, the number of factors is $N = \nb{O}+\nb{S}$, and:
\begin{align*}
f_i(X_i) \delequal \begin{cases}
P(O_t^i|\rho_t^i) & \text{if } i \in \{1, \hdots, \nb{O}\}\\
P(S_t^{i - \nb{O}}) & \text{if } i \in \{\nb{O} + 1, \hdots, \nb{O}+\nb{S}\}\\
\end{cases}.
\end{align*}
Note that, because the observations $O_t^o$ (denoted $O_t^i$ here) are known constants, we do not make the dependence of $g(S_t)$ on $O_t^o$ explicit. To conclude, by substituting the definition of $g(S_t)$ into the formula of the posterior $P(S_t^m|O_t)$ presented above, we get:
\begin{align*}
P(S_t^m|O_t) &\propto \sum_{\sim S^m_t} g(S_t),
\end{align*}
which means that the posterior $P(S_t^m|O_t)$ can be computed by first marginalizing $g(S_t)$ w.r.t. $S_t^m$, i.e.,
\begin{align*}
g(S_t^m) = \sum_{\sim S^m_t} g(S_t),
\end{align*}
and then normalizing:
\begin{align*}
P(S_t^m|O_t) = \frac{g(S_t^m)}{\sum_{S^m_t} g(S^m_t)}.
\end{align*}
The marginalization of $g(S_t)$ can be performed efficiently using belief propagation [Kschischang et al., 2001], which can be understood as a message passing algorithm on a factor graph. The message from a node $x$ to a factor $f$ is given by:
\begin{align*}
m_{x \rightarrow f}(x) = \prod_{h \in n(x) \setminus \{f\}} m_{h \rightarrow x}(x),
\end{align*}
where $n(x)$ are the neighbours of $x$ in the factor graph. Note, in a factor graph the neighbours of a random variable are factors. Moreover, the message from a factor $f$ to a node $x$ is given by:
\begin{align*}
m_{f \rightarrow x}(x) = \sum_{Y} \Bigg( f(X) \prod_{y \in Y} m_{y \rightarrow f}(y)\Bigg),
\end{align*}
where $X = n(f)$ are the neighbours of $f$ in the factor graph, $Y = X \setminus \{x\}$ are all the neighbours of $f$ except $x$, and the summation is over all possible configurations of the variables in $Y$. Note, in a factor graph the neighbours of a factor are random variables. Once all the messages have been computed, the marginalization of $g(S_t)$ w.r.t. $S_t^m$ is given by the product of all the incoming messages of the node $S_t^m$, i.e.,
\begin{align*}
g(S_t^m) = \prod_{f \in n(S_t^m)} m_{f \rightarrow S_t^m}(S_t^m).
\end{align*}
§.§.§ Prediction step
The P-step is analogous to the prediction step of Bayesian filtering [Fox et al., 2003]. Given $P(S_{I}^s|O_t)$ for each $s \in \{1, \hdots, \nb{S}\}$ and an action $\bm{a}$, the goal of the P-step is to compute $P(S_{I{::}\bm{a}}^s|O_t)$ for each latent state $s \in \{1, \hdots, \nb{S}\}$ and $P(O_{I{::}\bm{a}}^o|O_t)$ for each future observation $o \in \{1, \hdots, \nb{O}\}$. For the sake of brevity, we let $J \delequal I{::}\bm{a}$. Let us start with the computation of $P(S_{I{::}\bm{a}}^s|O_t)$:
\begin{align*}
P(S_{I{::}\bm{a}}^s|O_t) \delequal P(S_J^s|O_t) &= \sum_{\rho_J^s} P(S_J^s, \rho_J^s |O_t) \tag{sum rule}\\
&= \sum_{\rho_J^s} P(S_J^s| \rho_J^s, O_t)P(\rho_J^s| O_t) \tag{product rule}\\
&= \sum_{\rho_J^s} P(S_J^s| \rho_J^s)P(\rho_J^s| O_t) \tag{d-separation}\\
&\approx \sum_{\rho_J^s} P(S_J^s| \rho_J^s) \prod_{i=1}^{\nb{\rho_J^s}} P(\rho_{J,i}^s| O_t) \tag{mean-field approximation}
\end{align*}
where $\nb{\rho_J^s}$ is the number of parents of $S^s_J$, and $\rho_{J,i}^s$ is the $i$-th parent of $S^s_J$. Importantly, $P(S_J^s| \rho_J^s)$ is known from the definition of the generative model. Moreover, since $\rho_{J,i}^s \in S_I$, then $P(\rho_{J,i}^s| O_t) = P(S_{I}^m|O_t)$ for some $m \in \{1, \hdots, \nb{S}\}$. Thus, $P(\rho_{J,i}^s| O_t)$ is given as input to the P-step, i.e., $P(\rho_{J,i}^s| O_t)$ is a known distribution. Similarly, the computation of $P(O_{I{::}\bm{a}}^o|O_t)$ proceeds as follows:
\begin{align*}
P(O_{I{::}\bm{a}}^o|O_t) \delequal P(O_J^o|O_t) &= \sum_{\rho_J^o} P(O_J^o, \rho_J^o |O_t) \tag{sum rule}\\
&= \sum_{\rho_J^o} P(O_J^o| \rho_J^o, O_t)P(\rho_J^o| O_t) \tag{product rule}\\
&= \sum_{\rho_J^o} P(O_J^o| \rho_J^o)P(\rho_J^o| O_t) \tag{d-separation}\\
&\approx \sum_{\rho_J^o} P(O_J^o| \rho_J^o) \prod_{i=1}^{\nb{\rho_J^o}} P(\rho_{J,i}^o| O_t) \tag{mean-field approximation}
\end{align*}
where $\nb{\rho_J^o}$ is the number of parents of $O^o_J$, and $\rho_{J,i}^o$ is the $i$-th parent of $O^o_J$. Importantly, $P(O_J^o| \rho_J^o)$ is known from the definition of the generative model. Moreover, since $\rho_{J,i}^o \in S_J$, then $P(\rho_{J,i}^o| O_t) = P(S_{J}^s|O_t)$ for some $s \in \{1, \hdots, \nb{S}\}$. Thus, $P(\rho_{J,i}^o| O_t)$ has already been computed during the first stage of the P-step and is a known distribution, c.f., derivation of $P(S_{I{::}\bm{a}}^s|O_t) \delequal P(S_J^s|O_t)$.
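The P-step therefore reduces to contracting known tensors with known vectors. The following minimal sketch (plain Python with numpy, hypothetical shapes and a noisy-identity-free parameterisation; it is not the implementation used in our experiments) performs one such prediction for a state with a single state parent, and for the corresponding observation:

import numpy as np

dim_s, dim_o, n_actions = 3, 3, 4

# Known mappings from the generative model (hypothetical values).
B_s = np.stack([np.eye(dim_s)] * n_actions, axis=-1)   # B[next, prev, action]
A_o = np.eye(dim_o, dim_s)                             # A[obs, state]

# Posterior over the parent state after the sequence I (input of the P-step).
q_S_I = np.array([0.1, 0.3, 0.6])
action = 1

# Predicted posterior over S_{I::a}: contract the action-sliced transition
# tensor with the posterior over its parent.
q_S_J = B_s[..., action] @ q_S_I

# Predicted posterior over O_{I::a}: contract the likelihood with q_S_J.
q_O_J = A_o @ q_S_J

When a variable has several parents, these matrix products generalise to multi-axis contractions (e.g., using numpy.einsum), multiplying in one factorised parent posterior per parent.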
§.§ Expected Free Energy
In this section, we discuss the definition of the expected free energy, which quantifies the cost of pursuing a particular sequence of actions and will be useful for planning, cf. Section <ref>. The expected free energy (see below) is composed of the risk and ambiguity terms. The risk terms quantify how much the posterior beliefs over future observations (computed by the P-step) diverge from the prior preferences of the agent. On the other hand, the ambiguity terms correspond to the expected uncertainty of the likelihood mapping, where the expectation is with respect to the posterior beliefs over states computed by the P-step.
First, we partition the set of observations $O_I = \{O_I^o \mid o = 1, \hdots, \nb{O}\}$ into disjoint subsets $X_i^I$, i.e., $O_I = X_1^I \cup \hdots \cup X_N^I$ and $X_i^I \cap X_j^I = \emptyset$ if $i \neq j$. Then, we define the prior preferences over the $i$-th subset of observations as: $V(X_i^I) = \text{Cat}(\bm{C}^i)$. This formulation allows us to define prior preferences over subsets of random variables, and will be useful in Section <ref>, where the agent needs to possess preferences that depend upon both the shape and $(X, Y)$ position of the object. Finally, the expected free energy, which needs to be minimised, is given by:
\begin{align}\label{eq:efe}
\bm{G}_I \delequal \sum_{i=1}^{N} \Bigg( \underbrace{D_{\mathrm{KL}}[P(X_i^I|O_t)||V(X_i^I)]}_{\text{risk of } i \text{-th set of observations}}\Bigg)\, +\,\, \sum_{o=1}^{\nb{O}} \Bigg( \underbrace{\mathbb{E}_{P(\rho_I^o|O_t)}[\text{H}[P(O_I^o | \rho_I^o)]]}_{\text{ambiguity of } o \text{-th observation}}\Bigg),
\end{align}
where $P(X_i^I|O_t)$ and $P(\rho_I^o|O_t)$ are the posteriors over the $i$-th subset of observations and the parent of $O_I^o$, respectively, and $P(O_I^o | \rho_I^o)$ is known from the generative model. Assuming a mean-field approximation, those posteriors are given by:
\begin{align*}
P(\rho_I^o|O_t) &\approx \prod_{i = 1}^{\nb{\rho_I^o}} P(\rho_{I,i}^o|O_t)\\
P(X_i^I|O_t) &\approx \prod_{O_I^o \in X_i^I} P(O_I^o|O_t)
\end{align*}
where $P(O_I^o|O_t)$ and $P(\rho_{I,i}^o|O_t)$ are the posteriors over $O_I^o$ and the $i$-th parent of $O_I^o$, respectively. Note, both $P(O_I^o|O_t)$ and $P(\rho_{I,i}^o|O_t)$ were computed during the P-step. The definition of the expected free energy given by (<ref>) may not be very intuitive. Fortunately, the special case where each subset contains a single observation, i.e., $X_o^I = O_I^o$, leads to the following equation:
\begin{align*}
\bm{G}_I \delequal \sum_{o=1}^{\nb{O}} \Bigg( \underbrace{D_{\mathrm{KL}}[P(O_I^o|O_t)||V(O_I^o)]}_{\text{risk of } o \text{-th observation}} \,\, +\,\, \underbrace{\mathbb{E}_{P(\rho_I^o|O_t)}[\text{H}[P(O_I^o | \rho_I^o)]]}_{\text{ambiguity of } o \text{-th observation}}\Bigg),
\end{align*}
which is the summation over all observations $O_I^o$ of the expected free energy of $O_I^o$, i.e., the risk of $O_I^o$ plus the ambiguity of $O_I^o$. Finally, our framework allows us to specify prior preferences over only a subset of the variables in $O_I$. For example, if a task contains four variables, i.e., $O_I^x$, $O_I^y$, $O_I^{shape}$ and $O_I^{scale}$, but it only makes sense to have preferences over three of them, i.e., $O_I^x$, $O_I^y$ and $O_I^{shape}$, then the prior preference over the fourth variable is set to the posterior over this random variable, i.e., $V(O_I^{scale}) \delequal P(O_I^{scale}|O_t)$. In other words, not having prior preferences over a random variable is viewed by our framework as liking whatever we predict will happen. Effectively, this renders the risk term associated with such a variable equal to zero, i.e.,
\begin{align*}
D_{\mathrm{KL}}[P(O_I^{scale}|O_t)||V(O_I^{scale})] = D_{\mathrm{KL}}[P(O_I^{scale}|O_t)||P(O_I^{scale}|O_t)] = 0.
\end{align*}
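To make the risk and ambiguity terms concrete, the following minimal sketch (plain Python with numpy, hypothetical numbers; it is not the implementation used in our experiments) evaluates the expected free energy of a single observation variable in this single-observation-per-subset case:

import numpy as np

eps = 1e-9                           # avoids taking the logarithm of zero

q_O = np.array([0.7, 0.2, 0.1])      # posterior over O_I^o (from the P-step)
V_O = np.array([0.1, 0.1, 0.8])      # prior preferences over O_I^o
q_S = np.array([0.6, 0.3, 0.1])      # posterior over the parent of O_I^o

A_o = np.eye(3) * 0.9 + 0.05         # P(O_I^o | s): noisy identity mapping
A_o /= A_o.sum(axis=0, keepdims=True)

# Risk: KL divergence between predicted outcomes and prior preferences.
risk = np.sum(q_O * np.log((q_O + eps) / (V_O + eps)))

# Ambiguity: expected entropy of the likelihood mapping under q_S.
entropy_per_state = -np.sum(A_o * np.log(A_o + eps), axis=0)
ambiguity = entropy_per_state @ q_S

G = risk + ambiguity                 # expected free energy of O_I^o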
§.§ Planning: the MCTS algorithm
In this section, we describe the planning algorithm used by $BTAI_{3MF}$. At the beginning of a trial when $t = 0$, the agent is provided with the initial observations $O_0$. The I-step is performed and returns the posterior over all latent states, i.e., $P(S_0^s|O_0)$ for all $s \in \{1, \hdots, \nb{S}\}$, according to the prior over the initial hidden states provided by the modeller, i.e., $P(S_0^s)$ for all $s \in \{1, \hdots, \nb{S}\}$, and the available observations $O_0$.
Then, we use the UCT criterion to determine which node in the tree should be expanded. Let the tree's root $TS_t$ be called the current node. If the current node has no children, then it is selected for expansion. Alternatively, the child with the highest UCT criterion becomes the new current node and the process is iterated until we reach a leaf node (i.e. a node from which no action has previously been selected). The UCT criterion [Browne et al., 2012] for the $j$-th child of the current node is given by:
\begin{align}\label{eq:UCT}
UCT_j = - \bar{\bm{G}}_j + C_{explore} \sqrt{\frac{\ln n}{n_j}},
\end{align}
where $\bar{\bm{G}}_j$ is the average expected free energy calculated with respect to the actions selected from the $j$-th child, $C_{explore}$ is the exploration constant that modulates the amount of exploration at the tree level, $n$ is the number of times the current node has been visited, and $n_j$ is the number of times the $j$-th child has been visited.
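A minimal sketch of this selection rule (plain Python, with a hypothetical dictionary-based node structure; it is not the implementation used in our experiments) is given below; the average $\bar{\bm{G}}_j$ is recovered as the aggregated cost divided by the visit count, as formalised at the end of this section:

import numpy as np

def uct_child(children, n_parent, c_explore=2.4):
    # children: list of dicts with aggregated cost "G_aggr" and visits "n".
    scores = [
        -c["G_aggr"] / c["n"] + c_explore * np.sqrt(np.log(n_parent) / c["n"])
        for c in children
    ]
    return int(np.argmax(scores))

# Example: two children of a node that has been visited five times.
children = [{"G_aggr": 3.2, "n": 2}, {"G_aggr": 1.5, "n": 3}]
best = uct_child(children, n_parent=5)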
Let $S_I$ be the (leaf) node selected by the above selection procedure. We then expand all the children of $S_I$, i.e., all the states of the form $S_{I{::}\bm{a}}$, where $\bm{a} \in \{1, ..., \nb{A}\}$ is an arbitrary action, $\nb{A}$ is the number of available actions, and $I{::}\bm{a}$ is the multi-index obtained by appending the action $\bm{a}$ at the end of the sequence defined by $I$. Next, we perform the P-step for each action $\bm{a}$, and obtain $P(S_{I{::}\bm{a}}^s|O_t)$ for each latent state $s \in \{1, \hdots, \nb{S}\}$ and $P(O_{I{::}\bm{a}}^o|O_t)$ for each future observation $o \in \{1, \hdots, \nb{O}\}$.
Then, we need to estimate the cost of (virtually) taking each possible action. The cost in this paper is taken to be the expected free energy given by (<ref>). Next, we assume that the agent will always perform the action with the lowest cost, and back-propagate the cost of the best (virtual) action toward the root of the tree. Formally, we write the update as follows:
\begin{align}\label{eq:backprop}
\forall K \in \mathbb{A}_I \cup \{I\}, \quad \bm{G}_K \leftarrow \bm{G}_K + \min_{\bm{a} \in \{1, ..., \nb{A}\}} \bm{G}_{I{::}\bm{a}},
\end{align}
where $I$ is the multi-index of the node that was selected for (virtual) expansion, and $\mathbb{A}_I$ is the set of all multi-indices corresponding to ancestors of $TS_I$. During the backpropagation, we also update the number of visits as follows:
\begin{align}\label{eq:backprop_n}
\forall K \in \mathbb{A}_I \cup \{I\}, \quad n_K \leftarrow n_K + 1.
\end{align}
If we let $\bm{G}^{aggr}_K$ be the aggregated cost of an arbitrary node $S_K$ obtained by applying Equation <ref> after each expansion, then we are now able to express $\bar{\bm{G}}_K$ formally as:
$$\bar{\bm{G}}_K = \frac{\bm{G}^{aggr}_K}{n_K}.$$
The planning procedure described above ends when the maximum number of planning iterations is reached.
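The backpropagation step can be sketched as follows (plain Python, hypothetical dictionary-based nodes with a parent pointer; it is not the implementation used in our experiments): the cost of the best expanded child is added to the aggregated cost of the selected node and of all its ancestors, whose visit counts are also incremented.

def backpropagate(expanded_children):
    # Cost of the best (virtual) action among the newly expanded children.
    cost_best = min(child["G"] for child in expanded_children)
    # Add this cost to the selected node and all of its ancestors, and
    # increment their visit counts, as in the two update equations above.
    node = expanded_children[0]["parent"]
    while node is not None:
        node["G_aggr"] += cost_best
        node["n"] += 1
        node = node["parent"]

# Example with a root, one selected leaf, and two expanded children.
root = {"G_aggr": 0.0, "n": 1, "parent": None}
leaf = {"G_aggr": 0.0, "n": 1, "parent": root}
backpropagate([{"G": 1.5, "parent": leaf}, {"G": 2.0, "parent": leaf}])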
§.§ Action selection
After performing planning, the agent needs to choose the action to perform in the environment. As discussed in Section 3.1 of [Browne et al., 2012], many possible mechanisms can be used to select the action to perform in the environment. $BTAI_{3MF}$ performs the action corresponding to the root child with the highest number of visits. Formally, this is expressed as:
\begin{align}\label{eq:action_selection}
\bm{a}^* = \argmax_{\bm{a} \in \{1, ..., \nb{A}\}} n_{(\bm{a})},
\end{align}
where $\bm{a}^*$ is the action performed in the environment, and $n_{(\bm{a})}$ is the number of visits of the root child corresponding to action $\bm{a}$.
§.§ Closing the action-perception cycle
After performing an action $\bm{a}^*$ in the environment, the agent receives a new observation $O_{t+1}$, and needs to use this observation to compute the posterior over the latent states at time $t+1$, i.e., $P(S^s_{t+1}|O_{t+1})$ for all $s \in \{1, \hdots, \nb{S}\}$. This can be achieved by performing the I-step, but requires the agent to have prior beliefs over the latent states at time $t+1$, i.e., $P(S^s_{t+1})$ for all $s \in \{1, \hdots, \nb{S}\}$, in addition to the new observation $O_{t+1}$ obtained from the environment. In this paper, we define those prior beliefs as:
\begin{align*}
P(S^s_{t+1}) = P(S_{I}^s|O_t), \text{ for all } s \in \{1, \hdots, \nb{S}\},
\end{align*}
where $I = (\bm{a}^*)$ is a sequence of actions containing the action $\bm{a}^*$ performed in the environment, $P(S_I^s|O_t)$ is the predictive posterior computed by the P-step when assuming that action $\bm{a}^*$ is performed. In other words, the predictive posterior $P(S_I^s|O_t)$ computed by the P-step at time $t$, is used as an empirical prior $P(S^s_{t+1})$ at time $t+1$. This empirical prior $P(S^s_{t+1})$ along with the new observation $O_{t+1}$ can then be used to compute the posterior $P(S^s_{t+1}|O_{t+1})$ for all $s \in \{1, \hdots, \nb{S}\}$. This posterior will be used to perform planning in the next action-perception cycle. Algorithm <ref> concludes this section by summarizing our approach.
Inputs:
$\quad env$ the environment,
$\quad O_0 = \{O^o_0 \mid o = 1, \hdots, \nb{O}\}$ the initial observations,
$\quad \bm{A} = \{\bm{A}^o \mid o = 1, \hdots, \nb{O}\}$ the likelihood mapping of each observation,
$\quad \bm{B} = \{\bm{B}^s \mid s = 1, \hdots, \nb{S}\}$ the transition mapping of each hidden state,
$\quad \bm{C} = \{\bm{C}^i \mid i = 1, \hdots, N\}$ the prior preferences of each subset of observations,
$\quad \bm{D}_0 = \{\bm{D}_0^s \mid s = 1, \hdots, \nb{S}\}$ the prior over each initial state,
$\quad N$ the number of planning iterations,
$\quad M$ the number of action-perception cycles.

$P(S_0^s|O_0) \leftarrow$ I-step($O_0$, $\bm{A}$, $\bm{D}_0$) // I-step from Section <ref>
$root \leftarrow$ CreateTreeNode(beliefs = $P(S_0^s|O_0)$, action = -1, cost = 0, visits = 1) // create the root node for the MCTS, where -1 is a dummy value
for each of the $M$ action-perception cycles:
$\quad$ for each of the $N$ planning iterations:
$\quad\quad$ $node \leftarrow$ SelectNode($root$) // using (<ref>) recursively
$\quad\quad$ $eNodes \leftarrow$ ExpandChildren($node$, $\bm{B}$) // P-step from Section <ref> for each action
$\quad\quad$ Evaluate($eNodes$, $\bm{A}$, $\bm{C}$) // compute (<ref>) for each expanded node
$\quad\quad$ Backpropagate($eNodes$) // using (<ref>) and (<ref>)
$\quad$ $\bm{a}^* \leftarrow$ SelectAction($root$) // using (<ref>)
$\quad$ $O_{t+1} \leftarrow$ $env$.Execute($\bm{a}^*$)
$\quad$ $child \leftarrow root.children[\bm{a}^*]$ // get the root child corresponding to $\bm{a}^*$
$\quad$ $P(S_{t+1}^s) \leftarrow child.beliefs$ // get the empirical prior $P(S^s_{t+1}) = \text{Cat}(\bm{D}_{t+1}^s)$
$\quad$ $P(S^s_{t+1}|O_{t+1}) \leftarrow$ I-step($O_{t+1}$, $\bm{A}$, $\bm{D}_{t+1}$) // I-step from Section <ref>
$\quad$ $root \leftarrow$ CreateTreeNode(beliefs = $P(S^s_{t+1}|O_{t+1})$, action = $\bm{a}^*$, cost = 0, visits = 1) // create the root node of the next action-perception cycle

$BTAI_{3MF}$: action-perception cycles (with relevant equations indicated in round brackets).
§ RESULTS
In this section, we compare our new approach to BTAI with variational message passing ($BTAI_{VMP}$) and BTAI with Bayesian filtering ($BTAI_{BF}$). Section <ref> presents the simplified version of the dSprites environment on which the agents are compared. Section <ref> describes how the task is modelled by the $BTAI_{VMP}$ agent and reports its performance. Finally, Sections <ref> and <ref> do the same for the $BTAI_{BF}$ and $BTAI_{3MF}$ agents. For the reader interested in implementing a custom $BTAI_{3MF}$ agent, Appendix A provides a tutorial on how to create such an agent using our framework, and Appendix B describes a graphical user interface (GUI) that can be used to inspect the model. This GUI displays the structure of the generative model and prior preferences, the posterior beliefs of each latent variable, the messages sent throughout the factor graph to perform inference, the information related to the MCTS algorithm, and the expected free energy (EFE) of each node in the future. It also shows how the EFE decomposes into the risk and ambiguity terms.
§.§ dSprites Environment
The dSprites environment is based on the dSprites dataset [Matthey et al., 2017] initially designed for analysing the latent representation learned by variational auto-encoders [Doersch, 2016]. The dSprites dataset is composed of images of squares, ellipses and hearts. Each image contains one shape (square, ellipse or heart) with its own scale, orientation, and $(X,Y)$ position. In the dSprites environment, the agent is able to move those shapes around by performing four actions (i.e., UP, DOWN, LEFT, RIGHT). To make planning tractable, the action selected by the agent is executed eight times in the environment before the beginning of the next action-perception cycle, i.e., the $X$ or $Y$ position is increased or decreased by eight between time step $t$ and $t+1$. The goal of the agent is to move all squares towards the bottom-left corner of the image and all ellipses and hearts towards the bottom-right corner of the image, c.f. Figure <ref>.
This figure illustrates the dSprites environment, in which the agent must move all squares towards the bottom-left corner of the image and all ellipses and hearts towards the bottom-right corner of the image. The red arrows show the behaviour expected from the agent.
Since BTAI is a tabular model whose likelihood and transition mappings are represented using matrices, the agent does not directly take images as inputs. Instead, the metadata of the dSprites dataset is used to specify the state space. In particular, the agent observes the type of shape (i.e., square, ellipse, or heart), the scale and orientation of the shape, as well as a coarse-grained version of the shape's true position. Importantly, the original images are composed of 32 possible values for both the $X$ and $Y$ positions of the shapes. A coarse-grained representation with a granularity of two means that the agent is only able to perceive $16 \times 16$ images, and thus the positions at coordinates $(0,0)$, $(0,1)$, $(1,0)$ and $(1,1)$ are indistinguishable. Figure <ref> illustrates the coarse-grained representation with a granularity of eight and the corresponding indices observed by the $BTAI_{VMP}$ and $BTAI_{BF}$ agents. Note that this modification of the observation space can be seen as a form of state aggregation [Ren and Krogh, 2002]. Finally, as shown in Figure <ref>, the prior preferences of the agent are specified over an absorbing row below the dSprites image. This absorbing row ensures that the agent selects the action “down" when standing in the “appropriate corner", i.e., the bottom-left corner for squares and the bottom-right corner for ellipses and hearts.
[Figure: on the left, a dSprites image overlaid with a grid of $8\times8$-pixel squares; on the right, three $4 \times 5$ tables of observation indices, one per shape, with indices 0–19 for the square, 20–39 for the heart, and 40–59 for the ellipse, running top to bottom within each column.]
This figure illustrates the observations made by the agent when using a coarse-grained representation with a granularity of eight on the input image. On the left, one can see an image from the dSprites dataset and a grid containing red squares of $8\times8$ pixels. Any positions within one of those $8\times8$ squares are indistinguishable from the perspective of the agent. Also, the bottommost row is an absorbing row used to specify the prior preferences of the agent, i.e., the green square is the goal state and the orange squares correspond to undesirable states. Finally, the three tables on the right contain the indices observed by the $BTAI_{VMP}$ and $BTAI_{BF}$ agents for each type of shape at each possible position.
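Based on the layout above, the single observation index could be computed as follows (a minimal sketch; the column-major layout and the helper function are assumptions inferred from the figure, not the paper's code):

def observation_index(shape, x, y, granularity=8, img_size=32):
    # 4 coarse columns, plus one extra (absorbing) row below the image.
    n_cols = img_size // granularity
    n_rows = n_cols + 1
    col, row = x // granularity, y // granularity
    return shape * n_cols * n_rows + col * n_rows + row

# Example: the second shape (heart), third coarse column, second coarse row.
print(observation_index(shape=1, x=16, y=8))   # -> 31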
The evaluation of the agent's performance is based on the reward obtained by the agent. Briefly, the agent receives a reward of $-1$, if it never enters the absorbing row or if it does so at the antipode of the appropriate corner. As the agent enters the absorbing row closer and closer to the appropriate corner, its reward increases until reaching a maximum of $1$. The percentage of the task solved (i.e., the evaluation metric) is calculated as follows:
$$P(\text{solved}) = \frac{\text{total rewards} + \text{number of runs}}{2.0 \times \text{number of runs}}.$$
Intuitively, the numerator shifts the rewards so that they are bounded between zero and two, and the denominator renormalises the reward to give a score between zero and one. A score of zero therefore corresponds to an agent always failing to enter the absorbing row or doing so at the antipode of the appropriate corner. In contrast, a score of one corresponds to an agent always entering the absorbing row through the appropriate corner.
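For instance, the metric can be computed as follows (a minimal sketch with hypothetical rewards):

# Rewards lie in [-1, 1]; the score shifts and rescales them into [0, 1].
rewards = [1.0, 0.5, -1.0, 1.0]                 # hypothetical rewards
p_solved = (sum(rewards) + len(rewards)) / (2.0 * len(rewards))
print(p_solved)                                 # -> 0.6875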
§.§ $BTAI_{VMP}$ modeling approach and results
In this section, we evaluate $BTAI_{VMP}$ [Champion et al., 2022, Champion et al., 2022] on the dSprites environment. As shown in Figure <ref>, $BTAI_{VMP}$ observes one index for each possible configuration of shape and $(X, Y)$ position. Importantly, this version of BTAI suffers from the exponential growth described in the introduction, and thus does not model the scale and orientation modalities. Also, to make the inference and planning process tractable, the granularity of the coarse-grained representation was set to four or eight. Table <ref> provides the value of each hyper-parameter used by $BTAI_{VMP}$ in this section. Note, the hyper-parameter values are the same for all BTAI models presented in this paper. Only the number of action-perception cycles and the number of planning iterations may vary from one experiment to the next.
Name Value
<NB_SIMULATIONS> 100
<NB_ACTION_PERCEPTION_CYCLES> 30
<NB_PLANNING_STEPS> 10, 25 or 50
<EXPLORATION_CONSTANT> 2.4
<PRECISION_PRIOR_PREFERENCES> 2
<PRECISION_ACTION_SELECTION> 100
<EVALUATION_TYPE> EFE
The value of each hyper-parameter used by $BTAI_{VMP}$ in this section. <NB_SIMULATIONS> is the number of simulations run during the experiment. <NB_ACTION_PERCEPTION_CYCLES> is the maximum number of actions executed in each simulation, after which the simulation is terminated. <NB_PLANNING_STEPS> is the number of planning iterations performed by the agent. <EXPLORATION_CONSTANT> is the exploration constant of the UCT criterion. <PRECISION_PRIOR_PREFERENCES> is the precision of the prior preferences. <PRECISION_ACTION_SELECTION> is the precision of the distribution used for action selection. <EVALUATION_TYPE> is the type of cost used to evaluate the node during the tree search. Those hyper-parameters can be used to re-run the experiments using the code of the following GitHub repository: <https://github.com/ChampiB/Experiments_AI_TS>.
Briefly, the agent is able to solve 88.5% of the task when using a granularity of eight, c.f. Table <ref>. To understand why $BTAI_{VMP}$ was not able to solve the task with 100% accuracy, let us consider the example of an ellipse at position $(24,31)$. With a granularity of eight, the agent perceives that the ellipse is in the bottom-right corner of the image, i.e., in the red square just above the goal state in Figure <ref>. From the agent's perspective, it is thus optimal to pick the action “down" to reach the goal state. However, in reality, the agent will not receive the maximum reward because its true $X$ position is $24$ instead of the optimal $X$ position of $31$.
Planning iterations P(solved) Time (sec)
10 0.813 0.859 $\pm$ 0.868
25 0.846 0.862 $\pm$ 0.958
50 0.885 1.286 $\pm$ 1.261
The percentage of the dSprites environment solved by the $BTAI_{VMP}$ agent when using a granularity of eight, c.f. Figure <ref>. The last column reports the average execution time required for one simulation and the associated standard deviation.
As shown in Table <ref>, we can improve the agent's performance by using a granularity of four. This allows the agent to differentiate between a larger number of $(X,Y)$ positions, i.e., it reduces the size of the red squares in Figure <ref>. With this setting, the agent is able to solve 96.9% of the task. However, when decreasing the granularity, the number of states goes up, and so do the width and height of the $\bm{A}$ and $\bm{B}$ matrices. As a result, more memory and computational time are required for the inference and planning process. This highlights a trade-off between the agent's performance and the amount of memory and time required. Indeed, a smaller granularity leads to better performance, but requires more time and memory.
Planning iterations P(solved) Time (sec)
10 0.859 3.957 $\pm$ 4.027
25 0.933 3.711 $\pm$ 4.625
50 0.969 5.107 $\pm$ 5.337
The percentage of the dSprites environment solved by the $BTAI_{VMP}$ agent when using a granularity of four. In this setting, there are $9 \times 8 \times 3 = 216$ states. The last column reports the average execution time required for one simulation and the associated standard deviation.
§.§ $BTAI_{BF}$ modeling approach and results
In this section, we evaluate $BTAI_{BF}$ [Champion et al., 2021] on the dSprites environment. As shown in Figure <ref>, $BTAI_{BF}$ observes one index for each possible configuration of shape and $(X, Y)$ position. Also, to make the inference and planning process tractable, the granularity of the coarse-grained representation was set to two, four or eight. Table <ref> provides the value of each hyper-parameter used by $BTAI_{BF}$ in this section. Note, the hyper-parameter values are the same for all BTAI models presented in this paper. Only the number of action-perception cycles and the number of planning iterations may vary from one experiment to the next.
Name Value
<NB_SIMULATIONS> 100
<NB_ACTION_PERCEPTION_CYCLES> 20
<NB_PLANNING_STEPS> 50
<EXPLORATION_CONSTANT> 2.4
<PRECISION_PRIOR_PREFERENCES> 1
<PRECISION_ACTION_SELECTION> 100
<EVALUATION_TYPE> EFE
The value of each hyper-parameter used by $BTAI_{BF}$ in this section. <NB_SIMULATIONS> is the number of simulations run during the experiment. <NB_ACTION_PERCEPTION_CYCLES> is the maximum number of actions executed in each simulation, after which the simulation is terminated. <NB_PLANNING_STEPS> is the number of planning iterations performed by the agent. <EXPLORATION_CONSTANT> is the exploration constant of the UCT criterion. <PRECISION_PRIOR_PREFERENCES> is the precision of the prior preferences. <PRECISION_ACTION_SELECTION> is the precision of the distribution used for action selection. <EVALUATION_TYPE> is the type of cost used to evaluate the node during the tree search. Those hyper-parameters can be used to re-run the experiments using the code of the following GitHub repository: <https://github.com/ChampiB/Branching_Time_Active_Inference>.
As shown in Table <ref>, the agent is able to solve: 86.1% of the task when using a granularity of eight, 97.7% of the task when using a granularity of four, and 98.6% of the task when using a granularity of two. However, as the performance improves from 86.1% to 98.6%, the computational time required to run each simulation skyrockets from around 50 milliseconds to around 17.5 seconds. In other words, a simulation with a granularity of two is 350 times slower than a simulation with a granularity of eight.
Planning iterations Granularity P(solved) Time (ms)
50 8 0.861 49.93 $\pm$ 36.4124
50 4 0.977 241.63 $\pm$ 118.379
50 2 0.986 17503.8 $\pm$ 12882.8
The percentage of the dSprites environment solved by the $BTAI_{BF}$ agent when using a granularity of eight, four and two. Note, when a granularity of two is used, there are $17 \times 16 \times 3 = 816$ possible states. The last column reports the average execution time required for one simulation and the associated standard deviation. Note that, unlike the previous tables, execution times are reported in milliseconds.
§.§ $BTAI_{3MF}$ modeling approach and results
In this section, we evaluate our new approach ($BTAI_{3MF}$) on the dSprites environment. In contrast to what is shown in Figure <ref>, $BTAI_{3MF}$ does not observe one index for each possible configuration of shape and $(X, Y)$ position. Instead, $BTAI_{3MF}$ has five observed variables representing the shape, the orientation, the scale, and the $X$ and $Y$ positions, respectively. Each of those observed variables has a hidden state counterpart, and each observation depends on its hidden state counterpart through an identity matrix. This parametrisation is common in the literature on active inference, see [Sajid et al., 2021] for an example. The transition mappings of the hidden variables representing the shape, orientation, and scale are defined as identity matrices, which forward the state value at time $t$ to the next time step $t + 1$. For the hidden variables representing the $X$ and $Y$ position of the shape, the transition is set to reflect the dynamics of the dSprites environment when the selected action is repeated eight times, i.e., if the action “DOWN" is selected, then the agent's position in $Y$ will be decreased by eight before the start of the next action-perception cycle [Fountas et al., 2020].
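Such a transition matrix could be constructed as in the following minimal sketch (plain Python with numpy; the action encoding and the boundary clipping are assumptions, the absorbing row is ignored for simplicity, and this is not the paper's code):

import numpy as np

n_y, n_actions, repeat = 32, 4, 8
UP, DOWN, LEFT, RIGHT = range(n_actions)        # assumed action encoding

B_y = np.zeros((n_y, n_y, n_actions))           # B[next, prev, action]
for a in range(n_actions):
    for y in range(n_y):
        if a == DOWN:
            y_next = max(y - repeat, 0)         # Y decreases by eight
        elif a == UP:
            y_next = min(y + repeat, n_y - 1)   # Y increases by eight
        else:
            y_next = y                          # LEFT/RIGHT leave Y unchanged
        B_y[y_next, y, a] = 1.0                 # deterministic dynamics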
The hyper-parameters used in those simulations are presented in Table <ref>. Note, the hyper-parameter values are the same for all BTAI models presented in this paper. Only the number of action-perception cycles and the number of planning iterations may vary from one experiment to the next.
Table <ref> shows the results obtained by $BTAI_{3MF}$ on the dSprites environment when running 100 trials. Due to the change in the format of representations, the agent exhibits little increase in execution time as the granularity decreases; however, in general, the capacity to solve the task increases with this reduction in granularity. When a granularity of one is used, the agent is able to solve the task perfectly with 150 planning iterations.
Note, the agent using a granularity of 1 and 150 planning iterations is as fast as the agent using a granularity of 1 and 50 planning iterations. This is because, as the number of planning iterations increases, the agent requires more computation time per action-perception cycle, but as the agent's performance on the task increases, the agent reaches the goal state faster, and therefore requires fewer action-perception cycles per simulation. To conclude, the agent with 150 planning iterations requires fewer action-perception cycles per simulation, but more time per action-perception cycle, than the agent with 50 planning iterations. The code relevant to this section is available at the following URL: <https://github.com/ChampiB/BTAI_3MF>.
Name Value
<NB_SIMULATIONS> 100
<NB_ACTION_PERCEPTION_CYCLES> 50
<NB_PLANNING_STEPS> 50 or 100 or 150
<EXPLORATION_CONSTANT> 2.4
<PRECISION_PRIOR_PREFERENCES> 1
<EVALUATION_TYPE> EFE
The value of each hyper-parameter used by $BTAI_{3MF}$ in this section. <NB_SIMULATIONS> is the number of simulations run during the experiment. <NB_ACTION_PERCEPTION_CYCLES> is the maximum number of actions executed in each simulation, after which the simulation is terminated. <NB_PLANNING_STEPS> is the number of planning iterations performed by the agent. <EXPLORATION_CONSTANT> is the exploration constant of the UCT criterion. <PRECISION_PRIOR_PREFERENCES> is the precision of the prior preferences. <EVALUATION_TYPE> is the type of cost used to evaluate the node during the tree search. Those hyper-parameters can be used to re-run the experiments using the code of the following GitHub repository: <https://github.com/ChampiB/BTAI_3MF>.
Planning iterations Granularity P(solved) Time (sec)
50 8 0.895 1.279 $\pm$ 12.8
50 4 0.977 1.279 $\pm$ 12.8
50 2 0.996 1.279 $\pm$ 12.8
50 1 0.72 2.559 $\pm$ 18.01
100 1 0.77 5.119 $\pm$ 25.209
150 1 1 2.559 $\pm$ 18.01
This table presents the percentage of the dSprites environment solved by the $BTAI_{3MF}$ agent when using a granularity of eight, four, two and one. Note, when a granularity of one is used, there are $33 \times 32 \times 3 \times 40 \times 6 = 760,320$ possible state configurations. The last column reports the average execution time required for one simulation and the associated standard deviation.
§ CONCLUSION
In this paper, we presented a new version of Branching Time Active Inference that allows for modelling of several observed and latent variables. Taken together, those variables constitute a temporal slice. Within a slice, the model is equipped with prior beliefs over the initial latent variables, and each observation depends on a subset of the latent variables through the likelihood mapping. Additionally, the latent states evolve over time according to the transition mapping that describes how each latent variable at time $t+1$ is generated from a subset of the hidden states at time $t$ and the action taken.
At the beginning of each trial, the agent makes an observation for each observed variable, and computes the posterior over the latent variables using belief propagation. Then, a Monte-Carlo tree search is performed to explore the space of possible policies. During the tree search, each planning iteration starts by selecting a node to expand using the UCT criterion. Then, the children of the selected node are expanded, i.e., one child per action. Next, the posterior over the latent variables of the expanded nodes is computed by performing forward predictions using the known transition mapping, and the posterior beliefs over the latent states of the node selected for expansion. Once the posterior is computed, the expected free energy can be computed and back-propagated through the tree. The planning process stops after reaching a maximum number of iterations.
In the results section, we compared our new approach, called $BTAI_{3MF}$, to two earlier versions of branching time active inference, named $BTAI_{VMP}$ [Champion et al., 2022, Champion et al., 2022] and $BTAI_{BF}$ [Champion et al., 2021]. Briefly, at the current time step $t$: $BTAI_{VMP}$ performs variational message passing (VMP) with a variational distribution composed of only one factor, $BTAI_{BF}$ performs exact inference using Bayes theorem, and $BTAI_{3MF}$ implements belief propagation to compute the marginal posterior over each latent variable. For the hidden variables in the future, $BTAI_{VMP}$ does the same mean-field approximation as at time step $t$ and performs VMP, $BTAI_{BF}$ performs Bayesian prediction to compute the posterior over the only latent variable being modelled, and likewise, $BTAI_{3MF}$ performs prediction to compute the posterior over all future latent variables.
Since none of the aforementioned approaches is equipped with deep neural networks, we compared them on a version of the dSprites environment in which the metadata of the dSprites dataset are used as inputs to the model instead of the dSprites images. The best performance obtained by $BTAI_{VMP}$ was to solve 96.9% of the task in 5.1 seconds. Importantly, $BTAI_{VMP}$ was previously compared to active inference as implemented in SPM, both theoretically and experimentally [Champion et al., 2022, Champion et al., 2022]. $BTAI_{BF}$ was able to solve 98.6% of the task but at the cost of 17.5 seconds of computation. Note, $BTAI_{BF}$ was using a granularity of two (i.e., 816 states) while $BTAI_{VMP}$ was using a granularity of four (i.e., 216 states), which is why $BTAI_{BF}$ seems to be three times slower than $BTAI_{VMP}$. In reality, if $BTAI_{BF}$ had been using a granularity of four, it would have been much faster than $BTAI_{VMP}$ while maintaining a similar performance, i.e., around 96.9% of the task solved. Finally, $BTAI_{3MF}$ outperformed both of its predecessors by solving the task completely (100%, granularity of one) in only 2.559 seconds. Importantly, $BTAI_{3MF}$ was able to model all the modalities of the dSprites environment, for a total of $760,320$ possible states.
In addition to the major boost in performance and computational time, $BTAI_{3MF}$ provides an improved modelling capacity. Indeed, the framework can now handle several observed and latent variables, and takes advantage of the factorisation of the generative model to perform inference efficiently. As described in detail in Appendix A, we also provide a high-level notation for the creation of $BTAI_{3MF}$ agents that aims to make our approach as straightforward as possible to apply to new domains. This high-level notational language allows the user to create models by simply declaring the variables they contain and the dependencies between those variables; the framework then performs the inference process automatically. Moreover, driven by the need for interpretability, we developed a graphical user interface to analyse the behaviour and reasoning of our agent, which is described in Appendix B.
There are two major directions of future research that may be explored to keep scaling up this framework. First, $BTAI_{3MF}$ is not yet equipped with deep neural networks (DNNs), and is therefore unable to handle certain types of inputs, such as images. In addition to the integration of DNNs into the framework, further research should be performed in order to learn useful sequences of actions. Typically, in the current version of $BTAI_{3MF}$, we built in the fact that each action should be repeated eight times in a row. This inductive bias works well in the context of the dSprites environment, but may be a limitation in other contexts.
It is also worth reflecting on how the $BTAI_{3MF}$ model sits with theories of brain function. In this respect, it is interesting to consider neural correlates of the “standard" approach that $BTAI_{3MF}$ is being placed in opposition to. As previously discussed, this standard active inference approach could be considered as monolithically tabular; that is, the key matrices, such as the likelihood mapping (the $\bm{A}$ matrix) and the transition mapping (the $\bm{B}$ matrix), grow in size exponentially with the number of states and observations. This is simply due to a combinatorial explosion, e.g. the set of all combinations of states grows intractably with the number of states.
How would the combinations of states in the monolithic tabular approach be represented in the brain? The obvious neural correlate would be conjunctive (binding) neurons [O'Reilly and Rudy, 2001], which become active when multiple feature values are present; for example, one might have a neural unit for every X, Y combination in the dSprites environment. If this is to be realised with a fully localist code, i.e. one unit for every combination, in the absence of any hierarchical structure, the required number of conjunctive units would explode in the same way as the $\bm{A}$ and $\bm{B}$ matrices do. This is why some models have proposed a binding resource that supports distributed (rather than localist) representations [Bowman and Wyble, 2007], which scale more tractably.
$BTAI_{3MF}$ avoids this combinatorial explosion by not combining features, enabling them to be represented separately. In a very basic sense, this separated representation is consistent with the observation that the brain contains distinct, physically separated feature maps, e.g., [Itti et al., 1998]. Thus, at least to some extent, different feature dimensions are processed separately in the brain, as they are in $BTAI_{3MF}$.
The time-slice idea in $BTAI_{3MF}$ assumes a kind of discrete synchronising global clock. Thus, even though features have been separated from one another and may be considered to execute in different parts of the system, they update in lock-step. That is, implicitly, time is a binder, it determines which values of different feature dimensions/states are associated, e.g. an X-dimension value is associated with a particular Y-dimension value because they are so assigned in the same temporal slice. In this sense, in $BTAI_{3MF}$, time synchronisation resolves the binding problem.
This aspect of $BTAI_{3MF}$ resonates with theories of binding based upon oscillatory synchrony [Uhlhaas et al., 2009]. These theories suggest that different feature dimensions are bound by the corresponding neurons firing in synchrony relative to an ongoing oscillation, with that ongoing oscillation potentially playing the role of a global clock. Such oscillatory synchrony can be seen as a way to resolve the binding problem that does not require conjunctive units.
Conjunction error experiments, e.g., [Botella et al., 2001], are also relevant here. In these experiments, participants make errors in associating multiple feature dimensions, perceiving illusory percepts: e.g., if a red K is presented before a blue A in a rapid serial visual presentation stream, in some cases a red A and a blue K are perceived. Firstly, these experiments re-emphasize that different feature dimensions are processed separately, as per $BTAI_{3MF}$: if feature dimensions were not separated, then conjunction errors could not happen. Additionally, though, these experiments suggest that there is not a “perfect" synchronising global clock, since if there were, there would not be any conjunction errors even despite the separation of feature dimensions. Generating such conjunction error patterns is an interesting topic for future $BTAI_{3MF}$ modelling work.
[Botella et al., 2001]
J. Botella, M. Suero, and M. I. Barriopedro. A model of the formation of illusory conjunctions in the time domain. Journal of Experimental Psychology: Human Perception and Performance, 27(6):1452–1467, December 2001. ISSN 0096-1523. URL <https://doi.org/10.1037//0096-1523.27.6.1452>.
[Botvinick and Toussaint, 2012]
Matthew Botvinick and Marc Toussaint. Planning as inference. Trends in Cognitive Sciences, 16(10):485–488, 2012. ISSN 1364-6613.
[Bowman and Wyble, 2007]
Howard Bowman and Brad Wyble. The simultaneous type, serial token model of temporal attention and working memory. Psychological Review, 114(1):38, 2007.
[Browne et al., 2012]
C. B. Browne, E. Powley, D. Whitehouse, S. M. Lucas, P. I. Cowling, P. Rohlfshagen, S. Tavener, D. Perez, S. Samothrakis, and S. Colton. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games, 4(1):1–43, 2012.
[Champion et al., 2021]
Théophile Champion, Marek Grześ, and Howard Bowman. Branching time active inference with Bayesian filtering, 2021.
[Champion et al., 2021]
Théophile Champion, Marek Grześ, and Howard Bowman. Realizing active inference in variational message passing: The outcome-blind certainty seeker. Neural Computation, 33(10):2762–2826, 2021. ISSN 0899-7667. URL <https://doi.org/10.1162/neco_a_01422>.
[Champion et al., 2022]
Théophile Champion, Howard Bowman, and Marek Grześ. Branching time active inference: Empirical study and complexity class analysis. Neural Networks, 2022a. ISSN 0893-6080.
[Champion et al., 2022]
Théophile Champion, Lancelot Da Costa, Howard Bowman, and Marek Grześ. Branching time active inference: The theory and its generality. Neural Networks, 151:295–316, 2022b. ISSN 0893-6080.
[Costa et al., 2020]
Lancelot Da Costa, Thomas Parr, Noor Sajid, Sebastijan Veselic, Victorita Neacsu, and Karl Friston. Active inference on discrete state-spaces: A synthesis, 2020.
[Cullen et al., 2018]
Maell Cullen, Ben Davey, Karl J. Friston, and Rosalyn J. Moran. Active inference in OpenAI Gym: A paradigm for computational investigations into psychiatric illness. Biological Psychiatry: Cognitive Neuroscience and Neuroimaging, 3(9):809–818, 2018. ISSN 2451-9022.
[Doersch, 2016]
Carl Doersch. Tutorial on variational autoencoders, 2016.
[FitzGerald et al., 2015]
Thomas H. B. FitzGerald, Raymond J. Dolan, and Karl Friston. Dopamine, reward learning, and active inference. Frontiers in Computational Neuroscience, 9:136, 2015. ISSN 1662-5188.
[Fountas et al., 2020]
Zafeirios Fountas, Noor Sajid, Pedro A. M. Mediano, and Karl Friston. Deep active inference agents using Monte-Carlo methods, 2020.
[Fox et al., 2003]
V. Fox, J. Hightower, Lin Liao, D. Schulz, and G. Borriello. Bayesian filtering for location estimation. IEEE Pervasive Computing, 2(3):24–33, 2003.
[Friston et al., 2016]
Karl Friston, Thomas FitzGerald, Francesco Rigoli, Philipp Schwartenbeck, John O'Doherty, and Giovanni Pezzulo. Active inference and learning. Neuroscience & Biobehavioral Reviews, 68:862–879, 2016. ISSN 0149-7634.
[Friston et al., 2017]
Karl J. Friston, Thomas Parr, and Bert de Vries. The graphical brain: Belief propagation and active inference. Network Neuroscience, 1(4):381–414, 2017. URL <https://doi.org/10.1162/NETN_a_00018>.
[Itti et al., 1998]
L. Itti, C. Koch, and E. Niebur. A model of saliency-based visual attention for rapid scene analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(11):1254–1259, 1998.
[Itti and Baldi, 2009]
Laurent Itti and Pierre Baldi. Bayesian surprise attracts human attention. Vision Research, 49(10):1295–1306, 2009. ISSN 0042-6989.
[Kschischang et al., 2001]
Frank R. Kschischang, Brendan J. Frey, and H.-A. Loeliger. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498–519, 2001.
[Matthey et al., 2017]
Loic Matthey, Irina Higgins, Demis Hassabis, and Alexander Lerchner. dSprites: Disentanglement testing sprites dataset. <https://github.com/deepmind/dsprites-dataset/>, 2017.
[Millidge, 2019]
Beren Millidge. Combining active inference and hierarchical predictive coding: A tutorial introduction and case study, 2019. URL <https://doi.org/10.31234/osf.io/kf6wc>.
[O'Reilly and Rudy, 2001]
Randall C. O'Reilly and Jerry W. Rudy. Conjunctive representations in learning and memory: Principles of cortical and hippocampal function. Psychological Review, 108(2):311, 2001.
[Pezzato et al., 2020]
Corrado Pezzato, Carlos Hernandez, and Martijn Wisse. Active inference and behavior trees for reactive action planning and execution in robotics, 2020.
[Ren and Krogh, 2002]
Zhiyuan Ren and B. H. Krogh. State aggregation in Markov decision processes. In Proceedings of the 41st IEEE Conference on Decision and Control, volume 4, pages 3819–3824, 2002.
[Sajid et al., 2021]
Noor Sajid, Philip J. Ball, Thomas Parr, and Karl J. Friston. Active inference: Demystified and compared. Neural Computation, 33(3):674–712, 2021. ISSN 0899-7667. URL <https://doi.org/10.1162/neco_a_01357>.
[Sancaktar et al., 2020]
Cansu Sancaktar, Marcel van Gerven, and Pablo Lanillos. End-to-end pixel-based deep active inference for body perception and action, 2020.
[Schwartenbeck et al., 2018]
Philipp Schwartenbeck, Johannes Passecker, Tobias U. Hauser, Thomas H. B. FitzGerald, Martin Kronbichler, and Karl Friston. Computational mechanisms of curiosity and goal-directed exploration. bioRxiv, 2018. URL <https://www.biorxiv.org/content/early/2018/09/07/411272>.
[Uhlhaas et al., 2009]
Peter Uhlhaas, Gordon Pipa, Bruss Lima, Lucia Melloni, Sergio Neuenschwander, Danko Nikolić, and Wolf Singer. Neural synchrony in cortical networks: History, concept and current status. Frontiers in Integrative Neuroscience, 3:17, 2009.
[Winn and Bishop, 2005]
John Winn and Christopher Bishop. Variational message passing. Journal of Machine Learning Research, 6:661–694, 2005.
[Yedidia, 2011]
Jonathan S. Yedidia. Message-passing algorithms for inference and optimization. Journal of Statistical Physics, 145(4):860–890, November 2011. ISSN 1572-9613. URL <https://doi.org/10.1007/s10955-011-0384-7>.
[Çatal et al., 2020]
Ozan Çatal, Tim Verbelen, Johannes Nauta, Cedric De Boom, and Bart Dhoedt. Learning perception and planning with deep active inference, 2020.
§ APPENDIX A: HOW TO CREATE A $BTAI_{3MF}$ AGENT?
In this appendix, we describe how to build a $BTAI_{3MF}$ agent using our framework. The relevant code can be found in the file <main_BTAI_3MF.py> at the following URL: <https://github.com/ChampiB/BTAI_3MF>. Any script running a $BTAI_{3MF}$ agent must start by instantiating an environment in which the agent will be run. Our code provides an implementation of the dSprites environment, which can be created as follows:
# Create the environment.
env = dSpritesEnv(granularity=1, repeat=8)
env = dSpritesPreProcessingWrapper(env)
The first line creates the dSprites environment, and the second makes sure that the observations generated by the environment are in the format expected by the agent. Once the environment has been created, we need to define the parameters of the model. Assume that we want to have a latent variable $S_t^{shape}$ representing the shape in the current image. This variable can take three values, i.e., zero for squares, one for ellipses and two for hearts. In this case, the parameters of the prior over $S_t^{shape}$ may be created as:
# Create the parameters of the prior over the latent variable shape.
d = {}
d["S_shape"] = torch.tensor([0.2, 0.3, 0.5])
The first line above creates a python dictionary, the second line adds a vector of parameters in the dictionary. This vector can be accessed using the key “S_shape", which corresponds to the name of the latent variable. The values in d[“S_shape"] mean that a priori the agent believes it will observe a square with probability 0.2, an ellipse with probability 0.3, and a heart with probability 0.5. Also, by convention, the name of a latent variable must start with “S_". Similarly, if we assume that the shape is provided to the agent through an observed variable $O_t^{shape}$, we can create the parameters of the likelihood mapping for this variable as:
# Create the parameters of the likelihood mapping for the shape variable.
a = {}
a["O_shape"] = torch.eye(3)
The first line above creates a python dictionary, and the second line adds a 3$\times$3 identity matrix to the dictionary (note, in practice the identity matrix is made noisy to avoid taking the logarithm of zero). This reflects the fact that there is a one-to-one relationship between the value taken by $S_t^{shape}$ and $O_t^{shape}$. Also, by convention, the name of an observation must start with “O_". Since defining all the parameters manually can be tedious, our framework provides built-in functions that return the model parameters for the dSprites environment. Using those functions, the parameters can be retrieved as follows:
# Define the parameters of the generative model.
a = env.a()
b = env.b()
c = env.c()
d = env.d(uniform=True)
Once all the parameters have been created, it is time to define the structure of the generative model. This can be done using a temporal slice builder, which is an object used to facilitate the creation of a temporal slice. First, we need to create the builder as follows:
# Create the temporal slice builder.
ts_builder = TemporalSliceBuilder("A_1", env.n_actions)
The builder takes two parameters, i.e., the name of the action random variable (i.e., “A_1"), which must start with “A_", and the number of possible actions (i.e., env.n_actions = 4). Then, we need to tell the builder which state variables should be created, and what the parameters of the prior beliefs over those variables are. For the dSprites environment, this can be done as follows:
# Add the latent states of the model to the temporal slice.
ts_builder.add_state("S_pos_x", d["S_pos_x"]) \
    .add_state("S_pos_y", d["S_pos_y"]) \
    .add_state("S_shape", d["S_shape"]) \
    .add_state("S_scale", d["S_scale"]) \
    .add_state("S_orientation", d["S_orientation"])
The function “add_state" adds a state variable to the temporal slice. The first parameter of this function is the name of the state to be added, and the second argument is the parameters of the prior beliefs over this new state. Next, we need to add the variables corresponding to the observations made by the agent. For the dSprites environment, this can be done as follows:
# Define the likelihood mapping of the temporal slice.
ts_builder.add_observation("O_pos_x", a["O_pos_x"], ["S_pos_x"]) \
    .add_observation("O_pos_y", a["O_pos_y"], ["S_pos_y"]) \
    .add_observation("O_shape", a["O_shape"], ["S_shape"]) \
    .add_observation("O_scale", a["O_scale"], ["S_scale"]) \
    .add_observation("O_orientation", a["O_orientation"], ["S_orientation"])
The function “add_observation" adds an observation variable to the temporal slice. The first parameter of this function is the name of the observation to be added, the second argument is the parameters of the likelihood mapping for this new observation, and the third parameter is the list of parents on which the observation depends. The next step is the definition of the transition mapping for each hidden state, which can be performed as follows:
# Define the transition mapping of the temporal slice.
ts_builder.add_transition("S_pos_x", b["S_pos_x"], ["S_pos_x", "A_1"]) \
    .add_transition("S_pos_y", b["S_pos_y"], ["S_pos_y", "A_1"]) \
    .add_transition("S_shape", b["S_shape"], ["S_shape"]) \
    .add_transition("S_scale", b["S_scale"], ["S_scale"]) \
    .add_transition("S_orientation", b["S_orientation"], ["S_orientation"])
The function “add_transition" adds a transition mapping to the temporal slice. The first parameter of this function is the name of the state for which the transition is defined, the second argument is the parameters of the transition mapping for this state, and the third parameter is the list of parents on which the state depends. Importantly, in the above snippet of code, only the states representing the x and y position of the shape depend on the action variable “A_1". The final step is the definition of the prior preferences of the agent, which can be done as follows:
# Define the prior preferences of the temporal slice.
ts_builder.add_preference(["O_pos_x", "O_pos_y", "O_shape"], c["O_shape_pos_x_y"])
The function “add_preference" adds some prior preferences to the temporal slice. The first parameter of this function is the list of observations for which the prior preferences are defined, and the second argument is the parameters of the prior preferences over those observations. At this stage, the initial temporal slice can be built:
# Create the initial temporal slice.
ts = ts_builder.build()
Once the initial temporal slice has been created, it is possible to instantiate the agent and implement the action-perception cycle as follows:
# Create the agent.
agent = BTAI_3MF(ts, max_planning_steps=150, exp_const=2.4)

# Implement the action-perception cycles.
n_trials = 100
for i in range(n_trials):
    obs = env.reset()
    while not env.done():
        action = agent.step()
        obs = env.execute(action)
        agent.update(action, obs)
Most of the above code is self-explanatory. Put simply, this code runs “n_trials" simulations of the dSprites environment. The line “action = agent.step()" performs inference, planning and action selection. The line “obs = env.execute(action)" executes the selected action in the environment, and the line “agent.update(action, obs)" updates the agent so that it takes into account the action performed in the environment and the observation received.
§ APPENDIX B: HOW TO INSPECT A $BTAI_{3MF}$ AGENT?
In this appendix, we describe how to analyse a $BTAI_{3MF}$ agent using our graphical user interface (GUI). The relevant code can be found in the file <analysis_BTAI_3MF.py> at the following URL: <https://github.com/ChampiB/BTAI_3MF>. The first step is to create the environment and agent as described in Appendix A. Then, we create a GUI object and run the main loop as follows:
# Create the GUI for analysis.
gui = GUI(env, agent)
The above code should open a graphical user interface as shown in Figure <ref>. When clicking on the node of the current temporal slice $TS(t)$, one can obtain additional information about this temporal slice, cf. Figure <ref>. When clicking on the button named “Next planning iteration" in Figure <ref>, a planning iteration is performed and the tree displayed on the right-hand side of this frame is updated as shown in Figure <ref>. When clicking on the root's children, e.g., “TS(1)", it is possible to navigate through the tree created by the MCTS algorithm as shown in Figure <ref>. When “TS(1)" is displayed as the new root as in Figure <ref>, clicking on “TS(1)" again will display the information of this node as depicted by Figure <ref>. Finally, Figure <ref> shows how the ambiguity term of the expected free energy can be decomposed into its component parts.
This figure illustrates the visualisation frame of the GUI used to analyse a $BTAI_{3MF}$ agent. The image corresponding to the current state of the environment is displayed in the upper-left corner. Under the image are four buttons allowing the user to: reset the environment and agent, perform the next planning iteration, perform all the remaining planning iterations, and perform the current best action in the environment. Finally, on the right-hand side of the image is a depiction of the MCTS planning, where $TS(t)$ represents the current temporal slice. At the moment, the current temporal slice has no children, and therefore its children are displayed in orange with the text “None". Additionally, the current slice has no parent because it is the tree's root. Therefore, the arrow above the $TS(t)$ node is also orange.
This figure illustrates the visualisation frame of the GUI used to analyse a $BTAI_{3MF}$ agent after performing one planning iteration. The children of the root node are now available. One of them is displayed in green; it corresponds to the best action found so far by the MCTS algorithm. The root node has a red square surrounding it, which means that it was selected for expansion by the MCTS algorithm.
This figure illustrates the frame displaying the information of the current temporal slice of the $BTAI_{3MF}$ agent. Six widgets are displayed. The first displays the structure of the likelihood model using the factor graph formalism. On this graph, we see that the model is composed of five observations and five hidden states. Each observation depends on only one hidden state. The second widget displays the structure of the transition mapping. We see that only two hidden states depend on the action taken by the agent, i.e., the hidden states corresponding to the X and Y position of the shape. The third widget shows the structure of the prior preferences. Here, there is only one factor over three random variables, i.e., the shape and its (X, Y) position. Note that when moving the mouse over a variable in the likelihood, transition, or prior preference widget, the complete name of the variable is displayed, e.g., when moving over “S1" the label “S_shape" is displayed. The fourth widget illustrates the posterior over the latent variable corresponding to the x position of the shape. The random variable whose posterior is displayed can be changed either by using the combo box in the bottom-right corner of the widget or by clicking on a latent variable in the likelihood model widget. The fifth widget displays information related to the Monte-Carlo tree search. Finally, the last widget illustrates the message sent from the observation variable corresponding to the X position of the shape to its likelihood factor.
This figure illustrates what happens when clicking on the child “TS(1)" in Figure <ref>. Put simply, “TS(1)" becomes the new root and we see that its children have not been expanded yet. Additionally, the arrow above the “TS(1)" node is gray, meaning that this node has a parent, i.e., “TS(t)". Clicking on this arrow leads us back to Figure <ref>.
This figure illustrates what happens when clicking on “TS(1)" in Figure <ref>. Most of the widgets have already been explained, with the exception of the one in the bottom-right corner, which displays how the expected free energy decomposes into risk (blue box) and ambiguity (red box). When clicking on the blue or red box, the decomposition of the risk or ambiguity term is displayed, as shown in Figure <ref>.
This figure illustrates how the ambiguity term decomposes into the ambiguity of the likelihood of each observed variable, i.e., the ambiguity of “O_shape" in blue, “O_scale" in red, “O_orientation" in orange, “O_pos_x" in green, and “O_pos_y" in gray.
§ APPENDIX C: SUM-RULE, PRODUCT-RULE AND D-SEPARATION CRITERION.
In this appendix, we explain three important properties that are used in the core of the paper, namely: the sum-rule and product-rule of probability and the d-separation criterion.
§.§ Sum-rule of probability
Given a set of random variables $X = \{X_1, ..., X_n\}$ and a joint distribution $P(X_1, ..., X_n)$ over $X$, the sum-rule allows us to sum out a subset of the random variables. Here are a few examples:
\begin{align*}
P(X_1, ..., X_{n-1}) &= \sum_{X_n} P(X_1, ..., X_n),\\
P(X_1, ..., X_{n-2}) &= \sum_{X_{n-1}} \sum_{X_n} P(X_1, ..., X_n),\\
P(X_1, ..., X_{n-3}) &= \sum_{X_{n-2}} \sum_{X_{n-1}} \sum_{X_n} P(X_1, ..., X_n).
\end{align*}
Note that the sum-rule can also be used with a conditional distribution $P(X_1, ..., X_n|Y_1, ..., Y_m)$, for example:
\begin{align*}
P(X_1, ..., X_{n-1}|Y_1, ..., Y_m) &= \sum_{X_n} P(X_1, ..., X_n|Y_1, ..., Y_m),\\
P(X_1, ..., X_{n-2}|Y_1, ..., Y_m) &= \sum_{X_{n-1}} \sum_{X_n} P(X_1, ..., X_n|Y_1, ..., Y_m),\\
P(X_1, ..., X_{n-3}|Y_1, ..., Y_m) &= \sum_{X_{n-2}} \sum_{X_{n-1}} \sum_{X_n} P(X_1, ..., X_n|Y_1, ..., Y_m).
\end{align*}
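As a concrete illustration, the sum-rule amounts to summing a probability tensor over axes (a minimal sketch in Python; the tensor layout is our own convention, not part of the framework):

import torch

# Joint distribution over three binary variables X1, X2, X3, one axis each.
P = torch.rand(2, 2, 2)
P = P / P.sum()

P_x1_x2 = P.sum(dim=2)        # P(X1, X2) = sum over X3
P_x1 = P.sum(dim=(1, 2))      # P(X1)     = sum over X2 and X3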
§.§ Product-rule of probability
Given a set of random variables $X = \{X_0, ..., X_n\}$ and a joint distribution $P(X_0, ..., X_n)$ over $X$, the product-rule allows us to factorise the joint into a product of factors without making any conditional independence assumptions about $P(X_0, ..., X_n)$. More formally:
\begin{align*}
P(X_0, ..., X_n) &= P(X_n)\prod_{i = 0}^{n-1} P(X_i|X_{i+1:n}),
\end{align*}
where $X_{i:j} = \{X_i, ..., X_j\}$ is the set of random variables containing all the variables between $X_i$ and $X_j$ (inclusive). Note that the product-rule can also be used with a conditional distribution $P(X_0, ..., X_n|Y_1, ..., Y_m)$:
\begin{align*}
P(X_0, ..., X_n|Y_1, ..., Y_m) &= P(X_n|Y_1, ..., Y_m)\prod_{i = 0}^{n-1} P(X_i|X_{i+1:n}, Y_1, ..., Y_m).
\end{align*}
§.§ The d-separation criterion
The d-separation criterion is a tool that can be used to check whether two sets of random variables ($X$ and $Y$) are independent given a third set of random variables $Z$. More formally, the d-separation criterion is a tool to check whether $X \indep Y\,| \,Z$. Knowing that $X \indep Y\,|\,Z$ holds in a distribution $P$ is useful because if $X \indep Y\,|\,Z$, then:
\begin{align*}
P(X, Y|Z) &= P(X| Y, Z)P(Y|Z) \tag{product-rule}\\
&= P(X|Z)P(Y|Z).\tag{$X \indep Y \,|\, Z$}
\end{align*}
First, let $G = (\mathcal{X}, \mathcal{E})$ be a graph over a set of nodes $\mathcal{X}$ connected by a set of directed edges $\mathcal{E}$. Given two nodes in the graph (i.e., $N_i, N_j \in \mathcal{X}$), we write: (i) $N_i \rightarrow N_j$ if there is a directed edge from $N_i$ to $N_j$ in the graph, (ii) $N_i \leftarrow N_j$ if the graph contains a directed edge from $N_j$ to $N_i$, and (iii) $N_i \rightleftarrows N_j$ if (i) or (ii) holds. Second, we say that there is a trail between two nodes (i.e., $N_1, N_n$) in the graph if there is a sequence of distinct nodes $N = (N_1, ..., N_n)$ such that $N_i \rightleftarrows N_{i+1}$ holds for all $i \in \{1, ..., n-1\}$. Third, we say that a trail between $N_1$ and $N_n$ is active if: (a) each time there is a v-structure (i.e., $N_{i-1} \rightarrow N_i \leftarrow N_{i+1}$) in the trail, either $N_i$ or (at least) one of its descendants is in $Z$, and (b) no other node along the trail is in $Z$. Finally, we say that $X$ and $Y$ are d-separated by $Z$ if for all $X_i \in X$ and $Y_i \in Y$ there is no active trail between $X_i$ and $Y_i$ (given $Z$).
Using our terminology, the d-separation criterion states that if $X$ and $Y$ are d-separated by $Z$ in a graph $G$ representing the factorisation of a distribution $P$, then $X \indep Y\,|\,Z$ holds in the distribution $P$. Intuitively, the d-separation criterion helps us determine whether $X \indep Y\,|\,Z$ holds in $P$ by looking at the topology of the graph $G$. For example, consider the Bayesian network illustrated in Figure <ref>, and let $P$ be the joint distribution represented by this Bayesian network. Using the product rule, we get:
\begin{align*}
P(A, B, C, D, E, F) &= P(F|A, B, C, D, E)P(E|A, B, C, D)P(C|A, B, D)P(D|A, B)P(B|A)P(A).
\end{align*}
Note that all trails between $C$ and $A, D$ are blocked by $B$, i.e., there are no active trails between $C$ and $A, D$ given $B$. Thus, we have $C \indep A, D\,|\,B$ and:
\begin{align*}
P(A, B, C, D, E, F) &= P(F|A, B, C, D, E)P(E|A, B, C, D)\bm{P(C|B)}P(D|A, B)P(B|A)P(A).
\end{align*}
Moreover, there is no active trail between $B$ and $A$ given $\emptyset$, therefore $B \indep A\,|\,\emptyset$ and:
\begin{align*}
P(A, B, C, D, E, F) &= P(F|A, B, C, D, E)P(E|A, B, C, D)P(C|B)P(D|A, B)\bm{P(B)}P(A).
\end{align*}
Using the same reasoning, one can see that $F \indep A, B, C, D\,|\,E$ and thus:
\begin{align*}
P(A, B, C, D, E, F) &= \bm{P(F|E)}P(E|A, B, C, D)P(C|B)P(D|A, B)P(B)P(A).
\end{align*}
Finally, using the d-separation criterion one more time leads to the following factorisation of $P$:
\begin{align*}
P(A, B, C, D, E, F) &= P(F|E)\bm{P(E|B, D)}P(C|B)P(D|A, B)P(B)P(A).
\end{align*}
[Figure: a Bayesian network over the nodes $A$, $B$, $C$, $D$, $E$, $F$ with directed edges $A \rightarrow D$, $B \rightarrow D$, $B \rightarrow E$, $B \rightarrow C$, $D \rightarrow E$, and $E \rightarrow F$.]
This figure illustrates a Bayesian network in which the following independence assumptions hold: $A \indep B\,|\, \emptyset$; $A, D \indep C\,|\,B$; and $A \indep E\,|\,D, B, C$. In contrast, the following independence assumptions do not hold: $A \indep B\,|\, D$; $A \indep E\,|\,B, C$; and $A \indep B\,|\,E$.
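The d-separation criterion can also be checked mechanically. Below is a minimal reachability-based sketch applied to the Bayesian network of the figure above (the traversal follows the standard active-trail rules; all function and variable names are ours):

from collections import deque

def d_separated(children, X, Y, Z):
    # children: dict mapping each node to the list of its children.
    parents = {n: [] for n in children}
    for n, cs in children.items():
        for c in cs:
            parents[c].append(n)
    # Nodes in Z or with a descendant in Z: the middle node of an active
    # v-structure must belong to this set.
    anc_z, stack = set(Z), list(Z)
    while stack:
        for p in parents[stack.pop()]:
            if p not in anc_z:
                anc_z.add(p)
                stack.append(p)
    # Traverse (node, direction) states; 'up' means we arrived from a child.
    reachable, visited = set(), set()
    queue = deque((x, 'up') for x in X)
    while queue:
        n, d = queue.popleft()
        if (n, d) in visited:
            continue
        visited.add((n, d))
        if n not in Z:
            reachable.add(n)
        if d == 'up' and n not in Z:
            queue.extend((p, 'up') for p in parents[n])
            queue.extend((c, 'down') for c in children[n])
        elif d == 'down':
            if n not in Z:
                queue.extend((c, 'down') for c in children[n])
            if n in anc_z:
                queue.extend((p, 'up') for p in parents[n])
    return not (reachable & set(Y))

G = {'A': ['D'], 'B': ['C', 'D', 'E'], 'C': [], 'D': ['E'], 'E': ['F'], 'F': []}
print(d_separated(G, {'C'}, {'A', 'D'}, {'B'}))  # True:  C indep A, D given B
print(d_separated(G, {'B'}, {'A'}, set()))       # True:  B indep A given the empty set
print(d_separated(G, {'A'}, {'B'}, {'D'}))       # False: conditioning on D activates A -> D <- B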
$\mathbb{V}^{\varepsilon_{\angle}}_{\text{conc}}(\,\cdot\,)$ | $\mathbb{V}^{\varepsilon_{\angle}}_{\text{conc}}(F)$ returns all $v\in\mathbb{V}(F)$ that have been flagged concave according to $\varepsilon_{\angle}$
$\mathcal{Q}(\,\cdot\,)$ | Operator providing a measure $\mathcal{Q}(C)$ for the straightness of a curve $C\in C^{2}([0,1])$
$\mathcal{Q}^{\mu}(\,\cdot\,)$ | Operator favouring curves $C$ that connect two concave corners
$\mathcal{T}_{i}=(V_{i},E_{i},\mathcal{Q}_{i})$ | Template graph with vertices $V_{i}$, edges $E_{i}$ and quadrangular faces $\mathcal{Q}_{i}$
$\mathbb{T}$ | Complete pre-computed catalogue of templates $T_{i}$
$\phi_{i}:\partial E_{i}\rightarrow F_{i}$ | Boundary correspondence between boundary edges $\partial E_{i}$ of $T_{i}\in\mathcal{T}$ and the face $F_{i}\in\mathcal{F}$
$\operatorname{val}(\,\cdot\,)$ | Operator returning the valence of a vertex $v\in V$ in $G=(V,E,\mathcal{F})$
$F_{e}$ | The set of faces $F\in\mathcal{F}$ with $\pm e\in F$
$F_{\mathcal{T}}$ | Subset of faces to which a template has been assigned
$G^{\square}=(V^{\square},E^{\square},\mathcal{F}^{\square})$ | Canonical template skeleton graph of $G$
$E_{\mathcal{T}}$ | Subset of edges associated with at least one template $T_{i}\in\mathcal{T}$
$L(e)$ | Length of the piecewise linear curve resulting from the points $p=w(e)$
$L^{\mu_{\mathcal{T}},\mu_{\partial}}(\,\cdot\,)$ | Scaled length function, assigning larger values to edges $e\notin E_{\mathcal{T}}$ and $e\in\partial E$
$\varepsilon_{L}$ | Parameter $0<\varepsilon_{L}\leq 1$ that marks an edge eligible for splitting if $L(e)\geq\varepsilon_{L}L_{\max}$
${\mathbf{x}}_{h}^{\mathbf{r}}:{\hat{\Omega}}_{h}^{\mathbf{r}}\rightarrow{\Omega}_{h}$ | Surrogate harmonic map between ${\hat{\Omega}}_{h}^{\mathbf{r}}\approx{\hat{\Omega}}^{\mathbf{r}}$ and ${\Omega}_{h}\approx{\Omega}^{S}$ computed using Floater’s algorithm
$\theta(\hat{v})$ | Preferred angle created by point sets incident to ${\mathbf{x}}_{h}\circ{\mathbf{r}}(\hat{v})$
Section 3
---|---
Symbol | Property
$\Xi^{0}$ | Base knotvector with specified number of interior knots
$r_{j}:=\|s(\xi_{j})-p_{j}\|$ | $l^{2}$ mismatch between the spline fit and the $j$-th fitting point $p_{j}$
$\mu_{LS}$ | Threshold value that flags a knotspan for refinement if $r_{j}\geq\mu_{LS}$
$w^{\Xi}(\,\cdot\,)$ | Weight function assigning to $e\in E$ the knotvector $\Xi$ associated with $s=w^{S}(e)$
$w^{\Xi}_{i}(\,\cdot\,)$ | Weight function assigning a knotvector to each $\hat{e}\in\partial E_{i}$ of $\mathcal{T}_{i}=(V_{i},E_{i},\mathcal{Q}_{i})$
$\mathcal{V}_{h,i}$ | Canonical spline space on ${\hat{\Omega}}_{i}$ under the layout $T_{i}\in\mathcal{T}$ and the knotvectors assigned by $w_{i}^{\Xi}(\,\cdot\,)$
$\mathcal{L}_{\eta}^{\mu}(\,\cdot\,,\,\cdot\,)$ | Semi-linear form used for computing an inversely harmonic map
Section 5
---|---
Symbol | Property
$W(\,\cdot\,)$ | Winslow function
$W_{\varepsilon}(\,\cdot\,)$ | $l^{2}$ Regularised Winslow function
$\mathcal{R}_{\varepsilon}(\,\cdot\,)$ | Jacobian determinant regulariser
$\mathcal{L}_{\varepsilon}^{W}$ | Regularised weak-form discretisation semi-linear form
# R(Det)2: Randomized Decision Routing for Object Detection
Ya-Li Li, Shengjin Wang
Department of Electronic Engineering, Tsinghua University and BNRist, Beijing, China
<EMAIL_ADDRESS> (Corresponding author)
###### Abstract
In the paradigm of object detection, the decision head is an important part,
which affects detection performance significantly. Yet how to design a high-
performance decision head remains an open issue. In this paper, we
propose a novel approach to combine decision trees and deep neural networks in
an end-to-end learning manner for object detection. First, we disentangle the
decision choices and prediction values by plugging soft decision trees into
neural networks. To facilitate effective learning, we propose randomized
decision routing with node selective and associative losses, which can boost
the feature representative learning and network decision simultaneously.
Second, we develop the decision head for object detection with narrow branches
to generate the routing probabilities and masks, for the purpose of obtaining
divergent decisions from different nodes. We name this approach randomized
decision routing for object detection, abbreviated R(Det)2.
Experiments on the MS-COCO dataset demonstrate that R(Det)2 is effective in
improving the detection performance. When plugged into existing detectors, it
achieves a $1.4\sim 3.6$% AP improvement.
## 1 Introduction
Figure 1: Overview of the proposed approach. (a) Inspired by decision trees,
we disentangle the decision choices and predictive values by introducing tree
structure for decision head in object detection. With multi-node prediction,
we can explore more diverse cues. (b) We use the soft probability to denote
decision choices for different routes of nodes. The overall decision is the
weighted sum of prediction values from different nodes. In particular, we propose
randomized decision routing to learn divergent decisions from different nodes
for overall performance improvement.
Object detection, which aims to recognize and localize the objects of interest
in images, is a fundamental yet challenging task in computer vision. It is
important for various applications, such as video surveillance, autonomous
driving, and robotics vision. Due to its practical importance, object
detection has attracted significant attention in the community. In recent
decades, deep neural networks (DNNs) have brought significant progress into
object detection. Typically, existing deep learning-based detection methods
include one-stage detectors [31, 25, 22], two-stage detectors [16, 33, 7, 1,
30], and end-to-end detectors [3, 51, 39].
Generally, current deep architectures constructed for object detection involve
two components. One is the backbone for feature extraction, which can be pre-
trained with large-scale visual recognition datasets such as ImageNet [35].
The other is the decision head, which produces the predictions for computing
losses or inferring detection boxes. Combined with region sampling, object
detection can be converted into a multitask learning issue, where the decision
tasks include classification and bounding box (bbox) regression. For existing
detection networks, the decision head is simply constructed by sequentially
connecting several convolution or fully-connected layers. For one-stage
detectors, the decision head is commonly constructed by stacking several
convolutional layers. The decision head for region proposal in two-stage
detectors is similar. For two-stage detectors, the region-wise decision in the
R-CNN stage is typically implemented with two fully-connected layers. Since the
decision head is quite important for high-performance detectors, several recent
works have been devoted to it [43, 37, 8, 12]. However, most of these works
focus on task disentanglement and task-aware learning, leaving the universal
decision mechanism largely unexplored.
Considering that the features from DNNs show great potential for high-level
vision tasks, the simple design of the widely-adopted single-node decision might
impede the performance of object detection. A natural question arises: is
single-node prediction good enough for feature exploration in object
detection? To answer this, we focus on a novel decision mechanism and propose an
approach to introduce soft decision trees into object detection. As in Figure
1, we integrate soft decision trees to disentangle the routing choices and
prediction values. To jointly learn the soft decision trees and neural
networks in an end-to-end manner, we propose the randomized decision routing
with the combination of so-called selective loss and associative loss.
Experiments validate the effectiveness of the proposed approach and confirm
the necessity of introducing multi-node predictions. Since our work is mainly
on Randomized Decision routing for object Detection, we name it R(Det)2.
From the perspective of machine learning, our R(Det)2 is an attempt to bridge
the neural networks and decision trees – two mainstream algorithms, which
would bring insights into future research.
The contributions of this paper are three-fold.
* •
We propose to disentangle the route choices and prediction values for multi-
node decision in object detection. In particular, we propose randomized
decision routing for the end-to-end joint learning of the tree-based decision
head.
* •
We construct a novel decision head for object detection, which introduces
routing probabilities and masks to generate divergent decisions from multiple
nodes for the overall decision boosting.
* •
Extensive experiments validate the effectiveness of our proposed R(Det)2. In
particular, R(Det)2 achieves over 3.6% of $AP$ improvement when equipped with
Faster R-CNN. It improves the detection accuracy of large objects by a large
margin as well.
## 2 Related work
One-stage detectors. Overfeat [36] predicts the decision values for
classification and localization directly with convolutional feature maps. YOLO
[31, 32] regresses the object bounds and category probabilities directly based
on image gridding. SSD [25] improves the one-stage detection with various
scales of multilayer features. Retina Net [22] proposes the focal loss to
tackle the foreground-background imbalance issue. Besides, keypoints-based
one-stage detectors [20, 49, 11, 5] have been extensively studied. CornerNet
[20] generates the heatmaps of top-left and bottom-right corners for
detection. CenterNet [11] uses a triplet of keypoints for representation with
additional center points. Moreover, FCOS [40] and ATSS [47] introduce
centerness branch for anchor-free detection. Other methods delve into sample
assignment strategies [47, 50, 2, 19, 14, 28].
Two-stage detectors. R-CNN [16], Fast R-CNN [15], Faster R-CNN [33] predict
object scores and bounds with pooled features of proposed regions. R-FCN [7]
introduces position-sensitive score maps to share the per-ROI feature
computation. Denet [41] predicts and searches sparse corner distribution for
object bounding. CCNet [29] connects chained classifiers from multiple stages
to reject background regions. Cascade R-CNN [1] uses sequential R-CNN stages
to progressively refine the detected boxes. Libra R-CNN [30] mainly tackles
the imbalance training. Grid R-CNN [27] introduces pixel-level grid points for
predicting the object locations. TSD [37] decouples the predictions for
classification and box bounding with the task-aware disentangled proposals and
task-specific features. Dynamic R-CNN [46] adjusts the label-assigning IoU
thresholds and regression hyper-parameters to improve the detection quality.
Sparse R-CNN [38] learns a fixed set of sparse candidates for region proposal.
End-to-end detectors. DETR [3] models object detection as a set prediction
issue and solve it with transformer encoder-decoder architecture. It inspires
the researches on transformer-based detection frameworks [10, 51, 24, 9, 39].
Deformable DETR [51] proposes the sparse sampling for key elements. TSP [39]
integrates the FCOS and R-CNN heads into the set prediction issue for faster
convergence.
Decision mechanism. The decision head in object detection frameworks usually
involves multiple computational layers (i.e., convolution layers, fully-
connected layers and transformer modules). Typically, for one-stage detectors
with dense priors [31, 25, 22, 11, 40], stacked convolutions are used to
obtain features with larger receptive fields, with separate convolution for
classification, localization and other prediction tasks. For the decision in
R-CNN stages [33, 1, 30, 46, 27], stacked fully-connected layers are common.
Double-head R-CNN [43] uses fully-connected layers for position-insensitive
classification and fully-convolutional layers for position-sensitive
localization. Dynamic head [8] unifies the scale-, spatial- and task-aware
self-attention modules for multitask decisions.
## 3 Randomized decision trees
### 3.1 Soft decision trees
To disentangle the decision choices and prediction values, we first construct
soft decision trees [13] for multiclass classification and bbox regression in
object detection. We use the soft routing probability ranging from 0 to 1 to
represent the decision choice and facilitate network optimization.
Soft decision tree for classification. For multiclass classification, the soft
decision tree is formulated as:
$\mathbf{c}=\sum_{j\in\textit{Nodes}}p_{j}\mathbf{c}_{j},\sum_{j\in\textit{Nodes}}p_{j}=1$
(1)
where $\mathbf{c}$ is the output of the whole classification tree and
$\mathbf{c}_{j}$ is the prediction value from each node. $p_{j}$ is the
routing probability for decision choice. It indicates the probability of
choosing the $j$-th classification node. For all the nodes,
$\sum_{j\in\textit{Nodes}}p_{j}=1$. Eqn. 1 shows that $\mathbf{c}$ is the
weighted sum of the classification scores from all the nodes. Different from a
traditional decision tree, $p_{j}$ is "soft", ranging from 0 to 1. $p_{j}$ can
be obtained in networks from a scalar score with an activation such as Softmax
or Sigmoid.
Soft decision tree for regression. For bbox regression, we formulate the soft
decision tree in a similar way as:
$\mathbf{b}=\sum_{j\in\textit{Nodes}}q_{j}\mathbf{b}_{j},\sum_{j\in\textit{Nodes}}q_{j}=1$
(2)
where $\mathbf{b}_{j}$ is the regression value output from each node $j$.
$q_{j}$ is the routing probability for the $j$-th regression node.
$\mathbf{b}$ is the output of the tree regressor. Similar to soft
classification tree, the routing probability $q_{j}\in[0,1]$ is “soft”.
Note that the routing probabilities $p_{j}$, $q_{j}$ denote decision
choices, indicating the probability of routing the $j$-th node. They can be
viewed as decision confidences in the test phase. $\mathbf{c}_{j}$ and
$\mathbf{b}_{j}$ are the prediction values for classification and regression
tasks attached with the $j$-th node. Both the decision choices and prediction
values can be easily obtained with neural layers. With soft decision trees,
multiple discriminative and divergent decisions can be obtained with features
from different aspects. To facilitate the discussion, we restrict the soft
decision tree to be binary, with $j\in\{l,r\}$.
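As an illustration, a binary soft decision head of this kind can be sketched in a few lines of PyTorch (the module and its naming are ours, not the authors' released implementation):

import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftDecisionHead(nn.Module):
    # Two prediction nodes plus a narrow branch producing the routing
    # probabilities p_l, p_r with p_l + p_r = 1, following Eqn. 1.
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.cls_l = nn.Linear(in_dim, num_classes)
        self.cls_r = nn.Linear(in_dim, num_classes)
        self.route = nn.Linear(in_dim, 2)

    def forward(self, x):                      # x: (N, in_dim)
        p = F.softmax(self.route(x), dim=-1)   # (N, 2) routing probabilities
        c_l, c_r = self.cls_l(x), self.cls_r(x)
        c = p[:, :1] * c_l + p[:, 1:] * c_r    # weighted sum of node outputs
        return c, (c_l, c_r), p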
### 3.2 Randomized Decision Routing
To learn soft decision trees in neural networks, we propose randomized
decision routing. The motivation is two-fold. First, in order to obtain a
high-performance decision head with tree structure, we need to avoid the high
relevance of multiple predictions from different nodes. It means that we
should differentiate the training to reduce the decision relevance of
different nodes. Second, we also need to guarantee the decision performance of
the whole tree. In a word, we need to achieve a high-performance tree decision
with weakly correlated node decisions. To realize this, we propose the selective
loss to supervise the per-node learning and associative loss to guide the
whole-tree optimization. We then unify the selective and associative loss into
a general training framework. Since we involve random factors to model the
probability of routing different nodes, we name this training strategy as
randomized decision routing.
Figure 2: Illustration on training deep networks with decision tree head. We
propose randomized decision routing which includes selective and associative
losses. The selective loss identifies the dominant decisive prediction and
weights the node loss accordingly in a randomized way. The associative loss
learns the routing probability by measuring the difference between the fused
output and the ground truth.
To achieve node decisions with low relevance, we first perform node selection
to identify the node with higher optimization priority. We then attach the
selected node with a higher routing probability. Oppositely, a lower routing
probability is attached with the remaining node. Divergent routing
probabilities lead to different learning rates for different nodes. Therefore,
to diversify the decision of different nodes, we construct the selective loss
by setting different randomized weights for different node losses. As
illustrated in Figure 2-left, the selective losses for classification and bbox
regression are denoted as:
$L_{s}^{cls}(\mathbf{c}_{l},\mathbf{c}_{r},y)=\gamma_{l}^{c}L_{l}^{c}+\gamma_{r}^{c}L_{r}^{c}=\gamma_{l}^{c}L^{cls}(\mathbf{c}_{l},y)+\gamma_{r}^{c}L^{cls}(\mathbf{c}_{r},y)$
(3)
$L_{s}^{bbox}(\mathbf{b}_{l},\mathbf{b}_{r},B)=\gamma_{l}^{b}L_{l}^{b}+\gamma_{r}^{b}L_{r}^{b}=\gamma_{l}^{b}L^{bbox}(\mathbf{b}_{l},B)+\gamma_{r}^{b}L^{bbox}(\mathbf{b}_{r},B)$
(4)
where $y$ is the ground truth label and $B$ is the ground truth for bbox
regression. $\gamma_{l}^{c},\gamma_{r}^{c}$ are the weights indicating the
probability for selective routing of classification tree.
$\gamma_{l}^{b},\gamma_{r}^{b}$ are the weights indicating the probability for
selective decision routing of bbox regression tree.
Figure 3: Decision head for object detection. (a) shows the common decision
head. (b) shows R(Det)2-B which disentangles the decision choice and values by
soft decision trees. (c) shows R(Det)2-M which leverages the routing masks to
produce the divergent input features for decision. (d) shows R(Det)2-T which
unifies task disentanglement into R(Det)2-based decision head.
We leverage random weights to differentiate the node learning. For
classification, we set $\gamma^{c}_{l}$, $\gamma^{c}_{r}$ based on the
comparison of $L_{l}^{c},L_{r}^{c}$: the node with the lower loss value receives
the higher random weight. For bbox regression, we set the weights
$\gamma^{b}_{l}$, $\gamma_{r}^{b}$ according to the relative comparison of
$q_{l},q_{r}$. For instance, if $q_{l}<q_{r}$, we restrict
$\gamma_{l}^{b}<\gamma_{r}^{b}$. It is consistent with the intuition that we
learn the selected node with higher priority in a fast way, while
learning the remaining one in a slow way. Empirically, we sample the lower
weight from $U(0.1,0.3)$ and the higher weight from $U(0.9,1.1)$. This slow-
fast randomized manner would benefit the learning of the whole decision head.
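The weight sampling can be sketched as follows (a minimal illustration of the slow-fast rule under our own naming):

import torch

def gamma_pair():
    # One slow and one fast weight, sampled from U(0.1, 0.3) and U(0.9, 1.1).
    lo = torch.empty(()).uniform_(0.1, 0.3)
    hi = torch.empty(()).uniform_(0.9, 1.1)
    return lo, hi

def cls_weights(loss_l, loss_r):
    # Classification: the node with the lower loss gets the fast weight.
    lo, hi = gamma_pair()
    return (hi, lo) if loss_l < loss_r else (lo, hi)

def bbox_weights(q_l, q_r):
    # Regression: the node with the higher routing probability gets it.
    lo, hi = gamma_pair()
    return (lo, hi) if q_l < q_r else (hi, lo)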
Besides differentiating node decisions, we also need to ensure the
performance of the whole decision tree. That is, the predictive decision
output from the whole tree should be good. To achieve this, we formulate
associative loss based on the fused prediction $\mathbf{c}$, $\mathbf{b}$. The
associative loss can be the same as the original classification or bbox
regression loss in form, with the fused prediction as the input. As
illustrated in Figure 2-right, the associative loss for classification and
bbox regression is formulated as:
$L_{a}^{cls}\left(\mathbf{c},y\right)=L^{cls}\left(p_{l}\mathbf{c}_{l}+p_{r}\mathbf{c}_{r},y\right)$
(5)
$L_{a}^{bbox}\left(\mathbf{b},B\right)=L^{bbox}\left(q_{l}\mathbf{b}_{l}+q_{r}\mathbf{b}_{r},B\right)$
(6)
The routing probabilities and prediction values are simultaneously optimized
with the associative loss. In particular, the routing probability, which indicates
the decision choice, is only supervised by this associative loss, resulting in
appropriate routing in inference.
The whole loss is formulated as follows:
$L_{all}=\lambda\left(L_{s}^{cls}+L_{s}^{bbox}\right)+(1-\lambda)\left(L_{a}^{cls}+L_{a}^{bbox}\right)$
(7)
where $\lambda\in[0,1]$ is the coefficient to balance between selective loss
and associative loss. It is noteworthy that the $L^{cls}$, $L^{bbox}$ for
computing the selective and associative loss can be commonly-used loss
functions for classification (e.g., cross-entropy loss, Focal loss [22]) and
bbox regression (e.g., Smooth-L1 loss, IoU loss [45, 42, 34, 48]). With soft
decision trees, we can generate multiple decisions with different visual cues.
Moreover, the divergent learning helps enhance feature representations and
suppress over-optimization, further promoting object detection.
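Putting the pieces together, the classification terms of Eqn. 7 could be assembled as below (a sketch assuming cross-entropy as $L^{cls}$; the randomized weights $\gamma_{l},\gamma_{r}$ are sampled as described above, and all names are ours):

import torch.nn.functional as F

def rdet2_cls_loss(c_l, c_r, p_l, p_r, y, g_l, g_r, lam=0.5):
    # c_l, c_r: (N, C) node logits; p_l, p_r: (N, 1) routing probabilities;
    # y: (N,) labels; g_l, g_r: randomized selective weights.
    L_sel = g_l * F.cross_entropy(c_l, y) + g_r * F.cross_entropy(c_r, y)  # Eqn. 3
    L_assoc = F.cross_entropy(p_l * c_l + p_r * c_r, y)                    # Eqn. 5
    return lam * L_sel + (1 - lam) * L_assoc

The bbox terms follow the same pattern, with a Smooth-L1 or IoU loss in place of the cross-entropy.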
## 4 Decision head for detection
We construct the head with decision trees for object detection. The commonly-
used head of R-CNN detectors [33, 17, 1, 21] is of the single-prediction type, as in
Figure 3(a). Typically, two fully-connected (fc) layers are sequentially
connected with region-pooled features, with one additional fc layer for
classification and bbox regression, respectively. In order to obtain decision
values for multiple nodes, we first generate predictions
$\mathbf{c}_{l},\mathbf{c}_{r}$ and $\mathbf{b}_{l},\mathbf{b}_{r}$ with the
features output from the same structure as the common head. We further add
another narrow branch with 1$\sim$2 fc layers to produce the routing
probabilities $p_{l},p_{r}$ and $q_{l},q_{r}$, as illustrated in Figure 3(b).
We record this as the Basic head for randomized decision routing, as
R(Det)2-B. The routing choices and predictions are disentangled with this
basic head structure.
Moreover, we add the routing masks for features before prediction to increase
the divergence of decisions from multiple nodes. The decision values
$\mathbf{c}_{l},\mathbf{c}_{r}$ and $\mathbf{b}_{l},\mathbf{b}_{r}$ are
generated with route-wise masked features. As in Figure 3(c), we average the
batched region-wise features to obtain a single context-like vector. Another
fc layer with Sigmoid is imposed on this vector to produce routing masks for
different nodes. By multiplying the route-wise masks on the last features
before decision, we further diversify the input for different nodes of
decision. The dependence of node decisions can thus be further reduced. We refer to
this as the Masked head for randomized decision routing, denoted R(Det)2-M.
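A minimal sketch of the route-wise masking described above (our own naming; the released implementation may differ):

import torch
import torch.nn as nn

class RoutingMask(nn.Module):
    # Produces per-route masks from a batch-averaged context vector and
    # applies them to the region-wise features before prediction.
    def __init__(self, feat_dim):
        super().__init__()
        self.fc_l = nn.Linear(feat_dim, feat_dim)
        self.fc_r = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats):                   # feats: (N, feat_dim)
        ctx = feats.mean(dim=0, keepdim=True)   # context-like vector
        m_l = torch.sigmoid(self.fc_l(ctx))     # route-wise masks in [0, 1]
        m_r = torch.sigmoid(self.fc_r(ctx))
        return feats * m_l, feats * m_r         # diversified node inputs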
Inspired by efforts on disentangling the classification and localization tasks
for detection, we develop another R(Det)2-T. We separate the last feature
computation before the multitask prediction and unify the task-aware feature
learning into our framework, as in Figure 3(d). Since it is not the main focus
of this work, we do not explore more complicated task-aware head designs
[43, 37, 46]. Yet it is noteworthy that the proposed R(Det)2 can easily be
plugged into these detectors for performance improvement.
## 5 Experiments
Datasets. We evaluate our proposed approach on the large-scale benchmark MS
COCO 2017 [23]. Following common practice, we train detectors on training
split with $\sim$115k images and evaluate them on val split with 5k images. We
also report the results and compare with the state-of-the-art on COCO test-dev
split with 20k images. The standard mean average precision (AP) across
different IoU thresholds is used as the evaluation metric.
Training details. We implement the proposed R(Det)2 as the plug-in head and
integrate it into existing detectors. Our implementation is based on the
popular mmdetection [4] platform. Unless otherwise noted, R(Det)2 serves as
the decision head in the R-CNN stage of two-stage detectors, such as Faster R-CNN [33]
and Cascade R-CNN [1]. We train the models with ResNet-50/ResNet-101 [18]
backbones with 8 Nvidia TitanX GPUs. The learning rate is set to 0.02 and the
weight decay is 1e-4, with momentum 0.9. The models for ablation studies are
trained with the standard 1$\times$ configuration. No data augmentation is
used except for standard horizontal image flipping. We only conduct multiscale
training augmentation for evaluation on COCO test-dev to compare with the
state-of-the-art.
Inference details. It is noteworthy that the randomized decision routing is
only performed in the training phase. In inference, we test at a single image
scale unless otherwise noted. Following standard practice, we evaluate the
models with test-time augmentation (TTA), such as multiscale testing, to compare with
the state-of-the-art.
Head | B | M | T | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$
---|---|---|---|---|---|---|---|---|---
2fc | | | | 37.4 | 58.1 | 40.4 | 21.2 | 41.0 | 48.1
2fc | ✓ | | | 38.8 | 59.8 | 41.8 | 22.3 | 42.3 | 50.9
2fc | | ✓ | | 39.1 | 60.5 | 42.3 | 22.5 | 43.1 | 50.5
2fc | | | ✓ | 38.9 | 60.2 | 42.1 | 23.1 | 42.1 | 50.2
4conv1fc | ✓ | | | 38.7 | 59.0 | 41.9 | 22.4 | 42.0 | 50.4
4conv1fc | | ✓ | | 39.2 | 59.7 | 42.4 | 22.8 | 42.8 | 51.5
4conv1fc | | | ✓ | 39.5 | 59.8 | 42.9 | 22.7 | 43.1 | 51.7
4conv(res)1fc | ✓ | | | 39.3 | 60.2 | 42.7 | 22.5 | 42.8 | 51.6
4conv(res)1fc | | ✓ | | 40.1 | 60.8 | 43.3 | 23.3 | 43.5 | 52.6
4conv(res)1fc | | | ✓ | 40.4 | 61.2 | 44.1 | 23.8 | 43.7 | 53.0
Table 1: Ablation study on different types with R(Det)2. The baseline is
Faster R-CNN equipped with a ResNet-50 backbone. B, M and T represent
R(Det)2-B, R(Det)2-M and R(Det)2-T for decision heads, respectively.
### 5.1 Ablation study
Effects of components. We first conduct the ablative experiment to evaluate
the effects of different components of R(Det)2 (Table 1). We integrate the
proposed decision head structure into the R-CNN stage and apply randomized
decision routing for training. We first follow the common setting with
2$\times$1024 fully-connected layers (referred to as 2fc) to generate region-wise
features, with decision values for multiclass classification and bbox
regression predicted based on them. By converting 2fc to R(Det)2-B, we
increase the detection $AP$ to 38.8%, yielding 1.4% of improvement. By adding
routing masks for region-wise features, R(Det)2-M achieves 39.1% detection
$AP$, 1.7% of improvement. It is reasonable since the mask multiplying would
promote the decision differences between nodes, leading to the improvement of
joint decision. We further replace 2fc with 4$\times$256 convolutional layers
with 1 fully-connected layer (referred as 4conv1fc). The achieved $AP$
increases to 38.7%, 39.2% and 39.5% with R(Det)2-B, R(Det)2-M, R(Det)2-T,
respectively. We further add residual connections between neighboring
convolutions for feature enhancement, referred to as 4conv(res)1fc. By
integrating 4conv(res)1fc with R(Det)2-B, we achieve $AP$ of 39.3% and
$AP_{75}$ of 42.7%. By integrating R(Det)2-M, the achieved $AP$ is 40.1% and
$AP_{75}$ is 43.3%. With task disentanglement as R(Det)2-T, we achieve $AP$,
$AP_{50}$, $AP_{75}$ of 40.4%, 61.2% and 44.1%, respectively. Compared to the
baseline, the $AP$, $AP_{50}$, $AP_{75}$ are increased by 3.0%, 3.1% and 3.7%,
respectively. In particular, the R(Det)2 significantly improves the detection
accuracy on large objects, leading to the $AP_{L}$ improvement by a large
margin. Compared with the baseline, we achieve 4.9% of $AP_{L}$ improvement
ultimately. It verifies that the features contain much more information to be
exploited, especially for larger objects with high-resolution visual cues. Our
proposed R(Det)2 which produces decisions with multiple nodes can focus on the
evidence from diverse aspects, leading to significant performance improvement.
$L^{cls}$ | $L^{bbox}$ | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$
---|---|---|---|---|---|---|---
Baseline | | 37.4 | 58.1 | 40.4 | 21.2 | 41.0 | 48.1
CE | S-L1 | 40.4 | 61.2 | 44.1 | 23.8 | 43.7 | 53.0
Focal | S-L1 | 40.5 | 61.2 | 44.4 | 24.2 | 43.6 | 52.6
CE | IoU | 40.9 | 61.2 | 44.5 | 23.9 | 44.2 | 53.7
Focal | IoU | 41.0 | 61.1 | 44.5 | 24.3 | 44.3 | 53.7
Table 2: Comparison with different loss functions. The baseline model is
Faster R-CNN with ResNet-50 as the backbone. CE indicates the cross-entropy
loss. Focal indicates the original focal loss [22]. S-L1 indicates the
Smooth-L1 loss. IoU indicates the loss computed as the negative log of
intersection-over-union [45].
Effectiveness with different loss functions. The proposed randomized decision
routing can be combined with any existing classification and localization
losses. We conduct experiments to evaluate the effectiveness of R(Det)2 with
different loss functions (Table 2). When we apply the Softmax cross-entropy
loss for classification and Smooth-L1 loss for bbox regression, we achieve
40.4% $AP$, 61.2% $AP_{50}$, 44.1% $AP_{75}$. Compared to baseline Faster
R-CNN with the same losses, we increase the $AP$, $AP_{50}$, $AP_{75}$ by
3.0%, 3.1%, 3.7%, respectively. The $AP$ is slightly higher with focal loss
[22] applied for classification. The detection $AP$ is further increased with
IoU loss [45] applied for bbox regression. The detection $AP$ reaches 41.0%.
Compared with the baseline, the $AP$ is increased by 3.6% and $AP_{L}$ is
increased by 5.6%. It indicates that the proposed R(Det)2 performs well with
different combinations of loss functions, which further demonstrates its
effectiveness.
Backbone | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$
---|---|---|---|---|---|---
R50 | 37.4 | 58.1 | 40.4 | 21.2 | 41.0 | 48.1
+R(Det)2 | 41.0 | 61.2 | 44.8 | 24.6 | 44.1 | 53.7
| (+3.6) | (+3.1) | (+4.4) | (+3.4) | (+3.1) | (+5.6)
R50-DCN | 41.3 | 62.4 | 45.0 | 24.6 | 44.9 | 54.4
+R(Det)2 | 44.2 | 64.5 | 48.3 | 26.6 | 47.7 | 58.6
| (+2.9) | (+2.1) | (+3.3) | (+2.0) | (+2.8) | (+4.2)
R101 | 39.4 | 60.1 | 43.1 | 22.4 | 43.7 | 51.1
+R(Det)2 | 42.5 | 62.8 | 46.3 | 25.1 | 46.4 | 55.7
| (+3.1) | (+2.7) | (+3.2) | (+2.7) | (+3.7) | (+4.8)
R101-DCN | 42.7 | 63.7 | 46.8 | 24.9 | 46.7 | 56.8
+R(Det)2 | 45.0 | 65.4 | 49.2 | 27.2 | 48.8 | 59.6
| (+2.3) | (+1.7) | (+2.4) | (+2.3) | (+2.1) | (+2.8)
Table 3: Comparison with different backbone networks. R50 and R101 indicate
ResNet-50 and ResNet-101, respectively. R(Det)2 is plugged into Faster R-CNN
with various backbones and achieves consistent performance gains.
Effectiveness on different backbone networks. With Faster R-CNN as the
baseline detector, we conduct the ablative experiment to evaluate the
effectiveness of R(Det)2 on various backbones (Table 3). With ResNet-50 as the
backbone, the $AP$, $AP_{50}$ and $AP_{75}$ of R(Det)2 are improved by
3.6%, 3.1%, and 4.4%, respectively. With ResNet-50-DCN (ResNet-50 with
deformable convolution) as the backbone, we achieve the detection $AP$ of
44.2%, a 2.9% improvement. The performance gain of R(Det)2 with ResNet-101 is
also significant. By equipping with R(Det)2, the detection $AP$ of ResNet-101
reaches 42.5% and $AP_{75}$ reaches 46.3%, 3.1% and 3.2% higher than the
baseline. With ResNet-101-DCN as the backbone, the $AP$ reaches 45.0% and
$AP_{75}$ is 49.2%. In particular, the detection accuracy over large objects
is improved significantly. The $AP_{L}$ over the different backbones is
increased by 5.6%, 4.2%, 4.8% and 2.8%, respectively. Experiments show that
the proposed R(Det)2 is effective among object detectors with various
backbones.
Detector | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$
---|---|---|---|---|---|---
Libra R-CNN [30] | 38.3 | 59.5 | 41.9 | 22.1 | 42.0 | 48.5
+R(Det)2 | 41.4(+3.1) | 61.4(+1.9) | 45.5(+3.6) | 24.7(+2.5) | 45.0(+3.0) | 53.7(+5.2)
Cascade R-CNN [1] | 40.3 | 58.6 | 44.0 | 22.5 | 43.8 | 52.9
+R(Det)2 | 42.5(+2.2) | 61.0(+2.4) | 45.8(+1.8) | 24.6(+2.1) | 45.5(+1.7) | 57.0(+4.1)
Dynamic R-CNN [46] | 38.9 | 57.6 | 42.7 | 22.1 | 41.9 | 51.7
+R(Det)2 | 41.0(+2.1) | 59.7(+2.1) | 44.8(+2.1) | 23.3(+1.2) | 44.2(+2.3) | 54.8(+3.1)
DoubleHead R-CNN [43] | 40.1 | 59.4 | 43.5 | 22.9 | 43.6 | 52.9
+R(Det)2 | 41.5(+1.4) | 60.8(+1.4) | 44.5(+1.0) | 24.2(+1.3) | 45.0(+1.4) | 53.9(+1.0)
RetinaNet [22] | 36.5 | 55.4 | 39.1 | 20.4 | 40.3 | 48.1
+R(Det)2 | 38.3(+1.8) | 57.4(+2.0) | 40.8(+1.7) | 22.6(+2.2) | 42.0(+1.7) | 50.5(+2.4)
Table 4: Generalization with different detectors. R(Det)2 shows $AP$
improvement on various detectors.
Generalization on different detectors. We plug R(Det)2 into existing detectors
to evaluate the generalization capability (Table 4). Other than Faster R-CNN,
we integrate R(Det)2 with Libra R-CNN [30], Dynamic R-CNN [46], and Cascade R-CNN
[1]. The backbone is ResNet-50. Upon Libra R-CNN, R(Det)2 improves the
detection $AP$ by 3.1% and $AP_{75}$ by 3.6%, yielding 41.4% $AP$ and 45.5%
$AP_{75}$. On Cascade R-CNN, a powerful detector with a cascade structure,
R(Det)2 also shows consistent improvement. It improves the detection $AP$ by
2.2% and $AP_{50}$ by 2.4%, respectively. Since Dynamic R-CNN [46]
adaptively changes the hyperparameters of the Smooth-L1 loss for bbox regression,
we report its detection accuracy with randomized routing upon the Smooth-L1 loss,
instead of the better-performing IoU loss. Equipped with R(Det)2, the $AP$
and $AP_{75}$ are both increased by 2.1%. Besides, R(Det)2 is quite effective in
improving the detection performance on large objects. The $AP_{L}$ of Libra
R-CNN and Cascade R-CNN is increased by a large margin with R(Det)2, with
5.2% and 4.1% improvement, respectively. For DoubleHead R-CNN [43] and the one-
stage RetinaNet [22], which have specially designed heads, we keep the head fixed
for task-aware decision. Randomized routing-based training alone leads to 1.4% of $AP$
improvement with DoubleHead R-CNN and 1.8% of $AP$ improvement with RetinaNet
[22]. The experiment validates that the proposed R(Det)2 performs well on
existing detectors.
Figure 4: Effects of the hyperparameter $\lambda$ balancing the selective loss
and associative loss for decision routing.
Effects of hyperparameter $\lambda$. We leverage the hyperparameter $\lambda$
to balance the selective and associative loss in randomized decision routing.
We further evaluate the effects of $\lambda$ with ResNet-50-based Faster
R-CNN. The curves of detection $AP$ changing along with $\lambda$ are plotted
in Figure 4. The detection accuracy is the highest when $\lambda=0.5$. That
means we assign nearly equal weights to the selective and associative losses.
The detection $AP$ remains stable when $\lambda$ is between 0.1 and 0.9.
If we further reduce $\lambda$ to 0.001 and reduce the impact of the selective
loss, the detection $AP$ with the Smooth-L1 loss for bbox regression decreases to
38.6%, a drop of 1.8 points. It indicates that the selective loss, which aims to
differentiate node decisions, is essential for performance improvement. Since
only the associative loss guides the optimization of routing probabilities,
increasing $\lambda$ to nearly 1 would lead to unstable models (the parameters
generating the routing probabilities $p_{l},p_{r},q_{l},q_{r}$ remain nearly the same
as randomly initialized ones), so we restrict $\lambda\leq 0.95$. The detection
$AP$ at $\lambda=0.95$ is decreased by 0.3$\sim$0.4%.
Type | #FLOPs | #params | $AP$(%)
---|---|---|---
4conv1fc | 129.0G | 15.62M | 37.6
R(Det)2-B | 132.6G | 19.31M | 39.8
R(Det)2-M | 132.6G | 25.88M | 40.5
R(Det)2-T | 146.3G | 45.97M | 40.9
R(Det)2-Lite | 130.2G | 18.48M | 40.2
Table 5: Model complexity comparison of R(Det)2 head.
Model complexity and computational efficiency. The model complexity of R(Det)2
is mainly caused by the additional branches for routing probability, routing
mask, and task-aware features. From Table 5 we can see that the complexity is
mainly caused by task-aware feature computation. Considering this, we develop
R(Det)2-Lite with narrow computation for routing probabilities and masks,
leading to 40.2% $AP$ with negligible additional model complexity.
Visualization. We present the comparative visualization in Figure 5. The
detection results of the ResNet-101-based Faster R-CNN are shown in Figure 5(a) and
those of R(Det)2 are shown in Figure 5(b). It can be seen that the
proposed R(Det)2 is effective in improving both the detection and localization
performance. In particular, R(Det)2 is quite effective in reducing
repeated detections and avoiding over-confident ones.
Figure 5: Comparison of detection results for the baseline Faster R-CNN and the R(Det)2-equipped one. The models are with ResNet-101 as the backbone and trained with COCO 115k-train. The example test images are from COCO 5k-val. The rectangles mark the detected bounding boxes with attached category labels and confidences. The detection results of the baseline model are presented in (a) (39.3% AP) and those of R(Det)2 are presented in (b) (42.5% AP).
Methods | Backbone | ME | TTA | $AP$ | $AP_{50}$ | $AP_{75}$ | $AP_{S}$ | $AP_{M}$ | $AP_{L}$
---|---|---|---|---|---|---|---|---|---
Retina-Net [22] | ResNeXt-101 | 18e | | 40.8 | 61.1 | 44.1 | 24.1 | 44.2 | 51.2
FCOS [40] | ResNeXt-101 | 24e | | 43.2 | 62.8 | 46.6 | 26.5 | 46.2 | 53.3
ATSS [47] | ResNeXt-101-DCN | 24e | | 47.7 | 66.5 | 51.9 | 29.7 | 50.8 | 59.4
OTA [14] | ResNeXt-101-DCN | 24e | | 49.2 | 67.6 | 53.5 | 30.0 | 52.5 | 62.3
IQDet [28] | ResNeXt-101-DCN | 24e | | 49.0 | 67.5 | 53.1 | 30.0 | 52.3 | 62.0
Faster R-CNN [33] | ResNet-101 | 12e | | 36.7 | 54.8 | 39.8 | 19.2 | 40.9 | 51.6
Libra R-CNN [30] | ResNeXt-101 | 12e | | 43.0 | 64.0 | 47.0 | 25.3 | 45.6 | 54.6
Cascade R-CNN [1] | ResNet-101 | 18e | | 42.8 | 62.1 | 46.3 | 23.7 | 45.5 | 55.2
TSP-RCNN [39] | ResNet-101-DCN | 96e | | 47.4 | 66.7 | 51.9 | 29.0 | 49.7 | 59.1
Sparse R-CNN [38] | ResNeXt-101-DCN | 36e | | 48.9 | 68.3 | 53.4 | 29.9 | 50.9 | 62.4
Deformable DETR [51] | ResNeXt-101-DCN | 50e | | 50.1 | 69.7 | 54.6 | 30.6 | 52.8 | 64.7
Ours - R(Det)2 | ResNeXt-101-DCN | 12e | | 50.0 | 69.2 | 54.3 | 30.9 | 53.0 | 63.9
Ours - R(Det)2 | Swin-L [26] | 12e | | 55.1 | 74.1 | 60.4 | 36.0 | 58.6 | 70.0
Centernet [11] | Hourglass-104 | 100e | ✓ | 47.0 | 64.5 | 50.7 | 28.9 | 49.9 | 58.9
ATSS [47] | ResNeXt-101-DCN | 24e | ✓ | 50.7 | 68.9 | 56.3 | 33.2 | 52.9 | 62.4
IQDet [28] | ResNeXt-101-DCN | 24e | ✓ | 51.6 | 68.7 | 57.0 | 34.5 | 53.6 | 64.5
OTA [14] | ResNeXt-101-DCN | 24e | ✓ | 51.5 | 68.6 | 57.1 | 34.1 | 53.7 | 64.1
Dynamic R-CNN [46] | ResNet-101-DCN | 36e | ✓ | 50.1 | 68.3 | 55.6 | 32.8 | 53.0 | 61.2
TSD [37] | SENet154-DCN | 36e | ✓ | 51.2 | 71.9 | 56.0 | 33.8 | 54.8 | 64.2
Sparse R-CNN [38] | ResNeXt-101-DCN | 36e | ✓ | 51.5 | 71.1 | 57.1 | 34.2 | 53.4 | 64.1
RepPoints v2 [5] | ResNeXt-101-DCN | 24e | ✓ | 52.1 | 70.1 | 57.5 | 34.5 | 54.6 | 63.6
Deformable DETR [51] | ResNeXt-101-DCN | 50e | ✓ | 52.3 | 71.9 | 58.1 | 34.4 | 54.4 | 65.6
RelationNet++ [6] | ResNeXt-101-DCN | 24e | ✓ | 52.7 | 70.4 | 58.3 | 35.8 | 55.3 | 64.7
DyHead [8] | ResNeXt-101-DCN | 24e | ✓ | 54.0 | 72.1 | 59.3 | 37.1 | 57.2 | 66.3
Ours - R(Det)2 | ResNeXt-101-DCN | 24e | ✓ | 54.1 | 72.4 | 59.4 | 35.5 | 57.0 | 67.3
Ours - R(Det)2 | Swin-L [26] | 12e | ✓ | 57.4 | 76.1 | 63.0 | 39.4 | 60.5 | 71.5
Table 6: Comparison of R(Det)2 with the state-of-the-art object detection
methods on the COCO test-dev dataset. DCN indicates using deformable
convolution to enhance the feature representations of the backbone. TTA indicates
test-time augmentation such as multi-scale testing and horizontal image
flipping. ME indicates more epochs of training.
### 5.2 Comparison with the state-of-the-art
We integrate the proposed R(Det)2 into Cascade R-CNN to compare with the
state-of-the-art methods on COCO test-dev dataset. The backbone is ResNeXt-101
(64$\times$4d) [44] with deformable convolution, and Swin Transformer [26]. The
comparative study is presented in Table 6. We first compare the single-model
single-scale performance. With 12 epochs ($1\times$) of training, the
R(Det)2 achieves $AP$ of 50.0%, outperforming Faster R-CNN [33], Libra R-CNN
[30], and Cascade R-CNN [1] by a large margin. Compared with the recent Sparse
R-CNN [38] with the same backbone, we achieve a 1.1% $AP$ improvement with 1/3
of the training iterations. It is also comparable with Deformable DETR [51], which uses a
transformer architecture and many more epochs of training (50 epochs). The
detection accuracy is further improved with more epochs of training and test-
time augmentation such as multi-scale testing and horizontal image flipping. With
24 epochs of training and TTA, the R(Det)2 achieves $AP$ of 54.1% and
$AP_{50}$ of 72.4%. Compared with DyHead with stacked self-attention modules
[8], the $AP_{50}$ and $AP_{L}$ are improved by 0.3% and 1.0%, respectively.
Besides, we adopt the Swin Transformer [26] backbone. With 12 epochs
of training, the achieved $AP$ of single-scale testing is 55.1% and that of
multi-scale testing is 57.4%. It validates that R(Det)2 performs well with
various backbones and is effective for high-performance object detection.
## 6 Conclusion
The decision head is important for high-performance object detection. In this
paper, we propose a novel approach as the randomized decision routing for
object detection. First, we plug soft decision trees into neural networks. We
further propose the randomized routing to produce accurate yet divergent
decisions. By randomized routing for soft decision trees, we can obtain multi-
node decisions with diverse feature exploration for object detection. Second,
we develop the decision head for detection with a narrow branch to generate
routing probabilities and a wide branch to produce routing masks. By reducing
the relevance of node decisions, we develop a novel tree-like decision head
for deep learning-based object detection. Experiments validate the performance
of our proposed R(Det)2.
## Acknowledgement
This work was supported by the State Key Development Program in the 14th Five-Year
Plan under Grant Nos. 2021QY1702, 2021YFF0602103, and 2021YFF0602102. We also
acknowledge the research fund under Grant No. 2019GQG0001 from the Institute for
Guo Qiang, Tsinghua University.
# Geodesic packing in graphs
Paul Manuel$^{a}$, Boštjan Brešar$^{b,c}$, Sandi Klavžar$^{b,c,d}$
###### Abstract
A geodesic packing of a graph $G$ is a set of vertex-disjoint maximal
geodesics. The maximum cardinality of a geodesic packing is the geodesic
packing number ${{\rm gpack}}(G)$. It is proved that the decision version of
the geodesic packing number is NP-complete. We also consider the geodesic
transversal number, ${{\rm gt}}(G)$, which is the minimum cardinality of a set
of vertices that hit all maximal geodesics in $G$. While ${\rm gt}(G)\geq{\rm
gpack}(G)$ in every graph $G$, the quotient ${\rm gt}(G)/{\rm gpack}(G)$ is
investigated. By using the rook’s graph, it is proved that there does not
exist a constant $C<3$ such that $\frac{{\rm gt}(G)}{{\rm gpack}(G)}\leq C$
would hold for all graphs $G$. If $T$ is a tree, then it is proved that ${\rm
gpack}(T)={\rm gt}(T)$, and a linear algorithm for determining ${\rm
gpack}(T)$ is derived. The geodesic packing number is also determined for the
strong product of paths.
a Department of Information Science, College of Life Sciences, Kuwait
University, Kuwait
<EMAIL_ADDRESS>
b Faculty of Natural Sciences and Mathematics, University of Maribor, Slovenia
<EMAIL_ADDRESS>
c Institute of Mathematics, Physics and Mechanics, Ljubljana, Slovenia
d Faculty of Mathematics and Physics, University of Ljubljana, Slovenia
<EMAIL_ADDRESS>
Keywords: geodesic packing; geodesic transversal; computational complexity;
tree; diagonal grid
AMS Subj. Class.: 05C69; 05C12; 05C85
## 1 Introduction
Pairs of covering-packing problems, known also as dual min-max invariant
problems [2], are important topics in graph theory and in combinatorics. The
max independent set problem and the min vertex cover problem is an appealing
example [3]. Another well-known example is the max matching problem versus the
min edge cover problem [6]. Examples from combinatorial optimization are the
min set cover problem & the max set packing problem, and the bin covering &
bin packing problem [8]. In this paper, we identify a new dual min-max pair:
the geodesic transversal problem and the geodesic packing problem. The first
one was recently independently investigated in [13, 15], here we complement
these studies by considering the geodesic packing problem.
A geodesic (i.e., a shortest path) in a graph $G$ is maximal if it is not
contained (as a subpath) in any other geodesic of $G$. A set $S$ of vertices
of $G$ is a geodesic transversal of $G$ if every maximal geodesic of $G$
contains at least one vertex of $S$. When $s\in S$ is contained in a maximal
geodesic $P$ we say that vertex $s$ hits or covers $P$. The geodesic
transversal number of $G$, ${\rm gt}(G)$, is the minimum cardinality of a
geodesic transversal of $G$. A geodesic packing of a graph $G$ is a set of
vertex-disjoint maximal geodesics in $G$. The geodesic packing number, ${\rm
gpack}(G)$, of $G$ is the maximum cardinality of a geodesic packing of $G$,
and the geodesic packing problem of $G$ is to determine ${\rm gpack}(G)$. By a
${\rm gpack}$-set of $G$ we mean a geodesic packing of size ${\rm gpack}(G)$.
Let us mention some related concepts. A packing of a graph often means a set
of vertex-disjoint (edge-disjoint) isomorphic subgraphs, that is, the
$H$-packing problem for an input graph $G$ is to find the largest number of
its disjoint subgraphs that are isomorphic to $H$. In particular, the problem
has been investigated for different types of paths. For instance, Akiyama and
Chvátal [1] considered the problem from an algorithmic point of view when $H$ is
a path of fixed length. A survey on efficient algorithms for vertex-disjoint
(as well as edge-disjoint) Steiner trees and paths packing problems in planar
graphs was given in [16]. Dreier et al. [4] have studied the complexity of
packing edge-disjoint paths where the paths are restricted to lengths $2$ and
$3$. In [11] edge-disjoint packing by stars and edge-disjoint packing by
cycles were studied.
In the rest of this section we first recall some notions needed in the rest of
the paper. In the next section it is first proved that the geodesic packing
problem is NP-complete. After that we investigate the quotient ${\rm
gt}(G)/{\rm gpack}(G)$. We first prove that ${\rm
gt}(K_{n}\,\square\,K_{n})=n^{2}-2n+2$ and use this result to demonstrate that
there does not exist a constant $C<3$ such that $\frac{{\rm gt}(G)}{{\rm
gpack}(G)}\leq C$ would hold for all graphs $G$. In Section 3 we consider the
geodesic packing number of trees and prove that for a tree $T$ we have ${\rm
gpack}(T)={\rm gt}(T)$. A linear algorithm for determining ${\rm gpack}(T)$ is
also derived. In the subsequent section the geodesic packing number is
determined for the strong product of paths, while the paper is concluded with
some closing remarks.
Let $G=(V(G),E(G))$ be a graph. The order of $G$ will be denoted by $n(G)$. A
path on consecutive vertices $a_{1},a_{2},\ldots,a_{k}$ will be denoted by
$a_{1}a_{2}\ldots a_{k}$. If $n$ is a positive integer, then let
$[n]=\\{1,\ldots,n\\}$. The Cartesian product $G\,\square\,H$ of graphs $G$
and $H$ is the graph with the vertex set $V(G)\times V(H)$ and edges
$(g,h)(g^{\prime},h^{\prime})$, where either $g=g^{\prime}$ and
$hh^{\prime}\in E(H)$, or $h=h^{\prime}$ and $gg^{\prime}\in E(G)$. The strong
product $G\,\boxtimes\,H$ is obtained from $G\,\square\,H$ by adding, for
every edge $gg^{\prime}\in E(G)$ and every edge $hh^{\prime}\in E(H)$, an edge
between the vertices $(g,h)$ and $(g^{\prime},h^{\prime})$ and another edge
between the vertices $(g,h^{\prime})$ and $(g^{\prime},h)$.
## 2 Preliminary results and NP-completeness
We start by showing NP-completeness of the geodesic packing problem which is
formally defined as follows.
Geodesic Packing Problem
Input: A graph $G$ and a positive integer $k$. Question: Does there exist a
set of $k$ vertex-disjoint maximal geodesics in $G$?
For our reduction we use the concept of the induced path partition.
Computationally, given a graph $G$ and a positive integer $k$, the
MaxInduced$P_{k}$Packing Problem seeks a maximum number of vertex-disjoint
induced paths $P_{k}$. Saying that a set of vertex-disjoint induced paths on
$k$ vertices is an induced $P_{k}$-packing of $G$, the problem is thus to
maximize the cardinality of an induced $P_{k}$-packing. By [14, Theorem 3.1]
we know that the MaxInduced$P_{3}$Packing Problem is NP-hard on bipartite graphs
with maximum degree $3$.
Let $G$ be a graph with $V(G)=\\{x_{1},\ldots,x_{n}\\}$. Then the derived
graph $G^{\prime}$ is defined as follows: $V(G^{\prime})=V(G)\cup\\{x,y,z\\}$
and $E(G^{\prime})=E(G)\cup\\{xz,zy\\}\cup\\{zx_{i}:\,i\in[n]\\}$. With no
possibility of confusion, we also denote by $G$ the subgraph of $G^{\prime}$
induced by the vertices of $G$.
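The construction of the derived graph is easy to reproduce computationally. The following is a minimal sketch using networkx; the labels `x`, `y`, `z` and the helper name are chosen here for illustration.

```python
import networkx as nx

def derived_graph(G):
    """Derived graph G' of G: add new vertices x, y, z, the edges xz
    and zy, and an edge from z to every vertex of G."""
    Gp = G.copy()
    Gp.add_edges_from([("x", "z"), ("z", "y")])   # the maximal geodesic x-z-y
    Gp.add_edges_from(("z", v) for v in G.nodes)  # z dominates V(G)
    return Gp

# Example: in the derived graph of P_3, every maximal geodesic has length 2
Gp = derived_graph(nx.path_graph(3))
assert nx.diameter(Gp) == 2
```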
###### Lemma 2.1.
A set $\Psi$ of vertex-disjoint induced paths $P_{3}$ in $G$ is an induced
$P_{3}$-packing of $G$ if and only if $\Psi\cup\\{(x,z,y)\\}$ is a geodesic
packing of the derived graph $G^{\prime}$.
###### Proof.
Note that all maximal geodesics in $G^{\prime}$ are of length $2$. In
particular, the path $P:xzy$ is a maximal geodesic, and every induced path
$P_{3}$ in $G$ is a maximal geodesic in $G^{\prime}$. The statement of the
lemma now follows. ∎
From Lemma 2.1 we also infer that ${\rm
gpack}(G^{\prime})=1+pack_{ind}^{3}(G)$, where we denote by
$pack_{ind}^{k}(G)$ the maximum size of an induced $P_{k}$-packing in $G$.
Now, turning our attention back to the decision versions of these problems, it is
easy to see that a polynomial-time algorithm to resolve the Geodesic Packing
Problem in general graphs would imply that there is a polynomial-time
algorithm to resolve the MaxInduced$P_{3}$Packing Problem in bipartite graphs
with maximum degree $3$ (taking $G$ as such a graph). We have thus derived the
desired computational complexity result.
###### Theorem 2.2.
Geodesic Packing Problem is NP-complete.
By Theorem 2.2 it is of interest to bound the geodesic packing number and to
determine it for specific families of graphs. The following straightforward
upper bound is useful.
###### Lemma 2.3.
Let $d$ be the length of a shortest maximal geodesic of a graph $G$. Then,
${\rm gpack}(G)\leq\lfloor n(G)/(d+1)\rfloor$.
Given a set of vertex-disjoint maximal geodesics, each geodesic transversal
clearly hits each of the paths in at least one vertex, and these vertices are
pairwise distinct. This fact in particular implies the following upper bound.
###### Lemma 2.4.
If $G$ is a graph, then ${\rm gpack}(G)\leq{\rm gt}(G)$.
It is clear that ${\rm gpack}(P_{n})=1={\rm gt}(P_{n})$ as well as ${\rm
gpack}(K_{1,n})=1={\rm gt}(K_{1,n})$, hence the bound of Lemma 2.4 is sharp.
On the other hand, the value ${\rm gt}(G)$ can be arbitrarily bigger than
${\rm gpack}(G)$. For instance, ${\rm gpack}(K_{n})=\lfloor\frac{n}{2}\rfloor$
and ${\rm gt}(K_{n})=n-1$. Observe also that in $K_{n,n}$, $n\geq 2$, every
maximal geodesic is of length $2$, hence ${\rm
gpack}(K_{n,n})=\lfloor\frac{2n}{3}\rfloor$, while on the other hand ${\rm
gt}(K_{n,n})=n$. However, we do not know whether the ratio of the two
invariants is bounded and pose this as a problem.
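These small values are easy to verify exhaustively. The sketch below (practical only for very small graphs; the function names are illustrative) enumerates all maximal geodesics and then searches for a largest vertex-disjoint subfamily by backtracking.

```python
import networkx as nx
from itertools import combinations

def maximal_geodesics(G):
    """All maximal geodesics of G as canonical vertex tuples."""
    geos = set()
    for u, v in combinations(G.nodes, 2):
        for p in nx.all_shortest_paths(G, u, v):
            t = tuple(p)
            geos.add(min(t, t[::-1]))
    geos = list(geos)

    def contained(p, q):
        # is p a proper contiguous subpath of q (in either orientation)?
        s = list(p)
        for w in (list(q), list(q)[::-1]):
            if len(s) < len(w) and any(
                w[i:i + len(s)] == s for i in range(len(w) - len(s) + 1)
            ):
                return True
        return False

    return [p for p in geos if not any(contained(p, q) for q in geos)]

def gpack(G):
    """Geodesic packing number of a (small) graph G by backtracking."""
    geos = maximal_geodesics(G)
    best = 0

    def extend(i, used, size):
        nonlocal best
        best = max(best, size)
        for j in range(i, len(geos)):
            if used.isdisjoint(geos[j]):
                extend(j + 1, used | set(geos[j]), size + 1)

    extend(0, frozenset(), 0)
    return best

assert gpack(nx.complete_graph(5)) == 2               # floor(5/2)
assert gpack(nx.complete_bipartite_graph(3, 3)) == 2  # floor(2n/3) with n = 3
```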
###### Problem 2.5.
Is there an absolute constant $C$ such that $\frac{{\rm gt}(G)}{{\rm
gpack}(G)}\leq C$, for all graphs $G$?
The example of complete graphs shows that if the constant $C$ in Problem 2.5
exists, it cannot be smaller than $2$. To show that it actually cannot be
smaller than $3$, consider the rook’s graphs [10] that can be described as the
Cartesian product of two complete graphs or, equivalently, as the line graphs
of complete bipartite graphs [9].
###### Proposition 2.6.
If $n\geq 1$, then ${\rm gt}(K_{n}\,\square\,K_{n})=n^{2}-2n+2$.
###### Proof.
Set $R_{n}=K_{n}\,\square\,K_{n}$, and note that vertices of $R_{n}$ can be
presented in the Cartesian $n\times n$ grid such that two vertices are
adjacent if and only if they belong to the same row or the same column.
For $n=1$, the statement is clear, so let $n\geq 2$. Note that maximal
geodesics $P$ in $R_{n}$ are of length $2$ and consist of three vertices,
which can be described as follows: $(g,h)\in V(P)$, and there is a vertex
$(g^{\prime},h)\in V(P)$ in the same column as $(g,h)$ and a vertex
$(g,h^{\prime})\in V(P)$ that is in the same row as $(g,h)$. Let $S$ be the
complement of a (smallest) ${\rm gt}$-set of $R_{n}$. Hence $S$ contains no
maximal geodesics as just described.
First, we prove that $|S|\leq 2n-2$. Let $S_{i}$ be the set of vertices in $S$
that belong to the $i^{\rm th}$ row of $R_{n}$. Due to symmetry, we may assume
that rows are ordered in such a way that $|S_{1}|\geq\cdots\geq|S_{n}|$. Note
that $|S_{1}|=1$ implies $|S|\leq n$, and we are done. Hence, let $|S_{1}|\geq
2$. Note that in each column containing a vertex of $S_{1}$ there are no
other vertices of $S$, and the same holds for every row having more
than one vertex in $S$. Let $k\geq 1$ be the number of rows in which there are
at least two vertices of $S$. That is, for $S_{i}$, $i\in[k]$, we have
$|S_{i}|\geq 2$, but if $|S_{j}|>0$, where $j>k$, then $|S_{j}|=1$. Let $C$ be
the set of columns in which there are vertices from the sets $S_{i}$, where
$i\in[k]$. Note that there are $|C|$ vertices of $S$ in these columns. Since
in the remaining columns there are at most $n-k$ vertices from $S$ (because in
each of the remaining rows there is at most one vertex in $S$), we altogether
get $|S|\leq|C|+n-k$. Now, if $|C|=n$, then $|S|=n$ and we are done.
Otherwise, $|S|\leq|C|+n-k\leq(n-1)+(n-1)=2n-2$. To see that $|S|=2n-2$ is
attained, take $k=1$ with $|S_{1}|=n-1$, and add to $S$ the $n-1$ vertices of
the last column lying in rows $2,\ldots,n$. ∎
Since all maximal geodesics in $K_{n}\,\square\,K_{n}$ are of length $2$,
Lemma 2.3 implies that ${\rm
gpack}(K_{n}\,\square\,K_{n})\leq\frac{n^{2}}{3}$. We can thus estimate as
follows:
$\displaystyle\frac{{\rm gt}(K_{n}\,\square\,K_{n})}{{\rm
gpack}(K_{n}\,\square\,K_{n})}$
$\displaystyle\geq\frac{3(n^{2}-2n+2)}{n^{2}}=3\left(1-\frac{2}{n}+\frac{2}{n^{2}}\right)\,.$
Letting $n$ to infinity we have shown that in case the constant $C$ from
Problem 2.5 exists, it cannot be smaller than $3$.
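To make the estimate concrete, consider $n=10$: Proposition 2.6 gives ${\rm gt}(K_{10}\,\square\,K_{10})=82$, Lemma 2.3 gives ${\rm gpack}(K_{10}\,\square\,K_{10})\leq\lfloor 100/3\rfloor=33$, and hence the quotient is at least $82/33\approx 2.48$, already strictly above the value $2$ approached by complete graphs.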
In rook’s graphs $K_{n}\,\square\,K_{n}$, $n\geq 2$, every maximal geodesic is
of length $2={\rm diam}(K_{n}\,\square\,K_{n})$. More generally, a graph $G$
is uniform geodesic if every maximal geodesic in $G$ is of length ${\rm
diam}(G)$. Complete graphs, cycles, and paths are simple additional families
of uniform geodesic graphs. The fact that rook’s graphs are uniform geodesic
generalizes as follows.
###### Proposition 2.7.
If $G_{1},\ldots,G_{r}$, $r\geq 1$, are uniform geodesic graphs, then the
product $G_{1}\,\square\,\cdots\,\square\,G_{r}$ is also a uniform geodesic
graph.
###### Proof.
The result clearly holds for $r=1$. Moreover, by the associativity of the
Cartesian product, it suffices to prove the assertion for two factors. Hence let
$P$ be an arbitrary maximal geodesic in $G\,\square\,H$. Then the projections
$P_{G}$ and $P_{H}$ of $P$ on $G$ and on $H$ are geodesics in $G$ and $H$,
respectively. If $P_{G}$ is not maximal in $G$, then $P_{G}$ can be extended
to a longer geodesic in $G$, but then also $P$ can be extended to a longer
geodesic in $G\,\square\,H$, a contradiction. So $P_{G}$ and $P_{H}$ are
maximal geodesics in $G$ and $H$, respectively. By our assumption this means
that the lengths of $P_{G}$ and $P_{H}$ are ${\rm diam}(G)$ and ${\rm
diam}(H)$, respectively. As the distance function is additive in Cartesian
products, it follows that the length of $P$ is ${\rm diam}(G)+{\rm
diam}(H)={\rm diam}(G\,\square\,H)$. ∎
Proposition 2.7, Lemma 2.3, and the fact that the diameter is also additive on
Cartesian products, yield the following result.
###### Corollary 2.8.
If $G_{1},\ldots,G_{r}$, $r\geq 1$, are uniform geodesic graphs, then
${\rm
gpack}(G_{1}\,\square\,\cdots\,\square\,G_{r})\leq\left\lfloor\frac{n(G_{1})\cdots
n(G_{r})}{{\rm diam}(G_{1})+\cdots+{\rm diam}(G_{r})+1}\right\rfloor\,.$
## 3 Trees
In this section we derive an efficient algorithm to obtain the geodesic
packing number of an arbitrary tree. The approach used is in part similar to
the approach from [13] to determine the geodetic transversal number of a tree.
Let $G$ be a graph, let $u\in V(G)$ be a vertex of degree $2$, and let $x$ and
$y$ be the neighbors of $u$. If $G^{\prime}$ is the graph obtained from $G$ by
removing the vertex $u$ and adding the edge $xy$, then we say that
$G^{\prime}$ is obtained from $G$ by smoothing the vertex $u$. Let further
${\rm SM}(G)$ denote the graph obtained from $G$ by smoothing all the vertices
of $G$ of degree $2$. Since the smoothing operation does not change the degree
of any remaining vertex, ${\rm SM}(G)$ is well-defined, that is, unique up to isomorphism. It
was proved in [13, Lemma 4.2] that ${\rm gt}(T)={\rm gt}({\rm SM}(T))$ in any
tree $T$. We prove a similar result for the packing invariant.
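For concreteness, a direct implementation of the smoothing operation could look as follows (a networkx sketch; it assumes a simple graph, so a parallel edge arising from smoothing is merged into a single edge).

```python
import networkx as nx

def smooth(G):
    """SM(G): repeatedly remove a vertex u of degree 2 with neighbors
    x and y, replacing the path x-u-y by the edge xy."""
    H = G.copy()
    deg2 = [v for v in H.nodes if H.degree(v) == 2]
    while deg2:
        u = deg2.pop()
        x, y = H.neighbors(u)
        H.remove_node(u)
        H.add_edge(x, y)  # no-op if x and y were already adjacent
        deg2 = [v for v in H.nodes if H.degree(v) == 2]
    return H

assert smooth(nx.path_graph(5)).number_of_nodes() == 2  # P_5 smooths to K_2
```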
###### Lemma 3.1.
If $T$ is a tree, then ${\rm gpack}(T)={\rm gpack}({\rm SM}(T))$.
###### Proof.
Note that each maximal geodesic in a tree connects two leaves of the tree. Let
$\Psi_{T}$ be a largest geodesic packing in $T$. Its elements can thus be
represented by pairs of leaves that are endvertices of the corresponding
geodesics. Note that a maximal geodesic in $\Psi_{T}$ from which we remove all
vertices of degree $2$ becomes a maximal geodesic in ${\rm SM}(T)$. Thus the
same pairs of leaves can be used in ${\rm SM}(T)$ to represent the maximal
geodesics by its end-vertices. We denote by ${\rm SM}(\Psi_{T})$ the resulting
set of maximal geodesics in ${\rm SM}(T)$. Since any two geodesics
$g_{1},g_{2}\in\Psi_{T}$ are disjoint, so are also the corresponding geodesics
in ${\rm SM}(\Psi_{T})$. This implies that ${\rm gpack}(T)\leq{\rm gpack}({\rm
SM}(T))$. The reverse inequality can be proved in a similar way. Indeed,
since each maximal geodesic in ${\rm SM}(T)$ has two leaves of ${\rm SM}(T)$
as its end-vertices, the same two leaves are end-vertices of a maximal
geodesic in $T$. It is clear that the resulting maximal geodesics in $T$ are
also mutually vertex-disjoint, and thus together form a geodesic packing in
$T$ of cardinality ${\rm gpack}({\rm SM}(T))$. Thus, ${\rm gpack}(T)\geq{\rm
gpack}({\rm SM}(T))$. ∎
Lemma 3.1 does not hold for an arbitrary graph $G$. See Fig. 1, where a graph
$G$ is shown for which we have ${\rm gpack}(G)=4$ and ${\rm gpack}({\rm
SM}(G))=3$. Pairs of endvertices of maximal geodesics are marked by distinct
colors.
Figure 1: A graph $G$ with ${{\rm gpack}}(G)=4$, and ${\rm SM}(G)$ with ${{\rm
gpack}}({\rm SM}(G))=3$.
A support vertex in a tree is a vertex adjacent to a leaf. An end support
vertex is a support vertex that has at most one non-leaf neighbor. It is easy
to see that an end support vertex does not lie between two end support
vertices. In addition, every tree on at least two vertices contains an end
support vertex (see, for instance, [13]). In [13, Lemma 4.3] the following
result was proved.
###### Lemma 3.2.
[13] Let $T$ be a tree with no vertices of degree $2$. Let $u$ be an end
support vertex of $T$ and $u_{1},\ldots,u_{s}$ the leaves adjacent to $u$.
Then ${\rm gt}(T)={\rm gt}(T-\\{u,u_{1},\ldots,u_{s}\\})+1$. Moreover, there
exists a gt-set $S$ of $T$ such that $u\in S$.
We prove a result parallel to Lemma 3.2 concerning the geodesic packing
number.
###### Lemma 3.3.
Let $T$ be a tree with no vertices of degree $2$. Let $u$ be an end support
vertex of $T$ and $u_{1},\ldots,u_{s}$ the leaves adjacent to $u$. Then ${\rm
gpack}(T)={\rm gpack}(T-\\{u,u_{1},\ldots,u_{s}\\})+1$. Moreover, there exists
a ${\rm gpack}$-set $\Psi$ of $T$ such that $u_{1}uu_{2}\in\Psi$.
###### Proof.
Since $T$ has no vertices of degree $2$, the end support vertex $u$ is
adjacent to at least two leaves, that is, $s\geq 2$. If $T$ is a star, and
hence $u$ being the center of it, then the assertion of the lemma is clear. In
the rest of the proof we may thus assume that $u$ has at least one non-leaf
neighbor, and since $u$ is an end support vertex, it has only one non-leaf
neighbor. We denote the latter vertex by $w$, and let $T^{\prime}$ be the
component of $T-u$ that contains the vertex $w$.
Let $\Psi^{\prime}$ be a ${\rm gpack}$-set of $T^{\prime}$. Since
$u_{1}uu_{2}$ is a maximal geodesic in $T$, and every maximal geodesic in
$T^{\prime}$ is a maximal geodesic also in $T$, we infer that
$\Psi^{\prime}\cup\\{u_{1}uu_{2}\\}$ is a geodesic packing of $T$. Hence ${\rm
gpack}(T)\geq{\rm gpack}(T^{\prime})+1$.
Note that there can be at most one maximal geodesic in a geodesic packing of
$T$ that contains vertex $u$. In addition, there is at least one geodesic that
contains $u$ if a geodesic packing of $T$ is of maximum cardinality (for
otherwise, one could add the geodesic $u_{1}uu_{2}$ and make it of larger
cardinality, which is a contradiction). Now, let $\Psi$ be a ${\rm gpack}$-set
of $T$ and let $P\in\Psi$ be the geodesic that contains $u$. It is easy to see
that all maximal geodesics in $\Psi\setminus\\{P\\}$ belong to $T^{\prime}$
and are also pairwise vertex-disjoint maximal geodesics of $T^{\prime}$. Hence
${\rm gpack}(T^{\prime})\geq{\rm gpack}(T)-1$, and we are done. ∎
Combining the facts that ${\rm gpack}(K_{2})=1={\rm gt}(K_{2})$, that in any
tree $T$ we have ${\rm gt}(T)={\rm gt}({\rm SM}(T))$ and ${\rm gpack}(T)={\rm
gpack}({\rm SM}(T))$, and using Lemmas 3.2 and 3.3, we deduce the following
result.
###### Theorem 3.4.
If $T$ is a tree, then ${\rm gpack}(T)={\rm gt}(T)$.
Using the lemmas from this section, we can now present an algorithm that
constructs a ${\rm gpack}$-set of an arbitrary tree $T$. Note that a ${\rm
gpack}$-set of $T$ is uniquely determined by pairs of endvertices of its
maximal geodesics, and the outcome of the algorithm is the set of such
(ordered) pairs.
Input: A tree $T$.
Output: A ${\rm gpack}$-set $\Psi$, represented by pairs of end-vertices.
1 $\Psi=\emptyset$
2 $T={\rm SM}(T)$
3 while $n(T)\geq 3$ do
4 identify an end support vertex $p$ of $T$, and two of its leaf-neighbors $u_{1},u_{2}$
5 $\Psi=\Psi\cup\\{(u_{1},u_{2})\\}$
6 $T=T-\\{p,u_{1},\ldots,u_{t}\\}$, where $u_{1},\ldots,u_{t}$ are the leaf neighbors of $p$
7 $T={\rm SM}(T)$
8 if $n(T)=2$ then
9 $\Psi=\Psi\cup\\{V(T)\\}$
Algorithm 1 ${\rm gpack}$-set of a tree
###### Theorem 3.5.
Given a tree $T$, Algorithm 1 returns the set of pairs of end vertices of
maximal geodesics of a ${\rm gpack}$-set of $T$ in linear time.
The correctness of Algorithm 1 follows from Lemmas 3.1 and 3.3. The running
time of the algorithm is linear: in Step 7 there is nothing to be done if $T$
is a star, and otherwise the unique non-leaf neighbor of the vertex $p$
selected in Step 4 is the only vertex for which we need to check whether the
smoothing operation is required.
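To complement the pseudocode, a complete runnable sketch of Algorithm 1 follows (networkx-based; the smoothing helper is repeated so the sketch is self-contained, and since it rescans the tree in every iteration this particular version is quadratic rather than linear).

```python
import networkx as nx

def _smooth(T):
    """SM(T): splice out every vertex of degree 2 (valid for trees)."""
    T = T.copy()
    for u in [v for v in T.nodes if T.degree(v) == 2]:
        x, y = T.neighbors(u)
        T.remove_node(u)
        T.add_edge(x, y)
    return T

def _end_support(T):
    """An end support vertex p of T together with its leaf neighbors."""
    for p in T.nodes:
        nbrs = list(T.neighbors(p))
        leaves = [u for u in nbrs if T.degree(u) == 1]
        if leaves and len(nbrs) - len(leaves) <= 1:
            return p, leaves
    raise ValueError("no end support vertex; input is not a tree?")

def gpack_set_tree(T):
    """Algorithm 1: pairs of end-vertices of the maximal geodesics of
    a gpack-set of the tree T."""
    T = _smooth(T)
    psi = []
    while T.number_of_nodes() >= 3:
        p, leaves = _end_support(T)
        psi.append((leaves[0], leaves[1]))  # the geodesic u1-p-u2
        T.remove_nodes_from(leaves + [p])   # drop p and all its leaves
        T = _smooth(T)
    if T.number_of_nodes() == 2:
        psi.append(tuple(T.nodes))
    return psi

# Example: a spider with three legs of length 2; gpack = gt = 1
T = nx.Graph([(0, 1), (1, 2), (0, 3), (3, 4), (0, 5), (5, 6)])
print(gpack_set_tree(T))  # -> [(2, 4)], one pair of leaves of T
```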
## 4 Diagonal grids
Diagonal grids are strong products of paths [9]. If a diagonal grid is the
strong product of $r$ paths, then it is called an $r$-dimensional diagonal
grid. By definition, the $r$-dimensional grid
$P_{d_{1}}\,\square\,\cdots\,\square\,P_{d_{r}}$ is a spanning subgraph of
$P_{d_{1}}\,\boxtimes\,\cdots\,\boxtimes\,P_{d_{r}}$, cf. Fig. 2. The edges of
$P_{d_{1}}\,\square\,\cdots\,\square\,P_{d_{r}}$ (considered as a subgraph of
$P_{d_{1}}\,\boxtimes\,\cdots\,\boxtimes\,P_{d_{r}}$) are called Cartesian
edges of $P_{d_{1}}\,\boxtimes\,\cdots\,\boxtimes\,P_{d_{r}}$, the other edges
are diagonal edges. We say that a geodesic consisting of only Cartesian edges
is a Cartesian geodesic of
$P_{d_{1}}\,\boxtimes\,\cdots\,\boxtimes\,P_{d_{r}}$. In the rest we will
assume that the vertices of a path on $r$ vertices are integers $1,\ldots,r$,
and if $x\in V(P_{d_{1}}\,\boxtimes\,\cdots\,\boxtimes\,P_{d_{r}})$, then we
will use the notation $x=(x_{1},\ldots,x_{r})$.
Figure 2: (a) A $2$-dimensional grid $P_{5}\,\square\,P_{7}$ and (b) a
$2$-dimensional diagonal grid $P_{5}\,\boxtimes\,P_{7}$
###### Lemma 4.1.
If $P$ is a maximal geodesic in $P_{d_{1}}\boxtimes\cdots\boxtimes P_{d_{r}}$,
where $r\geq 2$, and $d_{1},\ldots,d_{r}\geq 2$, then
$n(P)\in\\{d_{1},\ldots,d_{r}\\}$.
###### Proof.
Let $P$ be an arbitrary geodesic of $G=P_{d_{1}}\boxtimes\cdots\boxtimes
P_{d_{r}}$ of length $\ell\geq 2$, so that $n(P)=\ell+1$. Let $xx^{\prime}$
and $yy^{\prime}$ be the first and the last edge of $P$, where $x$ and
$y^{\prime}$ are the first and the last vertex of $P$, respectively. It is
possible that $x^{\prime}=y$. Then
$\ell=d_{G}(x,y^{\prime})=1+d_{G}(x^{\prime},y)+1$. (Note that if
$x^{\prime}=y$, then $d_{G}(x^{\prime},y)=0$.)
Since
$d_{G}(x,y^{\prime})=\max\\{|x_{1}-y_{1}^{\prime}|,\ldots,|x_{r}-y_{r}^{\prime}|\\}$,
we may without loss of generality assume (having in mind that the strong
product operation is commutative) that
$\ell=d_{G}(x,y^{\prime})=|x_{1}-y_{1}^{\prime}|$. We now claim that
$y_{1}\neq y_{1}^{\prime}$ and suppose on the contrary that
$y_{1}=y_{1}^{\prime}$. Using the facts that
$d_{G}(x^{\prime},y)=\max\\{|x_{1}^{\prime}-y_{1}|,\ldots,|x_{r}^{\prime}-y_{r}|\\}$,
$|x_{1}-y_{1}^{\prime}|=\ell$, $|x_{1}-x_{1}^{\prime}|\leq 1$, and
$y_{1}=y_{1}^{\prime}$, we get that $|x_{1}^{\prime}-y_{1}|\geq\ell-1$.
Consequently, $d_{G}(x^{\prime},y)\geq\ell-1$, which in turn implies that
$\ell=d_{G}(x,y^{\prime})=1+d_{G}(x^{\prime},y)+1\geq 1+(\ell-1)+1=\ell+1\,,$
a contradiction. We have thus proved that if
$d_{G}(x,y^{\prime})=|x_{1}-y_{1}^{\prime}|$, then $y_{1}\neq y_{1}^{\prime}$.
Let us emphasize that $P$ was assumed to be an arbitrary geodesic.
Let now $P$ be a maximal geodesic in $G$ and use the same notation as above.
Assume again without loss of generality that $\ell=d_{G}(x,y^{\prime})=|x_{1}-y_{1}^{\prime}|$. If
$uv$ is an arbitrary edge of $P$ which is different from $xx^{\prime}$, then
the above claim asserts that $u_{1}\neq v_{1}$. Since
$\ell=d_{G}(x,y^{\prime})=|x_{1}-y_{1}^{\prime}|$ it follows that the first
coordinates of the vertices of $P$ are $\ell+1$ consecutive integers
$i,i+1,\ldots,i+\ell$. If $i>1$, then adding the edge between $x$ and the
vertex $(i-1,x_{2},\ldots,x_{r})$ yields a geodesic which strictly contains
$P$, a contradiction. Hence $i=1$. By a parallel argument we get that
$i+\ell=d_{1}$. We conclude that $n(P)=d_{1}$. ∎
From the proof of Lemma 4.1 we can deduce also the following.
###### Lemma 4.2.
Let $G=P_{d_{1}}\,\boxtimes\,\cdots\,\boxtimes\,P_{d_{r}}$, where $r\geq 2$
and $d_{i}\geq 2$ for $i\in[r]$. If $x=(x_{1},\ldots,x_{i-1},1,x_{i+1},\ldots,
x_{r})$ and $y=(y_{1},\ldots,y_{i-1},d_{i},y_{i+1},\ldots,y_{r})$ are vertices
of $G$, then there exists a maximal $x,y$-geodesic in $G$ of length $d_{i}-1$.
We are now in position to determine the geodesic packing number of diagonal
grids.
###### Theorem 4.3.
If $r\geq 2$ and $2\leq d_{1}\leq\min\\{d_{2},\ldots,d_{r}\\}$, then
${\rm gpack}(P_{d_{1}}\,\boxtimes\,\cdots\,\boxtimes\,P_{d_{r}})=d_{2}\cdot
d_{3}\cdots d_{r}\,.$
###### Proof.
Set $G=P_{d_{1}}\,\boxtimes\,\cdots\,\boxtimes\,P_{d_{r}}$. For each vector
$(i_{2},\ldots,i_{r})$, where $i_{j}\in[d_{j}]$, $j\in\\{2,\ldots,r\\}$, let
$P_{i_{2},\ldots,i_{r}}$ be the path
$(1,i_{2},\ldots,i_{r})(2,i_{2},\ldots,i_{r})\ldots(d_{1},i_{2},\ldots,i_{r})\,.$
By Lemma 4.2, $P_{i_{2},\ldots,i_{r}}$ is a maximal geodesic of $G$. Hence the
set
$\\{P_{i_{2},\ldots,i_{r}}:\ i_{j}\in[d_{j}],j\in\\{2,\ldots,r\\}\\}$
is a geodesic packing of $G$. Its size is $d_{2}\cdot d_{3}\cdots d_{r}$,
hence ${\rm gpack}(G)\geq d_{2}\cdot d_{3}\cdots d_{r}$.
From Lemma 4.1 we know that a shortest maximal geodesic of $G$ is of length
$d_{1}-1$. This implies, by using Lemma 2.3, that ${\rm gpack}(G)\leq
n(G)/d_{1}=d_{2}\cdot d_{3}\cdots d_{r}$ and we are done. ∎
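As a concrete instance, for $G=P_{3}\,\boxtimes\,P_{4}\,\boxtimes\,P_{5}$ the theorem gives ${\rm gpack}(G)=4\cdot 5=20$: the packing consists of the $20$ Cartesian geodesics $(1,i_{2},i_{3})(2,i_{2},i_{3})(3,i_{2},i_{3})$, $i_{2}\in[4]$, $i_{3}\in[5]$, while Lemma 2.3 with $d=d_{1}-1=2$ gives the matching upper bound $\lfloor 60/3\rfloor=20$.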
## 5 Conclusions
We have introduced the geodesic packing problem which is a min-max dual
invariant to the earlier studied geodesic transversal problem. We have settled
the complexity status of the geodesic packing problem for general graphs and
arbitrary trees, and determined the geodesic packing number for several
classes of graphs. We have proved that ${\rm gpack}(T)={\rm gt}(T)$ for
arbitrary trees $T$. It is not known whether ${\rm gpack}(G)={\rm gt}(G)$ when
$G$ is a cactus graph or a block graph. There are numerous open problems that
are left for future investigation. One open problem is explicitly stated in
Problem 2.5. Other natural extensions of our research would be to study the
geodesic packing number for general strong products or other graph products
and the general packing number for intersection graphs such as interval
graphs, circular arc graphs or chordal graphs.
## Acknowledgments
This work was supported and funded by Kuwait University, Research Project No.
(FI01/22).
## Conflict of interest
The authors declare that they have no conflict of interest.
## References
* [1] J. Akiyama, V. Chvátal, Packing paths perfectly, Discrete Math. 85 (1990) 247–255.
* [2] Y. Azar, N. Buchbinder, H. Chan, S. Chen, I. Cohen, A. Gupta, Z. Huang, N. Kang, V. Nagarajan, J. Naor, D. Panigrahi, Online algorithms for covering and packing problems with convex objectives. 57th Annual IEEE Symposium on Foundations of Computer Science—FOCS 2016, 148–157, IEEE Computer Soc., Los Alamitos, CA, 2016.
* [3] K. Casel, H. Fernau, M. Khosravian Ghadikolaei, J. Monnot, F. Sikora, Extension of vertex cover and independent set in some classes of graphs, Lecture Notes in Comput. Sci. 11485 (2019) 124–136.
* [4] J. Dreier, J. Fuchs, T.A. Hartmann, P. Kuinke, P. Rossmanith, B. Tauer, H.-L. Wang, The complexity of packing edge-disjoint paths. 14th International Symposium on Parameterized and Exact Computation, Art. No. 10, 16 pp., Leibniz Int. Proc. Inform., 148, Schloss Dagstuhl. Leibniz-Zent. Inform., Wadern, 2019.
* [5] D.C. Fisher, S.L. Fitzpatrick, The isometric number of a graph, J. Combin. Math. Combin. Comput. 38 (2001) 97–110.
* [6] T. Gallai, Über extreme Punkt- und Kantenmengen, Ann. Univ. Sci. Budapest. Eötvös Sect. Math. 2 (1959) 133–138.
* [7] M. Ghorbani, S. Klavžar, H.R. Maimani, M. Momeni, F. Rahimi-Mahid, G. Rus, The general position problem on Kneser graphs and on some graph operations, Discuss. Math. Graph Theory 41 (2021) 1199–1213.
* [8] P. Hansen, M. Labbé, D. Schindl, Set covering and packing formulations of graph coloring: Algorithms and first polyhedral results, Discrete Opt. 6 (2009) 135–147.
* [9] R. Hammack, W. Imrich, S. Klavžar, Handbook of Product Graphs, Second Edition, CRC Press, Boca Raton, FL, 2011.
* [10] A.J. Hoffman, On the line graph of the complete bipartite graph, Ann. Math. Statist. 35 (1964) 883–885.
* [11] M. Jiang, G. Xia, Y. Zhang, Edge-disjoint packing of stars and cycles, Theoret. Comput. Sci. 640 (2016) 61–69.
* [12] P. Manuel, On the isometric path partition problem, Discuss. Math. Graph Theory 41 (2021) 1077–1089.
* [13] P. Manuel, B. Brešar, S. Klavžar, The geodesic-transversal problem, Appl. Math. Comput. 413 (2022) 126621.
* [14] J. Monnot, S. Toulouse, The path partition problem and related problems in bipartite graphs, Oper. Res. Lett. 35 (2007) 677–684.
* [15] I. Peterin, G. Semanišin, On the maximal shortest path cover number, Mathematics 9 (2021) 1592.
* [16] D. Wagner, Simple algorithms for Steiner trees and paths packing problems in planar graphs, CWI Quarterly 6 (1993) 219–240.
# Full-aperture extended-depth oblique plane microscopy through dynamic remote
focusing
Paolo Pozzi$^{1,*}$, Vipin Balan$^{1}$, Alessia Candeo$^{1}$, Alessia Brix$^{2}$, Anna Silvia Pistocchi$^{2}$, Cosimo D’Andrea$^{1}$, Gianluca Valentini$^{1}$, Andrea Bassi$^{1}$
$^{1}$ Dipartimento di Fisica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano, Italy
$^{2}$ Dipartimento di Biotecnologie Mediche e Medicina Traslazionale, Università degli Studi di Milano, Via Festa del Perdono 7 - 20122 Milano, Italy
$^{*}$<EMAIL_ADDRESS>
###### Abstract
Oblique plane microscopy is a method enabling light-sheet fluorescence imaging
through a single microscope objective lens by focusing on a tilted plane
within the sample. To focus the fluorescence emitted by the oblique plane on a
camera, the light is imaged through a pair of remote objective lenses, facing
each other at an angle. The aperture mismatch resulting from this
configuration limits the effective numerical aperture of the system, reducing
image resolution and signal intensity.
This manuscript introduces an alternative method to capture the oblique plane
on the camera. Instead of relying on angled objective lenses, an electrically
tunable lens is employed. This lens adjusts the focal plane of the microscope
synchronously with the rolling shutter of a scientific CMOS camera. In this
configuration the entire aperture of the objective is effectively employed,
increasing the resolution of the system. Moreover, a variety of objective
lenses can be employed, enabling the acquisition of wider axial fields of view
compared to conventional oblique plane microscopy.
## 1 Introduction
Light-sheet microscopy is the gold standard method for fluorescence imaging of
complex three dimensional samples at high rates [1]. Due to the need for
multiple perpendicular microscope objectives around the sample, standard
light-sheet microscopy has significant geometric constraints for the shape and
mounting method of the observed sample. Although this three-dimensional
arrangement of sample mounting can sometimes be an advantage[2, 3], its use
with standard slides and Petri dishes, or in intravital applications in small
rodents can be challenging, requiring complex geometries [4] for the objective
lenses and for sample positioning.
Oblique plane microscopy (OPM) [5] is a variant of light-sheet microscopy, in
which a single objective lens is used both for light-sheet illumination and
fluorescence detection, greatly simplifying imaging of samples optically
accessible from a single direction. To achieve this, the excitation light is
confined to one edge of the objective aperture, so that the light sheet
propagates at an angle with respect to the optical axis. Since the illuminated
plane is tilted from the optical axis, only a small portion of it would be in
focus on a camera in a standard detection path. To have the oblique plane in
focus within the whole field of view of the microscope, a re-projection setup
is employed in the detection path, with a secondary microscope objective
forming a low magnification image of the sample, and a tertiary objective
observing this image at an angle.
While this configuration is effective and relatively simple to implement, the
aperture mismatch between the secondary and tertiary objectives constitutes a
significant drawback, limiting the performance and versatility of OPM. In
fact, as the angle between the light sheet and the optical axis decreases, the
fraction of fluorescence light coupled in the tertiary objective becomes
smaller, reducing both signal intensity and imaging resolution [6]. The
problem is visually represented in figure 1, which shows the effective
aperture angles for water immersion objectives on standard OPM for high and
low numerical apertures. The scheme assumes the rarely implemented ideal
scenario in which the tertiary objective has the same numerical aperture as
the primary, which would require the presence of an immersion medium between
the secondary and tertiary objective. Nonetheless, it can be observed that a
significant portion of the aperture of the primary objective is not employed
by the system, up to the extreme scenario of 0.5 NA objectives, in which the
two apertures have no overlap, and no fluorescence photons can be detected.
Figure 1: OPM apertures. Representation of OPM apertures for high and low
numerical aperture water immersion objectives. The angle $\alpha$ represents
the aperture of the primary objective, $\beta$ is the light-sheet aperture,
$\gamma$ is the aperture of an ideal tertiary objective in standard OPM with
the same numerical aperture as the primary. The effective aperture of the
system is the angle of the overlap between $\alpha$ and $\gamma$.
As a result, only high numerical aperture primary objectives can be used
effectively in standard OPM, and the total aperture and resolution of the
system remain impaired from the re-projection setup.
This drawback can be mitigated by tilting the direction of propagation of
fluorescence light through the use of a carefully positioned discontinuity in
refractive index between the secondary and tertiary objectives. This can be
achieved either through an immersion chamber with two separate liquids [7] or
through the use of a specialized axially asymmetric tertiary objective with
null working distance[8]. However, these modifications can be complex and
expensive to achieve, and still require the use of high numerical aperture
primary objectives in order to keep the angle between the light-sheet and the
optical axis suitably large. OPM with low NA objectives has been proven
possible through the use of a diffractive grating between the secondary and
tertiary objectives [9], which however introduces constraints on the maximum
usable NA, and restricts use for multi-color imaging.
This manuscript presents remote focusing oblique plane microscopy (RF-OPM), an
alternative approach to the oblique imaging problem. RF-OPM images the sample
through a single objective of arbitrary numerical aperture, presenting no
aperture mismatches in the detection path. Instead of using the standard OPM
re-projection method, an electrically tunable lens (ETL)[10] is introduced in
the back focal plane of the optical system, and a CMOS camera is positioned in
the image plane. In this configuration, at any given time, only a narrow
section of the oblique plane is in focus on the camera. The position of such
narrow section can be dynamically shifted along the oblique plane by changing
the focal power of the ETL. Through the linear modulation of the focal power
of the lens in sync with the rolling shutter exposure of the camera, images of
the oblique plane can be acquired entirely in focus. While limiting the
rolling shutter exposure to a small sub-region of the detector significantly
reduces the photon budget, the approach is already vastly employed in
conventional light-sheet imaging, either to increase axial confinement in
confocal light-sheet setups [11], or to enable wide fields of view with high
light-sheet numerical aperture [12]. These methods are generally considered a
standard procedure when imaging very large optically cleared samples[13].
## 2 Method
### 2.1 Working principle
While RF-OPM can in principle be implemented using any form of incoherent
remote focusing, including deformable lenses [14], deformable mirrors [15],
Alvarez lenses [16] or optical elements mounted on fast accelerating
translation stages [17], this manuscript reports an ETL based design. The ETL
was chosen mostly for simplicity of implementation and reproducibility of the
results, as it constitutes a relatively inexpensive and readily available
device that has performance compatible with the experimental design.
A simplified representation of the RF-OPM optical setup is reported in figure
2, panel B. The setup consists of a conventional epifluorescence microscope,
with a 4-f telescope conjugating the objective’s back focal plane with the
aperture of an ETL. The conventional dichroic and filter set is then
positioned between the ETL and the tube lens.
Figure 2: Method. A. Scheme of the imaging geometry in the axial direction for
different focal powers of the ETL. B. Simplified schematic of the optical
setup. FP - Objective focal plane, OL - Objective lens, PP - Pupil plane of
the system, L1, L2 - 4f telescope, DM1, DM2, DM3 - Dichroic mirrors, M1, M2 -
Image plane mirrors, ETL - Tunable lens, CL - Cylindrical lens, L3 - Tube
lens, CAM - Camera detector plane.
The laser light forming the light-sheet is focused by a cylindrical lens in
the center of the ETL. Between the two lenses of the 4-f telescope, a pair of
dichroic mirrors splits the excitation and detection paths, so that a manually
adjustable mirror in the image plane of the system can be tilted to move the
laser light at the edge of the objective aperture, achieving illumination
along an oblique plane.
In this configuration, the light-sheet remains positioned along a fixed
oblique plane, but its focus moves along the plane linearly with the focal
power of the ETL, as shown in figure 2, panel A. Neglecting the chromatic
aberrations of the ETL, the camera always remains focused on the horizontal
plane intersecting the light-sheet focus. As a result, at any given time, only
a thin strip of the camera detects details in focus, with the lateral position
of the strip shifting linearly with the ETL focal power. Using a rolling
shutter sensor with a short exposure, it is possible to linearly modulate the
focal power of the ETL so that the image is always in focus on the exposed
pixels. The final output of the detector will therefore be an image of the
light-sheet plane (indicated in red in panel A of figure 2), fully in focus.
Three-dimensional images can be acquired by either moving the sample through
the light-sheet with a translation stage or, in principle, by conjugating the
ETL plane with a galvanometric mirror, as is generally done in OPM.
It should be noted that the camera output obtained in this configuration
consists of the projection of the oblique plane in the horizontal direction,
and as such the magnification of the final image will be different for the $x$
and $y$ axes of the camera. An appropriate affine transformation should be
applied to obtain an unwarped image.
### 2.2 Field of view
As discussed in the previous section, the magnification of the system is
different along the axes parallel and perpendicular to the propagation
direction of the light-sheet. When the light-sheet is correctly aligned at the
edge of the pupil aperture, the angle of the beam from the optical axis is
$\alpha=\arcsin{\left(\frac{{NA}_{obj}}{n}\right)}-0.5\arcsin{\left(\frac{{NA}_{ls}}{n}\right)}$
(1)
where ${NA}_{obj}$ and ${NA}_{ls}$ are the numerical apertures of the
objective and of the light-sheet beam respectively, and $n$ is the refractive
index of the sample. For simplicity in the design of the system, this can also
be approximated as
$\alpha=\arcsin{\left(\frac{D_{obj}-R_{ls}}{2fn}\right)}\,.$ (2)
Where $f$ is the focal length of the objective lens, $D_{obj}$ is the diameter
of the back aperture of the objective, and $R_{ls}$ is the radius of the
light-sheet beam at the objective back focal plane.
Given the magnification $M$ of the system and assuming the use of a square
detector of side $D$, the dimensions of the field of view of the acquired
images is
${FOV}_{\bot},{FOV}_{\parallel}=\frac{D}{M},\frac{D}{M\sin{\alpha}}$ (3)
where ${FOV}_{\bot}$ is the size in the direction perpendicular to the
propagation of the light-sheet, and ${FOV}_{\parallel}$ is the one along the
propagation direction of the light-sheet. As a consequence, the total axial
distance covered by the field of view is
${FOV}_{axial}=\frac{D}{M\tan{\alpha}}\,.$ (4)
It is important to notice that the final pixel size along the propagation
direction of the light-sheet increases inversely with the sine of the angle,
approaching infinity as the light-sheet becomes vertical. In principle, an
extremely wide field of view can be achieved in the axial direction by using a
very low numerical aperture objective or by moving the light-sheet beam
towards the center of the objective’s aperture, but very low pixel sampling
would be obtained in this situation.
A second limit to the achievable axial size of the field of view is given by
the effective dioptric power range of the ETL, and its ability to linearly
modulate it at a frequency compatible with the desired frame rate. To ensure
the most effective use of the ETL focal power range, the telescope conjugating
it to the objective back focal plane should precisely match the two apertures.
In this configuration, the required achievable focal length (both converging
and diverging) of the ETL is
${f}_{tl}=\frac{2M_{tel}^{2}f_{obj}^{2}}{{FOV}_{axial}}\,,$ (5)
where $f_{obj}$ is the focal length of the objective lens and $M_{tel}$ is the
magnification of the telescope between the objective and the ETL.
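As an illustration of Eqs. (1) and (3)-(5), the short sketch below evaluates the design quantities for a configuration similar to the one described in Section 2.4; all parameter values are assumptions for the example, not a specification of the actual instrument.

```python
import numpy as np

n = 1.33                   # refractive index of the sample (water)
NA_obj, NA_ls = 1.1, 0.20  # objective and light-sheet NA (assumed)
f_obj = 3.0e-3             # objective focal length [m] (60X, 180 mm tube lens)
M = 55.55                  # total magnification of the detection path
D = 1024 * 6.5e-6          # detector side [m] (square region assumed, Eq. (3))
M_tel = 300.0 / 125.0      # telescope magnification, objective to ETL

alpha = np.arcsin(NA_obj / n) - 0.5 * np.arcsin(NA_ls / n)  # Eq. (1)
fov_perp = D / M                                            # Eq. (3)
fov_par = D / (M * np.sin(alpha))
fov_axial = D / (M * np.tan(alpha))                         # Eq. (4)
f_tl = 2 * M_tel**2 * f_obj**2 / fov_axial                  # Eq. (5)

print(f"alpha = {np.degrees(alpha):.1f} deg")
print(f"FOV: {fov_perp * 1e6:.0f} um (perp.), {fov_par * 1e6:.0f} um (parallel),"
      f" {fov_axial * 1e6:.0f} um (axial)")
print(f"required ETL focal length: +/- {f_tl * 1e3:.0f} mm")
```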
### 2.3 Resolution and frame rate
The estimation of the resolution of a standard OPM can be complex due to the
dependence of the effective numerical aperture of the system on the angle
between the secondary and tertiary objectives. A complete and exhaustive
analysis can be found in Ref. [18]. In principle, RF-OPM images can achieve
the nominal resolution of the objective in the oblique plane, while the
resolution in the direction perpendicular to the oblique plane is given by the
thickness of the light-sheet at its focus.
However, a limit to the performance is imposed by optical aberrations
introduced by the ETL. When mounted horizontally, a conventional ETL
introduces aberrations from the desired spherical phase with an RMS amplitude
of less than $200\,nm$, which can generally be neglected in light-sheet
microscopy, since typical samples often introduce more severe aberrations [19,
20]. However, when the ETL is operated at increasing frequencies, secondary
modes other than pure defocus begin to be excited, which generally introduce
spherical-like aberrations. For readily available commercial ETLs, such
secondary modes exhibit a resonant frequency at approximately $400\,Hz$,
therefore precluding applications at extremely high frame rates. At more
conventional light-sheet frame rates of tens of $Hz$, these effects can be
considered negligible.
In addition to the excitation of secondary modes in the ETL, a second
limitation to the maximum frame rate achievable in RF-OPM is given by the
capability of the ETL to linearly modulate focal power over time. If a
detector capable of alternating the direction of the rolling shutter at each
frame is employed, the ETL can be synchronized with the detector through a
triangular waveform. Triangular waveforms are relatively trivial to generate
with ETLs, and nonlinear behavior is limited to short intervals between frames
in which the direction of the scan is reversed. However, most detectors
generally utilised in light-sheet microscopy are not capable of alternating
rolling shutter direction at each frame. In this case, the ETL must generate a
sawtooth waveform, which presents a critical point at the discontinuity
between frames. Since the ETL cannot instantly switch from a large positive
focal power to a large negative one or vice versa, a significant interval
between frames is necessary to allow the ETL to reset its position. Hence, as
the microscope frame rate increases, the fraction of time in which the
detector is actually exposed decreases, which could lead to an insufficient
signal-to-noise ratio.
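The synchronization logic can be sketched in a few lines; the example below generates the command waveform sample by sample, assuming the ETL accepts an analog focal-power command and the camera is triggered at each frame start (frame rate, duty cycle and command range are illustrative values, and a real controller interface may differ).

```python
import numpy as np

def etl_drive(frame_rate=50.0, duty=0.60, v_min=-1.0, v_max=1.0,
              sample_rate=100_000, n_frames=4):
    """Sawtooth-like ETL command synchronized to a rolling-shutter
    camera: a linear focal ramp while the shutter scans the exposed
    fraction (duty) of the frame, then a flyback ramp so the lens can
    reset before the next frame trigger."""
    t_frame = 1.0 / frame_rate
    t = np.arange(int(n_frames * t_frame * sample_rate)) / sample_rate
    phase = (t % t_frame) / t_frame                       # 0..1 in each frame
    ramp = np.where(phase < duty,
                    phase / duty,                         # scan (exposure)
                    1.0 - (phase - duty) / (1.0 - duty))  # flyback / settle
    v = v_min + (v_max - v_min) * ramp                    # command value
    trig = phase < 1.0 / (t_frame * sample_rate)          # frame-start pulse
    return t, v, trig

t, v, trig = etl_drive()  # e.g. write v to a DAC output at sample_rate
```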
### 2.4 Experimental setup
Images were acquired on a custom RF-OPM setup. The setup is designed for a
60X, $1.1\,NA$ water-dipping objective (LUMFL N 60XW, Olympus, Japan), with a
back focal plane aperture diameter of $6.6\,mm$. In order to prove the
versatility of the method, additional experiments were performed with a 20X,
$0.5\,NA$ water-dipping objective (UMPlanFL N 20XW, Olympus, Japan).
The back focal plane of the objective is conjugated with the $16\,mm$ aperture
of the ETL (EL-16-40-TC-VIS-5D with ECC-1C controller, Optotune, Switzerland)
through two lenses of $125\,mm$ and $300\,mm$ focal respectively (AC254-125-A
and AC508-300-A, Thorlabs, USA), forming a 2.4X 4-f telescope. This resulted
in perfect matching of the aperture of the 60X objective, while the numerical
aperture of the 20X objective was cropped down by the ETL diameter to
approximately $0.37$.
In order to image green fluorophores, long-pass dichroic mirrors with a cutoff
wavelength at around $500\,nm$ are used to split the path between the two
lenses of the telescope. Dichroic mirrors with mm-scale thickness are employed
(DMLP505L, Thorlabs, USA, $5\,mm$ thick, and Di03-R488-t3-25x36, Semrock, USA,
$3\,mm$ thick) in order to minimize aberrations in the light-sheet path
introduced by their curvature.
Excitation light is provided by a $40\,mW$, $473\,nm$ diode laser
($\lambda$-beam, RGB laser systems, Germany) coupled to a single-mode fiber.
Light from the fiber is collimated to a beam diameter of $8\,mm$ with a
reflective collimator (RC08SMA-P01, Thorlabs, USA), focused through a
$400\,mm$ focal cylindrical lens (LJ1363RM-A, Thorlabs, USA) in a light-sheet
conjugated to infinity by a $150\,mm$ focal length lens (AC254-150-A,
Thorlabs, USA), and then reflected through the center of the ETL by a dichroic
mirror (DMLP505L, Thorlabs, USA). This configuration led to a final ratio of
$0.18$ between the aperture diameters of the light-sheet and the objective at
the back focal plane of the system.
An additional 0.5X 4-f telescope (AC508-200-A and AC508-100-A, Thorlabs, USA)
is present in the fluorescence light optical path between the ETL and the
tube-lens of the system. This addition serves two purposes: firstly, to extend
the optical path length, allowing the horizontal mounting of the ETL; and
secondly, to accommodate potential future upgrades, such as the integration of
a galvanometric mirror at the system’s back focal plane, which would enable
faster dynamic imaging. A fluorescence filter (MF525-39, Thorlabs, USA) is
present after the relayed back focal plane, and a $200\,mm$ focal length tube
lens (MXA20696, Nikon, Japan) is employed to conjugate the image plane of the
system to an sCMOS camera (Orca Flash 4.0 v2, Hamamatsu, Japan), with a final
effective magnification of 55.55X when using the 60X objective, and of 16.66X
when using the 20X objective. Images were acquired on a 2048 by 1024 pixels
subregion of the detector. The active region was cropped vertically to 1024
pixels, since the 60X objective showed significant spherical aberration
outside this range, while the axial field of view achievable with the 20X
objective was considered more than adequate for most available samples. Given
the geometry of the setup, the final field of view of raw images for the 60X
objective spans $240\,\mu m$ on the horizontal plane by $156\,\mu m$ along the
oblique plane, at an angle of $48^{\circ}$ with the optical axis, for a total
axial range of $104\,\mu m$. For the 20X objective, the field of view spans
$720\,\mu m$ on the horizontal plane by $1020\,\mu m$ along the oblique plane,
at an angle of $20^{\circ}$ with the optical axis, for a total axial range of
$958\,\mu m$.
Due to the inability of the detector to image with an alternating direction
rolling shutter, a sawtooth function is used for modulating the ETL focal
power in time. The maximum usable imaging rate achievable is $50\,Hz$, with a
duty cycle of exposure of approximately $60\%$, while higher duty cycles can
be achieved at lower frame rates, up to $90\%$ at $10\,Hz$. Higher frequencies
or better duty cycles could be achieved by vertically cropping the active
region of the detector, reducing the axial size of the field of view.
Three-dimensional datasets were acquired by translating the sample
horizontally with a servo-controlled actuator (M-405.CG, Physik Instrumente,
Germany). A custom software script was used to correct the dataset shearing
and stretching through an affine transform.
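The deshearing can be reproduced with standard tools. The sketch below uses scipy, assuming each camera row $i$ of a raw frame is horizontally displaced by $i\cdot px$ with respect to the stage axis and lies at depth $i\cdot px/\tan\alpha$ (the projection geometry of Section 2.1); the pixel size, stage step and angle are illustrative values, and sign conventions may need to be adapted to the actual acquisition.

```python
import numpy as np
from scipy import ndimage

def deshear(raw, angle_deg=48.0, px=0.117, dstage=0.4):
    """Map a raw RF-OPM stack raw[k, i, j] (k: stage step, i: camera
    row, j: camera column) onto a Cartesian grid.  px: sample-plane
    pixel size [um]; dstage: stage step [um]."""
    a = np.deg2rad(angle_deg)
    shear = px / dstage             # horizontal shift per row, in stage steps
    nk, ni, nj = raw.shape
    pad = int(np.ceil(shear * ni))  # room for the sheared content
    vol = np.zeros((nk + pad, ni, nj), raw.dtype)
    vol[:nk] = raw
    # inverse mapping used by ndimage: input_k = output_k - shear * output_i
    m = np.array([[1.0, -shear, 0.0],
                  [0.0,  1.0,  0.0],
                  [0.0,  0.0,  1.0]])
    out = ndimage.affine_transform(vol, m, order=1)
    # axis 0 now samples x every dstage um; axis 1 samples z every
    # px/tan(a) um, so rescale axis 1 if isotropic voxels are needed
    return out
```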
## 3 Results
### 3.1 Microbeads imaging
In order to evaluate the performance of the system, a $3\,mm$ thick sample of
$0.17\mu m$ yellow-green fluorescent microbeads (P7220 PS-Speck Point Source
Kit, Thermo Fisher Scientific, USA) embedded in agarose gel was imaged with
the two objectives. Representative datasets are reported in figure 3.
Figure 3: Microbeads imaging A x-z maximum intensity projection (where z is
the direction of the optical axis of the objective) over a $25\,\mu m$ range
in y of a microbead image, acquired with the 20X objective. Scale bar is
$100\,\mu m$. B x-z maximum intensity projection over a $100\,\mu m$ range in
y of a microbead image, acquired with the 60X objective. Scale bar is $25\,\mu
m$. C,D images on a horizontal and vertical plane of a single microbead,
acquired with a 60X objective. Scale bar is $1\,\mu m$. E estimation of the
full width at half maximum of the size of a single bead at the optimal working
distance of the 60X objective. Lines are spline interpolation of data.
A first, important observation is that high aperture objectives are generally
optimized for diffraction-limited imaging within a relatively short axial
range of operation. This can be clearly observed in figure 3, panels A and B,
which show how the axial resolution of the system is conserved throughout the
axial field of view with the 0.5 NA 20X objective lens, but is only optimal
within a range of approximately $30\,\mu m$ for the 1.1 NA 60X objective.
Within the optimal range of the objective, the full width at half maximum of
the PSF of the system is sub-micrometric laterally and around $1\,\mu m$
axially. The resolving power is slightly worse than the nominal diffraction
limit of the objective, and a slight star shape can be observed in the PSF.
Both these effects are, most likely, due to slight astigmatism introduced in
the detection path by the three dichroic mirrors utilised in the system. To
avoid the introduction of vignetting in the system at high defocusing power of
the ETL, large dichroics (2-inch diameter) were employed, which introduced
non-negligible wavefront distortion. Moreover, the employed objectives are not
optimized for use with a coverslip, which is necessary for imaging beads in an
inverted microscope. A more optimized layout of the system using a coverslip-
optimized objective and smaller, higher-quality dichroics should solve this
issue.
### 3.2 Mouse kidney imaging
The main advantage of OPM systems over traditional light-sheet microscopy is
their ability to image samples conventionally mounted on a microscopy slide. In
order to show this capability, a commercially available and fairly widespread
sample, a $16\,\mu m$ cryostat section of mouse kidney with immunofluorescence
staining (Fluocells prepared slide #3, Invitrogen, USA), was imaged with the
setup.
Figure 4: Prepared mouse kidney slide imaging. A. Geometry of the acquisition,
not to scale. The green plane represents the objective’s focal plane, the red
plane represents the raw field of view of the system, and the blue plane
represents the axial field of view of the system. The orange arrow shows the
direction of the sample translation. B. Representative images of the sample.
Image border colors report their geometry referring to the scheme in panel A.
Images in the horizontal and vertical plane are obtained through affine
transform of the raw data. Scale bars for the horizontal and the vertical image
are $50\,\mu m$, scale bars in the raw image are $20\,\mu m$, and the intensity
scale bar is in arbitrary units.
The glomeruli and convoluted tubules, stained with Alexa 488, were visible
with the laser and filter set present in the setup. Due to the thin nature of
the sample, only the 60X objective was utilised. Figure 4 shows the
acquisition geometry and a representative image of the sample. Raw planes were
acquired at $50\,Hz$, with a $0.4\,\mu m$ spacing between planes, on a 2048 by
256 pixel sub-region of the detector. The dataset shows good optical
sectioning, and resolution performance comparable to those measured in the
microbeads test. The images produced are also comparable to those reported in
the literature or in commercial microscope documentation for the same sample. The
main drawback of the current setup is the single-channel acquisition, which
limits the amount of information that can be retrieved. Future upgrades with
multi-edge dichroics could easily enable the use of multiple excitation
wavelengths. The widely available dual rolling shutter feature of sCMOS
cameras, together with the use of an image splitter, could also allow the
simultaneous acquisition of two channels, doubling the acquisition pixel rate
of the system.
### 3.3 Zebrafish imaging
Tg(kdrl:eGFP)s843 zebrafish embryos, expressing a green fluorescent protein in
the vascular structure [21], were imaged at 3 to 5 days post fertilization
(dpf). The pigmentation was suppressed through 1-phenyl-2-thiourea (PTU)
treatment [22]. To immobilize the larvae, 0.016% tricaine anesthetic solution
was used. The inverted nature of the microscope allowed convenient horizontal
mounting of the larvae on a coverslip. A layer of 1% w/v agarose was laid on
the coverslip, in which a $0.8\,mm$ wide groove cast with a 3D-printed comb
was created to hold the larvae during imaging, as described in ref [23].
Experiments were conducted with both objectives. High-resolution details of 3
dpf embryos were obtained with the 60X objective, while images of the full
vascular system of a 5 dpf embryo were collected in a single sweep of the
translation stage with the 20X objective. Images at 60X magnification were
recorded at $20\,Hz$ on the full 2048 by 1024 pixel active region, with a
total data throughput of approximately $42\times 10^{6}\,pixels/s$, while 20X
datasets were acquired at $50\,Hz$ on a 2048 by 768 pixel active region, with
a total data throughput of approximately $78\times 10^{6}\,pixels/s$. Three-
dimensional datasets were captured by moving the translation stage, with
horizontal spacing between frames of $0.4\,\mu m$ for 60X images, and of
$1\,\mu m$ for 20X images.
Figure 5: Zebrafish larvae imaging A, B. Representative raw images from the
detector during stack acquisition, with 60X and 20X objectives respectively.
Scale bars are $50\,\mu m$ in A and $100\,\mu m$ in B. C. Typical two-
dimensional lateral image after affine transform for a 60X image of the larva
tail vasculature. Scale bar is $50\,\mu m$. D. Typical two-dimensional lateral
image after affine transformation for a 20X image of the larva vasculature.
Scale bar is $50\,\mu m$. E, F. Depth-encoded projections of full datasets of the
eye and tail vasculature, respectively, in 3 dpf larvae, acquired with the 60X
objective at $20\,Hz$, scale bars are $100\,\mu m$. G. Maximum intensity
projection of an entire 5 dpf larva acquired with the 20X objective at
$50\,Hz$, scale bar is $200\,\mu m$.
Typical datasets are reported in figure 5. Images acquired with the 60X 1.1 NA
objective showed fine details in the vascular structure, both in the
relatively clear tail sections and in the more complex and optically dense
vasculature of the eye. The 20X objective produced, as expected, lower
brightness and lower resolution images, but its wide field of view enabled the
acquisition of images of the entire $3.5\,mm$ long larva in a single sweep of
the actuator in approximately one minute. Although the numerical aperture, and
therefore the image quality, is lower, the field of view and pixel throughput
of the microscope are comparable to those of state-of-the-art OPMs with custom
tertiary objectives [24].
## 4 Discussion
This manuscript introduces a novel approach to the visualization of tilted
planes in OPM, employing an ETL and the detector rolling shutter to replace
the reprojection setup of standard OPM. The proposed setup presents several
advantages compared to a standard OPM, namely:
* •
The system collects photons from the full aperture of the objective. While the
gain in terms of effective signal is reduced by the need to have an exposure
time shorter than the frame time to synchronize the rolling shutter with the
ETL scan, this still enables imaging at higher resolution than standard OPM
[25]. Similar performance can be achieved with discontinuities in the
refractive index between the secondary and tertiary objectives of standard
OPM, but the presented method is arguably simpler and more cost-effective.
* •
The proposed method allows great flexibility in the selection of the
microscope objective. Unlike in standard OPM, low-aperture objectives can be
used. Moreover, the objective can be rapidly switched during experiments,
without the need to realign the optical system. The only limitation to the
capability of operation with different objectives lies in the need to match
the diameter of the back aperture of the objective with the diameter of the
ETL. When the aperture diameter of the chosen objectives differs, as it did in
the case of the ones employed here, the setup should be designed to either
crop the pupil of the objectives with larger back apertures, or to underfill
the ETL aperture when using objectives with smaller back apertures. Since,
from equation 5, reducing the magnification of the objective pupil at the ETL
plane results in smaller axial shifts for the same lens current input, the
presented setup was designed to always use the full aperture of the ETL and
maximize the axial field of view when using the 60X objective, cropping the
aperture of the 20X objective as a tradeoff. Different experimental needs may
require different tradeoffs. Ideally, alternative remote focusing approaches
with faster actuators and longer ranges may allow the use of the full aperture
of any objective needed.
* •
The use of the same ETL for both excitation and fluorescence light ensures the
light-sheet is always at its thinnest point in the region of the exposed
rolling shutter. This greatly improves resolution at the axial edges of the
field of view, and provides imaging of wider fields of view when compared to
standard OPMs, or even traditional light-sheet microscopes with fixed
cylindrical lenses. While this feature could be fully exploited in the
presented data with the lower aperture objective, the narrow optimal axial
range of higher performance objectives does hinder this capability of the
system, due to the appearance of non-negligible spherical aberration when
focusing further from the objective’s focal plane. This position-dependent
aberration could be reduced through the implementation of multi-conjugate
aberration correction in the system [19].
## 5 Funding
The research has received funding from LASERLAB-EUROPE (grant agreement no.
871124, European Union’s Horizon 2020 research and innovation programme) and
the European Union’s NextGenerationEU Programme with the I-PHOQS
Infrastructure (IR0000016, ID D2B8D520, CUP B53C22001750006) “Integrated
infrastructure initiative in Photonic and Quantum Sciences.” The PhD student
Alessia Brix was supported by the PhD program in Experimental Medicine of the
University of Milan, Milan.
## 6 Data availability
All the raw data for the images presented in the manuscript, together with
software for affine transformation are available as an open dataset on Zenodo
[26].
## References
* [1] John M Girkin and Mariana Torres Carvalho. The light-sheet microscopy revolution. Journal of Optics, 20(5):053002, 2018.
* [2] Alessia Candeo, Fabrizio G Doccula, Gianluca Valentini, Andrea Bassi, and Alex Costa. Light sheet fluorescence microscopy quantifies calcium oscillations in root hairs of Arabidopsis thaliana. Plant and Cell Physiology, 58(7):1161–1172, 2017.
* [3] Alessia Candeo, Ilenia Sana, Eleonora Ferrari, Luigi Maiuri, Cosimo D’Andrea, Gianluca Valentini, and Andrea Bassi. Virtual unfolding of light sheet fluorescence microscopy dataset for quantitative analysis of the mouse intestine. Journal of Biomedical Optics, 21(5):056001–056001, 2016.
* [4] Adam K Glaser, Kevin W Bishop, Lindsey A Barner, Etsuo A Susaki, Shimpei I Kubota, Gan Gao, Robert B Serafin, Pooja Balaram, Emily Turschak, Philip R Nicovich, et al. A hybrid open-top light-sheet microscope for versatile multi-scale imaging of cleared tissues. Nature Methods, 19(5):613–619, 2022.
* [5] Christopher Dunsby. Optically sectioned imaging by oblique plane microscopy. Optics Express, 16(25):20306–20316, 2008.
* [6] Jeongmin Kim. Recent advances in oblique plane microscopy. Nanophotonics, 0, 2023.
* [7] Bin Yang, Xingye Chen, Yina Wang, Siyu Feng, Veronica Pessino, Nico Stuurman, Nathan H Cho, Karen W Cheng, Samuel J Lord, Linfeng Xu, et al. Epi-illumination SPIM for volumetric imaging with high spatial-temporal resolution. Nature Methods, 16(6):501–504, 2019.
* [8] Etai Sapoznik, Bo-Jui Chang, Jaewon Huh, Robert J Ju, Evgenia V Azarova, Theresa Pohlkamp, Erik S Welf, David Broadbent, Alexandre F Carisey, Samantha J Stehbens, et al. A versatile oblique plane microscope for large-scale and high-resolution imaging of subcellular dynamics. Elife, 9:e57681, 2020.
* [9] Maximilian Hoffmann and Benjamin Judkewitz. Diffractive oblique plane microscopy. Optica, 6(9):1166–1170, 2019.
* [10] Florian O Fahrbach, Fabian F Voigt, Benjamin Schmid, Fritjof Helmchen, and Jan Huisken. Rapid 3D light-sheet microscopy with a tunable lens. Optics Express, 21(18):21010–21026, 2013.
* [11] Eugen Baumgart and Ulrich Kubitscheck. Scanned light sheet microscopy with confocal slit detection. Optics Express, 20(19):21805–21814, 2012.
* [12] Kevin M Dean, Tonmoy Chakraborty, Stephan Daetwyler, Jinlong Lin, Gerard Garrelts, Ons M’Saad, Hannahmariam T Mekbib, Fabian F Voigt, Martina Schaettin, Esther T Stoeckli, et al. Isotropic imaging across spatial scales with axially swept light-sheet microscopy. Nature Protocols, 17(9):2025–2053, 2022.
* [13] Hiroki R Ueda, Hans-Ulrich Dodt, Pavel Osten, Michael N Economo, Jayaram Chandrashekar, and Philipp J Keller. Whole-brain profiling of cells and circuits in mammals by tissue clearing and light-sheet microscopy. Neuron, 106(3):369–387, 2020.
* [14] Jun Jiang, Dapeng Zhang, Steven Walker, Chenglin Gu, Ya Ke, Wing Ho Yung, and Shih-chi Chen. Fast 3-D temporal focusing microscopy using an electrically tunable lens. Optics Express, 23(19):24362–24368, 2015.
* [15] Terry Wright, Hugh Sparks, Carl Paterson, and Chris Dunsby. Video-rate remote refocusing through continuous oscillation of a membrane deformable mirror. Journal of Physics: Photonics, 3(4):045004, 2021.
* [16] M Bawart, A Jesacher, S Bernet, and M Ritsch-Marte. Remote focusing in confocal microscopy by means of a modified Alvarez lens. Journal of Microscopy, 271(3):337–344, 2018.
* [17] Edward J Botcherby, R Juškaitis, Martin J Booth, and Tony Wilson. An optical technique for remote focusing in microscopy. Optics Communications, 281(4):880–887, 2008.
* [18] Jeongmin Kim, Tongcang Li, Yuan Wang, and Xiang Zhang. Vectorial point spread function and optical transfer function in oblique plane imaging. Optics Express, 22(9):11140–11151, 2014.
* [19] T Furieri, A Bassi, and S Bonora. Large field of view aberrations correction with deformable lenses and multi conjugate adaptive optics. Journal of Biophotonics, page e202300104, 2023.
* [20] Dean Wilding, Paolo Pozzi, Oleg Soloviev, Gleb Vdovin, and Michel Verhaegen. Adaptive illumination based on direct wavefront sensing in a light-sheet fluorescence microscope. Optics Express, 24(22):24896–24906, 2016.
* [21] Suk-Won Jin, Dimitris Beis, Tracy Mitchell, Jau-Nian Chen, and Didier Y. R. Stainier. Cellular and molecular analyses of vascular tube and lumen formation in zebrafish. Development, 132(23):5199–5209, 12 2005.
* [22] Johnny Karlsson, Jonas Von Hofsten, and Per-Erik Olsson. Generating transparent zebrafish: a refined method to improve detection of gene expression during embryonic development. Marine Biotechnology, 3:522–527, 2001.
* [23] Abdullah R Ahmed, Alessia Candeo, Sofia D’Abrantes, Sarah R Needham, Rahul B Yadav, Stanley W Botchway, and Anthony W Parker. Directly imaging the localisation and photosensitization properties of the pan-mTOR inhibitor, AZD2014, in living cancer cells. Journal of Photochemistry and Photobiology B: Biology, 213:112055, 2020.
* [24] Bin Yang, Merlin Lange, Alfred Millett-Sikking, Xiang Zhao, Jordão Bragantini, Shruthi VijayKumar, Mason Kamb, Rafael Gómez-Sjöberg, Ahmet Can Solak, Wanpeng Wang, et al. DaXi—high-resolution, large imaging volume and multi-view single-objective light-sheet microscopy. Nature Methods, 19(4):461–469, 2022.
* [25] Venkatakaushik Voleti, Kripa B Patel, Wenze Li, Citlali Perez Campos, Srinidhi Bharadwaj, Hang Yu, Caitlin Ford, Malte J Casper, Richard Wenwei Yan, Wenxuan Liang, et al. Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Nature Methods, 16(10):1054–1062, 2019.
* [26] Paolo Pozzi. Raw data from the manuscript “Full-aperture extended-depth oblique plane microscopy through dynamic remote focusing”, https://zenodo.org/doi/10.5281/zenodo.10036881, 2023.
# Optical control protocols for high-fidelity spin rotations of single SiV-
and SnV- centers in diamond
Evangelia Takou Department of Physics, Virginia Polytechnic Institute and
State University, 24061 Blacksburg, VA, USA Sophia E. Economou Department of
Physics, Virginia Polytechnic Institute and State University, 24061
Blacksburg, VA, USA
###### Abstract
Silicon-vacancy and tin-vacancy defects in diamond are of interest as
alternative qubits to the NV center due to their superior optical properties.
While the availability of optical transitions in these defects is one of their
assets, high-fidelity optical coherent control has not been demonstrated.
Here, we design novel optical control schemes tailored to these defects. We
evaluate the performance of arbitrary single-qubit rotations of the electron
spin qubit both in the absence and presence of an external magnetic field, by
taking into account both coherent and incoherent errors. We find that
rotation fidelities in excess of $98.0\%$ ($T=4$ K) and $99.71\%$ ($T=6$ K) can be
achieved for the SiV- and SnV-, respectively, in the presence of realistic
relaxation and leakage errors.
## I Introduction
Over the past decades, color centers in diamond have been investigated for
their potential use as the hardware for solid-state quantum processing
applications. The most well-known defect in a diamond host is the NV center.
Its most prominent features are the long coherence times of the NV electron
spin [1] and the room temperature operation [2, 3]. Practical experimental
demonstrations regarding the NV center involve readout [4, 5, 6, 7],
initialization [8, 9], entanglement generation schemes, as well as control of
the surrounding nuclear bath [10, 11]. However, optical control usually
relies on the application of external perturbations, such as strain, electric,
or magnetic fields, to lift the ground-state degeneracy or to allow
spin-flipping transitions [12, 13]. In addition, NV centers have poor optical
properties, with a large phonon sideband and a low probability of emission
into the zero-phonon line (ZPL). While the ZPL emission can be boosted through
coupling to cavities and waveguides [14], it still limits the performance of
entanglement generation schemes. Moreover, the sensitivity of NV centers to
charge noise introduces spectral diffusion to the optical transitions [15].
As a result, alternative defects in diamond are being explored for quantum
information applications. Two that stand out are the negatively charged
silicon vacancy (SiV-) [16, 17, 18, 19, 20, 21, 22, 23, 24] and the
newly emerging tin vacancy (SnV-) color center [25, 26, 27, 28, 29, 30].
These spin $S=1/2$ systems have excellent optical properties, such as a narrow
linewidth of the ZPL transition, which comprises $70-80\%$ of the emitted
light [31], small phonon sidebands, and spectral stability [32, 33]. They
belong to the $D_{3\text{d}}$ point group [16] and display inversion symmetry,
which renders them robust to charge fluctuations since, unlike the NV, they
lack a permanent electric dipole moment to first order [34]. Consequently, they
are excellent indistinguishable single photon sources [35, 36], and they are
immune to noise emerging from integration into photonic devices [37].
Furthermore, due to the large ground state splitting of the SnV- defect [38,
25], no spin mixing is observed in the presence of external magnetic fields
[25], which can lead to improved optical control. In addition, nuclear spin
control has been achieved for SiV- in nano-waveguides [39, 40]. Initialization
and readout of the SiV- have also been demonstrated in [41, 42], while
coherent control and single qubit rotations have been shown in [43, 44, 45,
46, 47]. In [46], the control of the electronic spin is achieved using MW
pulses, resulting in slower rotations. This approach also requires microwave
frequency generators and amplifiers, thus increasing the experimental demands.
In [45], full SU(2) spin rotations were demonstrated via Ramsey interference
generated by two temporally separated pulses. Each of these pulses originated
from a single broadband laser that addressed both transitions of a
$\Lambda$-system. To avoid driving unwanted transitions, a far off-resonant
Raman pulse was used, which restricted the achievable rotation angles due to
limitations of the laser power and did not entirely mitigate the decoherence
caused by the excitation of an unwanted excited state. Moreover, the fidelity of
the rotations was not quantified, nor were the error mechanisms investigated in
detail.
So far, theoretical models of optical control have usually involved
approximations under which the off-resonant transitions are ignored. This is a
good approximation for longer gate durations, i.e., narrowband pulses. For
faster pulses, which are needed in these systems to ensure the control takes
place well within the optical coherence and relaxation times, such off-
resonant transitions cannot be ignored. Another issue typically present in
defect systems is orbital mixing of the states caused by the Jahn-Teller
effect or crystal strain. In the case of $\Lambda$-system schemes, each
transition dipole couples to both driving fields, leading to additional errors
during the optical control. Thus, these error mechanisms present in many
defects give rise to the need for high-fidelity control techniques tailored to
these systems.
In this paper, we address the aforementioned challenges by developing all-
optical control protocols for the SiV- and SnV- color centers in diamond. We
start by demonstrating that existing approaches, in particular coherent
population trapping, do not suffice for high-fidelity gates. We show how to
eliminate cross-talk errors and reduce the number of unwanted leakage
transitions by appropriately selecting the polarization of the lasers. We
further optimize the gates by analyzing the full dynamics of the systems, and
we identify the coherent errors as well as incoherent errors that arise due to
unwanted excitations of the multi-level electronic structures. We resolve the
leakage through two corrective methods, one available in the literature and
one developed here, that allow for faster implementation of the rotations
without compromising the gate fidelity.
This paper is organized as follows. In Sec. II we describe the two main
sources of errors of the optical control; the cross-talk and leakage. In Sec.
III, we discuss the cross-talk issue and provide an effective and simple
solution based on polarization schemes. In Sec. IV we analyze two approaches
for leakage mitigation. Finally, in Secs. V and VI we apply our protocols to
the SiV- and SnV- defects, respectively, and quantify the fidelity for various
rotation angles and pulse durations.
## II Overview of the problem and error mechanisms
A well-known technique that provides fast all-optical control of
$\Lambda$-systems is coherent population trapping (CPT). In CPT, the two
transitions of a $\Lambda$-system are driven with two laser fields $E_{1}$ and
$E_{2}$, each acting on a distinct transition, as shown in Fig. 1(a), which
satisfy the two-photon resonance condition. CPT is based on the destructive
interference of the quantum processes driven by the different fields, which
leads to trapping of the population into a dark state. In this so-called dark-
bright frame, the dark state is completely decoupled from the dynamics of the
other two levels in the system [Fig. 1(b)]. The transformation to the dark-
bright frame defines the rotation axis of the qubit; by combining CPT with
hyperbolic secant pulses, we can design arbitrary single-qubit rotations, as
explained in Appendix A and in Refs. [48, 49].
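To make the mechanism concrete, the sketch below integrates the dark-bright-frame $\Lambda$-system of Fig. 1(b) under a sech pulse; all parameters are illustrative placeholders, not the values used in this paper. With $\Omega_{\text{eff}}$ equal to the bandwidth (the transitionless-pulse condition of Appendix A), the population returns to the bright state and the rotation appears as a relative dark-bright phase, equal to $\pi$ at zero two-photon detuning:

```python
import numpy as np
from scipy.integrate import solve_ivp

sigma = 1.0            # sech bandwidth (arbitrary units)
delta = 0.0            # two-photon detuning; zero targets R_x(pi)
omega_eff = sigma      # transitionless-pulse condition (2*pi sech pulse)
T = 40.0               # evolution window containing the pulse

def rhs(t, psi):
    # Basis ordering: (dark, bright, excited); the dark state is decoupled.
    f = 1.0 / np.cosh(sigma * (t - T / 2))
    H = np.array([[0, 0, 0],
                  [0, 0, omega_eff * f],
                  [0, omega_eff * f, -delta]], dtype=complex)
    return -1j * (H @ psi)

psi0 = np.array([1, 1, 0], dtype=complex) / np.sqrt(2)  # equal superposition
sol = solve_ivp(rhs, (0.0, T), psi0, max_step=0.1, rtol=1e-10, atol=1e-10)
psi_T = sol.y[:, -1]
print(abs(psi_T[2]) ** 2)              # residual excited population, ~0
print(np.angle(psi_T[1] / psi_T[0]))   # dark-bright relative phase, ~pi here
```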
In an ideal CPT scheme, the distinct couplings can be realized by either the
energy separation of the ground states or polarization selection rules that
ensure each transition is accessed by a single laser. However, energy
separation alone does not guarantee negligible cross-talk errors for all gate
durations, and the approximation of distinguishable couplings breaks down for
broadband pulses. Unfortunately, the two transition dipoles are not orthogonal
for the SiV- and the SnV- systems, leading to the cross talk (dashed arrows)
shown in Fig. 1(c). The source of the cross talk and our solution to this
problem will be explained in Sec. III. For now, we stress that this setting is
unavoidable if each laser field is chosen according to the polarization
selection rules, i.e., such that its coupling to one of the two
$\Lambda$-transitions is maximized. Henceforth, we refer to this approach as
“naive”.
In addition to the cross-talk, each laser field removes population from the
$\Lambda$-subspace, thus inducing leakage errors in the control. As an example,
we show the leakage transitions of the SiV- system in Fig. 1(e), which occur
with an off-resonant energy cost $\delta_{\text{es}}$. In the dark-bright
frame, these errors translate into couplings between the dark/bright states
and the unwanted excited level, $|\text{C}\rangle$ [Fig. 1(f)].
In the following sections, we propose schemes to resolve the cross-talk by
polarization tuning of the lasers, as well as to counteract the leakage errors
via pulse modulation. These protocols are analyzed in Sec. III and Sec. IV
respectively. The readers who are more interested in the numerical results
could directly proceed with Sec. V (for the SiV-) and Sec. VI (for the SnV-).
Figure 1: Summary of the error mechanisms for the SiV- system. (a) Ideal CPT
scheme performed with two fields $E_{1}$ and $E_{2}$ acting on distinct
transitions. The ground state splitting is denoted as $\delta_{\text{gs}}$.
(b) Transformation to the dark-bright basis for the case of (a) successfully
decouples the dark state $|\text{d}\rangle$ and transitions are driven between
the bright $|\text{b}\rangle$ and excited state $|\text{A}\rangle$. The
effective Rabi frequency is expressed in terms of the Rabi frequencies in the
lab frame, i.e.
$\Omega_{\text{eff}}=\sqrt{|\Omega_{1}|^{2}+|\Omega_{2}|^{2}}$. (c) Cross-talk
within the $\Lambda$-system leads to off-resonant errors (dashed green and
blue arrows), that oscillate with an additional energy shift,
$\delta_{\text{gs}}$. (d) In the presence of cross-talk couplings as shown in
(c), the dark state is not completely decoupled. (e) Both laser fields drive
the leakage transitions C1 and C4, introducing errors to the optical control.
$\delta_{\text{es}}$ is the excited states splitting. (f) In addition to the
cross-talk shown in (d), each laser drives the
$|\text{b}\rangle\leftrightarrow|\text{C}\rangle$ and
$|\text{d}\rangle\leftrightarrow|\text{C}\rangle$ transitions in the db-frame.
## III Addressing cross-talk errors
One advantage of SiV- and SnV- defects is that $\Lambda$-schemes can be
realized even at zero-magnetic fields, which simplifies the dynamics and
facilitates experimental implementations of optical control. In Fig. 2 we show
the electronic structure for the SiV- and SnV- at zero magnetic fields; each
ground- and excited-state manifold is pairwise degenerate. We follow the
literature convention of labeling the ground states as $|1\rangle-|4\rangle$,
and the excited states as $|\text{A}\rangle-|\text{D}\rangle$ (for more
precise labeling we use the eigenstates of the spin-orbit coupling,
$|e_{\pm}\rangle$, for the orbital part of the states).
Based on group theory, the allowed optical transitions can be accessed by
either linear $z$-polarization, which drives transitions between orbital
states with the same symmetry, or by circular polarization, which drives
transitions between the states
$|e_{\text{g},\pm}\rangle\leftrightarrow|e_{\text{u},\mp}\rangle$ [16].
However, a small orbital mixing of the states caused by the Jahn-Teller effect
[16] (or by crystal strain) introduces non-zero $z$-dipoles ($x,y$ dipoles) to
the transitions mainly accessed by $\sigma^{\pm}$ ($z$) polarization.
Consequently, the choice of polarizations as dictated by selection rules would
give rise to a cross-talk, i.e. coupling of each laser field to both
$\Lambda$-transitions. In such a setting, the dark state is not completely
decoupled from the dynamics, as shown in Fig. 1(d).
Figure 2: Electronic structure for the negatively charged silicon vacancy
(SiV-) (a), and for the negatively charged tin vacancy (SnV-) in diamond (b), for zero
magnetic fields. We label the states using the eigenstates of the spin-orbit
coupling (i.e. $|e_{\pm}\rangle$) which is the largest perturbation term, but
in the text, we use the notation $|1\rangle-|4\rangle$ for the ground states
and $|\text{A}\rangle-|\text{D}\rangle$ for the excited states.
The contribution of the off-resonant cross-talk is governed by the ground-state
splitting of the qubit states. For the SnV- system, the errors average
out more efficiently due to its large ground-state splitting
($\delta_{\text{gs}}^{\text{SnV${}^{-}$}}=825$ GHz). These errors, however,
are not negligible for the SiV- (for which
$\delta_{\text{gs}}^{\text{SiV${}^{-}$}}=50$ GHz). Nevertheless, when
broadband pulses are considered, the cross-talk becomes the central source of
infidelity for both defects, since off-resonant transitions are more strongly
coupled.
We propose a simple cross-talk elimination scheme that is achieved by tuning
the laser field polarizations. We consider as an example the SiV- system and
express the direction and polarization of the laser fields in the defect
coordinate frame. In an experimental setup, most diamond samples are cut with
$[0~{}0~{}1]$ as the surface normal. Thus, the polarization directions that we
express in the internal coordinate frame would require a non-zero angle of
incidence on the sample.
We assume two lasers of $xz$-polarization and $y$-propagation, where the
former drives the A1 transition and the latter the A4 transition. To enforce the
orthogonality of each laser to one of the $\Lambda$-transitions, we require
that its polarization is orthogonal to the corresponding dipole,
$\textbf{d}_{ij}$. In this particular choice of $\Lambda$-system, we require
$\textbf{d}_{\text{A1}}\cdot\textbf{E}_{2}=0$ and
$\textbf{d}_{\text{A4}}\cdot\textbf{E}_{1}=0$, from which we find that the
electric fields need to be defined as:
$\textbf{E}_{1}=E_{01}\left(\hat{\textbf{x}}-\frac{\langle p_{x}\rangle_{\text{A4}}}{\langle p_{z}\rangle_{\text{A4}}}\hat{\textbf{z}}\right)e^{i(k_{1}y-\omega_{1}t)}+\text{c.c.},$ (1)

$\textbf{E}_{2}=E_{02}\left(\hat{\textbf{x}}-\frac{\langle p_{x}\rangle_{\text{A1}}}{\langle p_{z}\rangle_{\text{A1}}}\hat{\textbf{z}}\right)e^{i(k_{2}y-\omega_{2}t)}+\text{c.c.}$ (2)
Here $\langle p_{k}\rangle_{ij}=\langle\psi_{i}|p_{k}|\psi_{j}\rangle$ (with
$k\in\\{x,y,z\\}$) is the transition dipole overlap that can be calculated
according to group theory. The polarizations of the lasers that satisfy the
orthogonality conditions are not unique; we choose to restrict the
polarization vectors in the $xz$ plane, in which case the polarizations are
uniquely determined. The definitions of Eq. (1) and Eq. (2) can be generalized
easily to other choices of $\Lambda$-systems or other polarization directions.
For the SnV- system, we chose $yz$-polarization instead, and the reasons for
this choice are explained in Sec. VI.1 and in Appendix F.
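As an illustration of Eqs. (1) and (2), the sketch below builds an $xz$-plane polarization orthogonal to a given transition dipole; the dipole components are placeholders, not the group-theory values of the defects:

```python
import numpy as np

def xz_polarization(px_over_pz):
    # Unit vector in the xz-plane orthogonal to a dipole with ratio px/pz.
    e = np.array([1.0, 0.0, -px_over_pz])
    return e / np.linalg.norm(e)

d_A1 = np.array([0.3, 0.0, 0.9])   # hypothetical A1 transition dipole
d_A4 = np.array([0.5, 0.0, 0.7])   # hypothetical A4 transition dipole
E2 = xz_polarization(d_A1[0] / d_A1[2])    # E2 must not couple to A1
E1 = xz_polarization(d_A4[0] / d_A4[2])    # E1 must not couple to A4
print(np.dot(d_A1, E2), np.dot(d_A4, E1))  # both ~0
```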
Throughout the paper, we combine the sech-based CPT scheme with E-field
polarizations that satisfy the orthogonality conditions to design arbitrary
gates free from cross-talk errors.
## IV Corrective methods for leakage suppression
### IV.1 General strategy of Magnus expansion
As we mentioned in Sec. III, we resolve the cross-talk issue of the
$\Lambda$-system by redefining the polarization of the laser fields. However,
leakage errors reduce the gate fidelity of fast optical control schemes. To
counteract this problem, we use a Magnus-based expansion approach developed in
Ref. [50]. Here we outline the basic steps of the method, and we provide
further details about the procedure we follow in Appendix F.
Let us consider a generic Hamiltonian $H(t)$ given by:
$H(t)=H_{0}(t)+\epsilon V(t),$ (3)
where $H_{0}(t)$ implements our analytically solvable target gate, and $V(t)$
introduces an error to the dynamics generated by $H_{0}(t)$. The error term is
assumed to be perturbative, as it contains off-resonant terms, oscillating
faster than $H_{0}$. To mitigate these errors, we additionally consider a
control Hamiltonian $W(t)$, such that the total Hamiltonian is modified into
$\bar{H}(t)=H_{0}(t)+\epsilon V(t)+W(t).$ (4)
Further, the control Hamiltonian is expanded in a power series according to:
$W(t)=\sum_{k=0}^{\infty}\epsilon^{k}W^{(k)}(t).$ (5)
By going into the interaction picture of $H_{0}(t)$, the Hamiltonian
transforms into $\bar{H}_{\text{I}}(t)=\epsilon
V_{\text{I}}(t)+W_{\text{I}}(t)$, and the total evolution operator becomes
$U(t)=U_{0}(t)U_{\text{I}}(t),$ (6)
where $U_{0}$ is the ideal gate, and $U_{\text{I}}(t)$ is generated by the
error and control Hamiltonian. The implementation of the ideal gate is
achieved if $U_{\text{I}}(T)=\textbf{1}$ (where $T$ is the gate time), such
that $U(T)=U_{0}(T)$, based on Eq. (6). To this end, the evolution operator
$U_{\text{I}}(t)$ is expanded in a Magnus series, and as was shown in Ref.
[50], the solutions for the control are obtained iteratively via the equation:
$\epsilon^{n}\int_{0}^{T}dt^{\prime}W_{\text{I}}^{(n)}(t^{\prime})=-i\sum_{k=1}^{n}\Omega_{k}^{(n-1)}(T),$
(7)
where $\Omega_{k}$ is the $k$-th Magnus expansion order. In this work, we
focus on first order corrections, i.e. we truncate the Magnus series to the
first order, which leads to the equation:
$\epsilon\int_{0}^{T}dtV_{\text{I}}(t)=-\int_{0}^{T}dtW_{\text{I}}^{(1)}(t).$
(8)
Equation (8) can be reformulated into a linear system of equations via the
decomposition of the error and control parts into an operator basis, which
enables us to rewrite it as [50]:
$B\textbf{x}^{(1)}=\textbf{y}^{(1)},$ (9)
where $B$ is a matrix that encodes the dynamics of $H_{0}$, $\textbf{y}^{(1)}$
contains the error terms, and $\textbf{x}^{(1)}$ is a vector that contains the
solutions at the first order of the control expansion.
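Since Eq. (9) is generally non-square, a least-squares solve is a natural way to obtain the coefficients; a minimal sketch with placeholder $B$ and $\textbf{y}^{(1)}$ (in practice these are assembled from the interaction-frame dynamics of $H_{0}$ and the error terms):

```python
import numpy as np

rng = np.random.default_rng(0)
n_ops, n_coeffs = 8, 4   # basis operators vs. Fourier coefficients (assumed sizes)
B = rng.normal(size=(n_ops, n_coeffs))   # placeholder dynamics matrix
y = rng.normal(size=n_ops)               # placeholder error vector y^(1)
# Least-squares solution x^(1) of B x = y, robust to non-square B.
x, *_ = np.linalg.lstsq(B, y, rcond=None)
```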
An essential requirement of the Magnus scheme is that the control Hamiltonian
is decomposed over at least the same operators as the errors, in the final
interaction frame of $H_{0}$. However, it is not a strict requirement that the
control pulse has access to all error transitions in the initial frame. For
both defect systems, the leakage transitions that remove the population
outside of the $\Lambda$-subspace correspond to the C transitions. To cancel
out the leakage in both cases, we need to modify only one of the original sech
pulses that drive the $\Lambda$-transitions.
As we already mentioned, the control Hamiltonian is expanded in a power
series. In our case, we consider the total envelope:
$g^{(n)}(t)=g_{1}^{(n)}(t)\cos(\omega_{\text{d}}t)+g_{2}^{(n)}(t)\sin(\omega_{\text{d}}t),$
(10)
composed of two $\pi/2$-shifted envelopes $g_{l}^{(n)}(t)$, which are expanded
in Fourier series. In particular, we use only the cosine terms with
$g_{l}^{(n)}(t)$ given by:
$g_{l}^{(n)}(t)=\sum_{k}c_{l,k}^{(n)}\left(1-\cos\left(\frac{2\pi
tk}{T}\right)\right),$ (11)
where $n$ denotes the Magnus expansion order and $k$ denotes the Fourier
expansion order. We have also fixed $g_{l}^{(n)}(0)=g_{l}^{(n)}(T)=0$ such
that the corrective pulse is zero at the beginning and end of the evolution.
Throughout this paper, we always truncate the Magnus expansion to the first
order, i.e. we set $n=1$.
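As a concrete illustration of Eqs. (10) and (11), the corrective envelope can be assembled as follows; the Fourier coefficients here are placeholders standing in for the solutions of the linear system of Eq. (9):

```python
import numpy as np

def control_envelope(t, T, omega_d, c1, c2):
    # g(t) = g1(t) cos(w_d t) + g2(t) sin(w_d t), with cosine-Fourier
    # envelopes that vanish at t = 0 and t = T [Eqs. (10)-(11)].
    g1 = sum(c * (1 - np.cos(2 * np.pi * k * t / T)) for k, c in enumerate(c1, start=1))
    g2 = sum(c * (1 - np.cos(2 * np.pi * k * t / T)) for k, c in enumerate(c2, start=1))
    return g1 * np.cos(omega_d * t) + g2 * np.sin(omega_d * t)

T = 1.0
t = np.linspace(0.0, T, 1000)
g = control_envelope(t, T, omega_d=50.0, c1=[0.1, -0.05], c2=[0.02, 0.0])
```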
The driving frequency of the control $\omega_{\text{d}}$ is another free
parameter that can be tuned for the most effective leakage
cancellation. Nonetheless, introducing and modulating an additional laser field
is more challenging experimentally. To that end, we restrict the control pulse
to have the same frequency as the original pulse that we modulate.
### IV.2 DRAG framework
An alternative route to leakage suppression is based on the adiabatic removal
of errors, which we analyze in this subsection. The DRAG technique is a widely
known method, extensively used for correcting leakage errors in
superconducting qubits [51, 52, 53]. Based on the DRAG formalism, analytically
derived controls are obtained via a time-dependent Schrieffer-Wolff
transformation. The generator of the transformation is $A(t)=e^{-iS(t)}$,
where $S(t)$ is a Hermitian operator, and leads to the effective DRAG
Hamiltonian
$H_{\text{D}}=A^{\dagger}H_{\text{db}}A+i\dot{A}^{\dagger}A.$ (12)
The dark-bright frame Hamiltonian is given by:
$H_{\text{db}}=(\Omega_{\text{eff}}f(t)\sigma_{\text{be}}+\text{H.c.})-\Delta\sigma_{\text{ee}},$
(13)
where $\Delta$ is the two-photon detuning and
$f(t)=\text{sech}(\sigma(t-t_{0}))$. Also, $|\text{b}\rangle$ is the bright
state and $|\text{e}\rangle$ the excited state. By requiring that the frame
transformation vanishes at the boundaries (i.e. $A(0)=A(T)=\textbf{1}$), the
target evolution in the initial (in this case, dark-bright) and DRAG frames
remains the same at the end of the pulse. To reduce the leakage errors, one
needs to find an appropriate adiabatic transformation, $S(t)$ that respects
the boundary conditions. Besides this restriction, $S(t)$ can be an arbitrary
Hermitian operator, which allows for the suppression of leakage errors.
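Equation (12) can be verified numerically for any candidate generator; the toy sketch below (two levels, placeholder $S(t)$ and $H_{\text{db}}$, not the paper's operators) builds the DRAG-frame Hamiltonian using a finite-difference derivative of $A(t)$:

```python
import numpy as np
from scipy.linalg import expm

def drag_hamiltonian(H_db, S, t, dt=1e-6):
    # A(t) = exp(-i S(t)); H_D = A^dag H_db A + i (dA/dt)^dag A [Eq. (12)],
    # with dA/dt taken by central finite differences.
    A = expm(-1j * S(t))
    dA = (expm(-1j * S(t + dt)) - expm(-1j * S(t - dt))) / (2 * dt)
    return A.conj().T @ H_db(t) @ A + 1j * dA.conj().T @ A

sx = np.array([[0, 1], [1, 0]], dtype=complex)
H_db = lambda t: (1 / np.cosh(10 * (t - 0.5))) * sx   # toy sech drive
S = lambda t: 0.1 * np.sin(np.pi * t) ** 2 * sx       # vanishes at t = 0, 1
H_D = drag_hamiltonian(H_db, S, t=0.3)
```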
The original DRAG scheme is designed to cancel out leakage errors of a ladder-
type system (e.g. transmon), which are caused by transitions between
consecutive levels. In our work, we extend this formalism to a
$\Lambda$-system. This is qualitatively different, since in our case the
population is removed from the system via transitions that link the ground
(qubit) states to an unwanted excited level. Moreover, the complexity is
increased, since each leakage transition is driven by both laser fields used
for the CPT control.
In the DRAG framework, $H_{\text{D}}$ has to be constrained in a way that it
implements an ideal evolution dictated by a target Hamiltonian. In our case,
the target Hamiltonian as defined in the CPT frame has the form:
$H_{\text{target}}=\frac{h_{x}^{(0)}(t)}{2}\sigma_{x,\text{be}}+h_{z}^{(0)}(\sigma_{\text{bb}}-\sigma_{\text{ee}}),$
(14)
where $|\text{d}\rangle$ ($|\text{b}\rangle$) is the dark (bright) state,
$|\text{e}\rangle$ is the excited state of the $\Lambda$-system
($|\text{e}\rangle=|\text{A}\rangle$), and $h_{z}^{(0)}$ is the two-photon
detuning. Also, we have defined $\sigma_{x,ij}=|i\rangle\langle
j|+|j\rangle\langle i|$ and $\sigma_{y,ij}=-i(|i\rangle\langle
j|-|j\rangle\langle i|)$. At this point, we should emphasize that our
treatment is different from Ref. [51], where the leakage-robust gates are
designed according to a target qubit-Hamiltonian in the rotating frame.
Instead, to reduce the leakage errors from the qubit subspace
($|\text{d}\rangle$, $|\text{b}\rangle$), we formulate an indirect treatment
which involves the bright-excited subspace.
Based on the target Hamiltonian of Eq. (14), our target constraints are:
$\displaystyle h_{x}^{(n)}$ $\displaystyle=$
$\displaystyle\text{Tr}[H_{\text{D}}^{(n)}\sigma_{x,\text{bA}}],$ (15)
$\displaystyle h_{y}^{(n)}$ $\displaystyle=$
$\displaystyle\text{Tr}[H_{\text{D}}^{(n)}\sigma_{y,\text{bA}}]=0,$ (16)
$\displaystyle h_{z}^{(n)}$ $\displaystyle=$
$\displaystyle\text{Tr}[H_{\text{D}}^{(n)}(\sigma_{\text{bb}}-\sigma_{\text{AA}})].$
(17)
The zero-th order target constraints ensure that
$H_{\text{D}}^{(0)}=H_{\text{target}}$. To satisfy the decoupling of the
$\Lambda$-system from the $|\text{C}\rangle$ leakage subspace we require the
following decoupling constraints, with $k\in\\{x,y\\}$:
$\displaystyle\text{Tr}[H_{\text{D}}^{(n)}\sigma_{k,\text{dC}}]$
$\displaystyle=$ $\displaystyle 0,$ (18)
$\displaystyle\text{Tr}[H_{\text{D}}^{(n)}\sigma_{k,\text{bC}}]$
$\displaystyle=$ $\displaystyle 0,$ (19)
$\displaystyle\text{Tr}[H_{\text{D}}^{(n)}\sigma_{k,\text{AC}}]$
$\displaystyle=$ $\displaystyle 0,$ (20)
as well as:
$\text{Tr}[H_{\text{D}}^{(n)}\sigma_{k,\text{dA}}]=0,$ (21)
which ensures that in the DRAG frame there is no transition between the dark
and excited states. Intuitively, for any order of $H_{\text{D}}^{(n)}$ with
$n\geq 0$, the elements of the DRAG Hamiltonian that do not correspond to the
target subspace should be zero. To obtain the $n$-th order DRAG Hamiltonian,
we expand both $S(t)$ and $H_{\text{db}}$ in power series. The appropriate
pulse modifications to the initial fields are obtained by satisfying the
constraints consistently. In the particular case of $R_{x}(\pi)$ rotations
(where the two-photon detuning is zero), the modulation of one laser vanishes,
which in terms of experimental requirements matches the Magnus scheme. More
details about the analytic derivation of the corrective envelopes are provided
in Appendix G.
## V Control of SiV- system
### V.1 Zero magnetic fields
We begin by testing our protocols for the SiV- system at $B=0$ T. The bright
transitions at zero magnetic fields are the A1, C1, B2, D2, B3, D3, A4 and C4
transitions. We consider the $\Lambda$-system formed by the states
$|1\rangle$, $|4\rangle$ and $|\text{A}\rangle$ shown in Fig. 3(b). By
choosing the $|\text{A}\rangle$ state to be the excited state of our
$\Lambda$-system, we avoid downward orbital relaxations (which are more likely
than the upward ones) that are present in the higher excited manifold.
Figure 3: The selection-rules-based E-field polarizations of (a) lead to the
cross-talk and four leakage transitions shown in (b). (c,d) The redefined
polarizations in the $xz$-plane eliminate the cross-talk of (b). Figure 4:
Optical control of SiV- at $B=0$: Infidelity of $R_{x}(\phi)$ (a), and optimal
gate time (b), corresponding to four different protocols. The pulse envelopes
for $R_{x}(-\pi)$ rotations are shown in (c), (d), (e), (f) and the pulse
envelopes for $R_{x}(-\pi/2)$ are depicted in (g), (h), (i), (j). Gray: naive,
red: orthogonal, purple: Magnus and blue: DRAG.
We assume no initialization errors and a temperature of $T=4$ K. At this
temperature, according to Ref. [41], the spin relaxation time is
$T_{1,\text{spin}}=2.4$ ms, the orbital relaxation time is $T_{1,\text{orbit}}=38$
ns, and the dephasing time $T_{2}^{*}=35$ ns. We also set the optical lifetime
to be $\tau=4.5$ ns [45].
We start from the simplest approach to controlling the electronic spin, which
we refer to as naive; it is based on the CPT control of an ideal
$\Lambda$-system. It utilizes two laser fields chosen according to the
polarization selection rules: in this setup the A1 transition is driven by
$z$-polarized light, and the A4 transition by circularly polarized light [see
Figs. 3(a), (b)]. Notice that we define the polarizations in the defect coordinate frame.
This choice of polarizations is far from optimal, since it introduces many
off-resonant errors. In particular, in Fig. 3(b) we show all the possible
transitions driven by the naive laser fields. The main error is the cross-talk
within the $\Lambda$-system, as this involves a transition that is detuned by
only $\delta_{\text{gs}}=50$ GHz, which is the ground state splitting. Each
laser field additionally drives transitions C1 and C4 (off-resonant by
$\delta_{\text{es}}=260$ GHz), introducing leakage outside of the
$\Lambda$-subspace.
In the naive approach, the leakage issue can be partially resolved by
considering more narrowband pulses. Nevertheless, the relaxation mechanisms
for longer pulses are detrimental to the optical control of the SiV-, leading
to enhanced gate errors. In Fig. 4(a), we show (gray curve) the infidelity of
arbitrary rotations, $R_{x}(\phi)$, and in Fig. 4(b) the optimal gate time.
The naive approach is the slowest of all our proposed protocols, since it
balances the trade-off between leakage and cross-talk errors with relaxation-
induced effects. The sech pulses for the optimal implementation of
$R_{x}(-\pi)$ and $R_{x}(-\pi/2)$ by the naive approach, are shown in the
panels of Fig. 4(c) and Fig. 4(g) respectively.
As mentioned in Sec. III, the cross-talk can be completely eliminated by
redefining the polarization of the two driving fields. In particular, we
choose the polarization vectors to be in the $xz$-plane as shown in Fig. 3(c).
We refer to this approach as orthogonal, shown with the red curve in Fig. 4.
This protocol allows us to use more broadband pulses (still well-protected
from leakage errors), while simultaneously reducing the effect of relaxations.
Consequently, the pulses in the orthogonal scheme can be up to four times
faster compared to the naive approach [see Fig. 4(b)], and the rotations
display lower infidelity [see Fig. 4(a)]. In Fig. 4(d) and Fig. 4(h) we show
the pulse envelopes of the orthogonal method for the optimal implementation of
$R_{x}(-\pi)$ and $R_{x}(-\pi/2)$ rotations, respectively.
The orthogonal approach sets the upper bound to the gate fidelities in the
absence of additional pulse shaping. However, the optimal gate time of this
method still lies within the regime where relaxation errors have non-zero
contribution. It is also apparent that by reducing the gate time, the leakage
errors gradually become more important. To mitigate the unwanted couplings to
the upper excited manifold we use the corrective techniques we described in
Sec. IV, and combine them with the orthogonal scheme to avoid the cross-talk
within the $\Lambda$-system. To minimize the experimental overhead, we do not
introduce new pulse envelopes, but instead modify the initial laser fields.
The first corrective method that we employ is the Magnus-based scheme [50]. In
our protocol, we modify only one of the initial laser fields, which in this
case is the laser field $E_{1}$ that drives the A1 transition, while the laser
field $E_{2}$ that drives the A4 transition remains intact. Both initial
fields introduce leakage outside of the $\Lambda$-system, via the excitation
of the C1 and C4 transitions [each driven by both lasers as shown in Fig.
3(d)]. The modulated pulse has additional cosine envelopes, and the Fourier
coefficients are obtained by solving a linear system of equations that we
describe in Appendix F. To optimize the performance of the Magnus scheme, we
search for solutions that reduce the gate error for different Fourier series
truncations and gate time intervals, while keeping the Magnus truncation to the
first order.
We show the infidelity of the Magnus protocol in Fig. 4(a), and the optimal
gate time in Fig. 4(b) (purple curve). The Magnus scheme allows us to reach an
even faster regime while simultaneously restricting the leakage errors
contributions. Therefore, with a simple modulation of one of the initial
pulses we can retain the same fidelity as in the orthogonal scheme. The
optimal pulse envelopes for $R_{x}(\pi)$ and $R_{x}(-\pi/2)$ are shown in Fig.
4(e) and Fig. 4(i) respectively. In both cases, the top panel corresponds to
the modified pulse.
The alternative corrective method for leakage suppression is the DRAG
technique. This scheme requires pulse modulation of both initial laser fields,
but in the particular case of $R_{x}(\pi)$ rotations, the correction to the
field driving the A1 transition goes to zero. This is a consequence of the
condition for achieving $R_{x}(\pi)$ gates, which requires zero two-photon
detuning, leading to vanishing correction for one pulse. We notice that for
rotation angles $\phi>-\pi/2$ the DRAG method (blue curve) has a longer
optimal gate time compared to the Magnus method. For rotation angles
$\phi<-\pi/2$, however, the gate time is further reduced [Fig. 4(b)] and the
fidelity enhancement becomes more apparent [see blue curve of Fig. 4(a)]. We
should also mention that the DRAG pulse modulations are obtained analytically
[see Appendix G], but we also perform a simple optimization by redefining the
amplitude of the corrections. The optimal pulses for $R_{x}(\pi)$ and
$R_{x}(-\pi/2)$ are displayed in Fig. 4(f) and Fig. 4(j) respectively.
### V.2 Non-zero magnetic fields
In the presence of an external non-axial B-field, the pairwise degeneracy of
each manifold is lifted, and all transitions become allowed and are no longer
spin-conserving. This phenomenon is caused by the off-axial Zeeman interaction
that gives rise to $S_{x}B_{x}$ and $S_{y}B_{y}$ terms, which cause spin-
mixing of the states.
Arbitrary magnetic field directions are more difficult to implement
experimentally since a vector magnet is required. For this reason, we assume a
fixed magnetic field orientation where the $B_{j}$ magnetic field components
in the SiV- frame are expressed in terms of the $B_{\parallel}$ and
$B_{\perp}$ magnetic field strengths in the lab frame. The lab frame magnetic
fields in an experimental setting would be applied parallel and perpendicular
to the cryostat axis, where the sample is placed. The parallel magnetic field
strengths reach up to $|B_{\parallel}|=9$ T and the perpendicular up to
$|B_{\perp}|=3$ T. In the coordinate frame of the SiV- defect, we define the
magnetic fields as:
$B_{x}=B_{\parallel}\cos(54.7^{\circ})+B_{\perp}\sin(54.7^{\circ}),$ (22) $B_{y}=0,$
(23)
and
$B_{z}=B_{\parallel}\sin(54.7^{\circ})-B_{\perp}\cos(54.7^{\circ}),$ (24)
where $\gamma=54.7^{\circ}$ is the angle between the symmetry axis $[1~{}1~{}1]$
and the $(1~{}0~{}0)$ sample surface.
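A small helper implementing Eqs. (22)-(24) (a sketch; the angle is fixed at $54.7^{\circ}$ as stated):

```python
import numpy as np

def defect_frame_field(B_par, B_perp, gamma_deg=54.7):
    # Project lab-frame parallel/perpendicular field strengths onto the
    # defect frame [Eqs. (22)-(24)]; B_y vanishes for this geometry.
    g = np.radians(gamma_deg)
    Bx = B_par * np.cos(g) + B_perp * np.sin(g)
    Bz = B_par * np.sin(g) - B_perp * np.cos(g)
    return Bx, 0.0, Bz
```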
The spin coherence of the $S=1/2$ systems shows an angular dependence on the
direction of the external magnetic field. Larger deviation from the symmetry
axis results in enhanced spin-mixing, which consequently reduces the spin
coherence. In particular, according to Ref. [41], the reported
$T_{1,\text{spin}}$ for the SiV- reduces to $3.6~{}\mu$s at a $20^{\circ}$
misaligned field and to 60 ns at a $70^{\circ}$ misalignment. In our simulations, we
assume a spin relaxation time of $T_{1,\text{spin}}=300$ ns for the SiV-.
We consider zero two-photon detuning corresponding to $R_{x}(\pi)$ gates, and
we examine the $\Lambda$-system formed by the states $|1\rangle$, $|2\rangle$
and $|\text{A}\rangle$. We study only the performance of the orthogonal method
and, using the results of the analysis of Sec. V.1, we assume a fixed laser
power that balances the contributions of relaxation and leakage errors.
In Fig. 5, we show the fidelity [Fig. 5(a)] and gate time [Fig. 5(b)] of
$R_{x}(\pi)$ gates for the SiV- for a fixed laser field intensity. The maximum
fidelity for the SiV- corresponds to
$F_{\text{max}}^{\text{SiV${}^{-}$}}=0.975$ for $B_{\perp}=-2.8$ T and
$B_{\parallel}=-2$ T. The corresponding gate time is $T=0.235$ ns (versus
$T=0.3$ ns at $B=0$ T, see Appendix D). We observe a small increase in the
fidelity for the SiV- ($\sim 1\%$) and a reduction of the gate time, compared
to the orthogonal scheme at zero magnetic field. Note that even though the laser
intensity is fixed, the gate time varies as the transition dipole overlaps
$\langle\psi_{i}|p_{k}|\psi_{j}\rangle$ (which change the effective Rabi
frequency and hence the bandwidth) are different for each magnetic field
strength.
Figure 5: Optical control of SiV- at $B\neq 0$: Dependence of the fidelity (a)
and duration (b) of $R_{x}(\pi)$ gates on the parallel and perpendicular (with
respect to the cryostat axis) magnetic field strengths. For $B\neq 0$ we
consider the A1-A2 $\Lambda$-system. The regions of low fidelity correspond to
weakly excited $\Lambda$-system.
In the low fidelity range, one or both $\Lambda$-transitions become weakly
allowed, while other transitions are driven more strongly. As an example, in
Fig. 5(b), the regions of longest gate time correspond to weakly allowed
$\Lambda$-transitions, which consequently lowers the fidelity in Fig. 5(a),
for the same magnetic field values. The long gate time of these weakly excited
transitions is associated with the transitionless-pulse condition that we
explain in Appendix A, so for a weak effective Rabi frequency, the pulses are
narrow-band. In the remaining low fidelity range, one of the
$\Lambda$-transitions is weakly driven, thus requiring an increase of the
laser power driving that particular transition to match the Rabi frequency of
the second $\Lambda$-transition. This is a requirement that we impose on the
CPT transformation to achieve $R_{x}$ gates. Consequently, with higher laser
power, other bright transitions are driven more strongly, which results in low
overall fidelity. Nevertheless, our choice of $\Lambda$-system is not
restricted, and for the magnetic field values of low fidelity shown in Fig. 5,
we could instead select a different $\Lambda$-system.
## VI Control of SnV- system
### VI.1 Zero magnetic fields
The main advantage of the SnV- defect is its large ground- and excited-state
splittings, which suppress both incoherent and coherent errors. Due to the
large energy separation, orbital relaxations are further suppressed compared
to the SiV-, meaning that high-fidelity control is possible without
millikelvin cooling. The larger splitting also reduces the cross-talk of the
lasers driving the transitions. For zero magnetic fields, we find that the
bright transitions are A2, C2, B1, D1, B3, D3, A4, and C4. We form the
$\Lambda$-system by selecting the $|2\rangle$, $|4\rangle$ and
$|\text{A}\rangle$ states, as shown in Fig. 6(b). Again, we assume no
initialization errors, and in this case, a temperature of $T=6$ K. The spin
relaxation time is set to $T_{1,\text{spin}}=1.26$ ms, the orbital relaxation
time to $T_{1,\text{orbit}}=38$ ns, the dephasing time to $T_{2}^{*}=59$ ns
and the optical lifetime to $\tau=4.5$ ns [25].
Figure 6: The selection-rules-based E-field polarizations of (a) lead to the
cross-talk and four leakage transitions shown in (b). (c,d) The redefined
polarizations in the $yz$-plane eliminate the cross-talk and, additionally,
two leakage transitions of (b). Figure 7: Optical control of SnV- at $B=0$:
Infidelity of $R_{x}(\phi)$ (a), and optimal gate time (b), corresponding to
four different protocols. The pulse envelopes for $R_{x}(-\pi)$ rotations are
shown in (c), (d), (e), (f) and the pulse envelopes for $R_{x}(-\pi/2)$ are
depicted in (g), (h), (i), (j). Gray: naive, red: orthogonal, purple: Magnus
and blue: DRAG.
In the naive scheme, which is based on selection rules, the A2 transition is
strongly driven by $z$-polarized light and the A4 by circularly polarized
light [Fig. 6(a)]. As a result, both cross-talk and leakage errors are present
in the optical control [Fig. 6(b)]. However, due to the large ground- and
excited-state splittings of the SnV-, these errors average out more
effectively than in the SiV- system, when the pulses are not extremely broadband.
Again, we test the performance of our four protocols; the naive, the
orthogonal, the Magnus and the DRAG approaches. All of these schemes can
achieve faster and higher fidelity gates compared to the SiV- system. The
large excited state splitting of almost $3$ THz allows to implement the
rotations at less than hundreds of picoseconds, while preserving selectivity
of the A transitions. Consequently, in contrast to the SiV-, the gates can be
performed in a completely relaxation-free duration range, even for the naive
approach. Thus, there is no trade-off between frequency selectivity and
relaxation errors.
Starting with the naive approach (gray curve) we show the infidelity of
$R_{x}(\phi)$ rotations in Fig. 7(a) and the optimal gate time in Fig. 7(b).
We notice that the gates are well-protected and considerably faster than for the
SiV-, as the error transitions average out efficiently. The selectivity of the
A transitions and the contribution of the cross-talk set a lower bound for the
optimal gate time, which on average is greater than 50 ps [see Fig. 7(b)].
In the orthogonal scheme, we can remove the cross-talk within the
$\Lambda$-system by redefining the polarization of the laser fields. One
example would be to select $xz$-polarization for the driving fields (such that
$\textbf{E}_{1}\cdot\textbf{d}_{\text{A4}}=0=\textbf{E}_{2}\cdot\textbf{d}_{\text{A2}}$),
similar to the SiV- system. However, we found that a different choice of
polarizations can additionally eliminate two out of the four leakage
transitions.
We model the Jahn-Teller (JT) contribution according to [25], which gives rise
to purely real $\langle p_{z}\rangle$ and $\langle p_{x}\rangle$ transition
dipoles and purely imaginary $\langle p_{y}\rangle$ transition dipoles. Under
this assumption, and considering $yz$-polarization for $\textbf{E}_{1}$ and
$\textbf{E}_{2}$, we find that by setting the polarizations to be:
$\textbf{E}_{1}=E_{01}\left(\hat{\textbf{y}}-\frac{\langle p_{y}\rangle_{\text{A4}}}{\langle p_{z}\rangle_{\text{A4}}}\hat{\textbf{z}}\right)e^{i(k_{1}x-\omega_{1}t)}+\text{c.c.},$ (25)

$\textbf{E}_{2}=E_{02}\left(\hat{\textbf{y}}-\frac{\langle p_{y}\rangle_{\text{A2}}}{\langle p_{z}\rangle_{\text{A2}}}\hat{\textbf{z}}\right)e^{i(k_{2}x-\omega_{2}t)}+\text{c.c.},$ (26)
we not only resolve the cross-talk, but we also fulfil the relations
$\textbf{E}_{1}\cdot\textbf{d}_{\text{C2}}=0$ and
$\textbf{E}_{2}\cdot\textbf{d}_{\text{C4}}=0$. Thus, the remaining leakage
transitions correspond to the driving of C2 by the $\textbf{E}_{2}$ field and
to the driving of C4 by the $\textbf{E}_{1}$ field [see Figs. 6(c), (d)]. In
Appendix C, we derive the polarizations required to eliminate the cross-talk and two out of the four leakage transitions for arbitrary JT
parameters.
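As a minimal numerical illustration (not part of the original analysis; the dipole components below are hypothetical placeholder values consistent with the JT model of [25], i.e. purely imaginary $\langle p_{y}\rangle$ and purely real $\langle p_{z}\rangle$), the polarization of Eq. (25) is orthogonal to $\textbf{d}_{\text{A4}}$ by construction:

```python
# Minimal check of Eq. (25). Placeholder dipole components: under the JT
# model of [25], <p_y> is purely imaginary and <p_z> is purely real.
import numpy as np

d_A4 = np.array([0.4, 0.7j, 0.5])              # (x, y, z), hypothetical
E1 = np.array([0.0, 1.0, -d_A4[1] / d_A4[2]])  # y - (<p_y>/<p_z>) z, Eq. (25)
print(E1 @ d_A4)                               # 0: E1 drives no A4 cross-talk
```

The same cancellation holds for $\textbf{E}_{2}$ and $\textbf{d}_{\text{A2}}$, since Eq. (26) has the identical structure.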
With this simple redefinition of the E-field polarizations, the orthogonal approach achieves enhanced gate fidelities compared to the naive scheme, as shown by the red curve in Fig. 7(a). By removing the cross-talk and
reducing the leakage, we manage to decrease the optimal gate time below 50 ps
[Fig. 7(b)]. The optimal pulse envelopes for the $R_{x}(-\pi)$ and
$R_{x}(-\pi/2)$ are shown in Fig. 7(d) and Fig. 7(h) respectively, which still
correspond to simple sech pulses.
The orthogonal scheme sets the lower bound of gate infidelities and gate
durations for unmodulated pulses. To go beyond this limit, we allow for pulse
modifications by using the Magnus- and DRAG-based protocols. First, we test
the performance of the Magnus protocol. In this method, only the pulse
envelope of the $\textbf{E}_{1}$-field is modified. Even though in the initial
frame the control pulse has access only to the C4 error transition and not the
C2 (which is driven by the $\textbf{E}_{2}$-laser field), we find that the
linear system we solve to specify the control is still well-defined. More
details for the Magnus scheme are given in Appendix F. The Magnus scheme has
the shortest optimal gate-time duration of all methods [see purple curve Fig.
7(b)]. Although it seems to underperform compared to the orthogonal scheme for larger rotation angles, its pulses are much more broadband than those of the orthogonal scheme. The pulse envelopes are displayed in Fig. 7(e) [$R_{x}(-\pi)$] and Fig. 7(i) [$R_{x}(-\pi/2)$], where the top panels involve pulse modulation of the
laser driving the A2 transition.
Finally, we evaluate the performance of the DRAG protocol. In Fig. 7(a), we
show with the blue curve the infidelity of arbitrary rotations for the DRAG scheme. For almost all rotation angles the infidelity due to leakage is suppressed, and the gate time is reduced compared to the orthogonal method. The optimal pulse envelopes that implement $R_{x}(-\pi)$ and $R_{x}(-\pi/2)$ rotations are shown in Fig. 7(f) and Fig. 7(j) respectively. In both cases,
the modified pulse shown in the bottom panels corresponds to the laser driving
the A4 transition. In general, for rotations other than $R_{x}(-\pi)$, both
envelopes require modulation. However, we have performed a simple optimization
search on the DRAG corrections, which allows for a redefinition of their
amplitude strength. Thus, in this particular case, the optimal solution for
the $R_{x}(-\pi/2)$ gate involves modification of one of the initial driving
fields.
### VI.2 Non-zero magnetic fields
Similar to the SiV-, we consider zero two-photon detuning, which corresponds to $R_{x}(\pi)$ rotations, and we select the $\Lambda$-system formed by the
states $|1\rangle$, $|3\rangle$ and $|\text{A}\rangle$. For the spin
relaxation time, we assume $T_{1,\text{spin}}=150$ ns. We consider the
orthogonal approach and in this case we select the $xz$-polarization
definition, similar to Sec. III. For non-zero magnetic field strengths, all
transitions become bright, and selecting the $yz$-polarizations for the lasers
does not offer any advantage (since the transition dipoles are modified due to
the Zeeman Hamiltonian). For each magnetic field strength, we would have to
select the optimal $\Lambda$-system, and define laser field polarizations that
eliminate the cross-talk and also reduce or remove the contribution of the
dominant leakage transitions. Although such an analysis would be more complete, we instead showcase the performance of a particular E-field polarization of the orthogonal scheme (one that mitigates only the cross-talk) and optimize over the magnetic field strengths.
Figure 8: Optical control of SnV- at $B\neq 0$: Dependence of the fidelity (a) and duration (b) of $R_{x}(\pi)$ gates on the parallel and perpendicular (with respect to the cryostat axis) magnetic field strengths. For $B\neq 0$ we consider the A1-A3 $\Lambda$-system. The regions of low fidelity correspond to a weakly excited $\Lambda$-system.
We assume the same magnetic field definitions in the defect coordinate frame
as in Sec. V.2, and we vary the parallel and perpendicular magnetic field strengths with respect to the cryostat axis. In Fig. 8(a) we show the
fidelity and in Fig. 8(b) the gate time of $R_{x}(\pi)$ rotations for the
optimal laser field intensity. The maximum fidelity corresponds to
$F_{\text{max}}^{\text{SnV${}^{-}$}}=0.996$ for $B_{\perp}=0.3$ T and
$B_{\parallel}=0.2$ T, and the gate time is $T=80$ ps. This is also the maximum achievable fidelity for the $xz$-polarization in the absence of a magnetic field, and the gate time is close to that of the zero-field case [see Appendix D].
The low fidelity range arises for the same reasons as in the case of the SiV-
system. Specifically, the $\Lambda$-transitions are weakly excited and our
choice of $\Lambda$-system is not optimal for the particular magnetic fields.
As we mentioned for the SiV- system, even though the laser field intensity is
fixed in this case, the bandwidth of the pulse varies, since the transition
dipoles depend on the Zeeman Hamiltonian term. For the magnetic field
strengths where the fidelity is low, the choice of a different
$\Lambda$-system could still maintain high fidelity control.
## VII Conclusions
In conclusion, we have designed optical control protocols for high-fidelity
rotations of two defect systems: the SiV- and SnV- in diamond. We use coherent
population trapping techniques combined with judicious choice of laser
polarizations to mitigate the cross-talk issue of the $\Lambda$-transitions
caused by the Jahn-Teller effect. Importantly, strain induced by the integration of the defects in photonic structures can result in enhanced orbital mixing and modification of the selection rules, and hence more intense cross-talk. Thus, our cross-talk elimination approach could also
be beneficial in such a context. We implement simulations of arbitrary
rotations both in the absence and presence of external magnetic fields and
thoroughly test the maximum fidelity that we can reach without any additional
corrections. As a further generalization, the choice of polarizations can ensure both vanishing cross-talk and a reduced number of leakage transitions.
For the SiV-, there is a trade-off between faster gates protected from
relaxation and slower gates protected from leakage errors. On the contrary,
for the SnV-, we can safely reach the gate time range where the dissipation
mechanisms are negligible, without causing enhanced leakage. We show that with
our orthogonal pulse scheme we achieve fast and high fidelity control for the
SnV- system, due to its larger ground and excited state splittings.
Further, we use a Magnus expansion technique, as well as a newly developed
version of the DRAG technique, to mitigate leakage errors when considering
broadband pulses. The corrective modifications in the Magnus and DRAG schemes
involve simple cosine envelopes that can be generated using arbitrary waveform
generators and electro-optical modulators, which create modulated pulses from
a CW laser. In general, pulses carved out of a CW laser have limited power and
speed, but a power enhancement could be achieved with a fast-response optical amplifier (e.g. semiconductor amplifiers with up to tens of GHz repetition
rate [54]). Depending on experimental constraints (e.g. laser power,
duration), one could select the least demanding and most practical approach to
counteract leakage errors.
###### Acknowledgements.
The authors would like to thank Shahriar Aghaei, Jonas Becker, Alison Emiko
Rugar, Jelena Vuckovic, as well as Arian Vevzaee for valuable discussions. The
authors were supported by the United States National Science Foundation under
the grant 1838976.
## Appendix A CPT control with sech pulses
As we mentioned in the main text, the destructive interference in the CPT
scheme leads to a dark state that is completely decoupled from the dynamics of
the three-level system. This is achieved by tuning the laser parameters
(relative amplitudes and phases), and satisfying the two-photon resonance
condition $(\Delta_{1}=\Delta_{2}\equiv\Delta)$, where $\Delta_{\text{j}}$ is
the detuning of the transition labeled $j$. The mapping from the initial qubit
states in the lab frame to the dark-bright basis can be performed via the
transformation:
$R_{\text{db}}=\begin{pmatrix}\cos\frac{\theta}{2}&-e^{-i\alpha}\sin\frac{\theta}{2}\\ e^{i\alpha}\sin\frac{\theta}{2}&\cos\frac{\theta}{2}\end{pmatrix}.$ (27)
Effectively, this transformation defines the rotation axis of the qubit, which
is $\textbf{n}=(\sin\theta\cos\alpha,\sin\theta\sin\alpha,\cos\theta)$, while
it also enables the reduction of the initial problem into a two-level system.
In particular, the Hamiltonian in the dark-bright basis reads:
$H_{\text{db}}=\Omega_{\text{eff}}f(t)e^{i\Delta t}\sigma_{\text{be}}+\text{H.c.},$ (28)
where $\sigma_{\text{be}}=|\text{b}\rangle\langle\text{e}|$, with
$|\text{b}\rangle$ being the bright state and $|\text{e}\rangle$ the excited
state. Also, the effective Rabi frequency in this frame is expressed in terms
of the original Rabi frequencies as
$\Omega_{\text{eff}}=\sqrt{|\Omega_{1}|^{2}+|\Omega_{2}|^{2}}$. For a general
pulse envelope, the two-level problem is not analytically solvable. Here we
consider hyperbolic secant pulses (i.e. $f(t)=\text{sech}(\sigma t)$), that
have been proven to be analytically solvable [55], and lead to a rotation in
the qubit subspace given by [48, 49]:
$U_{0}=\begin{pmatrix}1&0\\ 0&e^{-i\phi}\end{pmatrix}.$ (29)
As shown by Eq. (29), the dark state does not evolve, whereas the bright state
picks up a phase given by $\phi=2\tan^{-1}(\sigma/\Delta)$, where $\Delta$ is
the detuning and $\sigma$ the bandwidth. Control of both rotation axis and
angle is achieved by combining CPT with hyperbolic secant pulses, which allows
us to design arbitrary single-qubit gates. The only additional requirement is
that the bandwidth is equal to the effective Rabi frequency in the CPT frame
($\sigma=\Omega_{\text{eff}}$), such that the pulse is transitionless, i.e.
the population returns to the ground states at the end of the pulse.
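For concreteness, the sketch below (a minimal Python example with hypothetical parameter values, not taken from our simulations) integrates the two-level dynamics of Eq. (28) for a sech envelope with $\sigma=\Omega_{\text{eff}}$ and checks the phase of Eq. (29):

```python
# Sketch: a sech pulse with bandwidth sigma should return the bright state
# to itself with phase phi = 2*arctan(sigma/Delta), Eqs. (28)-(29).
import numpy as np
from scipy.integrate import solve_ivp

sigma = 2 * np.pi * 5.0   # bandwidth (rad/ns), hypothetical
Delta = 2 * np.pi * 2.0   # detuning (rad/ns), hypothetical
t0, T = 2.0, 4.0          # pulse center and integration window (ns)

def rhs(t, psi):
    # H(t) = sigma*sech(sigma*(t - t0)) * exp(i*Delta*t) |b><e| + H.c.
    omega = sigma / np.cosh(sigma * (t - t0)) * np.exp(1j * Delta * t)
    cb, ce = psi
    return [-1j * omega * ce, -1j * np.conj(omega) * cb]

sol = solve_ivp(rhs, (0, T), [1.0 + 0j, 0.0 + 0j], rtol=1e-10, atol=1e-12)
cb = sol.y[0, -1]
print(abs(cb))                                       # ~1: transitionless
print(-np.angle(cb), 2 * np.arctan(sigma / Delta))   # phases should agree
```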
## Appendix B Details of the simulations
In this section, we provide further details regarding our simulations. First,
based on Ref. [44], we use the laser power applied on the SiV- system for
$\pi$-rotations to calculate the electric field amplitude and estimate the
Rabi frequencies. For a numerical aperture NA=0.7, the spot-size (radius) of
the laser is given by $w_{0}=\lambda_{0}/(\pi\text{NA})$, where $\lambda_{0}$
is the wavelength of a specific transition for the SiV- or the SnV-, which can
be assumed to be close to the central transition. For the SiV- the central
wavelength is $\lambda_{0}\approx 736$ nm, while for the SnV- it is
$\lambda_{0}\approx 620$ nm. Assuming an emitter focused at the center of the
beam, the intensity is related with the power and spot-size by the expression:
$I=\frac{P_{0}}{\pi w_{0}^{2}},$ (30)
while it can also be expressed as:
$I=\frac{cn\epsilon_{0}}{2}|E_{0}|^{2},$ (31)
where $c$ is the speed of light, $n=2.4$ is the refractive index of diamond, $E_{0}$ is the electric field amplitude and $\epsilon_{0}$ is the vacuum permittivity.
The factor of 1/2 comes from averaging the intensity. By combining Eq. (30)
with Eq. (31) we can express the electric field in terms of the laser power
as:
$|E_{0}|=\sqrt{\frac{2P_{0}}{\pi w_{0}^{2}cn\epsilon_{0}}}.$ (32)
From Eq. (32), we calculate the electric field amplitude based on the laser
powers of Ref. [44], shown in Fig. B.1. For the SiV-, the maximum electric
field amplitude we have considered is $E_{0}=8.5\times 10^{4}$ (V/m), while
for the SnV-, we have considered up to $E_{0}\approx 1\times 10^{6}$ (V/m).
(These values exclude the numerically optimized DRAG pulses, whose amplitude
corrections have a multiplicative factor $|c|<4$). Further, for the
$z$-transition dipoles, we have taken into account the experimental
enhancement factor of 2 of the $z$-dipole. In general, the optimal ranges of
operation we found for both defects are smaller than these maximum $E_{0}$
amplitudes, so the laser power should correspond to the experimentally safe
and achievable ranges.
Figure B.1: Electric field amplitude versus the square root of the laser
power, for the SiV- central wavelength (blue) and the SnV- central wavelength
(red).
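As a quick numerical check of Eqs. (30)-(32) (the power value below is a placeholder, not one of the powers of Ref. [44]):

```python
# Sketch of Eqs. (30)-(32): field amplitude from laser power and spot size.
import numpy as np

c, eps0, n, NA = 2.998e8, 8.854e-12, 2.4, 0.7

def E0_from_power(P0, lam0):
    """|E0| in V/m for power P0 (W) focused to w0 = lam0/(pi*NA), Eq. (32)."""
    w0 = lam0 / (np.pi * NA)                    # spot radius (m)
    return np.sqrt(2 * P0 / (np.pi * w0**2 * c * n * eps0))

for lam0, label in [(736e-9, "SiV-"), (620e-9, "SnV-")]:
    print(label, E0_from_power(1e-6, lam0))     # e.g. 1 uW (placeholder)
```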
Further, we calculate the Rabi frequencies for each transition as:
$\Omega^{ij}=\alpha\frac{er_{0}|E_{0}|}{\hbar}\langle\psi_{i}|p_{k}|\psi_{j}\rangle,$
(33)
where $e$ is the electronic charge, and for $r_{0}$ we assume $r_{0}=0.53$ Å.
We estimate the multiplicative factor $\alpha\approx 6.663$ for the SiV- and
$\alpha\approx 3.3$ for the SnV-. In particular, for the SiV-, that would give
rise to a dipole moment of approximately $\mu\approx 16.6$ Debye, which is
close to $\mu=14.3$ Debye reported in [32]. Also,
$\langle\psi_{i}|p_{k}|\psi_{j}\rangle$ is the dipole overlap of the transition, and the matrices $p_{x}$, $p_{y}$, $p_{z}$ are given by group theory [16]. We should mention that in our simulations we define the driving Hamiltonian without the factor of $1/2$ in front of the Rabi frequencies that would result from the RWA. This means that the $E_{0}$ value should be twice as large as the approximate values reported above. To
calculate the eigensystem of all eight levels of each defect, we consider
three main interaction terms: the spin-orbit coupling, the Jahn-Teller and the
Zeeman effects.
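A corresponding sketch of Eq. (33), with a placeholder dipole overlap (the prefactor $\alpha$ is the value we estimate above for the SiV-):

```python
# Sketch of Eq. (33): Rabi frequency from the field amplitude and the
# dimensionless dipole overlap <psi_i|p_k|psi_j> (placeholder value below).
import numpy as np

hbar, e, r0 = 1.0546e-34, 1.602e-19, 0.53e-10   # SI units

def rabi(E0, alpha, overlap):
    return alpha * e * r0 * E0 / hbar * overlap  # rad/s

# SiV- with alpha ~ 6.663, |E0| = 8.5e4 V/m and a unit overlap:
print(rabi(8.5e4, 6.663, 1.0) / (2 * np.pi * 1e9), "GHz")
```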
Regarding the Lindblad relaxation operators, we follow a similar convention as
in Ref. [46]. First, for the dephasing mechanism, we assume an equal dephasing
of all states:
$G_{\text{deph}}=\frac{1}{\sqrt{T_{2}^{*}}}|i\rangle\langle i|.$ (34)
Spin-relaxation mechanisms change the spin state while preserving the orbital part, and occur within either the ground or the excited state manifold. We define the Lindblad spin-relaxation operators as:
$G_{\text{spin}}=\frac{1}{\sqrt{2T_{1,\text{spin}}}}|i\rangle\langle j|.$ (35)
Orbital dissipation mechanisms occur between different orbital states of the
same spin projection. We define the associated Lindblad operator as:
$G_{\text{orbit}}=\sqrt{F_{\text{orbit}}}|i\rangle\langle j|,$ (36)
where the decay rate $F_{\text{orbit}}$ is different for an upward or downward
relaxation. For a downward relaxation we define:
$F_{\text{orbit, down}}=\frac{1}{T_{1,\text{orbit}}(1+e^{-|\Delta E|/k_{b}T})}$ (37)
and for an upward:
$F_{\text{orbit,up}}=F_{\text{orbit,down}}e^{-|\Delta E|/k_{b}T},$ (38)
i.e. the orbital relaxations are scaled by Boltzmann factors, with the upward orbital relaxations being less probable. $\Delta E$ is the energy difference
between the levels that participate in the orbital relaxation mechanism.
Finally, for the lifetime relaxations, we define the Lindblad operators as:
$G_{\text{lifetime}}=\frac{1}{\sqrt{\tau}}|i\rangle\langle j|,$ (39)
where $\sigma_{ij}=|i\rangle\langle j|$ corresponds to a bright transition,
and $\tau$ is the lifetime.
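The temperature scaling of the orbital rates, Eqs. (37)-(38), can be assembled as follows (illustrative placeholder values):

```python
# Sketch of Eqs. (37)-(38): Boltzmann-scaled orbital relaxation rates.
import numpy as np

kB, h = 1.381e-23, 6.626e-34                    # SI units

def orbital_rates(T1_orbit, dE, T):
    """Downward and upward orbital rates (1/s) for level splitting dE (J)."""
    boltz = np.exp(-abs(dE) / (kB * T))
    F_down = 1.0 / (T1_orbit * (1.0 + boltz))   # Eq. (37)
    F_up = F_down * boltz                       # Eq. (38)
    return F_down, F_up

# e.g. a 50 GHz splitting at T = 4 K with T1_orbit = 1 ns (placeholders):
print(orbital_rates(1e-9, h * 50e9, 4.0))
```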
## Appendix C General method for removing the cross-talk and one leakage
transition
As we mentioned in the main text, we can always redefine the polarization of
the laser fields to remove completely the cross-talk within the
$\Lambda$-system. We also mentioned that for the SnV-, using the Jahn-Teller (JT) parameters of [25], each E-field polarization was found to be additionally orthogonal to one leakage transition.
However, this extra property depends on the modeling of the JT interaction. To
resolve this subtlety, we derive analytically the transition dipoles for
arbitrary JT parameters. Assuming no crystal strain, and working at $B=0$ T,
the only non-zero interaction terms are the spin-orbit coupling and the JT
effect.
By expressing the interaction Hamiltonian in the $|e_{\pm}\rangle$ orbital
basis and in the $\\{|\uparrow\rangle,|\downarrow\rangle\\}$ spin basis, the
two interaction terms read:
$H_{\text{g/e}}=\begin{pmatrix}Q_{x,\text{g/e}}&Q_{y,\text{g/e}}\\ Q_{y,\text{g/e}}&-Q_{x,\text{g/e}}\end{pmatrix}\otimes\textbf{1}-\frac{\lambda_{\text{SO},\text{g/e}}}{2}L_{z}\otimes S_{z},$ (40)
where $Q_{x,\text{g/e}}=Q_{\text{g/e}}\cos\phi_{\text{g/e}}$, $Q_{y,\text{g/e}}=Q_{\text{g/e}}\sin\phi_{\text{g/e}}$, $\lambda_{\text{SO},\text{g/e}}=\Delta E_{\text{g/e}}\cos\theta_{\text{g/e}}$ and $Q_{\text{g/e}}=(\Delta E_{\text{g/e}}/2)\sin\theta_{\text{g/e}}$. The subscripts g and e denote the ground and excited state respectively. The parameter $\theta_{\text{g/e}}$ can be tuned so as to reproduce the experimentally observed relative strength of the SO and JT contributions. The unnormalized eigenvectors are given by:
$v_{1,\text{g/e}}=\begin{pmatrix}0&e^{i\phi_{\text{g/e}}}\tan\frac{\theta_{\text{g/e}}}{2}&0&1\end{pmatrix}^{T}$ (41)

$v_{2,\text{g/e}}=\begin{pmatrix}e^{i\phi_{\text{g/e}}}\cot\frac{\theta_{\text{g/e}}}{2}&0&1&0\end{pmatrix}^{T}$ (42)

$v_{3,\text{g/e}}=\begin{pmatrix}0&-e^{i\phi_{\text{g/e}}}\cot\frac{\theta_{\text{g/e}}}{2}&0&1\end{pmatrix}^{T}$ (43)

$v_{4,\text{g/e}}=\begin{pmatrix}-e^{i\phi_{\text{g/e}}}\cot\frac{\theta_{\text{g/e}}}{2}&0&1&0\end{pmatrix}^{T},$ (44)
where $v_{1,\text{g/e}}$ and $v_{2,\text{g/e}}$ correspond to the eigenenergy
$-\Delta E_{\text{g/e}}/2$ and the eigenvectors $v_{3,\text{g/e}}$ and
$v_{4,\text{g/e}}$ to the eigenenergy $\Delta E_{\text{g/e}}/2$.
As an example, let us assume that we use the $\Lambda$-transitions A2 and A4,
and that we want to make the $\textbf{E}_{1}$ field orthogonal to
$\textbf{d}_{\text{A4}}$ and $\textbf{d}_{\text{C2}}$. Under this notation,
the non-zero transitions would be between $v_{1,\text{g}}\leftrightarrow
v_{1,\text{e}}$ and $v_{1,\text{g}}\leftrightarrow v_{3,\text{e}}$. Thus, the
transition dipole $\textbf{d}_{1}=\textbf{d}_{v_{1,\text{g}}v_{1,\text{e}}}$
(of A2) is:
$\textbf{d}_{1}=\frac{1}{|\sec\frac{\theta_{\text{e}}}{2}\sec\frac{\theta_{\text{g}}}{2}|}\begin{pmatrix}-(e^{-i\phi_{\text{e}}}\tan\frac{\theta_{\text{e}}}{2}+e^{i\phi_{\text{g}}}\tan\frac{\theta_{\text{g}}}{2})\\ i(e^{i\phi_{\text{e}}}\tan\frac{\theta_{\text{e}}}{2}-e^{i\phi_{\text{g}}}\tan\frac{\theta_{\text{g}}}{2})\\ 2(1+e^{i(\phi_{\text{e}}+\phi_{\text{g}})}\tan\frac{\theta_{\text{e}}}{2}\tan\frac{\theta_{\text{g}}}{2})\end{pmatrix}.$ (45)

Similarly the dipole $\textbf{d}_{2}=\textbf{d}_{v_{1,\text{g}}v_{3,\text{e}}}$ (of C2) is:

$\textbf{d}_{2}=\frac{1}{|\sec\frac{\theta_{\text{e}}}{2}\sec\frac{\theta_{\text{g}}}{2}|}\begin{pmatrix}(e^{i\phi_{\text{e}}}\cot\frac{\theta_{\text{e}}}{2}-e^{i\phi_{\text{g}}}\tan\frac{\theta_{\text{g}}}{2})\\ -i(e^{i\phi_{\text{e}}}\cot\frac{\theta_{\text{e}}}{2}+e^{i\phi_{\text{g}}}\tan\frac{\theta_{\text{g}}}{2})\\ 2(-1+e^{i(\phi_{\text{e}}+\phi_{\text{g}})}\cot\frac{\theta_{\text{e}}}{2}\tan\frac{\theta_{\text{g}}}{2})\end{pmatrix}.$ (46)

Finally, the dipole $\textbf{d}_{3}=\textbf{d}_{v_{3,\text{g}}v_{1,\text{e}}}$ (of A4) is:

$\textbf{d}_{3}=\frac{1}{|\sec\frac{\theta_{\text{e}}}{2}\sec\frac{\theta_{\text{g}}}{2}|}\begin{pmatrix}e^{i\phi_{\text{g}}}(\cot\theta_{\text{g}}+\csc\theta_{\text{g}})-e^{i\phi_{\text{e}}}\tan\frac{\theta_{\text{e}}}{2}\\ -i(e^{i\phi_{\text{g}}}(\cot\theta_{\text{g}}+\csc\theta_{\text{g}})+e^{i\phi_{\text{e}}}\tan\frac{\theta_{\text{e}}}{2})\\ 2(1-e^{i(\phi_{\text{e}}+\phi_{\text{g}})}\cot\frac{\theta_{\text{g}}}{2}\tan\frac{\theta_{\text{e}}}{2})\end{pmatrix}.$ (47)
The goal is to make $\textbf{E}_{1}\cdot\textbf{d}_{2}=0$ and
$\textbf{E}_{1}\cdot\textbf{d}_{3}=0$. Thus, considering the general
expression:
$\textbf{E}_{1}=E_{01}(c_{1}\textbf{x}+c_{2}\textbf{y}+c_{3}\textbf{z})e^{i(\textbf{k}\cdot\textbf{r}-\omega t)}+\text{c.c.},$ (48)

we specify the $c_{2}$ and $c_{3}$ that satisfy the orthogonality relations:

$c_{3}=\frac{-ic_{1}(\cos\theta_{\text{e}}+\cos\theta_{\text{g}})}{2(\sin\theta_{\text{e}}\sin\phi_{\text{e}}+\sin\theta_{\text{g}}\sin\phi_{\text{g}})}$ (49)

$c_{2}=\frac{c_{1}(-\cos\phi_{\text{e}}\sin\theta_{\text{e}}+\cos\phi_{\text{g}}\sin\theta_{\text{g}})}{\sin\theta_{\text{e}}\sin\phi_{\text{e}}+\sin\theta_{\text{g}}\sin\phi_{\text{g}}}.$ (50)
Similarly, we could follow the same procedure to satisfy
$\textbf{E}_{2}\cdot\textbf{d}_{\text{A2}}=0=\textbf{E}_{2}\cdot\textbf{d}_{\text{C4}}$.
Alternatively, we could choose the polarization of $\textbf{E}_{1}$ such that
we satisfy the orthogonality relation to the A4 $\Lambda$-transition while
also minimizing both leakage transitions C2 and C4 that are driven by each
laser field (and similarly for the $\textbf{E}_{2}$ field).
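The construction can also be checked numerically: the sketch below builds $\textbf{d}_{2}$ and $\textbf{d}_{3}$ from Eqs. (46)-(47) for arbitrary test JT angles (dropping the overall normalization, which is irrelevant for orthogonality) and solves the two orthogonality relations directly for $c_{2}$ and $c_{3}$ with $c_{1}=1$; the solution should reproduce Eqs. (49)-(50).

```python
# Sketch of the Appendix C construction: solve E1.d2 = 0 = E1.d3 for
# (c2, c3). JT angles below are arbitrary test values; the normalization
# prefactors of Eqs. (46)-(47) are dropped.
import numpy as np

def dipoles(th_g, th_e, ph_g, ph_e):
    tg, te = np.tan(th_g / 2), np.tan(th_e / 2)
    cg, ce = 1 / np.tan(th_g / 2), 1 / np.tan(th_e / 2)
    eg, ee = np.exp(1j * ph_g), np.exp(1j * ph_e)
    d2 = np.array([ee * ce - eg * tg,                 # Eq. (46)
                   -1j * (ee * ce + eg * tg),
                   2 * (-1 + ee * eg * ce * tg)])
    d3 = np.array([eg * cg - ee * te,                 # Eq. (47), using
                   -1j * (eg * cg + ee * te),         # cot + csc = cot(th/2)
                   2 * (1 - ee * eg * cg * te)])
    return d2, d3

d2, d3 = dipoles(0.7, 1.1, 0.3, -0.5)
A = np.array([[d2[1], d2[2]], [d3[1], d3[2]]])
c2, c3 = np.linalg.solve(A, -np.array([d2[0], d3[0]]))  # c1 = 1
E1 = np.array([1.0, c2, c3])
print(abs(E1 @ d2), abs(E1 @ d3))                   # both ~ 0
```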
## Appendix D Gate time dependence of the fidelity
Here we show the time dependence of the fidelity of the gates for the SiV- and
the SnV- defects. For both systems we consider the orthogonal scheme and use a
combination of $xz$-polarizations for the E-fields (such that we cancel the
cross-talk errors). In Fig. D.1 (a) and Fig. D.1(b) we show the fidelity
versus the gate time for $R_{x}(\pi)$ and $R_{x}(-\pi/2)$ rotations for the
SiV-.
Figure D.1: Gate time dependence of the fidelity for $R_{x}(-\pi)$ (a) and
$R_{x}(-\pi/2)$ gates for the SiV-. Fidelity of $R_{x}(\phi)$ rotations (c)
and rotation angles (d) versus the gate time and two-photon detuning.
Narrowband pulses suffer from relaxation errors, while significantly broadband
pulses suffer from enhanced leakage errors. The optimal gate time for both
rotations is $T=0.3$ ns, with a fidelity close to $F=0.97-0.98$. In Fig.
D.1(c) we show the fidelity of arbitrary rotations versus the gate time and
two-photon detuning. For $\Delta\gtrsim 200$ GHz, which corresponds to the
excited state splitting, the upper-excited manifold is driven more strongly, leading to significant leakage errors. Nevertheless, the same rotation angles
can be implemented with negative detuning at high fidelities. The
corresponding rotation angles are shown in Fig. D.1(d).
Similarly, we show the gate time dependence of $R_{x}(\pi)$ and
$R_{x}(-\pi/2)$ for the SnV- system in Fig. D.2(a) and Fig. D.2(b).
Figure D.2: Gate time dependence of the fidelity for $R_{x}(-\pi)$ (a) and
$R_{x}(-\pi/2)$ gates for the SnV-. Fidelity of $R_{x}(\phi)$ rotations (c)
and rotation angles (d) versus the gate time and two-photon detuning.
The infidelity of arbitrary rotations in logarithmic scale is shown in Fig.
D.2(c) and the rotation angles are shown in Fig. D.2(d). In this case, due to
the large excited state splitting of the defect, the positive rotation angles
exhibit low infidelity.
## Appendix E Effect of relaxations on the fidelity
In Fig. E.1(a), we test the fidelity of $R_{x}(\pi)$ rotations (orthogonal
scheme) for the case of the SnV-, considering two different temperatures. For
$T=3$ K, we assume $T_{2}^{*}=540$ ns and $T_{1,\text{spin}}=10.2$ ms, while
for $T=6$ K we assume $T_{1,\text{spin}}=1.26$ ms and $T_{2}^{*}=59$ ns. We
observe that the two curves are almost identical for gate times $T<0.1$ ns,
while for longer gates the deviation starts to increase further.
In general, for all rotation angles, the optimal fidelity range should lie
below $\sim 0.1$ ns for the SnV-, such that the contribution of the
relaxations is negligible and the fidelity is almost independent of the
temperature. On the other hand, for the SiV-, the optimal range is shifted to
longer times, as the leakage errors tend to increase substantially for
broadband pulses [Fig. E.1(b)]. (Again we show the performance of the
orthogonal scheme.) Upon cooling to mK temperatures, the phonon induced
relaxations can be suppressed substantially [42], since the qubit states
become decoupled from the phonon bath.
Figure E.1: (a) Fidelity of $R_{x}(\pi)$ rotations at zero magnetic fields for
the SnV-, for temperature $T=3$ K (blue) and $T=6$ K (red). For a gate time
$T<0.1$ ns there is no temperature dependence of the fidelity, as below this gate time the relaxations cease to contribute. (b) Fidelity of $R_{x}(\pi)$
rotations for the SiV- in the presence of relaxations (red) and without
dissipation mechanisms (blue).
## Appendix F Corrections to leakage errors with the Magnus expansion
approach
In this section, we provide the linear system of equations of the Magnus
methods, for both defect systems. According to the Magnus expansion approach
of [50], we specify the control fields by reducing the problem to a linear set
of equations. We consider zero external magnetic fields since, in this case,
we have a smaller number of unwanted transitions that we want to cancel out.
We further distinguish two cases: i) resonant driving ($R_{x}(\pi)$ rotations)
and ii) off-resonant driving ($R_{x}(\phi)$ rotations). As we show below, we can generalize from the resonant to the off-resonant case by a slight modification of our linear system of equations.
Starting from the SiV- system, our goal is to find a corrective Hamiltonian
$W(t)$ to suppress the leakage errors. In the main text, we chose the
$\Lambda$-system formed by the states $|1\rangle,|4\rangle$ and
$|\text{A}\rangle$. Assuming perfect initialization, the main error of our
orthogonal scheme is the driving of the C1 and C4 transitions, which leads to
leakage outside of our $\Lambda$-system.
We first decompose the error terms of our Hamiltonian into the Gell-Mann
basis. Starting from the lab frame, we fix two orthogonal polarizations for
the $\Lambda$-transitions, as described in the main text. Thus, in the
interaction frame, our initial Hamiltonian including only the A1, A4 and C1
and C4 transitions reads:
$\begin{split}H&=(|\Omega_{1}^{\text{A1}}|e^{i\phi_{1}}e^{i\Delta t}\sigma_{1\text{A}}+|\Omega_{1}^{\text{C1}}|e^{i(\Delta-\delta_{\text{es}})t}\sigma_{1\text{C}}+|\Omega_{1}^{\text{C4}}|e^{i\phi_{\text{C}4}}e^{i(\Delta-(\delta_{\text{es}}-\delta_{\text{gs}}))t}\sigma_{4\text{C}}+\text{H.c.})\\ &+(|\Omega_{2}^{\text{A4}}|e^{i\phi_{1}}e^{i\Delta t}\sigma_{4\text{A}}+|\Omega_{2}^{\text{C1}}|e^{-i\phi_{\text{C}4}}e^{i(\Delta-(\delta_{\text{es}}+\delta_{\text{gs}}))t}\sigma_{1\text{C}}-|\Omega_{2}^{\text{C4}}|e^{i(\Delta-\delta_{\text{es}})t}\sigma_{4\text{C}}+\text{H.c.})f(t),\end{split}$ (51)
where $\sigma_{ij}=|i\rangle\langle j|$, $\delta_{\text{es}}=260$ GHz,
$\delta_{\text{gs}}=50$ GHz are the excited and ground state splittings
respectively, $f(t)=\text{sech}(\sigma(t-t_{0}))$ and $\Delta$ is the two-
photon detuning. Note that in order to define the error and ideal Hamiltonians
we need only these four transitions. For the SiV- at zero magnetic fields, we
also find that $|\Omega_{2}^{\text{C1}}|=|\Omega_{1}^{\text{C4}}|$ and $\phi_{\text{C1}}=-\phi_{\text{C}4}$. Since we are further interested in the $R_{x}$ rotations, we fix $|\Omega_{1}^{\text{A1}}|=|\Omega_{2}^{\text{A4}}|$.
The transformation matrix to the dark-bright frame is given by:
$\begin{split}R&=\frac{\sigma_{11}-\sigma_{14}+\sigma_{41}+\sigma_{44}}{\sqrt{2}}+\sigma_{22}+\sigma_{33}\\ &+\sigma_{\text{AA}}+\sigma_{\text{BB}}+\sigma_{\text{CC}}+\sigma_{\text{DD}},\end{split}$ (52)
which transforms our initial ground states into the dark-bright states (i.e.
$|1\rangle\rightarrow|\text{d}\rangle$ and
$|4\rangle\rightarrow|\text{b}\rangle$), and the initial Hamiltonian $H$ into
$H_{\text{db}}=RHR^{\dagger}$. Our target Hamiltonian in the db-frame is given
by:
$H_{0,\text{db}}=(\sigma e^{i\Delta t}\sigma_{\text{bA}}+\text{H.c.})f(t),$
(53)
where we have substituted $\sigma=\sqrt{2}|\Omega_{1}^{\text{A1}}|$, and we
have defined $|\text{b}\rangle=1/\sqrt{2}(|1\rangle+|4\rangle)$ to be the
bright state. Next, we apply one more transformation by going to the
interaction picture generated by the ideal Hamiltonian, $H_{0,\text{db}}$,
which is given by $U_{0}$:
$\begin{split}U_{0}&=\sigma_{11}+\sigma_{22}+\sigma_{33}+\sigma_{\text{BB}}+\sigma_{\text{CC}}+\sigma_{\text{DD}}\\ &+\cos\theta(\sigma_{44}+\sigma_{\text{AA}})+i\sin\theta(\sigma_{4\text{A}}+\sigma_{\text{A}4}).\end{split}$ (54)
Note that $U_{0}$ of Eq. (54) should not be confused with the target gate
$U_{0}$ of Sec. A. Here, $\theta(t)$ is the integral of the pulse envelope,
which for the resonant case reads:
$\begin{split}\theta(t)&=\int_{0}^{t}\sigma\,\text{sech}(\sigma(t^{\prime}-t_{0}))dt^{\prime}\\ &=2(\tan^{-1}(e^{\sigma(t-t_{0})})-\tan^{-1}(e^{-\sigma t_{0}})).\end{split}$ (55)
For the off-resonant case, we need to evaluate:
$\begin{split}\theta_{\pm}(t)&=\int_{0}^{t}\sigma\,\text{sech}(\sigma(t^{\prime}-t_{0}))e^{\pm i\Delta t^{\prime}}dt^{\prime}\\ &=\sigma e^{\pm i\Delta t_{0}}\int_{-t_{0}}^{t-t_{0}}e^{\pm i\Delta u}\,\text{sech}(\sigma u)du.\end{split}$ (56)
The solution of these indefinite integrals is:
$g_{\pm}(u)={}_{2}F_{1}\left(1,\frac{\sigma\pm i\Delta}{2\sigma},\frac{3\sigma\pm i\Delta}{2\sigma},-e^{2\sigma u}\right)\frac{e^{i[(\sigma\pm\Delta)u\pm\Delta t_{0}]}}{\frac{\sigma\pm i\Delta}{2\sigma}},$ (57)
where ${}_{2}F_{1}(a,b,c,z)$ is the Gauss hypergeometric function. By
evaluating this expression in the limits of the integration, we obtain
$\theta_{\pm}(t)$. We notice that the two functions $\theta_{\pm}(t)$ are
complex conjugates, which simplifies the equations for the off-resonant case.
To obtain the set of equations for off-resonant driving, we replace $\theta(t)$ of the resonant case by $\tilde{\theta}(t)=|\theta_{+}(t)|=|\theta_{-}(t)|$.
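In practice, $\theta_{\pm}(t)$ can also be evaluated by direct quadrature; the sketch below (with placeholder parameters) confirms that $\theta_{+}(t)$ and $\theta_{-}(t)$ are complex conjugates, so that $\tilde{\theta}(t)=|\theta_{\pm}(t)|$ is well-defined:

```python
# Sketch: evaluate theta_pm(t) of Eq. (56) numerically and check the
# conjugate property. Parameter values are placeholders.
import numpy as np
from scipy.integrate import quad

sigma, Delta, t0 = 30.0, 10.0, 0.2        # 1/ns, 1/ns, ns

def theta_pm(t, sign):
    f = lambda tp: sigma / np.cosh(sigma * (tp - t0))
    re = quad(lambda tp: f(tp) * np.cos(sign * Delta * tp), 0, t)[0]
    im = quad(lambda tp: f(tp) * np.sin(sign * Delta * tp), 0, t)[0]
    return re + 1j * im

tp, tm = theta_pm(0.35, +1), theta_pm(0.35, -1)
print(tp, tm, abs(tp) - abs(tm))          # conjugates; difference ~ 0
```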
The error terms in the interaction picture of $H_{0,\text{db}}$, are given by
$H_{\text{error}}=U_{0}(H_{\text{db}}-H_{0,\text{db}})U_{0}^{\dagger}$. Using
the Gell-Mann basis we now decompose the error terms (where the RWA has been
applied) into the operators
$H_{\text{er},1}^{(1)}=\frac{(|\Omega_{1}^{\text{C1}}|+|\Omega_{2}^{\text{C4}}|)\cos(t\delta_{\text{es}})-2|\Omega_{1}^{\text{C4}}|\sin(t\delta_{\text{es}})\sin(\phi_{\text{C}4}+t\delta_{\text{gs}})}{\sqrt{2}}f(t)L_{s,17}$ (58)

$H_{\text{er},2}^{(1)}=\frac{\cos\theta(t)\cos(\phi_{1}/2+\delta_{\text{es}}t)(|\Omega_{1}^{\text{C1}}|-|\Omega_{2}^{\text{C4}}|+2|\Omega_{1}^{\text{C4}}|\cos(\phi_{\text{C}4}+\delta_{\text{gs}}t))}{\sqrt{2}}f(t)L_{s,47}$ (59)

$H_{\text{er},3}^{(1)}=\frac{\sin\theta(t)\sin(\phi_{1}/2+\delta_{\text{es}}t)(|\Omega_{1}^{\text{C1}}|-|\Omega_{2}^{\text{C4}}|+2|\Omega_{1}^{\text{C4}}|\cos(\phi_{\text{C}4}+\delta_{\text{gs}}t))}{\sqrt{2}}f(t)L_{s,57}$ (60)

$H_{\text{er},4}^{(1)}=\frac{(|\Omega_{1}^{\text{C1}}|+|\Omega_{2}^{\text{C4}}|)\sin(\delta_{\text{es}}t)+2|\Omega_{1}^{\text{C4}}|\cos(\delta_{\text{es}}t)\sin(\phi_{\text{C}4}+\delta_{\text{gs}}t)}{\sqrt{2}}f(t)L_{a,17}$ (61)

$H_{\text{er},5}^{(1)}=\frac{\cos\theta(t)\sin(\phi_{1}/2+\delta_{\text{es}}t)(|\Omega_{1}^{\text{C1}}|-|\Omega_{2}^{\text{C4}}|+2|\Omega_{1}^{\text{C4}}|\cos(\phi_{\text{C}4}+\delta_{\text{gs}}t))}{\sqrt{2}}f(t)L_{a,47}$ (62)

$H_{\text{er},6}^{(1)}=\frac{\sin\theta(t)\cos(\phi_{1}/2+\delta_{\text{es}}t)(-|\Omega_{1}^{\text{C1}}|+|\Omega_{2}^{\text{C4}}|-2|\Omega_{1}^{\text{C4}}|\cos(\phi_{\text{C}4}+\delta_{\text{gs}}t))}{\sqrt{2}}f(t)L_{a,57},$ (63)
where $L_{s}$ are the symmetric and $L_{a}$ the anti-symmetric Gell-Mann
operators given by [56]:
$L_{s,jk}=|j\rangle\langle k|+|k\rangle\langle j|$ (64)
$L_{a,jk}=-i(|j\rangle\langle k|-|k\rangle\langle j|),$ (65)
where $1\leq j<k\leq d$, with $d=8$ being the dimension of the Hilbert space.
As we explained in the main text, we need a corrective Hamiltonian that is
decomposed into at least the same operators as the error terms. In other
words, starting from the lab frame, we are looking for control pulses that
drive the C1 and C4 unwanted transitions. However, it is not a strict
requirement that the control pulse drives both error transitions (we will show
a counter-example later for the SnV-).
In the general case, the lab-frame control Hamiltonian for the SiV- has the
form:
$W_{\text{lab}}^{(n)}=(\Omega_{1}^{\text{A1}}\sigma_{1\text{A}}+\Omega_{1}^{\text{C1}}\sigma_{1\text{C}}+\Omega_{1}^{\text{C4}}\sigma_{4\text{C}}+\text{H.c.})g^{(n)}(t),$
(66)
where $g^{(n)}(t)=g_{1}^{(n)}(t)\cos(\omega_{\text{d}}t)+g_{2}^{(n)}(t)\sin(\omega_{\text{d}}t)$, and $\omega_{\text{d}}$ is the frequency of the control. The amplitudes
$g_{1/2}^{(n)}$ are expanded in a Fourier series:
$g_{1/2}^{(n)}=\sum_{k}c_{k,1/2}^{(n)}\left(1-\cos\left(\frac{2\pi
kt}{T}\right)\right),$ (67)
where $T$ is the gate time, $k$ is the order of truncation of the Fourier
expansion, and $n$ is the order of truncation of the Magnus series expansion.
We follow the same procedure of transforming our lab frame control Hamiltonian
into the interaction picture generated by $H_{0,\text{db}}$. More accurately,
we first transform $W_{\text{lab}}^{(n)}(t)$ into the interaction picture via
$R_{\text{int}}=\sum_{j}e^{i\omega_{j}t}|j\rangle\langle j|$ (where $\omega_{j}$ are the eigenenergies), then to the dark-bright frame via
$R_{\text{db}}$, and finally into the interaction picture generated by
$H_{0,\text{db}}$. After this series of transformations, the decomposition of
$W(t)$ in the final frame (and after applying the RWA) yields:
$W_{1}=\frac{|\Omega_{1}^{\text{A1}}|\sin\theta(-g_{2}\cos(\phi_{1}/2+t\Delta_{\text{c}})+g_{1}\sin(\phi_{1}/2+t\Delta_{\text{c}}))}{\sqrt{2}}L_{s,14}$ (68)

$W_{2}=\frac{|\Omega_{1}^{\text{A1}}|\cos\theta(g_{1}\cos(\phi_{1}/2+t\Delta_{\text{c}})+g_{2}\sin(\phi_{1}/2+t\Delta_{\text{c}}))}{\sqrt{2}}L_{s,15}$ (69)

$W_{3}=\frac{|\Omega_{1}^{\text{C1}}|(g_{1}\cos(t\bar{\Delta})+g_{2}\sin(t\bar{\Delta}))+|\Omega_{1}^{\text{C4}}|(g_{1}\cos(\phi_{\text{C}4}+t\tilde{\Delta})+g_{2}\sin(\phi_{\text{C}4}+t\tilde{\Delta}))}{\sqrt{2}}L_{s,17}$ (70)

$W_{4}=\frac{|\Omega_{1}^{\text{A1}}|(g_{1}\cos(t\Delta_{\text{c}})+g_{2}\sin(t\Delta_{\text{c}}))}{\sqrt{2}}L_{s,45}$ (71)

$W_{5}=\frac{\cos\theta(|\Omega_{1}^{\text{C1}}|(g_{1}\cos\alpha-g_{2}\sin\alpha)+|\Omega_{1}^{\text{C4}}|(g_{1}\cos\beta-g_{2}\sin\beta))}{\sqrt{2}}L_{s,47}$ (72)

$W_{6}=\frac{|\Omega_{1}^{\text{C1}}|(g_{2}\cos\alpha+g_{1}\sin\alpha)+|\Omega_{1}^{\text{C4}}|(g_{2}\cos\beta+g_{1}\sin\beta)}{\sqrt{2}}L_{s,57}$ (73)

$W_{7}=\frac{|\Omega_{1}^{\text{A1}}|\sin\theta(g_{1}\cos(\phi_{1}/2+t\Delta_{\text{c}})+g_{2}\sin(\phi_{1}/2+t\Delta_{\text{c}}))}{\sqrt{2}}L_{a,14}$ (74)

$W_{8}=\frac{|\Omega_{1}^{\text{A1}}|\cos\theta(g_{2}\cos(\phi_{1}/2+t\Delta_{\text{c}})-g_{1}\sin(\phi_{1}/2+t\Delta_{\text{c}}))}{\sqrt{2}}L_{a,15}$ (75)

$W_{9}=\frac{|\Omega_{1}^{\text{C1}}|(g_{2}\cos(t\bar{\Delta})-g_{1}\sin(t\bar{\Delta}))+|\Omega_{1}^{\text{C4}}|(-g_{2}\cos(\phi_{\text{C}4}+t\tilde{\Delta})+g_{1}\sin(\phi_{\text{C}4}+t\tilde{\Delta}))}{\sqrt{2}}L_{a,17}$ (76)

$W_{10}=\frac{|\Omega_{1}^{\text{A1}}|\cos(2\theta)(g_{2}\cos(t\Delta_{\text{c}})-g_{1}\sin(t\Delta_{\text{c}}))}{\sqrt{2}}L_{a,45}$ (77)

$W_{11}=\frac{\cos\theta(|\Omega_{1}^{\text{C1}}|(g_{2}\cos\alpha+g_{1}\sin\alpha)+|\Omega_{1}^{\text{C4}}|(g_{2}\cos\beta+g_{1}\sin\beta))}{\sqrt{2}}L_{a,47}$ (78)

$W_{12}=\frac{\sin\theta(|\Omega_{1}^{\text{C1}}|(-g_{1}\cos\alpha+g_{2}\sin\alpha)+|\Omega_{1}^{\text{C4}}|(-g_{1}\cos\beta+g_{2}\sin\beta))}{\sqrt{2}}L_{a,57}$ (79)

$W_{13}=\frac{\sqrt{3}}{4}|\Omega_{1}^{\text{A1}}|\sin(2\theta)(g_{2}\cos(t\Delta_{\text{c}})-g_{1}\sin(t\Delta_{\text{c}}))L_{33}$ (80)

$W_{14}=\frac{\sqrt{5}}{4}|\Omega_{1}^{\text{A1}}|\sin(2\theta)(-g_{2}\cos(t\Delta_{\text{c}})+g_{1}\sin(t\Delta_{\text{c}}))L_{44}$ (81)
with $\bar{\Delta}=\Delta_{\text{c}}-\delta_{\text{es}}$,
$\tilde{\Delta}=\bar{\Delta}+\delta_{\text{gs}}$,
$\alpha=\phi_{1}/2-t\bar{\Delta}$ and
$\beta=\phi_{1}/2-\phi_{\text{C}4}-t\tilde{\Delta}$. Here $\Delta_{\text{c}}$
is the detuning of the control measured from the A1 transition, which is fixed to be the same as the detuning of the laser field $\textbf{E}_{1}$, since we modulate that initial laser. We have also dropped the superscript $n$ in $g_{1}$, $g_{2}$ and $W_{i}$, which denotes the order of the Magnus truncation.
The linear system of equations for the first order Magnus expansion is formed
as follows:
$\left(\begin{array}{ccc|ccc}w_{j=1,k=1,l}^{(1)}&\ldots&w_{j=1,k=k_{\text{max}},l}^{(1)}&w_{j=1,k=1,l^{\prime}}^{(1)}&\ldots&w_{j=1,k=k_{\text{max}},l^{\prime}}^{(1)}\\ \vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ w_{j=j_{\text{max}},k=1,l}^{(1)}&\ldots&w_{j=j_{\text{max}},k=k_{\text{max}},l}^{(1)}&w_{j=j_{\text{max}},k=1,l^{\prime}}^{(1)}&\ldots&w_{j=j_{\text{max}},k=k_{\text{max}},l^{\prime}}^{(1)}\end{array}\right)\begin{pmatrix}c_{k=1,l}^{(1)}\\ \vdots\\ c_{k=k_{\text{max}},l}^{(1)}\\ c_{k=1,l^{\prime}}^{(1)}\\ \vdots\\ c_{k=k_{\text{max}},l^{\prime}}^{(1)}\end{pmatrix}=\begin{pmatrix}h_{\text{err},j=1}^{(1)}\\ \vdots\\ h_{\text{err},j=j_{\text{max}}}^{(1)}\end{pmatrix},$ (82)
with $l=1$ corresponding to $g_{1}$ and $l^{\prime}=2$ corresponding to $g_{2}$. Also, we have defined $h_{\text{err},j}^{(1)}=-i\int_{0}^{T}dt^{\prime}H_{\text{er},j}^{(1)}(t^{\prime})$ to be the integral of the error term of the $j$-th operator. The components of
the first matrix are given by:
$w^{(1)}_{j,k,m}=\int_{0}^{T}dt^{\prime}W_{j,m}(t^{\prime})\left(1-\cos\left(\frac{2\pi
kt^{\prime}}{T}\right)\right),$ (83)
where $W_{j,m}$ is the coefficient of $g_{1}$ ($m=l$) or $g_{2}$ ($m=l^{\prime}$) for each operator $j$ into which we decomposed the control. Since the control is decomposed into more operators than the errors, we set $h_{\text{err},j}=0$ for the components of the error vector where the error Hamiltonian has no decomposition.
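Schematically, the assembly and solution of Eq. (82) proceeds as below. The coefficient functions are placeholders standing in for Eqs. (58)-(63) and (68)-(81), and the factor $-i$ of $h_{\text{err},j}$ is absorbed into the placeholder; the least-squares solve returns the Fourier coefficients $c_{k,l}^{(1)}$.

```python
# Schematic sketch of Eqs. (82)-(83) with placeholder coefficient functions.
import numpy as np
from scipy.integrate import quad

T, k_max, j_max = 0.3, 3, 6               # gate time (ns), truncations

def W_coeff(j, m, t):                     # stand-in for Eqs. (68)-(81)
    return np.cos((j + m) * t)

def H_err(j, t):                          # stand-in for Eqs. (58)-(63)
    return 0.1 * np.sin((j + 1) * t)

cols = [(k, m) for m in (1, 2) for k in range(1, k_max + 1)]
Wmat = np.zeros((j_max, len(cols)))
for j in range(j_max):
    for i, (k, m) in enumerate(cols):     # Eq. (83), by quadrature
        Wmat[j, i] = quad(lambda t: W_coeff(j, m, t)
                          * (1 - np.cos(2 * np.pi * k * t / T)), 0, T)[0]
h = np.array([-quad(lambda t: H_err(j, t), 0, T)[0] for j in range(j_max)])
c, *_ = np.linalg.lstsq(Wmat, h, rcond=None)
print(c)
```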
Regarding the SnV-, we use $yz$-polarization which leads to two vanishing
leakage transitions, i.e. $\Omega_{1}^{\text{C2}}=0$ and
$\Omega_{2}^{\text{C4}}=0$. In this case, the error terms in the final
interaction frame have the decomposition:
$H_{\text{er},1}^{(1)}=\frac{|\Omega_{2}^{\text{C2}}|\cos(t\bar{\Delta})-|\Omega_{1}^{\text{C4}}|\cos\beta}{\sqrt{2}}L_{s,27}$ (84)

$H_{\text{er},2}^{(1)}=\frac{\cos\theta(|\Omega_{2}^{\text{C2}}|\cos(t\bar{\Delta})+|\Omega_{1}^{\text{C4}}|\cos\beta)}{\sqrt{2}}L_{s,47}$ (85)

$H_{\text{er},3}^{(1)}=-\frac{\sin\theta(|\Omega_{2}^{\text{C2}}|\sin(t\bar{\Delta})+|\Omega_{1}^{\text{C4}}|\sin\beta)}{\sqrt{2}}L_{s,57}$ (86)

$H_{\text{er},4}^{(1)}=\frac{-|\Omega_{2}^{\text{C2}}|\sin(t\bar{\Delta})+|\Omega_{1}^{\text{C4}}|\sin\beta}{\sqrt{2}}L_{a,27}$ (87)

$H_{\text{er},5}^{(1)}=-\frac{\cos\theta(|\Omega_{2}^{\text{C2}}|\sin(t\bar{\Delta})+|\Omega_{1}^{\text{C4}}|\sin\beta)}{\sqrt{2}}L_{a,47}$ (88)

$H_{\text{er},6}^{(1)}=-\frac{\sin\theta(|\Omega_{2}^{\text{C2}}|\cos(t\bar{\Delta})+|\Omega_{1}^{\text{C4}}|\cos\beta)}{\sqrt{2}}L_{a,57},$ (89)
where $\bar{\Delta}=\Delta-(\delta_{\text{es}}+\delta_{\text{gs}})$ and
$\beta=t(\Delta-\delta_{\text{es}}+\delta_{\text{gs}})+2\phi_{1}$. For the
resonant case, $\Delta=0$.
For the control, we start from the following lab-frame corrective Hamiltonian:
$W_{\text{lab}}=(\Omega_{1}^{\text{A2}}\sigma_{\text{2A}}+\Omega_{1}^{\text{C4}}\sigma_{4\text{C}}+\text{H.c.})g^{(n)}(t),$
(90)
where we have assumed the same polarization as the original laser that drives the A2 transition. Following a similar procedure, we find that the control Hamiltonian in the final frame has the decomposition:
$W_{1}=\frac{|\Omega_{1}^{\text{A2}}|\sin\theta(-g_{2}\cos(t\Delta_{\text{c}})+g_{1}\sin(t\Delta_{\text{c}}))}{\sqrt{2}}L_{s,24}$ (91)

$W_{2}=\frac{|\Omega_{1}^{\text{A2}}|\cos\theta(g_{1}\cos(t\Delta_{\text{c}})+g_{2}\sin(t\Delta_{\text{c}}))}{\sqrt{2}}L_{s,25}$ (92)

$W_{3}=-\frac{|\Omega_{1}^{\text{C4}}|(g_{1}\cos\beta+g_{2}\sin\beta)}{\sqrt{2}}L_{s,27}$ (93)

$W_{4}=\frac{|\Omega_{1}^{\text{A2}}|(g_{1}\cos(t\Delta_{\text{c}})+g_{2}\sin(t\Delta_{\text{c}}))}{\sqrt{2}}L_{s,45}$ (94)

$W_{5}=\frac{|\Omega_{1}^{\text{C4}}|\cos\theta(g_{1}\cos\beta+g_{2}\sin\beta)}{\sqrt{2}}L_{s,47}$ (95)

$W_{6}=\frac{|\Omega_{1}^{\text{C4}}|\sin\theta(g_{2}\cos\beta-g_{1}\sin\beta)}{\sqrt{2}}L_{s,57}$ (96)

$W_{7}=\frac{|\Omega_{1}^{\text{A2}}|\sin\theta(g_{1}\cos(t\Delta_{\text{c}})+g_{2}\sin(t\Delta_{\text{c}}))}{\sqrt{2}}L_{a,24}$ (97)

$W_{8}=\frac{|\Omega_{1}^{\text{A2}}|\cos\theta(g_{2}\cos(t\Delta_{\text{c}})-g_{1}\sin(t\Delta_{\text{c}}))}{\sqrt{2}}L_{a,25}$ (98)

$W_{9}=\frac{|\Omega_{1}^{\text{C4}}|(-g_{2}\cos\beta+g_{1}\sin\beta)}{\sqrt{2}}L_{a,27}$ (99)

$W_{10}=\frac{|\Omega_{1}^{\text{A2}}|\cos(2\theta)(g_{2}\cos(t\Delta_{\text{c}})-g_{1}\sin(t\Delta_{\text{c}}))}{\sqrt{2}}L_{a,45}$ (100)

$W_{11}=\frac{|\Omega_{1}^{\text{C4}}|\cos\theta(g_{2}\cos\beta-g_{1}\sin\beta)}{\sqrt{2}}L_{a,47}$ (101)

$W_{12}=-\frac{|\Omega_{1}^{\text{C4}}|(g_{1}\cos\beta+g_{2}\sin\beta)}{\sqrt{2}}L_{a,57}$ (102)

$W_{13}=\frac{\sqrt{3}}{4}|\Omega_{1}^{\text{A2}}|\sin(2\theta)(g_{2}\cos(t\Delta_{\text{c}})-g_{1}\sin(t\Delta_{\text{c}}))L_{33}$ (103)

$W_{14}=\frac{\sqrt{5}}{4}|\Omega_{1}^{\text{A2}}|\sin(2\theta)(-g_{2}\cos(t\Delta_{\text{c}})+g_{1}\sin(t\Delta_{\text{c}}))L_{44},$ (104)
where we have defined $\beta=t(\Delta-\delta_{\text{es}}+\delta_{\text{gs}})$.
The detuning of the control is set equal to the two-photon detuning (same
frequency as the $\textbf{E}_{1}$ laser field), i.e. $\Delta_{c}=\Delta$.
Notice that even though we started with a control pulse that does not have
access to the error transition C2, in the final interaction frame the linear
system of equations is well-defined, as for each error decomposition term,
there is a corresponding control decomposition.
Even though the controls are obtained in the interaction frame generated by
the ideal Hamiltonian, the fidelity of the Magnus scheme in the main text is
evaluated in the initial dark-bright (interaction) frame.
## Appendix G Pulse corrections obtained via the DRAG method
Here, we provide further details regarding the derivation of the control
pulses based on the DRAG method. First, we briefly highlight the strategy for
deriving the controls. Following the procedure of Ref. [51], we start by
transforming our Hamiltonian that also includes the error terms in the
rotating frame. We further mention that for the derivation of the corrections,
we consider only the subspace composed of the
$\\{|1\rangle,|4\rangle,|\text{A}\rangle,|\text{C}\rangle\\}$
($\\{|2\rangle,|4\rangle,|\text{A}\rangle,|\text{C}\rangle\\}$) states for the
SiV- (for the SnV-).
Regarding the SiV- system, the Hamiltonian for this reduced subspace in the
lab frame reads:
$H=H_{\text{lab},1}+H_{\text{lab},2}+H_{0},$ (105)
where
$H_{0}=\text{diag}[\omega_{1},\omega_{1}+\delta_{\text{gs}},\omega_{A},\omega_{A}+\delta_{\text{es}}]$,
with $\delta_{\text{gs}}$ and $\delta_{\text{es}}$ being the ground and excited state splittings and $\omega_{1}$, $\omega_{A}$ being the eigenenergies of $|1\rangle$ and $|\text{A}\rangle$ respectively. Also,
$H_{\text{lab},1}$ and $H_{\text{lab},2}$ are given by:
$\begin{split}H_{\text{lab},1}^{(n)}=\Big{(}\Omega_{1}^{(n)}\cos(\omega_{\text{d1}}t)\Big{[}e^{i\phi_{\text{A1}}}\sigma_{\text{1A}}+\lambda_{1}\sigma_{\text{1C}}+\lambda_{12}e^{i\phi_{\text{C}4}}\sigma_{\text{4C}}\Big{]}&+\Omega_{2}^{(n)}\cos(\omega_{\text{d2}}t)\Big{[}e^{i\phi_{\text{A4}}}\sigma_{\text{4A}}-\lambda_{2}\sigma_{\text{4C}}+\lambda_{21}e^{i\phi_{\text{C1}}}\sigma_{\text{1C}}\Big{]}\Big{)}f(t)\\ &+\text{H.c.},\end{split}$ (106)

$\begin{split}H_{\text{lab},2}^{(n)}=\Big{(}\bar{\Omega}_{1}^{(n)}\sin(\omega_{\text{d1}}t)\Big{[}e^{i\phi_{\text{A1}}}\sigma_{\text{1A}}+\lambda_{1}\sigma_{\text{1C}}+\lambda_{12}e^{i\phi_{\text{C}4}}\sigma_{\text{4C}}\Big{]}&+\bar{\Omega}_{2}^{(n)}\sin(\omega_{\text{d2}}t)\Big{[}e^{i\phi_{\text{A4}}}\sigma_{\text{4A}}-\lambda_{2}\sigma_{\text{4C}}+\lambda_{21}e^{i\phi_{\text{C1}}}\sigma_{\text{1C}}\Big{]}\Big{)}f(t)\\ &+\text{H.c.},\end{split}$ (107)
with $f(t)=\text{sech}(\sigma(t-t_{0}))$ and $\omega_{\text{d}1}$,
$\omega_{\text{d}2}$ the laser frequencies. The fields $\bar{\textbf{E}}_{1}$ and $\bar{\textbf{E}}_{2}$ are $\pi/2$-shifted compared to $\textbf{E}_{1}$ and $\textbf{E}_{2}$. Starting from Eq. (105) we perform the transformation
$U_{\text{rot}}=\text{diag}[e^{i\omega_{1}t},e^{i(\omega_{1}+\omega_{\text{d1}}-\omega_{\text{d2}})t},e^{i(\omega_{1}+\omega_{\text{d1}})t},e^{i(\omega_{1}+\omega_{\text{d1}})t}],$
(108)
which leads to the rotating frame Hamiltonian, as well as the transformation
$U_{\phi}=\text{diag}[e^{-i\phi_{\text{A1}}},e^{-i\phi_{\text{A1}}},1,e^{-i\phi_{\text{A1}}}].$
(109)
Notice that the transformation $U_{\phi}$ removes the complex part
$e^{i\phi_{\text{A1}}}$ from the Rabi frequency corresponding to the A1 as
well as A4 transitions, since we fix the Rabi frequencies to be equal a priori
to satisfy the db transformation for $R_{x}$ gates. At this step, our rotating
frame Hamiltonian reads:
$\begin{split}H_{\text{rot}}&=\frac{1}{2}\Big{[}\Big{(}\Omega_{1}\sigma_{1\text{A}}+\Omega_{2}\sigma_{4\text{A}}+(\lambda_{1}\Omega_{1}+e^{-i(t\delta_{\text{gs}}+\phi_{\text{C}4})}\lambda_{21}\Omega_{2})\sigma_{1\text{C}}+(e^{i(t\delta_{\text{gs}}+\phi_{\text{C}4})}\lambda_{12}\Omega_{1}-\lambda_{2}\Omega_{2})\sigma_{4\text{C}}\Big{)}+\text{H.c.}\Big{]}\\ &+\frac{-i}{2}\Big{[}\Big{(}\bar{\Omega}_{1}\sigma_{1\text{A}}+\bar{\Omega}_{2}\sigma_{4\text{A}}+(\lambda_{1}\bar{\Omega}_{1}+e^{-i(t\delta_{\text{gs}}+\phi_{\text{C}4})}\lambda_{21}\bar{\Omega}_{2})\sigma_{1\text{C}}+(e^{i(t\delta_{\text{gs}}+\phi_{\text{C}4})}\lambda_{12}\bar{\Omega}_{1}-\lambda_{2}\bar{\Omega}_{2})\sigma_{4\text{C}}\Big{)}+\text{H.c.}\Big{]}\\ &-\Delta\sigma_{\text{AA}}+(\delta_{\text{es}}-\Delta)\sigma_{\text{CC}}.\end{split}$ (110)
For clarity, we mention that we have defined
$\lambda_{1}=|\Omega_{1}^{\text{C1}}|/|\Omega_{1}|$,
$\lambda_{12}=|\Omega_{1}^{\text{C4}}|/|\Omega_{1}|$,
$\lambda_{2}=|\Omega_{2}^{\text{C4}}|/|\Omega_{2}|$ and
$\lambda_{21}=|\Omega_{2}^{\text{C1}}|/|\Omega_{2}|$, where the subscript $k=\{1,2\}$ in $\Omega_{k}^{ij}$ labels the laser that drives the given error transition. Finally, in order to go to the db-frame we apply
the transformation:
$R_{\text{db}}=\frac{1}{\sqrt{2}}(\sigma_{11}-\sigma_{14}+\sigma_{41}+\sigma_{44})+\sigma_{\text{AA}}+\sigma_{\text{CC}}.$
(111)
To decouple the dark state from the excited state, we further set $\Omega_{1}=\Omega_{2}$ and $\bar{\Omega}_{1}=\bar{\Omega}_{2}$. Thus, the
dark-bright (rotating) Hamiltonian reads:
$\begin{split}H_{\text{db}}=\frac{1}{\sqrt{2}}\Big{(}&-\frac{(e^{i(t\delta_{\text{gs}}+\phi_{\text{C}4})}\lambda_{12}-(\lambda_{1}+\lambda_{2})-\lambda_{21}e^{-i(t\delta_{\text{gs}}+\phi_{\text{C}4})})(\Omega_{1}-i\bar{\Omega}_{1})}{2}\sigma_{\text{dC}}+(\Omega_{1}-i\bar{\Omega}_{1})\sigma_{\text{bA}}\\ &+\frac{(e^{i(t\delta_{\text{gs}}+\phi_{\text{C}4})}\lambda_{12}+(\lambda_{1}-\lambda_{2})+\lambda_{21}e^{-i(t\delta_{\text{gs}}+\phi_{\text{C}4})})(\Omega_{1}-i\bar{\Omega}_{1})}{2}\sigma_{\text{bC}}+\text{H.c.}\Big{)}\\ &-\Delta\sigma_{\text{AA}}+(-\Delta+\delta_{\text{es}})\sigma_{\text{CC}}.\end{split}$ (112)
The leakage subspace $|\text{C}\rangle$ is off-resonant from the remaining
Hamiltonian by an energy cost $\delta_{\text{es}}$. Effectively, this allows
us to perform an expansion of the control fields in the parameter
$\epsilon=1/(T\delta_{\text{es}})$. More explicitly, according to Ref. [51],
by multiplying $H_{\text{db}}$ by the gate time we convert it to the
dimensionless form:
$\tilde{H}_{\text{db}}=\frac{1}{\epsilon}H_{0}+\sum_{n=0}^{\infty}\epsilon^{n}\tilde{H}_{\text{db}}^{(n)}(t),$
(113)
with $H_{0}=\text{diag}[0,0,0,1]$ and $\tilde{H}_{\text{db}}^{(n)}(t)$ given
by:
$\begin{split}\tilde{H}_{\text{db}}^{(n)}(t)=\frac{1}{\sqrt{2}}\Big{(}&-\frac{(e^{i(t\delta_{\text{gs}}+\phi_{\text{C}4})}\lambda_{12}-(\lambda_{1}+\lambda_{2})-\lambda_{21}e^{-i(t\delta_{\text{gs}}+\phi_{\text{C}4})})(\Omega_{1}^{(n)}-i\bar{\Omega}_{1}^{(n)})}{2}\sigma_{\text{dC}}+(\Omega_{1}^{(n)}-i\bar{\Omega}_{1}^{(n)})\sigma_{\text{bA}}\\ &+\frac{(e^{i(t\delta_{\text{gs}}+\phi_{\text{C}4})}\lambda_{12}+(\lambda_{1}-\lambda_{2})+\lambda_{21}e^{-i(t\delta_{\text{gs}}+\phi_{\text{C}4})})(\Omega_{1}^{(n)}-i\bar{\Omega}_{1}^{(n)})}{2}\sigma_{\text{bC}}+\text{H.c.}\Big{)}\\ &-\Delta\sigma_{\text{AA}}+(-\Delta+\delta_{\text{es}})\sigma_{\text{CC}}.\end{split}$ (114)
Note that now in Eq. (114), the control fields $\Omega_{k}^{(n)}$ and
$\bar{\Omega}_{k}^{(n)}$, as well as the detuning, $\Delta^{(n)}$, should be
understood as dimensionless. The next step is to satisfy the target
constraints that will allow us to implement the ideal Hamiltonian of Eq. (14)
of Sec. IV.2, as well as the decoupling constraints that will suppress the
leakage to the $|\text{C}\rangle$ subspace. These constraints are imposed in
the DRAG frame, which, as we mentioned in the main text, is generated by the Hermitian operator $S(t)$ via $A=e^{-iS(t)}$. To this end, the operator
$S(t)$ is expanded in power series in $\epsilon$, as
$S(t)=\sum_{n=1}\epsilon^{n}S^{(n)}(t)$, with $S^{(n)}(t)$ given by:
$S^{(n)}(t)=\sum_{j=1}s^{(n)}_{z,j}\sigma_{jj}+\sum_{j<k}s^{(n)}_{x,jk}\sigma_{x,jk}+\sum_{j<k}s^{(n)}_{y,jk}\sigma_{y,jk}.$
(115)
As a result, the decoupling and target constraints can be solved iteratively
in a consistent manner, and the set of equations for the $n$-th order can be
found in the Appendix of Ref. [51]. For transparency, we highlight how we
solve the constraints and provide the equations for the corrective
modulations.
The first step is to define the target Hamiltonian, which as given in the main
text reads:
$H_{\text{target}}=\frac{h_{x}^{(0)}}{2}\sigma_{x,\text{be}}+h_{z}^{(0)}(\sigma_{\text{bb}}-\sigma_{\text{ee}}).$
(116)
By satisfying the zero-th order constraints we ensure that
$H_{\text{D}}^{(0)}=H_{\text{db},0}$, where $H_{\text{db},0}$ is the ideal
Hamiltonian:
$H_{\text{db},0}=(\Omega_{eff}f(t)\sigma_{x,\text{be}}+\text{H.c.})-\Delta\sigma_{\text{ee}}.$
(117)
Effectively, this means that to the zero-th order, the target gate is the same
in both frames. At the same time, satisfying the zero-th order constraints
implies that certain $S^{(1)}(t)$ elements need to be restricted; these
correspond to $S^{(1)}_{k,\text{dC}}$, $S^{(1)}_{k,\text{bC}}$,
$S^{(1)}_{k,\text{AC}}$, with $k=\\{x,y\\}$. This leaves the parameters
$S^{(1)}_{z,j}$ (with $j=\\{\text{d, b, A, C}\\}$), $S^{(1)}_{k,\text{db}}$,
$S^{(1)}_{k,\text{dA}}$ and $S^{(1)}_{k,\text{bA}}$ free. We set all
$S^{(1)}_{z,j}=0$, as well as
$S^{(1)}_{k,\text{db}}=S^{(1)}_{k,\text{dA}}=0=S^{(1)}_{x,\text{bA}}$. This
choice satisfies the boundary conditions for the frame transformation $A(t)$,
and allows us to obtain the corrective fields from $S^{(1)}_{y,\text{bA}}(t)$ via the first-order target constraints. In particular, for $\Delta^{(1)}=0$,
the target condition:
$\text{Tr}[H_{\text{D}}^{(1)}(\sigma_{\text{bb}}-\sigma_{\text{ee}})]=0,$
(118)
gives the following solution for $S^{(1)}_{y,\text{bA}}(t)$:
$S^{(1)}_{y,\text{bA}}(t)=\frac{\Omega_{1}^{(0)}(\lambda_{1}-\lambda_{2}+2\lambda_{12}\cos(t\delta_{\text{gs}}+\phi_{\text{C}4}))^{2}f(t)}{8\sqrt{2}\delta_{\text{es}}},$
(119)
where we have also set $\lambda_{12}=\lambda_{21}$, which arises from the
polarization definitions we have used. Then, from the target constraints:
$h_{x}^{(1)}=\text{Tr}[H_{\text{D}}^{(1)}\sigma_{x,\text{be}}]=0,$ (120)

$h_{y}^{(1)}=\text{Tr}[H_{\text{D}}^{(1)}\sigma_{y,\text{be}}]=0,$ (121)
we solve for $\Omega_{1}^{(1)}$ and $\bar{\Omega}_{1}^{(1)}$, which depend on
$S^{(1)}_{y,\text{bA}}(t)$. The expressions for the pulse corrections for the
SiV- are:
$\Omega_{1}^{(1)}=\Omega_{2}^{(1)}=\frac{\Delta\,\Omega_{1}^{(0)}(\lambda_{1}-\lambda_{2}+2\lambda_{12}\cos(t\delta_{\text{gs}}+\phi_{\text{C4}}))^{2}}{16\delta_{\text{es}}},$ (122)

$\bar{\Omega}_{1}^{(1)}=\bar{\Omega}_{2}^{(1)}=\frac{\Omega_{1}^{(0)}(\lambda_{1}-\lambda_{2}+2\lambda_{12}\cos(t\delta_{\text{gs}}+\phi_{\text{C4}}))}{16\delta_{\text{es}}}A(t),$ (123)

where $A(t)$ is given by:

$A(t)=-4\delta_{\text{gs}}\lambda_{12}\sin(t\delta_{\text{gs}}+\phi_{\text{C4}})+\left(\lambda_{1}-\lambda_{2}+2\lambda_{12}\cos(t\delta_{\text{gs}}+\phi_{\text{C4}})\right)\frac{\dot{f}(t)}{f(t)}.$ (124)
Lastly, we follow a similar procedure for the SnV-, and we find that the
corrections $\Omega_{1}^{(1)}$ and $\bar{\Omega}_{1}^{(1)}$ are:
$\Omega_{1}^{(1)}=\Omega_{2}^{(1)}=\frac{\Omega_{1}^{(0)}\Delta((\lambda_{1}+\lambda_{2})^{2}+2\lambda_{21}^{2}(1-\cos(2\delta_{\text{gs}}t)))}{16\delta_{\text{es}}}$
(125)
$\bar{\Omega}_{1}^{(1)}=\bar{\Omega}_{2}^{(1)}=\frac{\lambda_{21}^{2}\Omega_{1}^{(0)}\sin(t\delta_{\text{gs}})(2\delta_{\text{gs}}\cos(t\delta_{\text{gs}})+\frac{\dot{f}(t)}{f(t)}\sin(t\delta_{\text{gs}}))}{4\delta_{\text{es}}},$
(126)
where $\lambda_{21}=\Omega_{2}^{\text{C2}}/\Omega_{1}^{(0)}$.
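As a usage sketch (with placeholder ratios $\lambda_{i}$ and laser parameters), the SiV- corrections of Eqs. (122)-(124) can be evaluated directly, using $\dot{f}/f=-\sigma\tanh(\sigma(t-t_{0}))$ for the sech envelope:

```python
# Sketch of the first-order DRAG corrections, Eqs. (122)-(124), for a sech
# envelope. All numerical values are illustrative placeholders.
import numpy as np

sigma, t0 = 30.0, 0.15                        # 1/ns, ns
d_gs, d_es = 2*np.pi*50.0, 2*np.pi*260.0      # splittings (rad/ns)
lam1, lam2, lam12 = 0.3, 0.25, 0.2            # leakage ratios
phiC4, Delta, Om0 = 0.4, 2*np.pi*5.0, sigma   # rad, rad/ns, rad/ns

def corrections(t):
    f_over_f = -sigma * np.tanh(sigma * (t - t0))       # f'(t)/f(t)
    u = lam1 - lam2 + 2 * lam12 * np.cos(t * d_gs + phiC4)
    Om1 = Delta * Om0 * u**2 / (16 * d_es)              # Eq. (122)
    A = -4 * d_gs * lam12 * np.sin(t * d_gs + phiC4) + u * f_over_f
    Om1_bar = Om0 * u * A / (16 * d_es)                 # Eqs. (123)-(124)
    return Om1, Om1_bar

for t in np.linspace(0.0, 0.3, 4):
    print(round(t, 2), corrections(t))
```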
## References
* Balasubramanian _et al._ [2009] G. Balasubramanian, P. Neumann, D. Twitchen, M. Markham, R. Kolesov, N. Mizuochi, J. Isoya, J. Achard, J. Beck, J. Tissler, V. Jacques, P. R. Hemmer, F. Jelezko, and J. Wrachtrup, Nat. Mater. 8, 383 (2009).
* Epstein _et al._ [2005] R. J. Epstein, F. M. Mendoza, Y. K. Kato, and D. D. Awschalom, Nat. Phys. 1, 94 (2005).
* Gaebel _et al._ [2006] T. Gaebel, M. Domhan, I. Popa, C. Wittmann, P. Neumann, F. Jelezko, J. R. Rabeau, N. Stavrias, A. D. Greentree, S. Prawer, J. Meijer, J. Twamley, P. R. Hemmer, and J. Wrachtrup, Nat. Phys. 2, 408 (2006).
* Jelezko _et al._ [2004] F. Jelezko, T. Gaebel, I. Popa, A. Gruber, and J. Wrachtrup, Phys. Rev. Lett. 92, 076401 (2004).
* Gupta _et al._ [2016] A. Gupta, L. Hacquebard, and L. Childress, J. Opt. Soc. Am. B 33, B28 (2016).
* Lovchinsky _et al._ [2016] I. Lovchinsky, A. O. Sushkov, E. Urbach, N. P. de Leon, S. Choi, K. De Greve, R. Evans, R. Gertner, E. Bersin, C. Müller, L. McGuinness, F. Jelezko, R. L. Walsworth, H. Park, and M. D. Lukin, Science 351, 836 (2016).
* Holzgrafe _et al._ [2019] J. Holzgrafe, J. Beitner, D. Kara, H. S. Knowles, and M. Atatüre, npj Quantum Inf. 5, 13 (2019).
* Robledo _et al._ [2011a] L. Robledo, H. Bernien, T. van der Sar, and R. Hanson, New J. Phys. 13, 025013 (2011a).
* Robledo _et al._ [2011b] L. Robledo, L. Childress, H. Bernien, B. Hensen, P. F. A. Alkemade, and R. Hanson, Nature 477, 574 (2011b).
* Hensen _et al._ [2015] B. Hensen, H. Bernien, A. E. Dréau, A. Reiserer, N. Kalb, M. S. Blok, J. Ruitenberg, R. F. L. Vermeulen, R. N. Schouten, C. Abellán, W. Amaya, V. Pruneri, M. W. Mitchell, M. Markham, D. J. Twitchen, D. Elkouss, S. Wehner, T. H. Taminiau, and R. Hanson, Nature 526, 682 (2015).
* Taminiau _et al._ [2012] T. H. Taminiau, J. J. T. Wagenaar, T. van der Sar, F. Jelezko, V. V. Dobrovitski, and R. Hanson, Phys. Rev. Lett. 109, 137602 (2012).
* Golter _et al._ [2016] D. A. Golter, T. Oo, M. Amezcua, I. Lekavicius, K. A. Stewart, and H. Wang, Phys. Rev. X 6, 041060 (2016).
* Zhou _et al._ [2017] B. B. Zhou, A. Baksic, H. Ribeiro, C. G. Yale, F. J. Heremans, P. C. Jerger, A. Auer, G. Burkard, A. A. Clerk, and D. Awschalom, Nat. Phys. 13, 330 (2017).
* Schukraft _et al._ [2016] M. Schukraft, J. Zheng, T. Schröder, S. L. Mouradian, M. Walsh, M. E. Trusheim, H. Bakhru, and D. R. Englund, APL Photonics 1, 020801 (2016).
* Wolters _et al._ [2013] J. Wolters, N. Sadzak, A. W. Schell, T. Schröder, and O. Benson, Phys. Rev. Lett. 110, 027401 (2013).
* Hepp _et al._ [2014] C. Hepp, T. Müller, V. Waselowski, J. N. Becker, B. Pingault, H. Sternschulte, D. Steinmüller-Nethl, A. Gali, J. R. Maze, M. Atatüre, and C. Becher, Phys. Rev. Lett. 112, 036405 (2014).
* Müller _et al._ [2014] T. Müller, C. Hepp, B. Pingault, E. Neu, S. Gsell, M. Schreck, H. Sternschulte, D. Steinmüller-Nethl, C. Becher, and M. Atatüre, Nat. Commun 5, 3328 (2014).
* Zhang _et al._ [2018] J. L. Zhang, S. Sun, M. J. Burek, C. Dory, Y.-K. Tzeng, K. A. Fischer, Y. Kelaita, K. G. Lagoudakis, M. Radulaski, Z.-X. Shen, N. A. Melosh, S. Chu, M. Lončar, and J. Vučković, Nano Lett. 18, 1360 (2018).
* Sun _et al._ [2018] S. Sun, J. L. Zhang, K. A. Fischer, M. J. Burek, C. Dory, K. G. Lagoudakis, Y.-K. Tzeng, M. Radulaski, Y. Kelaita, A. Safavi-Naeini, Z.-X. Shen, N. A. Melosh, S. Chu, M. Lončar, and J. Vučković, Phys. Rev. Lett. 121, 083601 (2018).
* Meesala _et al._ [2018] S. Meesala, Y.-I. Sohn, B. Pingault, L. Shao, H. A. Atikian, J. Holzgrafe, M. Gündoğan, C. Stavrakas, A. Sipahigil, C. Chia, R. Evans, M. J. Burek, M. Zhang, L. Wu, J. L. Pacheco, J. Abraham, E. Bielejec, M. D. Lukin, M. Atatüre, and M. Lončar, Phys. Rev. B 97, 205444 (2018).
* Sohn _et al._ [2018] Y. Sohn, S. Meesala, B. Pingault, H. A. Atikian, J. Holzgrafe, M. Gündoğan, C. Stavrakas, M. J. Stanley, A. Sipahigil, J. Choi, M. Zhang, J. L. Pacheco, J. Abraham, E. Bielejec, M. D. Lukin, M. Atatüre, and M. Lončar, Nat. Commun. 9, 2012 (2018).
* Lemonde _et al._ [2018] M.-A. Lemonde, S. Meesala, A. Sipahigil, M. J. A. Schuetz, M. D. Lukin, M. Lončar, and P. Rabl, Phys. Rev. Lett. 120, 213603 (2018).
* Machielse _et al._ [2019] B. Machielse, S. Bogdanovic, S. Meesala, S. Gauthier, M. J. Burek, G. Joe, M. Chalupnik, Y. I. Sohn, J. Holzgrafe, R. E. Evans, C. Chia, H. Atikian, M. K. Bhaskar, D. D. Sukachev, L. Shao, S. Maity, M. D. Lukin, and M. Lončar, Phys. Rev. X 9, 031022 (2019).
* Bhaskar _et al._ [2020] M. K. Bhaskar, R. Riedinger, B. Machielse, D. S. Levonian, C. T. Nguyen, E. N. Knall, H. Park, D. Englund, M. Lončar, D. D. Sukachev, and M. D. Lukin, Nature 580, 60 (2020).
* Trusheim _et al._ [2020] M. E. Trusheim, B. Pingault, N. H. Wan, M. Gündoğan, L. De Santis, R. Debroux, D. Gangloff, C. Purser, K. C. Chen, M. Walsh, J. J. Rose, J. N. Becker, B. Lienhard, E. Bersin, I. Paradeisanos, G. Wang, D. Lyzwa, A. R.-P. Montblanch, G. Malladi, H. Bakhru, A. C. Ferrari, I. A. Walmsley, M. Atatüre, and D. Englund, Phys. Rev. Lett. 124, 023602 (2020).
* Rugar _et al._ [2019] A. E. Rugar, C. Dory, S. Sun, and J. Vučković, Phys. Rev. B 99, 205417 (2019).
* Rugar _et al._ [2020a] A. E. Rugar, H. Lu, C. Dory, S. Sun, P. J. McQuade, Z.-X. Shen, N. A. Melosh, and J. Vučković, Nano Lett. 20, 1614 (2020a).
* Aghaeimeibodi _et al._ [2021] S. Aghaeimeibodi, D. Riedel, A. E. Rugar, C. Dory, and J. Vučković, (2021), arXiv:2103.01917 [physics.optics] .
* Rugar _et al._ [2021] A. E. Rugar, S. Aghaeimeibodi, D. Riedel, C. Dory, H. Lu, P. J. McQuade, Z.-X. Shen, N. A. Melosh, and J. Vučković, (2021), arXiv:2102.11852 [physics.optics] .
* Rugar _et al._ [2020b] A. E. Rugar, C. Dory, S. Aghaeimeibodi, H. Lu, S. Sun, S. D. Mishra, Z.-X. Shen, N. A. Melosh, and J. Vučković, ACS Photonics 7, 2356 (2020b).
* Rogers _et al._ [2014a] L. J. Rogers, K. D. Jahnke, M. W. Doherty, A. Dietrich, L. P. McGuinness, C. Müller, T. Teraji, H. Sumiya, J. Isoya, N. B. Manson, and F. Jelezko, Phys. Rev. B 89, 235101 (2014a).
* Becker and Becher [2017] J. N. Becker and C. Becher, physica status solidi (a) 214, 1700586 (2017).
* Rogers _et al._ [2014b] L. J. Rogers, K. D. Jahnke, T. Teraji, L. Marseglia, C. Müller, B. Naydenov, H. Schauffer, C. Kranz, J. Isoya, L. P. McGuinness, and F. Jelezko, Nat. Commun. 5 (2014b).
* Goss _et al._ [1996] J. P. Goss, R. Jones, S. J. Breuer, P. R. Briddon, and S. Öberg, Phys. Rev. Lett. 77, 3041 (1996).
* Sipahigil _et al._ [2014] A. Sipahigil, K. D. Jahnke, L. J. Rogers, T. Teraji, J. Isoya, A. S. Zibrov, F. Jelezko, and M. D. Lukin, Phys. Rev. Lett. 113, 113602 (2014).
* Evans _et al._ [2016] R. E. Evans, A. Sipahigil, D. D. Sukachev, A. S. Zibrov, and M. D. Lukin, Phys. Rev. Applied 5, 044010 (2016).
* Benedikter _et al._ [2017] J. Benedikter, H. Kaupp, T. Hümmer, Y. Liang, A. Bommer, C. Becher, A. Krueger, J. M. Smith, T. W. Hänsch, and D. Hunger, Phys. Rev. Applied 7, 024031 (2017).
* Bradac _et al._ [2019] C. Bradac, W. Gao, J. Forneris, M. E. Trusheim, and I. Aharonovich, Nat. Commun. 10 (2019).
* Nguyen _et al._ [2019a] C. T. Nguyen, D. D. Sukachev, M. K. Bhaskar, B. Machielse, D. S. Levonian, E. N. Knall, P. Stroganov, C. Chia, M. J. Burek, R. Riedinger, H. Park, M. Lončar, and M. D. Lukin, Phys. Rev. B 100, 165428 (2019a).
* Nguyen _et al._ [2019b] C. T. Nguyen, D. D. Sukachev, M. K. Bhaskar, B. Machielse, D. S. Levonian, E. N. Knall, P. Stroganov, R. Riedinger, H. Park, M. Lončar, and M. D. Lukin, Phys. Rev. Lett. 123, 183602 (2019b).
* Rogers _et al._ [2014c] L. J. Rogers, K. D. Jahnke, M. H. Metsch, A. Sipahigil, J. M. Binder, T. Teraji, H. Sumiya, J. Isoya, M. D. Lukin, P. Hemmer, and F. Jelezko, Phys. Rev. Lett. 113, 263602 (2014c).
* Sukachev _et al._ [2017] D. D. Sukachev, A. Sipahigil, C. T. Nguyen, M. K. Bhaskar, R. E. Evans, F. Jelezko, and M. D. Lukin, Phys. Rev. Lett. 119, 223602 (2017).
* Pingault _et al._ [2014] B. Pingault, J. N. Becker, C. H. H. Schulte, C. Arend, C. Hepp, T. Godde, A. I. Tartakovskii, M. Markham, C. Becher, and M. Atatüre, Phys. Rev. Lett. 113, 263601 (2014).
* Zhang _et al._ [2017] J. L. Zhang, K. G. Lagoudakis, Y.-K. Tzeng, C. Dory, M. Radulaski, Y. Kelaita, K. A. Fischer, S. Sun, Z.-X. Shen, N. A. Melosh, S. Chu, and J. Vučković, Optica 4, 1317 (2017).
* Becker _et al._ [2016] J. N. Becker, J. Gorlitz, C. Arend, M. Markham, and C. Becher, Nat. Commun. 7 (2016).
* Pingault _et al._ [2017] B. Pingault, D. Jarausch, C. Hepp, L. Klintberg, J. N. Becker, M. Markham, C. Becher, and M. Atatüre, Nat. Commun. 8 (2017).
* Becker _et al._ [2018] J. N. Becker, B. Pingault, D. Groß, M. Gündoğan, N. Kukharchyk, M. Markham, A. Edmonds, M. Atatüre, P. Bushev, and C. Becher, Phys. Rev. Lett. 120, 053603 (2018).
* Economou _et al._ [2006] S. E. Economou, L. J. Sham, Y. Wu, and D. G. Steel, Phys. Rev. B 74, 205415 (2006).
* Economou and Reinecke [2007] S. E. Economou and T. L. Reinecke, Phys. Rev. Lett. 99, 217401 (2007).
* Roque _et al._ [2021] F. Roque, A. A. Clerk, and H. Ribeiro, npj Quantum Inf. 7, 28 (2021).
* Gambetta _et al._ [2011] J. M. Gambetta, F. Motzoi, S. T. Merkel, and F. K. Wilhelm, Phys. Rev. A 83, 012308 (2011).
* L. S. Theis and Wilhelm [2018] S. M. L. S. Theis, F. Motzoi and F. K. Wilhelm, EPL 123, 60001 (2018).
* Motzoi _et al._ [2009] F. Motzoi, J. M. Gambetta, P. Rebentrost, and F. K. Wilhelm, Phys. Rev. Lett. 103, 110501 (2009).
* Tai _et al._ [2017] W. Tai, T. Yang, C. Ge, and D. Jia, in _Terahertz, RF, Millimeter, and Submillimeter-Wave Technology and Applications X_ , Vol. 10103, edited by L. P. Sadwick and T. Yang, International Society for Optics and Photonics (SPIE, 2017) pp. 61 – 68.
* Rosen and Zener [1932] N. Rosen and C. Zener, Phys. Rev. 40, 502 (1932).
* Bertlmann and Krammer [2008] A. R. Bertlmann and P. Krammer, J. Phys. A: Math. Theor. 41, 235303 (2008).
|
Extra Dimension-Inspired Models: $\mathrm{Z^{\prime}}$, $\mathrm{W^{\prime}}$,
Dijet Resonances, Black Hole Searches
Tobias Pook, on behalf of the ATLAS & CMS Collaborations (this work was
supported by the German Federal Ministry of Education and Research)
III. Physikalisches Institut A
Physikzentrum, RWTH Aachen University, 52056 Aachen
t.pook <at> cern.ch
> I give a summary of BSM searches performed by the ATLAS and CMS experiments,
> with a focus on heavy gauge bosons, extra dimensions and quantum black
> holes. The presented results use data collected during 2012, when the LHC
> operated at a center-of-mass energy of $\sqrt{s}=8\,\textrm{TeV}$.
>
> In memory of Harris Hassan
>
> PRESENTED AT
>
> the Twelfth Conference on the Intersections of Particle and Nuclear Physics
> (CIPANP15)
> Vail, USA, May 19–24, 2015
## 1 Introduction
The Large Hadron Collider (LHC) operated at a center-of-mass energy of
$\sqrt{s}=8\,\mathrm{TeV}$ during 2012, and the multi-purpose particle
detectors ATLAS [2] and CMS [1] recorded data corresponding to an integrated
luminosity of $20\,\mathrm{fb^{-1}}$. These data present a unique opportunity
to search for physics beyond the standard model (BSM), and both experiments
have interpreted their measurements in terms of a variety of theories.
This work aims to briefly summarize search results in the dilepton (same and
opposite flavor), lepton$+{\not\mathrel{E}}_{T}$, dijet and ditop channels
for a selected set of related BSM theories which predict the existence of
heavy gauge bosons $\mathrm{Z^{\prime}}$ and $\mathrm{W^{\prime}}$, extra
dimensions or quantum black holes.
## 2 Theories
The extra dimension models summarized in the following describe extensions
of our spacetime by additional compactified dimensions. The related theories
may lower the fundamental Planck mass $M_{D}$ to the $\mathrm{TeV}$ region
and thus solve the Higgs mass hierarchy problem. This summary focuses on the
most popular theories: the Randall-Sundrum (RS) [3] and the Arkani-Hamed,
Dimopoulos, Dvali (ADD) [4, 5] models. Neither model provides a fundamental
theory of quantum gravity; both are built as effective field theories based
on classical assumptions. They use parts of the mathematical framework
developed in string theory, more precisely brane physics, to confine SM
particles to a $(3+1)$-dimensional subspace of the $(3+1+n)$-dimensional
space-time [6]. Extra dimension theories predict a spectrum of graviton
modes (Kaluza-Klein towers), or a spectrum of heavier copies of SM particles
if these are able to propagate in the compactified additional dimensions.
The ADD model assumes a flat spacetime. The model parameter under study
depends on the production process. The direct production cross section depends
directly on $M_{D}$, while the virtual graviton exchange is only able to probe
the UV cut-off $M_{s}$, which can be argued to be close to $M_{D}$.
The RS model assumes a warped space-time, represented by an exponential term
in the metric
$ds^{2}=e^{-2kr_{c}|\phi|}\eta_{\mu\nu}dx^{\mu}dx^{\nu}+r_{c}^{2}d\phi^{2}$.
The cross section in these models depends on the ratio $\tilde{k}$ of the
warp factor $k$ and $M_{D}$. Several extensions of this model exist, most
notably the Bulk RS1 scenario [7], in which the fermion and boson fields are
localized near the TeV and the Planck brane, respectively. This allows
solving the flavor puzzle and the Higgs mass hierarchy problem without
introducing an additional hierarchy.
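As an illustrative aside (ours, not part of the cited analyses): the warp
factor rescales brane-localized mass parameters multiplicatively,
$m=e^{-kr_{c}\pi}m_{0}$, so a fundamental mass $m_{0}$ of order the Planck
scale already appears at $m\sim\mathrm{TeV}$ for the modest product
$kr_{c}\simeq 12$, since $e^{-12\pi}\approx 10^{-16}$.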
Heavy Gauge Bosons $\mathbf{\mathrm{W^{\prime}},\mathrm{Z^{\prime}}}$ refer
to heavier versions of the weak gauge bosons and are predicted in several
classes of theories. The most studied scenario is the sequential standard
model (SSM) [8], where the $\mathrm{W^{\prime}}$ and $\mathrm{Z^{\prime}}$
bosons carry exactly the same quantum numbers as, and interfere with, their
SM counterparts. In several theories the $\mathrm{Z^{\prime}}$ is expected to
have flavor-violating decays. Relevant with respect to the presented searches
are generic extensions of the SSM with additional flavor-violating couplings,
which are expressed as ratios $Q_{ij}\,(i,j=e,\mu,\tau)$ of the SSM
same-flavor coupling [9], or extra dimension models where the mass hierarchy
among the SM families is explained by the overlap of the particle wave
functions when fermions and the Higgs are localized on a higher-dimensional
brane [10]. Topcolor models, which suggest additional gauge couplings, are of
special interest in decay channels with top quarks, where the color-singlet
$\mathrm{Z^{\prime}}$ is leptophobic and couples only to first- and
third-generation quarks [11].
Quantum Black Holes (QBH) may be produced if LHC collisions take place above
a lowered fundamental Planck scale. All discussed models assume either the
ADD or the RS model as their starting point, but include different sets of
additional assumptions. The main parameters controlling the signal shape and
cross section are the number of additional dimensions and the threshold mass
$M_{th}$ necessary to produce a QBH. The presented models are often referred
to by the generator which implements them. The generators used for the
summarized searches are:
* •
CalcHEP for flavor-violating QBH decays [12].
* •
QBH, which uses a generic description of gravitationally bound two-body
states with a non-thermal QBH decay [13].
* •
BLACKMAX, which includes a wide range of black hole theories; most relevant
for the presented analyses are models comparable to the QBH generator with
additional model assumptions [14].
QBH theories share an important limitation: black hole production is expected
at scales where gravity becomes strong, and one hopes that the extrapolation
from the classical domain holds.
## 3 Selected Searches with the CMS and ATLAS Experiments
### 3.1 Dilepton (same flavor)
The dilepton channel is theoretically well understood and has been studied
by both experiments [15, 16, 17]. Both analyses use a model-unspecific
selection which aims to reliably select well-reconstructed, isolated pairs of
electrons or muons. No significant deviation from the SM was observed, and
two distinct limit strategies have been used to set limits for resonant and
non-resonant BSM signals.
The resonant searches fit smooth functions to both the data and the
background prediction. A set of signal shape templates with different
$\mathrm{Z^{\prime}}$ masses is used to construct a background + signal
hypothesis, which is compared to both the data and the background-only
hypothesis. The resulting limit on the cross section times efficiency for a
SSM $\mathrm{Z^{\prime}}$ as a function of the resonance mass is shown in
fig. 1.
Figure 1: 95% CL limits on the cross section $\times$ branching ratio
$\times$ efficiency as a function of the resonance mass for the ATLAS [15]
(left) and CMS [17] (right) searches.
Both experiments report observed limits of 2.9 TeV on the
$\mathrm{Z^{\prime}}_{SSM}$ mass. The technique used to derive these results
differs between the two experiments. The ATLAS collaboration uses the
complete spectrum with a binned likelihood approach, which gains additional
sensitivity for the studied SSM by including interference effects outside the
resonance. The CMS collaboration has chosen a more general strategy using an
unbinned likelihood approach with a narrow-width approximation. Its results
may be reinterpreted for any model with comparable acceptance by simply
applying the cross section ratio between the SSM $\mathrm{Z^{\prime}}$ and
the model under investigation within a mass window of $\pm
5\%\,\sqrt{\hat{s}}$. This difference explains the stronger fluctuations of
the CMS results in fig. 1.
Possible signals from the lightest Kaluza-Klein graviton mode in RS models
serve as a benchmark for spin-2 resonances with a modified signal acceptance.
The ATLAS results show exclusion limits in the $k-M_{Pl}$ plane, while CMS
chose to present results similar to the $\mathrm{Z^{\prime}}$
interpretation, see figure 2. The comparison of the two CMS limit plots shows
that differences in cross section between the $\mathrm{Z^{\prime}}$ and RS
gravitons are only visible for small resonance masses.
Figure 2: 95% CL exclusion limits in the $k-M_{Pl}$ plane as reported by the
ATLAS collaboration [15] (left), and 95% CL exclusion limits on the cross
section times efficiency as a function of the resonance mass for spin-2 RS
gravitons by CMS [17] (right).
QBHs are expected to create an edge-like resonance structure in the dilepton
mass spectrum just above the production threshold mass $M_{th}$. The ATLAS
search uses the resonant search strategy to derive 95% CL exclusion limits on
$M_{th}$ of 3.65 TeV for a signal based on an ADD scenario $(n=6)$ and of
2.24 TeV for an RS-based scenario $(n=1)$, using the QBH generator.
Both experiments performed non-resonant searches using a single-bin counting
experiment above a lower mass threshold, which was optimized for the best
exclusion limits on the ADD UV cut-off $M_{s}$ for different numbers of extra
dimensions, as shown in fig. 3; the observed limits range from 4.9 TeV to 3.3
TeV for 3 to 7 additional dimensions.
Figure 3: Comparison of 95% CL exclusion limits on the UV cut-off $M_{s}$ as
a function of the number of additional dimensions for different searches by
ATLAS [15], CMS [17, 18, 19] and D0 [20].
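To make the single-bin counting strategy concrete, here is a minimal sketch
(ours, purely illustrative: the actual analyses use a CLs construction with
systematic uncertainties) of how an observed count and an expected background
translate into a 95% CL upper limit on the signal yield:

```python
import math

def poisson_cdf(n_obs: int, mu: float) -> float:
    """P(N <= n_obs) for N ~ Poisson(mu)."""
    return sum(math.exp(-mu) * mu**k / math.factorial(k)
               for k in range(n_obs + 1))

def signal_upper_limit(n_obs: int, bkg: float, cl: float = 0.95) -> float:
    """Smallest signal yield s such that observing <= n_obs events is
    already improbable (probability <= 1 - cl) under mean s + bkg."""
    s = 0.0
    while poisson_cdf(n_obs, s + bkg) > 1.0 - cl:
        s += 0.01
    return s

# Example: 3 events observed on an expected background of 2.5 events
print(signal_upper_limit(3, 2.5))  # ~5.3 signal events excluded at 95% CL
```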
### 3.2 Dilepton (mixed flavor)
Dilepton events with opposite flavor were studied by both experiments in the
$e\mu$ channel; ATLAS has performed additional searches in the $e\tau$ and
$\mu\tau$ channels. Lepton-flavor-violating decays are of special interest
because of the good mass resolution and the small SM background contributions
in these final states.
Both experiments searched for $\mathrm{Z^{\prime}}$ bosons with additional
lepton-flavor-violating couplings. The ATLAS search chose the couplings
$\mathrm{Z^{\prime}}\rightarrow e\mu,e\tau,\mu\tau$ to be equal to the SSM
$\mathrm{Z^{\prime}}$ same-flavor coupling. A binned likelihood approach was
used to derive limits on the $\mathrm{Z^{\prime}}$ mass of 2.5 TeV ($e\mu$),
2.2 TeV ($e\tau$) and 2.2 TeV ($\mu\tau$) at 95% CL. The CMS analysis studied
an extra-dimension-inspired model where the coupling is set to match existing
strong bounds from $K_{L}\rightarrow e\mu$ decays; this search turned out not
to be sensitive to the $\mathrm{Z^{\prime}}$ model under investigation.
The quantum gravitational nature of QBHs suggests the existence of
lepton-flavor-violating decays. The CMS experiment has interpreted its
measurements in terms of several QBH models implemented in CalcHEP, where the
threshold mass is set equal to the reduced Planck mass. Limits at 95% CL were
set on $M_{th}$ of 2.4 TeV in an RS-based scenario $(n=1)$ and of 3.15 TeV to
3.63 TeV for 2 to 6 extra dimensions in an ADD-based scenario.
### 3.3 Lepton+$\not\mathrel{\textbf{E}}_{\small\textbf{T}}$
Both experiments published results for final states with one
high-$\mathrm{p_{T}}$ lepton and a significant amount of missing momentum in
the transverse plane, ${\not\mathrel{E}}_{T}$ [21, 22]. The high-mass tails
of this signature are dominated by off-shell SM $W$ production. Single-lepton
triggers with transverse momentum thresholds for electrons (muons) of
$\mathrm{p_{T}}>120\,\textrm{GeV}\,(\mathrm{p_{T}}>40\,\textrm{GeV})$ and
$\mathrm{p_{T}}>80\,\textrm{GeV}\,(\mathrm{p_{T}}>40\,\textrm{GeV})$ have
been used by ATLAS and CMS, respectively. Events with additional
well-reconstructed same-flavor leptons with
$\mathrm{p_{T}}>20\,\textrm{GeV}$ are discarded in the ATLAS analysis, while
CMS uses $\mathrm{p_{T}}>35\,\textrm{GeV}$ for electrons and
$\mathrm{p_{T}}>25\,\textrm{GeV}$ for muons. The transverse mass
$M_{T}=\sqrt{2\mathrm{p_{T}}^{\mathit{l}}{\not\mathrel{E}}_{T}\left(1-\cos[\Delta\phi(\vec{\mathrm{p_{T}}}^{\mathit{l}},\vec{{\not\mathrel{E}}_{T}})]\right)}$
is used as the main observable for $\mathrm{W^{\prime}}$ searches.
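For illustration (ours, with made-up numbers), the observable is computed
from the reconstructed lepton and missing momentum as follows:

```python
import math

def transverse_mass(pt_lep: float, met: float, dphi: float) -> float:
    """M_T = sqrt(2 * pT(lepton) * MET * (1 - cos(dphi))), where dphi is
    the azimuthal angle between the lepton and the missing momentum.
    Units (e.g. GeV) carry through from the inputs."""
    return math.sqrt(2.0 * pt_lep * met * (1.0 - math.cos(dphi)))

# A back-to-back lepton/MET topology, as favored by the selections above:
print(transverse_mass(pt_lep=1600.0, met=1600.0, dphi=math.pi))  # 3200 GeV
```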
Additional final-state-specific kinematic cuts distinguish the two searches:
ATLAS adjusts the lower threshold on ${\not\mathrel{E}}_{T}$ to the trigger
$\mathrm{p_{T}}$ thresholds for each flavor, while CMS applies a back-to-back
cut $\Delta\phi(l,{\not\mathrel{E}}_{T})>2.5$ and a requirement on the
$\mathrm{p_{T}}$-${\not\mathrel{E}}_{T}$ ratio,
$0.4<\mathrm{p_{T}}/{\not\mathrel{E}}_{T}<1.5$. Both cuts reflect the
expectation that BSM particles are produced with a balanced recoil at leading
order.
ATLAS and CMS report lower limits of 3.2 TeV and 3.3 TeV on the
$\mathrm{W^{\prime}}$ mass at 95% CL. Different statistical procedures were
used to derive the limits: ATLAS uses a single-bin counting experiment above
a varying lower threshold on $M_{T}$, with cross section limits calculated
for an optimized threshold at each considered $\mathrm{W^{\prime}}$ mass, see
fig. 4. The CMS analysis used a shape-based template fit similar to the
resonant ATLAS search in the dilepton channel, see fig. 4. CMS has also
reported limits based on single-bin counting experiments above varying mass
thresholds, but did not use this approach for the $\mathrm{W^{\prime}}$
interpretation.
Figure 4: 95% CL exclusion limits on branching ratio times cross section as
a function of the $\mathrm{W^{\prime}}$ mass for ATLAS [21] (left) and CMS
[22] (right).
### 3.4 Dijet
Final states with two high-$\mathrm{p_{T}}$ jets profit from a large cross
section at hadron colliders like the LHC, and enough events were collected to
extract shape information up to several TeV. Both experiments place
additional requirements on the dijet event kinematics in their searches [23,
24]: a separation in (pseudo)rapidity between the two
highest-$\mathrm{p_{T}}$ jets of $\Delta y<0.6$ and $\Delta\eta<0.65$ is used
by ATLAS and CMS, respectively. ATLAS used so-called pre-scaled triggers,
where only a fixed fraction of all events is saved. This allows collecting
data with lowered trigger requirements and decreases the lower limit for
searches in the dijet mass distribution to $m_{jj}>250\,\textrm{GeV}$,
compared to the CMS analysis with $m_{jj}>890\,\textrm{GeV}$. Both
experiments use smooth fit functions to estimate the background expectation
from data and compare it to signal templates using a binned likelihood
approach. Lower limits on the particle masses for the SSM
$\mathrm{Z^{\prime}}$, $\mathrm{W^{\prime}}$ and Kaluza-Klein gravitons in
the RS model with $n=1$ are listed in table 1.
Dijet events also represent the most sensitive channel for QBH searches, and
many QBH models predict that the produced black hole decays primarily to
dijet final states [25]. Lower limits on $M_{th}$ were set by both
experiments using the model implemented in the QBH generator: ATLAS has set
$M_{D}=M_{th}$ and reports 5.7 TeV, while CMS kept both variables as free
parameters and finds a limit of 5.8 TeV for $M_{D}=5\,\textrm{TeV}$. An
additional lower bound of 5.6 TeV on $M_{th}$ was set by the ATLAS experiment
for a related model implemented in BLACKMAX, where $M_{th}$ is again set
equal to the reduced Planck mass $M_{D}$.
[ TeV ] | $\mathrm{W^{\prime}}$ | $\mathrm{Z^{\prime}}$ | $G_{KK}$ (RS)
---|---|---|---
ATLAS | 2.5 | – | –
CMS | 2.2 | 1.7 | 1.6
Table 1: 95% CL lower mass limits on the SSM $\mathrm{W^{\prime}}$,
$\mathrm{Z^{\prime}}$ and $G_{KK}$ (RS $n=1$) as reported by ATLAS [23] and
CMS [24] for the dijet channel.
### 3.5 Ditop
Figure 5: Graphical representation of the possible decay modes for a single
top quark.
The analysis of ditop final states by ATLAS and CMS [26, 27] has
significantly increased its sensitivity by employing new analysis strategies
for the reconstruction of boosted top decays and the subsequent top
identification via so-called top-tagging techniques. Each of the two tops
decays either leptonically or hadronically; the hadronic decays can be
further split into a resolved and a boosted topology, see fig. 5. The
combination of these decay modes for both tops results in the ditop decay
modes leptonic-leptonic, leptonic-hadronic, leptonic-hadronic(boosted),
hadronic-hadronic and hadronic(boosted)-hadronic(boosted). ATLAS restricted
its analysis to the most sensitive combination for the models under
investigation, with one leptonic and one hadronic decay, while CMS analyzed
all possible decay modes and combined the measurements for the final result.
Limits have been set on the $\mathrm{Z^{\prime}}$ mass based on topcolor
models as described in [11], where the coupling to lighter quarks is
suppressed: ATLAS and CMS find lower limits of 1.8 TeV and 2.4 TeV, assuming
widths of 1.2% and 1% of the $\mathrm{Z^{\prime}}$ mass, respectively; see
figure 6.
The Bulk RS1 model predicts a suppression of production and decay modes
involving lighter quarks. This leaves $t\overline{t}$ final states as the
most promising channel to probe the production of Kaluza-Klein gluons
$g_{KK}$ at the LHC [7]. ATLAS and CMS report lower limits on the mass of the
lightest Kaluza-Klein mode of the gluon of 2.2 TeV and 2.8 TeV,
respectively.
Figure 6: 95% CL exclusion limits on the branching ratio $\times$ cross
section as a function of the $\mathrm{Z^{\prime}}$ mass for ATLAS [26] (left)
and CMS [27] (right).
## 4 Conclusion
ATLAS and CMS have both performed a large number of searches for the
presented theories, and it should be emphasized that this summary reports
only on a small subset of all searches. No significant evidence for physics
beyond the standard model has been reported. Comprehensive lists of all
searches for new physics related to this talk are constantly updated online
(ATLAS: ExoticsPublicResults; CMS: PhysicsResultsEXO, PhysicsResultsB2G). The
reach of most of the presented analyses is limited by the accessible phase
space. The recent restart of the LHC at a center-of-mass energy of
$\sqrt{s}=13\,\textrm{TeV}$ will therefore increase the discovery reach for
most theories even with a fraction of the integrated luminosity recorded at
$8\,\textrm{TeV}$.
ACKNOWLEDGEMENTS
I am grateful to Serguei Petrouchanko, Johannes Haller, Tobias Golling and
Koji Terashi for their helpful input during the preparation of my conference
contribution. I thank CERN and the ATLAS and CMS collaborations for their
great work operating the LHC and for providing the results for this summary.
## References
* [1] CMS Collaboration, JINST 3, S08004 (2008).
* [2] ATLAS Collaboration, JINST 3, S08003 (2008).
* [3] L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999), arXiv:hep-ph/9905221.
* [4] N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B 429, 263 (1998), arXiv:hep-ph/9803315.
* [5] I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. R. Dvali, Phys. Lett. B 436, 257 (1998), arXiv:hep-ph/9804398.
* [6] I. Antoniadis, Phys. Lett. B 246, 317 (1990).
* [7] K. Agashe, A. Belyaev, T. Krupovnickas, G. Perez and J. Virzi, Phys. Rev. D 77, 015003 (2008), arXiv:hep-ph/0612015.
* [8] G. Altarelli, B. Mele and M. Ruiz-Altaba, Z. Phys. C 45, 109 (1989) [Z. Phys. C 47, 676 (1990)].
* [9] B. Murakami, Phys. Rev. D 65, 055003 (2002), arXiv:hep-ph/0110095.
* [10] J. M. Frere, M. V. Libanov, E. Y. Nugaev and S. V. Troitsky, JETP Lett. 79, 598 (2004) [Pisma Zh. Eksp. Teor. Fiz. 79, 734 (2004)], arXiv:hep-ph/0404139.
* [11] R. M. Harris and S. Jain, Eur. Phys. J. C 72, 2072 (2012), arXiv:1112.4928.
* [12] A. Belyaev, N. D. Christensen and A. Pukhov, Comput. Phys. Commun. 184, 1729 (2013), arXiv:1207.6082.
* [13] D. M. Gingrich, Comput. Phys. Commun. 181, 1917 (2010), arXiv:0911.5370.
* [14] D. C. Dai, G. Starkman, D. Stojkovic, C. Issever, E. Rizvi and J. Tseng, Phys. Rev. D 77, 076007 (2008), arXiv:0711.3012.
* [15] ATLAS Collaboration, Phys. Rev. D 90, no. 5, 052005 (2014), arXiv:1405.4123.
* [16] ATLAS Collaboration, Eur. Phys. J. C 74, no. 12, 3134 (2014), arXiv:1407.2410.
* [17] CMS Collaboration, JHEP 1504, 025 (2015), arXiv:1412.6302.
* [18] CMS Collaboration, Phys. Lett. B 711, 15 (2013), arXiv:1202.3827.
* [19] CMS Collaboration, Phys. Rev. Lett. 108, 111801 (2012), arXiv:1112.0688.
* [20] D0 Collaboration, Phys. Rev. Lett. 102, 051601 (2009), arXiv:0809.2813.
* [21] ATLAS Collaboration, JHEP 1409, 037 (2014), arXiv:1407.7494.
* [22] CMS Collaboration, Phys. Rev. D 91, no. 9, 092005 (2015), arXiv:1408.2745.
* [23] ATLAS Collaboration, Phys. Rev. D 91, no. 5, 052007 (2015), arXiv:1407.1376.
* [24] CMS Collaboration, Phys. Rev. D 91, no. 5, 052009 (2015), arXiv:1501.04198.
* [25] X. Calmet, W. Gong and S. D. H. Hsu, Phys. Lett. B 668, 20 (2008), arXiv:0806.4605.
* [26] ATLAS Collaboration, arXiv:1505.07018.
* [27] CMS Collaboration, CMS-PAS-B2G-13-008.
# On the Convergence of Adam under Non-uniform Smoothness: Separability from
SGDM and Beyond
Bohan Wang, Huishuai Zhang, Qi Meng, Ruoyu Sun, Zhi-Ming Ma, Wei Chen
###### Abstract
This paper aims to clearly distinguish between Stochastic Gradient Descent
with Momentum (SGDM) and Adam in terms of their convergence rates. We
demonstrate that Adam achieves a faster convergence compared to SGDM under
the condition of non-uniformly bounded smoothness. Our findings reveal that:
(1) in deterministic environments, Adam can attain the known lower bound for
the convergence rate of deterministic first-order optimizers, whereas the
convergence rate of Gradient Descent with Momentum (GDM) has higher-order
dependence on the initial function value; (2) in the stochastic setting,
Adam's convergence rate upper bound matches the lower bound of stochastic
first-order optimizers, considering both the initial function value and the
final error, whereas there are instances where SGDM fails to converge with
any learning rate. These insights distinctly differentiate Adam and SGDM
regarding their convergence rates. Additionally, by introducing a novel
stopping-time-based technique, we further prove that, if we consider the
minimum gradient norm during the iterations, the corresponding convergence
rate can match the lower bounds across all problem hyperparameters. The
technique also helps to prove that Adam with a specific hyperparameter
scheduler is parameter-agnostic, and hence can be of independent interest.
## 1 Introduction
Among various optimization techniques, the Adam optimizer (Kingma & Ba, 2014;
Loshchilov & Hutter, 2019) stands out due to its empirical success in a wide
range of deep learning applications, especially for pre-training large
foundation models with enormous data (Touvron et al., 2023; Brown et al.,
2020; Zhang et al., 2022a; Rae et al., 2021; Chowdhery et al., 2022; Du et
al., 2021). This popularity of Adam can be attributed to its adaptive learning
rate mechanism, which smartly adjusts the step size for each parameter,
allowing flexible and robust learning rate choices. Adam’s versatility is
further highlighted by its consistent performance in training various kinds of
models, making it a preferred optimizer in both academic and industrial
settings (Schneider et al., 2022). Its empirical success extends beyond
standard benchmarks to real-world challenges, where it often delivers state-
of-the-art results. This track record solidifies Adam’s position as a
fundamental tool for deep learning practitioners.
Exploring the theoretical foundations of the Adam optimizer, particularly why
it often outperforms traditional optimizers like Stochastic Gradient Descent
with Momentum (SGDM), is an intriguing yet complex task. Understanding Adam’s
convergence behavior is challenging, especially in settings defined by
standard convergence rate analysis. In these settings, assumptions include
uniformly bounded smoothness and finite gradient noise variance. Current
research indicates that under these conditions, SGDM can attain the lower
bound of the convergence rate for all first-order optimizers (Carmon et al.,
2017). This finding implies that, theoretically, Adam’s convergence rate
should not exceed that of SGDM. This theoretical result contrasts with
practical observations where Adam frequently excels, presenting a fascinating
challenge for researchers. It highlights the need for more refined theoretical
models that can bridge the gap between Adam’s empirical success and its
theoretical understanding.
Recent research by Zhang et al. (2019) has provided valuable insights into the
complexity of neural network optimization, particularly challenging the
assumption of uniform bounded smoothness. Their observations indicate that
smoothness often varies, showing a positive correlation with the norm of the
gradient and experiencing considerable fluctuations during the optimization
process. Building on this, they introduce the $(L_{0},L_{1})$-smooth condition
(detailed in our Assumption 1), which posits that local smoothness can be
bounded in relation to the gradient norm. This concept presents an exciting
opportunity to theoretically demonstrate that Adam could potentially converge
faster than SGDM. However, even in the relatively simpler deterministic
settings, no study has yet conclusively shown this to be the case.
To effectively compare the convergence rates of Adam and Stochastic Gradient
Descent with Momentum (SGDM), it is essential to establish an upper bound on
Adam’s convergence rate and a lower bound for SGDM, and then prove Adam’s
superiority. This endeavor faces several challenges. First, the known lower
bound for SGDM’s convergence rate is only available in deterministic settings
without momentum (Zhang et al., 2019; Crawshaw et al., 2022). Moreover, this
result is based on a scenario where the counter-example objective function is
selected after fixing the learning rate. This procedure deviates from more
common practices where the learning rate is adjusted after defining the
objective function (Drori & Shamir, 2020; Carmon et al., 2017; Arjevani et
al., 2022), casting doubts on the standard applicability of this lower bound.
Secondly, for Adam, the current assumptions required to derive an upper bound
for its convergence rate are quite strict. These include assumptions like
bounded adaptive learning rates or deterministically bounded noise (Wang et
al., 2022; Li et al., 2023a). However, even under these constraints, the
convergence rates obtained for Adam are weaker than those of algorithms like
clipped SGDM (Zhang et al., 2019).
These complexities hinder a straightforward comparison between the convergence
rates of Adam and SGDM, highlighting a significant gap in the theoretical
understanding that remains to be bridged.
Our contributions. In this paper, we aim to bridge the gap and summarize our
contributions as follows.
* •
We separate the convergence rates of Adam and SGDM under the
$(L_{0},L_{1})$-smooth condition, in both the deterministic and the
stochastic settings.
* –
In the deterministic setting, for the first time, we prove that under the
$(L_{0},L_{1})$-smooth condition, the convergence rate of the Adam optimizer
can match the existing lower bound for first-order deterministic optimizers,
up to numerical constants. Additionally, we establish a new lower bound for
the convergence rate of GDM, where one is allowed to tune the learning rate
and the momentum coefficient after the problem is fixed. The lower bound
exhibits a higher order dependence on the initial function value gap compared
to the upper bound of Adam. This distinction clearly separates Adam and GDM
for the deterministic setting.
* –
In the stochastic setting, for the first time, we prove that under the
$(L_{0},L_{1})$-smooth condition, the convergence rate of Adam matches the
existing lower bound for first-order stochastic optimizers regarding the
initial function value $f(\bm{w}_{1})-f^{*}$ and the final error
$\varepsilon$. In contrast, counterexamples exist where SGDM fails to
converge, irrespective of the learning rate and momentum coefficient. These
findings distinctly separate the convergence properties of Adam and SGDM in
stochastic settings.
* •
With the aid of a novel stopping-time-based technique, we further demonstrate
that the convergence rate of the minimum-error point of Adam can match the
lower bound across all problem hyperparameters. We show that this technique
can be of independent interest by proving, based on the stopping time, that
Adam with a specific scheduler is parameter-agnostic.
## 2 Related Works
Convergence analysis under non-uniform smoothness. Observations from empirical
studies on deep neural network training indicate that local smoothness can
vary significantly throughout the optimization process. In response to this,
Zhang et al. (2019) introduced the $(L_{0},L_{1})$-smooth condition, which
posits that local smoothness can be bounded by a linear function of the
gradient norm. Subsequent works have extended this concept by generalizing the
linear function to polynomials (Chen et al., 2023; Li et al., 2023a), or to
more general functions (Mei et al., 2021). Under non-uniform smoothness,
convergence properties of various optimizers have been studied. For instance,
upper bounds on the convergence rate have been established for optimizers such
as Clipped SGDM (Zhang et al., 2020), sign-based optimizers (Jin et al., 2021;
Hübler et al., 2023; Sun et al., 2023), AdaGrad (Faw et al., 2023; Wang et
al., 2023b), variance-reduction methods (Reisizadeh et al., 2023; Chen et al.,
2023), and trust-region methods (Xie et al., 2023). However, research on lower
bounds has been comparatively limited, with results primarily focusing on
Gradient Descent.
Convergence analysis of Adam. The development of convergence analysis for Adam
has been quite tortuous. While Adam was originally proposed with a convergence
guarantee (Kingma & Ba, 2014), subsequent analysis by Reddi et al. (2018)
pointed out flaws in this initial analysis and provided counterexamples
claiming that Adam could fail to converge. Only recently, Shi et al. (2021)
and Zhang et al. (2022b) have shown that the counterexamples in Reddi et al.
(2018) only rule out the possibility that Adam can converge problem-
agnostically, and it is still possible that Adam can converge with problem-
dependent hyperparameters.
So far, several works have established the convergence of Adam under the
$L$-smooth condition. Zaheer et al. (2018) proved that Adam without momentum
can converge to the neighborhood of stationary points by additionally assuming
that $\lambda$ is large. De et al. (2018) showed that Adam without momentum
can converge to stationary points but under the strong assumption that the
sign of gradients does not change during the optimization. Zou et al. (2019),
Défossez et al. (2022), and Guo et al. (2021) derived the convergence of Adam
by assuming the stochastic gradient is bounded. Shi et al. (2021) and Zhang et
al. (2022b) characterized the convergence of random-reshuffling Adam but
suffer from sub-optimal rates. He et al. (2023) studied the non-ergodic
convergence of Adam under a bounded gradient assumption, while Hong & Lin
(2023) provided high-probability guarantees for Adam under a deterministically
bounded noise assumption. A concurrent work by Wang et al. (2023a) shows that
Adam can achieve the lower bound of first-order optimizers with respect to the
final error $\varepsilon$ under standard assumptions, but it is unknown
whether Adam can match the lower bound with respect to other problem
specifics.
On the other hand, closely related to our work, there are only two works
studying the convergence of Adam under non-uniform smoothness (Wang et al.,
2022; Li et al., 2023a), both with restricted assumptions and results. We will
provide a detailed discussion in Section 4.
Parameter-agnostic optimization. The term "parameter-agnostic" implies that
the optimizer is capable of converging without the need for extensive
hyperparameter tuning or detailed knowledge of the task characteristics.
Designing parameter-agnostic or parameter-free optimizers is a significant
challenge, as it can help avoid the extensive cost associated with
hyperparameter search. Existing works on parameter-agnostic optimization can
be categorized into several streams based on the settings they are predicated
upon. In the deterministic offline setting, it is widely acknowledged that GD
is not parameter-agnostic, even under an $L$-smooth condition (Nesterov et
al., 2018). However, this can be rectified by combining the GD with the
Backtracking Line Search technique (Armijo, 1966). In the stochastic offline
setting, under the $L$-smooth condition, multiple algorithms have been shown
to be parameter-agnostic (Yang et al., 2023; Ward et al., 2020; Faw et al.,
2022; Wang et al., 2023b; Cutkosky & Mehta, 2020). More recently, Hübler et
al. (2023) demonstrated that Normalized-SGDM can be parameter-agnostic even
under an $(L_{0},L_{1})$-smooth condition. In the realm of online convex
optimization, Orabona & Pál (2016); Orabona & Tommasi (2017) have shown there
exist parameter-free algorithms achieving optimal dependence regarding not
only the final error but also other problem specifics.
## 3 Preliminary
Notations. In this paper, we use the asymptotic notations
$\mathcal{O},\Omega,\Theta$ to denote asymptotically smaller, larger, and
equivalent, respectively. We also use
$\tilde{\mathcal{O}},\tilde{\Omega},\tilde{\Theta}$ to indicate that
logarithmic factors are hidden. We denote by ${\mathcal{F}}_{t}$ the
filtration generated by $\bm{w}_{1},\cdots,\bm{w}_{t}$.
Problem and Algorithm. We study the unconstrained minimization problem
$\min_{\bm{w}}f(\bm{w})$. We present the pseudo-code of Adam as follows.
Algorithm 1 Adam Optimizer
Input: Stochastic oracle $\bm{O}$, learning rate $\eta>0$, initial point
$\bm{w}_{1}\in\mathbb{R}^{d}$, initial conditioner
$\bm{\nu}_{0}\in\mathbb{R}^{+}$, initial momentum $\bm{m}_{0}$, momentum
parameter $\beta_{1}$, conditioner parameter $\beta_{2}$, damping constant
$\lambda\geq 0$, number of iterations $T$
for $t=1$ to $T$ do
Generate a random $z_{t}$, and query stochastic oracle
$\bm{g}_{t}=\bm{O}_{f}(\bm{w}_{t},z_{t})$
Calculate
$\bm{\nu}_{t}=\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\|\bm{g}_{t}\|^{2}$
Calculate $\bm{m}_{t}=\beta_{1}\bm{m}_{t-1}+(1-\beta_{1})\bm{g}_{t}$
Update
$\bm{w}_{t+1}=\bm{w}_{t}-\eta\frac{1}{\lambda+\sqrt{\bm{\nu}_{t}}}\bm{m}_{t}$
end for
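For concreteness, the following NumPy sketch implements Algorithm 1 as
written above (our illustration, with the simplification $\bm{m}_{0}=\bm{0}$
and a small positive $\bm{\nu}_{0}$; it is meant to mirror the pseudo-code,
not to serve as a reference implementation):

```python
import numpy as np

def adam_norm(grad_oracle, w1, eta, beta1, beta2, lam=0.0, nu0=1e-12, T=1000):
    """Adam with a scalar ("norm") conditioner and no bias correction,
    as in Algorithm 1. grad_oracle(w) returns a (possibly stochastic)
    gradient g_t; lam is the damping constant lambda (the analysis in
    the paper takes lam = 0)."""
    w = np.asarray(w1, dtype=float).copy()
    m = np.zeros_like(w)   # m_0 = 0 for simplicity
    nu = float(nu0)        # small positive nu_0 avoids division by zero
    for _ in range(T):
        g = grad_oracle(w)
        nu = beta2 * nu + (1.0 - beta2) * float(g @ g)  # ||g_t||^2
        m = beta1 * m + (1.0 - beta1) * g
        w = w - eta * m / (lam + np.sqrt(nu))
    return w

# Example: a deterministic quadratic, f(w) = ||w||^2 with gradient 2w
w = adam_norm(lambda w: 2.0 * w, w1=[5.0, -3.0], eta=0.05,
              beta1=0.9, beta2=0.999, T=3000)
print(w)  # close to the minimizer [0, 0] (within ~eta)
```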
In our paper, we present a slightly altered version of the Adam optimizer as
delineated in Algorithm 1, diverging from the canonical form described by
(Kingma & Ba, 2014). We assert that these modifications are implemented not to
undermine the generality of the algorithm but to facilitate more streamlined
proofs. Furthermore, our analysis retains applicability to the conventional
Adam algorithm, largely following the same approach, albeit with a more
elaborate proof process. Specifically, our first modification involves the
omission of the bias-correction factors $1-\beta_{1}^{t}$ and
$1-\beta_{2}^{t}$ from the first and second momentum terms, respectively. It
is important to note that incorporating bias correction would not alter the
convergence rate, as these terms approach unity at an exponential rate, thus
having a negligible impact on convergence.
Our second adjustment pertains to adopting a scalar-based adaptive learning
rate, in contrast to the per-coordinate modification utilized in the typical
Adam algorithm. Employing a scalar (or "norm") version of adaptive optimizers
is a recognized simplification strategy in the analysis of adaptive
optimizers, as evidenced by literature such as (Xing et al., 2021; Faw et
al., 2022, 2023; Wang et al., 2023b). Our proof is readily adaptable to the
per-coordinate version by entailing a separate analysis for each dimension.
(It is worth mentioning that the convergence rate for the per-coordinate Adam
is subject to the dimensionality $d$. Addressing the challenge of decoupling
the convergence rate of per-coordinate adaptive optimizers from
dimensionality remains an unresolved issue, one that we acknowledge but
reserve for future investigation.)
We would like to highlight that all the analysis in this paper is carried
out for $\lambda=0$. This is because $\lambda=0$ means we do not require the
adaptive learning rate to be upper bounded (a restrictive assumption in
existing works (Li et al., 2023a; Guo et al., 2021)) and is the most
challenging case. The proof can be immediately extended to $\lambda>0$
without any modification.
Meanwhile, we briefly state the SGDM optimizer as follows: with initial point
$\bm{w}_{1}$ and initial momentum $\bm{m}_{0}$, the update of $t$-th iteration
of SGDM is given by
$\displaystyle\bm{m}_{t}=\beta\bm{m}_{t-1}+(1-\beta)\bm{g}_{t},\bm{w}_{t+1}=\bm{w}_{t}-\eta\bm{m}_{t}.$
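For comparison, a matching sketch of the SGDM update above (ours, again with
$\bm{m}_{0}=\bm{0}$):

```python
import numpy as np

def sgdm(grad_oracle, w1, eta, beta, T=1000):
    """SGDM as stated above: m_t = beta * m_{t-1} + (1 - beta) * g_t,
    followed by w_{t+1} = w_t - eta * m_t."""
    w = np.asarray(w1, dtype=float).copy()
    m = np.zeros_like(w)
    for _ in range(T):
        g = grad_oracle(w)
        m = beta * m + (1.0 - beta) * g
        w = w - eta * m
    return w
```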
Assumptions. In this paper, all the analyses are established under the
following two standard assumptions.
###### Assumption 1 ($(L_{0},L_{1})$-smooth condition).
We assume $f$ is differentiable and lower bounded, and that there exist
non-negative constants $L_{0},L_{1}$ such that
$\forall\bm{w}_{1},\bm{w}_{2}\in\mathbb{R}^{d}$ satisfying
$\|\bm{w}_{1}-\bm{w}_{2}\|\leq\frac{1}{L_{1}}$,
$\|\nabla f(\bm{w}_{1})-\nabla f(\bm{w}_{2})\|\leq(L_{0}+L_{1}\|\nabla
f(\bm{w}_{1})\|)\|\bm{w}_{1}-\bm{w}_{2}\|.$
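As a concrete example (ours, in the Hessian-bound form): $f(w)=e^{w}$ has
$f^{\prime\prime}(w)=e^{w}=|f^{\prime}(w)|$, so it is not $L$-smooth for any
finite $L$, yet it satisfies the condition with $(L_{0},L_{1})=(1,1)$;
similarly, $f(w)=w^{4}$ satisfies it with $(L_{0},L_{1})=(12,3)$, since
$f^{\prime\prime}(w)=12w^{2}\leq 12+3\cdot|f^{\prime}(w)|$ for all $w$.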
###### Assumption 2 (Affine noise variance).
We assume that the stochastic gradient $\bm{g}_{t}$ is unbiased, i.e.,
$\mathbb{E}^{|\mathcal{F}_{t}}\bm{g}_{t}=\bm{G}_{t}\triangleq\nabla
f(\bm{w}_{t})$. We further assume
$\bm{g}_{t}$ has affine variance, i.e., there exists $\sigma_{0}\geq
0,\sigma_{1}\geq 1$,
$\mathbb{E}^{|{\mathcal{F}}_{t}}[\|\bm{g}_{t}\|^{2}]\leq\sigma_{0}^{2}+\sigma_{1}^{2}\|\nabla
f(\bm{w}_{t})\|^{2}$.
Assumption 1 is a more general form of the $(L_{0},L_{1})$-smooth condition
and is equivalent to the Hessian-bound form (Zhang et al., 2019) when the
Hessian exists. Assumption 2 is one of the weakest noise assumptions in the
existing literature; it generalizes the bounded variance assumption (Li et
al., 2023b), the bounded gradient assumption (Défossez et al., 2022), and the
bounded noise assumption (Li et al., 2023a).
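As a simple example (ours): multiplicative noise
$\bm{g}_{t}=(1+\xi_{t})\nabla f(\bm{w}_{t})$ with
$\mathbb{E}[\xi_{t}|{\mathcal{F}}_{t}]=0$ and
$\mathbb{E}[\xi_{t}^{2}|{\mathcal{F}}_{t}]=c$ is unbiased and satisfies
Assumption 2 with $\sigma_{0}=0$ and $\sigma_{1}^{2}=1+c$, even though its
variance is unbounded whenever the gradient norm is.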
## 4 Separating the convergence rates of Adam and (S)GD
In this section, we elucidate the disparate convergence rates of Adam and
(S)GD under Assumptions 1 and 2, examining both deterministic and stochastic
settings. We commence with the deterministic scenario before delving into the
stochastic complexities.
### 4.1 Analysis for the deterministic setting
As discussed in the introduction section, to discern the differential
convergence rates of deterministic Adam and GD, it is necessary to establish
not only Adam’s upper bound but also GD’s lower bound, given a consistent set
of assumptions. Crucially, these bounds must be sufficiently tight to ensure
that Adam’s upper bound is indeed the lesser. To date, only a couple of
studies have addressed the convergence of deterministic Adam. The first,
referenced in (Wang et al., 2022), indicates a convergence rate of
$\mathcal{O}(\frac{(f(\bm{w}_{1})-f^{*})^{2}}{\varepsilon^{4}})$, which is
sub-optimal compared to the classical deterministic rate of
$\mathcal{O}(\frac{f(\bm{w}_{1})-f^{*}}{\varepsilon^{2}})$ (Zhang et al.,
2019, 2020) regarding both final error $\varepsilon$ and the initial function
value gap $(f(\bm{w}_{1})-f^{*})$. The second study, (Li et al., 2023a),
presents a convergence rate that depends polynomially on $\frac{1}{\lambda}$,
where $\lambda$ is the small constant introduced to prevent the adaptive
learning rate from becoming infinity. Therefore, their result is only non-
vacuous when $\lambda$ is large, which deviates from practical settings.
Additionally, their bound exhibits an exaggerated dependency on the initial
function value gap, yielding $\min_{t\in[T]}\|\nabla
f(\bm{w}_{t})\|=\mathcal{O}(\frac{(f(\bm{w}_{1})-f^{*})^{3}}{\varepsilon^{2}})$.
As we will see later, such dependencies create upper bounds that surpass the
lower bounds of GD, making them unable to serve our purpose. To overcome these
limitations and accurately assess the performance of deterministic Adam, we
propose a new theorem that establishes an improved convergence rate for
deterministic Adam.
An upper bound for the convergence rate of deterministic Adam.
###### Theorem 1 (Informal).
Let Assumption 1 hold. Then, $\forall\beta_{1},\beta_{2}\geq 0$ satisfying
$\beta_{1}^{2}<\beta_{2}<1$, $\lambda=0$, and
$\varepsilon=\mathcal{O}(L_{0}/L_{1})$, if
$T\geq\Theta\left(\frac{L_{0}(f(\bm{w}_{1})-f^{*})}{\varepsilon^{2}}\right)$,
then Algorithm 1 satisfies
$\frac{1}{T}\sum_{t=1}^{T}\|\nabla f(\bm{w}_{t})\|\leq\varepsilon.$
###### Proof.
Please see Appendix B.1 for the formal statement of theorem and the proof. ∎
Our result offers a tighter bound than those presented in prior studies (Wang
et al., 2022; Li et al., 2023a). It is noteworthy that under the uniform
smoothness constraint—where the objective function’s smoothness is capped at
$L$ (that is, when $L_{0}=L$ and $L_{1}=0$ as per Assumption 1, referred to as
the $L$-smooth condition in existing literature (Arjevani et al., 2022; Carmon
et al., 2017; Faw et al., 2022))—Assumption 1 is met with $L_{0}=L$ and any
$L_{1}\geq 0$. Consequently, the established lower bound for all first-order
optimizers (Carmon et al., 2017) pertaining to the $L$-smooth condition
inherently provides a lower bound for the $(L_{0},L_{1})$-smooth condition,
which is
$\Omega\left(\frac{\sqrt{L_{0}(f(\mathbf{w}_{1})-f^{*})}}{\sqrt{T}}\right)$.
This coincides with our upper bound up to numerical constants. Such
correspondence suggests that our proposed bound is, in fact, optimal.
Our proof strategy utilizes a distinctive Lyapunov function,
$f(\bm{w}_{t})+\frac{\beta_{1}}{2(1-\beta_{1})\sqrt[4]{\beta_{2}}}\eta\frac{\|\bm{m}_{t-1}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t-1}}}$,
which draws inspiration from the existing analysis of Gradient Descent with
Momentum (GDM) under the $L$-smooth condition (Sun et al., 2019). However, we
introduce significant modifications to accommodate the adaptive learning
rate. This carefully crafted Lyapunov function enables us to effectively
control the deviation between the momentum term and the current gradient,
even under the $(L_{0},L_{1})$-smooth condition, and through this approach we
establish the final optimal bound.
###### Remark 1 (On the comparison with AdaGrad).
Our result also suffices to separate Adam from AdaGrad. It is important to
note that the convergence rate of AdaGrad under the $(L_{0},L_{1})$-smooth
condition in the deterministic setting, as reported in (Wang et al., 2023b),
is $\frac{(f(\bm{w}_{1})-f^{*})^{2}}{\varepsilon^{2}}$. This rate is
outperformed by that of Adam. (The state-of-the-art rate of AdaGrad under the
$(L_{0},L_{1})$-smooth condition in the stochastic setting is
$\frac{(f(\bm{w}_{1})-f^{*})^{2}}{\varepsilon^{4}}$, which is also worse than
the rate of Adam established later in Theorem 3.) In Appendix B.3, we show
that the rate in (Wang et al., 2023b) is tight by providing a counterexample.
The comparatively slower convergence rate of AdaGrad can be attributed to the
fact that the $(L_{0},L_{1})$-smooth condition demands the update norm to be
bounded by $\mathcal{O}(1)$ to prevent the local smoothness from increasing
exponentially, which in turn necessitates a learning rate of
$\mathcal{O}(1)$. However, the adaptive conditioner in AdaGrad accumulates
over time, causing the adaptive learning rate to become excessively small
during later training stages and thereby reducing the convergence speed.
Conversely, Adam utilizes an exponential moving average for its adaptive
learning rate, which prevents the conditioner from accumulating excessively;
consequently, Adam does not suffer from this issue.
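The mechanism described in Remark 1 can be made concrete with a tiny
numerical comparison (ours, purely illustrative): feeding a constant gradient
norm to both conditioners, AdaGrad's accumulated $\bm{\nu}_{t}$ shrinks the
effective step like $1/\sqrt{t}$, whereas Adam's exponential moving average
keeps it of order $\eta$:

```python
import math

# Constant squared gradient norm ||g_t||^2 = 1 fed to both conditioners.
g2, beta2 = 1.0, 0.99
nu_adagrad, nu_adam = 0.0, 0.0
for t in range(1000):
    nu_adagrad += g2                              # AdaGrad: sum of ||g||^2
    nu_adam = beta2 * nu_adam + (1 - beta2) * g2  # Adam: EMA of ||g||^2
print(1 / math.sqrt(nu_adagrad))  # ~0.03: AdaGrad's effective step collapsed
print(1 / math.sqrt(nu_adam))     # ~1.0:  Adam's effective step is preserved
```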
A lower bound for the convergence rate of GDM.
With Adam's upper bound in hand, we move on to a lower bound for the
convergence rate of GDM. In fact, such lower bounds for GD already exist in
the literature (Zhang et al., 2019; Crawshaw et al., 2022); we restate one as
follows:
###### Proposition 1 (Theorem 2, (Crawshaw et al., 2022)).
Fix $\varepsilon,L_{0},L_{1}$, and $\Delta_{1}$. For any learning rate
$\eta$, there exists an objective function $f$ satisfying the
$(L_{0},L_{1})$-smooth condition and $f(\bm{w}_{1})-f^{*}=\Delta_{1}$, such
that the minimum number of steps $T$ of GD needed to achieve final error
$\varepsilon$ (i.e., letting $\{\bm{w}_{t}\}_{t=1}^{\infty}$ be the iterates
of GD, $T\triangleq\min\{t:\|\nabla f(\bm{w}_{t})\|<\varepsilon\}$) satisfies
$T=\tilde{\Omega}\left(\frac{L_{1}^{2}\Delta_{1}^{2}+L_{0}\Delta_{1}}{\varepsilon^{2}}\right).$
However, the proposition presents a limitation: the counter-example is chosen
after the learning rate has been determined. This approach is inconsistent
with standard practices, where hyperparameters are usually adjusted based on
the specific task, and deviates from conventional lower bounds (Carmon et al.,
2017; Arjevani et al., 2022) that offer assurances for optimally-tuned
hyperparameters. This type of result does not eliminate the possibility that,
if the learning rate were adjusted after selecting the objective function—as
is common practice—Gradient Descent (GD) could potentially achieve a markedly
faster convergence rate. This misalignment raises concerns about the
appropriateness of the proposition’s methodology. Moreover, this proposition
does not take momentum into account, a technique that is commonly employed in
conjunction with GD in practice.
To address these shortcomings, we introduce a new lower bound for GDM. This
lower bound is applicable under the standard practice of adjusting
hyperparameters after the objective function has been selected. Moreover, it
encompasses scenarios where momentum is incorporated.
###### Theorem 2 (Informal).
Fixing $\varepsilon,L_{0},L_{1}$, and $\Delta_{1}$, there exists an objective
function $f$ satisfying the $(L_{0},L_{1})$-smooth condition and
$f(\bm{w}_{1})-f^{*}=\Delta_{1}$, such that for any learning rate $\eta>0$ and
$\beta\in[0,1]$, the minimum step $T$ of GDM to achieve final error
$\varepsilon$ satisfies
$T=\tilde{\Omega}\left(\frac{L_{1}^{2}\Delta_{1}^{2}+L_{0}\Delta_{1}}{\varepsilon^{2}}\right).$
###### Proof.
Please see Appendix B.2 for the formal statement of theorem and the proof. ∎
It should be noted that in the above theorem the hyperparameters (i.e., the
learning rate and the momentum coefficient) are chosen after the objective
function is determined, which agrees with practice and with the settings of
common lower bounds, and thus overcomes the shortcoming of Proposition 1.
Moreover, as shown in Zhang et al. (2019), it is easy to prove that the upper
bound on GD's convergence rate is also
$\mathcal{O}\left(\frac{L_{1}^{2}\Delta_{1}^{2}+L_{0}\Delta_{1}}{\varepsilon^{2}}\right)$,
which indicates that this lower bound is optimal.
The proof addresses the two primary challenges outlined above. The first
challenge involves handling momentum. To tackle this, we extend the
counterexample provided in Proposition 1 to cases where the momentum
coefficient $\beta$ is small, and we introduce a new counterexample for
situations with a large $\beta$, demonstrating how large momentum can bias
the optimization process and decelerate convergence. The second challenge is
to derive a universal counterexample such that every hyperparameter setting
leads to slow convergence. We overcome this with a simple but effective
trick: we place the counterexamples for the different hyperparameters from
Proposition 1 independently on different coordinates and combine them into a
single counterexample. Consequently, for every hyperparameter choice there is
at least one coordinate that converges slowly, which yields the final
result.
Separating deterministic Adam and GDM. Upon careful examination of Theorem 1
and Theorem 2, it becomes apparent that the convergence rate of GDM is
inferior to that of Adam, since
$\frac{\sum_{t=1}^{T}\|\bm{G}_{t}\|}{T}\geq\min_{t\in[T]}\|\bm{G}_{t}\|$.
Notably, GDM exhibits a more pronounced dependence on the initial function
value gap than Adam. This implies that, with a sufficiently poor initial
point, the convergence of GDM can be significantly slower than that of Adam.
The underlying reason for this disparity can be attributed to GDM's inability
to adeptly manage varying degrees of sharpness within the optimization
landscape. Consequently, GDM necessitates a conservative learning rate,
tailored to the most adverse sharpness encountered, which is often present
during the initial optimization stages.
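As a toy numerical illustration of this point (ours, not an experiment from
the paper), consider $f(w)=\cosh(w)$, which satisfies Assumption 1 with
$(L_{0},L_{1})=(1,1)$ since $\cosh w\leq 1+|\sinh w|$, but whose local
smoothness grows exponentially away from the minimizer. GD must adopt a
learning rate that survives the initial sharpness and then crawls, while the
normalized Adam update crosses the sharp region quickly:

```python
import numpy as np

grad = np.sinh          # f(w) = cosh(w), f'(w) = sinh(w), minimizer w* = 0
w0, T = 12.0, 500

# GD: a step size surviving the initial local smoothness cosh(12) ~ 8e4
# is so small that progress stalls once the gradient starts to shrink.
w, eta_gd = w0, 1.0 / np.cosh(w0)
for _ in range(T):
    w -= eta_gd * grad(w)
print(f"GD  : w = {w:.2f}, f(w) = {np.cosh(w):.1f}")  # stalls near w ~ 5.8

# Adam as in Algorithm 1 with beta1 = 0: the update is ~ eta * sign(grad),
# so the sharp region is traversed in a few hundred normalized steps.
w, nu, eta, beta2 = w0, 0.0, 0.1, 0.9
for _ in range(T):
    g = grad(w)
    nu = beta2 * nu + (1 - beta2) * g * g
    w -= eta * g / np.sqrt(nu)
print(f"Adam: w = {w:.2f}, f(w) = {np.cosh(w):.2f}")  # near w* = 0, f ~ 1
```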
### 4.2 Analysis for the stochastic setting
Transitioning to the more complex stochastic setting, we extend our analysis
beyond the deterministic framework. As with our previous approach, we start by
reviewing the literature to determine if the existing convergence rates for
Adam under the $(L_{0},L_{1})$-smooth condition can delineate a clear
distinction between the convergence behaviors of Adam and Stochastic Gradient
Descent with Momentum (SGDM). In fact, the only two studies that delve into
this problem are the ones we discussed in Section 4.1, i.e., (Wang et al.,
2022; Li et al., 2023a). However, these results pertaining to Adam are
contingent upon rather stringent assumptions. Wang et al. (2022) postulates
that stochastic gradients not only conform to the $(L_{0},L_{1})$-smooth
condition but are also limited to a finite set of possibilities. These
assumptions are more restrictive than merely assuming that the true gradients
satisfy the $(L_{0},L_{1})$-smooth condition, and such strong prerequisites
are seldom employed outside of the analysis of variance-reduction algorithms.
Meanwhile, Li et al. (2023a) aligns its findings on stochastic Adam with those
on deterministic Adam, leading to a polynomial dependency on $1/\lambda$,
which deviates from practical scenarios as discussed in Section 4.1.
Furthermore, it presumes an a.s. bounded difference between stochastic
gradients and true gradients, an assumption that closely resembles the
boundedness of stochastic gradients and is more limiting than the standard
assumption of bounded variance for stochastic gradients.
These restrictive and non-standard assumptions pose challenges for
establishing a matching lower bound for the convergence of SGDM, let alone
for comparing SGDM and Adam. In addition
to the fact that these upper bounds fail to facilitate a clear comparison
between Adam and SGDM, there are also concerns regarding their convergence
rates. Wang et al. (2022) reports a convergence rate of
$\frac{(f(\bm{w}_{1})-f^{*})^{2}}{\varepsilon^{8}}$, which has a higher-order
dependence on the initial function value gap and the final error than the
$\frac{(f(\bm{w}_{1})-f^{*})}{\varepsilon^{4}}$ rate established for Clipped
SGDM under the $(L_{0},L_{1})$-smooth condition (Zhang et al., 2020) (we note
that Zhang et al. (2020) also assumes an a.s. bounded gap between stochastic
gradients and true gradients). Furthermore, Li et al. (2023a) indicates a
convergence rate of
$\mathcal{O}(\frac{(f(\bm{w}_{1})-f^{*})^{4}\operatorname{poly}(1/\lambda)}{\varepsilon^{4}})$,
which, aside from the previously mentioned dependency issues on $1/\lambda$,
shows a significantly stronger dependence over the initial function value gap
compared to the analysis of Clipped SGDM. This naturally leads to the question
of whether such rates for Adam can be improved to match Clipped SGDM.
To tackle these obstacles, we present the following upper bound for Adam.
An upper bound for the convergence rate of Adam.
###### Theorem 3 (Informal).
Let Assumptions 1 and 2 hold. Then, $\forall 1>\beta_{1}\geq 0$ and
$\lambda=0$, if
$\varepsilon\leq\frac{1}{\operatorname{poly}(f(\bm{w}_{1})-f^{*},L_{0},L_{1},\sigma_{0},\sigma_{1})}$,
with a proper choice of learning rate $\eta$ and momentum hyperparameter
$\beta_{2}$, we have that if
$T\geq\Theta\left(\frac{(L_{0}+L_{1})\sigma_{0}^{3}\sigma_{1}^{2}(f(\bm{w}_{1})-f^{*})}{\varepsilon^{4}}\right)$,
then
$\frac{1}{T}\mathbb{E}\sum_{t=1}^{T}\|\nabla f(\bm{w}_{t})\|\leq\varepsilon.$
###### Proof.
Please see Appendix C.1 for the formal statement of theorem and the proof. ∎
Below we include several discussions regarding Theorem 3. To begin with, one
can immediately observe that Theorem 3 only requires Assumptions 1 and 2, and
the convergence rate with respect to the initial function value gap and the
final error $\frac{f(\bm{w}_{1})-f^{*}}{\varepsilon^{4}}$ matches that of
Clipped SGDM (Zhang et al., 2020) even with a weaker noise assumption.
Therefore, our result successfully removes the barriers raised above. Indeed,
to the best of our knowledge, this is the first time an algorithm has been
shown to converge with rate
$\mathcal{O}\left(\frac{f(\bm{w}_{1})-f^{*}}{\varepsilon^{4}}\right)$
requiring only Assumptions 1 and 2, showcasing the advantage of Adam.
We briefly sketch the proof here before moving on to the result of SGDM.
Specifically, the proof is inspired by recent analysis of Adam under
$L$-smooth condition (Wang et al., 2023a), but several challenges arise during
the proof:
* •
The first challenge lies in the additional error introduced by the
$(L_{0},L_{1})$-smooth condition. We address this by demonstrating that the
telescoping sum involving the auxiliary function
$\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t-1}}}$, as employed in (Wang et
al., 2023a), can bound this additional error when the adaptive learning rate
is upper bounded. Although the adaptive learning rate in the Adam algorithm is
not inherently bounded, we establish that the deviation incurred by employing
a bounded surrogate adaptive learning rate is manageable;
* •
The second challenge involves deriving the desired dependence on the initial
function value gap. Wang et al. (2023a) introduces two distinct proof
strategies for bounding the conditioner $\bm{\nu}_{t}$ and determining the
final convergence rate. However, one strategy introduces an additional
logarithmic dependence on $\varepsilon$, while the other exhibits sub-optimal
dependence on the initial function value gap. We propose a novel two-stage
divide-and-conquer approach to surmount this issue. In the first stage, we
bound $\bm{\nu}_{t}$ effectively. Subsequently, we leverage this bound within
the original descent lemma to achieve the optimal dependence on
$f(\bm{w}_{1})-f^{*}$.
###### Remark 2 (On the limitations).
Although Theorem 3 addresses certain deficiencies identified in prior studies
(Wang et al., 2022; Li et al., 2023a), it is not without its limitations. As
noted by Arjevani et al. (2022), the established lower bound for the
convergence rate of first-order optimization algorithms under the
$L_{0}$-smooth condition with bounded noise variance (specifically,
the same $\sigma_{0}$ and $\sigma_{1}=1$ in Assumption 2) is
$\mathcal{O}(\frac{(f(\bm{w}_{1})-f^{*})L_{0}\sigma_{0}^{2}}{\varepsilon^{4}})$.
This sets a benchmark for the performance under Assumptions 1 and 2. The upper
bound of Adam’s convergence rate as presented in Theorem 3 falls short when
compared to this benchmark, exhibiting a weaker noise scale dependency
($\sigma_{0}^{3}$ as opposed to $\sigma_{0}^{2}$) and additional dependencies
on $L_{1}$ and $\sigma_{1}$.
To address these issues, we demonstrate in the subsequent section that by
focusing on the convergence of the minimum gradient norm,
$\mathbb{E}\min_{t\in[T]}\|\nabla f(\bm{w}_{t})\|$, we can attain an improved
convergence rate of
$\mathcal{O}(\frac{(f(\bm{w}_{1})-f^{*})L_{0}\sigma_{0}^{2}}{\varepsilon^{4}})$.
This rate aligns with the aforementioned lower bound across all the problem
hyperparameters.
We now establish the lower bound for SGDM. This is, however, more challenging
than the deterministic case: to the best of our knowledge, no such lower bound
exists in the literature (although the lower bounds of GD (Zhang et al., 2019;
Crawshaw et al., 2022) naturally offer a lower bound for SGD, that bound is
considerably loose, missing a factor of $1/\varepsilon^{2}$).
Intuitively, stochasticity can make the convergence of GDM even worse, as
random fluctuations can inadvertently propel the iterations towards regions
characterized by high smoothness even with a good initialization. We formulate
this insight into the following theorem.
A lower bound for the convergence rate of SGDM.
###### Theorem 4 (Informal).
Fix $L_{0},L_{1}$, and $\Delta_{1}$; then there exists an objective function $f$
satisfying $(L_{0},L_{1})$-smooth condition and
$f(\bm{w}_{1})-f^{*}=\Delta_{1}$, and a gradient noise oracle satisfying
Assumption 2, such that for any learning rate $\eta>0$ and $\beta\in[0,1]$,
for all $T>0$,
$\min_{t\in[T]}\mathbb{E}\|\nabla f(\bm{w}_{t})\|=\|\nabla f(\bm{w}_{1})\|\geq
L_{1}\Delta_{1}.$
###### Proof.
Please see Appendix C.2 for the formal statement of theorem and the proof. ∎
Theorem 4 provides concrete evidence for the challenges inherent in the
convergence of SGDM. It shows that there are instances that comply with
Assumption 1 and Assumption 2 for which SGDM fails to converge, regardless of
the chosen learning rate and momentum coefficient. This outcome confirms our
earlier hypothesis: the stochastic elements within SGDM can indeed adversely
affect its convergence properties under non-uniform smoothness.
Our proof is founded upon a pivotal observation: an objective function that
escalates rapidly can effectively convert non-heavy-tailed noise into a
“heavy-tailed” one. In particular, under the $(L_{0},L_{1})$-smooth condition,
the magnitude of the gradient is capable of exponential growth. As a result,
even if the density diminishes exponentially, the expected value of the
gradient norm may still become unbounded. This situation mirrors what occurs
under the $L$-smooth condition when faced with heavy-tailed noise. Such a
dynamic can lead to the non-convergence of SGDM.
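A quick numerical illustration of this mechanism (our own toy computation,
not part of the proof): push a gradient magnitude that grows like
$e^{L_{1}x}$, as the $(L_{0},L_{1})$-smooth condition permits, through Laplace
noise, whose density decays exponentially. The truncated expectation blows up
as the truncation radius grows, while the $L$-smooth analogue with linear
gradient growth stays finite:

```python
import numpy as np

def laplace_pdf(xi):
    # Light-tailed noise density: p(xi) = exp(-|xi|) / 2.
    return 0.5 * np.exp(-np.abs(xi))

def truncated_expectation(grad_mag, radius, n=200_000):
    # Riemann-sum approximation of E[grad_mag(xi)] over [-radius, radius].
    xi, dx = np.linspace(-radius, radius, n, retstep=True)
    return float(np.sum(grad_mag(xi) * laplace_pdf(xi)) * dx)

L1 = 2.0  # gradient growth rate exceeding the noise decay rate
exp_growth = lambda xi: np.exp(L1 * np.abs(xi))  # (L0, L1)-smooth regime
lin_growth = lambda xi: 1.0 + np.abs(xi)         # L-smooth regime

for R in [5, 10, 20, 40]:
    print(f"R={R:3d}  exp: {truncated_expectation(exp_growth, R):12.4e}"
          f"  lin: {truncated_expectation(lin_growth, R):8.4f}")
# The 'exp' column grows like e^R (the integral diverges), while the 'lin'
# column converges to 2: light-tailed noise acts heavy-tailed once pushed
# through an exponentially growing gradient.
```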
Separating Adam and SGDM. Considering that Adam can achieve convergence under
Assumptions 1 and 2, while SGDM cannot, the superiority of Adam over SGDM
becomes evident. It is important to note, however, a recent study by (Li et
al., 2023b), which demonstrates that SGD can converge with high probability
under the same assumptions, provided the noise variance is bounded. We would
like to contextualize this finding in relation to our work as follows: First,
this result does not conflict with our Theorem 4, since our theorem pertains
to bounds in expectation rather than with high probability. Second, our
comparison of Adam and SGDM within an in-expectation framework is reasonable
and aligns with the convention of most existing lower bounds in the literature
(Carmon et al., 2017; Drori & Shamir, 2020; Arjevani et al., 2022). Moreover,
establishing high-probability lower bounds is technically challenging, and
there are few references to such bounds in the existing literature. Lastly,
while we have not derived a corresponding high-probability lower bound for
SGD, the upper bound provided by Li et al. (2023b) is
$\mathcal{O}(\frac{(f(\bm{w}_{1})-f^{*})^{4}}{\varepsilon^{4}})$, which
indicates a less favorable dependency on the initial function value gap
compared to the bound for Adam.
## 5 Can Adam reach the lower bound of the convergence rate under
$(L_{0},L_{1})$-smooth condition?
As we mentioned in Remark 2, although Theorem 3 matches the lower bound
established by Arjevani et al. (2022) with respect to the initial function
value gap $f(\bm{w}_{1})-f^{*}$, the final error $\varepsilon$, and the
smoothness coefficient $L_{0}$, it exhibits sub-optimal dependence on the
noise scale $\sigma_{0}$ and additional dependence on $L_{1}$ and
$\sigma_{1}$. One may wonder whether these dependencies are inherently
unavoidable or if they stem from technical limitations in our analysis.
Upon revisiting the proof, we identified that the sub-optimal dependencies
arise from our strategy of substituting the original adaptive learning rate
with a bounded surrogate. For example, the correlation between stochastic
gradient and adaptive learning rate will introduce an error term
$\eta\frac{\sigma_{0}^{2}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}\bm{\nu}_{t}}$,
detailed in Eq. (8). To bound this term, we add a constant $\lambda$ to
$\beta_{2}\bm{\nu}_{t-1}$, allowing us to upper bound
$\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+\lambda}}$. Consequently, the term
$\eta\frac{\sigma_{0}^{2}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+\lambda}\bm{\nu}_{t}}$
can be bounded by
$\eta\frac{\sigma_{0}^{2}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\lambda}\bm{\nu}_{t}}$,
which has the same order as a second-order Taylor expansion. To control the
error introduced by adding $\lambda$, we cannot choose a value for $\lambda$
that is too large. The optimal choice of $\lambda$ for balancing the new error
against the original error is $(1-\beta_{2})\sigma_{0}^{2}$. This selection
results in the original error term
$\eta\frac{\sigma_{0}\sqrt{1-\beta_{2}}\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}$,
which induces an additional $\sigma_{0}$ factor, ultimately leading to the
sub-optimal dependence on $\sigma_{0}$. Therefore, we need to explore
alternative methods to handle the error term to eliminate the sub-optimal
dependence on $\sigma_{0}$.
We begin our analysis by observing that the term
$\frac{(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}\bm{\nu}_{t}}$
can in fact be bounded by an “approximate telescoping” series of
$\frac{1}{\sqrt{\bm{\nu}_{t}}}$ (noting an additional coefficient
$\frac{1}{\sqrt{\beta_{2}}}$ in comparison to standard telescoping):
$\frac{(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}\bm{\nu}_{t}}\leq\mathcal{O}\left(\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}}-\frac{1}{\sqrt{\bm{\nu}_{t}}}\right).$
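Before proceeding, a quick numerical sanity check of this inequality (ours;
the hidden constant instantiated as $2$, which follows from
$\bm{\nu}_{t}-\beta_{2}\bm{\nu}_{t-1}=(1-\beta_{2})\|\bm{g}_{t}\|^{2}$ and
$\sqrt{\bm{\nu}_{t}}+\sqrt{\beta_{2}\bm{\nu}_{t-1}}\leq 2\sqrt{\bm{\nu}_{t}}$):

```python
import numpy as np

rng = np.random.default_rng(0)
beta2, nu_prev = 0.999, 1e-3   # nu_0 > 0
ok = True
for _ in range(10_000):
    g2 = rng.exponential(1.0)                # random squared gradient norm
    nu = beta2 * nu_prev + (1 - beta2) * g2  # Adam's second-moment update
    lhs = (1 - beta2) * g2 / (np.sqrt(beta2 * nu_prev) * nu)
    rhs = 2.0 * (1.0 / np.sqrt(beta2 * nu_prev) - 1.0 / np.sqrt(nu))
    ok &= lhs <= rhs + 1e-12
    nu_prev = nu
print("approximate telescoping bound holds on all steps:", ok)
```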
Accordingly, summing
$\eta\frac{\sigma_{0}^{2}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}\bm{\nu}_{t}}$
over $t$ yields a bound of
$\mathcal{O}(\eta\sigma_{0}^{2}\sum_{t}(1-\beta_{2})\frac{1}{\sqrt{\bm{\nu}_{t}}})$.
However, this term could potentially be unbounded since $\sqrt{\bm{\nu}_{t}}$
is not lower bounded. To circumvent this issue, we consider the first-order
Taylor expansion of the descent lemma, which gives the negative term
$-\sum_{t}\eta\frac{\|\nabla f(\bm{w}_{t})\|^{2}}{\sqrt{\bm{\nu}_{t}}}$.
Intuitively, if any $\|\nabla f(\bm{w}_{t})\|^{2}$ is of the order
$\mathcal{O}(\sigma_{0}^{2}(1-\beta_{2}))$, our proof would be completed since
we choose $1-\beta_{2}=\Theta(\varepsilon^{4})$. In the other case, the term
$\mathcal{O}(\eta\sigma_{0}^{2}\sum_{t}(1-\beta_{2})\frac{1}{\sqrt{\bm{\nu}_{t}}})$
can be offset by the negative term $-\sum_{t}\eta\frac{\|\nabla
f(\bm{w}_{t})\|^{2}}{\sqrt{\bm{\nu}_{t}}}$. However, formalizing this
intuition into a proof is challenging in the context of stochastic analysis,
where the randomness across iterations complicates the analysis. Specifically,
if we condition on the event that “no gradient norm is as small as
$\sigma_{0}^{2}(1-\beta_{2})$,” which depends on the randomness of all
iterations, it becomes difficult to express many expected values (such as
those from the first-order Taylor expansion) in closed form.
We address this difficulty by introducing a stopping time
$\tau\triangleq\min\\{t:\|\nabla
f(\bm{w}_{t+1})\|^{2}\leq\mathcal{O}(\sigma_{0}^{2}(1-\beta_{2}))\\}$. By
applying the optional stopping theorem (Durrett, 2019), we can maintain closed-
form expressions for the expected values up to the stopping time, allowing the
problematic error term to be absorbed within this interval. Building on this
methodology, we formulate the following theorem.
###### Theorem 5 (Informal).
Let Assumptions 1 and 2 hold. Then, $\forall 1>\beta_{1}\geq 0$, if
$\varepsilon\leq\frac{1}{\operatorname{Poly}(L_{0},L_{1},\sigma_{0},\sigma_{1},\frac{1}{1-\beta_{1}},f(\bm{w}_{1})-f^{*})}$,
with a proper choice of learning rate $\eta$ and momentum hyperparameter
$\beta_{2}$, we have that if
$T\geq\Theta(\frac{L_{0}\sigma_{0}^{2}(f(\bm{w}_{1})-f^{*})}{\varepsilon^{4}})$,
then
$\mathbb{E}\min_{t\in[1,T]}\|\nabla f(\bm{w}_{t})\|\leq\varepsilon.$
###### Proof.
Please see Appendix D.1 for the formal statement of theorem and the proof. ∎
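To make the stopping-time construction concrete, here is a toy sketch (ours;
a hypothetical noisy quadratic rather than the setting of the formal proof)
of how $\tau$ is realized along a single Adam trajectory:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma0, eta, beta1, beta2 = 1.0, 1e-2, 0.9, 0.99
thresh = sigma0**2 * (1 - beta2)   # O(sigma_0^2 (1 - beta_2)) threshold

def grad(w):        # true gradient of the toy objective f(w) = ||w||^2 / 2
    return w

def stoch_grad(w):  # unbiased oracle with bounded-variance Gaussian noise
    return w + sigma0 * rng.standard_normal(w.shape)

w, m, nu, tau = np.full(10, 5.0), np.zeros(10), 1e-8, None
for t in range(1, 200_001):
    g = stoch_grad(w)
    m = beta1 * m + (1 - beta1) * g
    nu = beta2 * nu + (1 - beta2) * float(g @ g)  # norm-version conditioner
    w = w - eta * m / np.sqrt(nu)
    # tau = min{ t : ||grad f(w_{t+1})||^2 <= sigma_0^2 (1 - beta_2) }
    if float(grad(w) @ grad(w)) <= thresh:
        tau = t
        break
print("stopping time tau =", tau)
# Up to tau the optional stopping theorem keeps the relevant expectations in
# closed form, which is where the problematic error term gets absorbed.
```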
One can easily see that the convergence rate of Theorem 5 matches the lower
bound in Arjevani et al. (2022) with respect to all problem hyperparameters up
to numerical constants even under the weaker $(L_{0},L_{1})$-smooth condition.
Therefore, such a rate is optimal and provides an affirmative answer to the
question raised in the beginning of this section.
One may notice that in the construction of the stopping time, we set the
threshold for the squared gradient norm to be $\mathcal{O}(1-\beta_{2})$. As
we set $1-\beta_{2}=\Theta(\varepsilon^{4})$, the threshold is actually much
smaller than what we aim for, since our goal is to have $\|\nabla
f(\bm{w}_{t})\|^{2}\leq\varepsilon^{2}$. Therefore, based on the stopping-
time technique, we can actually show that Adam can converge with an optimal
rate of $\mathcal{O}(\varepsilon^{-4})$ when $1-\beta_{2}=\varepsilon^{2}$, or
$1/\sqrt{T}$ if expressed in terms of the iteration number $T$. To the best of
our knowledge, this is the first time that Adam has been shown to converge
with an optimal rate under the condition that $1-\beta_{2}=\Omega(1/T)$, which
greatly enlarges the hyperparameter range. Moreover, as we select
$\eta=1/\sqrt{T}$, choosing $1-\beta_{2}=\Omega(1/T)$ has the advantage that
the update norm decreases with respect to $T$. This makes Adam parameter-
agnostic under the $(L_{0},L_{1})$-smooth condition, as the update norm will
eventually become smaller than $\frac{1}{L_{1}}$ as $T$ increases.
###### Theorem 6.
Let Assumptions 1 and 2 hold. Then, at the $t$-th iteration, setting
$\eta=\frac{1}{\sqrt{t}}$, $\beta_{2}=1-\frac{1}{\sqrt[4]{t^{3}}}$, we have
that Algorithm 1 satisfies
$\mathbb{E}\min_{t\in[1,T]}\|\nabla
f(\bm{w}_{t})\|\leq\tilde{\mathcal{O}}\left(\frac{1}{\sqrt[4]{T}}\right).$
It is shown in (Hübler et al., 2023) that Normed-SGDM is parameter-agnostic.
Here we show that Adam with a specific scheduler can achieve the same goal.
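A minimal sketch of the schedule in Theorem 6 (the toy 1D objective is our
own choice; only the $\eta$ and $\beta_{2}$ schedules come from the theorem)
illustrates the shrinking update norm:

```python
import numpy as np

def grad(w):  # toy (L0, L1)-smooth objective f(w) = cosh(w), f'(w) = sinh(w)
    return np.sinh(w)

w, m, nu, beta1 = 3.0, 0.0, 1e-8, 0.9
for t in range(1, 100_001):
    eta = 1.0 / np.sqrt(t)       # eta_t = 1 / sqrt(t)
    beta2 = 1.0 - t ** (-0.75)   # beta2_t = 1 - 1 / t^{3/4}
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g
    nu = beta2 * nu + (1 - beta2) * g * g
    update = eta * m / np.sqrt(nu)
    w -= update
    if t in (1, 10, 100, 1_000, 10_000, 100_000):
        print(f"t={t:6d}  |update|={abs(update):.2e}  |grad|={abs(grad(w)):.2e}")
# The update magnitude decays with t, so it eventually drops below 1/L_1
# whatever the problem constants are: the schedule is parameter-agnostic.
```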
## 6 Conclusion
In this paper, we have conducted a mathematical examination of the performance
of the Adam optimizer and SGDM within the context of non-uniform smoothness.
Our convergence analysis reveals that Adam exhibits a faster rate of
convergence compared to SGDM under these conditions. Moreover, we introduce a
novel stopping time technique that demonstrates Adam’s capability to achieve
the existing lower bounds for convergence rates. This finding underscores the
robustness of Adam in complex optimization landscapes and contributes to a
deeper understanding of its theoretical properties.
## Impact Statement
This paper investigates convergence of Adam and SGDM under non-uniform
smoothness. The main contributions of this paper are theoretical. Thus, in our
opinion, the paper raises no potential ethical or societal concerns.
## References
* Arjevani et al. (2022) Arjevani, Y., Carmon, Y., Duchi, J. C., Foster, D. J., Srebro, N., and Woodworth, B. Lower bounds for non-convex stochastic optimization. _Mathematical Programming_ , pp. 1–50, 2022.
* Armijo (1966) Armijo, L. Minimization of functions having Lipschitz continuous first partial derivatives. _Pacific Journal of Mathematics_ , 16(1):1–3, 1966.
* Brown et al. (2020) Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners, 2020.
* Carmon et al. (2017) Carmon, Y., Duchi, J. C., Hinder, O., and Sidford, A. Lower bounds for finding stationary points I. _arXiv preprint arXiv:1710.11606_ , 2017.
* Chen et al. (2023) Chen, Z., Zhou, Y., Liang, Y., and Lu, Z. Generalized-smooth nonconvex optimization is as efficient as smooth nonconvex optimization. _arXiv preprint arXiv:2303.02854_ , 2023.
* Chowdhery et al. (2022) Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., Schuh, P., Shi, K., Tsvyashchenko, S., Maynez, J., Rao, A., Barnes, P., Tay, Y., Shazeer, N., Prabhakaran, V., Reif, E., Du, N., Hutchinson, B., Pope, R., Bradbury, J., Austin, J., Isard, M., Gur-Ari, G., Yin, P., Duke, T., Levskaya, A., Ghemawat, S., Dev, S., Michalewski, H., Garcia, X., Misra, V., Robinson, K., Fedus, L., Zhou, D., Ippolito, D., Luan, D., Lim, H., Zoph, B., Spiridonov, A., Sepassi, R., Dohan, D., Agrawal, S., Omernick, M., Dai, A. M., Pillai, T. S., Pellat, M., Lewkowycz, A., Moreira, E., Child, R., Polozov, O., Lee, K., Zhou, Z., Wang, X., Saeta, B., Diaz, M., Firat, O., Catasta, M., Wei, J., Meier-Hellstern, K., Eck, D., Dean, J., Petrov, S., and Fiedel, N. Palm: Scaling language modeling with pathways, 2022.
* Crawshaw et al. (2022) Crawshaw, M., Liu, M., Orabona, F., Zhang, W., and Zhuang, Z. Robustness to unbounded smoothness of generalized signSGD. _arXiv preprint arXiv:2208.11195_ , 2022.
* Cutkosky & Mehta (2020) Cutkosky, A. and Mehta, H. Momentum improves normalized SGD. In _International conference on machine learning_ , pp. 2260–2268. PMLR, 2020.
* De et al. (2018) De, S., Mukherjee, A., and Ullah, E. Convergence guarantees for RMSProp and ADAM in non-convex optimization and an empirical comparison to Nesterov acceleration. _arXiv preprint arXiv:1807.06766_ , 2018.
* Défossez et al. (2022) Défossez, A., Bottou, L., Bach, F., and Usunier, N. A simple convergence proof of Adam and Adagrad. _Transactions on Machine Learning Research_ , 2022.
* Drori & Shamir (2020) Drori, Y. and Shamir, O. The complexity of finding stationary points with stochastic gradient descent. In _International Conference on Machine Learning_ , pp. 2658–2667. PMLR, 2020.
* Du et al. (2021) Du, Z., Qian, Y., Liu, X., Ding, M., Qiu, J., Yang, Z., and Tang, J. All NLP tasks are generation tasks: A general pretraining framework. _CoRR_ , abs/2103.10360, 2021. URL https://arxiv.org/abs/2103.10360.
* Durrett (2019) Durrett, R. _Probability: theory and examples_ , volume 49. Cambridge university press, 2019.
* Faw et al. (2022) Faw, M., Tziotis, I., Caramanis, C., Mokhtari, A., Shakkottai, S., and Ward, R. The power of adaptivity in SGD: Self-tuning step sizes with unbounded gradients and affine variance. In _Conference on Learning Theory_ , pp. 313–355. PMLR, 2022.
* Faw et al. (2023) Faw, M., Rout, L., Caramanis, C., and Shakkottai, S. Beyond uniform smoothness: A stopped analysis of adaptive sgd. _arXiv preprint arXiv:2302.06570_ , 2023.
* Guo et al. (2021) Guo, Z., Xu, Y., Yin, W., Jin, R., and Yang, T. A novel convergence analysis for algorithms of the Adam family. _arXiv preprint arXiv:2112.03459_ , 2021.
* He et al. (2023) He, M., Liang, Y., Liu, J., and Xu, D. Convergence of adam for non-convex objectives: Relaxed hyperparameters and non-ergodic case. _arXiv preprint arXiv:2307.11782_ , 2023.
* Hong & Lin (2023) Hong, Y. and Lin, J. High probability convergence of adam under unbounded gradients and affine variance noise. _arXiv preprint arXiv:2311.02000_ , 2023.
* Hübler et al. (2023) Hübler, F., Yang, J., Li, X., and He, N. Parameter-agnostic optimization under relaxed smoothness. _arXiv preprint arXiv:2311.03252_ , 2023.
* Jin et al. (2021) Jin, J., Zhang, B., Wang, H., and Wang, L. Non-convex distributionally robust optimization: Non-asymptotic analysis. _Advances in Neural Information Processing Systems_ , 34:2771–2782, 2021.
* Kingma & Ba (2014) Kingma, D. P. and Ba, J. Adam: A method for stochastic optimization, 2014.
* Li et al. (2023a) Li, H., Jadbabaie, A., and Rakhlin, A. Convergence of Adam under relaxed assumptions. _arXiv preprint arXiv:2304.13972_ , 2023a.
* Li et al. (2023b) Li, H., Qian, J., Tian, Y., Rakhlin, A., and Jadbabaie, A. Convex and non-convex optimization under generalized smoothness. _arXiv preprint arXiv:2306.01264_ , 2023b.
* Loshchilov & Hutter (2019) Loshchilov, I. and Hutter, F. Decoupled weight decay regularization. In _International Conference on Learning Representations_ , 2019.
* Mei et al. (2021) Mei, J., Gao, Y., Dai, B., Szepesvari, C., and Schuurmans, D. Leveraging non-uniformity in first-order non-convex optimization. In _International Conference on Machine Learning_ , pp. 7555–7564. PMLR, 2021.
* Nesterov et al. (2018) Nesterov, Y. et al. _Lectures on convex optimization_ , volume 137. Springer, 2018.
* Orabona & Pál (2016) Orabona, F. and Pál, D. Coin betting and parameter-free online learning. _Advances in Neural Information Processing Systems_ , 29, 2016.
* Orabona & Tommasi (2017) Orabona, F. and Tommasi, T. Training deep networks without learning rates through coin betting. _Advances in Neural Information Processing Systems_ , 30, 2017.
* Rae et al. (2021) Rae, J. W., Borgeaud, S., Cai, T., Millican, K., Hoffmann, J., Song, F., Aslanides, J., Henderson, S., Ring, R., Young, S., et al. Scaling language models: Methods, analysis & insights from training gopher. _arXiv preprint arXiv:2112.11446_ , 2021.
* Reddi et al. (2018) Reddi, S. J., Kale, S., and Kumar, S. On the convergence of adam and beyond. In _International Conference on Learning Representations_ , 2018.
* Reisizadeh et al. (2023) Reisizadeh, A., Li, H., Das, S., and Jadbabaie, A. Variance-reduced clipping for non-convex optimization. _arXiv preprint arXiv:2303.00883_ , 2023.
* Schneider et al. (2022) Schneider, F., Nado, Z., Agarwal, N., Dahl, G. E., and Hennig, P. HITY workshop poll, NeurIPS 2022. https://github.com/fsschneider/HITYWorkshopPoll, 2022.
* Shi et al. (2021) Shi, N., Li, D., Hong, M., and Sun, R. RMSprop converges with proper hyper-parameter. In _International Conference on Learning Representations_ , 2021.
* Sun et al. (2019) Sun, T., Yin, P., Li, D., Huang, C., Guan, L., and Jiang, H. Non-ergodic convergence analysis of heavy-ball algorithms. In _Proceedings of the AAAI Conference on Artificial Intelligence_ , volume 33, pp. 5033–5040, 2019.
* Sun et al. (2023) Sun, T., Chen, C., Qiao, P., Shen, L., Liu, X., and Li, D. Rethinking sign training: Provable nonconvex acceleration without first-and second-order gradient lipschitz. _arXiv preprint arXiv:2310.14616_ , 2023.
* Touvron et al. (2023) Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. Llama: Open and efficient foundation language models. _arXiv preprint arXiv:2302.13971_ , 2023.
* Wang et al. (2022) Wang, B., Zhang, Y., Zhang, H., Meng, Q., Ma, Z.-M., Liu, T.-Y., and Chen, W. Provable adaptivity in Adam. _arXiv preprint arXiv:2208.09900_ , 2022.
* Wang et al. (2023a) Wang, B., Fu, J., Zhang, H., Zheng, N., and Chen, W. Closing the gap between the upper bound and lower bound of adam’s iteration complexity. In _Thirty-seventh Conference on Neural Information Processing Systems_ , 2023a.
* Wang et al. (2023b) Wang, B., Zhang, H., Ma, Z., and Chen, W. Convergence of adagrad for non-convex objectives: Simple proofs and relaxed assumptions. In _The Thirty Sixth Annual Conference on Learning Theory_ , pp. 161–190. PMLR, 2023b.
* Ward et al. (2020) Ward, R., Wu, X., and Bottou, L. Adagrad stepsizes: Sharp convergence over nonconvex landscapes. _The Journal of Machine Learning Research_ , 21(1):9047–9076, 2020.
* Xie et al. (2023) Xie, C., Li, C., Zhang, C., Deng, Q., Ge, D., and Ye, Y. Trust region methods for nonconvex stochastic optimization beyond lipschitz smoothness. _arXiv preprint arXiv:2310.17319_ , 2023.
* Xing et al. (2021) Xing, Y., He, X., et al. On the convergence of msgd and adagrad for stochastic optimization. In _International Conference on Learning Representations_ , 2021.
* Yang et al. (2023) Yang, J., Li, X., Fatkhullin, I., and He, N. Two sides of one coin: the limits of untuned sgd and the power of adaptive methods. _arXiv preprint arXiv:2305.12475_ , 2023.
* Zaheer et al. (2018) Zaheer, M., Reddi, S., Sachan, D., Kale, S., and Kumar, S. Adaptive methods for nonconvex optimization. _Advances in neural information processing systems_ , 31, 2018.
* Zhang et al. (2020) Zhang, B., Jin, J., Fang, C., and Wang, L. Improved analysis of clipping algorithms for non-convex optimization. _Advances in Neural Information Processing Systems_ , 33:15511–15521, 2020.
* Zhang et al. (2019) Zhang, J., He, T., Sra, S., and Jadbabaie, A. Why gradient clipping accelerates training: A theoretical justification for adaptivity. In _International Conference on Learning Representations_ , 2019.
* Zhang et al. (2022a) Zhang, S., Roller, S., Goyal, N., Artetxe, M., Chen, M.-W., Chen, S., Dewan, C., Diab, M., Li, X., Lin, X. V., et al. Opt: Open pre-trained transformer language models. _arXiv preprint arXiv:2205.01068_ , 2022a.
* Zhang et al. (2022b) Zhang, Y., Chen, C., Shi, N., Sun, R., and Luo, Z.-Q. Adam can converge without any modification on update rules. _arXiv preprint arXiv:2208.09632_ , 2022b.
* Zou et al. (2019) Zou, F., Shen, L., Jie, Z., Zhang, W., and Liu, W. A sufficient condition for convergences of Adam and RMSProp. In _Proceedings of the IEEE/CVF conference on computer vision and pattern recognition_ , pp. 11127–11135, 2019.
## Appendix A Auxiliary Lemmas
In this section, we provide auxiliary results which will be used in the
subsequent proofs.
###### Lemma 1.
We have $\forall t\geq 1$,
$\|\bm{w}_{t+1}-\bm{w}_{t}\|\leq\eta\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}$.
###### Proof.
We have that
$\displaystyle\|\bm{w}_{t+1}-\bm{w}_{t}\|=\eta\frac{\|\bm{m}_{t}\|}{\sqrt{\bm{\nu}_{t}}}\leq\eta\frac{\sum_{i=0}^{t-1}(1-\beta_{1})\beta_{1}^{i}\|\bm{g}_{t-i}\|}{\sqrt{\sum_{i=0}^{t-1}(1-\beta_{2})\beta_{2}^{i}\|\bm{g}_{t-i}\|^{2}+\beta_{2}^{t}\bm{\nu}_{0}}}$
$\displaystyle\leq$
$\displaystyle\eta\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}}\frac{\sqrt{\sum_{i=0}^{t-1}\beta_{2}^{i}\|\bm{g}_{t-i}\|^{2}}\sqrt{\sum_{i=0}^{t-1}\frac{\beta_{1}^{2i}}{\beta_{2}^{i}}}}{\sqrt{\sum_{i=0}^{t-1}\beta_{2}^{i}\|\bm{g}_{t-i}\|^{2}}}\leq\eta\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}.$
Here the second inequality is due to the Cauchy-Schwarz inequality. The proof
is completed. ∎
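A quick numerical check of Lemma 1 (ours; norm-version Adam with a scalar
conditioner $\bm{\nu}_{t}$, $\bm{\nu}_{0}>0$, and gradients of wildly varying
scale):

```python
import numpy as np

rng = np.random.default_rng(2)
eta, beta1, beta2 = 0.1, 0.9, 0.99   # requires beta1^2 < beta2
bound = eta * (1 - beta1) / (np.sqrt(1 - beta2)
                             * np.sqrt(1 - beta1**2 / beta2))
m, nu, worst = np.zeros(5), 1e-6, 0.0
for _ in range(10_000):
    g = rng.standard_normal(5) * rng.exponential(5.0)
    m = beta1 * m + (1 - beta1) * g
    nu = beta2 * nu + (1 - beta2) * float(g @ g)
    worst = max(worst, eta * float(np.linalg.norm(m)) / np.sqrt(nu))
print(f"max update norm {worst:.4f} <= bound {bound:.4f}")
```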
The following lemma provides a novel descent lemma under
$(L_{0},L_{1})$-smooth condition.
###### Lemma 2.
Let Assumption 1 hold. Then, for any three points
$\bm{w}^{1},\bm{w}^{2},\bm{w}^{3}\in\mathcal{X}$ satisfying
$\|\bm{w}^{1}-\bm{w}^{2}\|\leq\frac{1}{2L_{1}}$ and
$\|\bm{w}^{1}-\bm{w}^{3}\|\leq\frac{1}{2L_{1}}$, we have
$f(\bm{w}^{2})\leq f(\bm{w}^{3})+\langle\nabla
f(\bm{w}^{1}),\bm{w}^{2}-\bm{w}^{3}\rangle+\frac{1}{2}(L_{0}+L_{1}\|\nabla
f(\bm{w}^{1})\|)\|\bm{w}^{2}-\bm{w}^{3}\|(\|\bm{w}^{1}-\bm{w}^{3}\|+\|\bm{w}^{1}-\bm{w}^{2}\|).$
###### Proof.
By the Fundamental Theorem of Calculus, we have
$\displaystyle f(\bm{w}^{2})=$ $\displaystyle
f(\bm{w}^{3})+\int_{0}^{1}\langle\nabla
f(\bm{w}^{3}+a(\bm{w}^{2}-\bm{w}^{3})),\bm{w}^{2}-\bm{w}^{3}\rangle\mathrm{d}a$
$\displaystyle=$ $\displaystyle f(\bm{w}^{3})+\langle\nabla
f(\bm{w}^{1}),\bm{w}^{2}-\bm{w}^{3}\rangle+\int_{0}^{1}\langle\nabla
f(\bm{w}^{3}+a(\bm{w}^{2}-\bm{w}^{3}))-\nabla
f(\bm{w}^{1}),\bm{w}^{2}-\bm{w}^{3}\rangle\mathrm{d}a$ $\displaystyle\leq$
$\displaystyle f(\bm{w}^{3})+\langle\nabla
f(\bm{w}^{1}),\bm{w}^{2}-\bm{w}^{3}\rangle+\int_{0}^{1}\|\nabla
f(\bm{w}^{3}+a(\bm{w}^{2}-\bm{w}^{3}))-\nabla
f(\bm{w}^{1})\|\|\bm{w}^{2}-\bm{w}^{3}\|\mathrm{d}a$
$\displaystyle\overset{(\star)}{\leq}$ $\displaystyle
f(\bm{w}^{3})+\langle\nabla
f(\bm{w}^{1}),\bm{w}^{2}-\bm{w}^{3}\rangle+\int_{0}^{1}(L_{0}+L_{1}\|\nabla
f(\bm{w}^{1})\|)\|a(\bm{w}^{2}-\bm{w}^{1})+(1-a)(\bm{w}^{3}-\bm{w}^{1})\|\|\bm{w}^{2}-\bm{w}^{3}\|\mathrm{d}a$
$\displaystyle\leq$ $\displaystyle f(\bm{w}^{3})+\langle\nabla
f(\bm{w}^{1}),\bm{w}^{2}-\bm{w}^{3}\rangle+\frac{1}{2}(L_{0}+L_{1}\|\nabla
f(\bm{w}^{1})\|)\|\bm{w}^{2}-\bm{w}^{3}\|(\|\bm{w}^{1}-\bm{w}^{3}\|+\|\bm{w}^{1}-\bm{w}^{2}\|),$
where Inequality $(\star)$ holds because, due to
$\|\bm{w}^{3}+a(\bm{w}^{2}-\bm{w}^{3})-\bm{w}^{1}\|=\|a(\bm{w}^{2}-\bm{w}^{1})+(1-a)(\bm{w}^{3}-\bm{w}^{1})\|\leq\frac{1}{L_{1}},$
the definition of the $(L_{0},L_{1})$-smooth condition can be applied.
The proof is completed. ∎
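As a sanity check of Lemma 2 (ours, not part of the paper), the inequality
can be verified numerically on $f(\bm{w})=\sum_{i}\cosh(w_{i})$, which
satisfies the $(L_{0},L_{1})$-smooth condition for the slightly generous
choice $L_{0}=L_{1}=2$:

```python
import numpy as np

rng = np.random.default_rng(3)
L0 = L1 = 2.0   # generous constants covering f(w) = sum_i cosh(w_i)
f = lambda w: float(np.sum(np.cosh(w)))
grad = lambda w: np.sinh(w)

ok = True
for _ in range(10_000):
    w1 = rng.uniform(-3, 3, size=4)
    # w2, w3 within distance 1/(2 L1) of w1, as Lemma 2 requires
    w2 = w1 + rng.uniform(-1, 1, 4) * (0.5 / L1) / 2
    w3 = w1 + rng.uniform(-1, 1, 4) * (0.5 / L1) / 2
    lhs = f(w2)
    rhs = (f(w3) + float(grad(w1) @ (w2 - w3))
           + 0.5 * (L0 + L1 * float(np.linalg.norm(grad(w1))))
           * float(np.linalg.norm(w2 - w3))
           * (float(np.linalg.norm(w1 - w3)) + float(np.linalg.norm(w1 - w2))))
    ok &= lhs <= rhs + 1e-9
print("generalized descent lemma holds on all samples:", ok)
```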
The following lemma is helpful when bounding the second-order term.
###### Lemma 3.
Assume we have $0<\beta_{1}^{2}<\beta_{2}<1$ and a sequence of real numbers
$(a_{n})_{n=1}^{\infty}$. Let $b_{0}>0$,
$b_{n}=\beta_{2}b_{n-1}+(1-\beta_{2})a_{n}^{2}$, $c_{0}=0$, and
$c_{n}=\beta_{1}c_{n-1}+(1-\beta_{1})a_{n}$. Then, we have
$\sum_{n=1}^{T}\frac{|c_{n}|^{2}}{b_{n}}\leq\frac{(1-\beta_{1})^{2}}{(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})^{2}(1-\beta_{2})}\left(\ln\left(\frac{b_{T}}{b_{0}}\right)-T\ln\beta_{2}\right).$
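A quick numerical check of Lemma 3 on a random sequence (our own sanity
check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(4)
beta1, beta2 = 0.9, 0.95   # 0 < beta1^2 < beta2 < 1
T, b0 = 1000, 1e-2
b, c, lhs = b0, 0.0, 0.0
for _ in range(T):
    a = 3.0 * rng.standard_normal()
    b = beta2 * b + (1 - beta2) * a * a
    c = beta1 * c + (1 - beta1) * a
    lhs += c * c / b
rhs = ((1 - beta1) ** 2
       / ((1 - beta1 / np.sqrt(beta2)) ** 2 * (1 - beta2))
       * (np.log(b / b0) - T * np.log(beta2)))
print(f"lhs = {lhs:.2f} <= rhs = {rhs:.2f}: {lhs <= rhs}")
```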
###### Lemma 4.
If $\beta_{2}\geq\beta_{1}$, then we have
$\frac{\|\bm{m}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}})^{3}}\leq
4(1-\beta_{1})\left(\sum_{s=1}^{t}\sqrt[4]{\beta_{1}^{t-s}}\frac{2}{1-\beta_{2}}\left(\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{s-1}}}-\frac{1}{\sqrt{\bm{\nu}_{s}}}\right)\right).$
###### Proof.
To begin with, we have
$\displaystyle\frac{\|\bm{m}_{t}\|}{\sqrt[4]{\bm{\nu}_{t}^{3}}}\leq(1-\beta_{1})\sum_{s=1}^{t}\frac{\beta_{1}^{t-s}\|\bm{g}_{s}\|}{\sqrt[4]{\bm{\nu}_{t}^{3}}}\leq(1-\beta_{1})\sum_{s=1}^{t}\frac{\beta_{1}^{t-s}\|\bm{g}_{s}\|}{\sqrt[4]{\beta_{2}^{3(t-s)}}\sqrt[4]{\bm{\nu}_{s}^{3}}}.$
Here in the last inequality we use
$\bm{\nu}_{t}\geq\beta_{2}^{t-s}\bm{\nu}_{s}$.
By further applying the Cauchy-Schwarz inequality, we obtain
$\displaystyle\frac{\|\bm{m}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}^{3}}}\leq$
$\displaystyle(1-\beta_{1})^{2}\left(\sum_{s=1}^{t}\frac{\beta_{1}^{t-s}\|\bm{g}_{s}\|^{2}}{\sqrt[4]{\beta_{2}^{3(t-s)}}\sqrt{\bm{\nu}_{s}^{3}}}\right)\left(\sum_{s=1}^{t}\frac{\beta_{1}^{t-s}}{\sqrt[4]{\beta_{2}^{3(t-s)}}}\right)$
$\displaystyle\leq$
$\displaystyle\frac{(1-\beta_{1})^{2}}{1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}^{3}}}}\left(\sum_{s=1}^{t}\frac{\beta_{1}^{t-s}\|\bm{g}_{s}\|^{2}}{\sqrt[4]{\beta_{2}^{3(t-s)}}\sqrt{\bm{\nu}_{s}^{3}}}\right)$
$\displaystyle\leq$ $\displaystyle
4(1-\beta_{1})\left(\sum_{s=1}^{t}\frac{\beta_{1}^{t-s}\|\bm{g}_{s}\|^{2}}{\sqrt[4]{\beta_{2}^{3(t-s)}}\sqrt{\bm{\nu}_{s}^{3}}}\right).$
As
$\frac{\|\bm{g}_{s}\|^{2}}{\sqrt{\bm{\nu}_{s}^{3}}}\leq\frac{2\|\bm{g}_{s}\|^{2}}{\sqrt{\bm{\nu}_{s}}\sqrt{\beta_{2}\bm{\nu}_{s-1}}(\sqrt{\bm{\nu}_{s}}+\sqrt{\beta_{2}\bm{\nu}_{s-1}})}=\frac{2}{1-\beta_{2}}\left(\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{s-1}}}-\frac{1}{\sqrt{\bm{\nu}_{s}}}\right)$,
the proof is completed. ∎
###### Lemma 5.
If $\beta_{2}\geq\beta_{1}$, then we have
$\frac{\|\bm{m}_{t}\|^{2}\|\bm{G}_{t}\|^{2}}{\bm{\nu}_{t}\sqrt{\beta_{2}\bm{\nu}_{t-1}}}\leq
4(1-\beta_{1})\left(\sum_{s=1}^{t}\frac{\sqrt[8]{\beta_{1}^{t-s}}\|\bm{g}_{s}\|^{2}\|\bm{G}_{s}\|^{2}}{\bm{\nu}_{s}\sqrt{\beta_{2}\bm{\nu}_{s-1}}}\right)+8\frac{1-\beta_{1}}{1-\beta_{2}}\frac{L_{1}^{2}}{L_{0}^{2}}\left(\sum_{s=1}^{t}\sqrt[8]{\beta_{1}^{t-s}}\left(\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{s-1}}}-\frac{1}{\sqrt{\bm{\nu}_{s}}}\right)\right).$
###### Proof.
Similar to the proof of Lemma 4, we have
$\displaystyle\frac{\|\bm{m}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}\bm{\nu}_{t}}\leq$
$\displaystyle
4(1-\beta_{1})\left(\sum_{s=1}^{t}\frac{\beta_{1}^{t-s}\|\bm{g}_{s}\|^{2}}{\sqrt[4]{\beta_{2}^{3(t-s)}}\sqrt{\beta_{2}\bm{\nu}_{s-1}}\bm{\nu}_{s}}\right).$
(1)
Meanwhile, according to Assumption 1, we have
$\displaystyle\|\bm{G}_{t}\|^{2}\leq$
$\displaystyle\|\bm{G}_{t-1}\|^{2}+2\|\bm{G}_{t-1}\|\|\bm{G}_{t}-\bm{G}_{t-1}\|+\|\bm{G}_{t}-\bm{G}_{t-1}\|^{2}$
$\displaystyle\leq$
$\displaystyle\|\bm{G}_{t-1}\|^{2}+2\|\bm{G}_{t-1}\|(L_{0}+L_{1}\|\bm{G}_{t-1}\|)\|\bm{w}_{t+1}-\bm{w}_{t}\|+2(L_{0}^{2}+L_{1}^{2}\|\bm{G}_{t-1}\|^{2})\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2}$
$\displaystyle\leq$
$\displaystyle\|\bm{G}_{t-1}\|^{2}+\frac{1-\sqrt[8]{\beta_{1}}}{3\sqrt[8]{\beta_{1}}}\|\bm{G}_{t-1}\|^{2}+\frac{3\sqrt[8]{\beta_{1}}L_{0}^{2}}{1-\sqrt[8]{\beta_{1}}}\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2}+2L_{1}\|\bm{G}_{t-1}\|^{2}\|\bm{w}_{t+1}-\bm{w}_{t}\|$
$\displaystyle+2(L_{0}^{2}+L_{1}^{2}\|\bm{G}_{t-1}\|^{2})\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2}$
$\displaystyle\overset{(\star)}{\leq}$
$\displaystyle\|\bm{G}_{t-1}\|^{2}+\frac{1-\sqrt[8]{\beta_{1}}}{3\sqrt[8]{\beta_{1}}}\|\bm{G}_{t-1}\|^{2}+\frac{1-\sqrt[8]{\beta_{1}}}{2}\frac{L_{0}^{2}}{L_{1}^{2}}+\frac{1-\sqrt[8]{\beta_{1}}}{3\sqrt[8]{\beta_{1}}}\|\bm{G}_{t-1}\|^{2}$
$\displaystyle+\frac{1-\sqrt[8]{\beta_{1}}}{2}\frac{L_{0}^{2}}{L_{1}^{2}}+\frac{1-\sqrt[8]{\beta_{1}}}{3\sqrt[8]{\beta_{1}}}\|\bm{G}_{t-1}\|^{2}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\sqrt[8]{\beta_{1}}}\|\bm{G}_{t-1}\|^{2}+(1-\sqrt[8]{\beta_{1}})\frac{L_{1}^{2}}{L_{0}^{2}}.$
Here inequality $(\star)$ is because
$\|\bm{w}_{t+1}-\bm{w}_{t}\|\leq\frac{1-\sqrt[8]{\beta_{1}}}{6L_{1}}$.
Recursively applying the above inequality, we obtain that
$\displaystyle\|\bm{G}_{t}\|^{2}\leq\frac{1}{\sqrt[8]{\beta_{1}^{t-s}}}\|\bm{G}_{s}\|^{2}+\left(\left(\frac{1}{\sqrt[8]{\beta_{1}}}\right)^{t-s}-1\right)\frac{L_{1}^{2}}{L_{0}^{2}},$
which by Eq. (1) further gives
$\displaystyle\frac{\|\bm{m}_{t}\|^{2}\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}\bm{\nu}_{t}}\leq$
$\displaystyle
4(1-\beta_{1})\left(\sum_{s=1}^{t}\frac{\beta_{1}^{t-s}\|\bm{g}_{s}\|^{2}\|\bm{G}_{t}\|^{2}}{\sqrt[4]{\beta_{2}^{3(t-s)}}\bm{\nu}_{s}\sqrt{\beta_{2}\bm{\nu}_{s-1}}}\right)$
$\displaystyle\leq$ $\displaystyle
4(1-\beta_{1})\left(\sum_{s=1}^{t}\frac{\sqrt[8]{\beta_{1}^{t-s}}\|\bm{g}_{s}\|^{2}\|\bm{G}_{s}\|^{2}}{\bm{\nu}_{s}\sqrt{\beta_{2}\bm{\nu}_{s-1}}}+\sum_{s=1}^{t}\frac{\sqrt[8]{\beta_{1}^{t-s}}\|\bm{g}_{s}\|^{2}}{\bm{\nu}_{s}\sqrt{\beta_{2}\bm{\nu}_{s-1}}}\frac{L_{1}^{2}}{L_{0}^{2}}\right)$
$\displaystyle\leq$ $\displaystyle
4(1-\beta_{1})\left(\sum_{s=1}^{t}\frac{\sqrt[8]{\beta_{1}^{t-s}}\|\bm{g}_{s}\|^{2}\|\bm{G}_{s}\|^{2}}{\bm{\nu}_{s}\sqrt{\beta_{2}\bm{\nu}_{s-1}}}\right)+8\frac{1-\beta_{1}}{1-\beta_{2}}\frac{L_{1}^{2}}{L_{0}^{2}}\left(\sum_{s=1}^{t}\sqrt[8]{\beta_{1}^{t-s}}\left(\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{s-1}}}-\frac{1}{\sqrt{\bm{\nu}_{s}}}\right)\right).$
Here the last inequality follows from the same reasoning as in Lemma 4.
The proof is completed. ∎
## Appendix B Proofs for deterministic algorithms
### B.1 Proof for deterministic Adam
We will first provide the formal statement of Theorem 1, and then show the
corresponding proof.
###### Theorem 7 (Theorem 1, restated).
Let Assumption 1 hold. Then, $\forall\beta_{1},\beta_{2}$ satisfying
$0\leq\beta_{1}^{2}<\beta_{2}<1$, if
$T>\frac{L_{1}^{2}(f(\bm{w}_{1})-f^{*})(1-\frac{\beta_{1}^{2}}{\beta_{2}})}{L_{0}(1-\beta_{1})^{2}}$,
picking
$\eta=\frac{\sqrt{f(\bm{w}_{1})-f^{*}}\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}{\sqrt{TL_{0}}(1-\beta_{1})}$,
we have
$\frac{1}{T}\sum_{t=1}^{T}\|\nabla
f(\bm{w}_{t})\|\leq\frac{64}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})\left(1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}\right)^{2}}\left(\frac{\sqrt{L_{0}(f(\bm{w}_{1})-f^{*})}}{\sqrt{T}}\right).$
###### Proof.
To begin with, according to Lemma 1 and the restriction on the value of $T$,
we obtain that for all $t\in\mathbb{N}$ with $t\geq 1$,
$\|\bm{w}_{t+1}-\bm{w}_{t}\|\leq\frac{1}{4L_{1}}.$
Therefore, the descent lemma (Lemma 2) can be applied, and thus for all
$t\in\mathbb{N}$ with $t\geq 1$,
$f(\bm{w}_{t+1})\leq
f(\bm{w}_{t})\underbrace{-\eta\left\langle\bm{G}_{t},\frac{\bm{m}_{t}}{\lambda+\sqrt{\bm{\nu}_{t}}}\right\rangle}_{\text{First
Order}}+\underbrace{\eta^{2}\frac{L_{0}+L_{1}\|\bm{G}_{t}\|}{2}\frac{\|\bm{m}_{t}\|^{2}}{(\lambda+\sqrt{\bm{\nu}_{t}})^{2}}}_{\text{Second
Order}}.$
As for the “First Order” term, according to
$\bm{m}_{t}=\beta_{1}\bm{m}_{t-1}+(1-\beta_{1})\bm{G}_{t}$ we have that
$\displaystyle-\eta\left\langle\bm{G}_{t},\frac{\bm{m}_{t}}{\lambda+\sqrt{\bm{\nu}_{t}}}\right\rangle=$
$\displaystyle-\eta\frac{1}{1-\beta_{1}}\left\langle\bm{m}_{t},\frac{\bm{m}_{t}}{\lambda+\sqrt{\bm{\nu}_{t}}}\right\rangle+\eta\frac{\beta_{1}}{1-\beta_{1}}\left\langle\bm{m}_{t-1},\frac{\bm{m}_{t}}{\lambda+\sqrt{\bm{\nu}_{t}}}\right\rangle$
$\displaystyle\overset{(\star)}{\leq}$
$\displaystyle-\eta\frac{1}{1-\beta_{1}}\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}+\eta\frac{\beta_{1}}{(1-\beta_{1})\sqrt[4]{\beta_{2}}}\left\langle\bm{m}_{t-1},\frac{\bm{m}_{t}}{\sqrt{\lambda+\sqrt{\bm{\nu}_{t}}}\sqrt{\lambda+\sqrt{\bm{\nu}_{t-1}}}}\right\rangle$
$\displaystyle\overset{(\ast)}{\leq}$
$\displaystyle-\eta\frac{1}{1-\beta_{1}}\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}+{\frac{\beta_{1}}{2(1-\beta_{1})\sqrt[4]{\beta_{2}}}}\eta\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}+{\frac{\beta_{1}}{2(1-\beta_{1})\sqrt[4]{\beta_{2}}}}\eta\frac{\|\bm{m}_{t-1}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t-1}}}$
$\displaystyle=$
$\displaystyle-\eta\frac{1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}}{1-\beta_{1}}\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}-{\frac{\beta_{1}}{2(1-\beta_{1})\sqrt[4]{\beta_{2}}}}\eta\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}+{\frac{\beta_{1}}{2(1-\beta_{1})\sqrt[4]{\beta_{2}}}}\eta\frac{\|\bm{m}_{t-1}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t-1}}}.$
where inequality $(\star)$ is due to that
$\sqrt{\bm{\nu}_{t}}\geq\sqrt{\beta_{2}\bm{\nu}_{t-1}}$ and inequality
$(\ast)$ is due to Young’s inequality.
Meanwhile, as for the “Second Order” term, we have
$\displaystyle\eta^{2}\frac{L_{0}+L_{1}\|\bm{G}_{t}\|}{2}\frac{\|\bm{m}_{t}\|^{2}}{(\lambda+\sqrt{\bm{\nu}_{t}})^{2}}\overset{(\bullet)}{\leq}$
$\displaystyle
L_{0}\eta^{2}\frac{(1-\beta_{1})^{2}}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})}+\frac{L_{1}\eta^{2}}{\sqrt{1-\beta_{2}}}\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}$
$\displaystyle\overset{(\circ)}{\leq}$ $\displaystyle
L_{0}\eta^{2}\frac{(1-\beta_{1})^{2}}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})}+\frac{\eta}{2}\frac{1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}}{1-\beta_{1}}\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}.$
Here inequality $(\bullet)$ is due to Lemma 1 and
$\bm{\nu}_{t}\geq(1-\beta_{2})\|\bm{G}_{t}\|^{2},$
and inequality $(\circ)$ is due to the requirement over $T$.
Applying the estimations of both the “First Order” and the “Second Order”
terms, we obtain that
$\displaystyle f(\bm{w}_{t+1})-f(\bm{w}_{t})\leq$
$\displaystyle-\frac{\eta}{2}\frac{1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}}{1-\beta_{1}}\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}-{\frac{\beta_{1}}{2(1-\beta_{1})\sqrt[4]{\beta_{2}}}}\eta\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}+{\frac{\beta_{1}}{2(1-\beta_{1})\sqrt[4]{\beta_{2}}}}\eta\frac{\|\bm{m}_{t-1}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t-1}}}$
$\displaystyle+L_{0}\eta^{2}\frac{(1-\beta_{1})^{2}}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})}.$
Summing the above inequality over $t\in\\{1,\cdots,T\\}$ then gives
$\displaystyle\sum_{t=1}^{T}\frac{\eta}{2}\frac{1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}}{1-\beta_{1}}\frac{\|\bm{m}_{t}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{t}}}$
(2) $\displaystyle\leq$ $\displaystyle
f(\bm{w}_{1})-f(\bm{w}_{T+1})-{\frac{\beta_{1}}{2(1-\beta_{1})\sqrt[4]{\beta_{2}}}}\eta\frac{\|\bm{m}_{T}\|^{2}}{\lambda+\sqrt{\bm{\nu}_{T}}}+TL_{0}\eta^{2}\frac{(1-\beta_{1})^{2}}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})}$
$\displaystyle\leq$ $\displaystyle
f(\bm{w}_{1})-f(\bm{w}_{T+1})+TL_{0}\eta^{2}\frac{(1-\beta_{1})^{2}}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})}.$
Furthermore, as $(1-\beta_{1})\bm{G}_{t}=\bm{m}_{t}-\beta_{1}\bm{m}_{t-1}$, we
have that
$\|\bm{G}_{t}\|^{2}\leq\frac{1}{(1-\beta_{1})^{2}}\|\bm{m}_{t}\|^{2}+\frac{1}{(1-\beta_{1})^{2}}\|\bm{m}_{t-1}\|^{2}.$
Applying the above inequality and $\lambda=0$ to Eq. (2), we obtain that
$\sum_{t=1}^{T}\frac{\eta}{4}\left(1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}\right)(1-\beta_{1})\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}}}\leq
f(\bm{w}_{1})-f(\bm{w}_{T+1})+TL_{0}\eta^{2}\frac{(1-\beta_{1})^{2}}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})}.$
Meanwhile, we have
$\sqrt{\bm{\nu}_{t}}-\sqrt{\beta_{2}\bm{\nu}_{t-1}}=\frac{(1-\beta_{2})\|\bm{G}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}}+\sqrt{\beta_{2}\bm{\nu}_{t-1}}}\leq(1-\beta_{2})\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}}}.$
Therefore, applying the above inequality and dividing both sides by $\eta$, we
have
$\frac{1}{4}\left(1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}\right)(1-\beta_{1})\sum_{t=1}^{T}(\sqrt{\bm{\nu}_{t}}-\sqrt{\beta_{2}\bm{\nu}_{t-1}})\leq\frac{f(\bm{w}_{1})-f(\bm{w}_{T+1})}{\eta}+TL_{0}\eta\frac{(1-\beta_{1})^{2}}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})},$
which by telescoping further leads to
$\frac{1}{4}\left(1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}\right)(1-\beta_{1})\sum_{t=1}^{T}(1-\beta_{2})\sqrt{\bm{\nu}_{t}}\leq\frac{f(\bm{w}_{1})-f(\bm{w}_{T+1})}{\eta}+TL_{0}\eta\frac{(1-\beta_{1})^{2}}{(1-\beta_{2})(1-\frac{\beta_{1}^{2}}{\beta_{2}})}.$
According to the Cauchy-Schwarz inequality, we then obtain
$\displaystyle\left(\sum_{t=1}^{T}\|\bm{G}_{t}\|\right)^{2}\leq$
$\displaystyle\left(\sum_{t=1}^{T}\sqrt{\bm{\nu}_{t}}\right)\left(\sum_{t=1}^{T}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}}}\right)$
$\displaystyle\leq$
$\displaystyle\frac{1}{1-\beta_{2}}\left(\frac{4(f(\bm{w}_{1})-f(\bm{w}_{T+1}))}{\eta\left(1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}\right)(1-\beta_{1})}+TL_{0}\eta\frac{(1-\beta_{1})}{(1-\beta_{2})\left(1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}\right)(1-\frac{\beta_{1}^{2}}{\beta_{2}})}\right)^{2}$
$\displaystyle\leq$
$\displaystyle\frac{1}{1-\beta_{2}}\left(\frac{4(f(\bm{w}_{1})-f(\bm{w}_{T+1}))}{\eta\left(1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}\right)(1-\beta_{1})}+4TL_{0}\eta\frac{(1-\beta_{1})}{(1-\beta_{2})\left(1-\frac{\beta_{1}}{\sqrt[4]{\beta_{2}}}\right)(1-\frac{\beta_{1}^{2}}{\beta_{2}})}\right)^{2}.$
The proof is completed by applying the value of $\eta$. ∎
### B.2 Proof for GDM
This section collects the proof of Theorem 2. To begin with, given problem
hyperparameters $\Delta_{1}$, $\varepsilon$, $L_{0}$, and $L_{1}$, we
construct three 1D functions as follows:
$f_{1}(x)=\left\\{\begin{aligned}
&\frac{L_{0}e^{L_{1}x-1}}{L_{1}^{2}}&,x\in\left[\frac{1}{L_{1}},\infty\right),\\\
&\frac{L_{0}x^{2}}{2}+\frac{L_{0}}{2L_{1}^{2}}&,x\in[-\frac{1}{L_{1}},\frac{1}{L_{1}}],\\\
&\frac{L_{0}e^{-L_{1}x-1}}{L_{1}^{2}}&,x\in\left(-\infty,-\frac{1}{L_{1}}\right].\end{aligned}\right.$
(3) $f_{2}(y)=\left\\{\begin{aligned}
&\varepsilon(y-1)+\frac{\varepsilon}{2}&,y\in[1,\infty),\\\
&\frac{\varepsilon}{2}y^{2}&,y\in[-1,1],\\\
&-\varepsilon(y+1)+\frac{\varepsilon}{2}&,y\in(-\infty,-1].\end{aligned}\right.$
(4) $f_{3}(z)=\left\\{\begin{aligned}
&\varepsilon\left(z-\frac{1}{L_{1}}\right)+\frac{\varepsilon}{2L_{1}}+\frac{L_{0}}{2L_{1}^{2}}&,z\in\left[\frac{1}{L_{1}},\infty\right),\\\
&\frac{\varepsilon
L_{1}}{2}z^{2}+\frac{L_{0}}{2L_{1}^{2}}&,z\in[0,\frac{1}{L_{1}}],\\\
&\frac{L_{0}z^{2}}{2}+\frac{L_{0}}{2L_{1}^{2}}&,z\in[-\frac{1}{L_{1}},0],\\\
&\frac{L_{0}e^{-L_{1}z-1}}{L_{1}^{2}}&,z\in\left(-\infty,-\frac{1}{L_{1}}\right].\end{aligned}\right.$
(5)
It is easy to verify that these functions satisfy the $(L_{0},L_{1})$-smooth
condition as long as $\varepsilon\leq L_{0}$. We then respectively analyze the
convergence of GDM over these three examples under different learning rates
and momentum coefficients (a runnable sketch of the three instances is given
below).
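The sketch below (ours; derivatives read off the piecewise definitions above,
with $L_{0}=L_{1}=1$ and $\varepsilon=0.1$) implements the three instances and
runs GDM on them:

```python
import numpy as np

L0, L1, eps = 1.0, 1.0, 0.1

def df1(x):  # f_1': exponential walls around a quadratic well
    if x >= 1 / L1:
        return (L0 / L1) * np.exp(L1 * x - 1)
    if x <= -1 / L1:
        return -(L0 / L1) * np.exp(-L1 * x - 1)
    return L0 * x

def df2(y):  # f_2': eps-sloped valley
    return eps * float(np.clip(y, -1.0, 1.0))

def df3(z):  # f_3': exponential wall on the left, flat eps slope on the right
    if z >= 1 / L1:
        return eps
    if z >= 0:
        return eps * L1 * z
    if z >= -1 / L1:
        return L0 * z
    return -(L0 / L1) * np.exp(-L1 * z - 1)

def gdm_min_grad(df, x1, eta, beta, T):
    x, m, best = x1, 0.0, abs(df(x1))
    for _ in range(T):
        m = beta * m + (1 - beta) * df(x)
        x = float(np.clip(x - eta * m, -500.0, 500.0))  # float-overflow guard
        best = min(best, abs(df(x)))
    return best

# Each instance defeats one hyperparameter regime (cf. Lemmas 6-8 below):
for eta, beta in [(5.0, 0.0), (0.01, 0.0), (5.0, 0.999)]:
    print(f"eta={eta}, beta={beta}: "
          f"f1 {gdm_min_grad(df1, 5.0, eta, beta, 2000):.3g}, "
          f"f2 {gdm_min_grad(df2, 50.0, eta, beta, 2000):.3g}, "
          f"f3 {gdm_min_grad(df3, -5.0, eta, beta, 2000):.3g}")
```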
###### Lemma 6 (Convergence over $f_{1}$).
Assume $\Delta_{1}\geq\frac{L_{0}}{L_{1}^{2}}(e-\frac{1}{2})$,
$\varepsilon\leq 1$, and let
$x_{1}=\frac{1+\log(\frac{1}{2}+\frac{L_{1}^{2}}{L_{0}}\Delta_{1})}{L_{1}}$.
Then, we have $f_{1}(x_{1})-f_{1}^{*}=\Delta_{1}$, and if
$\eta\geq\frac{(5+8\log\frac{1}{\varepsilon})(1+\log(\frac{1}{2}+\frac{L_{1}^{2}}{L_{0}}\Delta_{1}))}{L_{1}^{2}(\Delta_{1}+\frac{L_{0}}{2L_{1}^{2}})}$
and $\beta\leq
1-2\left(\frac{L_{1}^{2}}{L_{0}}e\right)^{-4\log\frac{1}{\varepsilon}-2}(\Delta_{1}+\frac{L_{0}}{2L_{1}^{2}})^{-4\log\frac{1}{\varepsilon}-2}$,
we have that GDM satisfies that $\forall t\in[1,\infty)$,
$|f_{1}^{\prime}(x_{t})|\geq L_{1}\Delta_{1}$.
###### Proof.
We prove by induction that $\forall k\geq 1$,
$|x_{k+1}|\geq(4+8\log\frac{1}{\varepsilon})|x_{k}|$ and
$\operatorname{Sign}(x_{k+1})=(-1)^{k}$. When $k=1$, according
to the update rule of GDM, we have
$x_{2}=x_{1}-\eta f_{1}^{\prime}(x_{1}).$
As
$\eta\geq\frac{(5+8\log\frac{1}{\varepsilon})(1+\log(\frac{1}{2}+\frac{L_{1}^{2}}{L_{0}}\Delta_{1}))}{L_{1}^{2}(\Delta_{1}+\frac{L_{0}}{2L_{1}^{2}})}=\frac{(5+8\log\frac{1}{\varepsilon})x_{1}}{f_{1}^{\prime}(x_{1})}$,
we have
$x_{2}\leq-(4+8\log\frac{1}{\varepsilon})x_{1},$
which leads to the claim.
Now assume that the claim has been proved for $k\leq t-1$ ($t\geq 2$). Then,
for $k=t$, the induction hypothesis gives
$x_{t+1}=x_{t}-\eta\bm{m}_{t}=x_{t}-\eta\left(\beta^{t}f_{1}^{\prime}(x_{1})+(1-\beta)\sum_{s=1}^{t-1}\beta^{t-s}f_{1}^{\prime}(x_{s})+(1-\beta)f_{1}^{\prime}(x_{t})\right).$
Without loss of generality, we assume $t$ is even. By the induction
hypothesis, we obtain that $f_{1}^{\prime}(x_{t})<0$ and
$f_{1}^{\prime}(x_{t-1})<0$, and
$|f_{1}^{\prime}(x_{1})|\leq|f_{1}^{\prime}(x_{2})|\leq\cdots\leq|f_{1}^{\prime}(x_{t-1})|.$
Therefore, we have
$\displaystyle x_{t+1}\geq$ $\displaystyle x_{t}-\eta\left(\beta
f_{1}^{\prime}(x_{t-1})+(1-\beta)f_{1}^{\prime}(x_{t})\right)$
$\displaystyle=$ $\displaystyle x_{t}-\frac{L_{0}}{L_{1}}\eta\left(\beta
e^{L_{1}x_{t-1}-1}-(1-\beta)e^{-L_{1}x_{t}-1}\right)$ $\displaystyle\geq$
$\displaystyle x_{t}-\frac{L_{0}}{L_{1}}\eta\left(\beta
e^{-\frac{L_{1}x_{t}}{8\log\frac{1}{\varepsilon}+4}-1}-(1-\beta)e^{-L_{1}x_{t}-1}\right).$
Furthermore, according to the definition of $x_{1}$, we have
$1-\beta\geq 2e^{-L_{1}(4\log\frac{1}{\varepsilon}+2)x_{1}}\geq
2e^{\frac{L_{1}x_{t}}{2}},$
which leads to
$\displaystyle x_{t+1}\geq x_{t}+\frac{L_{0}}{L_{1}}\eta
e^{-\frac{L_{1}x_{t}}{2}-1}\geq
x_{t}+\frac{(5+8\log\frac{1}{\varepsilon})x_{1}}{e^{L_{1}x_{1}}}e^{-\frac{L_{1}x_{t}}{2}}\geq
x_{t}+\frac{(5+8\log\frac{1}{\varepsilon})x_{1}}{e^{L_{1}x_{1}}}e^{L_{1}x_{t}(2+4\log\frac{1}{\varepsilon})}.$
Then, as $\frac{e^{\frac{L_{1}x}{2}}}{x}$ is monotonically increasing for
$x\in[\frac{2}{L_{1}},\infty)$, and $x_{1}\geq\frac{2}{L_{1}}$, we have
$x_{t+1}\geq
x_{t}+\frac{(5+8\log\frac{1}{\varepsilon})x_{1}}{e^{L_{1}x_{1}}}e^{L_{1}x_{t}(1+2\log\frac{1}{\varepsilon})}\geq
x_{t}-(5+8\log\frac{1}{\varepsilon})x_{t}\geq-(4+8\log\frac{1}{\varepsilon})x_{t}.$
The proof is completed. ∎
###### Lemma 7 (Convergence over $f_{2}$).
Assume that $\Delta_{1}\geq\frac{\varepsilon}{2}+\frac{L_{1}}{L_{0}}$, and let
$y_{1}\triangleq\frac{\Delta_{1}}{\varepsilon}+\frac{1}{2}$. Then, if
$\eta\leq\frac{(5+8\log\frac{1}{\varepsilon})(1+\log(\frac{1}{2}+\frac{L_{1}^{2}}{L_{0}}\Delta_{1}))}{L_{1}^{2}(\Delta_{1}+\frac{L_{0}}{2L_{1}^{2}})}$,
we have that GDM satisfies $\|\nabla f_{2}(y_{t})\|\geq\varepsilon$ for all
$t\leq T$ whenever
$T\leq\tilde{\Theta}(\frac{L_{1}^{2}\Delta_{1}^{2}+L_{0}\Delta_{1}}{\varepsilon^{2}})$.
###### Proof.
We have that $\bm{m}_{t}=\varepsilon$ before $y_{t}$ enters the region
$(-\infty,1]$. As the movement of each step before $y_{t}$ enters the region
$(-\infty,1]$ is $\eta\varepsilon$ and the total length to enter $(-\infty,1]$
is $y_{1}-1$, the proof is completed. ∎
###### Lemma 8 (Convergence over $f_{3}$).
Assume
$\Delta_{1}\geq\frac{L_{0}}{L_{1}^{2}}e+4e+\frac{L_{0}^{2}}{e^{2}L_{1}^{2}}$,
$L_{1}\geq 1$, $\varepsilon\leq\frac{1}{2}$, and let
$z_{1}=-\frac{1+\log(\frac{1}{2}+\frac{L_{1}^{2}}{L_{0}}\Delta_{1})}{L_{1}}$.
Then, we have $f_{3}(z_{1})-f_{3}^{*}=\Delta_{1}$, and if
$\eta\geq\frac{(5+8\log\frac{1}{\varepsilon})(1+\log(\frac{1}{2}+\frac{L_{1}^{2}}{L_{0}}\Delta_{1}))}{L_{1}^{2}(\Delta_{1}+\frac{L_{0}}{2L_{1}^{2}})}$
and $\beta\geq
1-2\left(\frac{L_{1}^{2}}{L_{0}}e\right)^{-4\log\frac{1}{\varepsilon}-2}(\Delta_{1}+\frac{L_{0}}{2L_{1}^{2}})^{-4\log\frac{1}{\varepsilon}-2}$,
we have that GDM satisfies that $\forall
t\in[1,\Theta(\frac{L_{1}^{2}\Delta_{1}^{2}}{\varepsilon^{3}}))$,
$|f_{3}^{\prime}(z_{t})|\geq\varepsilon$.
###### Proof.
To begin with, according to the definition of $z_{1}$, we have
$\eta\geq\frac{(5+8\log\frac{1}{\varepsilon})z_{1}}{f_{3}^{\prime}(z_{1})}$
and $1-\beta\leq
2e^{L_{1}(4\log\frac{1}{\varepsilon}+2)z_{1}}\leq\frac{1}{2}$. Also, as
$\Delta_{1}\geq\frac{L_{0}}{L_{1}^{2}}(e-\frac{1}{2})$, we have
$z_{1}\leq-\frac{2}{L_{1}}$, and thus
$f^{\prime}_{3}(z_{1})=-\frac{L_{0}}{L_{1}}e^{-L_{1}z_{1}-1}\leq-
L_{1}\left(\Delta_{1}+\frac{L_{0}}{2L_{1}^{2}}\right)\leq-4.$
We will first prove the following claim by induction: for
$k\in[2,\lfloor\frac{1}{1-\beta}\rfloor]$, we have $z_{k}\geq\frac{1}{L_{1}}$,
and $\bm{m}_{k}\leq\frac{\beta^{k-1}f_{3}^{\prime}(z_{1})}{2}.$
As for $k=2$, we have
$z_{2}=z_{1}-\eta
f_{3}^{\prime}(z_{1})\geq-\left(4+8\log\frac{1}{\varepsilon}\right)z_{1}.$
According to $\Delta_{1}\geq\frac{L_{0}}{L_{1}^{2}}(e-\frac{1}{2})$, we have
$z_{1}\leq-\frac{2}{L_{1}}$, and thus $z_{2}\geq\frac{1}{L_{1}}$. Since
$\bm{m}_{2}=\beta
f_{3}^{\prime}(z_{1})+(1-\beta)\varepsilon<\frac{f_{3}^{\prime}(z_{1})}{2}$, the
claim is proved for $k=2$.
Now assume that the claim has been proved for $k\leq t-1$. According to the
induction hypothesis, we have
$f_{3}^{\prime}(z_{2})=\cdots=f_{3}^{\prime}(z_{t-1})=\varepsilon,$
and thus
$\bm{m}_{t}=\beta^{t-1}f_{3}^{\prime}(z_{1})+(1-\beta^{t-1})\varepsilon\overset{(\star)}{\leq}\beta^{t-1}f_{3}^{\prime}(z_{1})-\frac{\beta^{t-1}f_{3}^{\prime}(z_{1})}{2}\leq\frac{\beta^{t-1}f_{3}^{\prime}(z_{1})}{2}.$
Here inequality $(\star)$ is due to
$\beta^{\lfloor\frac{1}{1-\beta}\rfloor}\geq\frac{1}{4}$ as
$\beta\geq\frac{1}{2}$. Therefore, as $z_{t}=z_{t-1}-\eta\bm{m}_{t}\geq
z_{t-1}\geq\frac{1}{L_{1}}$, we prove the claim.
It should be noticed that $\forall t\in[1,\lfloor\frac{1}{1-\beta}\rfloor]$,
$|f_{3}^{\prime}(z_{t})|\geq\varepsilon$. Furthermore, according to the
claim, $z_{\lfloor\frac{1}{1-\beta}\rfloor+1}$ can now be bounded as
$\displaystyle z_{\lfloor\frac{1}{1-\beta}\rfloor+1}=$ $\displaystyle
z_{1}-\eta\sum_{k=1}^{\lfloor\frac{1}{1-\beta}\rfloor}\bm{m}_{k}\geq\frac{\eta}{5+8\log\frac{1}{\varepsilon}}f_{3}^{\prime}(z_{1})-\eta\sum_{k=1}^{\lfloor\frac{1}{1-\beta}\rfloor}\frac{\beta^{k-1}f_{3}^{\prime}(z_{1})}{2}\geq\frac{\eta}{5+8\log\frac{1}{\varepsilon}}f_{3}^{\prime}(z_{1})-\eta\frac{1-\frac{1}{e}}{(1-\beta)}\frac{f_{3}^{\prime}(z_{1})}{2}$
$\displaystyle\geq$
$\displaystyle\frac{1}{L_{1}}-\eta\frac{1-\frac{1}{e}}{(1-\beta)}\frac{f_{3}^{\prime}(z_{1})}{4}\geq\frac{1}{L_{1}}-\eta\left(1-\frac{1}{e}\right)\frac{f_{3}^{\prime}(z_{1})}{8}\left(\frac{L_{1}^{2}}{L_{0}}e\right)^{4\log\frac{1}{\varepsilon}+2}\left(\Delta_{1}+\frac{L_{0}}{2L_{1}^{2}}\right)^{4\log\frac{1}{\varepsilon}+2}$
$\displaystyle\geq$
$\displaystyle\frac{1}{L_{1}}+\frac{\eta}{16}\frac{L_{1}^{2}\Delta_{1}^{2}+L_{0}\Delta_{1}}{\varepsilon^{2}}.$
As $f_{3}^{\prime}(z)=\varepsilon$ for all $z\geq\frac{1}{L_{1}}$, the
iterates need an additional
$\frac{\frac{\eta}{16}\frac{L_{1}^{2}\Delta_{1}^{2}}{\varepsilon^{2}}}{\eta\varepsilon}=\frac{1}{16}\frac{L_{1}^{2}\Delta_{1}^{2}}{\varepsilon^{3}}$
steps to make $f_{3}^{\prime}(z_{t})<\varepsilon$. The proof is completed. ∎
###### Theorem 8 (Theorem 2, restated).
Assume that $\Delta_{1}\geq
4\frac{L_{0}}{L_{1}}e+16e+4\frac{L_{0}^{2}}{e^{2}L_{1}^{2}}$, $L_{1}\geq 1$
and $\varepsilon\leq 1$, then there exists an objective function $f$ satisfying
$(L_{0},L_{1})$-smooth condition and $f(\bm{w}_{1})-f^{*}=\Delta_{1}$, such
that for any learning rate $\eta>0$ and $\beta\in[0,1]$, the minimum step $T$
of GDM to achieve final error $\varepsilon$ satisfies
$T=\tilde{\Omega}\left(\frac{L_{1}^{2}\Delta_{1}^{2}+L_{0}\Delta_{1}}{\varepsilon^{2}}\right).$
###### Proof.
Construct the objective function as
$f(x,y,z)=f_{1}(x)+f_{2}(y)+f_{3}(z)$. Then, let $x_{1}$, $y_{1}$, and
$z_{1}$ be chosen such that
$f_{1}(x_{1})-f_{1}^{*}=f_{2}(y_{1})-f_{2}^{*}=f_{3}(z_{1})-f_{3}^{*}=\frac{\Delta_{1}}{3}$
and $z_{1}\leq 0$. Then, every pair of learning rate and momentum coefficient
is covered by one of the above lemmas, and applying the corresponding lemma
gives the desired result.
The proof is completed. ∎
### B.3 Proof for Deterministic AdaGrad
To begin with, we recall the following result from Wang et al. (2023b):
###### Proposition 2.
For every learning rate $\eta\geq\Theta(\frac{1}{L_{1}})$ and $\Delta_{1}$,
there exists a lower-bounded objective function $g_{1}$ obeying Assumption 1
and a corresponding initialization point $\bm{w}_{1}$ with
$g_{1}(\bm{w}_{1})-g_{1}^{*}=\Delta_{1}$, such that AdaGrad with learning rate
$\eta$ and initialized at $\bm{w}_{1}$ diverges over $g_{1}$.
We then define $g_{2}$ as the $f_{2}$ in the proof of Theorem 2, i.e.,
$g_{2}(y)=\left\\{\begin{aligned}
&\varepsilon(y-1)+\frac{\varepsilon}{2}&,y\in[1,\infty),\\\
&\frac{\varepsilon}{2}y^{2}&,y\in[-1,1],\\\
&-\varepsilon(y+1)+\frac{\varepsilon}{2}&,y\in(-\infty,-1].\end{aligned}\right.$
(6)
We then have the following lemma characterizing the convergence of AdaGrad
over $g_{2}$.
###### Lemma 9 (Convergence over $g_{2}$).
Assume that $\Delta_{1}\geq\frac{\varepsilon}{2}+\frac{L_{1}}{L_{0}}$, and let
$y_{1}\triangleq\frac{\Delta_{1}}{\varepsilon}+\frac{1}{2}$. Then, if
$\eta\leq\Theta(\frac{1}{L_{1}})$, we have that AdaGrad satisfies
$\|\nabla g_{2}(y_{t})\|\geq\varepsilon$ for all $t\leq T$ whenever
$T\leq\tilde{\Theta}(\frac{L_{1}^{2}\Delta_{1}^{2}}{\varepsilon^{2}})$.
###### Proof.
We have that $\bm{g}_{t}=\varepsilon$ before $y_{t}$ enters the region
$(-\infty,1]$. Therefore, the total movement over the first $t$ steps before
$y_{t}$ enters the region $(-\infty,1]$ is
$\eta\sum_{s=1}^{t}\frac{\varepsilon}{\sqrt{s}\varepsilon}=\eta\Theta(\sqrt{t}).$
Solving $\eta\Theta(\sqrt{t})=\frac{\Delta_{1}}{\varepsilon}+\frac{1}{2}-1$
gives $t=\Theta(\frac{L_{1}^{2}\Delta_{1}^{2}}{\varepsilon^{2}})$, and the
proof is completed. ∎
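A minimal simulation of AdaGrad on $g_{2}$ (ours) confirms the
$\Theta(\sqrt{t})$ travel rate used above; the $(2\eta)^{2}$ constant in the
prediction comes from $\sum_{s\leq t}1/\sqrt{s}\approx 2\sqrt{t}$:

```python
import numpy as np

eps, eta, y1 = 0.1, 0.5, 50.0   # y1 ~ Delta_1 / eps + 1/2, deep in the slope
y, nu_sum, t = y1, 0.0, 0
while y > 1.0:                   # gradient is exactly eps on the slope
    g = eps
    nu_sum += g * g              # AdaGrad accumulates squared gradients
    y -= eta * g / np.sqrt(nu_sum)
    t += 1
# Each step moves eta / sqrt(t), so total progress is about 2 * eta * sqrt(t);
# leaving the slope therefore takes roughly ((y1 - 1) / (2 * eta))^2 steps.
print(f"steps: {t},  predicted: {((y1 - 1) / (2 * eta)) ** 2:.0f}")
```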
We then have the following lower bound for deterministic AdaGrad.
###### Theorem 9.
Assume that $\Delta_{1}\geq\frac{\varepsilon}{2}+\frac{L_{1}}{L_{0}}$. Then,
there exists an objective function $f$ satisfying the $(L_{0},L_{1})$-smooth
condition with $f(\bm{w}_{1})-f^{*}=\Delta_{1}$, such that for any learning
rate $\eta>0$, the minimum step $T$ of AdaGrad to achieve
final error $\varepsilon$ satisfies
$T=\Omega(\frac{L_{1}^{2}\Delta_{1}^{2}}{\varepsilon^{2}}).$
###### Proof.
The proof is completed by letting $f(x,y)=g_{1}(x)+g_{2}(y)$ following the
same routine as Theorem 8. ∎
## Appendix C Proof for stochastic algorithms
### C.1 Proof for Adam
To begin with, we restate the theorem as follows:
###### Theorem 10 (Theorem 3, restated).
Let Assumptions 1 and 2 hold. Then, $\forall 1>\beta_{1}\geq 0$ and $\lambda=0$,
if
$\varepsilon\leq\frac{1}{\operatorname{poly}(f(\bm{w}_{1})-f^{*},L_{0},L_{1},\sigma_{0},\sigma_{1})}$,
with
$\eta=\frac{\sqrt{f(\bm{w}_{1})-f^{*}}}{\sqrt{L_{0}+L_{1}}\sqrt{T\sigma_{0}\sigma_{1}^{2}}}$
and momentum hyperparameter
$\beta_{2}=1-\eta^{2}\left(\frac{1024\sigma_{1}^{2}(L_{1}+L_{0})(1-\beta_{1})}{\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})}\right)^{2}$,
we have that if
$T\geq\Theta\left(\frac{(L_{0}+L_{1})\sigma_{0}^{3}\sigma_{1}^{2}(f(\bm{w}_{1})-f^{*})}{\varepsilon^{4}}\right)$,
then Algorithm 1 satisfies
$\frac{1}{T}\mathbb{E}\sum_{t=1}^{T}\|\nabla f(\bm{w}_{t})\|\leq\varepsilon.$
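Algorithm 1 is not reproduced in this excerpt; the following sketch reconstructs the update rule implied by the proof below (gradient momentum $\bm{m}_{t}$, a norm-based scalar second-moment accumulator $\bm{\nu}_{t}$, no bias correction, and $\bm{\nu}_{0}=\sigma_{0}^{2}$); treat these details as assumptions rather than the authors' exact pseudocode:

```python
import numpy as np

def adam_sketch(grad, w1, eta, beta1, beta2, sigma0, T,
                rng=np.random.default_rng(0)):
    """Adam variant suggested by the proof: m_t is the usual gradient
    momentum, nu_t accumulates squared gradient *norms* (a scalar),
    there is no bias correction, and nu_0 = sigma0**2 (assumptions)."""
    w = np.asarray(w1, dtype=float)
    m = np.zeros_like(w)
    nu = sigma0 ** 2
    for _ in range(T):
        g = grad(w, rng)                       # stochastic gradient g_t
        m = beta1 * m + (1 - beta1) * g        # first-order momentum
        nu = beta2 * nu + (1 - beta2) * g @ g  # scalar second moment
        w = w - eta * m / np.sqrt(nu)          # descent step
    return w
```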
###### Proof.
Let the approximate iterative sequence be defined as
$\bm{u}_{t}\triangleq\frac{\bm{w}_{t}-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\bm{w}_{t-1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}$
and the surrogate second-order momentum be defined as
$\widetilde{\bm{\nu}}_{t}\triangleq\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}$.
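In code, the auxiliary iterates are a fixed linear combination of consecutive Adam iterates; a minimal helper (the convention for $\bm{w}_{0}$ is an assumption, e.g. $\bm{w}_{0}=\bm{w}_{1}$):

```python
def u_sequence(w, beta1, beta2):
    """Auxiliary iterates u_t = (w_t - r * w_{t-1}) / (1 - r), with
    r = beta1 / sqrt(beta2); w is the list [w_0, w_1, ..., w_T]."""
    r = beta1 / beta2 ** 0.5
    return [(w[t] - r * w[t - 1]) / (1 - r) for t in range(1, len(w))]
```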
Then, as
$\frac{\eta}{\sqrt{1-\beta_{2}}}=\frac{\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})}{1024\sigma_{1}^{2}(L_{1}+L_{0})(1-\beta_{1})}$,
we have
$\|\bm{u}_{t}-\bm{w}_{t}\|=\frac{\frac{\beta_{1}}{\sqrt{\beta_{2}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\|\bm{w}_{t}-\bm{w}_{t-1}\|\overset{(*)}{\leq}\eta\frac{\frac{\beta_{1}}{\sqrt{\beta_{2}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\leq\frac{1}{4L_{1}},$
and
$\|\bm{u}_{t+1}-\bm{w}_{t}\|=\frac{1}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\|\bm{w}_{t+1}-\bm{w}_{t}\|\overset{(*)}{\leq}\eta\frac{1}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\leq\frac{1}{4L_{1}}.$
Therefore, choosing $\bm{w}^{1}=\bm{w}_{t}$, $\bm{w}^{2}=\bm{u}_{t+1}$, and
$\bm{w}^{3}=\bm{u}_{t}$ in Lemma 2, we see that the conditions of Lemma 2 are
satisfied, which after taking expectation gives
$\displaystyle\mathbb{E}^{|{\mathcal{F}}_{t}}f(\bm{u}_{t+1})\leq
f(\bm{u}_{t})+\mathbb{E}^{|{\mathcal{F}}_{t}}\langle\nabla
f(\bm{w}_{t}),\bm{u}_{t+1}-\bm{u}_{t}\rangle+\frac{1}{2}(L_{0}+L_{1}\|\nabla
f(\bm{w}_{t})\|)\mathbb{E}^{|{\mathcal{F}}_{t}}(\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t}-\bm{w}_{t}\|)\|\bm{u}_{t+1}-\bm{u}_{t}\|.$
We call $\langle\nabla f(\bm{w}_{t}),\bm{u}_{t+1}-\bm{u}_{t}\rangle$ the
first-order term and $\frac{1}{2}(L_{0}+L_{1}\|\nabla
f(\bm{w}_{t})\|)(\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t}-\bm{w}_{t}\|)\|\bm{u}_{t+1}-\bm{u}_{t}\|$
the second-order term, as they correspond to the first- and second-order terms
of the Taylor expansion, respectively. We bound these two terms as follows.
Analysis for the first-order term. Recall that
$\widetilde{\bm{\nu}}_{t}\triangleq\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}$. We have
$\displaystyle\bm{u}_{t+1}-\bm{u}_{t}=$
$\displaystyle\frac{\bm{w}_{t+1}-\bm{w}_{t}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\frac{\bm{w}_{t}-\bm{w}_{t-1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}$
$\displaystyle=$
$\displaystyle-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\bm{\nu}_{t}}}\bm{m}_{t}+\beta_{1}\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}}\bm{m}_{t-1}$
$\displaystyle=$
$\displaystyle-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\bm{m}_{t}+\beta_{1}\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\bm{m}_{t-1}-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{1}{\sqrt{\bm{\nu}_{t}}}-\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\right)\bm{m}_{t}$
$\displaystyle+\beta_{1}\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}}-\frac{1}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\right)\bm{m}_{t-1}$
$\displaystyle=$
$\displaystyle-\eta\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\bm{g}_{t}-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{1}{\sqrt{\bm{\nu}_{t}}}-\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\right)\bm{m}_{t}+\beta_{1}\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}}-\frac{1}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\right)\bm{m}_{t-1}.$
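The chain of equalities above is pure algebra, relying only on $\bm{m}_{t}=\beta_{1}\bm{m}_{t-1}+(1-\beta_{1})\bm{g}_{t}$ and the definitions of $\bm{\nu}_{t}$ and $\widetilde{\bm{\nu}}_{t}$; a quick numerical sanity check of the final identity with arbitrary test values (a sketch, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
beta1, beta2, eta, sigma0, d = 0.9, 0.99, 1e-3, 0.5, 5
m_prev, g = rng.normal(size=d), rng.normal(size=d)
m = beta1 * m_prev + (1 - beta1) * g                 # momentum recursion
nu_prev = rng.uniform(1.0, 2.0)
nu = beta2 * nu_prev + (1 - beta2) * g @ g           # nu_t
nu_tilde = beta2 * nu_prev + (1 - beta2) * sigma0 ** 2  # surrogate nu_t
c = 1 - beta1 / np.sqrt(beta2)

lhs = (-eta / c * m / np.sqrt(nu)
       + beta1 * eta / c * m_prev / np.sqrt(beta2 * nu_prev))
rhs = (-eta * (1 - beta1) / c * g / np.sqrt(nu_tilde)
       - eta / c * (1 / np.sqrt(nu) - 1 / np.sqrt(nu_tilde)) * m
       + beta1 * eta / c * (1 / np.sqrt(beta2 * nu_prev)
                            - 1 / np.sqrt(nu_tilde)) * m_prev)
assert np.allclose(lhs, rhs)  # the decomposition holds exactly
```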
According to the above decomposition, the first-order term can be decomposed as
$\displaystyle\mathbb{E}^{|{\mathcal{F}}_{t}}\left[\left\langle\nabla
f(\bm{w}_{t}),\bm{u}_{t+1}-\bm{u}_{t}\right\rangle\right]$ $\displaystyle=$
$\displaystyle\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|{\mathcal{F}}_{t}}\left[\left\langle\bm{G}_{t},-\eta\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\bm{g}_{t}\right\rangle\right]+\mathbb{E}^{|{\mathcal{F}}_{t}}\left[\left\langle\bm{G}_{t},-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{1}{\sqrt{\bm{\nu}_{t}}}-\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\right)\bm{m}_{t}\right\rangle\right]$
$\displaystyle+\mathbb{E}^{|{\mathcal{F}}_{t}}\left[\left\langle\bm{G}_{t},\beta_{1}\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{1}{\sqrt{\beta_{2}\bm{\nu}_{t-1}}}-\frac{1}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\right)\bm{m}_{t-1}\right\rangle\right].$
(7)
As
$\mathbb{E}^{|{\mathcal{F}}_{t}}\left[\left\langle\bm{G}_{t},-\eta\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\bm{g}_{t}\right\rangle\right]=-\eta\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}$,
we have
$\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|{\mathcal{F}}_{t}}\left[\left\langle\bm{G}_{t},-\eta\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\bm{g}_{t}\right\rangle\right]\leq-\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}.$
We then bound the remaining two terms in Eq. (7). To begin with,
$\displaystyle\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\bm{G}_{t},-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{1}{\sqrt{\bm{\nu}_{t}}}-\frac{1}{\sqrt{\tilde{\bm{\nu}}_{t}}}\right)\bm{m}_{t}\right\rangle\right]$
$\displaystyle=$
$\displaystyle\mathbb{E}^{|\mathcal{F}_{t}}\left[\left\langle\bm{G}_{t},-\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{(1-\beta_{2})(\sigma_{0}^{2}-\|\bm{g}_{t}\|^{2})}{\sqrt{\bm{\nu}_{t}}\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\bm{m}_{t}\right\rangle\right]$
$\displaystyle\leq$
$\displaystyle\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{(1-\beta_{2})(\sigma_{0}^{2}+\|\bm{g}_{t}\|^{2})}{\sqrt{\bm{\nu}_{t}}\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\|\bm{m}_{t}\|\right]$
$\displaystyle=$
$\displaystyle{\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}}\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\|\bm{m}_{t}\|\right]}+{\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{(1-\beta_{2})\sigma_{0}^{2}}{\sqrt{\bm{\nu}_{t}}\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\|\bm{m}_{t}\|\right]}.$
(8)
The first term on the right-hand side of Eq. (8) can be bounded as
$\displaystyle\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}}\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\|\bm{m}_{t}\|\right]\overset{(*)}{\leq}\frac{\eta(1-\beta_{1})}{\left(\sqrt{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{3}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{\sqrt{1-\beta_{2}}\|\bm{g}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\right]$
$\displaystyle\overset{(\circ)}{\leq}$
$\displaystyle\frac{\eta(1-\beta_{1})}{\left(\sqrt{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{3}}\frac{\|\bm{G}_{t}\|}{\sqrt{\tilde{\bm{\nu}}_{t}}}\sqrt{\mathbb{E}^{|\mathcal{F}_{t}}\|\bm{g}_{t}\|^{2}}\sqrt{\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})^{2}}}\overset{(\bullet)}{\leq}\frac{\eta(1-\beta_{1})\sqrt{1-\beta_{2}}}{\left(\sqrt{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{3}}\frac{\|\bm{G}_{t}\|}{\sqrt{\tilde{\bm{\nu}}_{t}}}\sqrt{\sigma_{0}^{2}+\sigma_{1}^{2}\|\bm{G}_{t}\|^{2}}\sqrt{\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})^{2}}}$
$\displaystyle\leq$
$\displaystyle\frac{\eta(1-\beta_{1})\sqrt{1-\beta_{2}}}{\left(\sqrt{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{3}}\frac{\|\bm{G}_{t}\|}{\sqrt{\tilde{\bm{\nu}}_{t}}}(\sigma_{0}+\sigma_{1}\|\bm{G}_{t}\|)\sqrt{\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})^{2}}},$
where inequality $(*)$ uses Lemma 1, inequality $(\circ)$ is due to Hölder's
inequality, and inequality $(\bullet)$ is due to Assumption 2. Applying the
mean-value inequality respectively to
$\frac{\eta(1-\beta_{1})\sqrt{1-\beta_{2}}}{\left(\sqrt{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{3}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{G}_{t}\|}{\sqrt{\tilde{\bm{\nu}}_{t}}}\sigma_{0}\sqrt{\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})^{2}}}$
and
$\frac{\eta(1-\beta_{1})\sqrt{1-\beta_{2}}}{\left(\sqrt{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{3}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{G}_{t}\|}{\sqrt{\tilde{\bm{\nu}}_{t}}}\sigma_{1}\|\bm{G}_{t}\|\sqrt{\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})^{2}}}$
and using $\beta_{1}\leq\beta_{2}$, we obtain that the right-hand side of the
above inequality can be bounded by
$\displaystyle\frac{1}{16}\eta\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\sqrt{1-\beta_{2}}\sigma_{0}\frac{\|\bm{G}_{t}\|^{2}}{\tilde{\bm{\nu}}_{t}}+\frac{4\eta\sqrt{1-\beta_{2}}\sigma_{0}}{\left(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})^{2}}$
$\displaystyle+\frac{1}{16}\eta\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}+4\eta\frac{(1-\beta_{2})(1-\beta_{1})}{(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})^{2}}\sigma_{1}^{2}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})^{2}}$
$\displaystyle\leq$
$\displaystyle\frac{1}{8}\eta\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}+\frac{4\eta\sqrt{1-\beta_{2}}\sigma_{0}}{\left(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}\right)^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}+\frac{1}{8}\eta\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}+16\eta\frac{(1-\beta_{2})}{(1-\beta_{1})}\sigma_{1}^{2}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})^{2}}.$
(9)
Here the inequality is due to
$\widetilde{\bm{\nu}}_{t}=(1-\beta_{2})\sigma_{0}^{2}+\beta_{2}\bm{\nu}_{t-1}\geq(1-\beta_{2})\sigma_{0}^{2}$.
Meanwhile, we have
$\displaystyle\left(\frac{1}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}}-\frac{1}{\sqrt{\widetilde{\bm{\nu}}_{t+1}}}\right)\|\bm{G}_{t}\|^{2}$
$\displaystyle=$
$\displaystyle\frac{\|\bm{G}_{t}\|^{2}((1-\beta_{2})^{2}\sigma_{0}^{2}+\beta_{2}(1-\beta_{2})\|\bm{g}_{t}\|^{2})}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}\sqrt{\widetilde{\bm{\nu}}_{t+1}}(\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}+\sqrt{\widetilde{\bm{\nu}}_{t+1}})}\geq\frac{\|\bm{G}_{t}\|^{2}\beta_{2}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}\sqrt{\widetilde{\bm{\nu}}_{t+1}}(\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}+\sqrt{\widetilde{\bm{\nu}}_{t+1}})}$
$\displaystyle\geq$
$\displaystyle\frac{1}{4}\frac{\|\bm{G}_{t}\|^{2}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\widetilde{\bm{\nu}}_{t}})^{2}},$
where in the last inequality, we use $\sqrt{\beta_{2}}\geq\frac{1}{2}$.
Applying the above inequality back to Eq. (9), we obtain that
$\displaystyle\frac{\eta}{1-\beta_{1}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{(1-\beta_{2})\bm{g}_{t}^{2}}{\sqrt{\bm{\nu}_{t}}\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\|\bm{m}_{t}\|\right]$
$\displaystyle\leq$
$\displaystyle\frac{1}{4}\eta\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}+\frac{4\eta\sqrt{1-\beta_{2}}\sigma_{0}}{\left(1-\frac{\beta_{1}^{2}}{\beta_{2}}\right)^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}+\eta\frac{64}{(1-\beta_{1})}\sigma_{1}^{2}\mathbb{E}^{|\mathcal{F}_{t}}\left(\frac{1}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}}-\frac{1}{\sqrt{\widetilde{\bm{\nu}}_{t+1}}}\right)\|\bm{G}_{t}\|^{2}.$
(10)
Furthermore, due to Assumption 1, we have (we define $G_{0}\triangleq G_{1}$)
$\displaystyle\|\bm{G}_{t+1}\|^{2}\leq$
$\displaystyle\|\bm{G}_{t}\|^{2}+2\|\bm{G}_{t}\|\|\bm{G}_{t+1}-\bm{G}_{t}\|+\|\bm{G}_{t+1}-\bm{G}_{t}\|^{2}$
$\displaystyle\leq$
$\displaystyle\|\bm{G}_{t}\|^{2}+2(L_{0}+L_{1}\|\bm{G}_{t}\|)\|\bm{G}_{t}\|\|\bm{w}_{t+1}-\bm{w}_{t}\|+2(L_{0}^{2}+L_{1}^{2}\|\bm{G}_{t}\|^{2})\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2},$
which by
$\frac{\eta}{\sqrt{1-\beta_{2}}}=\frac{\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})^{2}}{1024\sigma_{1}^{2}(L_{1}+L_{0})(1-\beta_{1})}$
further leads to
$\displaystyle\frac{1}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t+1}}}\|\bm{G}_{t}\|^{2}$
$\displaystyle\geq$
$\displaystyle\frac{1}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t+1}}}\left(\|\bm{G}_{t+1}\|^{2}-2(L_{0}+L_{1}\|\bm{G}_{t}\|)\|\bm{G}_{t}\|\|\bm{w}_{t+1}-\bm{w}_{t}\|-2(L_{0}^{2}+L_{1}^{2}\|\bm{G}_{t}\|^{2})\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2}\right)$
$\displaystyle\geq$
$\displaystyle\left(\frac{1}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t+1}}}\|\bm{G}_{t+1}\|^{2}-\frac{2L_{0}}{\sigma_{0}}\frac{(1-\beta_{1})}{64\sigma_{1}^{2}}\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2}-\frac{3}{8}\frac{(1-\beta_{1})}{64\sigma_{1}^{2}}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\right).$
Applying the above inequality back to Eq. (10) leads to
$\displaystyle\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{(1-\beta_{2})\bm{g}_{t}^{2}}{\sqrt{\bm{\nu}_{t}}\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\|\bm{m}_{t}\|\right]$
$\displaystyle\leq$
$\displaystyle\frac{5}{8}\eta\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}+\frac{4\eta\sqrt{1-\beta_{2}}\sigma_{0}}{\left(1-{\beta_{1}}\right)^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}+\eta\frac{64}{(1-\beta_{1})}\sigma_{1}^{2}\mathbb{E}^{|\mathcal{F}_{t}}\left(\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}}-\frac{\|\bm{G}_{t+1}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t+1}}}\right)$
$\displaystyle+2\frac{L_{0}}{\sigma_{0}}\mathbb{E}^{|{\mathcal{F}}_{t}}\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2}.$
(11)
As for the second term on the right-hand side of Eq. (8), we have
$\displaystyle\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{(1-\beta_{2})\sigma_{0}^{2}}{\sqrt{\bm{\nu}_{t}}\sqrt{\tilde{\bm{\nu}}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\tilde{\bm{\nu}}_{t}})}\right)\|\bm{m}_{t}\|\right]$
$\displaystyle\leq$
$\displaystyle\frac{\eta}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\|\bm{G}_{t}\|\left(\frac{\sqrt[4]{1-\beta_{2}}\sqrt{\sigma_{0}}}{\sqrt[4]{\tilde{\bm{\nu}}_{t}}\sqrt{\bm{\nu}_{t}}}\right)\|\bm{m}_{t}\|\right]$
$\displaystyle\leq$
$\displaystyle\frac{1}{8}\eta\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}+\frac{8\eta\sqrt{1-\beta_{2}}\sigma_{0}}{(1-\beta_{1})^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\left(\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}}\right)\right].$
(12)
In the last inequality we again use $\beta_{2}\geq\beta_{1}$. With
inequalities (11) and (12), we conclude that the first-order term can be
bounded by
$\displaystyle\mathbb{E}^{|{\mathcal{F}}_{t}}\left[\left\langle\nabla
f(\bm{w}_{t}),\bm{u}_{t+1}-\bm{u}_{t}\right\rangle\right]\leq$
$\displaystyle-\frac{1}{4}\eta\mathbb{E}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}+\frac{4\eta\sqrt{1-\beta_{2}}\sigma_{0}}{\left(1-\beta_{1}\right)^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}+\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\mathbb{E}^{|\mathcal{F}_{t}}\left(\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}}-\frac{\|\bm{G}_{t+1}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t+1}}}\right)$
$\displaystyle+2\frac{L_{0}}{\sigma_{0}}\mathbb{E}^{|{\mathcal{F}}_{t}}\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2}+\frac{8\eta\sqrt{1-\beta_{2}}\sigma_{0}}{(1-\beta_{1})^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\left(\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}}\right)\right].$
(13)
Analysis for the second-order term. To recall, the second-order term is
$\frac{1}{2}(L_{0}+L_{1}\|\nabla
f(\bm{w}_{t})\|)(\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t}-\bm{w}_{t}\|)\|\bm{u}_{t+1}-\bm{u}_{t}\|$.
Before we start, we derive the following expansion for
$\bm{u}_{t+1}-\bm{u}_{t}$ (here all operations are coordinate-wise):
$\displaystyle\bm{u}_{t+1}-\bm{u}_{t}=$
$\displaystyle\frac{\bm{w}_{t+1}-\bm{w}_{t}-\frac{\beta_{1}}{\sqrt{\beta_{2}}}(\bm{w}_{t}-\bm{w}_{t-1})}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}$
$\displaystyle=$
$\displaystyle\frac{-\eta\frac{\bm{m}_{t}}{\sqrt{\bm{\nu}_{t}}}+\eta\frac{\beta_{1}}{\sqrt{\beta_{2}}}\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}=\frac{-\eta\frac{\bm{m}_{t}}{\sqrt{\bm{\nu}_{t}}}+\eta\beta_{1}\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t}}}-\eta\beta_{1}\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t}}}+\eta\frac{\beta_{1}}{\sqrt{\beta_{2}}}\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}$
$\displaystyle=$
$\displaystyle\frac{-\eta\frac{(1-\beta_{1})\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}+\eta\frac{\beta_{1}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}}}\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}\sqrt{\bm{\nu}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\beta_{2}\bm{\nu}_{t-1}})}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}$
(14)
Then firstly, we have
$\displaystyle\frac{1}{2}L_{0}(\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t}-\bm{w}_{t}\|)\|\bm{u}_{t+1}-\bm{u}_{t}\|$
$\displaystyle\leq$
$\displaystyle\frac{1}{2}L_{0}\left(\|\bm{u}_{t+1}-\bm{u}_{t}\|^{2}+\frac{1}{2}\|\bm{u}_{t+1}-\bm{w}_{t}\|^{2}+\frac{1}{2}\|\bm{u}_{t}-\bm{w}_{t}\|^{2}\right)$
$\displaystyle=$
$\displaystyle\frac{1}{2}L_{0}\left(\left\|\frac{-\eta\frac{(1-\beta_{1})\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}+\eta\frac{\beta_{1}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}}}\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}\sqrt{\bm{\nu}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\beta_{2}\bm{\nu}_{t-1}})}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right\|^{2}+\frac{1}{2}\left\|\frac{\frac{\beta_{1}}{\sqrt{\beta_{2}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}(\bm{w}_{t}-\bm{w}_{t-1})\right\|^{2}+\frac{1}{2}\left\|\frac{1}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}(\bm{w}_{t+1}-\bm{w}_{t})\right\|^{2}\right)$
$\displaystyle\leq$
$\displaystyle\frac{L_{0}\eta^{2}}{2}\left(\left(\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\beta_{1}(1-\beta_{1})}{(\sqrt{\beta_{2}}-\beta_{1})\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\right)^{2}\left\|\frac{\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|^{2}+\frac{1}{2}\left(\frac{\frac{\beta_{1}}{\sqrt{\beta_{2}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{2}\left\|\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}}\right\|^{2}+\frac{1}{2}\left(\frac{1}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{2}\left\|\frac{\bm{m}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|^{2}\right)$
$\displaystyle\overset{(\bullet)}{\leq}$
$\displaystyle\frac{L_{0}\eta^{2}}{2}\left(2\left(\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\beta_{1}(1-\beta_{1})}{(\sqrt{\beta_{2}}-\beta_{1})\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\right)^{2}\left\|\frac{\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|^{2}+\left(\frac{\frac{\beta_{1}}{\sqrt{\beta_{2}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{2}\left\|\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}}\right\|^{2}\right).$
Secondly, we have
$\displaystyle\frac{1}{2}L_{1}\|\nabla
f(\bm{w}_{t})\|(\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t}-\bm{w}_{t}\|)\|\bm{u}_{t+1}-\bm{u}_{t}\|$
$\displaystyle\leq$ $\displaystyle\frac{1}{2}L_{1}\|\nabla
f(\bm{w}_{t})\|(2\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t+1}-\bm{u}_{t}\|)\left(\frac{\left\|\eta\frac{(1-\beta_{1})\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\eta\frac{\beta_{1}(1-\beta_{2})\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}}}\frac{\|\bm{m}_{t-1}\|}{\sqrt{\bm{\nu}_{t-1}}\sqrt{\bm{\nu}_{t}}(\sqrt{\bm{\nu}_{t}}+\sqrt{\beta_{2}\bm{\nu}_{t-1}})}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)$
$\displaystyle\overset{(*)}{\leq}$ $\displaystyle\frac{1}{2}L_{1}\|\nabla
f(\bm{w}_{t})\|(2\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t+1}-\bm{u}_{t}\|)\left(\frac{\left\|\eta\frac{(1-\beta_{1})\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\eta\frac{\beta_{1}(1-\beta_{1})}{\sqrt{\beta_{2}}}\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}}{(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\right)$
$\displaystyle=$
$\displaystyle\frac{L_{1}}{2}\eta\left(\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\beta_{1}(1-\beta_{1})}{(\sqrt{\beta_{2}}-\beta_{1})\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\right)\|\nabla
f(\bm{w}_{t})\|(2\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t}-\bm{u}_{t+1}\|)\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}$
$\displaystyle\overset{(\circ)}{=}$
$\displaystyle\frac{L_{1}}{2}\eta\left(\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\beta_{1}(1-\beta_{1})}{(\sqrt{\beta_{2}}-\beta_{1})\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\right)\|\bm{G}_{t}\|\left(\|\bm{u}_{t+1}-\bm{u}_{t}\|+2\frac{1}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\eta\left\|\frac{\bm{m}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|\right)\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}.$
where inequality $(*)$ follows from
$\frac{\|\bm{m}_{t-1}\|}{\sqrt{\bm{\nu}_{t-1}}}\leq\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}$
and $\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}\leq\frac{1}{\sqrt{1-\beta_{2}}}$,
and equation $(\circ)$ follows from
$\bm{u}_{t}-\bm{w}_{t}=\frac{\frac{\beta_{1}}{\sqrt{\beta_{2}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}(\bm{w}_{t}-\bm{w}_{t-1})$
and
$\bm{u}_{t+1}-\bm{w}_{t}=\frac{1}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}(\bm{w}_{t+1}-\bm{w}_{t})$.
As for the term
$\|\bm{G}_{t}\|\frac{\|\bm{m}_{t}\|}{\sqrt{\bm{\nu}_{t}}}\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}$,
we first introduce an additional term into its denominator. Specifically, we have
$\displaystyle\|\bm{G}_{t}\|\frac{\|\bm{m}_{t}\|}{\sqrt{\bm{\nu}_{t}}}\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}=$
$\displaystyle\frac{\|\bm{G}_{t}\|\|\bm{m}_{t}\|\|\bm{g}_{t}\|}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}+\frac{\|\bm{G}_{t}\|\|\bm{m}_{t}\|\|\bm{g}_{t}\|(1-\beta_{2})\sigma_{0}^{2}}{(\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2})\bm{\nu}_{t}}$
$\displaystyle\leq$
$\displaystyle\frac{\|\bm{G}_{t}\|\|\bm{m}_{t}\|\|\bm{g}_{t}\|}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}+\frac{\|\bm{G}_{t}\|\|\bm{m}_{t}\|\sigma_{0}}{\sqrt{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}\sqrt{\bm{\nu}_{t}}}$
$\displaystyle\leq$
$\displaystyle\frac{\|\bm{G}_{t}\|\|\bm{m}_{t}\|\|\bm{g}_{t}\|}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}+\frac{1}{2}\frac{\|\bm{G}_{t}\|^{2}\sigma_{0}}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}+\frac{1}{2}\sigma_{0}\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}}$
$\displaystyle\leq$
$\displaystyle\frac{\|\bm{G}_{t}\|\|\bm{m}_{t}\|\|\bm{g}_{t}\|}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}+\frac{1}{2\sqrt{1-\beta_{2}}}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}}+\frac{1}{2}\sigma_{0}\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}}.$
We analyze the first term on the right-hand side of the above inequality more
carefully. Specifically, taking expectation, this term can be bounded as
$\displaystyle\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{G}_{t}\|\|\bm{m}_{t}\|\|\bm{g}_{t}\|}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}$
$\displaystyle\leq$
$\displaystyle\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{G}_{t}\|\|\bm{m}_{t}\|\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}$
$\displaystyle\leq$
$\displaystyle\frac{\|\bm{G}_{t}\|}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}\sqrt{\|\bm{g}_{t}\|^{2}}\sqrt{\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}}$
$\displaystyle\overset{(\star)}{\leq}$
$\displaystyle\frac{\|\bm{G}_{t}\|}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}\sqrt{\sigma_{1}^{2}\|\bm{G}_{t}\|^{2}+\sigma_{0}^{2}}\sqrt{\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}}$
$\displaystyle\leq$
$\displaystyle\frac{\|\bm{G}_{t}\|}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}(\sigma_{1}\|\bm{G}_{t}\|+\sigma_{0})\sqrt{\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}}$
$\displaystyle\leq$
$\displaystyle\frac{1-\beta_{1}}{\sqrt{1-\beta_{2}}\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\sigma_{1}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}+\frac{1}{2\sqrt{1-\beta_{2}}}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}+\frac{\sigma_{0}}{2}\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}},$
where inequality $(\star)$ is due to Hölder's inequality.
Meanwhile, due to Eq. (14), the term
$\|\bm{G}_{t}\|\|\bm{u}_{t+1}-\bm{u}_{t}\|\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}$
can be bounded as
$\displaystyle\|\bm{G}_{t}\|\|\bm{u}_{t+1}-\bm{u}_{t}\|\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}\leq\eta\left(\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\beta_{1}(1-\beta_{1})}{(\sqrt{\beta_{2}}-\beta_{1})\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\right)\|\bm{G}_{t}\|\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}.$
Then, following similar reasoning as above, we obtain
$\displaystyle\mathbb{E}^{|{\mathcal{F}}_{t}}\|\bm{G}_{t}\|\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}\frac{\|\bm{g}_{t}\|}{\sqrt{\bm{\nu}_{t}}}$
$\displaystyle\leq$
$\displaystyle\frac{1}{\sqrt{1-\beta_{2}}}\sigma_{1}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}+\frac{1}{2\sqrt{1-\beta_{2}}}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}+\sigma_{0}\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}$
$\displaystyle+\frac{1}{2\sqrt{1-\beta_{2}}}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\bm{\nu}_{t}+(1-\beta_{2})\sigma_{0}^{2}}}+\frac{1}{2}\sigma_{0}\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}.$
Putting all the estimates together, we have that the second-order term can
be bounded by
$\displaystyle\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{1}{2}(L_{0}+L_{1}\|\nabla
f(\bm{w}_{t})\|)(\|\bm{u}_{t+1}-\bm{w}_{t}\|+\|\bm{u}_{t}-\bm{w}_{t}\|)\|\bm{u}_{t+1}-\bm{u}_{t}\|$
$\displaystyle\leq$
$\displaystyle\frac{L_{1}\eta^{2}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\left(\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\beta_{1}(1-\beta_{1})}{(\sqrt{\beta_{2}}-\beta_{1})\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\right)\left(\frac{2}{\sqrt{1-\beta_{2}}}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}+\frac{\sigma_{0}}{2}\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}\right)$
$\displaystyle+\frac{L_{0}\eta^{2}}{2}\left(2\left(\frac{1-\beta_{1}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}+\frac{\beta_{1}(1-\beta_{1})}{(\sqrt{\beta_{2}}-\beta_{1})\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}}\right)^{2}\mathbb{E}^{|{\mathcal{F}}_{t}}\left\|\frac{\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|^{2}+\left(\frac{\frac{\beta_{1}}{\sqrt{\beta_{2}}}}{1-\frac{\beta_{1}}{\sqrt{\beta_{2}}}}\right)^{2}\left\|\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}}\right\|^{2}\right)$
$\displaystyle\leq$ $\displaystyle
4\frac{L_{1}\eta^{2}}{1-\beta_{1}}\left(1+\frac{1}{\sqrt{1-\beta_{1}}}\right)\left(\frac{2}{\sqrt{1-\beta_{2}}}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}+\frac{\sigma_{0}}{2}\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}\right)$
$\displaystyle+2L_{0}\eta^{2}\left(2\left(1+\frac{1}{\sqrt{1-\beta_{1}}}\right)^{2}\mathbb{E}^{|{\mathcal{F}}_{t}}\left\|\frac{\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|^{2}+\left(\frac{1}{1-\beta_{1}}\right)^{2}\left\|\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}}\right\|^{2}\right)$
$\displaystyle\leq$
$\displaystyle\frac{1}{8}\eta\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}+4\frac{L_{1}\eta^{2}\sigma_{0}}{(1-\beta_{1})^{\frac{3}{2}}}\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}+2L_{0}\eta^{2}\left(8\frac{1}{1-\beta_{1}}\mathbb{E}^{|{\mathcal{F}}_{t}}\left\|\frac{\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|^{2}+\left(\frac{1}{1-\beta_{1}}\right)^{2}\left\|\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}}\right\|^{2}\right).$
(15)
Here in the second inequality we use $\beta_{2}\geq\beta_{1}$, and in the last
inequality we use
$\frac{\eta}{\sqrt{1-\beta_{2}}}=\frac{\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})^{2}}{1024\sigma_{1}^{2}(L_{1}+L_{0})(1-\beta_{1})}$.
Applying the estimates of the first-order term (Eq. (13)) and the second-
order term (Eq. (15)) back into the descent lemma, we derive that
$\displaystyle\mathbb{E}^{|{\mathcal{F}}_{t}}f(\bm{u}_{t+1})\leq$
$\displaystyle
f(\bm{u}_{t})-\frac{1}{8}\eta\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\tilde{\bm{\nu}}_{t}}}+\frac{4\eta\sqrt{1-\beta_{2}}\sigma_{0}}{\left(1-\beta_{1}\right)^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}+\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\mathbb{E}^{|\mathcal{F}_{t}}\left(\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{t}}}-\frac{\|\bm{G}_{t+1}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t+1}}}\right)$
$\displaystyle+2\frac{L_{0}}{\sigma_{0}}\mathbb{E}^{|{\mathcal{F}}_{t}}\|\bm{w}_{t+1}-\bm{w}_{t}\|^{2}+\frac{8\eta\sqrt{1-\beta_{2}}\sigma_{0}}{(1-\beta_{1})^{2}}\mathbb{E}^{|\mathcal{F}_{t}}\left[\left(\frac{\|\bm{m}_{t}\|^{2}}{\bm{\nu}_{t}}\right)\right]$
$\displaystyle+4\frac{L_{1}\eta^{2}\sigma_{0}}{(1-\beta_{1})^{\frac{3}{2}}}\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}+2L_{0}\eta^{2}\left(8\frac{1}{1-\beta_{1}}\mathbb{E}^{|{\mathcal{F}}_{t}}\left\|\frac{\bm{g}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|^{2}+\left(\frac{1}{1-\beta_{1}}\right)^{2}\left\|\frac{\bm{m}_{t-1}}{\sqrt{\bm{\nu}_{t-1}}}\right\|^{2}\right).$
Taking expectation of the above inequality and summing over $t\in[1,T]$
then gives
$\displaystyle\frac{1}{8}\eta\sum_{t=1}^{T}\mathbb{E}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\leq$
$\displaystyle
f(\bm{u}_{1})-f^{*}+\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\frac{\|\bm{G}_{1}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{1}}}+\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\left(\frac{1}{\sqrt{\beta_{2}}}-1\right)\sum_{t=1}^{T}\mathbb{E}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}$
$\displaystyle+\left(\frac{4\eta\sqrt{1-\beta_{2}}\sigma_{0}}{\left(1-\beta_{1}\right)^{2}}+4\frac{L_{1}\eta^{2}\sigma_{0}}{(1-\beta_{1})^{\frac{3}{2}}}+\frac{16L_{0}\eta^{2}}{1-\beta_{1}}\right)\sum_{t=1}^{T}\mathbb{E}\frac{\|\bm{g}_{t}\|^{2}}{\bm{\nu}_{t}}$
$\displaystyle+\left(2\frac{L_{0}}{\sigma_{0}}\eta^{2}+\frac{8\eta\sqrt{1-\beta_{2}}\sigma_{0}}{(1-\beta_{1})^{2}}+\frac{2L_{0}\eta^{2}}{(1-\beta_{1})^{2}}\right)\sum_{t=1}^{T}\mathbb{E}\left\|\frac{\bm{m}_{t}}{\sqrt{\bm{\nu}_{t}}}\right\|^{2}.$
Since $\beta_{2}\geq\frac{1}{2}$ and
$1-\beta_{2}\leq\frac{1-\beta_{1}}{1024\sigma_{1}^{2}}$, we have
$\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\left(\frac{1}{\sqrt{\beta_{2}}}-1\right)\sum_{t=1}^{T}\mathbb{E}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\leq\frac{1}{16}\eta\sum_{t=1}^{T}\mathbb{E}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}.$
By further applying Lemma 3 and $\beta_{2}\geq\beta_{1}$, we obtain
$\displaystyle\frac{1}{16}\eta\sum_{t=1}^{T}\mathbb{E}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}$
$\displaystyle\leq$ $\displaystyle
f(\bm{u}_{1})-f^{*}+\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\frac{\|\bm{G}_{1}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{1}}}$
(16)
$\displaystyle+\frac{1}{1-\beta_{2}}\left(\frac{36\eta\sqrt{1-\beta_{2}}\sigma_{0}}{\left(1-\beta_{1}\right)^{2}}+4\frac{L_{1}\eta^{2}\sigma_{0}}{(1-\beta_{1})^{\frac{3}{2}}}+\frac{24L_{0}\eta^{2}}{1-\beta_{1}}+8\frac{L_{0}}{\sigma_{0}}\eta^{2}\right)\left(\mathbb{E}\ln\bm{\nu}_{T}-T\ln\beta_{2}\right)$
$\displaystyle\leq$ $\displaystyle
f(\bm{w}_{1})-f^{*}+\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\frac{\|\bm{G}_{1}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{1}}}$
$\displaystyle+\frac{1}{1-\beta_{2}}\left(\frac{147456\eta^{2}(L_{0}+L_{1})\sigma_{1}^{2}\sigma_{0}}{\left(1-\beta_{1}\right)^{\frac{5}{2}}}+4\frac{L_{1}\eta^{2}\sigma_{0}}{(1-\beta_{1})^{\frac{3}{2}}}+\frac{24L_{0}\eta^{2}}{1-\beta_{1}}+8\frac{L_{0}}{\sigma_{0}}\eta^{2}\right)\left(\mathbb{E}\ln\bm{\nu}_{T}-T\ln\beta_{2}\right).$
(17)
Here, in the last inequality, we apply
$\frac{\eta}{\sqrt{1-\beta_{2}}}=\frac{\sqrt{1-\frac{\beta_{1}^{2}}{\beta_{2}}}(1-\frac{\beta_{1}}{\sqrt{\beta_{2}}})^{2}}{1024\sigma_{1}^{2}(L_{1}+L_{0})(1-\beta_{1})}$.
Below we convert the above bound into a bound on
$\sum_{t=1}^{T}\mathbb{E}\|\bm{G}_{t}\|$ via two rounds of divide-and-conquer. In the
first round, we bound $\mathbb{E}\ln\bm{\nu}_{T}$. To start with, we have
$\displaystyle\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\mathds{1}_{\|G_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}\geq\frac{\frac{1}{2\sigma_{1}^{2}}\mathbb{E}^{|{\mathcal{F}}_{t}}\|\bm{g}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\mathds{1}_{\|G_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}$
$\displaystyle=$
$\displaystyle\frac{\frac{1}{2\sigma_{1}^{2}}\mathbb{E}^{|{\mathcal{F}}_{t}}\|\bm{g}_{t}\|^{2}}{\sqrt{\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}}}\mathds{1}_{\|G_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}$
$\displaystyle\geq$
$\displaystyle\frac{1}{2\sigma_{1}^{2}}\mathbb{E}^{|{\mathcal{F}}_{t}}\frac{\beta_{2}^{T-t}\|\bm{g}_{t}\|^{2}}{\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}}\mathds{1}_{\|G_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}},$
where the last inequality is due to the fact that
$\displaystyle\beta_{2}\bm{\nu}_{t-1}+(1-\beta_{2})\sigma_{0}^{2}\leq\beta_{2}^{t-T}\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}\leq(\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2})\beta_{2}^{2(t-T)}.$
(18)
Furthermore, we have
$\displaystyle\frac{\sigma_{0}^{2}+\frac{\beta_{2}^{T}\bm{\nu}_{0}}{1-\beta_{2}}}{\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}}+\sum_{t=1}^{T}\mathbb{E}\frac{\beta_{2}^{T-t}\|\bm{g}_{t}\|^{2}}{\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}}\mathds{1}_{\|\bm{G}_{t}\|<\frac{\sigma_{0}}{\sigma_{1}}}$
$\displaystyle\leq$
$\displaystyle\frac{\sigma_{0}^{2}+\frac{\beta_{2}^{T}\bm{\nu}_{0}}{1-\beta_{2}}}{\sqrt{\bm{\nu}_{0}\beta_{2}^{T}+\sum_{s=1}^{T}\beta_{2}^{T-s}\|g_{s}\|^{2}\mathds{1}_{\|\bm{G}_{s}\|<\frac{\sigma_{0}}{\sigma_{1}}}+(1-\beta_{2})\sigma_{0}^{2}}}+\sum_{t=1}^{T}\mathbb{E}\frac{\beta_{2}^{T-s}\|\bm{g}_{t}\|^{2}}{\sqrt{\bm{\nu}_{0}\beta_{2}^{T}+\sum_{s=1}^{T}\beta_{2}^{T-s}\|g_{s}\|^{2}\mathds{1}_{\|\bm{G}_{s}\|<\frac{\sigma_{0}}{\sigma_{1}}}+(1-\beta_{2})\sigma_{0}^{2}}}\mathds{1}_{\|\bm{G}_{t}\|<\frac{\sigma_{0}}{\sigma_{1}}}$
$\displaystyle=$
$\displaystyle\frac{1}{1-\beta_{2}}\mathbb{E}\sqrt{\bm{\nu}_{0}\beta_{2}^{T}+\sum_{s=1}^{T}\beta_{2}^{T-s}\|g_{s}\|^{2}\mathds{1}_{\|\bm{G}_{s}\|<\frac{\sigma_{0}}{\sigma_{1}}}+(1-\beta_{2})\sigma_{0}^{2}}\leq\frac{1}{1-\beta_{2}}\sqrt{\beta_{2}^{T}\bm{\nu}_{0}+2\sigma_{0}^{2}}.$
(19)
Combining the above, we obtain
$\displaystyle\mathbb{E}\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}$
$\displaystyle=$
$\displaystyle(1-\beta_{2})\left(\frac{\sigma_{0}^{2}+\frac{\beta_{2}^{T}\bm{\nu}_{0}}{1-\beta_{2}}}{\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}}+\sum_{t=1}^{T}\mathbb{E}\frac{\beta_{2}^{T-t}\|\bm{g}_{t}\|^{2}}{\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}}\mathds{1}_{\|\bm{G}_{t}\|<\frac{\sigma_{0}}{\sigma_{1}}}\right.$
$\displaystyle+\left.\sum_{t=1}^{T}\mathbb{E}\frac{\beta_{2}^{T-t}\|\bm{g}_{t}\|^{2}}{\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}}\mathds{1}_{\|\bm{G}_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}\right)$
$\displaystyle\leq$
$\displaystyle\sqrt{\beta_{2}^{T}\bm{\nu}_{0}+2\sigma_{0}^{2}}+2(1-\beta_{2})\sigma_{1}^{2}\mathbb{E}\sum_{t=1}^{T}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\mathds{1}_{\|\bm{G}_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}$
$\displaystyle\leq$
$\displaystyle\sqrt{\beta_{2}^{T}\bm{\nu}_{0}+2\sigma_{0}^{2}}+2(1-\beta_{2})\sigma_{1}^{2}\mathbb{E}\sum_{t=1}^{T}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}.$
Substituting the bound of Eq. (17) for
$\mathbb{E}\sum_{t=1}^{T}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}$,
we obtain
$\displaystyle\mathbb{E}\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}$
$\displaystyle\leq$
$\displaystyle\sqrt{\beta_{2}^{T}\bm{\nu}_{0}+2\sigma_{0}^{2}}+\frac{2(1-\beta_{2})\sigma_{1}^{2}}{\eta}\eta\mathbb{E}\sum_{t=1}^{T}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}$
$\displaystyle\leq$
$\displaystyle\sqrt{\beta_{2}^{T}\bm{\nu}_{0}+2\sigma_{0}^{2}}+\frac{2(1-\beta_{2})\sigma_{1}^{2}}{\eta}\left(f(\bm{w}_{1})-f^{*}+\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\frac{\|\bm{G}_{1}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{1}}}\right.$
$\displaystyle+\left.\frac{1}{1-\beta_{2}}\left(\frac{147456\eta^{2}(L_{0}+L_{1})\sigma_{1}^{2}\sigma_{0}}{\left(1-\beta_{1}\right)^{\frac{5}{2}}}+4\frac{L_{1}\eta^{2}\sigma_{0}}{(1-\beta_{1})^{\frac{3}{2}}}+\frac{24L_{0}\eta^{2}}{1-\beta_{1}}+8\frac{L_{0}}{\sigma_{0}}\eta^{2}\right)\left(\mathbb{E}\ln\bm{\nu}_{T}-T\ln\beta_{2}\right)\right)$
$\displaystyle\leq$
$\displaystyle\sqrt{\beta_{2}^{T}\bm{\nu}_{0}+2\sigma_{0}^{2}}+\sigma_{0}+\frac{1}{4}\mathbb{E}\ln\bm{\nu}_{T}$
$\displaystyle\leq$
$\displaystyle\sqrt{\beta_{2}^{T}\bm{\nu}_{0}+2\sigma_{0}^{2}}+\sigma_{0}+\frac{1}{2}\mathbb{E}\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}.$
where the third inequality is due to
$\displaystyle T\geq$
$\displaystyle\frac{36\cdot 2048^{4}(L_{0}+L_{1})^{3}\sigma_{1}^{12}(f(\bm{w}_{1})-f^{*})}{(1-\beta_{1})^{6}\sigma_{0}^{2}}+\frac{768\cdot 2048^{2}(f(\bm{w}_{1})-f^{*})\sigma_{1}^{8}(8L_{1}^{2}(f(\bm{w}_{1})-f^{*})^{2}+4L_{0}(f(\bm{w}_{1})-f^{*}))}{(1-\beta_{1})^{4}\sigma_{0}^{2}}$
$\displaystyle+\frac{24^{2}\cdot 147456(L_{0}+L_{1})\sigma_{1}^{8}(f(\bm{w}_{1})-f^{*})\sigma_{0}^{2}}{(1-\beta_{2})^{5}}+\frac{128^{2}(L_{0}+L_{1})(f(\bm{w}_{1})-f^{*})\sigma_{1}^{4}}{\sigma_{0}^{2}}$
$\displaystyle+\frac{24^{2}\cdot 147456\cdot 2048^{2}(L_{0}+L_{1})^{3}\sigma_{1}^{16}(f(\bm{w}_{1})-f^{*})^{3}}{(1-\beta_{2})^{11}}+\frac{128^{2}\cdot 2048^{2}(L_{0}+L_{1})^{3}(f(\bm{w}_{1})-f^{*})^{3}\sigma_{1}^{12}}{\sigma_{0}^{4}(1-\beta_{1})^{6}},$
and the last inequality is due to $\ln x\leq x$. Solving the above inequality
with respect to $\mathbb{E}\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}$
and applying $\bm{\nu}_{0}=\sigma_{0}^{2}$ then gives
$\displaystyle\mathbb{E}\sqrt{\bm{\nu}_{T}}\leq\mathbb{E}\sqrt{\bm{\nu}_{T}+(1-\beta_{2})\sigma_{0}^{2}}\leq$
$\displaystyle 6\sigma_{0}.$ (20)
Therefore, Eq. (17) can be rewritten as
$\displaystyle\frac{1}{16}\eta\sum_{t=1}^{T}\mathbb{E}\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}$
$\displaystyle\leq$ $\displaystyle
f(\bm{w}_{1})-f^{*}+\eta\frac{64}{(1-{\beta_{1}})}\sigma_{1}^{2}\frac{\|\bm{G}_{1}\|^{2}}{\sqrt{\beta_{2}\widetilde{\bm{\nu}}_{1}}}$
$\displaystyle+\frac{1}{1-\beta_{2}}\left(\frac{147456\eta^{2}(L_{0}+L_{1})\sigma_{1}^{2}\sigma_{0}}{\left(1-\beta_{1}\right)^{\frac{5}{2}}}+4\frac{L_{1}\eta^{2}\sigma_{0}}{(1-\beta_{1})^{\frac{3}{2}}}+\frac{24L_{0}\eta^{2}}{1-\beta_{1}}+8\frac{L_{0}}{\sigma_{0}}\eta^{2}\right)\left(2\ln(6\sigma_{0})-T\ln\beta_{2}\right).$ (21)
We then execute the second round of divide-and-conquer. To begin with, we have
that
$\sum_{t=1}^{T}\mathbb{E}\left[\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\mathds{1}_{\|G_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}\right]\leq\sum_{t=1}^{T}\mathbb{E}\left[\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\right].$
(22)
On the other hand, we have that
$\displaystyle\frac{\|\bm{G}_{t}\|^{2}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\mathds{1}_{\|G_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}\geq\frac{\frac{2}{3}\|\bm{G}_{t}\|^{2}+\frac{1}{3}\frac{\sigma^{2}_{0}}{\sigma_{1}^{2}}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\mathds{1}_{\|G_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}\geq\frac{\frac{\beta_{2}}{3\sigma_{1}^{2}}\mathbb{E}^{|{\mathcal{F}}_{t}}\|\bm{g}_{t}\|^{2}+\frac{1-\beta_{2}}{3}\frac{\sigma^{2}_{0}}{\sigma_{1}^{2}}}{\sqrt{\widetilde{\bm{\nu}}_{t}}}\mathds{1}_{\|G_{t}\|\geq\frac{\sigma_{0}}{\sigma_{1}}}$
$\displaystyle=$
# Long-term Dynamical Evolution of Pallene (Saturn XXXIII) and its Diffuse,
Dusty Ring
Marco A. Muñoz-Gutiérrez,1 A. P. Granados Contreras,1 Gustavo Madeira,2,3
Joseph A. A’Hearn,4 and Silvia Giuliatti Winter2
1Institute of Astronomy and Astrophysics, Academia Sinica, 11F of AS/NTU
Astronomy-Mathematics Building, No.1, Sec. 4,
Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.
2Grupo de Dinâmica Orbital e Planetologia, São Paulo State University (UNESP),
333 Av. Dr. Ariberto Pereira da Cunha, Guaratinguetá-SP, 12516-410, Brazil
3Université de Paris, Institut de Physique du Globe de Paris, CNRS, F-75005
Paris, France
4Department of Physics, University of Idaho, 875 Perimeter Drive, Moscow,
Idaho 83844, USA. E-mail: <EMAIL_ADDRESS> (MAM)
(Accepted XXX. Received YYY; in original form ZZZ)
###### Abstract
The distinctive set of Saturnian small satellites, Aegaeon, Methone, Anthe,
and Pallene, constitutes an excellent laboratory to understand the evolution
of systems immersed in co-orbital dusty rings/arcs, subjected to perturbations
from larger satellites and non-gravitational forces. In this work, we carried
out a comprehensive numerical exploration of the long-term evolution of
Pallene and its ring. Through frequency map analysis, we characterised the
current dynamical state around Pallene. A simple tidal evolution model serves
to set a time frame for the current orbital configuration of the system. With
detailed short and long-term N-body simulations we determine whether Pallene
is currently in resonance with one or more of six of Saturn’s major moons. We
analysed a myriad of resonant arguments extracted from the direct and indirect
parts of the disturbing function, finding that Pallene is not in mean motion
resonance from the present up to 5 Myr into the future; nonetheless, some
resonant arguments exhibit intervals of libration and circulation at different
timescales and moon pairings. We studied the dynamical evolution of
micrometric particles forming the ring, considering gravitational and non-
gravitational forces. Non-gravitational forces are responsible for particles
vertical excursions and outward migration. By estimating the satellite’s mass
production rate, we find that Pallene could be responsible for keeping its
ring in steady-state only if it is mainly composed of large micrometre-sized
particles. If mainly composed of particles with a few micrometres for which
Pallene is the only source, the ring will spread out, both radially and
vertically, until it finally disappears.
###### keywords:
planets and satellites: individual: Pallene – methods: numerical – planets and
satellites: dynamical evolution and stability – planets and satellites: rings
## 1 Introduction
Pallene (Saturn XXXIII) is a satellite of only 2.23 km in radius (Thomas et
al., 2013), orbiting Saturn at an average distance of $\sim 212\,283$ km, with
an eccentricity of $\sim 0.004$, and a relatively large inclination of $\sim
0.18^{\circ}$ (Spitale et al., 2006; Jacobson et al., 2008).
This small Saturnian moon was first observed in a single photograph of the
_Voyager 2_ spacecraft. It was reported, together with a preliminary orbital
and physical characterisation, by Synnott (1986). Pallene was then
rediscovered in 2005 by the _Cassini_ Imaging Science team (Porco et al.,
2005) and positively identified as the S/1981 S14 object from _Voyager 2_.
Pallene is one of three small moons located between the orbits of Mimas and
Enceladus, collectively called the Alkyonides. Despite the presence of a
vast number of resonances in the region, accentuated by the commensurabilities
of Mimas with Tethys and Enceladus with Dione (see e.g. Sinclair, 1972;
Greenberg, 1973; Peale, 1976, 1999), Pallene does not seem to be in a mean
motion resonance (MMR), unlike Methone and Anthe, which are trapped in the 14:15
and 10:11 corotation eccentricity resonances with Mimas, respectively (Cooper et al.,
2008; Hedman et al., 2009; El Moutamid et al., 2014). Nonetheless, Pallene is
migrating away from Saturn via tidal evolution, though at a slower rate than
Mimas. Thus the orbits of Pallene and Mimas are converging, which at some
point, either in the past or in the future, should result (or should already
have resulted) in Pallene being captured into resonance with Mimas. A simple tidal
evolution model (Murray & Dermott, 1999) suggests that the most recent first-
order resonance that Pallene might have escaped from, perhaps around 40 Myr
ago, is the 4:5 resonance with Mimas. Pallene’s current eccentricity or
inclination could be signs of this or another past resonance.
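The nominal location of a $p$:$(p+1)$ resonance follows directly from Kepler's third law, $a_{\mathrm{res}}=a_{\mathrm{Mim}}[(p+1)/p]^{2/3}$. A short sketch locating the first-order commensurabilities near Pallene (the Mimas semi-major axis used below is a commonly quoted nominal value, not taken from this paper):

```python
a_mimas = 185539.0    # km, nominal semi-major axis of Mimas (assumed)
a_pallene = 212283.0  # km, Pallene's average distance quoted above

# Nominal exterior first-order p:(p+1) resonances with Mimas
for p in range(3, 8):
    a_res = a_mimas * ((p + 1) / p) ** (2.0 / 3.0)
    print(f"{p}:{p + 1} resonance at {a_res:8.0f} km; "
          f"Pallene at {a_pallene:.0f} km")
```

With these numbers Pallene lies between the nominal 5:6 and 4:5 locations, consistent with the 4:5 commensurability being the one most recently crossed.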
After Pallene’s rediscovery, Spitale et al. (2006) determined the orbital
parameters of Pallene with high accuracy using images from _Cassini_ and
_Voyager 2_. Spitale et al. suggested that Pallene could be in an inner third-
order resonance with Enceladus (i.e.,
$\phi=19\lambda_{\mathrm{Enc}}-16\lambda_{\mathrm{Pal}}-\varpi_{\mathrm{Pal}}-2\Omega_{\mathrm{Pal}}$).
However, recent short-term numerical integrations have shown that the
corresponding resonant argument actually circulates (e.g. Fig. 1 in Muñoz-
Gutiérrez & Giuliatti Winter, 2017).
Furthermore, synchronous periodic eccentricity and inclination oscillations
were found while exploring a longer-term dynamical evolution of Pallene (of up
to $10^{5}$ yr, Callegari & Yokoyama, 2010; Muñoz-Gutiérrez & Giuliatti
Winter, 2017), which could be the result of a resonant perturbation produced
by either Mimas or Enceladus. Moreover, Callegari & Yokoyama (2010) identify
a possible argument for the proposed quasi-resonance involving the apsidal and
nodal longitudes of Pallene and Mimas, given by
$\phi=\varpi_{\mathrm{Pal}}-\varpi_{\mathrm{Mim}}+\Omega_{\mathrm{Pal}}-\Omega_{\mathrm{Mim}}$.
Nonetheless, Muñoz-Gutiérrez & Giuliatti Winter (2017) found that this
argument also circulates with a period of $\sim 4762.2$ yr, though,
interestingly, with the same period as the observed oscillations of Pallene’s
eccentricity and inclination.
Pallene shares its orbit with a diffuse ring of micrometre-sized dust, first
reported by Hedman et al. (2009). The constant resupply of ring material is
expected to come from impact debris, expelled from the satellite’s surface by
collisions between interplanetary dust particles (IDPs) and the moon. A
similar mechanism has been proposed and explored in order to explain the
existence of Aegaeon’s ring arc inside the G ring (Hedman et al., 2010;
Madeira et al., 2018), the ring arcs of Methone and Anthe (Sun et al., 2017;
Madeira & Giuliatti Winter, 2020), as well as the Neptunian rings and arcs
(Gaslac Gallardo et al., 2020; Giuliatti Winter et al., 2020).
In this work we carry out a comprehensive study of the long-term dynamics of
Pallene, as well as of the possible origin and dynamical evolution of its
diffuse dusty ring, formed by micrometre-sized particles subject to
gravitational and non-gravitational forces. We organise this paper as follows:
in Section 2, we describe the different set-ups of our numerical simulations,
performed to address various aspects of our study; we characterise the current
dynamical environment of Pallene and its ring through frequency map analysis
in Section 3. In Section 4, we first estimate the time span in which the
current orbital configuration of the Saturnian system would remain
approximately unchanged by using a simple tidal evolution model; then, with
detailed short- and long-term simulations, we re-evaluate at different
timescales all possible libration angles between Pallene and the six major
Saturnian satellites considered in our study. Finally, a characterisation of
the evolution of Pallene’s ring is carried out in Section 5, where all the
relevant non-gravitational forces that affect small particles are considered.
We summarise our work and present our main conclusions in Section 6.
## 2 Methods and Simulations
Table 1: Saturn’s physical parameters.
Parameter | Value | Reference
---|---|---
$R_{\mathrm{S}}$ [km] | $60\,330$ | Kliore et al. (1980)
$GM_{\mathrm{S}}$ [km3 s-2] | 3.793120749865220E+07 | gm_de431.tpc _a_
$J_{2}$ | 1.6290573E-02 | Iess et al. (2019)
$J_{4}$ | -9.35314E-04 | Iess et al. (2019)
$J_{6}$ | 8.6340E-05 | Iess et al. (2019)
$\Omega_{\mathrm{S}}$ [rad s-1] | 1.65269E-04 | Helled et al. (2015)
* _a_
Available at
https://naif.jpl.nasa.gov/pub/naif/generic_kernels/pck/gm_de431.tpc
Table 2: Summary of physical parameters of the six large moons in our system.
Name | $GM_{m}$_a_ | $\rho_{m}$ | $R_{m}$_b_
---|---|---|---
| [km3 s-2] | [g cm-3] | [km]
Mimas | 2.503522884661795E+00 | 1.152 | 198.2
Enceladus | 7.211292085479989E+00 | 1.606 | 252.6
Tethys | 4.121117207701302E+01 | 0.956 | 537.5
Dione | 7.311635322923193E+01 | 1.469 | 561.4
Rhea | 1.539422045545342E+02 | 1.233 | 763.8
Titan | 8.978138845307376E+03 | 1.880 | 2574.7
* _a_
$GM_{m}$ values are taken from the planetary constant kernel gm_de431.tpc.
* _b_
Radius values, $R_{m}$, are taken from the planetary constant kernel
pck00010.tpc (available at
https://naif.jpl.nasa.gov/pub/naif/generic_kernels/pck/pck00010.tpc, Archinal
et al., 2011).
We carried out extensive and detailed numerical simulations of the evolution
of the dynamical system formed by Pallene and six major Saturnian satellites,
those gravitationally relevant in our region of interest, namely: Mimas,
Enceladus, Tethys, Dione, Rhea, and Titan. Throughout this work, we consider
Saturn’s oblateness and take into account zonal harmonic terms up to $J_{6}$
in all simulations. Our numerical integrations cover several time spans, in
order to study different aspects of the dynamics of Pallene, its phase-space
surroundings, as well as the evolution of its dust ring. Our shortest
simulation lasts 18 yr, while the longest simulation is $5\times 10^{6}$ yr
long.
Unless otherwise stated, the physical parameters of Saturn and the six major
moons used throughout this work are summarised in Tables 1 and 2. We use a
rotation rate for Saturn $\Omega_{\mathrm{S}}=1.65269\times 10^{-4}$ rad/s
from Helled et al. (2015). As initial conditions for the major Saturnian
moons, we use the satellite’s Saturn-centric state vectors taken from the JPL
Horizons ephemerides service (https://ssd.jpl.nasa.gov/horizons.cgi) on
$JD=2459305.5$, corresponding to April 1, 2021. We scale the satellite semi-
major axes and masses to Pallene’s semi-major axis and Saturn’s mass,
respectively. The system’s gravitational constant is scaled accordingly, for
which we use Pallene’s average semi-major axis
$\bar{a}_{\mathrm{Pal}}=2.1228335\times 10^{5}$ km (as found in Muñoz-
Gutiérrez & Giuliatti Winter, 2017) and the $GM_{\mathrm{S}}$ parameter given
in Table 1. Consequently, our gravitational constant for this system is
$G=29.59895344398\;\bar{a}_{\mathrm{Pal}}^{3}\,M_{\mathrm{S}}^{-1}\,d^{-2}$.
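This value is simply $GM_{\mathrm{S}}$ re-expressed in the scaled units ($\bar{a}_{\mathrm{Pal}}=1$, $M_{\mathrm{S}}=1$, time in days), as a two-line check reproduces:

```python
GM_S = 3.793120749865220e7  # km^3 s^-2, from Table 1
a_pal = 2.1228335e5         # km, Pallene's average semi-major axis
day = 86400.0               # s per day
print(GM_S * day ** 2 / a_pal ** 3)  # ~29.598953..., the quoted value
```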
Pallene’s mass is derived from its size, which has been measured with small
uncertainty, i.e., $R_{\mathrm{Pal}}=2.23\pm 0.07$ km, as well as from its
reported range of bulk density, i.e. $0.19\leq\rho_{\mathrm{Pal}}\leq 0.34$
g/cm3 (Thomas et al., 2013). We explore three different density values to
cover the uncertainty reported by Thomas et al., i.e., $\rho_{\mathrm{Pal}}=$
0.19, 0.25, and 0.34 g/cm3. This means that for each simulation suite
described in the following paragraphs, we run three versions, each with
Pallene’s gravitational parameter given by $GM_{\mathrm{Pal}}=$
5.89064055531E-07, 7.75084283594E-07, and 1.05411462568E-06 km3/s2,
corresponding to the selected density values. At the end of each of our
integrations, we convert the state vectors ($\vec{r}$ and $\vec{v}$) to
geometric orbital elements (Renner & Sicardy, 2006), which reduces the short-
term oscillations of the osculating elements due to the oblateness of the
central mass.
In order to place Pallene within its current dynamical context, our first
objective is to characterise the dynamics of a broad region of the geometric
semi-major axis-eccentricity ($a$-$e$) phase-space plane around Pallene. With
this in mind, we performed two numerical simulations (lasting 18 and $10^{4}$
yr, respectively), including a total of $13\,025$ test particles covering a
grid of the geometric $a$-$e$ plane in the vicinity of Pallene. For these
integrations, we used the Bulirsch-Stoer integrator from the Mercury6 package
(Chambers, 1999), with a tolerance (accuracy) parameter of $10^{-12}$ and an
initial time-step of 0.1 days.
Secondly, to examine the big picture of Pallene’s tidal evolution, we use a
simple model based on Murray & Dermott (1999), which assumes a linear tidal
dissipation mechanism and a constant $Q$, independent of frequency. We only
examine the tidal evolution of Pallene and the large moons in its vicinity,
Mimas and Enceladus, in order to look at resonances that may have been crossed
in the recent past, as well as to establish a time limit of the validity of
the current orbital configuration of the system for the longer-term
simulations.
Next, in order to determine the possible resonant behaviour of Pallene, we
performed a set of N-body simulations, spanning from 50 up to $5\times 10^{6}$
yr of integration time. In this instance, the test particles are not included,
and there are only seven bodies orbiting Saturn. The N-body simulations are
performed with our implementation of the Implicit integrator with Adaptive
time-Stepping of 15th-order (IAS15, Rein & Spiegel, 2015) taking into account
Saturn’s gravitational moments (Table 1). Subsequently, we integrate the
satellite system for 50, $5\times 10^{3}$, $5\times 10^{4}$, $5\times 10^{5}$,
and $5\times 10^{6}$ yr. We use the geometric orbital elements to calculate
several libration angle combinations, among all satellites in Table 2 and
Pallene.
Finally, we study the evolution of the diffuse ring through two distinct
scenarios: (a) particles initially co-orbital with the satellite, and (b) the
temporal evolution of particles launched from Pallene’s surface. The study is
performed considering the system’s gravitational effects
and also non-gravitational forces acting in the region, such as solar
radiation force, plasma drag, and the electromagnetic force. Using an adapted
version of the Mercury6 package which includes the effects of these forces and
Saturn’s gravitational moments, we integrated the system formed by Pallene,
the six large moons, and a set of 5,000 test particles until all the particles
were removed from the simulation.
## 3 Pallene’s Current Dynamical Context
### 3.1 Characterisation Through Frequency Map Analysis
Figure 1: Diffusion map for a wide region of the geometric semi-major axis -
eccentricity phase-space plane around Pallene. A colour scale indicates the
stability of orbits, where bluer regions represent the more stable, and redder
ones the more unstable. Locations where particles were ejected or collided
with Pallene before the end of the simulation are coloured in white. The solid
black lines stand for the constant pericentre and apocentre distances of
Pallene, delimiting the collision region of the small moon. All the MMR ratios
which were explored for libration in the simulations of Section 4.2, going
from first to fourth order, are labelled at the top of the figure. Colours
correspond to MMRs with Mimas (blue), Enceladus (red), Tethys (orange), Dione
(brown), and Rhea (green). The final conditions of a longer simulation
($10^{4}$ yr), of the same particles used to create the diffusion map, are
over-plotted on the map (black dots) to highlight the predictive power of the
frequency analysis technique for the characterisation of the dynamical
stability of wide regions of phase space.
To gain a better understanding of the dynamical behaviour and future stability
of Pallene, as well as of the micrometric dust particles in its vicinity, we
carried out a frequency map analysis (FMA, Laskar, 1990; Laskar et al., 1992;
Robutel & Laskar, 2001) of a broad region of the geometric $a$–$e$ phase-space
plane surrounding Pallene. We performed a short-term numerical integration
(of $\sim 18$ yr, or approximately $5\,700$ Pallene orbital periods and
$2\,000$ orbital periods of the most external particle in the map). We used
this time span since at least $2\,000$ revolutions of each particle are
required to confidently recover the main orbital frequencies.
We included $13\,025$ test particles distributed in a homogeneous grid
covering the $a$–$e$ geometric plane, with the following conditions: $a$ is
sampled from 0.95 to 1.05 $D_{\mathrm{Pal}}$ (where $D_{\mathrm{Pal}}$ is the
normalised average geometric semi-major axis of Pallene) in steps of size
$\Delta a=5\times 10^{-5}$. In $e$ we sampled from 0 to 0.02 in steps of
$\Delta e=2\times 10^{-3}$. The remaining orbital elements are all set to zero
for simplicity, namely, inclination $I$, longitude of pericentre $\varpi$,
longitude of the ascending node $\Omega$, and mean anomaly $M$. We recall that
test particles are subject to the gravitational perturbations of an oblate
Saturn, Pallene, and the six gravitationally dominant moons in our region of
interest.
A frequency analysis for each test particle in the grid was performed, using
the algorithm of Šidlichovský & Nesvorný (1996), over the dynamical variable:
$\xi(t)=a(t)\exp(i\lambda(t)),$ (1)
where $a(t)$ and $\lambda(t)$ are the semi-major axis and mean longitude of
each particle, respectively. The variable $\xi(t)$ is closely related to a
formal combination of the action-angle variables ($J_{i}$,$\eta_{i}$) of each
orbit, $\xi^{\prime}_{i}(t)=J_{i}\exp(i\eta_{i})$. Though it is clear that
$\xi(t)$ and $\xi^{\prime}(t)$ are not equal, they are still related as
$\xi(t)=f(\xi^{\prime}_{1},\xi^{\prime}_{2},...,\xi^{\prime}_{n})$, where $f$
is a function close to the identity (Laskar, 1993).
When we perform a frequency analysis of $\xi(t)$, we obtain a decomposition of
the form
$\xi(t)=\alpha_{0}\exp(i\beta_{0})+\sum_{k=1}^{N}\alpha_{k}\exp(i\beta_{k}).$
(2)
For a Keplerian orbit, the decomposition of $\xi(t)$ would have only one term,
i.e. $\alpha_{0}=a$ and $\beta_{0}=n$, where $\beta_{0}$ is what we call the
“mean frequency”, while $a$ and $n$ are the semi-major axis and mean motion of
the particle, respectively. For non-Keplerian orbits, the decomposition given
in Eq. 2 contains many periodic terms. Nonetheless, frequency analysis ensures
that if a particle remains in a stable orbit, the conditions expressed by the
approximations $\alpha_{0}\approx a$ and $\beta_{0}\approx n$ will prevail;
also, for stable orbits $\alpha_{0}\gg\alpha_{k}$. These conditions do not
hold for particles following unstable orbits, for which $\beta_{0}$ will
change dramatically from one time interval to the next, since the evolution of
chaotic orbits does not remain on the surface of KAM tori.
To compute the change of the main frequencies, we perform a frequency analysis
of $\xi(t)$ in two adjacent time intervals of length $T$, equal to half the
total integration time. We call $\beta_{01}$ and $\beta_{02}$ the main
frequencies obtained in each interval, respectively. Finally, we define a
diffusion parameter, $D$, which provides a measure of the stability of the
orbits. Following Correia et al. (2005); Muñoz-Gutiérrez & Giuliatti Winter
(2017) we have
$D=\frac{\left|\beta_{01}-\beta_{02}\right|}{T}.$ (3)
It can be seen that small values of $D$ will be obtained for stable
trajectories, while larger values of $D$ are the result of unstable orbital
evolution.
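To make the procedure concrete, the following is a minimal sketch of the computation of $D$, assuming evenly sampled $a(t)$ and $\lambda(t)$ series; a plain FFT peak is used as a stand-in for the refined frequency-analysis algorithm of Šidlichovský & Nesvorný (1996), so the recovered $\beta_{0}$ is coarser than in our implementation.

```python
import numpy as np

def main_frequency(xi, dt):
    """Frequency (rad per time unit) of the strongest Fourier term of xi(t).

    A crude stand-in for the Sidlichovsky & Nesvorny (1996) algorithm:
    we simply take the peak of the FFT amplitude spectrum.
    """
    spec = np.fft.fft(xi)
    freqs = 2.0 * np.pi * np.fft.fftfreq(len(xi), d=dt)
    k = np.argmax(np.abs(spec[1:])) + 1    # skip the constant (k = 0) term
    return freqs[k]

def diffusion_parameter(a, lam, dt):
    """Eq. (3): D = |beta_01 - beta_02| / T, with T half the time span."""
    xi = a * np.exp(1j * lam)              # Eq. (1)
    half = len(xi) // 2
    T = half * dt
    beta_01 = main_frequency(xi[:half], dt)
    beta_02 = main_frequency(xi[half:], dt)
    return abs(beta_01 - beta_02) / T

# Illustration with a synthetic, nearly Keplerian orbit:
n = 6.3e-5                                 # mean motion [rad/s], ~Pallene
t = np.arange(0.0, 18 * 3.156e7, 8640.0)   # ~18 yr sampled every 0.1 d
a = np.full_like(t, 1.0)                   # constant semi-major axis
lam = n * t
print(diffusion_parameter(a, lam, t[1] - t[0]))   # ~0 for a stable orbit
```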
### 3.2 Diffusion Map of Pallene’s Neighbourhood
A diffusion map for the region around Pallene, shown in Fig. 1, was obtained
after applying the above procedure to all the grid particles covering the
geometric $a$–$e$ plane. A coloured rectangle is plotted for each particle
according to its initial location in the plane, where colour is scaled
according to the value of the logarithm of $D$. Redder colours indicate more
unstable orbits, while bluer colours represent more stable trajectories.
Particles that are lost from the simulation before it finished, mainly due to
collisions with Pallene, are coloured white. Solid black lines delimit
Pallene’s collision region, i.e. the region in which, at their apocentric or
pericentric excursions, particles will cross Pallene’s orbit, thus having a
higher probability of colliding with the small moon.
The diffusion map provides a quick method to globally characterise the
dynamical state of a vast region of phase-space, at a low computational cost,
i.e. using only short-term numerical simulations. Unstable regions are
immediately highlighted by the colour contrast. We can quickly identify MMRs,
as well as their relative strength. The semi-major axis parameter space from
0.98 to 1.02 $D_{\mathrm{Pal}}$ completely encompasses both Pallene’s orbit
and the co-orbital dusty ring. In this region the strongest MMRs are due to
first-order commensurabilities with either Mimas or Enceladus; however,
higher-order MMRs with Dione, Tethys, and Rhea can also be observed. The
locations of all the existing commensurabilities with the six major moons (up
to order 4 and degree 30) are indicated at the top of Fig. 1; outside this
interval we only indicate the location of first-order MMRs with Mimas and
Enceladus. The stronger resonances are characterised by thin vertical
structures of homogeneous yellow to orange colour, such as the 4:5, 5:6, 6:7,
and 7:8 MMRs with Mimas (blue labels), as well as the 5:4, 6:5, 7:6, 8:7, 9:8,
and 10:9 with Enceladus (red labels). Second-, third-, and fourth-order MMR
bands are thinner than first-order resonances. Furthermore, MMR chords are
less stable than the broader, non-resonant, blue bands, regardless of
eccentricity. Aside from possible exceptions at MMRs, lower eccentricity
orbits are far more stable in general throughout the map.
From Fig. 1, it is apparent that Pallene, whose location is indicated by the
large black circle, is not currently trapped inside any strong MMR, despite
the very close proximity of three resonances: the 9:11 with Mimas and the
19:16 and 25:21 with Enceladus.
Moreover, two interesting regions stand out from the map, corresponding to the
clustering of several MMRs with various moons. The first of such regions,
$b_{1}$, is located at $\sim 0.986$ $D_{\mathrm{Pal}}$, where the 5:6 MMR with
Mimas, the 17:14 and 23:19 MMRs with Enceladus, the 5:3 MMR with Tethys
(orange label), and the 4:1 MMR with Rhea (green label) lie in close proximity
to each other. The second region, $b_{2}$, is located around $\sim 1.014$
$D_{\mathrm{Pal}}$; in this region two first-order resonances, the 4:5 with
Mimas and the 7:6 with Enceladus, are in close proximity to the 8:5 MMR with
Tethys (orange label), and the 7:3 MMR with Dione. It is apparent that the
interaction of several low-order resonances results in especially unstable
regions at these locations. A similar case occurs at $\sim 0.966$
$D_{\mathrm{Pal}}$, where the two first-order resonances, 6:7 with Mimas and
5:4 with Enceladus, produce a particularly wide unstable region.
To further assess the predictive power of the frequency analysis technique,
we integrated for $10\,000$ yr the same set of $13\,025$ grid particles
covering the geometric $a$–$e$ phase-space plane. The final conditions of this
simulation were over-plotted on the diffusion map of Fig. 1 with black dots.
To the left of the collision region of Pallene, the largest perturbations in
eccentricity are observed for particles located in the bands of MMRs, as
expected. The most unstable region, however, is the one located in the top
left corner of the map, roughly above 0.015 in eccentricity and to the left of
the 11:13 MMR with Mimas; here the direct influence of Mimas is stronger and
particles are removed faster. To the right of the collision region, all the
particles remain nearly unperturbed, except for the $b_{2}$ band where several
resonances converge, as well as at the locations of other first-order MMRs
with Mimas and Enceladus.
Notably, inside the collision region of Pallene, only three particles survive
after 10 kyr: one is co-orbital with the small moon; a second lies
at the location of the 19:16 MMR with Enceladus, and the last one lies inside
the band of the 11:9 MMR with Enceladus, which overlaps with the 21:25 MMR
with Mimas.
Both the map and the long-term simulation of the particles serve as an
indication of the future evolution of large dust particles, with radii larger
than $\sim 30$ $\mu$m, i.e. those unaffected by non-gravitational forces known
to act in this region. Towards the internal zone of the Pallene collision
region, even these large particles would be removed (though on timescales
greater than 10 kyr) due to perturbations from Mimas. Exterior to the Pallene
collision region, large particles could in principle survive for very long
times. This indicates that the Pallene ring would find greater stability
towards semi-major axes larger than that of the small moon, with the
eccentricity of its constituent particles increasing as they encounter MMR
regions with Enceladus. On the other hand, ring-forming particles within the Pallene
collision region could survive mainly as co-orbitals; however, with only one
co-orbital and two apparently resonant particles surviving in this region in
the 10 kyr simulation, we cannot provide quantifiable predictions for the
behaviour of the ring, based exclusively on the diffusion map. For a more in-
depth analysis of the possible origin of the ring and its future evolution, we
performed a large set of detailed simulations, presented in Sections 5.2 to
5.4 of this paper.
## 4 Dynamical evolution of Pallene on different timescales
### 4.1 Tidal Evolution
To gain an appropriate perspective on the timescales of Pallene’s dynamical
evolution, we first look at Pallene’s tidal evolution in between Mimas and
Enceladus. Although more complex analyses of tidal evolution in the Saturn
system have recently been done (e.g. Fuller et al., 2016; Lainey et al.,
2020), here we employ a simpler model to gain a general understanding of the
context in which Pallene may have evolved. Using Equation 4.213 from Murray &
Dermott (1999), we can calculate previous semi-major axes
$a=a_{0}\left[1-\left(\frac{k_{2}}{Q}\frac{39M_{m}R_{S}^{5}}{2a_{0}^{13/2}}\sqrt{\frac{G}{M_{S}}}t\right)\right]^{2/13},$
(4)
assuming that the tidal dissipation mechanism is linear and that $Q$ is
frequency-independent.
For our tidal evolution calculations, we take our value for Saturn’s Love
number, $k_{2}=0.390$, from Lainey et al. (2017). We adopt a quality factor
$Q=2000$, also based on Lainey et al. (2017) and similar to the value used in
Ćuk et al. (2016), which in turn drew on the earlier work of Lainey et al.
(2012); there is less agreement on this value, however, and it is meant to
apply only near the semi-major axes of Mimas and Enceladus. Previous
estimates of $Q$ an order of magnitude higher were due to the assumption that
Mimas was primordial (Murray & Dermott, 1999; Meyer & Wisdom, 2008). However,
recent studies that argue Saturn’s rings and the mid-sized moons are probably
young, use a $Q$ value in the range we have assumed (Ćuk et al., 2016; Fuller
et al., 2016; Lainey et al., 2017; Neveu & Rhoden, 2019; Hesselbrock & Minton,
2019). Other values for this calculation are given in Tables 1 and 2.
Using these values, we measured the change in semi-major axis with respect to
today’s semi-major axis value $\frac{\Delta a}{a}$ over the past five million
years for Mimas, Pallene, Enceladus, Tethys, and Dione. Out of these
measurements, Mimas has $\frac{\Delta a}{a}=0.0017$, which is the largest
among these moons. Because this change in semi-major axis due to tidal
evolution is small, we expect our long-term simulations of 5 Myr without the
inclusion of tidal evolution to be accurate enough.
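As a check, Eq. 4 can be evaluated directly; the short sketch below reproduces the quoted $\frac{\Delta a}{a}$ for Mimas over 5 Myr, assuming standard values for Saturn’s mass and radius and for Mimas’s mass and present semi-major axis (these constants are illustrative stand-ins for the values in Tables 1 and 2).

```python
import math

G   = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
M_S = 5.683e26      # Saturn's mass [kg]          (assumed value)
R_S = 6.0268e7      # Saturn's equatorial radius [m]
k2  = 0.390         # Saturn's Love number (Lainey et al. 2017)
Q   = 2000.0        # quality factor

def a_past(a0, M_m, t):
    """Eq. (4): semi-major axis a time t in the past, linear constant-Q tides."""
    c = (k2 / Q) * (39.0 * M_m * R_S**5) / (2.0 * a0**6.5) * math.sqrt(G / M_S)
    return a0 * (1.0 - c * t)**(2.0 / 13.0)

# Mimas: M_m ~ 3.75e19 kg, a0 ~ 185,539 km (assumed present-day values)
a0 = 1.85539e8
t  = 5e6 * 3.156e7                          # 5 Myr in seconds
print(1.0 - a_past(a0, 3.75e19, t) / a0)    # ~0.0017, as quoted in the text
```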
From the semi-major axis calculations, if Pallene is old enough, it may have
recently escaped the 4:5 resonance with Mimas (40 Myr ago with $Q=2000$).
Prior to escape, Pallene could have migrated with Mimas for a substantial
period of time. For this reason, it becomes difficult to project Pallene’s
previous tidal evolution with much certainty. If Pallene was not captured in
any resonance with Mimas for a significant period of time, which is unlikely
because their orbits are converging, then further in the past Pallene’s orbit
may have crossed that of Enceladus (400 Myr ago with $Q=2000$), suggesting
that Pallene could be a fragment from Enceladus, similar to the way Showalter
et al. (2019) propose that Hippocamp could have fragmented off of Proteus,
possibly from a cometary impact.
Hippocamp is close to the orbit that is synchronous with Neptune’s rotation,
which, together with the fact that it is the least massive of Neptune’s moons,
implies that the rest of Neptune’s moons are diverging from Hippocamp. In
contrast, Pallene’s orbit is converging with Mimas’s orbit. For this reason,
Pallene is expected to have been captured into resonance with Mimas at each
resonance crossing, but it is difficult to determine the duration of the
capture in each resonance.
Proteus and Hippocamp have mean radii of 203.8 km and 17.4 km (Showalter et
al., 2019), while Enceladus and Pallene have mean radii of 252 km and 2.23 km
(Roatsch et al., 2009; Thomas et al., 2013). Using these mean radii and masses
of $1.08\times 10^{20}$ kg for Enceladus (Jacobson et al., 2006) and
$4.4\times 10^{19}$ kg for Proteus (multiplying the volume from Stooke (1994)
by an assumed density of 1.3 g/cm3), the escape velocity
$v_{\mathrm{esc}}=\sqrt{2GM_{m}/R_{m}}$ from the surface of Enceladus is 240
m/s, while for Proteus it is 170 m/s. Pallene has a smaller size ratio to
Enceladus than Hippocamp has to Proteus, but perhaps Pallene is evidence of
the proposed impactor in the south polar terrain of Enceladus (Roberts &
Stickle, 2017).
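These escape velocities follow directly from the quoted radii and masses; a short check, with the gravitational constant as the only added input:

```python
import math

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def v_esc(M, R):
    """Surface escape velocity sqrt(2GM/R), in m/s."""
    return math.sqrt(2.0 * G * M / R)

print(v_esc(1.08e20, 2.52e5))   # Enceladus: ~240 m/s
print(v_esc(4.4e19, 2.038e5))   # Proteus:   ~170 m/s
```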
Not too long in the past, however, is the Mimas-Enceladus 3:2 resonance
crossing (115 Myr ago with $Q=2000$). Meyer & Wisdom (2008) studied a triplet
of Mimas-Enceladus 3:2 resonances and found that Mimas’s eccentricity can be
explained either by passage through the 3:2 $e$-Mimas resonance or the 6:4
$ee^{\prime}$-mixed resonance (but not the 3:2 $e$-Enceladus resonance), and
found dynamical escape to be possible for both of these resonances. Ćuk et al.
(2016) proposed that Tethys, Dione, and Rhea all formed in one event about 100
Myr ago, and suggests that Mimas and Enceladus could have formed during the
same epoch or could be even younger. Neveu & Rhoden (2019), however, have
suggested that Mimas could be significantly younger than Enceladus. This last
scenario allows for the possibility of Pallene migrating away from Enceladus
after an impact before the formation of Mimas.
Thus, given a constant-$Q$ tidal model, it appears that Pallene has crossed
several resonances, which, especially if it had been trapped in any of them for some
period of time, could have affected its eccentricity and inclination. However,
the new tidal models indicate the evolution of the satellites could be more
complex than previously thought (Fuller et al., 2016; Lainey et al., 2020).
Still, small moons such as Pallene are likely sensitive probes of this tidal
evolution (see, for example, El Moutamid et al., 2017) and so should be
considered in those contexts.
### 4.2 Resonance Analysis
In view of the rich dynamical structure of the phase-space close to Pallene,
where many resonances are in close proximity to each other, we seek to
determine whether any particular resonance between Pallene and one or more of
the major Saturnian moons drives the evolution of Pallene, or could be a
possible mechanism to confine the particles of the dusty ring. Hence, we ran
five sets of numerical N-body simulations with different integration times,
i.e. 50 (or approximately $15\,766$ Pallene orbits), $5\times 10^{3}$,
$5\times 10^{4}$, $5\times 10^{5}$, and $5\times 10^{6}$ yr. The output
interval in each integration is always a multiple of Pallene’s orbital period,
$P\approx 1.2$ d, so that in each output file there are a total of $15\,220$
data points. For each integration, several libration angles from the direct
and indirect arguments of the disturbing function were explored, up to fourth-
order (Murray & Dermott, 1999). Due to the uncertainties in Pallene’s density
and therefore its mass, three different densities were considered, as
described in Section 2, so that in total 15 realisations were performed, three
for each integration time; we refer to the three runs sharing an integration
time as a density-set.
We restricted our search to the resonant arguments of the disturbing function
for two reasons: (1) the number of possible arguments is constrained, and (2)
if one of these arguments librates, it can be carried over directly into
future secular theory calculations for this system.
The libration angle between an outer satellite (primed orbital elements) and
an inner satellite (un-primed elements) is expressed as
$\phi=j\lambda^{\prime}+(k-j)\lambda+\gamma(\varpi^{\prime},\varpi,\Omega^{\prime},\Omega),$
(5)
where $k$ is the order, $j$ the degree, and $\gamma$ is a linear combination
of $\varpi^{\prime},\,\varpi,\,\Omega^{\prime}$, and $\Omega$. The examined
libration angles range in order $k$ from 1 to 4, while the degree $j$
corresponds to possible resonances within 0.98 and 1.02 $D_{\mathrm{Pal}}$.
The linear combination $\gamma(\varpi^{\prime},\varpi,\Omega^{\prime},\Omega)$
in Eq. 5 is determined from the direct and indirect arguments of the
disturbing function described in Murray & Dermott (1999), which have the form
$\gamma=k_{1}\varpi^{\prime}+k_{2}\varpi+k_{3}\Omega^{\prime}+k_{4}\Omega$,
where $k_{1}+k_{2}+k_{3}+k_{4}=-k$ so that the coefficients of Eq. 5 sum to
zero, as required by the d’Alembert rule.
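As an illustration of how constrained the search is, the sketch below enumerates all $\gamma$ combinations for a given order $k$ and degree $j$ under the d’Alembert conditions (coefficients summing to zero and an even combined power of the nodes); the bounds on the individual $k_{i}$ are our own assumption, used only to keep the enumeration finite.

```python
from itertools import product

def resonant_arguments(k, j, kmax=4):
    """Enumerate gamma = k1*pomega' + k2*pomega + k3*Omega' + k4*Omega for
    phi = j*lambda' + (k - j)*lambda + gamma (Eq. 5), imposing:
      (i)  k1 + k2 + k3 + k4 == -k   (coefficients of phi sum to zero),
      (ii) k3 + k4 even              (nodes enter at even combined power).
    kmax bounds |k_i| purely to keep the enumeration finite (our assumption).
    """
    args = []
    for k1, k2, k3, k4 in product(range(-kmax, kmax + 1), repeat=4):
        if k1 + k2 + k3 + k4 == -k and (k3 + k4) % 2 == 0:
            args.append((j, k - j, k1, k2, k3, k4))
    return args

# Example: third-order arguments of the Enceladus-Pallene 19:16 commensurability
for a in resonant_arguments(k=3, j=19)[:5]:
    print(a)
```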
In the rest of this section, we denote the libration angles of a given moon
with Pallene by their capitalised initials, e.g., $\phi_{\mathrm{PM}}$ for the
Pallene-Mimas libration angle, except for Tethys which will be denoted by a
“t” to distinguish it from Titan. For the semi-major axis interval considered
above, the possible resonant combinations are summarised in Table 3. The
majority of explored direct arguments involve either Pallene and Mimas, or
Pallene and Enceladus. In contrast, the combination between Pallene and Titan
lacks possible resonant combinations in this semi-major axis interval. For
completeness, additional zeroth-order resonances were also evaluated for
degrees $j=$ 0 to 15.
Table 3: Order $k$ and degree $j$ explored for libration angles with Pallene

Moon | $k$ | $j$
---|---|---
Mimas | 1 | 5, 6
 | 2 | 10 – 12
 | 3 | 15 – 19
 | 4 | 20 – 25
Enceladus | 1 | 6, 7
 | 2 | 11 – 15
 | 3 | 17 – 22
 | 4 | 22 – 30
Tethys | 2 | 5
 | 3 | 8
 | 4 | 10
Dione | 4 | 7
Rhea | 3 | 4
We inspected 75 indirect arguments per moon pair, i.e., 450 in total, denoted
$\psi$ to distinguish them from the direct arguments, $\phi$. Most of the
indirect arguments circulate over the full angular range on every
timescale. Only two fourth-order indirect arguments show interesting
behaviour: the Dione-Pallene argument
$\psi_{\mathrm{DP}}=\lambda^{\prime}+3\lambda-2\varpi^{\prime}-2\varpi$
displays temporal libration (Fig. 2a) for about 30 kyr, while the Titan-
Pallene argument
$\psi_{\mathrm{TP}}=3\lambda^{\prime}+\lambda-2\varpi^{\prime}-2\Omega$ (Fig.
2b) presents a long circulation period of 494 yr.
(a) $\psi_{\mathrm{DP}}=\lambda^{\prime}+3\lambda-2\varpi^{\prime}-2\varpi$
(b) $\psi_{\mathrm{TP}}=3\lambda^{\prime}+\lambda-2\varpi^{\prime}-2\Omega$
Figure 2: Unique indirect arguments in our search with either librating
properties or long period circulation. The remaining 448 arguments displayed
short period circulation.
In contrast, the direct arguments displayed a broader variety of phenomena
depending on the timescale of the integration: circulation, alternating
intervals of circulation, libration, or overall circulation with ‘steps’ of
near constant value. In Sections 4.3 to 4.4, we only present angles that show
resonant-like features and that coincide within a given density-set; we
display only the evolution from the $\rho_{\mathrm{Pal}}=0.25$ g/cm3 integrations.
Nevertheless, when the resonant-like libration angles are compared within
density-sets, we find that for integrations longer than $5\times 10^{3}$ yr
the angles evolve similarly for the first $5\times 10^{4}$ yr but differ after
this threshold. Consequently, the effect of Pallene’s mass on its dynamical
evolution is small and only noticeable after $10^{4}$ yr or $\sim 10^{6}$
Pallene orbits.
We divide our analysis into short-term (50 yr) and long-term ($t\geq 5\times
10^{3}$ yr) simulations, demonstrating that Pallene shows different resonant
behaviour with one or more Saturnian satellites depending on the timescale,
with some features emerging only in the short- or only in the long-term simulations.
### 4.3 Short-term evolution of direct arguments
The intention of the 50 yr simulations was to re-examine the suggested third-
order resonance between Pallene and Enceladus (Spitale et al., 2006). We
probed all ten possible direct arguments containing
$19\lambda^{\prime}-16\lambda$ for libration, finding one additional combination with
interesting behaviour in this interval. Figure 3 shows a comparison between
the resonant angle suggested by Spitale et al. (2006) (Fig. 3a) and our
finding (Fig. 3b). The angle
$\phi_{\mathrm{EP}}=19\lambda^{\prime}-16\lambda-\varpi-2\Omega$ circulates
with a period of $10.6$ yr. Similarly, Muñoz-Gutiérrez & Giuliatti Winter
(2017) found this angle to circulate, but with a period 1.8 times shorter. The
angle
$\phi_{\mathrm{EP}}=19\lambda^{\prime}-16\lambda-\varpi^{\prime}-2\Omega^{\prime}$
differs from that suggested in Spitale et al. (2006) in that the longitudes of
the ascending node and pericentre belong to the outer satellite instead of the
inner one. The evolution of this argument exhibits a shallower negative slope,
circulating with a period of $\sim 30$ yr.
(a) $\phi_{\mathrm{EP}}=19\lambda^{\prime}-16\lambda-\varpi-2\Omega$
(b)
$\phi_{\mathrm{EP}}=19\lambda^{\prime}-16\lambda-\varpi^{\prime}-2\Omega^{\prime}$
Figure 3: Two different 19:16 libration angles between Enceladus and Pallene
over 50 yr. The top panel corresponds to the resonant angle suggested by
Spitale et al. (2006) and the bottom panel corresponds to our finding.
Although both libration angles circulate, the argument in Fig. 3b has a
circulation period 3 times longer than the one in Fig. 3a.
In contrast to other small moons in the region, clearly trapped in first-order
MMRs with Mimas, such as Aegaeon in the 7:6 (Hedman et al., 2010; Madeira et
al., 2018), Methone in the 14:15 (Spitale et al., 2006; Hedman et al., 2009;
Callegari et al., 2021), and Anthe in the 10:11 (Cooper et al., 2008;
Callegari & Yokoyama, 2020), our short-term (and long-term) simulations
indicate that Pallene’s evolution is not characterised uniquely by any MMR,
either with Mimas or Enceladus. Although some of the 19:16 libration angles
between Pallene and Enceladus present features associated with a near-
resonance, they all clearly circulate on longer timescales. It is likely that
Pallene is just outside the parameter space that characterises the 19:16 MMR
with Enceladus. Similarly, several of the libration angles shown outside of
the Mimas-Anthe 10:11 MMR (Fig. 8 of Callegari & Yokoyama, 2020) resemble the
evolution of some of the direct arguments we studied in this work. This
suggests that an analysis of the “individual dynamic power spectra” (IPS in
Callegari & Yokoyama, 2020) of the 19:16 MMR between Pallene and Enceladus
could disclose the nature of the current resonant state of Pallene (Fig. 3);
however, we consider such an analysis beyond the scope of the current work.
#### 4.3.1 Simultaneous zeroth-order direct argument among all moons
While examining the zeroth-order direct arguments of the 50 yr simulations, a
simultaneous resonant libration angle was detected between Pallene and four
other moons: Mimas, Tethys, Dione, and Titan. Here ‘simultaneous’ means that
more than one pair of satellites (Pallene and another large Saturnian moon)
displays apparent resonant properties for the same libration angle expression.
In this case, this simultaneity emerged for
$\Phi\equiv\varpi^{\prime}-\varpi+\Omega^{\prime}-\Omega$ as presented in Fig.
4. In this time interval, $\Phi$ appears to be constant with small
oscillations, except for the pairs Enceladus-Pallene and Rhea-Pallene, which
circulate with periods of 12 and 36 yr, respectively. Nonetheless, Enceladus
displays a semi-resonant behaviour due to the step-like oscillation of
$\Phi_{\mathrm{EP}}$. Each “step” has a semi-constant value that changes in
each full circulation. For example, in the first step (from 1 to 4 yr) the
nearly-constant value is $90^{\circ}$, while on the fourth step (from 14 to 18
yr) the corresponding value is $60^{\circ}$; thus, there are $\sim 4$ yr
intervals where this angle librates, followed by a shift of $\sim 130^{\circ}$
over $\sim 1.5$ yr to another semi-constant step.
Figure 4: 50 yr evolution of the libration angle
$\Phi=\varpi^{\prime}-\varpi+\Omega^{\prime}-\Omega$ between each moon in
Table 2 and Pallene. The libration angle known for the Pallene-Mimas pair
(top panel; Callegari & Yokoyama, 2010) also presents resonant behaviour for
Pallene paired with three other moons: Tethys, Dione, and Titan (3rd, 4th, and
6th panels from top to bottom). In contrast, $\Phi_{\mathrm{EP}}$ exhibits
circulation with semi-constant ‘steps’, whereas $\Phi_{\mathrm{RP}}$ (5th
panel) circulates.
Callegari & Yokoyama (2010) suggested this quasi-resonant relationship between
Mimas and Pallene ($\Phi_{\mathrm{PM}}$) and demonstrated that it has a long
circulation period ($\sim 5000$ yr, later confirmed by Muñoz-Gutiérrez &
Giuliatti Winter, 2017). In order to explore possible circulation of $\Phi$
for Tethys, Dione, and Titan, we looked for circulation of this direct
argument in our $5\times 10^{3}$ yr integrations and, if circulation existed,
determined the corresponding period using Fourier frequency analysis. Table 4
lists the circulation periods of $\Phi$ for each moon pair, including our
estimate for $\Phi_{\mathrm{PM}}=4708$ yr. The measured circulation periods
for Tethys-Pallene (tP), Dione-Pallene (DP) and Titan-Pallene (TP), are 872
yr, 844 yr, and 794 yr, respectively. Even though these angles are not
resonant, their long circulation relative to Pallene’s orbital period might
significantly affect the dynamics of Pallene in the short-term.
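The circulation periods of Table 4 were obtained from a Fourier analysis of the angle series; the following is a minimal sketch of that step, again using a plain FFT peak in place of a refined frequency analysis, with a synthetic angle as input.

```python
import numpy as np

def circulation_period(phi, dt):
    """Estimate the circulation period of an angle series phi(t) [rad]
    from the FFT peak of exp(i*phi); dt is the sampling interval."""
    z = np.exp(1j * phi)
    spec = np.abs(np.fft.fft(z - z.mean()))
    freqs = np.fft.fftfreq(len(z), d=dt)
    f = abs(freqs[np.argmax(spec)])
    return 1.0 / f if f > 0 else np.inf

# Synthetic check: an angle circulating once every 100 yr,
# sampled yearly over 5000 yr, is recovered exactly.
t = np.arange(0.0, 5000.0, 1.0)
print(circulation_period(2.0 * np.pi * t / 100.0, 1.0))   # 100.0
```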
Table 4: Circulation period of the angle $\Phi=\varpi^{\prime}-\varpi+\Omega^{\prime}-\Omega$ for the moon pairs in Fig. 4.

Moon pair | $P_{\mathrm{circ}}$ [yr] | No. orbits [$10^{3}$]
---|---|---
PM | 4708 | 1495.5
EP | 12 | 3.9
tP | 872 | 277.0
DP | 844 | 268.2
RP | 36 | 11.3
TP | 794 | 252.4
The possible existence of a quasi-resonance with the same combination of
angles that excludes the mean longitudes suggests an alignment of the lines of
nodes and apses of Pallene, Mimas, Tethys, and Dione, most likely with Titan.
In other terms, a combination of the eccentricity and inclination vectors of
these satellites may be aligned to some extent to Titan’s. This is not
entirely unexpected, since secular resonances could lead to apsidal
alignments; in the Saturnian system an example of this has long been known to
occur between Rhea and Titan (see Greenberg, 1975, and references therein).
The well-known example of the Tethys-Mimas 2:4 MMR
($\phi_{\mathrm{tM}}=4\lambda^{\prime}-2\lambda-\Omega^{\prime}-\Omega$), for
which the variation in inclination drives the resonance (Greenberg, 1973;
Allan, 1969), is another important example of node alignment. Moreover,
alignment of the nodes has been discussed in several works involving the
dynamics of compact extrasolar systems (e.g., Kaib et al., 2011; Boué &
Fabrycky, 2014; Granados Contreras & Boley, 2018); the latter works attribute
this alignment to the interaction of an outer massive planet/companion with an
inner compact system (of planets), which affects the inner system as if it were
a rigid body. In the case of the Saturnian moon system, a study of the
compactness of the orbits interior to Titan could reveal whether this
phenomenon also occurs in this system. Nonetheless, a detailed study of this
scenario is beyond the scope of this paper, which is focused on Pallene’s
dynamics; we thus leave this idea for future work.
### 4.4 Long-term evolution of direct terms
We performed four long-term simulations, lasting $5\times 10^{3}$, $5\times
10^{4}$, $5\times 10^{5}$, and $5\times 10^{6}$ yr. In these simulations, most
of the explored arguments circulate. Although a handful of angles display
resonant characteristics during definite time intervals, there is not a single
case in which the libration angle has a constant value for the total length of
the simulations.
In Sections 4.4.1 to 4.4.5, we present libration angles of interest separated
by order, at least one per order, from first to fourth order, finishing with
zeroth order. The second-order arguments in Section 4.4.2 failed to produce
similar behaviour across the density-sets on all timescales. However, we
include the results of two Tethys-Pallene arguments (each with a distinct
density for Pallene) displaying temporal libration, to exemplify the long-term
effect of Pallene’s mass in determining its resonant state.
#### 4.4.1 First-order arguments
Only one first-order argument presenting unusual features was recovered from
our simulations (Fig. 5). Although it circulates at all times in the $5\times
10^{5}$ yr integration, this 7:6 argument between Enceladus and Pallene shows
a change in circulation frequency: the circulation slows down and remains slow
for more than $2\times 10^{5}$ yr, a considerable interval in terms of
Pallene’s orbital period. The exact resonance is located at 1.012
$D_{\mathrm{Pal}}$ and is one of the strongest resonances in the region
considered in this work (see the map of Fig. 1). However, because Pallene’s
semi-major axis is far from the 7:6 MMR location, it is unlikely that Pallene
would be trapped or suffer strong perturbations from Enceladus through this
resonance.
Figure 5: First-order argument
$\phi_{\mathrm{EP}}=7\lambda^{\prime}-6\lambda+\varpi-2\Omega$ between
Enceladus and Pallene over $5\times 10^{5}$ yr. A change in the circulation
frequency is observed between 100 to 300 kyr.
#### 4.4.2 Second-order arguments
Fig. 6 presents the evolution of two second-order libration angles with the
same degree, 5:3, over $5\times 10^{5}$ yr. The argument involving the
longitudes of ascending nodes of Tethys and Pallene (6a), corresponding to the
simulation with $\rho_{\mathrm{Pal}}=0.19$ g/cm3, exhibits two librating
intervals, one between 350 and 400 kyr and another extending from 450 to 500
kyr, with a slow circulation in between. On the other
hand, the second argument (6b) is an outcome of the $\rho_{\mathrm{Pal}}=0.34$
g/cm3 simulation and involves both the nodal and apsidal longitudes. This
argument briefly librates at different intervals of the simulation, the most
notable of which covers the 400 to 450 kyr interval.
(a) $\phi_{\mathrm{tP}}=5\lambda^{\prime}-3\lambda+\Omega^{\prime}-3\Omega$
(b)
$\phi_{\mathrm{tP}}=5\lambda^{\prime}-3\lambda+\varpi^{\prime}-\varpi+\Omega^{\prime}-\Omega$
Figure 6: Second-order 5:3 MMR between Tethys and Pallene. Both arguments
present temporal libration at different times that last for thousands of
years.
#### 4.4.3 Third-order arguments
(a) $\phi_{\mathrm{EP}}=22\lambda^{\prime}-19\lambda-\varpi^{\prime}-2\varpi$
(b) $\phi_{\mathrm{tP}}=8\lambda^{\prime}-5\lambda-\varpi^{\prime}-2\varpi$
(c) $\phi_{\mathrm{EP}}=19\lambda^{\prime}-16\lambda-3\varpi$
(d) $\phi_{\mathrm{tP}}=8\lambda^{\prime}-5\lambda-\varpi^{\prime}-2\varpi$
Figure 7: Third-order direct arguments between Enceladus and Pallene and
between Tethys and Pallene on different timescales. From all the arguments
with librating properties, the argument
$\phi_{\mathrm{tP}}=8\lambda^{\prime}-5\lambda-\varpi^{\prime}-2\varpi$ (Figs.
7b and 7d) librates for about 10 kyr, the longest time amongst our findings in
Section 4.
In total, three different third-order arguments were found (Fig. 7),
associated with resonances with Enceladus and with Tethys. On the $5\times
10^{3}$ yr timescale, the direct argument
$\phi_{\mathrm{EP}}=22\lambda^{\prime}-19\lambda-\varpi^{\prime}-2\varpi$
between Enceladus and Pallene (Fig. 7a) circulates for most of the integration
yet displays intervals of libration which last about 400 yr, and, similar to
the argument in Fig. 5, shifts the constant value at which it librates, e.g.,
in the first 400 yr it librates close to 0∘ and then shifts to librate close
to 90∘ from the 1200 to 1600 yr interval. The argument associated with the 8:5
MMR between Tethys and Pallene,
$\phi_{\mathrm{tP}}=8\lambda^{\prime}-5\lambda-\varpi^{\prime}-2\varpi$,
exhibits a clear, large-amplitude libration for the duration of the integration. Figure
7b is the clearest example of libration found in our exhaustive exploration of
resonant arguments between Pallene and a major Saturnian moon.
On the $5\times 10^{4}$ yr realisations (Figs. 7c and 7d), we recover a
Tethys-Pallene 8:5 argument and find an additional 19:16 direct argument
between Enceladus and Pallene. The latter argument (Fig. 7c),
$\phi_{\mathrm{EP}}=19\lambda^{\prime}-16\lambda-3\varpi$, presents a distinct
libration interval between $3.8$ and $4\times 10^{4}$ yr around 90∘. Finally,
the Tethys-Pallene 8:5 argument is displayed in Fig. 7d. We observe that the
gentle slope visible on the shorter timescale (Fig. 7b) is maintained on this
scale; it then steepens after $\sim 1.2\times 10^{4}$ yr until the argument
begins to behave erratically, with alternating circulation and libration
intervals. Similar behaviour occurs in the $5\times 10^{5}$ yr
realisation, but not in our longest integrations ($5\times 10^{6}$ yr), where
the 8:5 argument no longer exhibits signs of libration, just circulation.
#### 4.4.4 Fourth-order arguments
We identified three direct arguments of fourth-order with temporal libration
with Dione and with Enceladus. Figure 8a illustrates the argument
$\phi_{\mathrm{DP}}=7\lambda^{\prime}-3\lambda-4\Omega^{\prime}$ between Dione
and Pallene; this inclination-type resonance involves only the longitude of
the ascending node of Dione; it was recovered in the timescale of $5\times
10^{3}$ yr only. Despite the general circulation of this argument, some
libration intervals with large amplitude about 180∘ are observed.
(a) $\phi_{\mathrm{DP}}=7\lambda^{\prime}-3\lambda-4\Omega^{\prime}$
(b) $\phi_{\mathrm{EP}}=22\lambda^{\prime}-18\lambda-\Omega^{\prime}-3\Omega$
(c)
$\phi_{\mathrm{EP}}=25\lambda^{\prime}-21\lambda-\varpi^{\prime}-\varpi-2\Omega^{\prime}$
Figure 8: Evolution of fourth-order direct arguments (of Pallene) with Dione
and with Enceladus on different timescales.
The remaining arguments, between Enceladus and Pallene (Figs. 8b and 8c) over
$5\times 10^{4}$ yr, have the particularity that their temporal libration
intervals coincide and that their resonances have similar widths in Fig. 1. The
argument
$\phi_{\mathrm{EP}}=22\lambda^{\prime}-18\lambda-\Omega^{\prime}-3\Omega$
corresponds to the Enceladus-Pallene 11:9 MMR location in Fig. 1, while the
argument in Fig. 8c, i.e.,
$\phi_{\mathrm{EP}}=25\lambda^{\prime}-21\lambda-\varpi^{\prime}-\varpi-2\Omega^{\prime}$,
is situated at 0.9983 $D_{\mathrm{Pal}}$ almost overlapping the Mimas-Pallene
9:11 MMR.
#### 4.4.5 Zeroth-Order arguments
We recovered several zeroth-order arguments with various degrees of Pallene
with Dione, Rhea, and Titan from the $5\times 10^{4}$ yr and $5\times 10^{6}$
yr integrations. The clearest libration occurs in the argument
$\phi_{\mathrm{TP}}=5\lambda^{\prime}-5\lambda+\varpi^{\prime}+\varpi-\Omega^{\prime}-\Omega$
between Titan and Pallene (Fig. 9c) which coincides with the libration
intervals of
$\phi_{\mathrm{RP}}=13\lambda^{\prime}-13\lambda+\varpi^{\prime}-\varpi-\Omega^{\prime}+\Omega$
(Fig. 9b) and the reversal of circulation of
$\phi_{\mathrm{DP}}=3\lambda^{\prime}-3\lambda+\varpi^{\prime}+\varpi-2\Omega^{\prime}$
(Fig. 9a). A different argument involving Titan and Pallene also with degree 5
is shown in Fig. 9d. It displays a slow circulation with intervals of faster
circulation coincident with the libration period of the argument in Fig. 9c.
(a)
$\phi_{\mathrm{DP}}=3\lambda^{\prime}-3\lambda+\varpi^{\prime}+\varpi-2\Omega^{\prime}$
(b)
$\phi_{\mathrm{RP}}=13\lambda^{\prime}-13\lambda+\varpi^{\prime}-\varpi-\Omega^{\prime}+\Omega$
(c)
$\phi_{\mathrm{TP}}=5\lambda^{\prime}-5\lambda+\varpi^{\prime}+\varpi-\Omega^{\prime}-\Omega$
(d) $\phi_{\mathrm{TP}}=5\lambda^{\prime}-5\lambda+2\varpi-2\Omega^{\prime}$
(e) $\phi_{\mathrm{DP}}=\varpi^{\prime}-\varpi$
(f) $\phi_{\mathrm{RP}}=2\varpi-\Omega^{\prime}-\Omega$
Figure 9: Several zeroth-order arguments over timescales of $10^{4}$ and
$10^{6}$ yr. In these timescales, none of the zeroth-order arguments repeat as
in Section 4.3.1. Furthermore, two Titan-Pallene direct arguments of degree 5
were recovered (Figs. 9c and 9d); the latter displays a clearer temporal
libration about 0∘ at $3.75\times 10^{4}$ yr.
Finally, the bottom two panels in Fig. 9 only involve the apsidal and nodal
longitudes. The longest and most evident circulation found in our simulations
occurs between the longitudes of pericentre of Dione and Pallene (Fig. 9e),
while a reversal in the circulation of the argument
$\phi_{\mathrm{RP}}=2\varpi-\Omega^{\prime}-\Omega$ between Rhea and Pallene
(Fig. 9f) takes place between 2 and 3 Myr, producing a temporary libration in
this interval.
### 4.5 What all these arguments mean
In our exhaustive search for resonant behaviour, we did not find any clear
libration for either first- or second-order resonances among any of the
Pallene pairings with the six large moons considered in this work. This means
that the proposed 19:16 MMR between Enceladus and Pallene does not exist.
The quasi-resonant zeroth-order argument suggested by Callegari & Yokoyama
(2010) between Pallene and Mimas is also present with other moons. Taking into
account the values of $e$ and $I$, we consider that the most important
contribution of this combination to the disturbing function would be the one
arising from the Mimas-Pallene pair, followed by the Titan-Pallene pair. The
small discrepancy in the circulation period found in this paper with respect
to the value found in Muñoz-Gutiérrez & Giuliatti Winter (2017) may be due to
the updated values of both $GM_{m}$ and Saturn’s zonal harmonics.
The clearest librations, observed for arguments of third- and fourth-order
resonances, would, however, have only a slight contribution to the disturbing
function, given the small eccentricities and inclinations of both Pallene and
the other moons. For the same reason, we do not expect that any resonance of
higher order, or of the same order but larger degree, would result in any
significant contribution to the evolution of Pallene.
Based on our analysis, we can conclude that Pallene is not currently trapped
in any two-body MMR of any order or degree. This does not exclude the
possibility of the existence of a more complex, three-body resonance,
involving Pallene and some of the other moons, not exclusive to Mimas and
Enceladus. Although a preliminary analysis of this possibility does not show
any clear signs for the existence of such a configuration, an in-depth
analysis of three-body resonances is left for future work.
We find no significant variations in the overall results of simulations
shorter than $\sim 2\times 10^{4}$ yr as a function of density (this includes
all the simulations referring to the evolution of Pallene’s ring). For longer
simulations, the accumulation of numerical errors, resulting from differences
in the $GM_{m}$ values of order $10^{-7}$–$10^{-8}$, and the weakly chaotic
nature of the N-body problem, lead to a loss of coherence among different
simulations; nonetheless, statistically, all the longer-term simulations are
equivalent to each other up to our longest integration time of $5\times
10^{6}$ yr. Despite the shift in angular phases, the main orbital elements,
$(a,e,I)$, remain confined and evolve regularly up to 5 Myr.
## 5 Origin and Dynamical Evolution of the Pallene Ring
Pallene shares its orbit with a complete dusty ringlet (Hedman et al., 2009,
2010) seen in high-phase-angle _Cassini_ images, while a concentration of
large particles ($\gtrsim 100~\mu$m) was detected in images at other phase
angles (Hedman et al., 2009). These data indicate that the ring is composed of
micrometre-sized particles and denser bodies. Hedman et al. (2009) found that
the ring has a radial full-width of $\sim$2500 km and a vertical profile with
a full-width at half-maximum (FWHM) of $\sim$50 km, that is, the ring is
vertically thin. More recently, Spahn et al. (2019) measured the FWHM of the
Gaussian vertical profile as $\sim$270 km while obtaining the same radial
full-width as Hedman et al. (2009). Spahn et al. (2019) also found that the
radial mean position of the ring is shifted radially outwards by $\sim$1100
km.
### 5.1 Pallene’s Mass Production by Impacts
In theory, satellites of a few kilometres in radius are efficient sources of
debris for rings and arcs due to their reasonably large cross-section and low
escape velocity (Poppe, 2016). However, Madeira et al. (2018, hereafter M18)
and Madeira & Giuliatti Winter (2020, hereafter M20) found that Saturn’s three
smallest moons (Aegaeon, Anthe, and Methone) do not replenish the material
lost by their associated arcs due to non-gravitational forces. This raises the
question of whether Pallene can maintain its diffuse ring in a steady state,
as proposed by Hedman et al. (2009). In this section, we compute the amount of
debris ejected from Pallene and analyse the fate of the ejecta in Section 5.4.
The production of material by Pallene is the result of energetic collisions
between the surface of the satellite and fluxes of interplanetary dust
projectiles (IDPs) (Grun et al., 1985; Divine, 1993). Typically, IDPs are
supplied by families of comets (Jupiter-family, Halley-type, and Oort-Cloud
comets, Dikarev et al., 2005; Nesvorný et al., 2010; Poppe et al., 2011) and
by the Edgeworth-Kuiper Belt (EKB, Landgraf et al., 2002). Data obtained by
the Student Dust Counter (SDC) on board the New Horizons spacecraft indicate
that the Saturn neighbourhood is dominated by EKB dust (Piquette et al., 2019;
Poppe et al., 2019) corresponding to the population that reaches the orbits of
Saturn’s satellites.
In addition to the impacts with IDPs, Pallene may produce material due to
impacts with the E ring particles (ERPs). The icy-dust emission from
Enceladus’s volcanism is the principal source of the E ring (Spahn et al.,
2006; Kempf et al., 2010), producing dense debris that impacts the surfaces
of satellites immersed in the E ring (Spahn et al., 2006). The mass production
rate by Pallene (or any other satellite) is given by (Krivov et al., 2003):
$M^{+}=\pi R_{m}^{2}(F_{\rm IDP}Y_{\rm IDP}+F_{\rm ERP}Y_{\rm ERP})$ (6)
where $R_{m}$ is the satellite radius, $F_{\rm IDP}$ and $F_{\rm ERP}$ are the
mass fluxes of impactors due to IDPs and ERPs, respectively, and $Y_{\rm IDP}$
and $Y_{\rm ERP}$ are the ejecta yields associated with each projectile type.
The ejecta yield is the ratio between the mass produced during the impact and
the impactor’s mass. This quantity is calculated using the empirical
prescription obtained by Koschny & Grün (2001) for pure-ice satellites:
$Y=\frac{6.69\times 10^{-8}}{2^{1.23}~{\rm kg/m^{3}}}\left(\frac{1}{927~{\rm kg/m^{3}}}\right)^{-1}\left(\frac{m_{\rm imp}}{\rm kg}\right)^{0.23}~\left(\frac{v_{\rm imp}}{\rm m/s}\right)^{2.46}$ (7)
where $m_{\rm imp}$ and $v_{\rm imp}$ are the mass and velocity of the
impactor.
Pallene, Aegaeon, Anthe, and Methone are likely porous satellites (Hedman et
al., 2020), due to their bulk densities, $\rho_{m}$, being lower than the
density of ice ($\rho_{\rm ice}$=927 kg/m3). Since an impact on a porous body
is expected to generate more material than an impact on a non-porous surface,
we artificially modified Equation 7 by introducing a porosity ratio of
${\rm\alpha_{p}=\rho_{m}/\rho_{ice}}$:
$Y_{p}=\frac{(6.69\times 10^{-8})^{\alpha_{p}}}{2^{1.23}~{\rm kg/m^{3}}}\left(\frac{\alpha_{p}}{927~{\rm kg/m^{3}}}\right)^{-1}\left(\frac{m_{\rm imp}}{\rm kg}\right)^{0.23}~\left(\frac{v_{\rm imp}}{\rm m/s}\right)^{2.46}.$ (8)
We must point out that Equation 8 is theoretical, and there is no experimental
evidence that it actually governs the yield for a porous body. In this work, we
use Equation 8 only as an artifice to illustrate the uncertainties related to
the collision yield. The parameters assumed for the two projectile populations
are presented below.
#### 5.1.1 Interplanetary Dust Projectiles
In Saturn’s vicinity, the (unfocused) IDP mass flux is estimated to be $F_{\rm
IDP}^{(\infty)}=10^{-16}$ kg m$^{-2}$ s$^{-1}$ (Altobelli et al., 2018; Piquette, 2019).
We assume the IDPs’ velocity near Saturn to be the median speed of EKB grains,
$v_{\rm imp}^{(\infty)}=3.1$ km/s (Poppe, 2016), and the mass of the impactors
as $m_{\rm imp}=10^{-8}$ kg. When IDPs enter Saturn’s Hill sphere, the
planet’s gravitational force is responsible for enhancing the flux and
velocity of the projectiles (Krivov et al., 2003). Respectively, the mass flux
and velocity of IDPs at an orbital radius $r$ are (Colombo et al., 1966;
Krivov et al., 2003):
$\frac{F_{\rm imp}}{F_{\rm imp}^{(\infty)}}=\frac{1}{2}\left(\frac{v_{\rm imp}}{v_{\rm imp}^{(\infty)}}\right)^{2}+\frac{1}{2}\frac{v_{\rm imp}}{v_{\rm imp}^{(\infty)}}\left[\left(\frac{v_{\rm imp}}{v_{\rm imp}^{(\infty)}}\right)^{2}-\left(\frac{R_{\mathrm{S}}}{r}\right)^{2}\left(1+\frac{2GM_{S}}{R_{\mathrm{S}}(v_{\rm imp}^{(\infty)})^{2}}\right)\right]^{1/2},$ (9)
and
$\frac{v_{\rm imp}}{v_{\rm imp}^{(\infty)}}=\sqrt{1+\frac{2GM_{S}}{r\left(v_{\rm imp}^{(\infty)}\right)^{2}}}.$ (10)
#### 5.1.2 E Ring Impactors
We assume the E ring is composed of sub-micrometric ejecta from Enceladus on
highly eccentric orbits (Nicholson et al., 1996; Kempf et al., 2008; Postberg
et al., 2008; Ye et al., 2014a). The average mass of the impactors is assumed
to be $m_{\rm imp}=2.3\times 10^{-15}$ kg ($0.65~\mu$m, Spahn et al., 2006) and
the impact velocity is given by (Hamilton & Burns, 1994; Spahn et al., 2006):
$v_{\rm imp}=\frac{1}{2}\sqrt{\frac{GM_{S}}{r}}.$ (11)
The flux of impactors on the equator plane is assumed to be $F_{\rm
ERP}=m_{\rm imp}v_{\rm imp}N_{\rm ERP}$, where $N_{\rm ERP}$ is the particle
number density in the E ring, extracted from the Cosmic Dust Analyser data
(Kempf et al., 2008):
$N_{\rm ERP}(r)=N_{0}\exp\left(-\frac{z_{0}(r)^{2}}{2\sigma(r)^{2}}\right)\times\begin{cases}\left(\frac{r}{3.98~R_{\mathrm{S}}}\right)^{50}&\text{for }r\leq 3.98~R_{\mathrm{S}}\\ \left(\frac{r}{3.98~R_{\mathrm{S}}}\right)^{-20}&\text{for }r>3.98~R_{\mathrm{S}},\end{cases}$ (12)
with
$\sigma(r)=1826~{\rm km}+(r-3.98~R_{\mathrm{S}})\times\begin{cases}-\frac{467~{\rm km}}{0.82~R_{\mathrm{S}}}&\text{for }r\leq 3.98~R_{\mathrm{S}}\\ \frac{510~{\rm km}}{0.77~R_{\mathrm{S}}}&\text{for }r>3.98~R_{\mathrm{S}},\end{cases}$ (13)
and
$z_{0}(r)=\begin{cases}-1220\left(\frac{r-3.98~R_{\mathrm{S}}}{0.82~R_{\mathrm{S}}}\right)~{\rm km}&\text{for }r\leq 3.98~R_{\mathrm{S}}\\ 0&\text{for }r>3.98~R_{\mathrm{S}},\end{cases}$ (14)
where $N_{0}$ is the maximum particle number density – near Enceladus’ orbit –
set as $N_{0}=1~$m$^{-3}$ (Ye et al., 2014b).
#### 5.1.3 Mass Production Rate of Aegaeon, Anthe, Methone, and Pallene
Following the prescription described in Sections 5.1.1 and 5.1.2 and using Eq.
7, we estimate the mass production rate of Pallene as
$M^{+}\sim 7.4\times 10^{-4}~{}{\rm kg/s}.$ (15)
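The full chain of Sections 5.1.1 and 5.1.2 can be reproduced in a few lines; the sketch below combines Eqs. 6, 7, and 9–14 and recovers $M^{+}\sim 7\times 10^{-4}$ kg/s for Pallene, assuming standard values for Saturn’s mass and radius and for Pallene’s orbital radius and mean radius (these constants are illustrative and are not taken from Tables 1 and 2).

```python
import math

G, M_S, R_S = 6.674e-11, 5.683e26, 6.0268e7   # SI units (assumed values)
r_pal = 2.1228e8         # Pallene's orbital radius [m] (assumed)
R_m   = 2.23e3           # Pallene's mean radius [m]

def yield_koschny(m_imp, v_imp):
    """Eq. (7): ejecta yield for a pure-ice, non-porous surface."""
    return (6.69e-8 / 2**1.23) * 927.0 * m_imp**0.23 * v_imp**2.46

# --- IDPs (Section 5.1.1): gravitational focusing, Eqs. (9)-(10) ---
F_inf, v_inf, m_idp = 1e-16, 3.1e3, 1e-8
v_idp = v_inf * math.sqrt(1.0 + 2.0 * G * M_S / (r_pal * v_inf**2))      # Eq. (10)
u = v_idp / v_inf
F_idp = F_inf * (0.5 * u**2 + 0.5 * u * math.sqrt(
    u**2 - (R_S / r_pal)**2 * (1.0 + 2.0 * G * M_S / (R_S * v_inf**2)))) # Eq. (9)

# --- ERPs (Section 5.1.2): Eqs. (11)-(14); Pallene lies inside 3.98 R_S ---
m_erp = 2.3e-15
v_erp = 0.5 * math.sqrt(G * M_S / r_pal)                                 # Eq. (11)
x = r_pal / (3.98 * R_S)
z0 = -1220e3 * (r_pal - 3.98 * R_S) / (0.82 * R_S)                       # Eq. (14)
sigma = 1826e3 - 467e3 * (r_pal - 3.98 * R_S) / (0.82 * R_S)             # Eq. (13)
N_erp = 1.0 * math.exp(-z0**2 / (2.0 * sigma**2)) * x**50                # Eq. (12)
F_erp = m_erp * v_erp * N_erp

# --- Mass production rate, Eq. (6) ---
M_plus = math.pi * R_m**2 * (F_idp * yield_koschny(m_idp, v_idp)
                             + F_erp * yield_koschny(m_erp, v_erp))
print(M_plus)   # ~7.4e-4 kg/s, consistent with Eq. (15)
```

The same script also reproduces the ratios quoted in Table 5 for Pallene: $Y_{\rm IDP}/Y_{\rm ERP}\approx 449$, $F_{\rm IDP}/F_{\rm ERP}\approx 10^{-1}$, and $M^{+}_{\rm IDP}/M^{+}_{\rm ERP}\approx 50$.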
In order to determine whether Pallene can maintain the ring, we need to
estimate the mass of the structure and compare it with the lifetime of the
ejected material, which is obtained by N-body numerical simulations in Section
5.4. If the time $\mathcal{T}$ for Pallene to produce the amount of mass
observed in the ring is shorter than the particles’ lifetime, then the
satellite is an efficient source for the ring and the structure will be in a
steady state. On the other hand, if $\mathcal{T}$ is longer than the lifetime
of the particles, the ring will disappear unless another source keeps it in a
steady-state.
The time for the satellite to produce the observed mass of the ring is (M20)
$\mathcal{T}=M_{\mathrm{Ring}}/M^{+},$ (16)
where $M_{\mathrm{Ring}}$ is the mass of the ring (or arc), given by (Sfair &
Giuliatti Winter, 2012):
$M_{\mathrm{Ring}}=A\left(\frac{4}{3}\pi\rho_{\rm ice}\right)\int_{0.1~\mu m}^{100~\mu m}C\pi s^{3-q}ds,$ (17)
where $s$ is the physical radius of the particles, $C$ is a constant, and $q$
is the slope of the size distribution of the particles. The surface area is
$A=r\Delta\theta\Delta r/2$ (M20), where $\Delta\theta$ is the angular width
of the ring/arc in radians and $\Delta r$ is the radial width. The constant
$C$ can be obtained from the observed optical depth $\tau$ (Sfair & Giuliatti
Winter, 2012)
$\tau=\int_{0.1~{}\mu m}^{100~{}\mu m}C\pi s^{2-q}ds.$ (18)
The distribution of particles in Pallene’s ringlet is not constrained by
observational data. However, the data regarding the size distribution of the E
ring provide us with a range of possible slopes $q$ for the ringlet, with
values ranging from 1.9 to 5 (Horányi et al., 2008; Kempf et al., 2008; Ye et
al., 2014a; Srama et al., 2020). For instance, Horányi et al. (2008) estimated
from numerical simulations that the grain density in the E ring follows a
power law distribution with $q=2.5$, while Kempf et al. (2008) obtained slopes
between 4 and 5 for $s>0.9~{}\mu$m from Cassini data. The slopes reported by
Ye et al. (2014a) vary between 3 and 4 for $s>10~{}\mu$m. To cover all
possible values of $q$, we assume slopes between 1 and 6.
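The following is a minimal sketch of Eqs. 16–18 for Pallene’s ring, using the parameters of Table 5 ($\Delta r = 2500$ km, $\Delta\theta = 360^{\circ}$, $\tau = 10^{-6}$) and the $M^{+}$ of Eq. 15; Pallene’s orbital radius is an assumed stand-in, and the size limits follow the integration bounds of Eqs. 17–18.

```python
import math

rho_ice = 927.0                 # ice density [kg/m^3]
r_pal   = 2.1228e8              # Pallene's orbital radius [m] (assumed)
s_min, s_max = 0.1e-6, 100e-6   # particle radius bounds [m]

def _int_power(p):
    """Analytic integral of s^p over [s_min, s_max]."""
    if p == -1.0:
        return math.log(s_max / s_min)
    return (s_max**(p + 1.0) - s_min**(p + 1.0)) / (p + 1.0)

def production_time(q, tau=1e-6, dr=2.5e6, dtheta=2.0 * math.pi, M_plus=7.4e-4):
    """Eqs. (16)-(18): time [yr] for Pallene to produce the ring mass."""
    C = tau / (math.pi * _int_power(2.0 - q))          # Eq. (18)
    A = r_pal * dtheta * dr / 2.0                      # ring surface area
    M_ring = A * (4.0 / 3.0) * math.pi * rho_ice \
               * C * math.pi * _int_power(3.0 - q)     # Eq. (17)
    return (M_ring / M_plus) / 3.156e7                 # Eq. (16), in years

for q in (1.5, 2.5, 3.5, 4.5, 5.5):
    print(f"q = {q}: T ~ {production_time(q):.0f} yr")
```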
Figure 10: Estimated time $\mathcal{T}$ for Aegaeon, Methone, Anthe, and
Pallene to produce the mass of their associated arc/ring as a function of the
slope $q$ of the particle radius distribution. The solid and dash-dotted
lines correspond to the times calculated following the prescription given in
Section 5.1 for non-porous and porous satellites, respectively. The solid
(dash-dotted) black line corresponds to Pallene’s system assuming a non-porous
(porous) satellite, and the grey area gives the error in the calculation of
$\mathcal{T}$ due to the uncertainties in Pallene’s bulk density. The red,
blue, and green lines correspond to the arcs of Aegaeon, Methone, and Anthe,
respectively. The arc lifetimes are given by dashed lines of matching colours.
The red star gives $\mathcal{T}$ obtained for
Aegaeon by M18 and the triangles the times obtained for Methone (blue) and
Anthe (green) by M20.
Figure 10 shows the time $\mathcal{T}$ for Pallene to produce the ringlet mass
(solid black line) for slopes between 1 and 6, assuming a non-porous satellite
(Eq. 7). The figure also shows the time for the moons Aegaeon, Methone, and
Anthe to produce the material of their associated arcs (solid coloured lines).
Meanwhile, the dash-dotted lines provide the estimated production time
$\mathcal{T}$ assuming that the satellites are porous. For Aegaeon, Anthe, and
Methone, we assume a bulk density of 500 kg/m3, while for Pallene this value
is 250 kg/m3. The filled region surrounding the dash-dotted black line gives
the $\mathcal{T}$ calculated using the minimum and maximum bulk densities
estimated for Pallene ($\rho_{\mathrm{Pal}}$=190–340 kg/m3). The mass
production rate depends only on the cross-section of the satellite, so if we
assume a non-porous Pallene, the uncertainties regarding its bulk density do
not affect the mass production, since the physical radius of the satellite is
constrained by observational data (Hedman et al., 2009).
M18 and M20 estimated $\mathcal{T}$ following a simple prescription assuming
production due to IDP impacts of cometary origin (with lower focused fluxes
and velocities than the EKB grains), and assumed a single slope, $q=3.5$. The
prescription presented here goes a step beyond their model, as it incorporates
recent data and the production due to ERP impacts. The
time $\mathcal{T}$ obtained in M18 for the arc of Aegaeon is shown by the red
star in Fig. 10 and the times obtained in M20 for the arcs of Methone and
Anthe are the triangles with matching colours. The dashed lines correspond to
the lifetime of ${\rm 10~{}\mu}$m-sized particles, obtained by M18 and M20.
Our times are shorter than those estimated in previous works. M18 found that
Aegaeon’s arc will most likely disappear if it is composed exclusively of
micrometre-sized grains. Here, we also find that a non-porous Aegaeon cannot
replenish the arc material when we disregard other sources in the arcs (we do
not compute production due to ERPs because Aegaeon is immersed in the G ring),
since $\mathcal{T}$ is at least an order of magnitude higher than the lifespan
of the particles. However, if we mimic the effect of porosity on the yield,
the satellite can maintain the arc for $q\gtrsim 4$. Unlike M20, we find that
Methone can replenish the arc material for $q>3.3$ regardless of its porosity.
Although the lifetime of the particles in Anthe’s arc is shorter than our
$\mathcal{T}$ for the non-porous case, the radial width of the arc is unknown
(we assume the same radial width as Methone’s arc, due to the proximity of the
systems and the similar evolution of the particles under the effects of the
14:15 and 10:11 corotation resonances), and we cannot be sure whether the
satellite can by itself produce the amount of material necessary to keep the
arc in a steady state. Assuming the porous limit, the Anthe arc appears to be
in a steady state for $q\gtrsim 4$.
Table 5: Radial width ($\Delta r$), angular width ($\Delta\theta$), and optical depth ($\tau$) assumed for the systems of Aegaeon, Methone, Anthe, and Pallene (Hedman et al., 2009, 2010, 2020; Sun et al., 2017; Spahn et al., 2019). The table also shows the ratios of yield $Y$, flux $F$, and mass production rate $M^{+}$ between the IDP and ERP populations, and the total mass production rate in kg/s.

 | Aegaeon | Methone | Anthe | Pallene
---|---|---|---|---
$\Delta r$ [km] | 250 | 1000 | 1000 | 2500
$\Delta\theta$ [$^{\circ}$] | 60 | 10 | 20 | 360
$\tau$ | $10^{-5}$ | $10^{-6}$ | $10^{-6}$ | $10^{-6}$
$Y_{\rm IDP}/Y_{\rm ERP}$ | – | 447 | 448 | 449
$F_{\rm IDP}/F_{\rm ERP}$ | – | 10 | 4 | $10^{-1}$
$M^{+}_{\rm IDP}/M^{+}_{\rm ERP}$ | – | $4\times 10^{3}$ | $2\times 10^{3}$ | 50
$M^{+}$ [kg/s] | $2.6\times 10^{-5}$ | $3.7\times 10^{-4}$ | $4.2\times 10^{-5}$ | $7.4\times 10^{-4}$
Table 5 summarises the initial ring (arc) parameters and the estimated ratios
of yield, flux, and mass production between the IDP and ERP populations. We
also include the total mass production for Aegaeon, Methone,
Anthe, and Pallene for the non-porous case. Ejecta production due to IDP
impacts is the most efficient for all systems. For the arcs of Aegaeon,
Methone, and Anthe, production due to ERPs can be disregarded because the
$M^{+}$ due to IDP impacts is more than 1000 times higher than for ERPs. The
production due to ERPs corresponds to 2% of the total amount produced by
Pallene.
### 5.2 Dynamical Model
We study the evolution and fate of Pallene’s ringlet by analysing the temporal
evolution of two distinct sets of particles: i) particles initially co-orbital
to the satellite (Section 5.3) and ii) particles ejected from Pallene’s
surface (Section 5.4). The first set corresponds to a scenario in which the
ringlet, and perhaps Pallene itself, formed from the disruption of an ancient
satellite, while the second mimics the evolution of the material produced by
impacts onto the satellite (Section 5.1).
The numerical simulations were performed using Mercury6 (Chambers, 1999) with
the Bulirsch-Stoer algorithm. We used 5,000 particles with micrometric sizes
ranging from 0.1 $\mu$m to 100 $\mu$m, and integrated the system until all
particles had either collided with Mimas, Pallene, or Enceladus, or migrated
outwards beyond the orbit of Enceladus. We adopted the collision detection treatment
between particles and satellites as implemented in Mercury6 (for details, see
Chambers, 1999; Liu et al., 2016).
Micrometre-sized particles are affected by non-gravitational forces that
decrease their lifetimes. Thus it is necessary to include these effects in the
system. In our simulations, the particles are under the effect of a total
force,
$\vec{\rm F}=\vec{\rm F}_{\rm SR}+\vec{\rm F}_{\rm PD}+\vec{\rm F}_{\rm
EM}+\vec{\rm F}_{\rm G},$ (19)
where $\vec{\rm F}_{\rm SR}$ is the solar radiation force, $\vec{\rm F}_{\rm
PD}$ is the plasma drag force, $\vec{\rm F}_{\rm EM}$ is the electromagnetic
force, and $\vec{\rm F}_{\rm G}$ corresponds to the sum of the gravitational
forces of the system: Saturn (including its gravitational coefficients),
Mimas, Enceladus, Tethys, Dione, Rhea, Titan, and Pallene.
#### 5.2.1 Non-Gravitational Forces
The solar radiation force ($\vec{\rm F}_{\rm SR}$) includes two components
(Burns et al., 1979; Mignard, 1984): the radiation pressure (RP) caused by
collisions of solar radiation on the dust grain,
$\vec{\rm F}_{\rm RP}=\frac{\Phi\pi
s^{2}}{c}Q_{pr}\frac{\vec{r}_{sp}}{r_{sp}},$ (20)
and the Poynting-Robertson drag (PR), caused by the re-emission of the solar
radiation absorbed by the particles,
$\vec{\rm F}_{\rm PR}=-\frac{\Phi\pi
s^{2}}{c}Q_{pr}\left\\{\frac{\vec{V}_{P}+\vec{V}}{c}+\left[\left(\frac{\vec{V}_{P}}{c}+\frac{\vec{V}}{c}\right)\cdot\frac{\vec{r}_{sp}}{r_{sp}}\right]\frac{\vec{r}_{sp}}{r_{sp}}\right\\},$
(21)
where $c$ is the speed of light, $\Phi$ is the solar flux (Burns et al.,
1979), and $\vec{V}$ is the velocity vector of the particle relative to the
planet. The solar radiation pressure efficiency $Q_{pr}$ (in Eqs. 20 and 21)
depends on the radius of the particle and is computed from Mie theory (Irvine,
1965; Mishchenko et al., 1999, 2002), assuming spherical ice grains. The
particle is in a circumplanetary orbit at position $\vec{r}$ ($r=|\vec{r}|$),
and the planet is in a circular heliocentric orbit. The heliocentric position
of Saturn, $\vec{r}_{sp}$ ($r_{sp}=|\vec{r}_{sp}|$), and the planet’s orbital
velocity $\vec{V}_{P}$ are considered constant. We also assume that Saturn
shields particles from solar radiation when the planet eclipses the Sun from
the particle’s perspective, i.e., the solar radiation force is neglected when
the particle is in the planet’s shadow, which happens when
$\vec{r}\cdot\vec{r}_{sp}<0$ and
$(r^{2}-R_{\mathrm{S}}^{2})r_{sp}^{2}-|\vec{r}\cdot\vec{r}_{sp}|^{2}<0$ (Liu et
al., 2016).
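For concreteness, a minimal sketch of this shadow test is given below; the function name, the numpy dependency, and the example numbers are our own illustrative choices, not part of the original model.

```python
import numpy as np

def in_shadow(r_vec, rsp_vec, R_S):
    """Shadow test sketched from the criterion above: the particle at
    planetocentric position r_vec is shadowed when it is on the anti-solar
    side (r . r_sp < 0 in the text's convention) and lies within a
    perpendicular distance R_S of the Sun-planet axis."""
    r2 = float(np.dot(r_vec, r_vec))
    rsp2 = float(np.dot(rsp_vec, rsp_vec))
    proj = float(np.dot(r_vec, rsp_vec))
    anti_solar = proj < 0.0
    inside_cylinder = (r2 - R_S**2) * rsp2 - proj**2 < 0.0
    return anti_solar and inside_cylinder

# Example: a particle 4 Saturn radii behind the planet, close to the axis.
R_S = 60268e3                                    # Saturn radius in m (assumed)
print(in_shadow(np.array([0.0, 0.0, 4 * R_S]),
                np.array([0.0, 0.0, -1.43e12]), R_S))   # True
```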
The principal source of plasma for Saturn’s magnetosphere in the E ring region
is the ionisation of neutrals provided by the Enceladus plume. The E ring
region is dominated by water group ions, i.e., O+, OH+, H2O+, and H3O+, the O+
ion being the most abundant (Cassidy & Johnson, 2010; Tseng et al., 2010;
Tseng & Ip, 2011; Sittler & Johnson, 2015). Direct collision of the plasma
with the ring particles is responsible for a drag force ($\vec{\rm F}_{\rm
PD}$) (Morfill & Gruen, 1979; Morfill et al., 1993; Horányi et al., 2008),
given by
$\vec{\rm F}_{\mathrm{PD}}=\pi
s^{2}m_{i}N_{i}a^{2}(n-\Omega_{\mathrm{S}})^{2}\hat{u}_{t},$ (22)
where $n$ is the mean motion of the particle, $m_{i}$ and $N_{i}$ are the mass
and number density of the plasma ions, respectively, and $\hat{u}_{t}$ is the
unit vector in the tangential direction to the osculating orbit of the
particle.
Cassini measurements have shown seasonal variations in ion densities ranging
from $N_{i}\sim 40~{}{\rm cm}^{-3}$ to $N_{i}\sim 120~{}{\rm cm}^{-3}$ in
Pallene’s vicinity (Elrod et al., 2014; Persoon et al., 2015; Persoon et al.,
2020). For simplicity, we assume the plasma in the Pallene region is only
composed of O+ ions (molecular mass of 16 a.m.u.) with constant number density
$N_{i}=65.9~{}{\rm cm}^{-3}$ (Persoon et al., 2015). Moreover, we neglect the
indirect Coulomb interaction between charged ring particles and the plasma
material, since this effect is at least two orders of magnitude weaker than
the direct collisions (Northrop & Birmingham, 1982; Grun et al., 1984; Sun et
al., 2015).
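As an order-of-magnitude check of Eq. 22, the sketch below evaluates $|\vec{\rm F}_{\rm PD}|$ for a 10 $\mu$m grain near Pallene; the semi-major axis, Saturn's GM, and the spin period are reference values we assume here, not quantities quoted above.

```python
import numpy as np

AMU   = 1.660539e-27            # kg
m_i   = 16.0 * AMU              # O+ ion mass (16 a.m.u., as adopted above)
N_i   = 65.9e6                  # ion number density: 65.9 cm^-3 in m^-3
s     = 10e-6                   # grain radius: 10 micrometres
a     = 2.124e8                 # Pallene's semi-major axis in m (assumed)
GM_S  = 3.7931e16               # Saturn's GM in m^3 s^-2 (assumed)
Omega = 2.0 * np.pi / (10.55 * 3600.0)   # Saturn spin rate (assumed period)

n = np.sqrt(GM_S / a**3)        # mean motion of the particle
F_PD = np.pi * s**2 * m_i * N_i * a**2 * (n - Omega)**2   # Eq. (22)
print(f"|F_PD| ~ {F_PD:.1e} N for a {s*1e6:.0f} micrometre grain")
```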
The ring particles are also influenced by Saturn’s magnetosphere, due to the
charging of the particles by the ambient plasma and by photoemission of
electrons (solar UV). Therefore, the electromagnetic force ($\vec{F}_{\rm EM}$)
(Northrop & Birmingham, 1982; Burns et al., 1985) is included in our
simulations as
$\vec{\rm
F}_{\mathrm{EM}}=\frac{4\pi\epsilon_{0}sV}{c}\left\\{\left[\vec{V}-\Omega_{\mathrm{S}}(\hat{u}_{n}\times\vec{r})\right]\times\vec{B}\right\\},$
(23)
where $\epsilon_{0}=8.8542\times 10^{-12}$ F/m is the vacuum permittivity
(Chapman & Bartels, 1940), $V$ is the electric potential, $\vec{B}$ is the
magnetic field vector, and $\hat{u}_{n}$ is the unit vector perpendicular to
the planet’s equatorial plane. We adopt an equilibrium potential of $V=-3$ V
for the Pallene region, as determined by Hsu et al. (2011) in their
investigation of the dynamics of the Saturnian stream particles.
We assume the Saturnian magnetic field to be a composition of an aligned dipole
and a quadrupole (Chapman & Bartels, 1940; Hamilton, 1993):
$\vec{B}=g_{1.0}R_{\mathrm{S}}^{3}\vec{\nabla}\left(\frac{\cos{\zeta}}{r^{2}}\right)+\frac{g_{2.0}}{2}R_{\mathrm{S}}^{4}\vec{\nabla}\left(\frac{3\cos^{2}{\zeta}-1}{r^{3}}\right)$
(24)
where $g_{1.0}=0.21$ G is the Saturnian dipole moment and $g_{2.0}=0.02$ G
the quadrupole moment (Hamilton, 1993; Belenkaya et al., 2006); $\zeta$ is
the angle between $\hat{u}_{n}$ and $\vec{r}$.
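A direct evaluation of Eq. 24 is sketched below, with the gradients expanded analytically using $\cos{\zeta}=z/r$; the gauss-to-tesla conversion and the adopted Saturn radius are our own assumptions.

```python
import numpy as np

def B_field(r_vec, g10=0.21e-4, g20=0.02e-4, R_S=60268e3):
    """Aligned dipole + quadrupole field of Eq. (24). g10 and g20 are the
    text's 0.21 G and 0.02 G converted to tesla; R_S is an assumed Saturn
    radius in metres. r_vec is the planetocentric position in metres."""
    rv = np.asarray(r_vec, dtype=float)
    z = rv[2]
    r = np.linalg.norm(rv)
    zhat = np.array([0.0, 0.0, 1.0])
    # grad(cos(zeta)/r^2) = grad(z/r^3)
    grad1 = zhat / r**3 - 3.0 * z * rv / r**5
    # grad((3 cos^2(zeta) - 1)/r^3) = grad(3 z^2/r^5 - 1/r^3)
    grad2 = 6.0 * z * zhat / r**5 - 15.0 * z**2 * rv / r**7 + 3.0 * rv / r**5
    return g10 * R_S**3 * grad1 + 0.5 * g20 * R_S**4 * grad2

# At the equator (z = 0) only grad1's z-component and the radial quadrupole
# term survive, so the dipole part reduces to g10 (R_S/r)^3 along z-hat.
print(B_field([2.124e8, 0.0, 0.0]))
```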
#### 5.2.2 Orbital Elements Of One Representative Particle
The non-gravitational forces are responsible for variations in the shape and
orientation of the orbits, affecting the temporal evolution of the particles.
The mean temporal variations of the osculating orbital elements of a particle
with mass $m$ are (Mignard, 1984; Hamilton, 1993; Madeira & Giuliatti Winter,
2020)
$\dot{a}=-\frac{2na^{2}\alpha_{\rm r}}{c}\frac{5+\cos^{2}{I}}{6}+\frac{2|\vec{F}_{\mathrm{PD}}|}{mn}\sqrt{1-e^{2}},$ (25)

$\dot{e}=\alpha_{\rm r}\sqrt{1-e^{2}}(\cos{\Omega}\sin{\omega}+\sin{\Omega}\cos{\omega}\cos{I})-\frac{3}{2}\frac{e|\vec{F}_{\mathrm{PD}}|}{mna}\sqrt{1-e^{2}}-\frac{qg_{1.0}R_{\mathrm{S}}^{3}\Omega_{\mathrm{S}}}{4mcna^{3}}e\sqrt{1-e^{2}}\sin^{2}{I}\sin{2\omega},$ (26)

$\dot{I}=\frac{\alpha_{\rm r}e}{\sqrt{1-e^{2}}}\sin{\Omega}\cos{\omega}\sin{I}+\frac{3}{2}\frac{|\vec{F}_{\mathrm{PD}}|}{mna}\sqrt{1-e^{2}}\sin{I}+\frac{qg_{1.0}R_{\mathrm{S}}^{3}\Omega_{\mathrm{S}}}{8mcna^{3}}\frac{e^{2}}{\sqrt{1-e^{2}}}\sin{2I}\sin{2\omega},$ (27)

$\dot{\Omega}=-\dot{\Omega}_{\rm obl}+\frac{\alpha_{\rm r}e}{\sqrt{1-e^{2}}}\sin{\Omega}\sin{\omega}-(2-e)\frac{|\vec{F}_{\mathrm{PD}}|}{mna}\cos{I}\sqrt{1-e^{2}}+\frac{qg_{1.0}R_{\mathrm{S}}^{3}\Omega_{\mathrm{S}}}{mcna^{3}}\frac{1}{\sqrt{1-e^{2}}}\left[\cos{I}-\frac{1}{(1-e^{2})}\left(\frac{n}{\Omega_{\mathrm{S}}}\right)\right],$ (28)

and

$\dot{\varpi}=\dot{\varpi}_{\rm obl}+\frac{\alpha_{\rm r}\sqrt{1-e^{2}}}{e}(\cos{\Omega}\cos{\omega}-\sin{\Omega}\sin{\omega}\cos{I})+(2-e)\frac{|\vec{F}_{\mathrm{PD}}|}{mna}\sqrt{1-e^{2}}+\frac{qg_{1.0}R_{\mathrm{S}}^{3}\Omega_{\mathrm{S}}}{mcna^{3}}\frac{2\cos{I}}{(1-e^{2})^{3/2}}\left(\frac{n}{\Omega_{\mathrm{S}}}\right),$ (29)

where

$\alpha_{\rm r}=\frac{3\Phi\pi s^{2}}{2mcna}Q_{pr}.$ (30)
$\dot{\Omega}_{\mathrm{obl}}$ and $\dot{\varpi}_{\mathrm{obl}}$ are the
temporal variations of the longitude of the ascending node and of the
longitude of pericentre, respectively, due to the non-sphericity of Saturn
(see Renner & Sicardy, 2006).
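To give a feeling for the magnitudes in Eqs. 25 and 30, the following sketch evaluates $\alpha_{\rm r}$ and the Poynting-Robertson term of $\dot{a}$ for a 10 $\mu$m icy grain; every numerical value is an illustrative assumption rather than a fitted quantity from this work.

```python
import numpy as np

c     = 2.99792458e8            # speed of light, m/s
Phi   = 15.0                    # solar flux at Saturn, W/m^2 (assumed)
s     = 10e-6                   # grain radius, m
rho   = 920.0                   # ice grain density, kg/m^3 (assumed)
m     = 4.0 / 3.0 * np.pi * s**3 * rho
Q_pr  = 1.0                     # radiation-pressure efficiency (order unity)
a     = 2.124e8                 # semi-major axis, m (assumed)
GM_S  = 3.7931e16               # Saturn's GM, m^3 s^-2 (assumed)
n     = np.sqrt(GM_S / a**3)    # mean motion, rad/s
I     = 0.0                     # coplanar orbit

alpha_r = 3.0 * Phi * np.pi * s**2 / (2.0 * m * c * n * a) * Q_pr     # Eq. (30)
adot_PR = -2.0 * n * a**2 * alpha_r / c * (5.0 + np.cos(I)**2) / 6.0  # Eq. (25)
print(f"alpha_r = {alpha_r:.2e} s^-1, PR drift da/dt = {adot_PR:.2e} m/s")
```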
Figure 11: From top to bottom: Geometric semi-major axis, eccentricity,
inclination, longitude of ascending node, and longitude of pericentre of a
${\rm 10~{}\mu}$m-sized particle co-orbital to Pallene, with a displacement in
mean anomaly of 180° in relation to the satellite. The top row of each
panel shows the orbital elements when only gravitational effects are included.
The following rows display the evolution of the particle when different non-
gravitational forces are included (i.e., solar radiation force,
electromagnetic force, and plasma drag). Finally, the bottom row of each panel
shows the effect of all forces.
Figure 11 illustrates the variation of geometric orbital elements ($a$, $e$,
$I$, $\Omega$ and $\varpi$) of one representative ${\rm 10~{}\mu}$m particle
due to each non-gravitational force and the total force (Eq. 19). The particle
is initially co-orbital to Pallene with
$\lambda=\lambda_{\mathrm{Pal}}+180^{\circ}$, where $\lambda$ and
$\lambda_{\mathrm{Pal}}$ are the mean longitude of the particle and Pallene,
respectively. As one can see in the top panel of Fig. 11 (Eq. 25), the semi-
major axis is affected secularly by two distinct drag effects: the Poynting-
Robertson component that produces an inward migration, and the plasma drag,
which increases the semi-major axis of the particle. We find that the plasma
drag is at least one order of magnitude stronger than the Poynting-Robertson
component for all particle sizes. While the electromagnetic force only induces
short-term variations in the semi-major axis, the net outcome is that grains
migrate outward when all the effects are included.
For the eccentricity, the electromagnetic and solar radiation forces produce
oscillations with constant period and amplitude for a given particle size
(Hamilton, 1993; Madeira et al., 2018; Gaslac Gallardo et al., 2020). As we
can see in Eq. 26, the intensity of these effects depends on the
radius of the particles, with $\dot{e}\propto s^{-3}$ for the electromagnetic
force and $\dot{e}\propto s^{-1}$ for solar radiation. Thus, the effect of the
electromagnetic force dominates over the solar radiation for smaller
particles, while for larger sizes the electromagnetic force can be disregarded
in relation to the solar radiation.
Plasma drag, on the other hand, produces only short-term variations in the
eccentricities (M20). The jumps in this element, seen in Fig. 11, result from
the particle crossing resonances with Enceladus, as will be shown in Section
5.3. For Pallene ringlet particles, the electromagnetic force dominates for
${\rm s\leq 5~{}\mu}$m, while the solar radiation force is the most important
effect on the eccentricity of ${\rm s>5~{}\mu}$m particles. We find that the
non-gravitational forces produce only small variations in the inclination
($I\sim 10^{-3}$ deg) over the time intervals considered in this section.
The longitude of the ascending node and the longitude of pericentre are mainly
affected by the plasma drag, which is responsible for the precession of these
elements in relation to Pallene. Fig. 12 displays snapshots of the osculating
orbit (solid lines) of a representative particle (coloured dots) and Pallene
(black dot). We rotate the system in each snapshot to keep Pallene at the
fixed position $x=1$ ${\rm D_{Pal}}$. We show particles with radii of ${\rm
20~{}\mu m}$, ${\rm 50~{}\mu m}$, and ${\rm 100~{}\mu m}$, as well as a
centimetre-sized particle, which corresponds to the case with only
gravitational forces.
Figure 12: Snapshots of the osculating orbit (solid lines) and spatial
position (dots) of Pallene (in black) and of a co-orbital particle with
$\lambda=\lambda_{\mathrm{Pal}}+90^{\circ}$. The colour indicates the body, as labelled.
We assume the single particle has a radius of either ${\rm 20~{}\mu}$m, ${\rm
50~{}\mu}$m, or ${\rm 100~{}\mu}$m. Displayed in red, we include the case
solely with gravitational forces (“cms”). The orbits are provided in the
rotating frame in which Pallene is stationary at $x=1$ ${\rm D_{Pal}}$. An
animation of this figure is included in the electronic version; it requires
Adobe Reader version $\geq$9 or similar.
As we can see in Fig. 12, without non-gravitational forces, the particle
remains in the same orbit as Pallene and lacks vertical variation in relation
to the satellite’s orbital plane. When the non-gravitational forces are
included, the orbit precesses, exhibiting vertical excursions in relation to
Pallene’s orbital plane. This phenomenon could be responsible for the observed
vertical width of $\sim 10^{2}$ km of the ring (Hedman et al., 2009; Spahn
et al., 2019), indicating that the ringlet may evolve into a torus, as observed
in the gossamer rings of Jupiter (Burns et al., 1999). The formation of the
torus occurs when the precession of the pericentre acts long enough to
completely randomise the orientation of the particles’ orbits. These results
will be discussed in detail in Section 5.3.
The osculating semi-major axis and eccentricity of a representative particle
under the effects of the non-gravitational forces are presented in Fig. 13.
The lines correspond to numerical simulations where the physical radius of the
single particle is modified (${\rm 0.1~{}\mu}$m, ${\rm 0.2~{}\mu}$m, ${\rm
0.5~{}\mu}$m, ${\rm 1~{}\mu}$m, ${\rm 2~{}\mu}$m, ${\rm 5~{}\mu}$m, ${\rm
10~{}\mu}$m, ${\rm 20~{}\mu}$m, ${\rm 50~{}\mu}$m, and ${\rm 100~{}\mu}$m).
The solid and dotted horizontal lines indicate the orbits of Pallene and
Enceladus, respectively. In this work, we consider a particle to be removed
from the ringlet if it collides with a satellite or migrates outside the
generous limit of $a_{\mathrm{Pal}}+1100$ km (${\rm\sim
1.05~{}D_{\mathrm{Pal}}}$). The latter limit is indicated in the figure by the
horizontal dot-dashed line.
Figure 13: Osculating semi-major axis and eccentricity of representative
particles co-orbiting Pallene. The particles have a size of ${\rm
0.1~{}\mu}$m, ${\rm 0.2~{}\mu}$m, ${\rm 0.5~{}\mu}$m, ${\rm 1~{}\mu}$m, ${\rm
2~{}\mu}$m, ${\rm 5~{}\mu}$m, ${\rm 10~{}\mu}$m, ${\rm 20~{}\mu}$m, ${\rm
50~{}\mu}$m, and ${\rm 100~{}\mu}$m (coloured lines). The horizontal dotted
line indicates Enceladus’s semi-major axis, while the horizontal dot-dashed
line is the maximum semi-major axis of the particle to be considered as a
ringlet particle. The particles are under the effects of the solar radiation
force, plasma drag, and electromagnetic force.
Particles with ${\rm s\leq 2~{}\mu}$m migrate beyond the orbit of Enceladus
(horizontal dotted line) in less than 100 yr and reach $e>10^{-2}$. In the
case shown in Fig. 13, the particles of $0.1~{}\mu$m and $1~{}\mu$m are
ejected from the Saturnian system ($e>1$) while the particles of $0.2~{}\mu$m
and $0.5~{}\mu$m collide with a satellite outside the orbit of Enceladus. The
${\rm 2~{}\mu}$m-sized particle collides with Enceladus in about 80 yr.
The effects of the non-gravitational forces are weaker for larger grains, and
particles with $s>{\rm 5~{}\mu}$m retain eccentricities of the order of
$10^{-3}$. These particles migrate outwards but are still considered ringlet
particles according to our definition. These results roughly demonstrate that
the persistence of the particles in the ring is strongly affected by non-
gravitational forces, and that only particles with radii of tens of
micrometres or greater should have significantly long lifetimes in the ringlet
(several hundreds of years). In the next sections, we perform full N-body
simulations of the evolution of the ring particles.
### 5.3 Particles co-orbital to Pallene
In this section, we analyse Pallene’s ringlet as formed by a set of 5,000
particles co-orbital to the satellite. We assume particles with the same
orbital elements as Pallene, except for the mean anomaly that was randomly
selected from a uniform distribution between 0∘ and 360∘. The ring composed of
co-orbital particles corresponds, e.g., to a scenario where the structure
could be formed by the disruption of a proto-Pallene. In this scenario, the
ring would also be composed of centimetre-sized or even larger particles.
Nevertheless, we do not perform simulations for this size range since the
effects of non-gravitational forces can be neglected. The orbital evolution of
the centimetre-sized particles would correspond to the analysis in Section
3.2, which demonstrated that most of the particles initially located inside
the Pallene collision region would eventually collide with the satellite,
reducing the survival rate of co-orbital particles.
As a general outcome, particles with $s\leq 10~{}\mu$m present a dynamical
evolution similar to those shown in Fig. 11. The particles migrate towards
Enceladus and show an increase in eccentricity. However, we obtain a more
complex dynamical evolution for particles with $s\geq 20~{}\mu$m caused by
capture in resonances with Enceladus. Roughly speaking, a migrating particle
is captured at a given resonance with a satellite if the migration timescale
is longer than the libration period of the resonance (Batygin, 2015). In our
case, this condition is achieved for the largest particles ($20~{}\mu$m,
$50~{}\mu$m, and $100~{}\mu$m), which are captured, even if only for a short
period of time, in the 7:6, 8:7, 9:8, and 10:9 $e$-type MMRs with Enceladus.
Figure 14: Snapshots showing the percentage of particles as a function of the
geometric semi-major axis (at left) and the geometric eccentricity vs.
geometric semi-major axis (at right). From top to bottom, we show the data for
0, 200, 750, 5000, and 8000 yr. The ${\rm 20~{}\mu}$m, ${\rm 50~{}\mu}$m, and
${\rm 100~{}\mu}$m sized particles are shown in different colours, as
indicated. Pallene is represented by a black filled-circle. The locations of
MMRs with Enceladus are indicated by dashed vertical lines. Similarly to Fig.
12, an animation of this figure is provided in the electronic version.
Figure 14 shows the evolution of the fraction of particles with $s\geq 20~{}\mu$m
(left column), as well as their geometric eccentricity (right column), as a
function of the geometric semi-major axis. Initially, all particles have the
same semi-major axis and eccentricity as Pallene (black dot). As the particles
migrate outward, they cross resonances with Enceladus, increasing their
eccentricities. After 200 yr, a fraction of $20~{}\mu$m-sized particles is
trapped in the 7:6 and 8:7 MMRs, while most of the set is located between the
8:7 and 9:8 MMRs. Particles in the 7:6 MMR are confined for a longer period of
time, reaching the highest eccentricity values ($\approx$0.05). The ${\rm
20~{}\mu}$m-sized particles that are not in MMRs at 200 yr had their
eccentricity increased during the passage through the two innermost
resonances, reaching values $\sim 0.01$. Particles with radii of $50~{}\mu$m
and $100~{}\mu$m have not yet crossed any resonances and retain their initial
eccentricity.
At 750 yr, the ${\rm 100~{}\mu}$m-sized particles have crossed the 7:6 MMR,
and the ${\rm 50~{}\mu}$m-sized particles have crossed all four resonances.
Most of the ${\rm 20~{}\mu}$m-sized particles migrated outside the limit of
${\rm\approx 1.05~{}D_{Pal}}$, leaving only the particles confined in MMRs. A
similar result is seen for 5,000 yr, when only ${\rm 100~{}\mu}$m-sized
particles in MMRs remain in the ring, indicating that capture in resonances
increases their longevity. Therefore, the vicinity of MMRs would correspond to
brighter regions of the ring, as will be shown later. Finally, after 8000 yr,
the ring is completely depleted of $\mu$m-sized particles.
Figure 15: a) The half-life (in blue) and the lifetime (in red) of the ring as
a function of the physical radius of the co-orbital particles. b) The fraction
of particles that collide with the satellites Mimas (in red), Pallene (in
black), and Enceladus (in blue), and the fraction of particles that migrate
out of the orbit of Enceladus (in green). c) The time $\mathcal{T}$ for the
satellite to produce the mass of the ring, assuming a non-porous (black solid
line) and a porous (black dot-dashed line) Pallene. The red and blue lines
give the ring’s lifetime and half-life, respectively, as a function of the
slope $q$.
Figure 15a shows two different timescales as a function of particle radius: in
blue, the time required for 50% of particles to collide with a satellite or
migrate outside the limit of ${\rm\sim 1.05~{}D_{Pal}}$ – hereafter referred
to as the ring’s half-lifetime – and, in red, the time required for all
particles to be lost – referred to as the ring’s lifetime. The ring is
completely depleted of sub-micrometric particles in less than a decade, while
particles with radii of $1-10~{}\mu$m have lifetimes of the order of
${\rm 10^{2}}$ yr. The particles that last longest are those with $s\geq
20~{}\mu$m, with lifetimes of ${\rm\sim 10^{3}}$ yr – of the same order as the
time $\mathcal{T}$ for Pallene to produce the mass of the ring (see Fig. 10).
Particle sinks are shown in Fig. 15b. Due to the intense migration caused by
the plasma drag, almost all the sub-micrometric particles migrate beyond the
orbit of Enceladus and collide with an external satellite or are ejected from
the system. As the particle radius increases, the slower migration rate
lengthens the period during which the particles interact gravitationally with
Enceladus in the vicinity of the satellite. Consequently, the number of
collisions with Enceladus increases, as seen in Fig. 15b. Also due to
migration, the fraction of particles that collide with Pallene is less than 5%
for all sizes; this rules out impacts of these returning particles onto
Pallene as an efficient secondary source of material.
Figure 15c shows in black lines the same curves shown in Fig. 10: the solid
line is the time for Pallene to produce the ring mass in the non-porous case,
while the dot-dashed line is the same for the porous case. The red and blue
lines indicate the ring’s lifetime and half-lifetime, respectively, obtained
by a time-weighted average:
$\bar{T}=\frac{\sum_{s}m_{s}\left(\frac{s~{}{\rm(\mu m)}}{\rm 100~{}\mu m}\right)^{-q}T_{s}}{\sum_{s}m_{s}\left(\frac{s~{}{\rm(\mu m)}}{\rm 100~{}\mu m}\right)^{-q}}$ (31)
where $m_{s}$ is the mass of a particle with radius $s$ and $T_{s}$ is the
(half)-lifetime of the particles.
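The sketch below evaluates this weighted average; the grain density, radii, and lifetimes are placeholder assumptions, since the actual $T_{s}$ values come from the simulations of this section.

```python
import numpy as np

def weighted_lifetime(s_um, T_s, q, rho=920.0):
    """Time-weighted average of Eq. (31): radii s_um in micrometres,
    (half-)lifetimes T_s in years, slope q, assumed ice density rho."""
    s = np.asarray(s_um, dtype=float) * 1e-6          # radii in metres
    m_s = 4.0 / 3.0 * np.pi * s**3 * rho              # particle masses, kg
    w = m_s * (np.asarray(s_um, dtype=float) / 100.0)**(-q)   # Eq. (31) weights
    return np.sum(w * np.asarray(T_s, dtype=float)) / np.sum(w)

radii = [0.1, 1.0, 10.0, 100.0]         # micrometres (subset of the sizes used)
lifetimes = [5.0, 2.0e2, 5.0e2, 3.0e3]  # yr; purely illustrative placeholders
print(f"weighted lifetime ~ {weighted_lifetime(radii, lifetimes, q=3.0):.0f} yr")
```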
Focusing on the red curve in Fig. 15c, we verify that the ring would not be in
a steady-state if ejection by Pallene were the only source of material.
However, given the uncertainties in the yield calculation and the proximity of
the black and red solid curves towards the lower values of $q$, we can
conclude that Pallene might be able to maintain its ring if the particle
distribution is given by $q\lesssim 3$. Lower slope values mean that the ring
has higher concentrations of larger particles, which seems to be the case for
Pallene’s ringlet – given that larger particles can be captured in MMRs with
Enceladus, while smaller ones have lifetimes of only a few years. If the
particle distribution in the ring is given by slopes $q\gtrsim 4$, Pallene by
itself certainly cannot maintain the ring, since the lifetime is shorter than
$\mathcal{T}$ even in the porous limit.
Figure 16: Animations showing the normalised optical depth ${\rm\tau_{norm}}$
in the $\theta$-$r$ (top panels) and $r$-$z$ (bottom panels) planes in the
rotating frame for co-orbital particles. The green dot gives Pallene’s
position and the dashed lines indicate the MMRs with Enceladus. The upper
limit of the radius in the panels corresponds to the limit ${\rm
1.05~{}D_{Pal}}$. Adobe Reader version $\geq$9 or similar is required.
Figure 16 shows animations of the co-orbital particle profiles in the planes
$\theta$-$r$ (top panels) and $r$-$z$ (bottom panels). The colour of each
pixel gives the normalised optical depth of that pixel, assuming a particle
distribution with slope $q=2.5$. The particles are initially distributed along
the orbit of Pallene. In 10 yr, we can identify ring-like structures in the
$r$-$z$ plane, produced by the precession of the longitude of pericentre (Fig.
12), where each structure is composed of particles with different radii. After
100 yr, the ring shows an asymmetrical profile, with the brightest part close
to Pallene’s orbit, and structures with lower brightness outside the
satellite’s orbit. We do not see any bright regions inside the orbit of
Pallene, since outward migration is dominant for all particles.
At 400 yr, the torus structure is completely formed, and the ring has an
asymmetric structure. The brightest part of the ring is in the region of the
7:6 MMR with Enceladus, but we see dimmer structures inside and outside this
location, as an effect of the increased eccentricity of resonant particles.
After 1000 yr, the complete structure of the ring has moved outward and the
brightest region is located in the 8:7 MMR. After 4000 yr, the structure has
moved further away and only a few particles have remained in the ring region.
### 5.4 Particles Ejected from Pallene
In the numerical simulations presented in this section, 5,000 particles were
randomly and uniformly distributed in a spherical shell within the Hill radius
of Pallene. Particles are ejected radially with random velocities that follow
the normalised distribution (Hartmann, 1985; Krivov et al., 2003; Sun et al.,
2017):
$f_{v}=\frac{1}{v_{0}}\left(\frac{v}{v_{0}}\right)^{-2}\Theta[v-v_{0}],$ (32)
where $\Theta(x)$ denotes the Heaviside function. The minimum ejecta speed,
$v_{0}$, is obtained from the transcendental equation (Krüger et al., 2000)
$\frac{K_{e}}{K_{i}}=Y\left(\frac{v_{0}}{v_{\rm
imp}}\right)^{2}\left[\left(\frac{v_{0}}{v_{\rm max}}\right)^{-1}-1\right],$
(33)
where $v_{\rm max}$ is the maximum ejecta speed and $K_{e}/K_{i}$ is the ratio
between the kinetic energy partitioned to the ejecta and the impactor’s
kinetic energy, assumed as $K_{e}/K_{i}=0.1$ (Sun et al., 2017).
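To illustrate how Eqs. 32 and 33 can be used in practice, the sketch below solves Eq. 33 for $v_{0}$ by root bracketing and then draws speeds from Eq. 32 by inverse-transform sampling; $Y$, $v_{\rm imp}$, and $v_{\rm max}$ are placeholder inputs, not values from this work.

```python
import numpy as np
from scipy.optimize import brentq

def minimum_ejecta_speed(Y, v_imp, v_max, Ke_over_Ki=0.1):
    # Eq. (33): Ke/Ki = Y (v0/v_imp)^2 [(v0/v_max)^(-1) - 1]. The physical
    # v0 is the smaller of the two roots, bracketed in (0, v_max/2].
    g = lambda v0: Y * (v0 / v_imp)**2 * (v_max / v0 - 1.0) - Ke_over_Ki
    return brentq(g, 1e-9 * v_max, 0.5 * v_max)

def sample_ejecta_speeds(v0, n, seed=42):
    # Eq. (32) has CDF F(v) = 1 - v0/v for v >= v0, hence v = v0 / (1 - u).
    u = np.random.default_rng(seed).random(n)
    return v0 / (1.0 - u)

v0 = minimum_ejecta_speed(Y=1e4, v_imp=9.5e3, v_max=3e3)  # placeholder inputs
speeds = sample_ejecta_speeds(v0, n=5000)
```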
Figure 17: Normalised optical depth ${\rm\tau_{norm}}$ for the ejected
particles. Similarly to Fig. 16, we present a cut in the $\theta$-$r$ and
$r$-$z$ planes in the rotating frame. The green dot gives Pallene’s position
and the vertical dashed lines are MMRs with Enceladus. Adobe Reader version
$\geq$9 or similar is required.
Figure 17 is similar to Fig. 16 but for the ejected particles. The temporal
evolution of the ejected particles is similar to the co-orbital particles
scenario. The same is true for the ring profiles, with greater distinctions
only in the first years of the simulation, due to the different initial
conditions. Figure 18 shows the half-lifetime and lifetime of the ring (top
panel), the particle sinks (middle panel), the times required for Pallene to
produce the ring material, as well as the lifetimes as a function of the slope
of the size distribution (bottom panel). Our results are similar to those
discussed in Section 5.3. In both scenarios, Pallene could produce the
material to keep the ring in a steady-state if the distribution of the
particles in the ring is given by $q\lesssim 3$.
Figure 18: a) The solid lines in blue and red show the time for 50% and 100%
of the ejected particles to be removed from Pallene’s ring, respectively. b) The
coloured lines show the fraction of particles that collide with Mimas (in
red), Pallene (in black), and Enceladus (in blue), and the fraction that
migrates outside the orbit of Enceladus (in green). c) The time for Pallene to
produce the ring material is given by the black lines, in the non-porous
(solid) and porous (dot-dashed) cases, while the ring lifetime and half-life
are given by the red and blue lines, respectively.
### 5.5 Comments on ring sources
Similar to Madeira et al. (2018) and Madeira & Giuliatti Winter (2020), we
only computed the production due to external projectile impacts with the
immersed moon. Therefore, we are analysing whether the satellite can produce
the amount of material needed to keep the systems in steady-state, not whether
they are in steady-state. In fact, the most likely case is that all the
mentioned dusty arcs/rings are in a quasi-steady state, demonstrating that
more sophisticated models are needed to understand their stability.
As we pointed out in this section, satellite porosity can be a factor
influencing material production; however, the systems also have other sources.
For example, ring particles are also impacted by external projectiles and
therefore also produce material. However, following the prescription given in
Dikarev et al. (2005), we obtained that such a source is at least three orders
of magnitude less efficient than the satellite for the systems analysed here.
The arcs/rings mentioned above share the feature of hosting a population of
larger particles ($\sim$ cm-m; Hedman et al., 2009, 2010; Spahn et al., 2019),
which leads us to speculate whether mutual collisions of these objects, or
their impacts with the moon, could be the main source of these systems
(Colwell & Esposito, 1990a, b). As a proof of concept, we assume that a family
of moonlets with radii ranging from $1$ m to $100$ m is immersed in the
Pallene ring, following a size distribution $N\sim s^{-3.5}$ and with a total
optical depth $\tau_{\rm mlets}=10^{-8}$. Production due to impacts between
the moonlets can be roughly estimated as (Sun et al., 2015)
$\dot{M}_{\rm mlets}=3\tau_{\rm mlets}NM_{\rm col}$ (34)
where $M_{\rm col}$ is the amount of dust released per collision, assumed as
$0.12M_{\rm mlet}$ (Canup & Esposito, 1995), and $M_{\rm mlet}$ is the total
mass of the moonlet population.
As a result, we get $\dot{M}_{\rm mlets}\sim 10^{-2}~{\rm kg/s}$, a value more
than one order of magnitude higher than the production due to the non-porous
Pallene. This shows that impacts between larger particles are an appealing
possibility for keeping the arcs/rings in steady-state. However, production
due to impacts between centimetre- to metre-sized bodies is a very intricate
problem and is beyond the scope of this work.
## 6 Summary and Conclusions
In this work, we performed an exhaustive numerical exploration of the
evolution of the small Saturnian moon Pallene, as well as of the diffuse dusty
ring sharing its orbit. We used both short- and long-term numerical
simulations, spanning a wide range of timescales to cover in detail the
evolution of Pallene and its ring.
By using the frequency map analysis technique, we produced a diffusion map to
characterise the current dynamical state of a wide region of phase-space
surrounding Pallene. We identified all the MMRs of relevance in the region,
between Pallene and any of the six major moons considered in this study, up to
fourth order. We used a simple tidal evolution calculation for Mimas, Pallene,
and Enceladus in order to set the context for our longer-term simulations. We
noted that the most recent resonance Pallene may have escaped from is the
4:5 resonance with Mimas. Pallene’s current eccentricity or inclination could
be signs of this or another past resonance crossing.
From the short- and long-term N-body simulations, we analysed all the direct
and indirect arguments of the disturbing function identified in the diffusion
map in the vicinity of Pallene. These arguments included zeroth-order
arguments, with degrees $j\leq$ 15, and first- to fourth-order arguments with
degrees $j\leq 30$. In brief, we found that some arguments displayed
interesting behaviour by temporarily librating on various timescales. In
particular, the direct argument
$\phi_{\mathrm{tP}}=8\lambda^{\prime}-5\lambda-\varpi^{\prime}-2\varpi$ of
Pallene with Tethys librates for $\sim 10$ kyr, and the zeroth-order argument
$\Phi=\varpi^{\prime}-\varpi+\Omega^{\prime}-\Omega$ of Pallene with Tethys,
Dione, and Titan coincides with the angle combination suggested for Pallene
with Mimas by Callegari & Yokoyama (2010). The recurrence of this zeroth-order
combination suggests a possible secular alignment of the lines of apsides and
nodes among Pallene, Dione, Rhea, and Titan on timescales of $\sim 800$ yr.
Furthermore, after a thorough search of possible (two-body) resonant arguments
for Pallene, we conclude that the small moon is not currently in resonance
with any of Mimas, Enceladus, Tethys, Dione, Rhea, or Titan. It is unlikely
that Pallene would be in a higher-order MMR, i.e., $\geq$ 5th order, with any
of these satellites, due to their small eccentricity/inclination, and the
corresponding $e$-$I$ coefficients of the disturbing function. Nevertheless,
the lack of two-body MMRs for Pallene does not exclude the hypothesis that
Pallene might be part of a three-body resonance. Moreover, under the present
considerations and without accounting for Saturn’s tidal forces in the
numerical simulations, we cannot dismiss either the past escape of Pallene
from a resonance or its future trapping, particularly at times longer than 5
Myr.
We analysed the dynamical evolution of the Pallene ring assuming a scenario
where particles are ejected from the satellite’s surface, as well as a
scenario where the material is originally co-orbital to Pallene. We found that
non-gravitational forces dynamically dominate the system and the material
experiences a similar dynamical evolution in both scenarios.
The outward migration due to plasma drag causes the loss of particles with
radius of a few micrometres in just tens of years, while larger particles
($\gtrsim 10~{}\mu$m) can survive for a few hundred years in the ring. Spahn
et al. (2019) measured the radial mean position of the ring to be more than
$1000$ km beyond the satellite’s orbit; this is likely caused by plasma drag.
Our ring profiles clearly show the formation of particle clusters beyond
Pallene’s orbit. Furthermore, the profiles show that the ring evolves into
structures that are radially asymmetrical in relation to the satellite’s
orbit.
The precession of the longitude of pericentre due to non-gravitational forces
produces vertical excursions of the particles in relation to Pallene’s orbital
plane. This could be the mechanism responsible for vertical excursions
discussed in Hedman et al. (2009).
_Cassini_ data indicate a concentration of larger particles around Pallene’s
orbit, which is in line with the significantly longer lifetime of the larger
particles that we found. In fact, when calculating the mass production rate
due to IDPs and ERPs, we find that Pallene can keep the ring in a steady-state
only if it is predominantly composed of larger micrometre-sized particles
($q\lesssim 3$).
If we assume Pallene as the only source of material for the ring, we conclude
that the ring would spread for $q\gtrsim 4$. This corresponds to the slope
range given by Kempf et al. (2008) and Ye et al. (2014a) for the E ring, in which
Pallene is immersed. In this scenario, our profiles show that the ring will
evolve into a toroidal structure similar to the gossamer rings of Jupiter, and
then it will continuously spread out, both radially and vertically, until it
finally disappears. From our numerical results, we cannot constrain whether
the ring originated from the material ejected from the satellite or from the
disruption of an ancient proto-Pallene.
We must point out that our dynamical model is not complete; if the ring has a
high concentration of larger particles, additional effects such as collisions
between the particles, self-gravity, and local viscosity may be significant
for the system. However, even in this case, plasma drag may dominate, and our
main results would still hold.
## Acknowledgements
We thank the anonymous referee for a detailed and careful report that helped
to greatly improve the quality of this paper. G. Madeira thanks FAPESP for
financial support via grant 2018/23568-6. J. A’Hearn thanks M. Hedman, M.
Tiscareno, and M. Showalter for useful discussions; and also thanks NASA for
partial support through the Cassini Data Analysis and Participating Scientist
Program grant NNX15AQ67G. S. M. Giuliatti Winter thanks FAPESP (2016/24561-0),
CNPq (313043/2020-5) and Capes for the financial support.
## Data Availability
The data underlying this article will be shared on reasonable request to the
corresponding author.
## References
* Allan (1969) Allan R. R., 1969, AJ, 74, 497
* Altobelli et al. (2018) Altobelli N., Kempf S., Postberg F., Fischer C., Albin T., Srama R., 2018, in European Planetary Science Congress. pp EPSC2018–199
* Archinal et al. (2011) Archinal B. A., et al., 2011, Celestial Mechanics and Dynamical Astronomy, 109, 101
* Batygin (2015) Batygin K., 2015, MNRAS, 451, 2589
* Belenkaya et al. (2006) Belenkaya E. S., Cowley S. W. H., Alexeev I. I., 2006, Annales Geophysicae, 24, 1649
* Boué & Fabrycky (2014) Boué G., Fabrycky D. C., 2014, ApJ, 789, 111
* Burns et al. (1979) Burns J. A., Lamy P. L., Soter S., 1979, Icarus, 40, 1
* Burns et al. (1985) Burns J. A., Schaffer L. E., Greenberg R. J., Showalter M. R., 1985, Nature, 316, 115
* Burns et al. (1999) Burns J. A., Showalter M. R., Hamilton D. P., Nicholson P. D., de Pater I., Ockert-Bell M. E., Thomas P. C., 1999, Science, 284, 1146
* Callegari & Yokoyama (2010) Callegari N., Yokoyama T., 2010, in Fernandez J. A., Lazzaro D., Prialnik D., Schulz R., eds, IAU Symposium Vol. 263, Icy Bodies of the Solar System. pp 161–166 (arXiv:0910.2726), doi:10.1017/S1743921310001699
* Callegari & Yokoyama (2020) Callegari N., Yokoyama T., 2020, Icarus, 348, 113820
* Callegari et al. (2021) Callegari N., Rodríguez A., Ceccatto D. T., 2021, Celestial Mechanics and Dynamical Astronomy, 133, 49
* Canup & Esposito (1995) Canup R. M., Esposito L. W., 1995, Icarus, 113, 331
* Cassidy & Johnson (2010) Cassidy T. A., Johnson R. E., 2010, Icarus, 209, 696
* Chambers (1999) Chambers J. E., 1999, MNRAS, 304, 793
* Chapman & Bartels (1940) Chapman S., Bartels J., 1940, Geomagnetism. Vol. I. Geomagnetic and related phenomana. Vol. II. Analysis and physical interpretation of the phenomena.. Oxford University Press
* Colombo et al. (1966) Colombo G., Lautman D. A., Shapiro I. I., 1966, J. Geophys. Res., 71, 5705
* Colwell & Esposito (1990a) Colwell J. E., Esposito L. W., 1990a, Geophysical research letters, 17, 1741
* Colwell & Esposito (1990b) Colwell J. E., Esposito L. W., 1990b, Icarus, 86, 530
* Cooper et al. (2008) Cooper N. J., Murray C. D., Evans M. W., Beurle K., Jacobson R. A., Porco C. C., 2008, Icarus, 195, 765
* Correia et al. (2005) Correia A. C. M., Udry S., Mayor M., Laskar J., Naef D., Pepe F., Queloz D., Santos N. C., 2005, A&A, 440, 751
* Ćuk et al. (2016) Ćuk M., Dones L., Nesvorný D., 2016, ApJ, 820, 97
* Dikarev et al. (2005) Dikarev V., Grün E., Baggaley J., Galligan D., Landgraf M., Jehn R., 2005, Advances in Space Research, 35, 1282
* Divine (1993) Divine N., 1993, J. Geophys. Res., 98, 17029
* El Moutamid et al. (2014) El Moutamid M., Sicardy B., Renner S., 2014, Celestial Mechanics and Dynamical Astronomy, 118, 235
* El Moutamid et al. (2017) El Moutamid M., Sicardy B., Renner S., 2017, MNRAS, 469, 2380
* Elrod et al. (2014) Elrod M. K., Tseng W. L., Woodson A. K., Johnson R. E., 2014, Icarus, 242, 130
* Fuller et al. (2016) Fuller J., Luan J., Quataert E., 2016, MNRAS, 458, 3867
* Gaslac Gallardo et al. (2020) Gaslac Gallardo D. M., Giuliatti Winter S. M., Madeira G., Muñoz-Gutiérrez M. A., 2020, Ap&SS, 365, 5
* Giuliatti Winter et al. (2020) Giuliatti Winter S., Madeira G., Sfair R., 2020, Monthly Notices of the Royal Astronomical Society, 496, 590
* Granados Contreras & Boley (2018) Granados Contreras A. P., Boley A. C., 2018, AJ, 155, 139
* Greenberg (1973) Greenberg R., 1973, MNRAS, 165, 305
* Greenberg (1975) Greenberg R., 1975, MNRAS, 170, 295
* Grun et al. (1984) Grun E., Morfill G. E., Mendis D. A., 1984, in Greenberg R., Brahic A., eds, IAU Colloq. 75: Planetary Rings. pp 275–332
* Grun et al. (1985) Grun E., Zook H. A., Fechtig H., Giese R. H., 1985, Icarus, 62, 244
* Hamilton (1993) Hamilton D. P., 1993, Icarus, 101, 244
* Hamilton & Burns (1994) Hamilton D. P., Burns J. A., 1994, Science, 264, 550
* Hartmann (1985) Hartmann W. K., 1985, Icarus, 63, 69
* Hedman et al. (2009) Hedman M. M., Murray C. D., Cooper N. J., Tiscareno M. S., Beurle K., Evans M. W., Burns J. A., 2009, Icarus, 199, 378
* Hedman et al. (2010) Hedman M. M., Cooper N. J., Murray C. D., Beurle K., Evans M. W., Tiscareno M. S., Burns J. A., 2010, Icarus, 207, 433
* Hedman et al. (2020) Hedman M. M., Helfenstein P., Chancia R. O., Thomas P., Roussos E., Paranicas C., Verbiscer A. J., 2020, AJ, 159, 129
* Helled et al. (2015) Helled R., Galanti E., Kaspi Y., 2015, Nature, 520, 202
* Hesselbrock & Minton (2019) Hesselbrock A. J., Minton D. A., 2019, AJ, 157, 30
* Horányi et al. (2008) Horányi M., Juhász A., Morfill G. E., 2008, Geophys. Res. Lett., 35, L04203
* Hsu et al. (2011) Hsu H. W., Postberg F., Kempf S., Trieloff M., Burton M., Roy M., Moragas-Klostermeyer G., Srama R., 2011, Journal of Geophysical Research (Space Physics), 116, A09215
* Iess et al. (2019) Iess L., et al., 2019, Science, 364, aat2965
* Irvine (1965) Irvine W. M., 1965, Journal of the Optical Society of America (1917-1983), 55, 16
* Jacobson et al. (2006) Jacobson R. A., et al., 2006, AJ, 132, 2520
* Jacobson et al. (2008) Jacobson R. A., Spitale J., Porco C. C., Beurle K., Cooper N. J., Evans M. W., Murray C. D., 2008, AJ, 135, 261
* Kaib et al. (2011) Kaib N. A., Raymond S. N., Duncan M. J., 2011, ApJ, 742, L24
* Kempf et al. (2008) Kempf S., et al., 2008, Icarus, 193, 420
* Kempf et al. (2010) Kempf S., Beckmann U., Schmidt J., 2010, Icarus, 206, 446
* Kliore et al. (1980) Kliore A. J., Patel I. R., Lindal G. F., Sweetnam D. N., Hotz H. B., Waite J. H., McDonough T., 1980, J. Geophys. Res., 85, 5857
* Koschny & Grün (2001) Koschny D., Grün E., 2001, Icarus, 154, 391
* Krivov et al. (2003) Krivov A. V., Sremčević M., Spahn F., Dikarev V. V., Kholshevnikov K. V., 2003, Planet. Space Sci., 51, 251
* Krüger et al. (2000) Krüger H., Krivov A. V., Grün E., 2000, Planet. Space Sci., 48, 1457
* Lainey et al. (2012) Lainey V., et al., 2012, ApJ, 752, 14
* Lainey et al. (2017) Lainey V., et al., 2017, Icarus, 281, 286
* Lainey et al. (2020) Lainey V., et al., 2020, Nature Astronomy,
* Landgraf et al. (2002) Landgraf M., Liou J. C., Zook H. A., Grün E., 2002, AJ, 123, 2857
* Laskar (1990) Laskar J., 1990, Icarus, 88, 266
* Laskar (1993) Laskar J., 1993, Physica D Nonlinear Phenomena, 67, 257
* Laskar et al. (1992) Laskar J., Froeschlé C., Celletti A., 1992, Physica D Nonlinear Phenomena, 56, 253
* Liu et al. (2016) Liu X., Sachse M., Spahn F., Schmidt J., 2016, Journal of Geophysical Research (Planets), 121, 1141
* Madeira & Giuliatti Winter (2020) Madeira G., Giuliatti Winter S. M., 2020, European Physical Journal Special Topics, 229, 1527
* Madeira et al. (2018) Madeira G., Sfair R., Mourão D. C., Giuliatti Winter S. M., 2018, MNRAS, 475, 5474
* Meyer & Wisdom (2008) Meyer J., Wisdom J., 2008, Icarus, 193, 213
* Mignard (1984) Mignard F., 1984, in Greenberg R., Brahic A., eds, IAU Colloq. 75: Planetary Rings. pp 333–366
* Mishchenko et al. (1999) Mishchenko M. I., Dlugach Z. M., Yanovitskij E. G., Zakharova N. T., 1999, J. Quant. Spectrosc. Radiative Transfer, 63, 409
* Mishchenko et al. (2002) Mishchenko M. I., Travis L. D., Lacis A. A., 2002, Scattering, absorption, and emission of light by small particles
* Morfill & Gruen (1979) Morfill G. E., Gruen E., 1979, Planet. Space Sci., 27, 1269
* Morfill et al. (1993) Morfill G. E., Havnes O., Goertz C. K., 1993, J. Geophys. Res., 98, 11285
* Muñoz-Gutiérrez & Giuliatti Winter (2017) Muñoz-Gutiérrez M. A., Giuliatti Winter S., 2017, MNRAS, 470, 3750
* Murray & Dermott (1999) Murray C. D., Dermott S. F., 1999, Solar system dynamics. Cambridge University Press
* Nesvorný et al. (2010) Nesvorný D., Jenniskens P., Levison H. F., Bottke W. F., Vokrouhlický D., Gounelle M., 2010, ApJ, 713, 816
* Neveu & Rhoden (2019) Neveu M., Rhoden A. R., 2019, Nature Astronomy, 3, 543
* Nicholson et al. (1996) Nicholson P. D., et al., 1996, Science, 272, 509
* Northrop & Birmingham (1982) Northrop T. G., Birmingham T. J., 1982, J. Geophys. Res., 87, 661
* Peale (1976) Peale S. J., 1976, ARA&A, 14, 215
* Peale (1999) Peale S. J., 1999, ARA&A, 37, 533
* Persoon et al. (2015) Persoon A. M., Gurnett D. A., Kurth W. S., Groene J. B., Faden J. B., 2015, Journal of Geophysical Research (Space Physics), 120, 6276
* Persoon et al. (2020) Persoon A. M., et al., 2020, Journal of Geophysical Research (Space Physics), 125, e27545
* Piquette (2019) Piquette M. R., 2019, PhD thesis, University of Colorado at Boulder
* Piquette et al. (2019) Piquette M., et al., 2019, Icarus, 321, 116
* Poppe (2016) Poppe A. R., 2016, Icarus, 264, 369
* Poppe et al. (2011) Poppe A., James D., Horányi M., 2011, Planet. Space Sci., 59, 319
* Poppe et al. (2019) Poppe A. R., et al., 2019, ApJ, 881, L12
* Porco et al. (2005) Porco C. C., et al., 2005, Science, 307, 1226
* Postberg et al. (2008) Postberg F., Kempf S., Hillier J. K., Srama R., Green S. F., McBride N., Grün E., 2008, Icarus, 193, 438
* Rein & Spiegel (2015) Rein H., Spiegel D. S., 2015, MNRAS, 446, 1424
* Renner & Sicardy (2006) Renner S., Sicardy B., 2006, Celestial Mechanics and Dynamical Astronomy, 94, 237
* Roatsch et al. (2009) Roatsch T., Jaumann R., Stephan K., Thomas P. C., 2009, Cartographic Mapping of the Icy Satellites Using ISS and VIMS Data. p. 763, doi:10.1007/978-1-4020-9217-6_24
* Roberts & Stickle (2017) Roberts J. H., Stickle A. M., 2017, in Lunar and Planetary Science Conference. Lunar and Planetary Science Conference. p. 1955
* Robutel & Laskar (2001) Robutel P., Laskar J., 2001, Icarus, 152, 4
* Sfair & Giuliatti Winter (2012) Sfair R., Giuliatti Winter S. M., 2012, A&A, 543, A17
* Showalter et al. (2019) Showalter M. R., de Pater I., Lissauer J. J., French R. S., 2019, Nature, 566, 350
* Sinclair (1972) Sinclair A. T., 1972, MNRAS, 160, 169
* Sittler & Johnson (2015) Sittler E. C. J., Johnson R. E., 2015, in AGU Fall Meeting Abstracts. pp P43E–08
* Spahn et al. (2006) Spahn F., et al., 2006, Planet. Space Sci., 54, 1024
* Spahn et al. (2019) Spahn F., Sachse M., Seiß M., Hsu H.-W., Kempf S., Horányi M., 2019, Space Sci. Rev., 215, 11
* Spitale et al. (2006) Spitale J. N., Jacobson R. A., Porco C. C., Owen Jr. W. M., 2006, AJ, 132, 692
* Srama et al. (2020) Srama R., et al., 2020, in European Planetary Science Congress. pp EPSC2020–1012
* Stooke (1994) Stooke P. J., 1994, Earth Moon and Planets, 65, 31
* Sun et al. (2015) Sun K.-L., Schmidt J., Spahn F., 2015, arXiv e-prints, p. arXiv:1510.07730
* Sun et al. (2017) Sun K.-L., Seiß M., Hedman M. M., Spahn F., 2017, Icarus, 284, 206
* Synnott (1986) Synnott S. P., 1986, Icarus, 67, 189
* Thomas et al. (2013) Thomas P. C., Burns J. A., Hedman M., Helfenstein P., Morrison S., Tiscareno M. S., Veverka J., 2013, Icarus, 226, 999
* Tseng & Ip (2011) Tseng W.-L., Ip W.-H., 2011, Icarus, 212, 294
* Tseng et al. (2010) Tseng W. L., Ip W. H., Johnson R. E., Cassidy T. A., Elrod M. K., 2010, Icarus, 206, 382
* Ye et al. (2014a) Ye S. Y., Gurnett D. A., Kurth W. S., Averkamp T. F., Morooka M., Sakai S., Wahlund J. E., 2014a, Journal of Geophysical Research (Space Physics), 119, 3373
* Ye et al. (2014b) Ye S. Y., Gurnett D. A., Kurth W. S., Averkamp T. F., Kempf S., Hsu H. W., Srama R., Grün E., 2014b, Journal of Geophysical Research (Space Physics), 119, 6294
* Šidlichovský & Nesvorný (1996) Šidlichovský M., Nesvorný D., 1996, Celestial Mechanics and Dynamical Astronomy, 65, 137
# Happy Ending: An Empty Hexagon in Every Set of 30 Points

Marijn J. H. Heule (Carnegie Mellon University, Pittsburgh, USA; Amazon Scholar) 0000-0002-5587-8801
Manfred Scheucher (Institute of Mathematics, Technische Universität Berlin, Germany) 0000-0002-1657-9796
###### Abstract
Satisfiability solving has been used to tackle a range of long-standing open
math problems in recent years. We add another success by solving a geometry
problem that originated a century ago. In the 1930s, Esther Klein’s
exploration of unavoidable shapes in planar point sets in general position
showed that every set of five points includes four points in convex position.
For a long time, it was open if an empty hexagon, i.e., six points in convex
position without a point inside, can be avoided. In 2006, Gerken and Nicolás
independently proved that the answer is no. We establish the exact bound:
Every 30-point set in the plane in general position contains an empty hexagon.
Our key contributions include an effective, compact encoding and a search-
space partitioning strategy enabling linear-time speedups even when using
thousands of cores.
###### Keywords:
Erdős–Szekeres problem · empty hexagon theorem · planar point set · cube-and-conquer · proof of unsatisfiability
## 1 Introduction
In 1932, Esther Klein showed that every set of five points in the plane _in
general position_ (i.e., no three points on a common line) has a subset of
four points in convex position. Shortly after, Erdős and Szekeres [9]
generalized this result by showing that, for every integer $k$, there exists a
smallest integer $g(k)$ such that every set of $g(k)$ points in the plane in
general position contains a _$k$ -gon_ (i.e., a subset of $k$ points that form
the vertices of a convex polygon). As the research led to the marriage of
Szekeres and Klein, Erdős named it the _happy ending problem_. Erdős and
Szekeres constructed witnesses of $g(k)>2^{k-2}$ [10], which they conjectured
to be maximal. The best upper bound is $g(k)\leq 2^{k+o(k)}$ [31, 21].
Determining the value $g(5)=9$ requires a more involved case distinction
compared to $g(4)=5$ [24]. It took until 2006 to determine that $g(6)=17$ via
an exhaustive computer search by Szekeres and Peters [32] using 1500 CPU
hours. Marić [26] and Scheucher [29] independently verified $g(6)=17$ using
satisfiability (SAT) solving in a few CPU hours. This was later reduced to 10
CPU minutes [30]. The approach presented in this paper computes it in 8.53 CPU
seconds, showing the effectiveness of SAT compared to the original method.
Erdős also asked whether every sufficiently large point set contains a _$k$
-hole_: a $k$-gon without a point inside. We denote by $h(k)$ the smallest
integer—if it exists—such that every set of $h(k)$ points in general position
in the plane contains a $k$-hole. Both $h(3)=3$ and $h(4)=5$ are easy to
compute (see Fig. 1 for an illustration) and coincide with the original
setting. Yet the answer can differ a lot, as Horton [22] constructed
arbitrarily large point sets without 7-holes.
Figure 1: An illustration for the proof of $h(4)=5$: The three possibilities
of how five points can be placed. Each possibility implies a $4$-hole.
While Harborth [16] showed in 1978 that $h(5)=10$, the existence of $6$-holes
remained open until the late 2000s, when Gerken [14] and Nicolás [27]
independently proved that $h(6)$ is finite. (Gerken’s groundbreaking work was
awarded the Richard-Rado prize by the German Mathematical Society in 2008.)
Gerken proved that every $9$-gon yields a $6$-hole, thereby showing that
$h(6)\leq g(9)\leq 1717$ [34]. The best-known lower bound $h(6)\geq 30$ is
witnessed by a set of 29 points without $6$-holes, which was found by Overmars
[28] using a local search approach; see Figure 8.
We close the gap between the upper and lower bound and ultimately answer
Erdős’ question by proving that every set of 30 points yields a 6-hole.
###### Theorem 1.1
$h(6)=30$.
Our result is actually stronger and shows that the bounds for $6$-holes in
point sets coincide with the bounds for $6$-holes in _counterclockwise
systems_ [25]. This represents another success of solving long-standing open
problems in mathematics using SAT, similar to results on Schur Number Five
[18] and Keller’s Conjecture [5].
We also investigate the combination of $6$-holes and $7$-gons and show
###### Theorem 1.2
Every set of 24 points in the plane in general position contains a $6$-hole or
a $7$-gon.
We achieve these results through the following contributions:
* •
We develop a compact and effective SAT encoding for $k$-gon and $k$-hole
problems that uses $O(n^{4})$ clauses, while existing encodings use $O(n^{k})$
clauses.
* •
We construct a partitioning of $k$-gon and $k$-hole problems that allows us to
solve them with linear-time speedups even when using thousands of cores.
* •
We present a novel method of validating SAT-solving results that checks the
proof while solving the problem using substantially less overhead.
* •
We verify most of the presented results using clausal proof checking.
## 2 Preliminaries
##### The SAT problem.
The satisfiability problem (SAT) asks whether a Boolean formula can be
satisfied by some assignment of truth values to its variables. The Handbook of
Satisfiability [2] provides an overview. We consider formulas in _conjunctive
normal form_ (CNF), which is the default input of SAT solvers. As such, a
formula $\Gamma$ is a conjunction (logical “AND”) of clauses. A clause is a
disjunction (logical “OR”) of literals, where a literal is a Boolean variable
or its negation. We sometimes write (sets of) clauses using other logical
connectives.
If a formula $\Gamma$ is found to be satisfiable, modern SAT solvers commonly
output a truth assignment of the variables. Additionally, if a formula turns
out to be unsatisfiable, sequential SAT solvers produce an independently-
checkable proof that there exists no assignment that satisfies the formula.
##### Verification.
The most commonly-used proofs for SAT problems are expressed in the DRAT
clausal proof system [17]. A DRAT proof of unsatisfiability is a list of
clause addition and clause deletion steps. Formally, a clausal proof is a list
of pairs $\langle{s_{1}},C_{1}\rangle,\dots,\langle{s_{m}},C_{m}\rangle$,
where for each $i\in\\{1,\dots,m\\}$, $s_{i}\in\\{\mathsf{a},\mathsf{d}\\}$
and $C_{i}$ is a clause. If $s_{i}=\mathsf{a}$, the pair is called an
_addition_ , and if $s_{i}=\mathsf{d}$, it is called a _deletion_. For a given
input formula $\Gamma_{0}$, a clausal proof gives rise to a set of
_accumulated formulas_ $\Gamma_{i}$ ($i\in\\{1,\dots,m\\}$) as follows:
$\displaystyle\Gamma_{i}=\begin{cases}\Gamma_{i-1}\cup\\{C_{i}\\}&\text{if
$\mathsf{s}_{i}=\mathsf{a}$}\\\ \Gamma_{i-1}\setminus\\{C_{i}\\}&\text{if
$\mathsf{s}_{i}=\mathsf{d}$}\\\ \end{cases}$
Each clause addition must preserve satisfiability, which is usually guaranteed
by requiring the added clauses to fulfill some efficiently decidable syntactic
criterion. Deletions help to speed up proof checking by keeping the
accumulated formula small. A valid proof of unsatisfiability must add the
empty clause.
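For instance, consider the unsatisfiable formula $(a\lor b)\land(a\lor\overline{b})\land(\overline{a}\lor b)\land(\overline{a}\lor\overline{b})$. A small clausal proof, our own toy example, first adds the unit clause $(a)$: assuming $\overline{a}$, unit propagation derives the conflicting units $b$ and $\overline{b}$, so the addition preserves satisfiability. It then adds the empty clause, which is justified because propagating $a$ in the accumulated formula again yields a conflict.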
##### Cube And Conquer.
The cube-and-conquer approach [20] aims to _split_ a SAT instance $\Gamma$
into multiple instances $\Gamma_{1},\ldots,\Gamma_{m}$ in such a way that
$\Gamma$ is satisfiable if and only if at least one of the instances
$\Gamma_{i}$ is satisfiable, thus allowing work on the different instances
$\Gamma_{i}$ in parallel. A cube is a conjunction of literals. Let
$\psi=\left(c_{1}\lor\cdots\lor c_{m}\right)$ be a disjunction of cubes. When
$\psi$ is a tautology, we have
$\Gamma\iff\Gamma\land\psi\iff\bigvee_{i=1}^{m}(\Gamma\land
c_{i})\iff\bigvee_{i=1}^{m}\Gamma_{i},$
where the different $\Gamma_{i}\coloneqq(\Gamma\land c_{i})$ are the instances
resulting from the split.
Intuitively, each cube $c_{i}$ represents a _case_ , i.e., an assumption about
a satisfying assignment to $\Gamma$, and soundness comes from $\psi$ being a
tautology, which means that the split into cases is exhaustive. If the split
is well designed, then each $\Gamma_{i}$ is a particular case that is
substantially easier to solve than $\Gamma$, and thus solving them all in
parallel can give significant speed-ups, especially considering the sequential
nature of conflict-driven clause learning (CDCL) at the core of most solvers.
However, the quality of the split ($\psi$) has an enormous impact on the
effectiveness of the approach. A key challenge is figuring out a high-quality
split.
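The toy sketch below illustrates the splitting idea, assuming the python-sat package (our choice of tooling, not the paper's): the four cubes form a tautology over two variables, so the formula is unsatisfiable exactly when every cube-restricted instance is.

```python
from pysat.formula import CNF
from pysat.solvers import Solver

# Toy formula: (x1 v x2)(x1 v -x2)(-x1 v x2)(-x1 v -x2) is unsatisfiable.
cnf = CNF(from_clauses=[[1, 2], [1, -2], [-1, 2], [-1, -2]])

# Exhaustive split: these four cubes form a tautology over x1 and x2.
cubes = [[1, 2], [1, -2], [-1, 2], [-1, -2]]

results = []
for cube in cubes:                  # in practice, one solver job per core
    with Solver(bootstrap_with=cnf.clauses) as solver:
        results.append(solver.solve(assumptions=cube))

print("SAT" if any(results) else "UNSAT")   # prints UNSAT
```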
## 3 Trusted Encoding
To obtain an upper-bound result using a SAT-based approach, we need to show
that every set of $n$ points contains a $k$-hole. We will do this by
constructing a formula based on $n$ points that asks whether a $k$-hole can be
avoided. If this formula is unsatisfiable, then we obtain the bound $h(k)\leq
n$. Instead of reasoning directly whether an empty $k$-gon can be avoided, we
ask whether every $k$ points contain at least one triangle with a point
inside. The latter implies the former.
For each triple of points, we only need to know whether the triangle it spans
is empty. Throughout the paper, we assume that points are sorted with strictly
increasing $x$-coordinates. This gives us only four options for a point
$p_{i}$ to be inside the triangle formed by points $p_{a}$, $p_{b}$, $p_{c}$;
see Fig. 2.
For example, the left image shows that $p_{i}$ is inside if $a<i<b$, $p_{c}$
and $p_{i}$ are above the line $p_{a}p_{b}$, and $p_{i}$ is below the line
$p_{a}p_{c}$. So we need some machinery to express that points are above or
below certain lines. That is what the encoding will provide. For readability,
we sometimes identify points by their indices, that is, we refer to $p_{a}$ by
its index $a$.
Figure 2: The four ways a point $p_{i}$ can be inside triangle
$\\{p_{a},p_{b},p_{c}\\}$ based on whether $i<b$ (left two images) and whether
$p_{c}$ is above the line $p_{a}p_{b}$ (first and third image).
We first present what we call the _trusted encoding_ to determine whether a
$6$-hole can be avoided. The encoding needs to be trusted in the sense that we
do not provide a mechanically verified proof of its correctness. Building upon
existing work [29], our primary focus is on $6$-holes, which constitute our
main result. The encoding of $6$-gons and $7$-gons is similar and simpler.
During an initial study, the estimated runtime for showing $h(6)\leq 30$ using
this encoding and off-the-shelf partitioning was roughly 1000 CPU years. The
optimizations in Sections 4 and 5 reduce the computational costs to about 2
CPU years.
### 3.1 Orientation Variables
Figure 3: An illustration of triple orientations.
We formulate the problem in such a way that all reasoning is based solely on
the relative positions of points. Thus, we do not encode coordinates but only
orientations of point triples. For a point set $S=\\{p_{1},\ldots,p_{n}\\}$
with $p_{i}=(x_{i},y_{i})$, the triple $(p_{a},p_{b},p_{c})$ with $a<b<c$ is
_positively oriented_ (resp. _negatively oriented_) if $p_{c}$ lies above
(resp. below) the line $p_{a}p_{b}$ through $p_{a}$ and $p_{b}$. The notion of
positive orientation corresponds to Knuth’s _counterclockwise relation_ [25].
Fig. 3 illustrates a positively-oriented triple $(p_{a},p_{b},p_{c})$ and a
negatively-oriented triple $(p_{a},p_{b},p_{d})$.
To search for point sets without $k$-gons and $k$-holes, we introduce a
Boolean orientation variable ${\mathsf{o}}_{a,b,c}$ for each triple
$(p_{a},p_{b},p_{c})$ with $a<b<c$. Intuitively, ${\mathsf{o}}_{a,b,c}$ is
supposed to be true if the triple is positively oriented. Since we assume
general position, no three points lie on a common line, so
${\mathsf{o}}_{a,b,c}$ being false means that the triple is negatively
oriented.
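For concrete coordinates, the intended semantics of ${\mathsf{o}}_{a,b,c}$ is the sign of a $3\times 3$ determinant (made explicit in Appendix 0.A); a small sketch:

```python
# Sketch: o_{a,b,c} is true iff det(1 1 1; xa xb xc; ya yb yc) > 0,
# i.e., p_c lies above the line through p_a and p_b.
def oriented(pa, pb, pc):
    (xa, ya), (xb, yb), (xc, yc) = pa, pb, pc
    det = (xb - xa) * (yc - ya) - (yb - ya) * (xc - xa)
    return det > 0  # general position: the determinant is never zero

assert oriented((0, 0), (2, 0), (4, 1))        # p_c above the line p_a p_b
assert not oriented((0, 0), (2, 0), (4, -1))   # p_c below the line p_a p_b
```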
### 3.2 Containment Variables, $3$-Hole Variables, and Constraints
Using orientation variables, we can now express what it means for a triangle
to be empty. We define _containment variables_ ${\mathsf{c}}_{i;a,b,c}$ to
encode whether point $p_{i}$ lies inside the triangle spanned by
$\\{p_{a},p_{b},p_{c}\\}$. Since the points have increasing $x$-coordinates,
containment is only possible if $a<i<c$. We use two kinds of definitions,
depending on whether $i$ is smaller or larger than $b$ (see Fig. 2). The first
definition is for the case $a<i<b$. Note that if ${\mathsf{o}}_{a,b,c}$ is
true, we only need to know whether $i$ is above the line $p_{a}p_{b}$ and
below the line $p_{a}p_{c}$. Earlier work [29] used an extended definition
that included the redundant variable ${\mathsf{o}}_{i,b,c}$. Avoiding this
variable makes the definition more compact (six instead of eight clauses) and
the resulting formula is easier to solve.
${\mathsf{c}}_{i;a,b,c}\leftrightarrow\Big{(}\big{(}{\mathsf{o}}_{a,b,c}\rightarrow(\overline{{\mathsf{o}}_{a,i,b}}\land{\mathsf{o}}_{a,i,c})\big{)}\land\big{(}\overline{{\mathsf{o}}_{a,b,c}}\rightarrow({\mathsf{o}}_{a,i,b}\land\overline{{\mathsf{o}}_{a,i,c}})\big{)}\Big{)}$
(1)
The second definition is for $b<i<c$, which avoids using the variable
${\mathsf{o}}_{a,b,i}$:
${\mathsf{c}}_{i;a,b,c}\leftrightarrow\Big{(}\big{(}{\mathsf{o}}_{a,b,c}\rightarrow({\mathsf{o}}_{a,i,c}\land\overline{{\mathsf{o}}_{b,i,c}})\big{)}\land\big{(}\overline{{\mathsf{o}}_{a,b,c}}\rightarrow(\overline{{\mathsf{o}}_{a,i,c}}\land{\mathsf{o}}_{b,i,c})\big{)}\Big{)}$
(2)
Each definition translates into six clauses (without using Tseitin variables).
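For illustration, the following sketch lists the six clauses of definition (1); the arguments are the DIMACS indices of the variables involved (obtaining them from a triple-to-index map is assumed to happen elsewhere):

```python
# Sketch: the six clauses of definition (1), with c = c_{i;a,b,c},
# o = o_{a,b,c}, p = o_{a,i,b}, and q = o_{a,i,c}.
def containment_clauses(c, o, p, q):
    return [
        [-c, -o, -p], [-c, -o, q],   # c and o      imply  (not p) and q
        [-c, o, p],   [-c, o, -q],   # c and not o  imply  p and (not q)
        [-o, p, -q, c],              # o, not p, q        imply  c
        [o, -p, q, c],               # not o, p, not q    imply  c
    ]
```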
Additionally, we introduce definitions ${\mathsf{h}}_{a,b,c}$ of _$3$ -hole
variables_ that express whether the triangle spanned by
$\\{p_{a},p_{b},p_{c}\\}$ is a $3$-hole. The triangle
$\\{p_{a},p_{b},p_{c}\\}$ forms a $3$-hole if and only if no point $p_{i}$
lies in its interior. A point $p_{i}$ can only be an inner point if it lies in
the vertical strip between $p_{a}$ and $p_{c}$ and if it is distinct from
$p_{b}$. Since the points are sorted, the index $i$ of an interior point
$p_{i}$ must therefore fulfill $a<i<c$ and $i\neq b$. Logically, the
definition is as follows:
${\mathsf{h}}_{a,b,c}\leftrightarrow\bigwedge_{\begin{subarray}{c}a<i<c\\ i\neq b\end{subarray}}\overline{{\mathsf{c}}_{i;a,b,c}}.$ (3)
Finally, we encode the “forbid $k$-hole” constraint as follows: For each
subset $X\subseteq S$ of size $k$, at least one of the triangles formed by
three points in $X$ must not be a $3$-hole. So for $k=6$, each clause consists
of $\binom{k}{3}=20$ literals.
$\bigwedge_{\begin{subarray}{c}X\subseteq S\\ |X|=k\end{subarray}}\big(\bigvee_{\begin{subarray}{c}a,b,c\in X\\ a<b<c\end{subarray}}\overline{{\mathsf{h}}_{a,b,c}}\big)$ (4)
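A sketch of the clause generation for constraint (4), assuming a dictionary `h` that maps sorted index triples to the DIMACS indices of the $3$-hole variables:

```python
# Sketch: one clause with C(k,3) negative 3-hole literals per k-subset.
from itertools import combinations

def forbid_k_holes(n, k, h):
    return [[-h[t] for t in combinations(subset, 3)]
            for subset in combinations(range(1, n + 1), k)]
```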
In Section 4, we will optimize the encoding. Most optimizations aim to improve
the encoding of the constraint (4).
### 3.3 Forbidding Non-Realizable Patterns
Only a small fraction, namely $2^{\Theta(n\log n)}$, of all assignments to the $\binom{n}{3}$ orientation variables actually describe point sets [3]. However,
we can reduce the search space from $2^{\Theta(n^{3})}$ to $2^{\Theta(n^{2})}$
by forbidding non-realizable patterns [25]. Consider four points
$p_{a},p_{b},p_{c},p_{d}$ in a sorted point set with $a<b<c<d$. The leftmost
three points determine three lines $p_{a}p_{b}$, $p_{a}p_{c}$, $p_{b}p_{c}$,
which partition the open half-plane $\\{(x,y)\in\mathbb{R}^{2}:x>x_{c}\\}$
into four regions (see Fig. 4). After placing $p_{a}$, $p_{b}$, $p_{c}$,
observe that all realizable positions of point $p_{d}$ obey the following
implications:
${\mathsf{o}}_{a,b,c}\land{\mathsf{o}}_{a,c,d}\Rightarrow{\mathsf{o}}_{a,b,d}$
and
${\mathsf{o}}_{a,b,c}\land{\mathsf{o}}_{b,c,d}\Rightarrow{\mathsf{o}}_{a,c,d}$.
Similarly for the negations,
$\overline{{\mathsf{o}}_{a,b,c}}\land\overline{{\mathsf{o}}_{a,c,d}}\Rightarrow\overline{{\mathsf{o}}_{a,b,d}}$
and
$\overline{{\mathsf{o}}_{a,b,c}}\land\overline{{\mathsf{o}}_{b,c,d}}\Rightarrow\overline{{\mathsf{o}}_{a,c,d}}$.
These implications are equivalent to the following clauses (grouping positive
and negative):
$(\overline{{\mathsf{o}}_{a,b,c}}\lor\overline{{\mathsf{o}}_{a,c,d}}\lor{\mathsf{o}}_{a,b,d})\;\land\;({\mathsf{o}}_{a,b,c}\lor{\mathsf{o}}_{a,c,d}\lor\overline{{\mathsf{o}}_{a,b,d}})$ (5)
$(\overline{{\mathsf{o}}_{a,b,c}}\lor\overline{{\mathsf{o}}_{b,c,d}}\lor{\mathsf{o}}_{a,c,d})\;\land\;({\mathsf{o}}_{a,b,c}\lor{\mathsf{o}}_{b,c,d}\lor\overline{{\mathsf{o}}_{a,c,d}})$ (6)
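A sketch generating clauses (5) and (6) for all $\binom{n}{4}$ quadruples, assuming a dictionary `o` that maps sorted index triples to orientation-variable indices:

```python
# Sketch: the Theta(n^4) clauses (5) and (6).
from itertools import combinations

def signotope_clauses(n, o):
    clauses = []
    for a, b, c, d in combinations(range(1, n + 1), 4):
        for x, y, z in [((a, b, c), (a, c, d), (a, b, d)),   # clauses (5)
                        ((a, b, c), (b, c, d), (a, c, d))]:  # clauses (6)
            clauses.append([-o[x], -o[y], o[z]])
            clauses.append([o[x], o[y], -o[z]])
    return clauses
```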
Forbidding these non-realizable assignments was also used for $g(6)\leq 17$
[32]. This restriction is sometimes called the signotope axioms [12]. The counterclockwise
system axioms [25] achieve the same effect, but require $\Theta(n^{5})$
clauses instead of $\Theta(n^{4})$.
${\mathsf{o}}_{a,b,c}$ | ${\mathsf{o}}_{a,b,d}$ | ${\mathsf{o}}_{a,c,d}$ | ${\mathsf{o}}_{b,c,d}$
---|---|---|---
$+$ | $+$ | $+$ | $+$
$+$ | $+$ | $+$ | $-$
$+$ | $+$ | $-$ | $-$
$+$ | $-$ | $-$ | $-$
$-$ | $-$ | $-$ | $-$
$-$ | $-$ | $-$ | $+$
$-$ | $-$ | $+$ | $+$
$-$ | $+$ | $+$ | $+$
Figure 4: All possibilities to place four points, when points are sorted from
left to right.
### 3.4 Initial Symmetry Breaking
To further reduce the search space, we ensure that $p_{1}$ lies on the
boundary of the convex hull (i.e., it is an extremal point) and that
$p_{2},\ldots,p_{n}$ appear around $p_{1}$ in counterclockwise order, thus
providing us the unit clauses $({\mathsf{o}}_{1,a,b})$ for $1<a<b$. Without
loss of generality, we can label points to satisfy the above, because the
labeling doesn’t affect gons and holes. However, we also want points to be
sorted from left to right. One can satisfy both orderings at the same time
using the lemma below. We attach a proof in Appendix 0.A.
###### Lemma 1 ([29, Lemma 1])
Let $S=\\{p_{1},\ldots,p_{n}\\}$ be a point set in the plane in general
position such that $p_{1}$ is extremal and $p_{2},\ldots,p_{n}$ appear
(clockwise or counterclockwise) around $p_{1}$. Then there exists a point set
$\tilde{S}=\\{\tilde{p}_{1},\ldots,\tilde{p}_{n}\\}$ with the same triple
orientations (in particular, $\tilde{p}_{1}$ is extremal and
$\tilde{p}_{2},\ldots,\tilde{p}_{n}$ appear around $\tilde{p}_{1}$) such that
the points $\tilde{p}_{1},\ldots,\tilde{p}_{n}$ have increasing
$x$-coordinates.
## 4 Optimizing the Encoding
An ideal SAT encoding has the following three properties:
1. it is compact to reduce the cost of unit propagation (and cache misses);
2. it detects conflicts as early as possible (i.e., is domain consistent [13]); and
3. it contains variables that can generalize conflicts effectively.
The trusted encoding lacks these properties because it has $O(n^{6})$ clauses,
cannot quickly detect holes, and has no variables that can generalize
conflicts. In this section, we show how to modify the trusted encoding to
obtain all three properties. All the modifications are expressible in a proof
to ensure correctness.
### 4.1 Toward Domain Consistency
The effectiveness of an encoding depends on how quickly the solver can
determine a conflict. Given an assignment, we want to derive as much as
possible via unit propagation. This is known as _domain consistency_ [13]. The
trusted encoding does not have this property. We modify the encoding below to
boost propagation.
We borrow from Szekeres and Peters [32] the observation that a $k$-gon can be detected by looking at the assignments of $k-2$ orientation variables. For
example, if ${\mathsf{o}}_{a,b,c}$, ${\mathsf{o}}_{b,c,d}$,
${\mathsf{o}}_{c,d,e}$, and ${\mathsf{o}}_{d,e,f}$ with
$a\\!<\\!b\\!<\\!c\\!<\\!d\\!<\\!e\\!<\\!f$ are assigned to the same truth
value, then this implies that the points form a $6$-gon. An illustration of
this assignment is shown in Fig. 5 (left). We combine this with our
observation below that only a specific triangle has to be empty to infer a
$6$-hole somewhere.
Consider a scenario involving six points, $a$, $b$, $c$, $d$, $e$, and $f$, that are arranged from left to right, in which the orientation variables ${\mathsf{o}}_{a,b,c}$, ${\mathsf{o}}_{b,c,d}$, ${\mathsf{o}}_{c,d,e}$, and ${\mathsf{o}}_{d,e,f}$ are all set to false. As mentioned above, this implies that the points form a $6$-gon. If additionally the $3$-hole variable ${\mathsf{h}}_{a,c,e}$ is set to true, we can deduce the existence of a $6$-hole: the $6$-gon is either a $6$-hole or it contains a $6$-hole. The reasoning will be explained in the next paragraph. Note that in the trusted encoding of this scenario, only one out of the twenty literals in the corresponding ‘forbid $6$-hole’ clause is false. This suggests that the solver is still quite far from detecting a conflict.
A crucial insight underpinning our efficient encoding is the understanding
that the truth of the variable ${\mathsf{h}}_{a,c,e}$ alone is sufficient to
infer the existence of a $6$-hole. Consider the following rationale: If the
triangle $\\{a,b,c\\}$ contains any points, then there must be at least one
point inside the triangle that is closer to the line $ac$ than point $b$ is.
Let’s denote the nearest point as $i$. The proximity of $i$ to the line $ac$
guarantees that the triangle $\\{a,i,c\\}$ is empty. We can substitute $b$
with $i$ to create a smaller but similarly-shaped hexagon. This logic extends
to other triangles as well; specifically, the truth values of
${\mathsf{h}}_{c,d,e}$ and ${\mathsf{h}}_{a,e,f}$ are not necessary to infer
the presence of a $6$-hole.
Our insight emerged when we noticed that the SAT solver eliminated some
$3$-hole literals from previous encodings. This elimination occurred primarily
when only a few points existed between the leftmost and rightmost points of a
triangle. On the other hand, the solver struggled significantly to identify the redundancy of these $3$-hole literals when the leftmost and rightmost points of a triangle were far apart. Therefore, to enhance the encoding’s
effectiveness, we chose to omit these $3$-hole literals (instead of letting
the solver figure it out).
Figure 5: Three types of $6$-gons: left, all points are on one side of line
$a\mathit{f}$ (2 cases); middle, three points are on one side and one point is
on the other side of line $a\mathit{f}$ (8 cases); and right, two points are
on either side of line $a\mathit{f}$ (6 cases). If the marked triangle is
empty, we can conclude that there exists a $6$-hole.
Blocking the existence of a $6$-hole within the $6$-gon described above can be
achieved with the following clause (which simply negates the assignment):
$\displaystyle{\mathsf{o}}_{a,b,c}\lor{\mathsf{o}}_{b,c,d}\lor{\mathsf{o}}_{c,d,e}\lor{\mathsf{o}}_{d,e,f}\lor\overline{{\mathsf{h}}_{a,c,e}}$
(7)
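As a sketch, the clauses of type (7) for this particular configuration (all four orientation variables false) can be generated as follows; the remaining configurations, discussed next, are analogous with adjusted sign patterns and checked triangles:

```python
# Sketch: clauses (7) for the all-false configuration of Fig. 5 (left);
# o and h map sorted index triples to variable indices (assumed).
from itertools import combinations

def six_hole_clauses_left(n, o, h):
    return [[o[(a, b, c)], o[(b, c, d)], o[(c, d, e)], o[(d, e, f)],
             -h[(a, c, e)]]
            for a, b, c, d, e, f in combinations(range(1, n + 1), 6)]
```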
For each set of six points, 16 different configurations can result in a
$6$-hole. These configurations depend on which points are positioned above or
below the line connecting the leftmost and rightmost points among the six.
Three types of such configurations are illustrated in Fig. 5, while the
remaining configurations are symmetrical. It is important to note that this
adds $16\times\binom{n}{6}$ clauses to the formula, significantly increasing
its size. However, in Section 6.1, we will show that this improves
performance.
We can reduce the number of clauses by about 30% by strategically selecting
which triangle within a $6$-gon is checked to be empty (i.e., which $3$-hole
literal will be used). The two options are the triangle that includes the
leftmost point (as depicted in Fig. 5) and the triangle with the second-
leftmost point. If the leftmost point is $p_{1}$, we opt for the second-
leftmost point; otherwise, we choose the leftmost point. After propagating the
unit clauses ${\mathsf{o}}_{1,a,b}$, the clauses that describe configurations
with three points below the line $a\mathit{f}$ are subsumed by the clause for
the configuration with four points below the line $1\mathit{f}$.
### 4.2 An $O(n^{4})$ Encoding
This section is rather technical. It introduces auxiliary variables to reduce
our encoding to $O(n^{4})$ clauses. The process is known as structured bounded
variable addition (SBVA) [15], which in each step adds a new auxiliary
variable to encode a subset of the formula more compactly. SBVA heuristically
selects the auxiliary variables. Instead, we select them manually because it
is more effective, the new variables have meaning, and SBVA is extremely slow
on this problem. Eliminating the auxiliary variables results in the encoding
of Section 4.1.
The first type of these variables, ${\mathsf{u}}^{4}_{a,c,d}$, represents the
presence of a $4$-gon $\\{a,b,c,d\\}$ such that points $a,b,c,d$ appear in
this order from left to right and $b$ and $c$ are above the line $ad$.
Furthermore, the variables ${\mathsf{u}}^{5}_{a,d,e}$ indicate the existence
of a $5$-gon $\\{a,b,c,d,e\\}$ with the property that the points $a,b,c,d,e$
appear in this order from left to right, the points $b$, $c$, and $d$ are
above the line $ae$, and the triangle $\\{a,c,e\\}$ is empty. This
configuration implies the existence of a $5$-hole within $\\{a,b,c,d,e\\}$
using similar reasoning as described in Section 4.1. The logic enforcing these
properties is outlined below.
$\displaystyle\overline{{\mathsf{o}}_{a,b,c}}\land\overline{{\mathsf{o}}_{b,c,d}}\rightarrow{\mathsf{u}}^{4}_{a,c,d}$
$\displaystyle\mathrm{with~{}}a<b<c<d$ (8)
$\displaystyle{{\mathsf{u}}^{4}_{a,c,d}}\land\overline{{\mathsf{o}}_{c,d,e}}\land{\mathsf{h}}_{a,c,e}\rightarrow{\mathsf{u}}^{5}_{a,d,e}$
$\displaystyle\mathrm{with~{}}a<c<d<e$ (9)
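A sketch of the clause generation for definitions (8) and (9), assuming dictionaries `o`, `h`, `u4`, and `u5` that map sorted index tuples to variable indices:

```python
# Sketch: clauses for definitions (8) and (9), written as implications
# turned into single clauses.
from itertools import combinations

def u_variable_clauses(n, o, h, u4, u5):
    clauses = []
    for a, b, c, d in combinations(range(1, n + 1), 4):     # definition (8)
        clauses.append([o[(a, b, c)], o[(b, c, d)], u4[(a, c, d)]])
    for a, c, d, e in combinations(range(1, n + 1), 4):     # definition (9)
        clauses.append([-u4[(a, c, d)], o[(c, d, e)],
                        -h[(a, c, e)], u5[(a, d, e)]])
    return clauses
```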
In the following we distinguish five types of 6-holes by the number of points
that lie above/below the line connecting the leftmost and rightmost points.
Fig. 5 shows three configurations with four, three, and two points above the
line, respectively. The configurations with three and four points below the
line are symmetric but will be handled in a different and more efficient
manner below.
To block all $6$-holes with configurations having three or four points above
the line connecting the leftmost and rightmost points, we utilize the
variables ${\mathsf{u}}^{5}_{a,d,e}$. Specifically, a configuration with three
points above occurs if there is a point $b$ situated between $a$ and $e$,
lying below the line $ae$. Also, the configuration with four points above
arises when a point $f$, located to the right of $e$, falls below the line
$de$. The associated clauses for these configurations are detailed below. The
omission of 3-hole literals is justified by our knowledge that a $3$-hole
exists among $a$, $c$, and $e$ for some point $c$ positioned above the line
$ae$.
$\displaystyle\overline{{\mathsf{u}}^{5}_{a,d,e}}\lor\overline{{\mathsf{o}}_{a,b,e}}$
$\displaystyle\mathrm{with~{}}a<d<e,a<b<e$ (10)
$\displaystyle\overline{{\mathsf{u}}^{5}_{a,d,e}}\lor{\mathsf{o}}_{d,e,f}$
$\displaystyle\mathrm{with~{}}a<d<e<f$ (11)
To block the third type of 6-hole, we need to introduce variables
${\mathsf{v}}^{4}_{a,c,d}$ which, similarly to ${\mathsf{u}}^{4}_{a,c,d}$,
indicate the presence of a $4$-gon $\\{a,b,c,d\\}$ with the property that the
points $a,b,c,d$ appear in this order from left to right and $b$ and $c$ are
_below_ the line $ad$. The logic that encodes these variables is shown below.
$\displaystyle{\mathsf{o}}_{a,b,c}\land{\mathsf{o}}_{b,c,d}\rightarrow{\mathsf{v}}^{4}_{a,c,d}$
$\displaystyle\mathrm{with~{}}a<b<c<d$ (12)
Using the variables ${\mathsf{u}}^{4}_{a,c,d}$ and
${\mathsf{v}}^{4}_{a,c^{\prime}\\!,d}$ we are now ready to block the
configuration of the third type of a 6-hole where two points lie above and two
points lie below the line connecting the leftmost and rightmost points; see
Fig. 5 (right). Recall that ${\mathsf{u}}^{4}_{a,c,d}$ denotes a $4$-gon
situated above the line $ad$, with $c$ being the second-rightmost point. Also,
${\mathsf{v}}^{4}_{a,c^{\prime}\\!,d}$ denotes a $4$-gon below the line $ad$,
with $c^{\prime}$ as the second-rightmost point. A $6$-hole exists if both
${\mathsf{u}}^{4}_{a,c,d}$ and ${\mathsf{v}}^{4}_{a,c^{\prime},d}$ are true
for some points $a$ and $d$ when there are no points within the triangle
formed by $a$, $c$, and $c^{\prime}$. Or, in clauses:
$\displaystyle\overline{{\mathsf{u}}^{4}_{a,c,d}}\lor\overline{{\mathsf{v}}^{4}_{a,c^{\prime}\\!,d}}\lor\overline{{\mathsf{h}}_{a,c,c^{\prime}}}$
$\displaystyle\mathrm{with~{}}a<c<c^{\prime}<d$ (13)
$\displaystyle\overline{{\mathsf{u}}^{4}_{a,c,d}}\lor\overline{{\mathsf{v}}^{4}_{a,c^{\prime}\\!,d}}\lor\overline{{\mathsf{h}}_{a,c^{\prime},c}}$
$\displaystyle\mathrm{with~{}}a<c^{\prime}<c<d$ (14)
The remaining configurations to consider involve those with three or four
points below the line joining the leftmost and rightmost points. As we
discussed at the end of Section 4.1, these configurations can be encoded more
compactly. We only need to block the existence of $5$-holes $\\{a,b,c,d,e\\}$
with the property that the points $1,a,b,c,d,e$ appear in this order from left
to right and the points $b$, $c$, and $d$ are below the line $ae$. The
reasoning is as follows: if such a $5$-hole exists, it can be expanded into a
$6$-hole by the closest point to line $ab$ within the triangle $\\{1,a,b\\}$.
If the triangle is empty, this is point 1. Additionally, by blocking these
specific $5$-holes, we simultaneously block all $6$-holes with three or four
points below the line between the leftmost and rightmost points. Following the
earlier cases, we only require a single $3$-hole literal which ensures that
the triangle $\\{a,c,e\\}$ is empty. The clauses to block these $5$-holes are
as follows:
$\displaystyle\overline{{\mathsf{v}}^{4}_{a,c,d}}\lor\overline{{\mathsf{o}}_{c,d,e}}\lor\overline{{\mathsf{h}}_{a,c,e}}$
$\displaystyle\mathrm{with~{}}1<a<c<d<e$ (15)
This encoding uses $O(n^{4})$ clauses, while it has the same propagation power
as having all $16\times\binom{n}{6}$ clauses in the domain-consistent encoding
of Section 4.1. In general, the trusted encoding for $k$-holes uses $O(n^{k})$
clauses, while the optimized encoding when generalized to $k$-holes has only
$O(kn^{4})$ clauses, or $O(n^{4})$ for every fixed $k$. An encoding of size
$O(n^{4})$ for $k$-gons is analogous: simply remove the $3$-hole literals from
the clauses.
### 4.3 Minor Optimizations
We can make the encoding even more compact by removing a large fraction of the
clauses from the trusted encoding. Note that constraints to forbid $6$-holes
contain only negative $3$-hole literals. That means that only half of the
constraints to define the $3$-hole variables are actually required. This in
turn shows that only half of the inside variable definitions are required. So,
instead of (1), (2), and (3), it suffices to use the following:
$\displaystyle{\mathsf{c}}_{i;a,b,c}\rightarrow\Big(\big({\mathsf{o}}_{a,b,c}\rightarrow(\overline{{\mathsf{o}}_{a,i,b}}\land{\mathsf{o}}_{a,i,c})\big)\land\big(\overline{{\mathsf{o}}_{a,b,c}}\rightarrow({\mathsf{o}}_{a,i,b}\land\overline{{\mathsf{o}}_{a,i,c}})\big)\Big)$ (16)
$\displaystyle{\mathsf{c}}_{i;a,b,c}\rightarrow\Big(\big({\mathsf{o}}_{a,b,c}\rightarrow({\mathsf{o}}_{a,i,c}\land\overline{{\mathsf{o}}_{b,i,c}})\big)\land\big(\overline{{\mathsf{o}}_{a,b,c}}\rightarrow(\overline{{\mathsf{o}}_{a,i,c}}\land{\mathsf{o}}_{b,i,c})\big)\Big)$ (17)
$\displaystyle{\mathsf{h}}_{a,b,c}\leftarrow\bigwedge_{\begin{subarray}{c}a<i<c\\ i\neq b\end{subarray}}\overline{{\mathsf{c}}_{i;a,b,c}}.$ (18)
It is worth noting that the SAT preprocessing technique blocked-clause
elimination (BCE) will automatically remove the clauses we omit [23]. However,
for efficiency reasons, BCE is turned off by default in top-tier solvers,
including the solver CaDiCaL, which we used for the proof. During initial
experiments, we observed that omitting these clauses slightly improves the
performance.
Finally, the variables ${\mathsf{u}}^{4}_{a,c,d}$ and
${\mathsf{v}}^{4}_{a,c,d}$ can be used to more compactly encode the clauses
(6). We can replace the clauses (6) with:
$\displaystyle(\overline{{\mathsf{u}}^{4}_{a,c,d}}\lor\overline{{\mathsf{o}}_{a,c,d}})\land(\overline{{\mathsf{v}}^{4}_{a,c,d}}\lor{\mathsf{o}}_{a,c,d})$
$\displaystyle\mathrm{with~{}}a<c<d$ (19)
### 4.4 Breaking the Reflection Symmetry
Holes are invariant to reflectional symmetry: If we mirror a point set $S$,
then the counterclockwise order around the extremal point $p_{1}$ (which is
$p_{2},\ldots,p_{n}$) is reversed (to $p_{n},\ldots,p_{2}$). By relabeling
points to preserve the counterclockwise order, we preserve
${\mathsf{o}}_{1,a,b}=true$ for $a<b$, while the original orientation
variables ${\mathsf{o}}_{a,b,c}$ with $2\leq a<b<c\leq n$ are mapped to
${\mathsf{o}}_{n-c+2,n-b+2,n-a+2}$. A similar mapping applies to the
containment and $3$-hole variables. The trusted encoding maps almost onto
itself, except for the missing reflection clauses of (5) and (6). As a fix for
verification, we add each reflected clause using one resolution step.
Since only a tiny fraction of triple orientations map to themselves (so-called
_involutions_), breaking the reflectional symmetry reduces the search space by
a factor of almost 2. We partially break this symmetry by constraining the
variables ${\mathsf{o}}_{a,a+1,a+2}$ with $2\leq a\leq n-2$. We used the
symmetry-breaking predicate below, because it is compatible with our cube
generation, described in Section 5.
${\mathsf{o}}_{\lceil\frac{n}{2}\rceil-1,\lceil\frac{n}{2}\rceil,\lceil\frac{n}{2}\rceil+1},\dots,{\mathsf{o}}_{2,3,4}\preccurlyeq{\mathsf{o}}_{\lfloor\frac{n}{2}\rfloor+1,\lfloor\frac{n}{2}\rfloor+2,\lfloor\frac{n}{2}\rfloor+3},\dots,{\mathsf{o}}_{n-2,n-1,n}$
(20)
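Constraint (20) is a lexicographic comparison of two equal-length variable sequences. One textbook CNF encoding of such a constraint (a sketch, not necessarily the exact clauses used in our formula) chains auxiliary "prefixes are equal" variables:

```python
# Sketch: CNF for xs <=_lex ys; fresh() returns an unused variable index.
def lex_leq(xs, ys, fresh):
    clauses = [[-xs[0], ys[0]]]                # x1 <= y1
    a_prev = None
    for i in range(len(xs) - 1):
        a = fresh()                            # a_i: prefixes equal up to i
        pre = [] if a_prev is None else [-a_prev]
        clauses.append(pre + [-xs[i], -ys[i], a])    # equal (both true)
        clauses.append(pre + [xs[i], ys[i], a])      # equal (both false)
        clauses.append([-a, -xs[i + 1], ys[i + 1]])  # then x_{i+1} <= y_{i+1}
        a_prev = a
    return clauses
```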
One symmetry that remains is the choice of the first point. Any point on the
convex hull could be picked for this purpose, and breaking it can potentially
reduce the search space by at least a factor of 3. However, breaking this
symmetry effectively is complicated, and we therefore left it on the table.
## 5 Problem Partitioning
The formula to determine that $h(6)\leq 30$ requires CPU years to solve. To
compute this in reasonable time, the problem needs to be partitioned into many
small subproblems that can be solved in parallel. Although tools exist to
construct partitionings automatically [20], we observed that this partitioning
was ineffective. As a consequence, we focused on manual partitioning.
During our initial experiments, we determined which orientation variables were
suitable for splitting. We used the formula for $g(6)\leq 17$ for this purpose
because its runtime is large enough to make meaningful observations and small
enough to explore many options. It turned out that the orientation variables
${\mathsf{o}}_{a,a+1,a+2}$ were the most effective choice for splitting the
problem. Assigning one of these ${\mathsf{o}}_{a,a+1,a+2}$ variables to
true/false roughly halves the search space and reduces the runtime by a factor
of roughly 2.
A problem with $n$ points has $n-3$ free variables of the form
${\mathsf{o}}_{a,a+1,a+2}$, as the variable ${\mathsf{o}}_{1,2,3}$ is already
fixed by the symmetry breaking. One cannot generate $2^{n-3}$ equally easy
subproblems, because
$(\overline{{\mathsf{o}}_{a,a+1,a+2}}\lor\overline{{\mathsf{o}}_{a+1,a+2,a+3}}\lor\overline{{\mathsf{o}}_{a+2,a+3,a+4}})$
and
$({\mathsf{o}}_{a,a+1,a+2}\lor{\mathsf{o}}_{a+1,a+2,a+3}\lor{\mathsf{o}}_{a+2,a+3,a+4}\lor{\mathsf{o}}_{a+3,a+4,a+5})$
follow directly from the optimized formula after unit propagation. Thus,
assigning three consecutive ${\mathsf{o}}_{a,a+1,a+2}$ variables to true
results directly in a falsified clause, as it would create a 6-hole among the
points $p_{1}$, $p_{a}$, $\dots$, $p_{a+4}$. The same holds for four
consecutive ${\mathsf{o}}_{a,a+1,a+2}$ variables assigned to false, which
would create a 6-hole among the points $p_{a}$, $\dots$, $p_{a+5}$. The
asymmetry is due to fixing the variables ${\mathsf{o}}_{1,a,b}$ to true. If we
assigned them to false, then the opposite would happen.
We observed that limiting the partition to variables involving the middle
points reduces the total runtime. We will demonstrate such experiments in
Section 6.2. So, to obtain suitable cubes, we considered all assignments of
the sequence ${\mathsf{o}}_{a,a+1,a+2}$, ${\mathsf{o}}_{a+1,a+2,a+3}$,
$\ldots$, ${\mathsf{o}}_{a+\ell-1,a+\ell,a+\ell+1}$ for a suitable constant
$\ell$ and $a=\frac{n+\ell}{2}-1$ such that the above properties are
fulfilled, that is, no three consecutive entries are true and no four
consecutive entries are false. In the following we refer to $\ell$ as the
_length_ of the cube-space. In our experiments of Section 6.1, we observed
that picking $\ell<n-3$ reduces the overall computational costs. Specifically,
for the $h(6)\leq 30$ experiments, we use length $\ell=21$.
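The cube-space can be enumerated by brute force; a sketch (ignoring the placement offset $a$ and any interaction with variables outside the window):

```python
# Sketch: all sign patterns of length ell with no three consecutive True
# and no four consecutive False entries.
from itertools import product

def cube_space(ell):
    def ok(bits):
        no3 = not any(all(bits[i:i + 3]) for i in range(ell - 2))
        no4 = not any(not any(bits[i:i + 4]) for i in range(ell - 3))
        return no3 and no4
    return [bits for bits in product([True, False], repeat=ell) if ok(bits)]

print(len(cube_space(12)))  # cube count for a small window length
```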
Our initial experiments showed that the runtime of cubes grows exponentially
with the number of occurrences of the alternating pattern
${\mathsf{o}}_{b,b+1,b+2}=+$, ${\mathsf{o}}_{b+1,b+2,b+3}=-$,
${\mathsf{o}}_{b+2,b+3,b+4}=+$. As a consequence, the hardest cube for
$h(6)\leq 30$ would still require days of computing time, thereby limiting
parallelism. To deal with this issue, we further partition cubes that contain
this pattern. For each occurrence of the alternating pattern in a cube, we
split the cube into two cubes: one that extends it with
${\mathsf{o}}_{b,b+2,b+4}$ and one that extends it with
$\overline{{\mathsf{o}}_{b,b+2,b+4}}$. Note that we do this for each
occurrence. So a cube containing $m$ of these patterns is split into $2^{m}$
cubes. This reduced the computational costs of the hardest cubes to less than
an hour.
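A sketch of this refinement step, where `extra_vars` holds the DIMACS indices of the variables ${\mathsf{o}}_{b,b+2,b+4}$ for the $m$ pattern occurrences found in the cube:

```python
# Sketch: extend a cube (a list of literals) with every sign combination
# of the extra variables, yielding 2^m refined cubes.
from itertools import product

def refine(cube, extra_vars):
    return [cube + [s * v for s, v in zip(signs, extra_vars)]
            for signs in product((1, -1), repeat=len(extra_vars))]
```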
## 6 Evaluation
For the experiments, we use the solver CaDiCaL (version 1.9.3) [1], which is
currently the only top-tier solver that can produce LRAT proofs directly. The
efficient, verified checker cakeLPR [33] validated the proofs. We run CaDiCaL
with the command-line options --sat --reducetarget=10 --forcephase --phase=0. The first option reduces the number of restarts. This is
typically more useful for satisfiable formulas (as the name suggests), but in
this case it is also helpful for unsatisfiable formulas. The second option
turns off the aggressive clause-deletion strategy, which is usually helpful for
large formulas. The last two options tell the solver to assign decision
variables to false, a MiniSAT heuristic [8]. Each of these settings improved
performance compared to the default setting on the formulas used in the
evaluation. Experiments were run on a specialized, internal Amazon Web
Services solver framework that provides cloud-level scaling. The framework
used m6i.xlarge instances, which have two physical cores and 16 GB of memory.
### 6.1 Impact of the Encoding
To illustrate the impact of the encoding on the performance, we show some
statistics on various encodings of the $h(6)\leq 30$ formula. We restricted
this experiment to solving a single randomly-picked subproblem. For other
subproblems, the results were similar. We experimented with five encodings:
* •
$T$: the trusted encoding presented in Section 3
* •
$O_{1}$: $T$ with (4) replaced by the domain-consistent encoding (7) of
Section 4.1
* •
$O_{2}$: $O_{1}$ with (7) replaced by the $O(n^{4})$ encoding of Section 4.2
* •
$O_{3}$: $O_{2}$ with the minor optimizations that replace (1), (2), (3), and
(6) by (16), (17), (18), and (19), respectively; see Section 4.3
* •
$O_{4}$: $O_{3}$ extended with the symmetry-breaking predicate from Section
4.4
Table 1: Comparison of the different encodings on a randomly-picked subproblem

formula | $\\#$variables | $\\#$clauses | $\\#$conflicts | $\\#$propagations | time (s)
---|---|---|---|---|---
$T$ | 62 930 | 1 171 942 | 1 082 569 | 1 338 662 627 | 243.07
$O_{1}$ | 62 930 | 5 823 078 | 228 838 | 282 774 472 | 136.20
$O_{2}$ | 75 110 | 667 005 | 211 272 | 343 388 591 | 45.49
$O_{3}$ | 75 110 | 436 047 | 234 755 | 340 387 692 | 39.46
$O_{4}$ | 75 110 | 444 238 | 234 587 | 342 904 580 | 39.41
Table 1 summarizes the results. The domain-consistent encoding can be solved
more efficiently than the trusted encoding while having over five times as
many clauses. The reason for the faster performance becomes clear when looking
at the number of conflicts and propagations. The domain-consistent encoding
requires just over a fifth as many conflicts and propagations to determine
unsatisfiability. The auxiliary variables that enable the $O(n^{4})$ encoding
reduce the size by almost an order of magnitude. The resulting formula can be
solved three times as fast, while using a similar number of conflicts and
propagations. The minor optimizations reduce the size by roughly a third and
further improve the runtime. Finally, the addition of the symmetry-breaking
predicate doesn’t impact the performance. Its main purpose is to halve the
number of cubes.
We also solved the optimized encoding ($O_{3}$) of the formula $g(6)\leq 17$,
which takes 41.99 seconds using 623 540 conflicts. Adding the symmetry-
breaking predicate ($O_{4}$) reduces the runtime to 17.39 seconds using 316
785 conflicts. So the symmetry-breaking predicate reduces the number of
conflicts by roughly a factor of 2 (as expected) while the runtime is reduced
even more. The latter is due to the slowdown caused by maintaining more
conflict clauses while solving the formula without the symmetry-breaking
predicate.
Table 2: Runtime comparison for Theorem 1.2 using different values of parameter $\ell$

$\ell$ | $\\#$cubes | average time (s) | max time (s) | total time (h)
---|---|---|---|---
21 | 312 418 | 6.99 | 66.86 | 606.55
19 | 89 384 | 13.61 | 123.70 | 337.96
17 | 25 663 | 34.29 | 293.10 | 244.50
15 | 7393 | 112.61 | 949.50 | 231.27
13 | 2149 | 431.26 | 3 347.59 | 257.44
11 | 629 | 1 847.46 | 11 844.05 | 322.79
9 | 188 | 7 745.14 | 32 329.05 | 404.47
7 | 57 | 32 905.90 | 105 937.76 | 521.01
### 6.2 Impact of the Partitioning
All known point sets witnessing the lower bound $h(6)\geq 30$ contain a
$7$-gon. To obtain a possibly easier problem to test and compare heuristics,
we studied how many points are required to guarantee the existence of a
$6$-hole or a $7$-gon. It turned out that the answer is at most 24 (Theorem
1.2). Computing this is still hard but substantially easier compared to our
main result. During our experiments, we observed that increasing the number of
cubes eventually increases the total runtime. We therefore explored which
parameters produce the lowest total runtime. The experimental results are
shown in Table 2 for various values for the parameter $\ell$. Incrementing
$\ell$ by 2 increases the number of cubes roughly by a factor of 3. The
optimal total runtime is achieved for $\ell=15$, which is a 62% reduction
compared to full partitioning ($\ell=21$). Note that the solving time for the
hardest cube (the max column) increases substantially when using fewer cubes.
This in turn reduces the effectiveness of parallelism. The runtime without
partitioning is expected to be about 1000 CPU hours, so partitioning achieves
super-linear speedups and more than a factor of 4 speedup for $\ell=15$. Fig.
6 shows plots of cumulatively solved cubes, with similar curves for all
settings.
[Plot: cumulative fraction of solved subproblems versus runtime in seconds (log scale), one curve per $\ell\in\\{7,9,11,13,15,17,19,21\\}$.]
Figure 6: Runtime to solve the subproblems of Theorem 1.2 for various
splitting parameters
We also evaluated the off-the-shelf tool March for partitioning. This tool was
used to prove Schur Number Five [18]. We used option -d 13 to cut off
partitioning at depth 13 to create 8192 cubes. That partition turned out to be
very poor: at least 18 cubes took over 100 000 seconds. The expected total
costs are about 10 000 CPU hours, so 10 times the estimated partition-free
runtime.
A partitioning can also guide the search to solve the formula $g(6)\leq 17$.
The partitioning of this formula using $\ell=12$ results in 1108 cubes. If we
add these cubes to the formula with the symmetry-predicate ($O_{4}$) in the
iCNF format [35], then CaDiCaL can solve it in 8.53 seconds using 205 153
conflicts.
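For reference, a sketch of emitting such a file: the iCNF format is DIMACS-like, with the header `p inccnf` and each cube written on a line starting with `a`:

```python
# Sketch: write clauses and cubes (lists of integer literals) in iCNF.
def write_icnf(path, clauses, cubes):
    with open(path, "w") as f:
        f.write("p inccnf\n")
        for clause in clauses:
            f.write(" ".join(map(str, clause)) + " 0\n")
        for cube in cubes:
            f.write("a " + " ".join(map(str, cube)) + " 0\n")
```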
### 6.3 Theorem 1.1
To show that the optimized encoding for $h(6)\leq 30$ is unsatisfiable, we
partitioned the Theorem 1.1 problem with the splitting algorithm described in
Section 5 with parameter $\ell=21$, which results in $312\,418$ cubes. We
picked this setting based on the experiments shown in Table 2. Fig. 7 shows
the runtime of solving the subproblems. The average runtime was just below 200
seconds. All subproblems were solved in less than an hour. Almost $24\,000$
subproblems could be solved within a second. For these subproblems, the cube
resulted directly in a conflict, so the solver didn’t have to perform any
search.
The total runtime is close to 17 300 CPU hours, or slightly less than 2 CPU
years. We could achieve practically a linear speedup using 1000 m6i.xlarge
instances. The timings include producing and validating the LRAT proof. We
chose the LRAT proof format, because it allows concurrent checking, as
described in Section 7.1. The combined size of the proofs is 180 terabytes in
the uncompressed LRAT format used by the cakeLPR checker. In past verification
efforts of hard math problems, the produced proofs were in the DRAT format.
For this problem, the LRAT proofs are roughly 2.3 times as large as the
corresponding DRAT proof. We estimate that the DRAT proof would have been 78
terabytes in size, so approximately one third of the Pythagorean Triples proof
[19]. For all problems, the checker was able to easily keep up with the solver
while running on a different core, thereby finishing as soon as the solver was
done.
100K200K300K$10^{-1}$$10^{0}$$10^{1}$$10^{2}$$10^{3}$runtime (seconds) Figure
7: Reported process time to solve the subproblems of $h(6)\leq 30$ with proof
logging while running the cakeLPR verified checker on another core.
### 6.4 Lower-Bound Experiments
coordinates: (1, 1260), (16, 743), (22, 531), (37, 0), (306, 592), (310, 531), (366, 552), (371, 487), (374, 525), (392, 575), (396, 613), (410, 539), (416, 550), (426, 526), (434, 552), (436, 535), (446, 565), (449, 518), (450, 498), (453, 542), (458, 526), (489, 537), (492, 502), (496, 579), (516, 467), (552, 502), (754, 697), (777, 194), (1259, 320)
Figure 8: A set of 29 points with no $6$-hole and no $8$-gon [28]. The three
points forming the convex hull are slightly moved outward to avoid the visual
confusion that some points appear collinear. The lines show the six convex
hull layers.
Overmars constructed a 29-point set without a $6$-hole [28]; see Fig. 8. The layers of the convex hull have sizes 3, 4, 7, 7, 7, 1. The paper mentions that the convex hull layers of all $6$-hole-free 29-point sets found by the local search were the same.
We used our encoding to find many $6$-hole-free 29-point sets. We partitioned
the problem using $\ell=22$, which results in $581\,428$ cubes. Out of those
cubes, $116\,305$ ($20.00\%$) were satisfiable. For all the cubes, the first
solution found by the solver had the same layers of the convex hull. We also
tested for each of these cubes whether there is a solution for which either
the first layer has more than 3 points or the second layer has exactly three
points. This can be done by adding a single clause to the formula asking
whether there is a point below the line $p_{2}p_{29}$ or whether point $p_{4}$
is in the triangle $\\{p_{3},p_{27},p_{28}\\}$ or $p_{27}$ is in the triangle
$\\{p_{3},p_{27},p_{28}\\}$. Adding that clause made all cubes unsatisfiable.
The result above means that all $6$-hole-free 29-point sets have exactly 3
points in the convex hull and the next layer has at least 4 points. Note that
this implies that there cannot be a $6$-hole-free 30-point set.
Although we haven’t verified it yet, it seems likely that the convex hull
layers of all $6$-hole-free 29-point sets are the same. As a consequence, each
of those point sets has at least three $7$-gons.
## 7 Verification
We applied three verification steps to increase trust in the correctness of
our results. In the first step, we check the results produced by the SAT
solver. The second step consists of checking the correctness of the
optimizations discussed in Section 4. In the third step, we validate that the
case split covers all cases.
### 7.1 Concurrent Solving and Checking
The most commonly used approach to validate SAT-solving results works as
follows. First, a SAT solver produces a DRAT proof. This proof is checked and
trimmed using a fast, but unverified tool that produces a LRAT proof. The
difference between a DRAT proof and a LRAT proof is that the latter contains
hints. The LRAT proof is then validated by a formally-verified checker, which
uses the hints to obtain efficient performance.
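As a hand-made toy illustration (not taken from the actual proofs): for a formula whose first two clauses are $(x_{1}\lor x_{2})$ with id 1 and $(\overline{x_{1}}\lor x_{2})$ with id 2, the derivation of the unit clause $x_{2}$ looks as follows in the two formats:

```
DRAT:  2 0             the checker must find the antecedents itself
LRAT:  3 2 0 1 2 0     new clause id 3, literals "2 0", hint ids "1 2"
```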
Recently, the SAT solver CaDiCaL added support for producing LRAT proofs
directly (since version 1.7.0). This allows us to produce the proof and
validate it concurrently. To the best of our knowledge, we are the first to
take advantage of this possibility. CaDiCaL sends its proof to a unix pipe and
the verified checker cakeLPR reads it from the pipe. This tool chain works
remarkably well, adds little performance overhead, and avoids needing to store
large files.
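A sketch of this setup (the binary names and the proof-related flag are assumptions based on the tools' documentation and may differ between versions):

```python
# Sketch: solve and check concurrently through a named pipe (FIFO).
import os
import subprocess
import tempfile

cnf = "formula.cnf"                                # hypothetical input file
fifo = os.path.join(tempfile.mkdtemp(), "proof.lrat")
os.mkfifo(fifo)
checker = subprocess.Popen(["cake_lpr", cnf, fifo])           # reads proof
solver = subprocess.Popen(["cadical", "--lrat", cnf, fifo])   # writes proof
print("solver exit:", solver.wait(), "checker exit:", checker.wait())
```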
### 7.2 Reencoding Proof
We validated the four optimizations presented in Section 4. Only the trusted
encoding has the reflection symmetry, as none of the optimizations preserve
this symmetry. Each of the clauses in the symmetry-breaking predicate has the
substitution redundancy (SR) property [6] with respect to the trusted
encoding. However, there doesn't exist an SR checker. Instead, we transformed
the SR check into a sequence of DRAT addition and deletion steps. This is
feasible for small point sets (up to 10), but is too expensive for the full
problem. It may therefore be more practical to verify this optimization in a
theorem prover.
Transforming the trusted encoding into the domain-consistent one is
challenging to validate because the solver cannot easily infer the existence
of a $6$-hole using only the clauses (7). Since we are replacing (4) by (7)
and clause deletion trivially preserves satisfiability, we only need to check
whether each of the clauses (7) is entailed by the trusted encoding. This can
be achieved by constructing a formula that asks whether there exists an
assignment that satisfies the trusted encoding, but falsifies at least one of
the clauses (7). We validated that this formula is unsatisfiable for $n\leq
12$ (around 300 seconds); for this purpose we implemented an entailment tool, see https://github.com/marijnheule/entailment. The formula becomes challenging to
solve for larger $n$. However, the validation for small $n$ provides
substantial evidence of the correctness of the encoding and the
implementation.
Checking the correctness of the other two optimizations is easier. Observe
that one can obtain the domain-consistent encoding from the $O(n^{4})$
encoding by applying Davis-Putnam resolution [7] on the auxiliary variables.
This can be expressed using DRAT steps. The DRAT derivation from the domain-
consistent encoding to the $O(n^{4})$ encoding applies all these steps in
reverse order. The minor optimizations mostly delete clauses, which is
trivially correct for proofs of unsatisfiability. The clauses (19) have the
RAT property on the auxiliary variables and their redundancy is easily checked
using a DRAT checker.
### 7.3 Tautology Proof
The final validation step consists of checking whether the partition of the
problem covers the entire search space. This part has also been called the
tautology proof [18], because in most cases it needs to determine whether the
disjunction of cubes is a tautology. We take a slightly different approach and
validate that the following formula is unsatisfiable: the conjunction of the
negated cubes; the symmetry-breaking predicate; and some clauses from the
formula.
Recall that we omitted various cubes because they resulted in a conflict with
the clauses
$(\overline{{\mathsf{o}}_{a,a+1,a+2}}\lor\overline{{\mathsf{o}}_{a+1,a+2,a+3}}\lor\overline{{\mathsf{o}}_{a+2,a+3,a+4}})$
with $a\in\\{2,\dots,n-4\\}$ and
$({\mathsf{o}}_{a,a+1,a+2}\lor{\mathsf{o}}_{a+1,a+2,a+3}\lor{\mathsf{o}}_{a+2,a+3,a+4}\lor{\mathsf{o}}_{a+3,a+4,a+5})$
with $a\in\\{2,\dots,n-5\\}$. We checked with DRAT-trim [17] that these clauses are
implied by the optimized formulas, which takes 0.3 CPU seconds in total. We
combined them with the negated cubes and the symmetry-breaking predicate,
which results in an unsatisfiable formula that can be solved by CaDiCaL in 12
CPU seconds.
## 8 Conclusion
We closed the final case regarding $k$-holes in the plane by showing
$h(6)=30$. This is another example that SAT-solving techniques can effectively
solve a range of long-standing open problems in mathematics. Other successes
include the Pythagorean Triples problem [19], Schur Number Five [18], and
Keller’s Conjecture [5]. Also, we recomputed $g(6)=17$ many orders of
magnitude faster compared to the original computation by Szekeres and Peters
[32] even when taking into account the difference in hardware. SAT techniques
overwhelmingly outperformed their dedicated approach. Key contributions
include an effective, compact encoding and a partitioning strategy enabling
linear speedups even when using thousands of cores. We also presented a
new concurrent proof-checking procedure to significantly decrease proof
verification costs.
Although the tools are fully automatic, several aspects of our solution
require significant user ingenuity. In particular, we had to develop encoding
optimizations and a search-space partitioning strategy to fully leverage the
power of the tools. Constructing the domain-consistent encoding automatically
appears challenging. Most other optimizations can be achieved automatically,
for example via structured bounded variable elimination [15]. However, the
resulting formula cannot be solved nearly as efficiently as the presented one.
Substantial research into generating effective partitionings is required to
enable non-experts to solve such problems. Although we validated most
optimization steps, formally verifying the trusted encoding or even the
domain-consistent encoding would further increase trust in the correctness of
our result.
#### 8.0.1 Acknowledgements
Heule is partially supported by NSF grant CCF-2108521. Scheucher was supported
by the DFG grant SCHE 2214/1-1. We thank Donald Knuth, Benjamin Kiesl-Reiter,
John Mackey, Robert Jones, and the reviewers for their valuable feedback. The
authors met for the first time during Dagstuhl Seminar 23261 “SAT Encodings
and Beyond”, which kicked off the research published in this paper. We thank
Helena Bergold for the visualization in Fig. 9.
## References
* [1] Biere, A., Fazekas, K., Fleury, M., Heisinger, M.: CaDiCaL, Kissat, Paracooba, Plingeling and Treengeling entering the SAT Competition 2020. In: Proc. of SAT Competition 2020 – Solver and Benchmark Descriptions. Department of Computer Science Report Series B, vol. B-2020-1, pp. 51–53. University of Helsinki (2020), http://hdl.handle.net/10138/318754
* [2] Biere, A., Heule, M., van Maaren, H., Walsh, T. (eds.): Handbook of Satisfiability, Frontiers in Artificial Intelligence and Applications, vol. 336. IOS Press, second edn. (2021), https://www.iospress.com/catalog/books/handbook-of-satisfiability-2
* [3] Björner, A., Las Vergnas, M., White, N., Sturmfels, B., Ziegler, G.M.: Oriented Matroids, Encyclopedia of Mathematics and its Applications, vol. 46. Cambridge University Press, 2 edn. (1999). https://doi.org/10/bhb4rn
* [4] Bokowski, J., Richter, J.: On the Finding of Final Polynomials. European Journal of Combinatorics 11(1), 21–34 (1990). https://doi.org/10/gsjw3n
* [5] Brakensiek, J., Heule, M.J.H., Mackey, J., Narváez, D.E.: The resolution of Keller's conjecture. Journal of Automated Reasoning 66(3), 277–300 (2022). https://doi.org/10.1007/S10817-022-09623-5
* [6] Buss, S., Thapen, N.: DRAT and propagation redundancy proofs without new variables. Logical Methods in Computer Science 17(2) (2021). https://doi.org/10/mbdx
* [7] Davis, M., Putnam, H.: A computing procedure for quantification theory. Journal of the ACM 7(3), 201–215 (1960). https://doi.org/10/bw9h55
* [8] Eén, N., Sörensson, N.: An extensible sat-solver. In: Theory and Applications of Satisfiability Testing. pp. 502–518. Springer (2004)
* [9] Erdős, P., Szekeres, G.: A combinatorial problem in geometry. Compositio Mathematica 2, 463–470 (1935), http://www.renyi.hu/~p_erdos/1935-01.pdf
* [10] Erdős, P., Szekeres, G.: On some extremum problems in elementary geometry. Annales Universitatis Scientiarium Budapestinensis de Rolando Eötvös Nominatae, Sectio Mathematica 3–4, 53–63 (1960), https://www.renyi.hu/~p_erdos/1960-09.pdf
* [11] Felsner, S., Goodman, J.E.: Pseudoline Arrangements. In: Toth, O’Rourke, Goodman (eds.) Handbook of Discrete and Computational Geometry. CRC Press, third edn. (2018). https://doi.org/10/gh9v6f
* [12] Felsner, S., Weil, H.: Sweeps, arrangements and signotopes. Discrete Applied Mathematics 109(1), 67–94 (2001). https://doi.org/10/dc4tb4
* [13] Gent, I.P.: Arc consistency in SAT. In: European Conference on Artificial Intelligence (ECAI 2002). FAIA, vol. 77, pp. 121–125. IOS Press (2002), https://frontiersinai.com/ecai/ecai2002/pdf/p0121.pdf
* [14] Gerken, T.: Empty Convex Hexagons in Planar Point Sets. Discrete & Computational Geometry 39(1), 239–272 (2008). https://doi.org/10/c4kn3s
* [15] Haberlandt, A., Green, H., Heule, M.J.H.: Effective Auxiliary Variables via Structured Reencoding. In: International Conference on Theory and Applications of Satisfiability Testing (SAT 2023). Leibniz International Proceedings in Informatics (LIPIcs), vol. 271, pp. 11:1–11:19. Dagstuhl, Dagstuhl, Germany (2023). https://doi.org/10.4230/LIPIcs.SAT.2023.11
* [16] Harborth, H.: Konvexe Fünfecke in ebenen Punktmengen. Elemente der Mathematik 33, 116–118 (1978), http://www.digizeitschriften.de/dms/img/?PID=GDZPPN002079801
* [17] Heule, M.J.H.: The DRAT format and DRAT-trim checker (2016), arXiv:1610.06229
* [18] Heule, M.J.H.: Schur number five. In: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence. AAAI’18, AAAI Press (2018)
* [19] Heule, M.J.H., Kullmann, O., Marek, V.W.: Solving and verifying the Boolean Pythagorean triples problem via cube-and-conquer. In: Theory and Applications of Satisfiability Testing (SAT 2016). LNCS, vol. 9710, pp. 228–245. Springer (2016). https://doi.org/10/gkkscn
* [20] Heule, M.J.H., Kullmann, O., Wieringa, S., Biere, A.: Cube and Conquer: Guiding CDCL SAT Solvers by Lookaheads. In: Hardware and Software: Verification and Testing. pp. 50–65. Springer (2012). https://doi.org/10/f3ss29
* [21] Holmsen, A.F., Mojarrad, H.N., Pach, J., Tardos, G.: Two extensions of the Erdős–Szekeres problem. Journal of the European Mathematical Society pp. 3981–3995 (2020). https://doi.org/10/gsjw4m
* [22] Horton, J.: Sets with no empty convex $7$-gons. Canadian Mathematical Bulletin 26, 482–484 (1983). https://doi.org/10/chf6dk
* [23] Järvisalo, M., Biere, A., Heule, M.J.H.: Blocked clause elimination. In: Tools and Algorithms for the Construction and Analysis of Systems. pp. 129–144. Springer (2010)
* [24] Kalbfleisch, J., Kalbfleisch, J., Stanton, R.: A combinatorial problem on convex regions. In: Proc. Louisiana Conf. Combinatorics, Graph Theory and Computing, Congressus Numerantium, vol. 1, Baton Rouge, La.: Louisiana State Univ. pp. 180–188 (1970)
* [25] Knuth, D.E.: Axioms and Hulls, LNCS, vol. 606. Springer (1992). https://doi.org/10/bwfnz9
* [26] Marić, F.: Fast formal proof of the Erdős–Szekeres conjecture for convex polygons with at most 6 points. Journal of Automated Reasoning 62, 301–329 (2019). https://doi.org/10/gsjw4r
* [27] Nicolás, M.C.: The Empty Hexagon Theorem. Discrete & Computational Geometry 38(2), 389–397 (2007). https://doi.org/10/bw3hnd
* [28] Overmars, M.: Finding Sets of Points without Empty Convex 6-Gons. Discrete & Computational Geometry 29(1), 153–158 (2002). https://doi.org/10/cnqmr4
* [29] Scheucher, M.: Two disjoint 5-holes in point sets. Computational Geometry 91, 101670 (2020). https://doi.org/10/gsjw2z
* [30] Scheucher, M.: A SAT Attack on Erdős–Szekeres Numbers in $\mathbb{R}^{d}$ and the Empty Hexagon Theorem. Computing in Geometry and Topology 2(1), 2:1–2:13 (2023). https://doi.org/10/gsjw22
* [31] Suk, A.: On the Erdős–Szekeres convex polygon problem. Journal of the AMS 30, 1047–1053 (2017). https://doi.org/10/gsjw44
* [32] Szekeres, G., Peters, L.: Computer solution to the 17-point Erdős–Szekeres problem. Australia and New Zealand Industrial and Applied Mathematics 48(2), 151–164 (2006). https://doi.org/10/dkb9j3
* [33] Tan, Y.K., Heule, M.J.H., Myreen, M.O.: Verified propagation redundancy and compositional UNSAT checking in cakeml. International Journal on Software Tools for Technology 25(2), 167–184 (2023). https://doi.org/10/grw7wm
* [34] Tóth, G., Valtr, P.: The Erdős–Szekeres theorem: Upper Bounds and Related Results. In: Combinatorial and Computational Geometry. vol. 52, pp. 557–568. MSRI Publications, Cambridge Univ. Press (2005), http://www.ams.org/mathscinet-getitem?mr=2178339
* [35] Wieringa, S., Niemenmaa, M., Heljanko, K.: Tarmo: A framework for parallelized bounded model checking. In: International Workshop on Parallel and Distributed Methods in verifiCation, PDMC 2009. EPTCS, vol. 14, pp. 62–76 (2009). https://doi.org/10.4204/EPTCS.14.5
## Appendix 0.A Proof of Lemma 1
In the following proof, which is based on [29], we utilize the fact that the triple orientation ${\mathsf{o}}_{a,b,c}$ is true exactly if the determinant
$\det\begin{pmatrix}1&1&1\\ x_{a}&x_{b}&x_{c}\\ y_{a}&y_{b}&y_{c}\end{pmatrix}$
is positive, and use some basics from linear algebra.
###### Proof
First, we apply an affine-linear transformation to $S$ so that $p_{1}$ is
mapped to the origin $(0,0)$ and all other $p_{i}$, $i\geq 2$, have positive
$x$\- and $y$-coordinates. To see this, apply a translation
$(x,y)\mapsto(x+s,y+t)$ for some constants $s,t\in\mathbb{R}$ so that $p_{1}$
is mapped to the origin. Since $p_{1}$ is an extremal point, we can perform a
rotation $(x,y)\to(x\cos(\phi)-y\sin(\phi),x\sin(\phi)+y\cos(\phi))$ for some
constant $\phi\in[0,2\pi)$ such that all points $p_{2},\ldots,p_{n}$ have
positive $x$-coordinate. Finally, we apply a shearing transformation
$(x,y)\mapsto(x,y+c\cdot x)$ for some constant $c\in\mathbb{R}$ so that
$p_{2},\ldots,p_{n}$ have positive $y$-coordinate as well. Pause to note that
affine-linear transformations do not affect determinants and hence the triple
orientations are preserved. Formally, one can introduce transformation
matrices to write the translation as
$\begin{pmatrix}1\\ x+s\\ y+t\end{pmatrix}=\begin{pmatrix}1&0&0\\ s&1&0\\ t&0&1\end{pmatrix}\cdot\begin{pmatrix}1\\ x\\ y\end{pmatrix},$
a shearing as
$\begin{pmatrix}1\\ x\\ y+cx\end{pmatrix}=\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&c&1\end{pmatrix}\cdot\begin{pmatrix}1\\ x\\ y\end{pmatrix},$
and a rotation as
$\begin{pmatrix}1\\ x\cos(\phi)-y\sin(\phi)\\ x\sin(\phi)+y\cos(\phi)\end{pmatrix}=\begin{pmatrix}1&0&0\\ 0&\cos(\phi)&-\sin(\phi)\\ 0&\sin(\phi)&\cos(\phi)\end{pmatrix}\cdot\begin{pmatrix}1\\ x\\ y\end{pmatrix}.$
Since each of the transformation-matrices has determinant 1, and
$\det\left(A\cdot\begin{pmatrix}1&1&1\\ x_{a}&x_{b}&x_{c}\\ y_{a}&y_{b}&y_{c}\end{pmatrix}\right)=\det(A)\cdot\det\begin{pmatrix}1&1&1\\ x_{a}&x_{b}&x_{c}\\ y_{a}&y_{b}&y_{c}\end{pmatrix},$
none of these affine transformations affects the triple orientations.
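A quick numeric sanity check of this fact (an illustration only):

```python
# Each transformation matrix has determinant 1, so the sign of the
# orientation determinant of the column matrix (1, x_i, y_i) is preserved.
import numpy as np

rng = np.random.default_rng(0)
M = np.vstack([np.ones(3), rng.random((2, 3))])  # columns (1, x_i, y_i)
s, t, c, phi = 0.3, -1.2, 0.7, 0.4
transforms = [
    np.array([[1, 0, 0], [s, 1, 0], [t, 0, 1]]),             # translation
    np.array([[1, 0, 0], [0, 1, 0], [0, c, 1]]),             # shearing
    np.array([[1, 0, 0],
              [0, np.cos(phi), -np.sin(phi)],
              [0, np.sin(phi), np.cos(phi)]]),               # rotation
]
for A in transforms:
    assert np.sign(np.linalg.det(A @ M)) == np.sign(np.linalg.det(M))
```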
Now $x_{i}/y_{i}$ is increasing for $i\geq 2$ as $p_{2},\ldots,p_{n}$ are
sorted counterclockwise around $p_{1}$. Since $S$ is in general position,
there is an $\varepsilon>0$ such that $S$ and
$S^{\prime}:=\\{(0,\varepsilon)\\}\cup\\{p_{2},\ldots,p_{n}\\}$ are of the
same order type. Formally, since the determinant is a polynomial and hence
continuous, it holds
$\operatorname{sgn}\det\begin{pmatrix}1&1&1\\ 0&x_{a}&x_{b}\\ \varepsilon&y_{a}&y_{b}\end{pmatrix}=\operatorname{sgn}\det\begin{pmatrix}1&1&1\\ 0&x_{a}&x_{b}\\ 0&y_{a}&y_{b}\end{pmatrix}$
for some sufficiently small $\varepsilon>0$. We next apply the projective
transformation $(x,y)\mapsto(\nicefrac{{x}}{{y}},\nicefrac{{-1}}{{y}})$ to
$S^{\prime}$ to obtain $\tilde{S}$. By the multilinearity of the determinant,
we obtain
$\det\begin{pmatrix}1&1&1\\ x_{a}&x_{b}&x_{c}\\ y_{a}&y_{b}&y_{c}\end{pmatrix}=y_{a}\cdot y_{b}\cdot y_{c}\cdot\det\begin{pmatrix}1&1&1\\ \nicefrac{{x_{a}}}{{y_{a}}}&\nicefrac{{x_{b}}}{{y_{b}}}&\nicefrac{{x_{c}}}{{y_{c}}}\\ \nicefrac{{-1}}{{y_{a}}}&\nicefrac{{-1}}{{y_{b}}}&\nicefrac{{-1}}{{y_{c}}}\end{pmatrix}.$
Since all points in $S^{\prime}$ have positive $y$-coordinates, the signs of
the determinants coincide, and hence $S^{\prime}$ and $\tilde{S}$ have the
same triple orientations. Moreover, as
$\tilde{x_{i}}=\nicefrac{{x_{i}^{\prime}}}{{y_{i}^{\prime}}}$ is increasing
for $i\geq 1$, the set $\tilde{S}$ fulfills all desired properties. ∎
## Appendix 0.B Realizability
Figure 9: Visualization of a signotope on $23$ elements with no $6$-hole or
$7$-gon as a wiring diagram. The triple orientations can be read as follows:
${\mathsf{o}}_{a,b,c}$ with $a<b<c$ equals $+$ if and only if wire $a$
intersects $b$ before $c$ when traced from left to right. For more background
on signotopes and wiring diagrams see [12] and the handbook article [11].
We used SAT to show that every set of 30 points yields a 6-hole. Since there
exist sets of 29 points [28] with no 6-holes, we determined the precise value
$h(6)=30$. For Theorem 1.2 we do not have such a witnessing point set. The SAT
solver found millions of signotopes on 23 elements with no 7-gon and no
6-hole, witnessing that the bound is sharp in the more general combinatorial
setting. Fig. 9 shows one such example. However, so far we did not manage to
find a corresponding point set to any of the signotopes. In fact, all tested
configurations are provably non-realizable using the method of bi-quadratic
final polynomials [4], which is not surprising since only a small proportion
($2^{\Theta(n\log n)}$ of $2^{\Theta(n^{2})}$) of rank $3$ signotopes are
actually realizable by point sets; see [3, Chapters 7.4 and 8.7]. Moreover,
deciding whether a triple-assignment can be realized by an actual point set is
a notoriously hard problem as it is complete for the _existential theory of
the reals_ (${\mathsf{ETR}}$); a complexity class which lies between
${\mathsf{NP}}$ and ${\mathsf{PSPACE}}$ [3, Chapter 8.4].
# Analogist: Out-of-the-box Visual In-Context Learning with Image Diffusion
Model
Zheng Gu (City University of Hong Kong and State Key Lab for Novel Software Technology, Nanjing University, China), Shiyuan Yang (City University of Hong Kong and Tianjin University, China), Jing Liao (City University of Hong Kong, China), Jing Huo (State Key Lab for Novel Software Technology, Nanjing University, China), and Yang Gao (State Key Lab for Novel Software Technology, Nanjing University, China)
###### Abstract.
Visual In-Context Learning (ICL) has emerged as a promising research area due
to its capability to accomplish various tasks with limited example pairs
through analogical reasoning. However, training-based visual ICL has
limitations in its ability to generalize to unseen tasks and requires the
collection of a diverse task dataset. On the other hand, existing methods in
the inference-based visual ICL category solely rely on textual prompts, which
fail to capture fine-grained contextual information from given examples and
can be time-consuming when converting from images to text prompts. To address
these challenges, we propose Analogist, a novel inference-based visual ICL
approach that exploits both visual and textual prompting techniques using a
text-to-image diffusion model pretrained for image inpainting. For visual
prompting, we propose a self-attention cloning (SAC) method to guide the fine-
grained structural-level analogy between image examples. For textual
prompting, we leverage GPT-4V’s visual reasoning capability to efficiently
generate text prompts and introduce a cross-attention masking (CAM) operation
to enhance the accuracy of semantic-level analogy guided by text prompts. Our
method is out-of-the-box and does not require fine-tuning or optimization. It
is also generic and flexible, enabling a wide range of visual tasks to be
performed in an in-context manner. Extensive experiments demonstrate the
superiority of our method over existing approaches, both qualitatively and
quantitatively. Our project webpage is available at
https://analogist2d.github.io.
Visual In-Context Learning, Diffusion Models, Image Transformation
CCS Concepts: Computing methodologies → Image processing
Figure 1. Examples of in-context visual generation by our method using a
pretrained Stable Diffusion Inpainting model are demonstrated. With an example
image pair $A$ and $A^{\prime}$, illustrating a visual transformation, and a
query image $B$, our method enhances the model’s capacity for visual in-
context comprehension, producing a reasonable output $B^{\prime}$ that follows
the same visual pattern. Source images: ImageNet (Deng et al., 2009), LOL
(Chen et al., 2018), InstructPix2Pix (Brooks et al., 2023), TongYi QianWen
APP, UBC-Fashion (Zablotskaia et al., 2019), ScanNet (Dai et al., 2017), DAVIS
(Perazzi et al., 2016), DALLE-3 (Betker et al., 2023).
## 1\. Introduction
As one of the most popular research topics in the recent field of natural
language processing (NLP), in-context learning (ICL) represents a paradigm
wherein large language models (LLMs) acquire the ability to learn tasks based
on a limited set of demonstrative examples (Dong et al., 2022). Unlike
supervised learning, ICL directly generates predictions using pretrained LLMs
(Brown et al., 2020). This paradigm offers an interpretable interface for
interacting with LLMs through language demonstrations, mirroring human
decision-making by learning through analogies and similar experiences. ICL
significantly lowers computational costs for adapting models to new tasks,
making language-model-as-a-service feasible and enabling practical
applications in large-scale, real-world tasks such as machine translation (Xu
et al., 2023), information extraction (He et al., 2023), and complex
reasoning (Wei et al., 2022).
Following the success of NLP, research in visual in-context learning is still
at an early, exploratory stage (Yang et al., 2023a; Bai et al.,
2023). Specifically, when the demonstration is a pair of images $A$ and
$A^{\prime}$, visual in-context learning can be considered as an image analogy
problem (Hertzmann et al., 2001). This involves analogizing the observed
transformation from $A$ to $A^{\prime}$ and applying it onto a query image
$B$, resulting in $B^{\prime}$. This analogy capability holds significant
potential in computer graphics and vision tasks (Šubrtová et al., 2023; Parmar
et al., 2023; Cao et al., 2023). For example, as shown in Figure 1, with just
a single pair of examples without training on a large dataset, the pretrained
model can perform tasks ranging from low-level tasks such as colorization,
deblurring, denoising, etc., to high-level tasks such as image editing, image
translation, motion transfer, etc. Visual ICL also offers significant
potential in enhancing creative workflows. Designers can leverage a model to
learn design ideas such as color themes, typography, and visual motifs from an
example pair and adapt them analogously to different contents.
Existing visual ICL works fall into two categories: training-based and
inference-based. Training-based methods train the generative model on diverse
in-context tasks (Wang et al., 2023a; Najdenkoska et al., 2023). The
resulting ICL capabilities are largely confined to tasks similar to the
training tasks and transfer poorly to unseen ones. Moreover, collecting and
organizing the data into an in-context task format is laborious.
Inference-based methods conduct ICL by appropriately prompting the model
during inference and thus generalize better. However, existing methods (Šubrtová et al., 2023;
Nguyen et al., 2023) convert the given images into textual prompts, falling
short in two aspects. First, the textual prompting is coarse-grained and
cannot cover the detailed information presented in the image examples. Second,
textual inversion from images requires iterative optimization, which is still
time-consuming.
In this work, we propose Analogist, a novel inference-based visual ICL
approach, to address the aforementioned challenges. We introduce both visual
and textual prompting techniques on a pretrained text-to-image diffusion
model.
Firstly, we introduce a novel visual prompting technique to overcome the
coarse-granularity issue in textual prompting. Inspired by MAEVQGAN (Bar et
al., 2022), we formulate the ICL task as an image inpainting task by arranging
the exemplary image pair $A$ and $A^{\prime}$, the query image $B$, and the
unknown image $B^{\prime}$ in a $2\times 2$ grid. Then, we utilize a
pretrained diffusion inpainting model to fill in the region of $B^{\prime}$.
To guide the inpainting process with fine-grained visual contextual
information, we propose a self-attention cloning (SAC) method. This method
clones the self-attention maps between $A$ and $B$ to the self-attention maps
between $A^{\prime}$ and $B^{\prime}$ during the forward propagation of the
diffusion inpainting model. Since the self-attention maps represent similarity
between pixels, the SAC method effectively helps learn structural-level
relationships between $A$ and $B$, which are then applied to $A^{\prime}$ to
generate $B^{\prime}$ analogically.
In addition to visual prompting offering structural-level guidance, we
incorporate textual prompting to offer semantic-level guidance by providing
appropriate text prompts to the inpainting model. However, unlike previous
methods (Šubrtová et al., 2023; Nguyen et al., 2023) that rely on time-
consuming textual inversion optimization, we propose utilizing GPT-4V’s visual
reasoning capability to analyze the semantic transformation between $A$ and
$A^{\prime}$ and apply it analogically to $B$ to generate a textual
description of $B^{\prime}$. This is facilitated by our well-designed
graphical and textual instructions fed into GPT-4V. Furthermore, we introduce
a cross-attention masking (CAM) operation to restrict the interaction between
text and image to the $B^{\prime}$ region only, which ensures that the textual
prompt more accurately guides the generation of $B^{\prime}$.
With both semantic-level (coarse-grained) and structural-level (fine-grained)
contextual information respectively provided by textual and visual prompting
techniques, our approach is capable of performing a wide range of visual tasks
in an in-context manner, as illustrated in Figure 1. Our approach is an out-
of-the-box solution that only requires one forward step of a pretrained
diffusion model, without the need for fine-tuning or optimization. Extensive
experiments and comparisons across different tasks have confirmed that our
method outperforms existing training-based and inference-based visual ICL
methods, both qualitatively and quantitatively. Our method is primarily
designed for applications where the input $A$ and $A^{\prime}$ are spatially
aligned. Nonetheless, we show that it holds promise for applications in
misaligned scenarios as well. Our contributions are summarized
as follows:
* •
We introduce Analogist, an out-of-the-box approach for visual in-context
learning that utilizes a pretrained diffusion inpainting model along with
effective visual and textual prompting techniques.
* •
In visual prompting, we propose a Self-Attention Cloning (SAC) method that
effectively guides the image inpainting model to exploit fine-grained
contextual information in the $2\times 2$ grid visual prompt.
* •
In textual prompting, we propose to efficiently generate textual prompts using
GPT-4V and enhance the accuracy of textual guidance by introducing a Cross-
Attention Masking (CAM) operation.
## 2\. Related Work
### 2.1. Visual In-context Learning
Inspired by the taxonomy in Dong et al. (2022), we categorize current visual
in-context learning into two groups, training-based and inference-based, based
on the criterion of whether the model is trained on in-context tasks.
##### Training-based Methods
Training-based methods train (or finetune) the model on diverse in-context
tasks. Painter (Wang et al., 2023b) uses paired input and output images as
visual prompts to train a Vision Transformer (Dosovitskiy et al., 2020), which
enables the model to learn and perform a wide range of vision tasks. The
follow-up work SegGPT (Wang et al., 2023c) extends the in-context learning
capabilities of Painter specifically for precise and adaptable segmentation
across various domains. More recently, several works have progressively exhibited the
ICL ability of state-of-the-art diffusion models (Rombach et al., 2022).
PromptDiffusion (Wang et al., 2023a) introduces ControlNet (Zhang et al.,
2023) to tune a pretrained Stable Diffusion on six manually designed vision-
language tasks. The proposed method is able to generalize to similar,
contextually related unseen tasks. However, it poses challenge for users to
offer detailed and precise text descriptions. ImageBrush (SUN et al., 2023)
introduces a novel framework for image manipulation using in-context visual
instructions, rather than natural language. An additional prompt encoder is
introduced to translate the visual changes depicted in the example images into
text features to guide the inpainting model. ImageBrush is built on a
diffusion-based inpainting model and trained on several vision datasets. The
above training-based methods necessitate the construction of high-quality and
diverse tasks, making the pipeline laborious and inflexible. Meanwhile, the
test tasks should ideally bear some similarity to the training tasks, which
limits generalizability.
Figure 2. Overview of the proposed Analogist. A visual demonstration is
defined by an example pair $A$ (woman holding a cat) and $A^{\prime}$ (the
same woman holding a tiger). Given a new image $B$ (another cat), we format
these three images into a $2\times 2$ grid and tackle this problem by filling
in the missing image via a pretrained Stable Diffusion inpainting model. We employ
GPT-4V to provide a proper text description (i.e., “close-up of a tiger’s
face”) to further guide the inpainting process. During the process of model
inference, Self-Attention Cloning (SAC) and Cross-Attention Masking (CAM) are
introduced to encourage the model to concentrate on the visual and textual
prompts, thus enhancing its in-context learning capacity. Source image:
InstructPix2Pix (Brooks et al., 2023).
##### Inference-based Methods
Instead of tuning the model parameters, inference-based methods elicit the
model’s understanding of the given demonstrations at inference time. Among
them, MAEVQGAN (Bar et al., 2022) innovatively proposes a visual prompting
format of inpainting the missing patch in a $2\times 2$ grid-like image. The
model is pre-trained on figures from computer vision papers which are
typically in a regular grid pattern and emerges with ICL capability. However,
the generation effects are not entirely satisfactory due to limitations in
dataset size and model capacity in comparison with the latest diffusion
models. VISII (Nguyen et al., 2023) considers the demonstration as images
before and after image editing. This approach estimates the editing
instruction based on a pretrained text-based image editing model (Brooks et
al., 2023), producing results with higher quality. However, reverse-
engineering the textual description of the differences between two images
through optimization remains time-consuming. Moreover, by reducing
visual information to coarse-grained text, the generation process is driven
by textual descriptions alone. The role of visual prompting is not fully
leveraged, leading to inaccurate contextual understanding.
Our work falls into the category of inference-based methods and, notably,
eliminates the need for additional optimization steps. Instead of solely
relying on textual prompts, our approach leverages both textual and visual
prompting. This allows us to respectively understand semantic-level and
structural-level contextual information for visual ICL. Besides, our method
utilizes GPT-4V to get textual prompts instead of textual inversion.
### 2.2. Image Analogies
Defined by $A:A^{\prime}::B:B^{\prime}$, the goal of image analogies
(Hertzmann et al., 2001) is to find an “analogous” image $B^{\prime}$ that
relates to $B$ in the same way as $A^{\prime}$ relates to $A$. This idea has
been extended to various image synthesis settings (Diamanti et al., 2015;
Jamriška et al., 2019; Liao et al., 2017; Yuan et al., 2024). Recently, DIA
(Šubrtová et al., 2023) investigates the image analogies task with a diffusion
model. This method estimates the CLIP features of the given images. The CLIP
features are injected into a pretrained text-to-image diffusion model to
provide in-context guidance. DIA is capable of executing example-based image
editing that encompasses complex, higher-level contextual or structural
relationships. However, since the goal of CLIP is to align image and text
spaces, the estimated features are high-level and struggle to capture detailed
image information.
Our work aims to tackle the problem of image analogies in the paradigm of
visual in-context learning. Different from traditional texture synthesis
approaches (Hertzmann et al., 2001; Liao et al., 2017), the analogy is
achieved by prompting a pre-trained text-to-image diffusion model and can be
applied to more applications such as low-level tasks, manipulation tasks, and
vision tasks.
### 2.3. Prompt-based Image Editing
Recent multimodal approaches have demonstrated superior text-image feature
alignment capabilities (Radford et al., 2021; Li et al., 2022), leading to a
series of works on prompt-based image editing. Previous GAN-based methods
perform manipulation in the latent space via GAN inversion (Xia et al., 2022;
Patashnik et al., 2021; Baykal et al., 2023). More recent methods utilize
text-to-image diffusion models to attain leading outcomes (Cao et al., 2023;
Brooks et al., 2023; Parmar et al., 2023). However, these methods struggle
with the image analogy task since they take textual descriptions as input,
which are not sufficiently intuitive or precise to depict details of the image
structure. In contrast, our work takes a pair of images as demonstration
input, utilizes self-attention to provide structure-related information, and
automatically acquires the corresponding textual description through GPT-4V.
## 3\. Preliminary
Since our approach utilizes a pretrained Stable Diffusion inpainting model, we
briefly review latent Stable Diffusion in Section 3.1 as well as the Stable
Diffusion inpainting model in Section 3.2.
### 3.1. Latent Diffusion Models.
Denoising Diffusion Probabilistic Models (DDPM) (Ho et al., 2020) are a class
of generative models that gradually convert random noise into structured data
through a series of reverse diffusion steps based on a Markov chain. Latent
Diffusion Models (LDM) like Stable Diffusion (SD) (Rombach et al., 2022)
enhance DDPM by employing an encoder $E$ to map high-dimensional data $x$
into a lower-dimensional latent space $z=E(x)$. The generation of Stable
Diffusion can be guided by an additional text embedding $c(y)$, obtained by
encoding a text prompt $y$ with CLIP (Radford et al., 2021). During training, a UNet model,
parameterized by $\theta$, is optimized to eliminate the noise $\epsilon$
introduced into $z_{t}$:
(1) $\mathcal{L}=\mathbb{E}_{z\sim
E(x),y,\epsilon\sim\mathcal{N}(0,1),t}\left[{\left\|{\epsilon-\epsilon_{\theta}(z_{t},t,c(y))}\right\|}^{2}_{2}\right].$
During inference, a randomly sampled latent $z_{T}\sim\mathcal{N}(0,1)$ is
progressively denoised through the model to produce a clean latent
representation $z_{0}$ by
(2)
$z_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left[z_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}\left(z_{t},t,c(y)\right)\right],$
where $\bar{\alpha}_{t}=\prod_{i=1}^{t}\alpha_{i}$. Subsequently, the clean
latent is fed into the decoder to obtain the generated image $D(z_{0})$.
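For illustration, the following is a minimal sketch of the denoising step in Eq. (2), written against a diffusers-style UNet; the function name and the omission of the stochastic DDPM noise term are our own simplifications.

```python
import torch

@torch.no_grad()
def ddpm_step(unet, z_t, t, text_emb, alphas, alphas_bar):
    """One reverse step of Eq. (2); alphas and alphas_bar are 1-D tensors of
    the schedule, and the stochastic noise term of full DDPM sampling is omitted."""
    eps = unet(z_t, t, encoder_hidden_states=text_emb).sample  # predicted noise
    a_t, ab_t = alphas[t], alphas_bar[t]
    return (z_t - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(a_t)
```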
### 3.2. Stable Diffusion Inpainting Model
We apply our method over the pretrained Stable Diffusion inpainting model,
which is fine-tuned to support image inpainting in addition to generation. The
forward process of the inpainting pipeline is as follows:
(3)
$z_{t-1}=\frac{1}{\sqrt{\alpha_{t}}}\left[z_{t}-\frac{1-\alpha_{t}}{\sqrt{1-\bar{\alpha}_{t}}}\epsilon_{\theta}\left(z_{t},t,c(y),E(I_{m}),M\right)\right].$
The UNet is updated to include five extra input channels – four dedicated to
the encoded masked image $E(I_{m})$ and one for the mask $M$ itself. These two
extra inputs are concatenated with $z_{t}$ and fed into the UNet to predict the
noise at each time step.
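A minimal sketch of how this 9-channel UNet input can be assembled (the helper name is ours):

```python
import torch

def inpaint_unet_input(z_t, masked_image_latent, mask):
    """Concatenate the 4 latent channels, the 4 channels of the encoded
    masked image E(I_m), and the 1-channel mask M along the channel axis."""
    # all tensors share the latent spatial resolution: (b, c, h, w)
    return torch.cat([z_t, masked_image_latent, mask], dim=1)  # (b, 9, h, w)
```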
## 4\. Method
The goal of ICL is to encourage a pretrained model to learn tasks given only a
few examples in the form of demonstration (Dong et al., 2022). Specific to the
image domain, the demonstration is defined as an example image pair $A$ and
$A^{\prime}$, where $A^{\prime}$ is the result obtained by applying a certain
visual effect or transformation to $A$. Given a new query image $B$, the model
is expected to apply the same effect to $B$, thus creating a new image
$B^{\prime}$, so that $A:A^{\prime}::B:B^{\prime}$ (Hertzmann et al., 2001).
This process demonstrates the model’s understanding and replication of visual
transformations from a given demonstration to a new context, exhibiting the
ICL ability.
As illustrated in Figure 2, to address this issue, we approach it from both
visual structural-level (Section 4.1) and textual semantic-level (Section 4.2)
perspectives. For visual prompting (red region in Figure 2), we formulate the
input images into a $2\times 2$ grid image, utilizing a pretrained diffusion
inpainting model to fill in the missing region in Section 4.1.1. To introduce
more fine-grained visual information, we propose Self-Attention Cloning (SAC)
in Section 4.1.2. For textual prompting (blue region in Figure 2), GPT-4V is
employed to provide semantic-level guidance to the generation process in
Section 4.2.1. To foster semantic correspondence between the inpainted image
and the text prompt, we propose Cross-Attention Masking (CAM) in Section
4.2.2.
Figure 3. Visualization of the attention relationships. Given an anchor point
on image $A$ (shown in red, green, and blue colors), we calculate the
attention values between this point and all regions of image $B$. Source
image: InstructPix2Pix (Brooks et al., 2023).
### 4.1. Visual Prompting
To introduce fine-grained structural-level visual guidance in the in-context
inference process, we construct a visual prompt in the form of a $2\times 2$
grid-like image for the pretrained inpainting model, and provide visual
contextual information by cloning the self-attention associations between the
given images.
#### 4.1.1. $2\times 2$-grid Prompting
Image inpainting models fill in unknown areas of an image based on its known
regions, which naturally aligns with the concept of ICL. As shown in Figure 2,
to take advantage of this property, we first rearrange the input images $A$,
$A^{\prime}$, and $B$ into a single $2\times 2$ grid-like image, denoted as
$I$. Image $B$ is pasted to the bottom right corner of the grid image, yielding
image $I^{\prime}$. We extract the features of the pasted image,
$E(I^{\prime})$, and add noise to it via the diffusion forward process, yielding
the initial $x_{T}$. To align with the interface of the pretrained model, a
mask image $M$ is simultaneously generated. In this mask, the bottom right
region is entirely ones, while the remaining regions are zeros. At each
timestep $t$, the latent $x_{t}\in\mathbb{R}^{b\times 4\times h\times w}$ is
concatenated with the feature $E(I)\in\mathbb{R}^{b\times 4\times h\times w}$
and mask $M\in\mathbb{R}^{b\times 1\times h\times w}$, constructing the input
of the UNet. By establishing such a $2\times 2$-grid prompt, we encourage the
model to fill in the content of unknown area ($B^{\prime}$) based on the
contextual regions ($A$, $A^{\prime}$, and $B$) in the image.
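A minimal sketch of this grid construction (the helper name and the use of PIL are our own; the paper's actual preprocessing may differ):

```python
import numpy as np
from PIL import Image

def build_grid_prompt(img_A, img_Ap, img_B, size=256):
    """Arrange A, A', and B in a 2x2 grid (B is also pasted bottom-right as the
    initialization of B'), and build the mask marking B' as the unknown region."""
    a, ap, b = (im.resize((size, size)) for im in (img_A, img_Ap, img_B))
    grid = Image.new("RGB", (2 * size, 2 * size))
    grid.paste(a, (0, 0))          # A : top-left
    grid.paste(ap, (size, 0))      # A': top-right
    grid.paste(b, (0, size))       # B : bottom-left
    grid.paste(b, (size, size))    # B pasted bottom-right to initialize B'
    mask = np.zeros((2 * size, 2 * size), dtype=np.uint8)
    mask[size:, size:] = 255       # ones over the B' region only
    return grid, Image.fromarray(mask)
```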
Figure 4. Detailed illustration of self-attention cloning (SAC). The sub
self-attention map $\mathcal{M}_{s}(A^{\prime},B^{\prime})$ is set to the value
of $\mathcal{M}_{s}(A,B)$, i.e., the relation between $A$ and $B$ is cloned to
that between $A^{\prime}$ and $B^{\prime}$.
#### 4.1.2. Self-Attention Cloning
The key to in-context learning is to recognize the task instruction from the
given demonstration. Previous inference-based works extract the visual
instructions through cross-attention injection, which only provides coarse and
imprecise guidance. In contrast, we introduce fine-grained structure-aware
contextual information via self-attention.
Our motivation comes from the observation that the diffusion model accurately
constructs associations between different positions in the known areas through
self-attention. We show the visualization of self-attention relations in
Figure 3. We calculate the attention values between key semantic positions
(e.g., the eyes, mouth, and flower in the first row and the spire, building,
and the background grassland in the second row) in $A$ and all regions in $B$.
The results demonstrate that the visual associations between images can be
accurately identified through self-attention, which could be more accurate
than abstract semantic text prompts as guidance. Based on this observation, we
propose to use self-attention as a structural-level prior to guide the in-
context generation procedure by modulating self-attention in UNet. We show an
example in Figure 2 of translating a cat into a tiger. The relative positional
relationship of the tiger in $B^{\prime}$ and the tiger in $A^{\prime}$ should
be consistent with the relative positional relationship of the two cats in $B$
and $A$.
We present a detailed illustration of the proposed self-attention cloning (SAC)
in Figure 4. Denote the image feature before self-attention as
$F_{i}\in\mathbb{R}^{h\times w\times c}$. The self-attention map
$\mathcal{M}_{s}\in\mathbb{R}^{hw\times hw}$ records the similarity of each
position on the entire image with other positions, which also includes the
similarities between $A$ and $B$, as well as between $A^{\prime}$ and
$B^{\prime}$. We extract the sub self-attention map
$\mathcal{M}_{s}(A,B)\in\mathbb{R}^{\frac{hw}{4}\times\frac{hw}{4}}$ and
assign its value to
$\mathcal{M}_{s}(A^{\prime},B^{\prime})\in\mathbb{R}^{\frac{hw}{4}\times\frac{hw}{4}}$:
(4) $\mathcal{M}_{s}(A^{\prime},B^{\prime}):=\mathcal{M}_{s}(A,B)\cdot s,$
where $s$ is a coefficient used to balance the degree of preserving the
structure of image $B$ and the degree of applying transformations. We perform
the self-attention cloning operation before softmax to prevent the original
self-attention results from being excessively affected.
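The following sketch illustrates one possible implementation of SAC inside an attention layer; the quadrant indexing, and the reading of $\mathcal{M}_{s}(A,B)$ as the query-$B$/key-$A$ block, follow our interpretation of Figure 4 rather than released code.

```python
import torch

def quadrant_ids(h, w):
    """Flattened token indices of the four grid cells in an (h*w)-token map."""
    idx = torch.arange(h * w).reshape(h, w)
    hh, hw = h // 2, w // 2
    return {"A":  idx[:hh, :hw].reshape(-1), "Ap": idx[:hh, hw:].reshape(-1),
            "B":  idx[hh:, :hw].reshape(-1), "Bp": idx[hh:, hw:].reshape(-1)}

def clone_self_attention(scores, h, w, s=1.3):
    """SAC: overwrite the (A', B') block of the pre-softmax attention scores
    with s times the (A, B) block. scores: (batch*heads, h*w, h*w)."""
    q = quadrant_ids(h, w)
    # rows index query tokens, columns index key tokens
    scores[:, q["Bp"][:, None], q["Ap"][None, :]] = \
        s * scores[:, q["B"][:, None], q["A"][None, :]]
    return scores
```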
Figure 5. Detailed illustration of cross-attention masking (CAM). The sub
cross-attention maps between the text embedding and regions $A$, $A^{\prime}$,
and $B$ are set to zero, making the semantic guidance more focused on region
$B^{\prime}$.
### 4.2. Textual Prompting
Cloning self-attention effectively handles basic in-context visual guidance,
yet the diffusion model’s strong text-to-image capability remains
underutilized for semantic-level guidance. To address this, we utilize
GPT-4V’s visual reasoning abilities (Yang et al., 2023a) to provide semantic
guidance to the inpainting model.
#### 4.2.1. GPT-4V Prompting
We prompt GPT-4V to generate a coherent text description to aid the inpainting
process. Considering the consistency of the entire pipeline, we feed the whole
$2\times 2$ grid-like image directly into GPT-4V with a pre-designed task
description, as depicted in Figure 2. We employ two carefully designed
graphical instructions to make it easier for GPT-4V to understand the task.
Firstly, inspired by (Yang et al., 2023b), we place a letter mark ($A$,
$A^{\prime}$, $B$, $B^{\prime}$) in the top-left corner of each grid cell.
Secondly, we add prominent arrow markers ($\rightarrow$) between $A$ and
$A^{\prime}$, as well as between $B$ and $B^{\prime}$, to indicate the
relationship between the two images. These approaches introduce structured,
easily identifiable reference points, facilitating more effective and accurate
responses to queries involving visual content. Then, GPT-4V is asked to
perform an analogy and output the text description for $B^{\prime}$. Finally,
we use GPT-4V’s answer as the semantic-level positive text prompt to reinforce
the model’s ICL capabilities. We also employ negative text prompts (i.e.,
“Messy, Disordered, Chaotic, Cluttered, Haphazard, Unkempt, Scattered,
Disheveled, Tangled, Random”) to prevent the diffusion model from generating
irregular and illogical results. These two prompts work cooperatively to
inject semantic-level guidance into the model.
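A sketch of such a GPT-4V query using the OpenAI chat API with an image input is shown below; the instruction text is our paraphrase of the paper's setup, not its exact prompt, and the model identifier may vary.

```python
import base64
from openai import OpenAI

def gpt4v_prompt(grid_png_path):
    """Ask GPT-4V for a one-phrase description of the missing cell B'."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    with open(grid_png_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",
        max_tokens=50,
        messages=[{"role": "user", "content": [
            {"type": "text", "text": (
                "The image is a 2x2 grid labeled A, A', B, B', with arrows "
                "from A to A' and from B to B'. A' is a transformed version "
                "of A. Apply the same transformation to B and describe the "
                "missing image B' in one short phrase.")},
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}],
    )
    return resp.choices[0].message.content
```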
Figure 6. Comparison with other baseline methods, each row indicates one task,
given the input image pair $A$, $A^{\prime}$ and query image $B$. Since
MAEVQGAN (Bar et al., 2022) does not take text as input and DIA (Šubrtová et
al., 2023) and VISII (Nguyen et al., 2023) estimate the text prompts by extra
optimization, the text prompts generated by GPT-4V prompting are only used by
PromptDiffusion (Wang et al., 2023a) and Analogist. Source images: ImageNet
(Deng et al., 2009), LOL (Chen et al., 2018), InstructPix2Pix (Brooks et al.,
2023), UBC-Fashion (Zablotskaia et al., 2019), ScanNet (Dai et al., 2017),
DAVIS (Perazzi et al., 2016).
Figure 7. Examples of results generated by the proposed Analogist on different
tasks. In each example, the images $A$ and $A^{\prime}$ are shown in the first
column, and the image $B$ and generated image $B^{\prime}$ are shown in the second
and third columns. The text prompt generated via GPT-4V is shown below each
example. Source images: ImageNet (Deng et al., 2009), InstructPix2Pix (Brooks et al.,
2023), ScanNet (Dai et al., 2017), DAVIS (Perazzi et al., 2016).
Figure 8. Comparison with ImageBrush (SUN et al., 2023). The results of
ImageBrush on the first three tasks are from the original paper, and the results
on the last three tasks are provided by the authors of ImageBrush. Source
images: InstructPix2Pix (Brooks et al., 2023), UBC-Fashion (Zablotskaia et
al., 2019), ScanNet (Dai et al., 2017), DAVIS (Perazzi et al., 2016).
#### 4.2.2. Cross-Attention Masking
Note that the prompt obtained from GPT-4V is specifically tailored for
$B^{\prime}$, yet the textual guidance impacts the entire image through cross-
attention in the UNet. To address this issue, we propose cross-attention
masking (CAM): in cross-attention layers, we restrict the text to interact only
with the region corresponding to $B^{\prime}$. Specifically, denote the
cross-attention map as $\mathcal{M}_{c}\in\mathbb{R}^{hw\times L}$, where $L$
denotes the length of text embedding. We repurpose the indices of different
regions identified in the previous SAC process and set the attention values
between the text and regions other than $B^{\prime}$ (i.e., $A$, $A^{\prime}$,
and $B$) to zero:
(5)
$\mathcal{M}_{c}(A):=0;\mathcal{M}_{c}(A^{\prime}):=0;\mathcal{M}_{c}(B):=0.$
As illustrated in Figure 5, we utilize the attention map post-softmax, as we
are completely obstructing the relationship between the text and regions
outside of $B^{\prime}$.
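A minimal sketch of CAM applied to a post-softmax cross-attention map follows (the helper is our own; the row/column conventions are assumptions):

```python
import torch

def mask_cross_attention(attn, h, w):
    """CAM: zero the post-softmax cross-attention of all image tokens outside
    B', so the text prompt only guides the bottom-right grid cell.
    attn: (batch*heads, h*w, L), rows = image tokens, cols = text tokens."""
    idx = torch.arange(h * w).reshape(h, w)
    keep = torch.zeros(h * w, dtype=torch.bool)
    keep[idx[h // 2:, w // 2:].reshape(-1)] = True  # B' quadrant
    attn[:, ~keep, :] = 0.0
    return attn
```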
As for the attention map indexing in SAC and CAM, due to the fixed positions
of each image, we are able to pre-calculate the indices required for
extracting the necessary sub-attention maps (e.g., $\mathcal{M}_{s}(A,B)$ and
$\mathcal{M}_{c}(A)$) from the entire attention map. This pre-determination
streamlines the entire pipeline, enhancing its simplicity and efficiency.
## 5\. Experiments
### 5.1. Implementation Details
We implement our work in PyTorch (Paszke et al., 2019). The input images $A$,
$A^{\prime}$, $B$ are resized to $256\times 256$ and spatially combined to
form a $512\times 512$ grid-like image. We use a publicly available Stable
Diffusion inpainting model (https://huggingface.co/runwayml/stable-diffusion-inpainting).
The model is initialized with SD1.2 and trained on the inpainting
task, and is therefore capable of inpainting the missing areas specified by a
mask. The UNet architecture contains 16 blocks, each consisting of one
cross-attention and one self-attention layer. We perform SAC and CAM from layer
3 to 10 at all timesteps in the diffusion process. The scale for
classifier-free guidance is set to $15$. The coefficient for self-attention
cloning is $s=1.3$ in all experiments except skeleton-to-image, where $s=1.4$.
All experiments are conducted on an RTX 3090 GPU.
### 5.2. Evaluation Setup
##### Dataset
We employ the following three major categories, totaling ten tasks to evaluate
the effectiveness of the proposed method quantitatively: low-level tasks,
manipulation tasks, and more challenging vision tasks.
* •
Low-level tasks. We test our method on four low-level tasks, i.e., image
colorization, image deblurring, image denoising, and image enhancement. For
the first three tasks, we sample in-the-wild images from ImageNet (Deng et
al., 2009) and apply the corresponding transformations (i.e., grayscale
conversion, Gaussian blur, adding noise). For image enhancement, we use the LOL
dataset (Chen et al., 2018), which consists of low/normal-light image pairs. We collect 100
samples for each low-level task.
* •
Manipulation tasks. We select three kinds of image manipulation tasks (i.e.,
image editing, image translation, and style transfer) from the CLIP-filtered
subset processed by InstructPix2Pix (Brooks et al., 2023). Since the dataset
is constructed for general image editing, we split the samples into three
tasks based on keywords. Instructions containing “add” or “remove” are
considered image editing tasks, while those with “make”, “turn”, or “change”
are image translation tasks. Each manipulation task contains 200 samples.
* •
Vision tasks. We select three more challenging vision tasks for evaluation:
skeleton-to-image generation from UBC-Fashion (Zablotskaia et al., 2019),
mask-to-image generation from ScanNet (Dai et al., 2017), and image inpainting
from DAVIS dataset (Perazzi et al., 2016). Each task contains 200 samples.
By developing these three major categories, we can evaluate if the pretrained
model is capable of understanding, processing, and utilizing visual
information across various levels, while also evaluating its ability to
generalize effectively across these tasks.
##### Baseline methods
We take four methods, MAEVQGAN (Bar et al., 2022), PromptDiffusion (Wang et
al., 2023a), DIA (Šubrtová et al., 2023), and VISII (Nguyen et al., 2023), as
our baselines. All baselines are run with their official implementations and
checkpoints. Since PromptDiffusion (Wang et al., 2023a) requires text
as part of its input, but most of the test datasets (such as the low-level
ones) do not have paired text descriptions, we feed PromptDiffusion the same
GPT-4V-generated text prompts as our method to ensure a fair comparison.
##### Evaluation Metrics
We evaluate the model’s ICL capacity via the CLIP direction similarity between
the demonstration and the produced results. We utilize the Image Encoder from
CLIP to extract the image features of $A$, $A^{\prime}$, $B$, and the
generated $B^{\prime}$. Then, we calculate the cosine similarity between the
directional changes from $A$ to $A^{\prime}$ and from $B$ to $B^{\prime}$. The
higher the similarity, the more consistent the inferred $B^{\prime}$ is with
the transformation effects applied to $A$. Due to the generation diversity of
diffusion models, we do not compare pixel-level metrics like SSIM and PSNR.
Instead, we calculate FID between the generated $B^{\prime}$ images and the
ground truth images. In order to obtain more accurate results, we merge all the
data in each major category to calculate the FID values for comparison.
### 5.3. Qualitative Results
Figure 6 presents a comparison of our method with the baselines on all ten
tasks. For MAEVQGAN (Bar et al., 2022), due to the lack of specific
structuring of training data into the form of tasks and the absence of textual
guidance, the quality of the generated output is relatively poor, especially
for high-level tasks like manipulation. For PromptDiffusion (Wang et al.,
2023a), the bias in training task (i.e., image-to-HED, HED-to-image)
significantly impacts the ICL generalizability of the model. As shown in the
examples of deblurring and translation, the results tend to be line drawings
similar to edge detection results. The other two inference-based methods,
DIA (Šubrtová et al., 2023) and VISII (Nguyen et al., 2023), conduct in-
context learning solely through the estimated text, making it difficult to
provide sufficiently accurate prompt information to generate the correct
results. Our method takes into account guidance at both the visual and
semantic levels, which can produce accurate and reasonable in-context outputs.
Notice that GPT-4V prompting may struggle with vision tasks, giving coarse
descriptions. For example, “person in dress standing” in the skeleton-to-image
example does not specify what pose the woman should be standing in. However,
thanks to the proposed SAC operation, this structure-aware in-context
information can still be captured and utilized to
produce the correct results. Figure 7 shows further results of Analogist on
these tasks, demonstrating the ICL capabilities of our proposed method. More
randomly selected results are shown in supplementary materials.
Additionally, we conducted a comparison with ImageBrush (SUN et al., 2023).
Since ImageBrush has not released its code, the comparison is made within the
range of ImageBrush’s training tasks. As shown in Figure 8, it is worth
noting that our method is more effective at preserving the details in Image
$B$. Especially in manipulation tasks, the color of the aurora, the contour
structure of the animals, and the texture on the clothing are better
preserved. This is because our proposed visual and textual prompting contain
more detailed in-context information. On the three vision tasks, we achieve
competitive results with ImageBrush. Note that our model is not fine-tuned
specifically for these tasks, which demonstrates the superior in-context
generalizability of our inference-based method.
Table 1. Quantitative comparison on different categories of tasks with previous ICL approaches. We report the cosine similarity between the CLIP direction from $A$ to $A^{\prime}$ and from $B$ to $B^{\prime}$. Higher similarity represents more contextually appropriate generated results. The best results are highlighted.

Category | Task | MAEVQGAN | PromptDiffusion | DIA | VISII | Analogist
---|---|---|---|---|---|---
Low-level tasks | Colorization | 0.0558 | 0.1283 | 0.0066 | 0.1061 | 0.1797
 | Deblur | -0.0961 | 0.0251 | -0.1337 | 0.0081 | 0.0608
 | Denoise | -0.0389 | 0.1612 | 0.1212 | 0.1098 | 0.2391
 | Enhancement | 0.1120 | 0.1551 | -0.1443 | 0.2181 | 0.2251
Manipulation tasks | Image Editing | 0.1600 | 0.1768 | 0.0922 | 0.2181 | 0.1800
 | Image Translation | 0.2526 | 0.2426 | 0.1617 | 0.2965 | 0.3136
 | Style Transfer | 0.2274 | 0.2336 | 0.1515 | 0.2687 | 0.2455
Vision tasks | Skeleton-to-image | 0.4452 | 0.6150 | 0.2874 | 0.5201 | 0.7334
 | Mask-to-image | 0.4467 | 0.3984 | 0.1590 | 0.3071 | 0.5531
 | Inpainting | -0.0357 | 0.0014 | -0.0511 | 0.0619 | 0.1013
Average | | 0.1529 | 0.2137 | 0.0650 | 0.2104 | 0.2832
Table 2. Comparison of FID between the generated $B^{\prime}$s and the ground-truth images. The best results are highlighted. Our method outperforms previous methods in all three task categories.

Method | Low-level | Manipulation | Vision
---|---|---|---
MAEVQGAN | 181.48 | 143.19 | 169.74
PromptDiffusion | 180.39 | 111.79 | 159.02
DIA | 173.10 | 103.39 | 191.51
VISII | 140.39 | 88.36 | 138.44
Analogist | 114.15 | 85.67 | 96.67
Table 3. User study results. For each task category, we report the average percentage of results selected by the users. The best results are highlighted. Our approach garnered the highest number of selections.

Method | Low-level | Manipulation | Vision
---|---|---|---
MAEVQGAN | 3.51% | 3.45% | 0.87%
PromptDiffusion | 5.33% | 14.99% | 9.09%
DIA | 4.88% | 3.32% | 0.43%
VISII | 20.18% | 18.30% | 15.58%
Analogist | 66.10% | 59.95% | 74.03%
### 5.4. Quantitative Comparisons
##### CLIP Direction
We compute the following CLIP direction similarity,
$cos[(\mathcal{E}(B^{\prime})-\mathcal{E}(B)),(\mathcal{E}(A^{\prime})-\mathcal{E}(A))]$,
to evaluate how faithfully the transformations provided by the model adhere to
the transformations contained in the given examples. The results are shown
in Table 1. Note that VISII (Nguyen et al., 2023) achieves acceptable results
in manipulation tasks since the model it utilizes is pretrained on the
InstructPix2Pix dataset (Brooks et al., 2023). Overall, our method demonstrates superior ICL
capabilities across all these tasks.
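This metric is straightforward to reproduce; the sketch below uses the Hugging Face CLIP implementation, and the specific CLIP variant is our assumption rather than the paper's stated choice.

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

@torch.no_grad()
def clip_direction_similarity(img_A, img_Ap, img_B, img_Bp):
    """cos[(E(B') - E(B)), (E(A') - E(A))] over CLIP image embeddings."""
    inputs = processor(images=[img_A, img_Ap, img_B, img_Bp], return_tensors="pt")
    e = model.get_image_features(**inputs)  # shape (4, d)
    return torch.nn.functional.cosine_similarity(
        e[3] - e[2], e[1] - e[0], dim=0).item()
```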
##### Fréchet inception distance (FID)
We calculate FID between generated images and ground truth on the entire major
category. The results are shown in Table 2. The proposed Analogist outperforms
all baselines across the three major tasks. Notice that VISII (Nguyen et al.,
2023) outperforms other baselines on manipulation tasks. This is because VISII
leverages an InstructPix2Pix (Brooks et al., 2023) model which is pretrained
on the same dataset, making it more familiar with generating data of similar
quality.
##### User Study
We conduct a user study to evaluate the perceptual performance of our method.
The user study consisted of 50 questions covering all ten tasks, with 42
participants involved. In each question, first, we presented
the participants with images $A$ and $A^{\prime}$, asking them to analyze the
changes between them. Then, we provided image $B$ and tasked them with
predicting the expected transformation of $B$ following the same pattern.
Subsequently, we displayed the outputs generated by different methods for this
task, and the participants were required to select the one they deemed most
consistent with the identified pattern and of the highest generative quality.
We report the average selection result for the three major tasks: low-level
tasks, manipulation tasks, and vision tasks in Table 3. Our proposed method
was chosen most often in all three categories.
Figure 9. Ablation on the proposed components. An input $2\times 2$ image grid
is inpainted by: (a) pretrained SD Inpainting model with random noise as
input, (b) initializing $B^{\prime}$ as noised $B$, (c) adding negative
prompt, (d-1) adding self-attention cloning (SAC) by
$\mathcal{M}_{s}(B,B^{\prime}):=\mathcal{M}_{s}(A,A^{\prime})$, (d-2) adding
SAC by $\mathcal{M}_{s}(A^{\prime},B^{\prime}):=\mathcal{M}_{s}(A,B)$, (e)
adding GPT-4V prompting without cross-attention masking (CAM), and (f) adding
CAM (the full approach). Source images: The $1^{st}$ row are generated by
DALLE-3 (Betker et al., 2023) and all others are from InstructPix2Pix (Brooks
et al., 2023).
Figure 10. Ablation on the graphical instructions in GPT-4V prompting. By
adding marks and arrows, the identity and relation of the task becomes more
obvious, making it easier for GPT-4V to produce proper text prompt. Source
images: InstructPix2Pix (Brooks et al., 2023).
Figure 11. Ablation on hyper-parameters. In the first row, lower coefficient
$s$ produces results more like $B$, while higher $s$ transfers more feature of
$A^{\prime}$. In the second row, performing SAC and CAM at middle layers
($16\times 16$) of the UNet achieves balance between structure preserving and
transformation applying. Source images: InstructPix2Pix (Brooks et al., 2023).
### 5.5. Ablation Study
##### Effectiveness of proposed components
To evaluate the effectiveness of the proposed components, we conduct a series
of ablation studies. The ablation results are presented in Figure 9. (a) The
baseline model of pretrained inpainting model generates rough and low-quality
results. (b) By pasting $B$ to the bottom right corner of the grid image, the
outputs are more structurally consistent with $B$. (c) Adding negative prompts
helps to stabilize the generation process and avoid messy results. (d-1)
Crucially, when operating self-attention cloning by
$\mathcal{M}_{s}(B,B^{\prime}):=\mathcal{M}_{s}(A,A^{\prime})$, the model
retains the information from $B$, but is unable to extract accurate context
from $A^{\prime}$ to infer the same transformation result. (d-2) When
executing SAC by
$\mathcal{M}_{s}(A^{\prime},B^{\prime}):=\mathcal{M}_{s}(A,B)$, the model is
required to keep the structural relation between $A$ and $B$ consistent, after
they have been transformed into $A^{\prime}$ and $B^{\prime}$. Thus, we use
(d-2) instead of (d-1). (e) When adding textual prompts from GPT-4V in the
whole grid image, the model rarely focuses the text guidance on the target
inpainting area $B^{\prime}$. (f) Finally, with the proposed CAM, our full
approach not only maintained respectable generation quality but also
successfully identified the necessary visual editing (adding sunglasses),
effects (applying a cubist style), and transformations (changing church into
mosque) for the ICL task.
##### GPT-4V Prompting
We ablate the designed graphical instructions used to guide GPT-4V in
Figure 10. Without adding the visual marks on the grid image, GPT-4V may not
know the corresponding relationships of the given images and is therefore unable
to correctly analyze the content according to the instructions. By explicitly
marking the positions of images ($A$, $A^{\prime}$, $B$, and $B^{\prime}$) on
the constructed grid image, GPT-4V conveniently understands the information
contained in the pictures. Meanwhile, the introduced arrows from $A$ to
$A^{\prime}$ and $B$ to $B^{\prime}$ successfully demonstrate the
transformation relations, making it easier for GPT-4V to produce the
ideal response of adding a “pagoda in the snowy forest”. This text prompt will
introduce semantic contextual information for the pretrained model to
understand the task. Note that our method is generic and supports other
vision-language models (Zhu et al., 2023) as well.
Figure 12. Given the same image $A$ and $B$ in the first column, and different
$A^{\prime}$s, our method is able to recognize the contextual relation between
$A$ and $A^{\prime}$ and produce the output $B^{\prime}$ images accordingly.
Source image: $A$ and $B$ are from ImageBrush (SUN et al., 2023).
$\\{A_{1}^{\prime},A_{2}^{\prime},A_{3}^{\prime},A_{4}^{\prime}\\}$ are
generated using MasaCtrl (Cao et al., 2023).
Table 4. Comparison of inference time taken to perform one ICL task for different methods. Compared to existing methods, our method does not require training on a specific task or additional optimization.

Method | Inference time
---|---
MAEVQGAN (Bar et al., 2022) | 0.4s
PromptDiffusion (Wang et al., 2023a) | 4s
DIA (Šubrtová et al., 2023) | 258s
VISII (Nguyen et al., 2023) | 685s
Analogist (ours) | 4s
##### Hyper-parameters
We present ablation on the parameter sensitivity of our proposed method in
Figure 11. As for the SAC coefficient $s$, utilizing a smaller $s$ value
($s=0.5$) results in an output more closely resembling the original Image $B$,
whereas a larger value ($s=1.3$) tends to imbue the result with
characteristics of $A^{\prime}$. However, an excessively large coefficient
($s=1.8$) leads to an overly unbalanced attention map, which in turn reduces
the quality of generation. We also ablate the selection of UNet layers in
which we perform SAC and CAM. The results indicate that it is necessary to
perform operations simultaneously in both the encoder and the decoder.
Furthermore, if the operations are performed at a shallow level (high
resolution), the outcome is merely a simple replication of some colors and
coarse textures, leading to poor quality. If the operations are performed at a
deeper level (low resolution), the excessive compression of information leads
to the generated result being similar to the original image $B$. In our
experiments, we perform SAC and CAM at a middle level of the UNet layers.
### 5.6. Analysis
##### Different In-context examples
A model with contextual reasoning abilities should be able to produce
different results based on different in-context examples, when given the same
input. To verify that our approach has such capabilities, we conducted the
following experiment, as shown in Figure 12. Given the same image $A$ (an
image of wolves), we first translate $A$ into different example outputs
$\left\\{A^{\prime}_{1},A^{\prime}_{2},A^{\prime}_{3},A^{\prime}_{4}\right\\}$
using MasaCtrl (Cao et al., 2023), obtaining different animals like lion,
tiger, dog, and panda. We construct different ICL tasks, keeping the image $A$
and $B$ being the same, while varying the image $A^{\prime}$s. Our method is
able to recognize the translation from $A$ to $A^{\prime}$ accordingly and
generate the corresponding animals in $B^{\prime}$, demonstrating the ICL
capacity of our Analogist.
##### Inference Runtime
In this section, we compare the time each ICL method takes to perform one
task. Our experiment is conducted on an RTX 3090 GPU, and we
measure the time taken to generate one image. The results are shown in Table 4.
MAEVQGAN (Bar et al., 2022) is the least time-consuming, taking 0.4 seconds,
since it generates very few tokens without the need for iterative
denoising. Our method Analogist takes about 4 seconds, the same as
PromptDiffusion (Wang et al., 2023a), which is the typical sampling
time for diffusion models, but requires no task-specific fine-tuning. As for
the previous inference-based methods DIA (Šubrtová et al., 2023) and VISII
(Nguyen et al., 2023), it takes a rather long time (i.e., 258 seconds and 685
seconds) for these two methods to estimate the CLIP feature and editing
instruction, respectively.
Figure 13. Examples of application for tasks where $A$ and $A^{\prime}$ are
aligned. The text prompts generated by GPT-4V are shown below each example.
Output images are highlighted. Source images: Photo-to-caricature images are
from CariMe (Gu et al., 2021). Sketch-to-portrait images are from
DeepFaceDrawing (Chen et al., 2020). Normal-to-RGB images are from Trevithick
et al. (2024). Icon images are from IconShop (Wu et al., 2023).
Figure 14. Illustration of the pipeline for tasks in which $A$ is aligned with
$B$ instead of $A^{\prime}$. We swap the positions of $A^{\prime}$ and $B$ in
the grid image. Through this way, we simplify the problem into aligned tasks.
Source images: generated by DALLE-3 (Betker et al., 2023).
Figure 15. Examples of application for tasks where $A$ and $B$ are aligned.
The text prompts of GPT-4V are shown below each example. Output images are
highlighted. Source images: The example images of the first motion transfer
case are from Chang et al. (2023). The other three example images are
generated by DALLE-3 (Betker et al., 2023).
Figure 16. Examples of application for tasks where $A$, $A^{\prime}$ and $B$
are all misaligned. We test our method without SAC, only CAM is applied.
Output images are highlighted. Source images: MAEVQGAN (Bar et al., 2022).
## 6\. Application
In this section, we extend Analogist to three categories of applications: (a)
$A$ and $A^{\prime}$ are aligned, (b) $A$ and $B$ are aligned, and (c) $A$,
$A^{\prime}$, and $B$ are all misaligned. For (b) and (c), we make adjustments
to our method accordingly.
### 6.1. $A$ and $A^{\prime}$ are aligned
Under the condition that $A$ and $A^{\prime}$ are aligned, we show examples of
applications in Figure 13, e.g., photo-to-caricature, sketch-to-portrait,
normal-to-RGB, and icon-to-image tasks. The results show that our method is
able to generate reasonable results on these tasks. Notice that there are
slight structural changes between $A$ and $A^{\prime}$ for photo-to-caricature
and icon-to-image. However, our method is still robust to these minor issues
since we are providing in-context information from both structural and
semantic levels.
### 6.2. $A$ and $B$ are aligned
We make it possible to address tasks where $A$ is aligned with $B$ instead of
$A^{\prime}$. We give an example of object multiplication in Figure 14, where
$A$ contains one brick and $A^{\prime}$ contains a brick stack. This problem
cannot be handled by our original pipeline. To tackle this problem, we swap
the positions of $A^{\prime}$ and $B$ in the grid image, constructing a new
grid image where $A^{\prime}$ contains one brick and $B$ contains a stack of
bricks. In this way, we simplify the task into one where $A$ and $A^{\prime}$
are aligned again, i.e., changing the task of turning one brick into brick
stack into the task of changing bricks into golden bricks. This strategy can
be applied to tasks like motion transfer and image analogy, where $A$ and
$A^{\prime}$ are misaligned, as shown in Figure 15. We also demonstrate our
method’s ability to address tasks with multiple transformations, such as
combined motion editing and style transfer, and object multiplication with
editing.
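The swap itself amounts to permuting the grid cells; a minimal sketch reusing the hypothetical build_grid_prompt helper from the Section 4.1.1 sketch:

```python
def build_swapped_grid(img_A, img_Ap, img_B, size=256):
    """For tasks where A aligns with B rather than with A', swap the grid roles
    of A' and B: [A, A'; B, B'] becomes [A, B; A', B'], reducing the problem to
    the aligned case handled by the original pipeline."""
    # build_grid_prompt is the hypothetical helper sketched in Section 4.1.1
    return build_grid_prompt(img_A, img_B, img_Ap, size=size)
```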
### 6.3. $A$, $A^{\prime}$, and $B$ are all misaligned
We extend our method on tasks where $A$, $A^{\prime}$, and $B$ are all
misaligned in Figure 16, such as changing a circle to a square, resizing a big
circle to a smaller one, extrapolating new content of numbers and letters. We
test our method without SAC to prevent incorrect structure guidance. Analogist
produces reasonable results and outperforms MAEVQGAN. It should be pointed out
that the quality of long letter-sequence generation still has room for
improvement due to the notorious tendency of diffusion models to struggle with
generating high-quality text. Nevertheless, we believe these results
demonstrate that pretrained generative models have ample in-context potential
to be further tapped.
(a) Example of an inaccurate prompt by GPT-4V. The expected correct prompt is shown
above the image with the critical words marked in green. The prompt given by
GPT-4V is shown below with the wrong words in red.
(b) Failure examples of generating unnatural images that the model rarely
sees during the pretraining stage, for example, normal maps and
abstract icons.
(c) Example where $A$, $A^{\prime}$, and $B$ are all misaligned, so that SAC is
not applicable.
Figure 17. Example of failure cases. (a) GPT-4V fails to accurately deduce the
correct textual prompt from the given grid images when the transformation
(adding a polar bear) or category (elephant, instead of lion) is ambiguous.
(b) The model fails to generate unnatural images like normal maps or icons
even though given the right text prompt. (c) The proposed SAC struggles with
tasks where $A$, $A^{\prime}$, and $B$ are all misaligned. Source image:
Trevithick et al. (2024), IconShop (Wu et al., 2023), and DALLE-3 (Betker et
al., 2023).
## 7\. Limitation
Although our approach enhances in-context learning abilities, it is important
to consider three possible limitations. Firstly, the inpainting model might be
misled by incorrect text descriptions. In Figure 17(a), when the
transformation from $A$ to $A^{\prime}$ is minor (i.e., the added object in
the first case is small and easily overlooked), GPT-4V fails to recognize it.
The second case shows a style transfer task of drawing “a sketch of
elephant”. However, GPT-4V recognizes the object as a lion instead of an
elephant, leading to inaccurate guidance. A potential solution could be to
leave an interface for users to monitor and customize the text prompts in
real time.
Secondly, the model struggles with producing data that it seldom sees during
the training stage. As shown in Figure 17(b), when asked to produce unnatural
images like normal maps and line-drawing icons, the model fails to generate
accurate results since most of its training data are natural RGB images. This
also explains our method’s mediocre performance on vision tasks
compared to ImageBrush (SUN et al., 2023). We believe this could potentially
be alleviated by employing a more powerful pretrained base model.
Finally, the proposed self-attention cloning may struggle with scenarios in
which $A$, $A^{\prime}$, and $B$ are all misaligned, as shown in Figure 17(c).
The structural-level information is not applicable in this case. One possible
solution is to rely on semantic-level information to produce the
transformation as discussed in Section 6.3.
## 8\. Conclusion
Addressing the limitations of inaccurate instruction and tedious optimization
of existing inference-based methods, we introduced Analogist, a novel approach
for visual In-Context Learning (ICL) combining visual and textual prompting.
The proposed method utilizes a text-to-image diffusion model pretrained for
image inpainting, making it an out-of-the-box solution for a wide range of
visual tasks. We innovate with Self-Attention Cloning (SAC) for visual
prompting, enabling fine-grained structural-level analogy, and leverage
GPT-4V’s visual reasoning for efficient textual prompting, supplemented by
Cross-Attention Masking (CAM) for enhanced semantic-level analogy accuracy.
Our approach, without the need for extra training or optimization,
demonstrates superior performance in both qualitative and quantitative
measures, showcasing robust ICL capabilities.
###### Acknowledgements.
This work was supported in part by the National Natural Science Foundation of
China under Grant 62276128 and Grant 62192783, in part by the Collaborative
Innovation Center of Novel Software Technology and Industrialization, and a
GRF grant from the Research Grants Council (RGC) of the Hong Kong Special
Administrative Region, China [Project No. CityU 11216122].
## References
* Bai et al. (2023) Yutong Bai, Xinyang Geng, Karttikeya Mangalam, Amir Bar, Alan Yuille, Trevor Darrell, Jitendra Malik, and Alexei A Efros. 2023. Sequential Modeling Enables Scalable Learning for Large Vision Models. _arXiv preprint arXiv:2312.00785_ (2023).
* Bar et al. (2022) Amir Bar, Yossi Gandelsman, Trevor Darrell, Amir Globerson, and Alexei Efros. 2022. Visual prompting via image inpainting. _Advances in Neural Information Processing Systems_ 35 (2022), 25005–25017.
* Baykal et al. (2023) Ahmet Canberk Baykal, Abdul Basit Anees, Duygu Ceylan, Erkut Erdem, Aykut Erdem, and Deniz Yuret. 2023. CLIP-guided StyleGAN Inversion for Text-driven Real Image Editing. _ACM Transactions on Graphics_ 42, 5 (2023), 1–18.
* Betker et al. (2023) James Betker, Gabriel Goh, Li Jing, Tim Brooks, Jianfeng Wang, Linjie Li, Long Ouyang, Juntang Zhuang, Joyce Lee, Yufei Guo, et al. 2023\. Improving image generation with better captions. https://cdn.openai.com/papers/dall-e-3.pdf (2023).
* Brooks et al. (2023) Tim Brooks, Aleksander Holynski, and Alexei A Efros. 2023. Instructpix2pix: Learning to follow image editing instructions. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 18392–18402.
* Brown et al. (2020) Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020\. Language models are few-shot learners. _Advances in Neural Information Processing Systems_ 33 (2020), 1877–1901.
* Cao et al. (2023) Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. 2023. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 22560–22570.
* Chang et al. (2023) Di Chang, Yichun Shi, Quankai Gao, Jessica Fu, Hongyi Xu, Guoxian Song, Qing Yan, Xiao Yang, and Mohammad Soleymani. 2023. MagicDance: Realistic Human Dance Video Generation with Motions & Facial Expressions Transfer. _arXiv preprint arXiv:2311.12052_ (2023).
* Chen et al. (2020) Shu-Yu Chen, Wanchao Su, Lin Gao, Shihong Xia, and Hongbo Fu. 2020. DeepFaceDrawing: Deep generation of face images from sketches. _ACM Transactions on Graphics_ 39, 4 (2020), 72–1.
* Chen et al. (2018) Wei Chen, Wang Wenjing, Yang Wenhan, and Liu Jiaying. 2018. Deep Retinex Decomposition for Low-Light Enhancement. In _British Machine Vision Conference_. British Machine Vision Association.
* Dai et al. (2017) Angela Dai, Angel X Chang, Manolis Savva, Maciej Halber, Thomas Funkhouser, and Matthias Nießner. 2017. Scannet: Richly-annotated 3d reconstructions of indoor scenes. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 5828–5839.
* Deng et al. (2009) Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In _2009 IEEE Conference on Computer Vision and Pattern Recognition_. IEEE, 248–255.
* Diamanti et al. (2015) Olga Diamanti, Connelly Barnes, Sylvain Paris, Eli Shechtman, and Olga Sorkine-Hornung. 2015. Synthesis of Complex Image Appearance from Limited Exemplars. _ACM Transactions on Graphics_ (Mar 2015), 1–14. https://doi.org/10.1145/2699641
* Dong et al. (2022) Qingxiu Dong, Lei Li, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu Sun, Jingjing Xu, and Zhifang Sui. 2022. A survey for in-context learning. _arXiv preprint arXiv:2301.00234_ (2022).
* Dosovitskiy et al. (2020) Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. _arXiv preprint arXiv:2010.11929_ (2020).
* Gu et al. (2021) Zheng Gu, Chuanqi Dong, Jing Huo, Wenbin Li, and Yang Gao. 2021. CariMe: Unpaired caricature generation with multiple exaggerations. _IEEE Transactions on Multimedia_ 24 (2021), 2673–2686.
* He et al. (2023) Jiabang He, Lei Wang, Yi Hu, Ning Liu, Hui Liu, Xing Xu, and Heng Tao Shen. 2023. ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 19485–19494.
* Hertzmann et al. (2001) Aaron Hertzmann, Charles E. Jacobs, Nuria Oliver, Brian Curless, and David H. Salesin. 2001. Image analogies. In _Proceedings of the 28th annual conference on Computer graphics and interactive techniques_. https://doi.org/10.1145/383259.383295
* Ho et al. (2020) Jonathan Ho, Ajay Jain, and Pieter Abbeel. 2020. Denoising diffusion probabilistic models. _Advances in Neural Information Processing Systems_ 33 (2020), 6840–6851.
* Jamriška et al. (2019) Ondřej Jamriška, Šárka Sochorová, Ondřej Texler, Michal Lukáč, Jakub Fišer, Jingwan Lu, Eli Shechtman, and Daniel Sýkora. 2019. Stylizing video by example. _ACM Transactions on Graphics_ (Aug 2019), 1–11. https://doi.org/10.1145/3306346.3323006
* Li et al. (2022) Junnan Li, Dongxu Li, Caiming Xiong, and Steven Hoi. 2022. Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation. In _International Conference on Machine Learning_. PMLR, 12888–12900.
* Liao et al. (2017) Jing Liao, Yuan Yao, Lu Yuan, Gang Hua, and Sing Bing Kang. 2017. Visual attribute transfer through deep image analogy. _ACM Transactions on Graphics_ 36, 4 (2017), 120.
* Najdenkoska et al. (2023) Ivona Najdenkoska, Animesh Sinha, Abhimanyu Dubey, Dhruv Mahajan, Vignesh Ramanathan, and Filip Radenovic. 2023. Context Diffusion: In-Context Aware Image Generation. _arXiv preprint arXiv:2312.03584_ (2023).
* Nguyen et al. (2023) Thao Nguyen, Yuheng Li, Utkarsh Ojha, and Yong Jae Lee. 2023. Visual Instruction Inversion: Image Editing via Image Prompting. In _Thirty-seventh Conference on Neural Information Processing Systems_. https://openreview.net/forum?id=l9BsCh8ikK
* Parmar et al. (2023) Gaurav Parmar, Krishna Kumar Singh, Richard Zhang, Yijun Li, Jingwan Lu, and Jun-Yan Zhu. 2023. Zero-shot image-to-image translation. In _ACM SIGGRAPH 2023 Conference Proceedings_. 1–11.
* Paszke et al. (2019) Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. Pytorch: An imperative style, high-performance deep learning library. _Advances in Neural Information Processing Systems_ 32 (2019).
* Patashnik et al. (2021) Or Patashnik, Zongze Wu, Eli Shechtman, Daniel Cohen-Or, and Dani Lischinski. 2021. Styleclip: Text-driven manipulation of stylegan imagery. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 2085–2094.
* Perazzi et al. (2016) Federico Perazzi, Jordi Pont-Tuset, Brian McWilliams, Luc Van Gool, Markus Gross, and Alexander Sorkine-Hornung. 2016. A benchmark dataset and evaluation methodology for video object segmentation. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition_. 724–732.
* Radford et al. (2021) Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In _International Conference on Machine Learning_. PMLR, 8748–8763.
* Rombach et al. (2022) Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. 2022. High-resolution image synthesis with latent diffusion models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 10684–10695.
* Šubrtová et al. (2023) Adéla Šubrtová, Michal Lukáč, Jan Čech, David Futschik, Eli Shechtman, and Daniel Sýkora. 2023. Diffusion Image Analogies. In _ACM SIGGRAPH 2023 Conference Proceedings_. 1–10.
* SUN et al. (2023) Yasheng SUN, Yifan Yang, Houwen Peng, Yifei Shen, Yuqing Yang, Han Hu, Lili Qiu, and Hideki Koike. 2023. ImageBrush: Learning Visual In-Context Instructions for Exemplar-Based Image Manipulation. In _Thirty-seventh Conference on Neural Information Processing Systems_. https://openreview.net/forum?id=EmOIP3t9nk
* Trevithick et al. (2024) Alex Trevithick, Matthew Chan, Towaki Takikawa, Umar Iqbal, Shalini De Mello, Manmohan Chandraker, Ravi Ramamoorthi, and Koki Nagano. 2024. What You See is What You GAN: Rendering Every Pixel for High-Fidelity Geometry in 3D GANs. _arXiv preprint arXiv:2401.02411_ (2024).
* Wang et al. (2023b) Xinlong Wang, Wen Wang, Yue Cao, Chunhua Shen, and Tiejun Huang. 2023b. Images speak in images: A generalist painter for in-context visual learning. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_. 6830–6839.
* Wang et al. (2023c) Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, and Tiejun Huang. 2023c. SegGPT: Towards Segmenting Everything in Context. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 1130–1140.
* Wang et al. (2023a) Zhendong Wang, Yifan Jiang, Yadong Lu, yelong shen, Pengcheng He, Weizhu Chen, Zhangyang Wang, and Mingyuan Zhou. 2023a. In-Context Learning Unlocked for Diffusion Models. In _Thirty-seventh Conference on Neural Information Processing Systems_. https://openreview.net/forum?id=6BZS2EAkns
* Wei et al. (2022) Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022\. Chain-of-thought prompting elicits reasoning in large language models. _Advances in Neural Information Processing Systems_ 35 (2022), 24824–24837.
* Wu et al. (2023) Ronghuan Wu, Wanchao Su, Kede Ma, and Jing Liao. 2023. IconShop: Text-Guided Vector Icon Synthesis with Autoregressive Transformers. _ACM Transactions on Graphics_ 42, 6 (2023), 1–14.
* Xia et al. (2022) Weihao Xia, Yulun Zhang, Yujiu Yang, Jing-Hao Xue, Bolei Zhou, and Ming-Hsuan Yang. 2022. Gan inversion: A survey. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 45, 3 (2022), 3121–3138.
* Xu et al. (2023) Canwen Xu, Yichong Xu, Shuohang Wang, Yang Liu, Chenguang Zhu, and Julian McAuley. 2023. Small models are valuable plug-ins for large language models. _arXiv preprint arXiv:2305.08848_ (2023).
* Yang et al. (2023b) Jianwei Yang, Hao Zhang, Feng Li, Xueyan Zou, Chunyuan Li, and Jianfeng Gao. 2023b. Set-of-mark prompting unleashes extraordinary visual grounding in gpt-4v. _arXiv preprint arXiv:2310.11441_ (2023).
* Yang et al. (2023a) Zhengyuan Yang, Linjie Li, Kevin Lin, Jianfeng Wang, Chung-Ching Lin, Zicheng Liu, and Lijuan Wang. 2023a. The dawn of lmms: Preliminary explorations with gpt-4v(ision). _arXiv preprint arXiv:2309.17421_ 9, 1 (2023).
* Yuan et al. (2024) Liang Yuan, Dingkun Yan, Suguru Saito, and Issei Fujishiro. 2024. DiffMat: Latent diffusion models for image-guided material generation. _Visual Informatics_ (2024).
* Zablotskaia et al. (2019) Polina Zablotskaia, Aliaksandr Siarohin, Bo Zhao, and Leonid Sigal. 2019. Dwnet: Dense warp-based network for pose-guided human video generation. _arXiv preprint arXiv:1910.09139_ (2019).
* Zhang et al. (2023) Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. 2023. Adding conditional control to text-to-image diffusion models. In _Proceedings of the IEEE/CVF International Conference on Computer Vision_. 3836–3847.
* Zhu et al. (2023) Deyao Zhu, Jun Chen, Xiaoqian Shen, Xiang Li, and Mohamed Elhoseiny. 2023. MiniGPT-4: Enhancing Vision-Language Understanding with Advanced Large Language Models. In _The Twelfth International Conference on Learning Representations_.
Puyuan Peng1, Shang-Wen Li2, Okko Räsänen3, Abdelrahman Mohamed4, David
Harwath1
# Syllable Discovery and Cross-Lingual Generalization in
a Visually Grounded, Self-Supervised Speech Model
###### Abstract
In this paper, we show that representations capturing syllabic units emerge
when training a self-supervised speech model with a visually-grounded training
objective. We demonstrate that a nearly identical model architecture (HuBERT)
trained with a masked language modeling loss does not exhibit this same
ability, suggesting that the visual grounding objective is responsible for the
emergence of this phenomenon. We propose the use of a minimum cut algorithm to
automatically predict syllable boundaries in speech, followed by a 2-stage
clustering method to group identical syllables together. We show that our
model not only outperforms a state-of-the-art syllabic segmentation method on
the language it was trained on (English), but also generalizes in a zero-shot
fashion to Estonian. Finally, we show that the same model is capable of zero-
shot generalization for a word segmentation task on 4 other languages from the
Zerospeech Challenge, in some cases beating the previous state-of-the-art
(code and model: https://github.com/jasonppy/syllable-discovery).
Index Terms: visually-grounded speech, speech segmentation, self-supervised
speech processing
## 1 Introduction
Traditionally, automatic speech recognition, speech synthesis, and spoken
language understanding tasks have relied on supervised learning and the
assumption that ground-truth text transcriptions of the training speech are
available. Such transcriptions are costly to collect and represent a major
hurdle in developing speech recognition and related technologies that can
serve the thousands of languages around the world.
Recently the speech community has made tremendous progress developing self-
supervised models that can learn powerful representations of the speech signal
by being pre-trained on untranscribed speech data. After pre-training the
models can be fine-tuned on a small amount of transcribed data to achieve
impressive performance on a variety of tasks [1, 2, 3, 4, 5]. Furthermore, the
representations learned by these models can be clustered into discrete speech
units that have been shown to be strongly correlated with words and phones [6,
7]. These units can be used to tokenize speech into a pseudo-text sequence,
which can be used as a drop-in replacement for a text transcription in a wide
variety of downstream tasks, giving rise to a new genre of ``textless'' speech
processing research [8, 9, 10, 11].
Because of the emergent nature of these units, it is not yet understood how to
control what type of linguistic structure (e.g. phones, syllables, words) they
will capture. It has been shown that the representations of self-supervised
speech models tend to correlate with lower-level structure such as phones at
lower model layers, and higher-level structure such as words at higher model
layers [6, 12]. However, it has also been demonstrated that the model's
training objective strongly influences the nature of these representations.
Training the model to perform cross-modal grounding of speech to contextually-
relevant visual images has been shown to dramatically increase the model's
word learning capability over a masked language modeling objective, even when
the model architecture is held nearly constant [7].
In this paper, we build on [7] and demonstrate that multimodal self-
supervision simultaneously results in the emergence of word-like and syllable-
like representations within the same model. While [7] showed that word-like
units are encoded by the Transformer's attention heads, we show that syllabic
structure emerges within the embeddings of the token sequence itself. We
propose the use of a minimum cut segmentation algorithm to derive syllable
boundaries from these features, outperforming a state-of-the-art method for
unsupervised syllabic segmentation. We then show that these segments can be
clustered across a speech corpus to perform syllable discovery, enabling
tokenization of the speech signal at the level of syllable-like units.
Finally, we also show surprising results where our model trained only on
English speech is able to perform zero-shot segmentation of syllables on
another language (Estonian) and words in multiple non-English languages, in
several cases outperforming the state-of-the-art models on the Zerospeech
challenge [13].
## 2 Related Work
Besides the aforementioned work on self-supervised and textless speech
processing, our work is also related to spoken term discovery and visually
grounded speech processing.
Spoken term discovery - inferring the temporal boundaries and identities of
words and short phrases from untranscribed speech audio data - has been an
important research direction in zero-resource speech processing [13]. The
earliest work tackling spoken term discovery dates back at least to the
segmental dynamic programming algorithm proposed by Park and Glass [14]. Since
then, numerous other approaches have been proposed. [15, 16] developed
Bayesian models for hierarchical phoneme and word discovery. Based on the fact
that syllables are organized around particularly sonorous speech sounds, [17]
developed a sonority-fluctuation-based method for syllabic segmentation. Other
works model words directly, either via an iterative segmenting-and-clustering
approach [18] or via reinforcement learning [19]. Self-supervised learning has
also been considered for end-to-end phoneme and word segmentation [20, 21].
Most recently, Algayres et al. [22] identified the key issues in applying
text-based models to speech segmentation and proposed the DP-Parse algorithm,
which uses an instance lexicon to mitigate clustering error. Kamper [23]
applied vector quantization for phoneme-like unit discovery, and then ran a
dynamic programming algorithm on the discovered units for word segmentation.
Visually grounded speech (VGS) processing [24] generalizes the idea of self-
supervised learning to multimodal (visual) data and learns speech
representations by associating speech audio with contextually-relevant visual
input. VGS usually leverages image-speech [25, 26] or video-speech [27, 28]
paired data. In practice, besides speech-image retrieval and alignment [29,
30, 31, 32, 33, 34], VGS models have also been shown to achieve competitive
performance on keyword spotting [35], query-by-example search [36], and
various tasks in the SUPERB benchmark [37, 38]. The study of linguistic
information learned by VGS models has been attracting increasing attention. In
particular, researchers have measured the phonetic, syllabic, and lexical
information in VGS models [39, 40, 6, 41, 42, 7, 43]. In addition to [7], on
which we build our work, [43] is the most relevant to ours: they studied the
emergence of phonetic, syllabic, and lexical information in different layers
of CNN-based VGS models. Our work differs from theirs in that none of the
modules of our model receives textual supervision, while their image encoder
is pre-trained on ImageNet classification [44]. In addition, we show the
emergence of hierarchical linguistic information in the non-hierarchical
Transformer model, while they use hierarchical CNN models.
## 3 Technical Approach
VG-HuBERT [7] is a self-supervised dual-encoder model trained using a
contrastive loss to match speech waveforms with the images they describe.
Although VG-HuBERT is not trained with any textual supervision, the model has
been shown to exhibit strong word discovery capabilities [7]. Specifically,
its CLS token places concentrated chunks of attention weight on word segments
in input utterances (see lower left subfigure of figure 1 for an example). Our
motivating hypothesis is that VG-HuBERT's word discovery ability is predicated
on its ability to also discover sub-word units at earlier layers. To probe
this we first extract a sequence of frame embeddings from some layer of the
model given an input waveform, $\mathbf{C}\in\mathbb{R}^{T\times D}$, ($T$ is
number of speech frames, $D$ is the feature dimension). Next, we then
calculate the feature self-similarity matrix as
$\text{featSSM}\mathrel{\mathop{\mathchar
58\relax}}=\mathbf{C}\mathbf{C}^{\intercal}$. We normalize featSSM by
subtracting smallest element of the matrix from all elements to insure that
all frame-pair similarity scores are non-negative. Figure 1 shows an example
of featSSM, where green color denotes high similarity and blue denotes low
similarity. We see a clear block diagonal structure in VG-HuBERT's featSSM,
where each block corresponds to a syllable. In HuBERT's featSSM, however, the
block structure hardly exists. Based on the different patterns we see between
the feature self-similarity matrix and the CLS attention, we hypothesize that
visually grounded training leads to the emergence of syllable identity being
encoded in VG-HuBERT's features, and the CLS token attending to these features
to infer the presence of words. To quantitatively study the syllable discovery
phenomenon, we adopt the normalized minimum cut algorithm [45, 46, 47] to
automatically segment the blocks in featSSM, and use the block boundaries to
predict syllable boundaries.
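To make this concrete, here is a minimal sketch of the featSSM computation,
assuming the frame features have already been extracted from a model layer as
a PyTorch tensor:

```python
import torch

def feat_ssm(C: torch.Tensor) -> torch.Tensor:
    """Normalized feature self-similarity matrix from frame features C (T x D)."""
    ssm = C @ C.T          # (T, T) pairwise frame similarity scores
    ssm = ssm - ssm.min()  # shift so every similarity score is non-negative
    return ssm
```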
A min-cut segmentation algorithm for featSSM. We define a fully-connected,
undirected graph $G(V,E)$ for every speech utterance. The set $V$ consists of
all speech frames as nodes; the set $E$ consists of edges, where the edge
weight $w(u,v)$ is defined as the similarity score corresponding to nodes $u$
and $v$. Segmenting the blocks in featSSM means partitioning the corresponding
graph $G(V,E)$ into disjoint sets $A_{1},A_{2},\cdots,A_{k}$ such that the
similarity among nodes (i.e. frames) within each set is maximized, while the
similarity between nodes in different sets is minimized. To achieve this, [45]
proposed the following objective:
$\text{Ncut}_{k}(V)=\frac{cut(A_{1},V-A_{1})}{vol(A_{1})}+\cdots+\frac{cut(A_{k},V-A_{k})}{vol(A_{k})}$
where $cut(A,B) := \sum_{u\in A,v\in B}w(u,v)$, and $vol(A) := \sum_{u\in A,v\in
V}w(u,v)$. For sequential data, the above minimization problem can be solved
using a dynamic programming algorithm [46] in $O(KN^{2})$ time. Here $K$ is
the number of partitions (in our case, the estimated number of syllables in
the utterance), and $N$ is the number of nodes (speech frames). $K$ needs to
be set up-front for every utterance, and we use a seconds-per-syllable
hyperparameter (secPerSyllable) to decide $K$ based on the duration of the
utterance. In practice, we use the variant introduced in [47], where we first
oversegment featSSM, and then iteratively merge temporally adjacent partitions
whenever the averaged features of the two partitions are sufficiently similar,
i.e. their cosine distance falls below some threshold (denoted as mergeThres).
We found that this variant always outperformed the original algorithm proposed
in [46].
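Below is a minimal sketch of this oversegment-then-merge step, stated under
our assumptions: `feats` is the (T, D) frame feature matrix, `bounds` is a
sorted list of frame indices (including the endpoints 0 and T) from the
initial oversegmentation, and `merge_thres` plays the role of mergeThres:

```python
import numpy as np

def merge_adjacent(feats: np.ndarray, bounds: list, merge_thres: float) -> list:
    """Remove a boundary whenever the two partitions it separates have
    averaged features whose cosine distance falls below merge_thres."""
    seg_mean = lambda a, b: feats[a:b].mean(axis=0)
    cos_dist = lambda u, v: 1.0 - float(u @ v) / (
        np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)

    merged = True
    while merged and len(bounds) > 2:
        merged = False
        for i in range(1, len(bounds) - 1):
            left = seg_mean(bounds[i - 1], bounds[i])
            right = seg_mean(bounds[i], bounds[i + 1])
            if cos_dist(left, right) < merge_thres:  # too similar: merge
                del bounds[i]
                merged = True
                break
    return bounds
```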
Clustering. With the hypothesized syllabic segment boundaries produced by the
min-cut algorithm, we further use a 2-step clustering approach to categorize
the segments. The average of the features within each segment is used as the
segment's embedding. We first cluster the segment embeddings using KMeans to
produce a large number of clusters, and then run agglomerative clustering to
merge similar clusters. We found this 2-step clustering approach to work
better than using KMeans alone, given the same number of final clusters. Since
our work and [7] are both based on VG-HuBERT, we denote [7]'s segmentation
approach as $\text{VG-HuBERT}_{\text{cls}}$, where the CLS attention is used
to segment speech, and denote our approach as $\text{VG-
HuBERT}_{\text{featSSM}}$, where the min-cut algorithm is applied to featSSM
for segmentation. Both approaches use the 2-step clustering method for segment
categorization.
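A minimal sketch of the 2-step clustering with scikit-learn follows; the
cluster counts mirror Section 4.2, and merging clusters by agglomerating their
KMeans centroids is our reading of the procedure, stated as an assumption:

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering

def two_step_cluster(seg_embs: np.ndarray, n_initial=16384, n_final=4096):
    """Assign each segment embedding to one of n_final syllable categories."""
    km = KMeans(n_clusters=n_initial, n_init=1).fit(seg_embs)
    # merge similar initial clusters by agglomerating their centroids
    agg = AgglomerativeClustering(n_clusters=n_final).fit(km.cluster_centers_)
    return agg.labels_[km.labels_]  # final cluster id for every segment
```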
Figure 1: Visualization of feature self-similarity matrix (upper) and the
attention (lower) in VG-HuBERT and HuBERT. The vertical white dotted lines are
generated by minCutMerge, and vertical blue dotted lines are generated by
taking the midpoint of boundaries of adjacent attention segments
## 4 Experiments
### 4.1 Datasets
Following [7], the training dataset is SpokenCOCO [48], an image-English
spoken caption dataset built on top of the MSCOCO image-text caption dataset
[49]. For evaluation on English, we use the test set of SpokenCOCO. Since
SpokenCOCO does not have syllable alignment, we first use the Montreal Forced
Aligner (https://montreal-forced-aligner.readthedocs.io/en/latest/) to
generate phonetic and word alignment, and then derive the corresponding
syllable alignment utilizing a rule-based syllabification script
(https://github.com/kylebgorman/syllabify). For cross-lingual
generalization experiments, we follow [17] and evaluate our approaches on
Estonian syllabic segmentation using the Phonetic Corpus of Estonian
Spontaneous Speech [50], which contains conversational speech between two test
subjects recorded with near-field microphones. The corpus comes with manually
verified syllable transcription and alignment. We also evaluate our approach
on the Zerospeech word segmentation task, which contains five languages:
Mandarin, English, French, German, and Wolof.
### 4.2 Implementation details
Model training. We use the official open-sourced codebase and training recipe
released by Peng and Harwath [7] and train a VG-HuBERT on SpokenCOCO. Model
snapshots are saved during training for syllable and word discovery analysis.
Evaluation. To evaluate segmentation performance, we use precision, recall, F1
and R-value [51, 23]. For the calculation of the above metrics, we use a tolerance
window of $50$ms for SpokenCOCO and Estonian following [17], and $30$ms for
the Zerospeech Challenge [13]. To evaluate the quality of our syllable
clustering, we first match hypothesized syllable segments with the ground
truth segments for each utterance. To do so, we use a Hungarian matching
algorithm where each segment is a node and edge weights are defined by
temporal intersection-over-union between each hypothesized segment and ground
truth segment (unmatched segments are assigned to a dummy segment). Then, we
follow [7] and use cluster purity and number of detected syllables (DS). A
syllable is defined as being detected if it achieves an F1 score greater than
$0.5$ for some cluster [7]. To avoid conflating word detection and syllable
detection, we only evaluate on multisyllabic words.
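A sketch of the segment-matching step, using SciPy's Hungarian solver; padding
the IoU matrix to a square shape plays the role of the dummy segments:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_segments(hyp, ref):
    """Match hypothesized to ground-truth segments by maximal temporal IoU.

    hyp, ref: lists of (start, end) times. Unmatched segments fall onto
    zero-weight padding entries (the dummy segments) and are dropped."""
    n = max(len(hyp), len(ref))
    iou = np.zeros((n, n))
    for i, (hs, he) in enumerate(hyp):
        for j, (rs, re) in enumerate(ref):
            inter = max(0.0, min(he, re) - max(hs, rs))
            union = (he - hs) + (re - rs) - inter
            iou[i, j] = inter / union if union > 0 else 0.0
    rows, cols = linear_sum_assignment(iou, maximize=True)
    return [(i, j) for i, j in zip(rows, cols)
            if i < len(hyp) and j < len(ref) and iou[i, j] > 0]
```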
Hyperparameter tuning. For SpokenCOCO, we tune the mergeThres to maximize the
segmentation R-value on the SpokenCOCO validation set. The number of clusters
in Kmeans and agglomerative clustering are fixed at $16384$ and $4096$. For
syllabic segmentation on Estonian, we tune the hyperparameters on a validation
set created following the procedure introduced in [17], using a subset of the
original Estonian corpus [50]. For cross-lingual word segmentation on the
Zerospeech challenge, we use the hyperparameters selected from the SpokenCOCO
validation set.
### 4.3 When do syllables and words emerge during training?
Figure 2: The performance of speech-image retrieval, and syllable and word
segmentation of VG-HuBERT as training progresses.
We first investigate when syllable and word information emerges during the
training of VG-HuBERT. In Figure 2, we show the syllable and word segmentation
performance of VG-HuBERT as a function of training iteration, along with
speech-image retrieval accuracy on the SpokenCOCO validation set. Since the
contrastive training loss is a direct approximation of the retrieval metric,
speech-image retrieval accuracy keeps improving throughout the course of
training, as expected. For syllabic segmentation, VG-HuBERT reaches a first
peak at 20*2k steps, and the performance continues to improve afterwards,
with a trend similar to retrieval performance. Interestingly, VG-HuBERT peaks
at 20*2k steps for word segmentation, and the performance slightly decreases
before levelling off. Anecdotally, by manually examining some examples we
found that VG-HuBERT's CLS token tends to ignore more words in the later
stages of training. This might be because the model is starting to ignore non-
salient words in order to produce semantic representations that are more
discriminative in terms of retrieval performance. Notably, as we can see in
Figure 1, syllabic information for the entire utterance tends to persist in
the model's representations even when some segments are ignored by the CLS
token's attention.
### 4.4 Where in the model do syllables and words emerge?
We next perform a layer-wise study to show how visual grounding helps the
emergence of syllables and words, and the interplay between the discovery of
different linguistic units. Figure 3 compares VG-HuBERT to HuBERT for syllabic
segmentation, and also shows VG-HuBERT's word segmentation on the SpokenCOCO
validation set. HuBERT performs quite evenly across all layers, while syllabic
segmentation is best in VG-HuBERT's mid to late layers, and VG-HuBERT's word
segmentation ability is concentrated in the final few layers. We also fine-
tuned HuBERT on the SpokenCOCO utterances using its original self-supervised
loss to mitigate the potential domain gap, but did not see any improvement in
syllabic segmentation (see the first two rows in Table 1). We see a `division of
labor' between different layers in VG-HuBERT, with middle layers performing
best in syllabic segmentation while the last three layers specialize in word
segmentation. In addition, we note that the best syllabic segmentation layer
(layer $9$) is right before the best word segmentation layer (layer $10$),
indicating that the attention heads may be learning to string syllables
together into words. We leave a more in-depth investigation of this phenomenon
for future work.
Figure 3: Layer-wise performance of VG-HuBERT on syllable and word
segmentation, and HuBERT on syllabic segmentation on SpokenCOCO val set.
HuBERT word segmentation gives very poor results [7] and therefore is not
shown.
### 4.5 Syllable discovery on English
Table 1 compares VG-HuBERT with other models for syllable discovery on the
SpokenCOCO test set. We see that HuBERT performs the worst on this dataset,
whether or not it is fine-tuned on SpokenCOCO.
$\text{VG-HuBERT}_{\text{cls}}$, the CLS token's attention-based segmentation
method that has been shown to achieve SotA on word segmentation [7], gives
high precision and low recall on this syllabic segmentation task, as expected.
In terms of syllable detection, we see that $\text{VG-HuBERT}_{\text{cls}}$
can detect more than $700$ syllables with a high cluster purity. Considering
the high cluster purity and low boundary recall of $\text{VG-
HuBERT}_{\text{cls}}$, we conclude that this approach is able to discover a
smaller number of syllables, but is highly confident of the ones that it does
discover. Oscillator [17] is a signal processing-based syllabic segmentation
algorithm that achieves SotA for unsupervised syllabic segmentation on
multiple languages, including English. Oscillator performs reasonably well on
this dataset, only lagging behind our approach on segmentation. Our $\text{VG-
HuBERT}_{\text{featSSM}}$ model achieves the best performance in both syllabic
segmentation (best F1 and R-val) and clustering (best DS).
Table 1: Syllabic segmentation performance of different models on SpokenCOCO
test set. DS denotes detected syllables.
Model | Prec. | Rec. | F1 | R-val. | Purity | DS
---|---|---|---|---|---|---
HuBERT ft. [2] | 43.8 | 49.4 | 46.4 | 51.5 | 29.0 | 519
HuBERT [2] | 43.8 | 46.5 | 45.1 | 52.0 | 30.1 | 522
$\text{VG-HuBERT}_{\text{cls}}$ [7] | 58.7 | 37.1 | 45.5 | 54.3 | 66.1 | 751
Oscillator [17] | 52.0 | 64.6 | 57.6 | 57.4 | - | -
$\text{VG-HuBERT}_{\text{featSSM}}$ | 57.4 | 63.6 | 60.3 | 64.3 | 45.8 | 902
### 4.6 Zero-shot syllabic segmentation on Estonian
Syllables are strongly correlated with speech intensity and voicing, and are
organized around sonorant speech sounds [17]. This suggests that a syllable
detection model trained on one language may be able to generalize to other
languages. We thus evaluate our English-trained models on a non-English
language, namely Estonian. We use the same five-hour subset and evaluation
pipeline as [17]. Table 2 lists the results. We see that compared to other
methods including the Oscillator, our VG-HuBERT performs the best in both F1
and R-val metrics, indicating that its syllabic segmentation ability is at
least somewhat language-agnostic.
Table 2: Syllabic segmentation on the Estonian corpus.
Approach | Prec. | Rec. | F1 | R-val.
---|---|---|---|---
$\text{VG-HuBERT}_{\text{cls}}$ [7] | 56 | 77 | 65 | 57
HuBERT [2] | 64 | 75 | 69 | 70
WN [17] | 77 | 62 | 69 | 72
EnvMin [52] | 67 | 71 | 69 | 73
Vseg [53] | 82 | 63 | 71 | 73
Oscillator [17] | 71 | 78 | 74 | 77
Oscillator (our reprod.) | 72 | 78 | 75 | 78
$\text{VG-HuBERT}_{\text{featSSM}}$ | 77 | 80 | 79 | 82
### 4.7 Zero-shot word segmentation on unseen languages
Lastly, we ask the question: if VG-HuBERT's CLS token detects words in
English, what does it do for a language it has not seen during training? To
investigate the CLS token's behavior on languages unseen during training, we
first visualize the CLS attention for Estonian and Mandarin utterances in
Figure 4. We see that, anecdotally, the CLS attention appears to be performing
syllabic segmentation, but it sometimes also connects adjacent syllables
together. In some cases, the connections give invalid words - in Figure 4, for
Estonian (the upper figure), `h_ve' and `i' are connected, but the result is
not a valid word; for Mandarin, `必须分' is connected (in the middle figure),
and the result is also not a valid word. However, in some other cases, the
connections happen to give valid words - in the two Mandarin examples in
Figure 4, `历史' and `不知' got connected, and they are valid words.
Based on the observation that the CLS token produces a mixture of
monosyllabic and multisyllabic segmentation, we test
$\text{VG-HuBERT}_{\text{cls}}$ for word segmentation on the Zerospeech
challenge. In Table 3, we see that VG-HuBERT achieves SotA performance on
three out of five languages, despite only being trained on English.
Interestingly, VG-HuBERT performs very differently on Mandarin and Wolof.
While this could be due to hyperparameter settings (we use the same
hyperparameters for all languages), we are unable to verify this because the
Wolof transcripts are not publicly available.
Figure 4: Visualizations of VG-HuBERT's CLS attention on unseen languages -
Estonian and Mandarin. Thin dashed lines denote syllable boundaries; thick
vertical lines denote word boundaries. Word boundaries are also syllable
boundaries.
Table 3: Word segmentation performance on the Zerospeech Challenge. Token F1
is a stricter metric than boundary F1: a word is counted as a hit only when
both its start and end boundaries are successfully predicted.
Approach | Mand. | French | Engl. | German | Wolof
---|---|---|---|---|---
PDTW [54] | 4.4 | 5.1 | 4.1 | 2.9 | 4.2
ES-KMeans [18] | 8.1 | 6.3 | 19.2 | 14.5 | 10.9
SEA [55] | 12.1 | 6.3 | 6.6 | 6.3 | 12.6
DP-Parse [22] | 16.0 | 15.3 | 21.9 | 13.4 | 17.5
DPDP [23] | 26.3 | 12.2 | 19.2 | 9.0 | 15.0
$\text{VG-HuBERT}_{\text{cls}}$ | 19.5 | 15.5 | 26.6 | 15.8 | 7.1
## 5 Concluding Discussion
In this paper, we demonstrated that the VG-HuBERT visually-grounded speech
model exhibits emergent syllable recognition behavior. We proposed the use of
a minimum cut algorithm to automatically extract syllable boundaries from the
model's learned representations, and showed that this segmentation ability
could transfer to Estonian speech even though the model was only trained on
English. Furthermore, we demonstrated that the emergent word discovery ability
that is also present in the model could be applied in a zero-shot transfer
fashion to segment words in non-English languages, achieving state-of-the-art
segmentation performance for several languages in the Zerospeech Challenge
benchmark. In our future work, we plan to apply our syllable discovery method
to tokenize speech waveforms and use these tokenizations in various textless
speech processing tasks such as spoken language modeling and speech-to-speech
translation, as well as unsupervised speech recognition.
## References
* [1] A. Baevski, Y. Zhou, A. Mohamed, and M. Auli, ``wav2vec 2.0: A framework for self-supervised learning of speech representations,'' in _NeurIPS_ , 2020.
* [2] W.-N. Hsu _et al._ , ``Hubert: Self-supervised speech representation learning by masked prediction of hidden units,'' _TASLP_ , 2021.
* [3] Y.-A. Chung _et al._ , ``w2v-bert: Combining contrastive learning and masked language modeling for self-supervised speech pre-training,'' _ASRU_ , 2021.
* [4] S. Chen _et al._ , ``Wavlm: Large-scale self-supervised pre-training for full stack speech processing,'' _JSTSP_ , 2021.
* [5] A. Mohamed _et al._ , ``Self-supervised speech representation learning: A review,'' _JSTSP_ , 2022.
* [6] D. Harwath, W. Hsu, and J. R. Glass, ``Learning hierarchical discrete linguistic units from visually-grounded speech,'' in _ICLR_ , 2020.
* [7] P. Peng and D. F. Harwath, ``Word discovery in visually grounded, self-supervised speech models,'' in _Interspeech_ , 2022.
* [8] K. Lakhotia _et al._ , ``On generative spoken language modeling from raw audio,'' _TACL_ , 2021.
* [9] X. Li, Y. Jia, and C.-C. Chiu, ``Textless direct speech-to-speech translation with discrete speech representation,'' _ArXiv preprint_ , 2022.
* [10] T. Nguyen _et al._ , ``Generative spoken dialogue language modeling,'' _ArXiv_ , 2022.
* [11] G.-T. Lin _et al._ , ``Dual: Textless spoken question answering with speech discrete unit adaptive learning,'' _ArXiv preprint_ , 2022.
* [12] A. Pasad, B. Shi, and K. Livescu, ``Comparative layer-wise analysis of self-supervised speech models,'' _ArXiv preprint_ , 2022.
* [13] E. Dunbar, N. Hamilakis, and E. Dupoux, ``Self-supervised language learning from raw audio: Lessons from the zero resource speech challenge,'' _JSTSP_ , 2022.
* [14] A. S. Park and J. R. Glass, ``Unsupervised pattern discovery in speech,'' _TASLP_ , no. 1, 2008.
* [15] C.-y. Lee, T. J. O'Donnell, and J. Glass, ``Unsupervised lexicon discovery from acoustic input,'' _TACL_ , 2015.
* [16] T. Taniguchi, S. Nagasaka, and R. Nakashima, ``Nonparametric bayesian double articulation analyzer for direct language acquisition from continuous speech signals,'' _TCDS_ , 2015.
* [17] O. Räsänen, G. Doyle, and M. C. Frank, ``Pre-linguistic segmentation of speech into syllable-like units,'' _Cognition_ , 2018.
* [18] H. Kamper, K. Livescu, and S. Goldwater, ``An embedded segmental k-means model for unsupervised segmentation and clustering of speech,'' _ASRU_ , 2017.
* [19] Y. Wang, H. Lee, and L. Lee, ``Segmental audio word2vec: Representing utterances as sequences of vectors with applications in spoken term detection,'' in _ICASSP_ , 2018.
* [20] S. Bhati _et al._ , ``Segmental contrastive predictive coding for unsupervised word segmentation,'' in _Interspeech_ , 2021.
* [21] S. Cuervo _et al._ , ``Contrastive prediction strategies for unsupervised segmentation and categorization of phonemes and words,'' _ICASSP_ , 2022.
* [22] R. Algayres _et al._ , ``Dp-parse: Finding word boundaries from raw speech with an instance lexicon,'' _TACL_ , 2022.
* [23] H. Kamper, ``Word segmentation on discovered phone units with dynamic programming and self-supervised scoring,'' _TASLP_ , 2022.
* [24] G. Chrupała, ``Visually grounded models of spoken language: A survey of datasets, architectures and evaluation techniques,'' _J. Artif. Intell. Res._ , 2021.
* [25] S. Gabriel, V. Maarten, and E. Dupoux, ``Learning words from images and speech,'' in _NeurIPS Workshop on Learning Semantics_ , 2014.
* [26] D. F. Harwath, A. Torralba, and J. R. Glass, ``Unsupervised learning of spoken language with visual context,'' in _NeurIPS_ , 2016.
* [27] A. Rouditchenko _et al._ , ``Avlnet: Learning audio-visual language representations from instructional videos,'' in _Interspeech_ , 2021.
* [28] M. Nikolaus, A. Alishahi, and G. Chrupała, ``Learning english with peppa pig,'' _TACL_ , vol. 10, pp. 922–936, 2022.
* [29] H. Kamper, G. Shakhnarovich, and K. Livescu, ``Semantic speech retrieval with a visually grounded model of untranscribed speech,'' _TASLP_ , vol. 27, pp. 89–98, 2017.
* [30] R. Sanabria, A. Waters, and J. Baldridge, ``Talk, don't write: A study of direct speech-based image retrieval,'' in _Interspeech_ , 2021.
* [31] P. Peng and D. Harwath, ``Fast-slow transformer for visually grounding speech,'' in _ICASSP_ , 2022.
* [32] Y.-J. Shih _et al._ , ``Speechclip: Integrating speech with pre-trained vision and language model,'' _2022 IEEE Spoken Language Technology Workshop (SLT)_ , pp. 715–722, 2022.
* [33] K. Khorrami and O. J. Räsänen, ``Evaluation of audio-visual alignments in visually grounded speech models,'' in _Interspeech_ , 2021.
* [34] D. F. Harwath, A. Recasens, D. Surís, G. Chuang, A. Torralba, and J. R. Glass, ``Jointly discovering visual objects and spoken words from raw sensory input,'' _IJCV_ , vol. 128, pp. 620–641, 2018.
* [35] K. Olaleye, ``Visually grounded keyword detection and localisation for low-resource languages,'' _ArXiv_ , vol. abs/2302.00765, 2023.
* [36] H. Kamper, A. Anastassiou, and K. Livescu, ``Semantic query-by-example speech search using visual grounding,'' _ICASSP_ , pp. 7120–7124, 2019.
* [37] S. Yang _et al._ , ``SUPERB: speech processing universal performance benchmark,'' in _Interspeech_ , 2021.
* [38] P. Peng and D. Harwath, ``Self-supervised representation learning for speech using visual grounding and masked language modeling,'' in _SAS@AAAI_ , 2022.
* [39] A. Alishahi, M. Barking, and G. Chrupała, ``Encoding of phonology in a recurrent neural model of grounded speech,'' _ArXiv_ , vol. abs/1706.03815, 2017.
* [40] O. J. Räsänen and K. Khorrami, ``A computational model of early language acquisition from audiovisual experiences of young infants,'' in _Interspeech_ , 2019.
* [41] W. N. Havard, J.-P. Chevrot, and L. Besacier, ``Models of visually grounded speech signal pay attention to nouns: A bilingual experiment on english and japanese,'' _ICASSP_ , pp. 8618–8622, 2019.
* [42] ——, ``Word recognition, competition, and activation in a model of visually grounded speech,'' in _Conference on Computational Natural Language Learning_ , 2019.
* [43] K. Khorrami and O. J. Räsänen, ``Can phones, syllables, and words emerge as side-products of cross-situational audiovisual learning? - a computational investigation,'' _Language Dev. Research_ , 2021.
* [44] O. Russakovsky _et al._ , ``Imagenet large scale visual recognition challenge,'' _IJCV_ , vol. 115, pp. 211–252, 2014.
* [45] J. Shi and J. Malik, ``Normalized cuts and image segmentation,'' _CVPR_ , 1997.
* [46] I. Malioutov and R. Barzilay, ``Minimum cut model for spoken lecture segmentation,'' in _ACL_ , 2006.
* [47] D. F. Harwath and T. J. Hazen, ``Topic identification based extrinsic evaluation of summarization techniques applied to conversational speech,'' _ICASSP_ , 2012.
* [48] W.-N. Hsu _et al._ , ``Text-free image-to-speech synthesis using learned segmental units,'' in _ACL_ , 2021.
* [49] T.-Y. Lin _et al._ , ``Microsoft coco: Common objects in context,'' in _ECCV_ , 2014.
* [50] P. Lippus _et al._ , ``Phonetic corpus of estonian spontaneous speech,'' _Institute of Estonian and General Linguistics, University of Tartu. DOI: https://doi.org/10.15155/TY.D_ , 2013.
* [51] O. Räsänen, U. K. Laine, and T. Altosaar, ``An improved speech segmentation quality measure: the r-value,'' in _Interspeech_ , 2009.
* [52] D. Wang and S. S. Narayanan, ``Robust speech rate estimation for spontaneous speech,'' _TASLP_ , 2007.
* [53] R. Villing, J. Timoney, and T. E. Ward, ``Automatic blind syllable segmentation for continuous speech,'' 2004.
* [54] O. Räsänen and M. A. C. Blandón, ``Unsupervised discovery of recurring speech patterns using probabilistic adaptive metrics,'' in _Interspeech_ , 2020.
* [55] S. Bhati _et al._ , ``Self-expressing autoencoders for unsupervised spoken term discovery,'' in _Interspeech_ , 2020.
# On the linear space of the two-sided generalized Fibonacci sequences
Martin Bunder
School of Mathematics and Applied Statistics
University of Wollongong Australia
<EMAIL_ADDRESS>
Joseph Tonien
School of Computing and Information Technology
University of Wollongong Australia
<EMAIL_ADDRESS>
###### Abstract
In this paper, we study the linear space of all two-sided generalized
Fibonacci sequences $\\{F_{n}\\}_{n\in\mathbb{Z}}$ that satisfy the recurrence
equation of order $k$: $F_{n}=F_{n-1}+F_{n-2}+\dots+F_{n-k}$. We give two
types of explicit formulas: one based on generalized binomial coefficients,
the other on generalized multinomial coefficients.
AMS Classification Numbers: 11B37, 11B39, 47B37
Keywords: generalized Fibonacci sequence, generalized binomial, generalized
multinomial.
## 1 Introduction
The Fibonacci sequence, $F_{0}=0$, $F_{1}=1$, $F_{n}=F_{n-1}+F_{n-2}$, has
been generalized in many ways. One of the generalizations [12, 5, 17] is to
change the recurrence equation to $F_{n}=\alpha F_{n-1}+\beta F_{n-2}$, thus
keeping the characteristic equation at order 2. Another common
generalization is to extend the recurrence equation to a higher order. For a
fixed integer $k\geq 2$, a sequence is called a Fibonacci sequence of order
$k$ if it satisfies the following recurrence equation
$F_{n}=F_{n-1}+F_{n-2}+\dots+F_{n-k}.$ (1)
For some particular values of $k$, the sequence has a special name. It is
called a tribonacci sequence, a tetranacci sequence and a pentanacci sequence
for $k=3,4,5$, respectively.
A Fibonacci sequence of order $k$ is uniquely determined by a list of values
of $k$ consecutive terms. For instance, if the values of
$F_{0},F_{1},\dots,F_{k-1}$ are given then using the recurrence equation (1),
we can work out the values of all other terms $F_{n}$ for $n\geq k$, as well
as for negative indices $n<0$. Here is an example of a Fibonacci sequence of
order 5:
$\displaystyle\dots,F_{-4}=-2,F_{-3}=7,F_{-2}=-3,F_{-1}=-4,$ $\displaystyle
F_{0}={\bf 3},F_{1}={\bf 1},F_{2}={\bf 4},F_{3}={\bf 1},F_{4}={\bf
5},F_{5}=14,F_{6}=25,\dots\quad.$
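As a concrete illustration, here is a short sketch that reproduces this
example by extending a sequence in both directions from its initial values,
using the recurrence forward and its rearrangement
$F_{n-k}=F_{n}-F_{n-1}-\dots-F_{n-k+1}$ backward:

```python
def fib_two_sided(init, lo, hi):
    """Values F_lo..F_hi of the order-k sequence with initial values
    init = (F_0, ..., F_{k-1}), extended in both directions."""
    k = len(init)
    F = dict(enumerate(init))
    for n in range(k, hi + 1):       # forward: F_n = F_{n-1} + ... + F_{n-k}
        F[n] = sum(F[n - i] for i in range(1, k + 1))
    for n in range(-1, lo - 1, -1):  # backward: F_n = F_{n+k} - F_{n+k-1} - ... - F_{n+1}
        F[n] = F[n + k] - sum(F[n + i] for i in range(1, k))
    return [F[n] for n in range(lo, hi + 1)]

print(fib_two_sided([3, 1, 4, 1, 5], -4, 6))
# [-2, 7, -3, -4, 3, 1, 4, 1, 5, 14, 25], matching the example above
```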
Since we have $F_{0}=0$ and $F_{1}=1$ in the original Fibonacci sequence,
there are two common ways to set the initial conditions: (i)
$F_{0}=F_{1}=\dots=F_{k-2}=0$, $F_{k-1}=1$ as in [18, 9, 19, 13, 4, 6]; or
(ii) $F_{0}=0$, $F_{1}=\dots=F_{k-2}=F_{k-1}=1$ as in [14, 21, 3]. Another
initial condition, $F_{0}=F_{1}=\dots=F_{k-1}=1$, appears in Ferguson [8],
arising in the study of polyphase merge-sorting. Various formulas have been
found for Fibonacci sequences with these three initial conditions, and they
can be grouped into three types: Binet formulas [7, 13], binomial coefficients
[8, 1] and multinomial coefficients [18, 13]. We note that these formulas for
$F_{n}$ are restricted to the integer indices $n\geq 0$. The Binet type of
formula is algebraic in nature and remains valid when we extend to negative
indices $n<0$. However, formulas involving binomial coefficients and
multinomial coefficients are limited to non-negative indices, and it is not
trivial to extend them to negative indices.
While most authors only consider sequences $F_{n}$ with $n\geq 0$, in this
paper we study two-sided sequences. These are sequences $\\{F_{n}\\}$ where
the index $n\in\mathbb{Z}$, that is, we allow $n$ to be a negative integer.
Instead of looking for an explicit formula for a Fibonacci sequence with a
particular initial condition, our aim is to find explicit formulas for a
general Fibonacci sequence with an arbitrary initial condition
$(F_{0},F_{1},\dots,F_{k-1})$. To do that, we consider the set of all
Fibonacci sequences of order $k$. This forms a $k$-dimensional linear space.
We will study the standard basis of this linear space, denoted by
$B^{(0)},B^{(1)},\dots,B^{(k-1)}$. For $0\leq j\leq k-1$, each $B^{(j)}$ is
the Fibonacci sequence whose initial values are all zero except
$B^{(j)}_{j}=1$. We will find explicit formulas for the basis sequences
$B^{(0)},B^{(1)},\dots,B^{(k-1)}$, and thus any Fibonacci sequence $F$ can be
determined by the linear combination
$F=F_{0}B^{(0)}+F_{1}B^{(1)}+\dots+F_{k-1}B^{(k-1)}$.
Our aim is to find explicit formulas for two-sided Fibonacci sequences that
are expressed in terms of binomial coefficients and multinomial coefficients,
respectively. Since the classical binomial coefficients and multinomial
coefficients are only associated with non-negative integers, to use these for
our two-sided sequences we need to extend the binomial notation and
multinomial notation to include negative integers. To this end, we extend the
binomial notation ${n\choose i}$ to negative values of $n$ and $i$, writing
this as $\left\langle{n\choose i}\right\rangle$. Subject to the two
conditions $\left\langle{n\choose n}\right\rangle=1$ and
$\left\langle{{n-1}\choose{i}}\right\rangle+\left\langle{{n-1}\choose{i-1}}\right\rangle=\left\langle{{n}\choose{i}}\right\rangle$
(the latter is called the Pascal recursion equation), the value of the
generalized binomial notation is uniquely determined. In Theorem 7, we will
show that
$\displaystyle B_{n}^{(j)}=$
$\displaystyle-\sum_{i\in\mathbb{Z}}{(-1)^{i}\left\langle{{n-ik}\choose{i-1}}\right\rangle
2^{n+1-i(k+1)}}$
$\displaystyle+\sum_{i\in\mathbb{Z}}{(-1)^{i}\left\langle{{n-j-1-ik}\choose{i-1}}\right\rangle
2^{n-j-i(k+1)}}\mbox{ for all }n\in\mathbb{Z}.$
We extend the multinomial notation ${n\choose{i_{1},i_{2},\dots,i_{t}}}$ to
negative values of $n$ and $i_{1},\dots,i_{t}$, writing this as
$\left\langle{n\choose{i_{1},i_{2},\dots,i_{t}}}\right\rangle$. The
generalization is done as follows.
Using the generalized binomial notation we extend the traditional multinomial
notation
${n\choose{i_{1},i_{2},\dots,i_{t}}}={n\choose{i_{2}+\dots+i_{t}}}{{i_{2}+\dots+i_{t}}\choose{i_{3}+\dots+i_{t}}}\dots{{i_{t-2}+i_{t-1}+i_{t}}\choose{i_{t-1}+i_{t}}}{{i_{t-1}+i_{t}}\choose{i_{t}}},$
to
$\displaystyle\left\langle{{n}\choose{i_{1},i_{2},\dots,i_{t}}}\right\rangle=\left\langle{{n}\choose{i_{2}+\dots+i_{t}}}\right\rangle\left\langle{{i_{2}+\dots+i_{t}}\choose{i_{3}+\dots+i_{t}}}\right\rangle\dots\left\langle{{i_{t-2}+i_{t-1}+i_{t}}\choose{i_{t-1}+i_{t}}}\right\rangle\left\langle{{i_{t-1}+i_{t}}\choose{i_{t}}}\right\rangle.$
Using this generalized multinomial notation, in Theorem 12, we will show that
$B_{n}^{(j)}=\sum_{n-k-j\leq a_{1}+2a_{2}+\dots+ka_{k}\leq
n-k}{\left\langle{{a_{1}+a_{2}+\dots+a_{k}}\choose{a_{1},a_{2},\dots,a_{k}}}\right\rangle},\mbox{
for all }n\in\mathbb{Z}.$
The rest of the paper is organised as follows. In section 2, we study the
linear space of Fibonacci sequences of order $k$ in general, especially
looking at the linear automorphisms of this space. Formulas based on the
generalized binomial notation are derived in section 3. Formulas based on the
generalized multinomial notation are derived in section 4. Finally, in section
5, we remark on how the generalized Fibonacci sequences are related to a
tiling problem.
## 2 The Fibonacci linear space of order $k$
###### Definition 1.
Let $k\geq 2$ be a fixed integer. A sequence $\\{F_{n}\\}_{n\in{\mathbb{Z}}}$
is called a Fibonacci sequence of order $k$ if it satisfies the following
recurrence equation
$F_{n}=F_{n-1}+F_{n-2}+\dots+F_{n-k},\mbox{ for all }n\in\mathbb{Z}.$ (2)
We can see that, given $k$ values $(F_{0},F_{1},\dots,F_{k-1})$, then using
the Fibonacci recurrence equation (2), all other values $F_{n}$ for
$n\in\mathbb{Z}$ are determined uniquely. We will refer to
$(F_{0},F_{1},\dots,F_{k-1})$ as the initial values of the sequence. The set
of all Fibonacci sequences of order $k$ forms a $k$-dimensional vector space
(either over the field $\mathbb{R}$ or $\mathbb{C}$). We will use
$\mathsf{Fibonacci}^{(k)}$ to denote this vector space of all Fibonacci
sequences of order $k$. We now define the standard basis for the Fibonacci
vector space $\mathsf{Fibonacci}^{(k)}$.
###### Definition 2.
Let $k\geq 2$ be a fixed integer. For each integer $0\leq j\leq k-1$, the
sequence $B^{(j)}\in\mathsf{Fibonacci}^{(k)}$ is defined by the initial values
$B^{(j)}_{n}=\begin{cases}0,&\mbox{ if }0\leq n\leq k-1\mbox{ and }n\neq j\\\
1,&\mbox{ if }n=j.\end{cases}$
The special sequences $B^{(0)},B^{(1)},\dots,B^{(k-1)}$ defined above form a
standard basis for the space $\mathsf{Fibonacci}^{(k)}$. Any member of this
Fibonacci vector space is a linear combination of the standard basis and we
have the following theorem.
###### Theorem 1.
Let $k\geq 2$ be a fixed integer. Let $\\{F_{n}\\}_{n\in{\mathbb{Z}}}$ be a
Fibonacci sequence of order $k$. Then
$F_{n}=\sum_{j=0}^{k-1}{B^{(j)}_{n}F_{j}}\mbox{ for all }n\in\mathbb{Z}.$
By Theorem 1, we can see that in order to determine an explicit formula for
any Fibonacci sequence $\\{F_{n}\\}_{n\in{\mathbb{Z}}}$, it suffices to derive
formulas for the $k$ basis sequences $B^{(0)},B^{(1)},\dots,B^{(k-1)}$.
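As a quick numerical sanity check of Theorem 1, the following sketch compares
$F_{n}$ with the linear combination of basis values for the order-5 example
from the introduction:

```python
def fib_val(init, n):
    """F_n (any n in Z) for the order-k sequence with initial values init."""
    k, F = len(init), dict(enumerate(init))
    for m in range(k, n + 1):        # forward recurrence
        F[m] = sum(F[m - i] for i in range(1, k + 1))
    for m in range(-1, n - 1, -1):   # backward recurrence
        F[m] = F[m + k] - sum(F[m + i] for i in range(1, k))
    return F[n]

k = 5
basis = [[1 if i == j else 0 for i in range(k)] for j in range(k)]  # B^{(j)} initial values
F0 = [3, 1, 4, 1, 5]
for n in range(-6, 10):
    assert fib_val(F0, n) == sum(fib_val(basis[j], n) * F0[j] for j in range(k))
```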
### 2.1 Linear operators on the Fibonacci space
Here we list some standard linear operators on two-sided sequences.
* Identity operator $\mathtt{I}$.
* Left shift operator $\mathtt{L}$: $\mathtt{L}(X)=Y$ iff $Y_{n}=X_{n+1}$ for all $n\in{\mathbb{Z}}$.
* Right shift operator $\mathtt{R}$: $\mathtt{R}(X)=Y$ iff $Y_{n}=X_{n-1}$ for all $n\in{\mathbb{Z}}$. The left shift and the right shift are inverses of each other: $\mathtt{L}\mathtt{R}=\mathtt{R}\mathtt{L}=\mathtt{I}$.
* Forward difference operator $\Delta$: $\Delta(X)=Y$ iff $Y_{n}=X_{n+1}-X_{n}$ for all $n\in{\mathbb{Z}}$. Here $\Delta=\mathtt{L}-\mathtt{I}$.
* Backward difference operator $\nabla$: $\nabla(X)=Y$ iff $Y_{n}=X_{n}-X_{n-1}$ for all $n\in{\mathbb{Z}}$. Here $\nabla=\mathtt{I}-\mathtt{R}=\mathtt{I}-\mathtt{L}^{-1}$, $\mathtt{L}\nabla=\Delta$ and $\mathtt{R}\Delta=\nabla$.
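A compact sketch of these operators, modeling a two-sided sequence as a
function $\mathbb{Z}\to\mathbb{R}$ (illustrative only):

```python
# two-sided sequences modeled as functions n -> X_n
I = lambda X: X
L = lambda X: (lambda n: X(n + 1))               # left shift
R = lambda X: (lambda n: X(n - 1))               # right shift
delta = lambda X: (lambda n: X(n + 1) - X(n))    # forward difference, L - I
nabla = lambda X: (lambda n: X(n) - X(n - 1))    # backward difference, I - R

# L and R invert each other: LR = RL = I
X = lambda n: n * n
assert all(L(R(X))(n) == X(n) == R(L(X))(n) for n in range(-5, 6))
```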
We have the following theorem concerning the above operators.
###### Theorem 2.
All operators $\mathtt{I}$, $\mathtt{L}$, $\mathtt{R}$, $\Delta$ and $\nabla$
when restricted to the space $\mathsf{Fibonacci}^{(k)}$ are linear
automorphisms $\mathsf{Fibonacci}^{(k)}\to\mathsf{Fibonacci}^{(k)}$ and
satisfy the following relations:
(i)
$\mathtt{L}^{k}=\mathtt{I}+\mathtt{L}+\mathtt{L}^{2}+\dots+\mathtt{L}^{k-1}.$
(ii)
$\mathtt{R}=\mathtt{L}^{-1}=-\mathtt{I}-\mathtt{L}-\mathtt{L}^{2}-\dots-\mathtt{L}^{k-2}+\mathtt{L}^{k-1}$
(iii)
$\mathtt{R}^{k}=\mathtt{I}-\mathtt{R}-\mathtt{R}^{2}-\dots-\mathtt{R}^{k-1}.$
(iv)
$\mathtt{L}=\mathtt{R}^{-1}=\mathtt{I}+\mathtt{R}+\mathtt{R}^{2}+\dots+\mathtt{R}^{k-1}.$
(v) $\mathtt{L}^{k+1}=2\mathtt{L}^{k}-\mathtt{I}.$
(vi) $\mathtt{R}^{k+1}=2\mathtt{R}-\mathtt{I}.$
(vii)
$\Delta(\mathtt{I}+(k-1)\mathtt{R}+(k-2)\mathtt{R}^{2}+(k-3)\mathtt{R}^{3}+\dots+2\mathtt{R}^{k-2}+\mathtt{R}^{k-1})=(k-1)\mathtt{I}.$
(viii)
$\nabla(k\mathtt{I}+(k-1)\mathtt{R}+(k-2)\mathtt{R}^{2}+\dots+2\mathtt{R}^{k-2}+\mathtt{R}^{k-1})=(k-1)\mathtt{I}.$
(ix) $\sum_{i=0}^{k}{{k+1}\choose{i+1}}\frac{k-1-2i}{k+1}\Delta^{i}=0.$
(x) $(k-1)\mathtt{I}+\sum_{i=1}^{k}{{k+1}\choose{i+1}}(-1)^{i}\nabla^{i}=0.$
Proof. It is easy to see that all these operators $\mathtt{I}$, $\mathtt{L}$,
$\mathtt{R}$, $\Delta$ and $\nabla$ are linear. Each maps a Fibonacci sequence
to another Fibonacci sequence. The bijectivity of $\mathtt{I}$, $\mathtt{L}$,
$\mathtt{R}$ is obvious, whereas, the bijectivity of $\Delta$ and $\nabla$
follows from (vii) and (viii), respectively.
(i) For any $X\in\mathsf{Fibonacci}^{(k)}$, let
$(\mathtt{I}+\mathtt{L}+\mathtt{L}^{2}+\dots+\mathtt{L}^{k-1})(X)=Y$ then
$Y_{n}=X_{n}+X_{n+1}+X_{n+2}+\dots+X_{n+k-1}=X_{n+k}$, therefore,
$Y=\mathtt{L}^{k}(X)$. This proves that, restricted to the linear space
$\mathsf{Fibonacci}^{(k)}$,
$\mathtt{I}+\mathtt{L}+\mathtt{L}^{2}+\dots+\mathtt{L}^{k-1}=\mathtt{L}^{k}$.
(ii) For any $X\in\mathsf{Fibonacci}^{(k)}$, let
$(-\mathtt{I}-\mathtt{L}-\mathtt{L}^{2}-\dots-\mathtt{L}^{k-2}+\mathtt{L}^{k-1})(X)=Y$
then $Y_{n}=-X_{n}-X_{n+1}-X_{n+2}-\dots-X_{n+k-2}+X_{n+k-1}=X_{n-1}$. Hence,
$Y=\mathtt{R}(X)$, and therefore,
$-\mathtt{I}-\mathtt{L}-\mathtt{L}^{2}-\dots-\mathtt{L}^{k-2}+\mathtt{L}^{k-1}=\mathtt{R}=\mathtt{L}^{-1}$.
(iii) For any $X\in\mathsf{Fibonacci}^{(k)}$, let
$(\mathtt{I}-\mathtt{R}-\mathtt{R}^{2}-\dots-\mathtt{R}^{k-1})(X)=Y$ then
$Y_{n}=X_{n}-X_{n-1}-X_{n-2}-\dots-X_{n-k+1}=X_{n-k}$. Hence,
$Y=\mathtt{R}^{k}(X)$, and therefore,
$\mathtt{I}-\mathtt{R}-\mathtt{R}^{2}-\dots-\mathtt{R}^{k-1}=\mathtt{R}^{k}.$
(iv) For any $X\in\mathsf{Fibonacci}^{(k)}$, let
$(\mathtt{I}+\mathtt{R}+\mathtt{R}^{2}+\dots+\mathtt{R}^{k-1})(X)=Y$ then
$Y_{n}=X_{n}+X_{n-1}+X_{n-2}+\dots+X_{n-k+1}=X_{n+1}$. Hence,
$Y=\mathtt{L}(X)$, and therefore,
$\mathtt{I}+\mathtt{R}+\mathtt{R}^{2}+\dots+\mathtt{R}^{k-1}=\mathtt{L}=\mathtt{R}^{-1}$.
(v) By (i),
$\mathtt{L}^{k+1}=\mathtt{L}\,\mathtt{L}^{k}=\mathtt{L}(\mathtt{I}+\mathtt{L}+\mathtt{L}^{2}+\dots+\mathtt{L}^{k-1})=\mathtt{L}+\mathtt{L}^{2}+\dots+\mathtt{L}^{k-1}+\mathtt{L}^{k}=(\mathtt{I}+\mathtt{L}+\mathtt{L}^{2}+\dots+\mathtt{L}^{k-1})+\mathtt{L}^{k}-\mathtt{I}=\mathtt{L}^{k}+\mathtt{L}^{k}-\mathtt{I}=2\mathtt{L}^{k}-\mathtt{I}$.
(vi) By (iii),
$\mathtt{R}^{k+1}=\mathtt{R}\mathtt{R}^{k}=\mathtt{R}(\mathtt{I}-\mathtt{R}-\mathtt{R}^{2}-\dots-\mathtt{R}^{k-1})=\mathtt{R}-\mathtt{R}^{2}-\mathtt{R}^{3}-\dots-\mathtt{R}^{k-1}-\mathtt{R}^{k}=\mathtt{R}-\mathtt{R}^{2}-\mathtt{R}^{3}-\dots-\mathtt{R}^{k-1}-(\mathtt{I}-\mathtt{R}-\mathtt{R}^{2}-\dots-\mathtt{R}^{k-1})=2\mathtt{R}-\mathtt{I}$.
(vii) We have
$\displaystyle\Delta(\mathtt{I}+(k-1)\mathtt{R}+(k-2)\mathtt{R}^{2}+(k-3)\mathtt{R}^{3}+\dots+2\mathtt{R}^{k-2}+\mathtt{R}^{k-1})$
$\displaystyle=(\mathtt{L}-\mathtt{I})(\mathtt{I}+(k-1)\mathtt{R}+(k-2)\mathtt{R}^{2}+(k-3)\mathtt{R}^{3}+\dots+2\mathtt{R}^{k-2}+\mathtt{R}^{k-1})$
$\displaystyle=\mathtt{L}+(k-2)\mathtt{I}-\mathtt{R}-\mathtt{R}^{2}-\dots-\mathtt{R}^{k-2}-\mathtt{R}^{k-1}$
$\displaystyle=(k-1)\mathtt{I}\quad\mbox{ by (iv).}$
(viii) We have
$\displaystyle\nabla(k\mathtt{I}+(k-1)\mathtt{R}+(k-2)\mathtt{R}^{2}+\dots+2\mathtt{R}^{k-2}+\mathtt{R}^{k-1})$
$\displaystyle=(\mathtt{I}-\mathtt{R})(k\mathtt{I}+(k-1)\mathtt{R}+(k-2)\mathtt{R}^{2}+\dots+2\mathtt{R}^{k-2}+\mathtt{R}^{k-1})$
$\displaystyle=k\mathtt{I}-\mathtt{R}-\mathtt{R}^{2}-\dots-\mathtt{R}^{k-1}-\mathtt{R}^{k}$
$\displaystyle=(k-1)\mathtt{I}\quad\mbox{ by (iii).}$
(ix) Substituting $\mathtt{L}=\mathtt{I}+\Delta$ into (i), we have
$\displaystyle(\mathtt{I}+\Delta)^{k}$
$\displaystyle=\mathtt{I}+(\mathtt{I}+\Delta)+(\mathtt{I}+\Delta)^{2}+\dots+(\mathtt{I}+\Delta)^{k-1}$
$\displaystyle\sum_{i=0}^{k}{k\choose i}\Delta^{i}$
$\displaystyle=\sum_{j=0}^{k-1}\sum_{i=0}^{j}{j\choose
i}\Delta^{i}=\sum_{i=0}^{k-1}\sum_{j=i}^{k-1}{j\choose
i}\Delta^{i}=\sum_{i=0}^{k-1}{k\choose{i+1}}\Delta^{i}.$
Therefore,
$\displaystyle\Delta^{k}$
$\displaystyle=\sum_{i=0}^{k-1}\left({k\choose{i+1}}-{k\choose
i}\right)\Delta^{i}=\sum_{i=0}^{k-1}{{k+1}\choose{i+1}}\frac{k-1-2i}{k+1}\Delta^{i}.$
(x) Substituting $\mathtt{R}=\mathtt{I}-\nabla$ into (iii), we have
$\displaystyle(\mathtt{I}-\nabla)^{k}$
$\displaystyle=\mathtt{I}-(\mathtt{I}-\nabla)-(\mathtt{I}-\nabla)^{2}-\dots-(\mathtt{I}-\nabla)^{k-1}.$
So
$\displaystyle\sum_{i=1}^{k}{k\choose i}(-\nabla)^{i}$
$\displaystyle=-\sum_{j=1}^{k-1}\sum_{i=0}^{j}{j\choose
i}(-\nabla)^{i}=-(k-1)\mathtt{I}-\sum_{i=1}^{k-1}\sum_{j=i}^{k-1}{j\choose
i}(-\nabla)^{i}$
$\displaystyle=-(k-1)\mathtt{I}-\sum_{i=1}^{k-1}{k\choose{i+1}}(-\nabla)^{i}.$
Therefore,
$\displaystyle(-\nabla)^{k}$
$\displaystyle=-(k-1)\mathtt{I}-\sum_{i=1}^{k-1}\left({k\choose{i+1}}+{k\choose
i}\right)(-\nabla)^{i}$
$\displaystyle=-(k-1)\mathtt{I}-\sum_{i=1}^{k-1}{{k+1}\choose{i+1}}(-\nabla)^{i}$
and
$\displaystyle\sum_{i=1}^{k}{{k+1}\choose{i+1}}(-\nabla)^{i}$
$\displaystyle=-(k-1)\mathtt{I}.\quad\blacksquare$
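The identities above lend themselves to a quick machine check. The following
minimal Python sketch verifies identities (i) and (v) of Theorem 2 on a sample
two-sided sequence; the order $k$, the index window and the random initial
values are illustrative assumptions, not part of the theorem.

```python
# A minimal sketch checking identities (i) and (v) of Theorem 2 on a randomly
# chosen k-order Fibonacci sequence; the window and initial values are
# illustrative assumptions.
import random

k, LO, HI = 4, -30, 30

def fib_extend(init):
    # two-sided k-order Fibonacci sequence from the initial values X_0..X_{k-1}
    X = dict(enumerate(init))
    for n in range(k, HI + 1):        # forward: X_n = X_{n-1} + ... + X_{n-k}
        X[n] = sum(X[n - j] for j in range(1, k + 1))
    for n in range(-1, LO - 1, -1):   # backward: X_n = X_{n+k} - X_{n+1} - ... - X_{n+k-1}
        X[n] = X[n + k] - sum(X[n + j] for j in range(1, k))
    return X

X = fib_extend([random.randint(-5, 5) for _ in range(k)])

# (i):  (I + L + ... + L^{k-1})(X) = L^k(X),  i.e.  X_n + ... + X_{n+k-1} = X_{n+k}
assert all(sum(X[n + i] for i in range(k)) == X[n + k]
           for n in range(LO, HI - k + 1))

# (v):  L^{k+1} = 2 L^k - I,  i.e.  X_{n+k+1} = 2 X_{n+k} - X_n
assert all(X[n + k + 1] == 2 * X[n + k] - X[n]
           for n in range(LO, HI - k))

print("Theorem 2 (i) and (v) hold on the sample sequence")
```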
###### Theorem 3.
Denote $S=B^{(0)}+B^{(1)}+\dots+B^{(k-1)}\in\mathsf{Fibonacci}^{(k)}$. We have
(i) $B^{(j)}-B^{(j-1)}=\mathtt{R}^{j}(B^{(0)})$ for all $1\leq j\leq k-1$.
(ii) $B^{(j)}=\sum_{i=0}^{j}\mathtt{R}^{i}(B^{(0)})$ for all $0\leq j\leq
k-1$.
(iii) $B^{(0)}=\mathtt{R}(B^{(k-1)})$ and $B^{(k-1)}=\mathtt{L}(B^{(0)})$.
(iv) $B^{(j)}=\sum_{i=0}^{j}\mathtt{R}^{i+1}(B^{(k-1)})$ for all $0\leq j\leq
k-1$.
(v)
$S=(k\,\mathtt{I}+(k-1)\,\mathtt{R}+(k-2)\,\mathtt{R}^{2}+\dots+\mathtt{R}^{k-1})(B^{(0)})$.
(vi) $\nabla(S)=(k-1)B^{(0)}$.
(vii) $(\mathtt{I}-\mathtt{R}^{j+1})(S)=(k-1)B^{(j)}$ for all $0\leq j\leq
k-1$.
Proof. (i) Both $B^{(j)}-B^{(j-1)}$ and $\mathtt{R}^{j}(B^{(0)})$ are members
of $\mathsf{Fibonacci}^{(k)}$ and their initial values are equal, therefore,
$B^{(j)}-B^{(j-1)}=\mathtt{R}^{j}(B^{(0)})$.
(ii) It follows from (i).
(iii) By (ii), $B^{(k-1)}=\sum_{i=0}^{k-1}\mathtt{R}^{i}(B^{(0)})$ and since
$\mathtt{L}=\mathtt{R}^{-1}=\mathtt{I}+\mathtt{R}+\mathtt{R}^{2}+\dots+\mathtt{R}^{k-1}$
(Theorem 2(iv)), we have $B^{(k-1)}=\mathtt{L}(B^{(0)})$ and so
$B^{(0)}=\mathtt{R}(B^{(k-1)})$.
(iv) It follows from (ii) and (iii).
(v) It follows from (ii).
(vi) It follows from (v) and Theorem 2(viii).
(vii) We have
$\displaystyle(k-1)B^{(j)}$
$\displaystyle=(k-1)\sum_{i=0}^{j}\mathtt{R}^{i}(B^{(0)})\quad\mbox{ by (ii)}$
$\displaystyle=\sum_{i=0}^{j}\mathtt{R}^{i}(\nabla(S))\quad\mbox{ by (vi)}$
$\displaystyle=\sum_{i=0}^{j}(\mathtt{R}^{i}(\mathtt{I}-\mathtt{R}))(S)=(\mathtt{I}-\mathtt{R}^{j+1})(S).$
Another direct way to prove (vii) is to observe that both $(k-1)B^{(j)}$ and
$(\mathtt{I}-\mathtt{R}^{j+1})(S)$ are members of $\mathsf{Fibonacci}^{(k)}$ and their
initial values are equal. $\blacksquare$
## 3 Explicit formulas based on binomials
In this section, we will derive explicit formulas for the two-sided Fibonacci
basis sequences $B^{(0)},B^{(1)},\dots,B^{(k-1)}$ expressed in terms of
binomial coefficients. Since the traditional binomial notation is defined for
non-negative integers, to use it for our two-sided sequences we first extend
the binomial notation ${n\choose i}$ to negative values of $n$ and $i$.
The binomial notation ${n\choose i}$ can be generalized to
$\left\langle{{n}\choose{i}}\right\rangle$ for all integers $n$ and $i$ by
enforcing two conditions:
* •
$\left\langle{{n}\choose{n}}\right\rangle=1$ for all $n\in\mathbb{Z}$; and
* •
Pascal Recursion relation
$\left\langle{{n-1}\choose{i}}\right\rangle+\left\langle{{n-1}\choose{i-1}}\right\rangle=\left\langle{{n}\choose{i}}\right\rangle.$
(3)
With these two conditions, $\left\langle{{n}\choose{i}}\right\rangle$ is
uniquely determined as
$\displaystyle\left\langle{{n}\choose{i}}\right\rangle$
$\displaystyle=\begin{cases}\frac{n^{\underline{n-i}}}{(n-i)!}=\frac{n(n-1)(n-2)\dots(i+1)}{(n-i)!},&\text{if
}n\geq i\\\ 0,&\text{otherwise}\end{cases}$ (4)
$\displaystyle=\begin{cases}{n\choose i},&\text{if }n\geq i\geq 0\\\
(-1)^{i+n}{{-i-1}\choose{-n-1}},&\text{if }-1\geq n\geq i\\\
0,&\text{otherwise}\end{cases}.$ (5)
Refer to [15, 16] for detailed discussion on various generalizations of
binomial notation. The following table shows some values of
$\left\langle{{n}\choose{i}}\right\rangle$:
$\left\langle{{n}\choose{i}}\right\rangle$ | $i=-6$ | $-5$ | $-4$ | $-3$ | $-2$ | $-1$ | $0$ | $1$ | $2$ | $3$ | $4$ | $5$ | $6$
---|---|---|---|---|---|---|---|---|---|---|---|---|---
$n=6$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $1$ | $6$ | $15$ | $20$ | $15$ | $6$ | $1$
$n=5$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $1$ | $5$ | $10$ | $10$ | $5$ | $1$ | $0$
$n=4$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $1$ | $4$ | $6$ | $4$ | $1$ | $0$ | $0$
$n=3$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $1$ | $3$ | $3$ | $1$ | $0$ | $0$ | $0$
$n=2$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $1$ | $2$ | $1$ | $0$ | $0$ | $0$ | $0$
$n=1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $1$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$
$n=0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$n=-1$ | $-1$ | $1$ | $-1$ | $1$ | $-1$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$n=-2$ | $5$ | $-4$ | $3$ | $-2$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$n=-3$ | $-10$ | $6$ | $-3$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$n=-4$ | $10$ | $-4$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$n=-5$ | $-5$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
$n=-6$ | $1$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$ | $0$
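For the reader who wants to experiment, here is a short Python sketch of the
generalized binomial following Eq. (5); `math.comb` is the ordinary binomial
coefficient, and the printed rows can be compared against the table above.

```python
# A short sketch of the generalized binomial of Eq. (5); math.comb is the
# ordinary binomial coefficient.
from math import comb

def gbinom(n, i):
    if n >= i >= 0:
        return comb(n, i)
    if -1 >= n >= i:
        return (-1) ** ((i + n) % 2) * comb(-i - 1, -n - 1)
    return 0

# reproduce two rows of the table above
print([gbinom(-2, i) for i in range(-6, 7)])  # 5, -4, 3, -2, 1, 0, 0, ...
print([gbinom(4, i) for i in range(-6, 7)])   # ..., 0, 1, 4, 6, 4, 1, 0, 0

# spot-check the Pascal Recursion relation (3) on a grid of integers
assert all(gbinom(n - 1, i) + gbinom(n - 1, i - 1) == gbinom(n, i)
           for n in range(-8, 9) for i in range(-8, 9))
```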
In the following theorem, we define an auxiliary sequence
$\\{A_{n}\\}_{n\in\mathbb{Z}}$ which will be useful in the sequel. Note that
this sequence is not a member of the linear space $\mathsf{Fibonacci}^{(k)}$.
The proof of the theorem is a consequence of the Pascal Recursion relation
(3).
###### Theorem 4.
Let $k\geq 2$ and let the sequence $\\{A_{n}\\}$ be defined as
$A_{n}=\sum_{i\in\mathbb{Z}}{(-1)^{i}\left\langle{{n-ik}\choose{i-1}}\right\rangle
2^{n+1-i(k+1)}}\mbox{ for all }n\in\mathbb{Z}.$ (6)
Then $A_{0}=A_{1}=A_{2}=\dots=A_{k-1}=0$,
$A_{n}=A_{n-1}+A_{n-2}+\dots+A_{n-k}-1$ and $A_{n}=2A_{n-1}-A_{n-k-1}$.
Proof. Note that the above summation in the formula of $A_{n}$ only has a
finite number of non-zero terms. This is because
$\left\langle{{n-ik}\choose{i-1}}\right\rangle=0$ except for $1\leq
i\leq\frac{n+1}{k+1}$ when $n\geq 0$ and $\frac{n+1}{k}\leq
i\leq\frac{n+1}{k+1}$ for $n\leq-1$. It follows that
$A_{0}=A_{1}=A_{2}=\dots=A_{k-1}=0$ and $A_{k}=-1$.
We have
$\displaystyle 2A_{n-1}-A_{n-k-1}=$ $\displaystyle
2\sum{(-1)^{i}\left\langle{{n-1-ik}\choose{i-1}}\right\rangle 2^{n-i(k+1)}}$
$\displaystyle-\sum{(-1)^{i}\left\langle{{n-k-1-ik}\choose{i-1}}\right\rangle
2^{n-k-i(k+1)}}$ $\displaystyle=$
$\displaystyle\sum{(-1)^{i}\left\langle{{n-1-ik}\choose{i-1}}\right\rangle
2^{n+1-i(k+1)}}$
$\displaystyle+\sum{(-1)^{i+1}\left\langle{{n-1-(i+1)k}\choose{i-1}}\right\rangle
2^{n+1-(i+1)(k+1)}}.$
Reindexing the last summation by $i:=i+1$, we have
$\displaystyle 2A_{n-1}-A_{n-k-1}=$
$\displaystyle\sum{(-1)^{i}\left\langle{{n-1-ik}\choose{i-1}}\right\rangle
2^{n+1-i(k+1)}}$
$\displaystyle+\sum{(-1)^{i}\left\langle{{n-1-ik}\choose{i-2}}\right\rangle
2^{n+1-i(k+1)}}$
and by the Pascal Recursion (3),
$\displaystyle 2A_{n-1}-A_{n-k-1}=$
$\displaystyle\sum{(-1)^{i}\left\langle{{n-ik}\choose{i-1}}\right\rangle
2^{n+1-i(k+1)}}$ $\displaystyle=$ $\displaystyle A_{n}.$
Therefore, $(\mathtt{R}^{k+1}-2\mathtt{R}+\mathtt{I})(A)=0$.
As
$\mathtt{R}^{k+1}-2\mathtt{R}+\mathtt{I}=(\mathtt{R}-\mathtt{I})(\mathtt{R}^{k}+\mathtt{R}^{k-1}+\dots+\mathtt{R}-\mathtt{I})$,
it follows that
$(\mathtt{R}^{k}+\mathtt{R}^{k-1}+\dots+\mathtt{R}-\mathtt{I})(A)$ is a
constant sequence, so
$A_{n-1}+A_{n-2}+\dots+A_{n-k}-A_{n}=A_{0}+A_{1}+\dots+A_{k-1}-A_{k}=1$.
$\blacksquare$
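As a sanity check of Theorem 4, the sum (6) can be evaluated directly. In the
sketch below, the generous range of the summation index is an implementation
convenience (all terms outside the ranges identified in the proof vanish), and
`gbinom` is the generalized binomial of Eq. (5).

```python
# A direct evaluation of the auxiliary sequence (6).
from math import comb

def gbinom(n, i):
    if n >= i >= 0:
        return comb(n, i)
    if -1 >= n >= i:
        return (-1) ** ((i + n) % 2) * comb(-i - 1, -n - 1)
    return 0

def A(n, k):
    total = 0
    for i in range(-abs(n) - 2, abs(n) + 3):   # generous range; extra terms vanish
        g = gbinom(n - i * k, i - 1)
        if g:   # non-zero terms always carry a non-negative power of 2
            total += (-1) ** (i % 2) * g * 2 ** (n + 1 - i * (k + 1))
    return total

k = 3
assert all(A(n, k) == 0 for n in range(k)) and A(k, k) == -1
assert all(A(n, k) == sum(A(n - j, k) for j in range(1, k + 1)) - 1
           for n in range(-10, 15))
assert all(A(n, k) == 2 * A(n - 1, k) - A(n - k - 1, k)
           for n in range(-10, 15))
```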
Recall that in Theorem 3 we defined the sequence
$S=B^{(0)}+B^{(1)}+\dots+B^{(k-1)}\in\mathsf{Fibonacci}^{(k)}$. The following
theorem gives an explicit formula for the sequence $S$.
###### Theorem 5.
Let $k\geq 2$. The $k$-order Fibonacci sequence $S$ (determined by the first
$k$ terms $(1,1,\dots,1)$) satisfies the following formula
$S_{n}=1-(k-1)\sum_{i\in\mathbb{Z}}{(-1)^{i}\left\langle{{n-ik}\choose{i-1}}\right\rangle
2^{n+1-i(k+1)}}\mbox{ for all }n\in\mathbb{Z}.$ (7)
Proof. Let $S_{n}^{\prime}$ denote the sequence on the RHS of (7) then
$S_{n}^{\prime}=1-(k-1)A_{n}$ where $\\{A_{n}\\}$ is the auxiliary sequence
defined in Theorem 4. It follows from Theorem 4 that
$S_{0}^{\prime}=S_{1}^{\prime}=\dots=S_{k-1}^{\prime}=1$, $S_{k}^{\prime}=k$
and $S^{\prime}_{n}=2S^{\prime}_{n-1}-S^{\prime}_{n-k-1}$. By Theorem 2(vi),
the sequence $S$ also satisfies the same recursion equation
$S_{n}=2S_{n-1}-S_{n-k-1}$. Since $S_{i}=S^{\prime}_{i}$ for all $0\leq i\leq
k$, it follows that $S_{i}=S^{\prime}_{i}$ for all $i\in\mathbb{Z}$.
$\blacksquare$
###### Theorem 6.
Let $k\geq 2$. The $k$-order Fibonacci sequence $S$ (determined by the first
$k$ terms $(1,1,\dots,1)$) satisfies the following formula
$S_{n}=1-(k-1)\sum_{1\leq
i\leq\frac{n+1}{k+1}}{(-1)^{i}{{n-ik}\choose{i-1}}2^{n+1-i(k+1)}}\mbox{ for
all }n\geq 0,$ (8) $S_{n}=1-(k-1)\sum_{\frac{n+1}{k}\leq
i\leq\frac{n+1}{k+1}}{(-1)^{i}\left\langle{{n-ik}\choose{i-1}}\right\rangle
2^{n+1-i(k+1)}}\mbox{ for all }n\leq-1.$ (9)
Proof. Since $\left\langle{{n-ik}\choose{i-1}}\right\rangle=0$ except for
$1\leq i\leq\frac{n+1}{k+1}$ when $n\geq 0$ and $\frac{n+1}{k}\leq
i\leq\frac{n+1}{k+1}$ for $n\leq-1$, the theorem follows from Theorem 5.
$\blacksquare$
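The following sketch compares the explicit formula (7) with the recursive
definition of $S$ on a window of indices; the order $k=4$ and the window are
illustrative choices.

```python
# Comparing the explicit formula (7) with the recursion for S (initial values 1,...,1).
from math import comb

def gbinom(n, i):
    if n >= i >= 0:
        return comb(n, i)
    if -1 >= n >= i:
        return (-1) ** ((i + n) % 2) * comb(-i - 1, -n - 1)
    return 0

def S_formula(n, k):
    total = 0
    for i in range(-abs(n) - 2, abs(n) + 3):
        g = gbinom(n - i * k, i - 1)
        if g:
            total += (-1) ** (i % 2) * g * 2 ** (n + 1 - i * (k + 1))
    return 1 - (k - 1) * total

def S_recursive(k, lo, hi):
    X = {i: 1 for i in range(k)}
    for n in range(k, hi + 1):
        X[n] = sum(X[n - j] for j in range(1, k + 1))
    for n in range(-1, lo - 1, -1):
        X[n] = X[n + k] - sum(X[n + j] for j in range(1, k))
    return X

k = 4
X = S_recursive(k, -12, 20)
assert all(S_formula(n, k) == X[n] for n in range(-12, 21))
print("Eq. (7) matches the recursion for k =", k)
```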
###### Theorem 7.
Let $k\geq 2$, $0\leq j\leq k-1$. The $k$-order Fibonacci sequence $B^{(j)}$
satisfies the following formula
$\displaystyle B_{n}^{(j)}=$
$\displaystyle-\sum_{i\in\mathbb{Z}}{(-1)^{i}\left\langle{{n-ik}\choose{i-1}}\right\rangle
2^{n+1-i(k+1)}}$
$\displaystyle+\sum_{i\in\mathbb{Z}}{(-1)^{i}\left\langle{{n-j-1-ik}\choose{i-1}}\right\rangle
2^{n-j-i(k+1)}}\mbox{ for all }n\in\mathbb{Z}.$
Proof. By Theorem 3(vii),
$B^{(j)}=\frac{1}{k-1}(\mathtt{I}-\mathtt{R}^{j+1})(S)$, thus, using the
formula (7) for $S_{n}$ in Theorem 5, we obtain the desired formula for
$B^{(j)}_{n}$. $\blacksquare$
The formula (8) for $S_{n}$ in Theorem 6 is equivalent to a formula in
Ferguson [8] (formula (3) for $V_{n,a(n+1)+b}$). Theorem 7 for the case
$j=k-1$ and positive indices is proved in Benjamin et al. [1].
## 4 Explicit formula based on multinomials
In this section, we will derive explicit formulas for the two-sided Fibonacci
basis sequences $B^{(0)},B^{(1)},\dots,B^{(k-1)}$ expressed in terms of
multinomial coefficients. Since the traditional multinomial notation is
defined for non-negative integers, to use it for our two-sided sequences we
first extend the multinomial notation ${n\choose{i_{1},i_{2},\dots,i_{t}}}$
to negative values of $n$ and $i_{1},i_{2},\dots,i_{t}$.
A multinomial is defined as
$\displaystyle(i_{1},i_{2},\dots,i_{t})$
$\displaystyle={{i_{1}+i_{2}+\dots+i_{t}}\choose{i_{1},i_{2},\dots,i_{t}}}=\frac{(i_{1}+i_{2}+\dots+i_{t})!}{i_{1}!i_{2}!\dots
i_{t}!}.$
We observe that
$(i_{1},i_{2},\dots,i_{t})={{i_{1}+\dots+i_{t}}\choose{i_{2}+\dots+i_{t}}}{{i_{2}+\dots+i_{t}}\choose{i_{3}+\dots+i_{t}}}\dots{{i_{t-2}+i_{t-1}+i_{t}}\choose{i_{t-1}+i_{t}}}{{i_{t-1}+i_{t}}\choose{i_{t}}}.$
We will use this formula to extend multinomial notation for negative integers.
###### Definition 3.
Let $t\geq 2$ be an integer. For any integers $i_{1},i_{2},\dots,i_{t}$, the
generalized multinomial $\left\langle(i_{1},i_{2},\dots,i_{t})\right\rangle$
is defined as
$\displaystyle\left\langle(i_{1},i_{2},\dots,i_{t})\right\rangle=\left\langle{{i_{1}+i_{2}+\dots+i_{t}}\choose{i_{1},i_{2},\dots,i_{t}}}\right\rangle$
$\displaystyle=\left\langle{{i_{1}+\dots+i_{t}}\choose{i_{2}+\dots+i_{t}}}\right\rangle\left\langle{{i_{2}+\dots+i_{t}}\choose{i_{3}+\dots+i_{t}}}\right\rangle\dots\left\langle{{i_{t-2}+i_{t-1}+i_{t}}\choose{i_{t-1}+i_{t}}}\right\rangle\left\langle{{i_{t-1}+i_{t}}\choose{i_{t}}}\right\rangle.$
Using the following formula for the generalized binomial coefficient
$\displaystyle\left\langle{{n}\choose{i}}\right\rangle$
$\displaystyle=\begin{cases}\frac{n^{\underline{n-i}}}{(n-i)!}=\frac{n(n-1)(n-2)\dots(i+1)}{(n-i)!},&\text{if
}n\geq i\\\ 0,&\text{otherwise}\end{cases},$
we obtain the following formula for the generalized multinomial
$\displaystyle\left\langle(i_{1},i_{2},\dots,i_{t})\right\rangle=\left\langle{{i_{1}+i_{2}+\dots+i_{t}}\choose{i_{1},i_{2},\dots,i_{t}}}\right\rangle$
$\displaystyle=\begin{cases}\cfrac{(i_{1}+\dots+i_{t})^{\underline{i_{1}}}(i_{2}+\dots+i_{t})^{\underline{i_{2}}}\dots(i_{t-1}+i_{t})^{\underline{i_{t-1}}}}{i_{1}!i_{2}!\dots
i_{t-1}!},&\text{if }i_{1},i_{2},\dots,i_{t-1}\geq 0\\\
0,&\text{otherwise}\end{cases}.$
When $t=2$, the Pascal Recursion relation becomes
$\displaystyle\left\langle(i_{1},i_{2})\right\rangle=\left\langle(i_{1}-1,i_{2})\right\rangle+\left\langle(i_{1},i_{2}-1)\right\rangle.$
For a general $t\geq 2$, we have the following generalized Pascal Recursion
relation for multinomials:
$\displaystyle\left\langle(i_{1},i_{2},\dots,i_{t})\right\rangle$
$\displaystyle=\left\langle(i_{1}-1,i_{2},\dots,i_{t})\right\rangle+\left\langle(i_{1},i_{2}-1,\dots,i_{t})\right\rangle+\dots+\left\langle(i_{1},i_{2},\dots,i_{t}-1)\right\rangle.$
(10)
Since $\left\langle{n\choose i}\right\rangle$ is non-zero only for $n\geq
i\geq 0$ or $-1\geq n\geq i$, the generalized multinomial
$\left\langle(i_{1},i_{2},\dots,i_{t})\right\rangle$ is non-zero only for
$i_{1}+\dots+i_{t}\geq i_{2}+\dots+i_{t}\geq\dots\geq i_{t-1}+i_{t}\geq
i_{t}\geq 0$ or $-1\geq i_{1}+\dots+i_{t}\geq i_{2}+\dots+i_{t}\geq\dots\geq
i_{t-1}+i_{t}\geq i_{t}$. Using the formula (5) for $\left\langle{n\choose
i}\right\rangle$, we can derive the formula for the generalized multinomial in
these two separate cases.
Case 1. If $i_{1}+\dots+i_{t}\geq i_{2}+\dots+i_{t}\geq\dots\geq
i_{t-1}+i_{t}\geq i_{t}\geq 0$, i.e. $i_{1},i_{2},\dots,i_{t}\geq 0$, then
$\displaystyle\left\langle(i_{1},i_{2},\dots,i_{t})\right\rangle=\left\langle{{i_{1}+i_{2}+\dots+i_{t}}\choose{i_{1},i_{2},\dots,i_{t}}}\right\rangle={{i_{1}+i_{2}+\dots+i_{t}}\choose{i_{1},i_{2},\dots,i_{t}}}=(i_{1},i_{2},\dots,i_{t}).$
Case 2. If $-1\geq i_{1}+\dots+i_{t}\geq i_{2}+\dots+i_{t}\geq\dots\geq
i_{t-1}+i_{t}\geq i_{t}$ then
$\displaystyle\left\langle(i_{1},i_{2},\dots,i_{t})\right\rangle$
$\displaystyle=\left\langle{{i_{1}+i_{2}+\dots+i_{t}}\choose{i_{1},i_{2},\dots,i_{t}}}\right\rangle$
$\displaystyle=(-1)^{i_{1}+\dots+i_{t-1}}{{-i_{t}-1}\choose{i_{1},i_{2},\dots,i_{t-1},-i_{1}-\dots-
i_{t}-1}}$
$\displaystyle=(-1)^{i_{1}+\dots+i_{t-1}}(i_{1},i_{2},\dots,i_{t-1},-i_{1}-\dots-
i_{t}-1).$
Thus, we obtain the following theorem that connects the generalized
multinomial to the classical multinomial.
###### Theorem 8.
For any integer $t\geq 2$ and $i_{1},i_{2},\dots,i_{t}\in\mathbb{Z}$, we have
$\displaystyle\left\langle(i_{1},i_{2},\dots,i_{t})\right\rangle$
$\displaystyle=\begin{cases}(i_{1},i_{2},\dots,i_{t}),&\text{if
}i_{1},i_{2},\dots,i_{t}\geq 0\\\
(-1)^{i_{1}+\dots+i_{t-1}}(i_{1},i_{2},\dots,i_{t-1},-i_{1}-\dots-
i_{t}-1)&\text{if }i_{1},i_{2},\dots,i_{t-1}\geq 0\mbox{ and
}i_{1}+\dots+i_{t}\leq-1\\\ 0,&\text{otherwise.}\end{cases}$
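Definition 3 translates directly into code as a product of generalized
binomials of the suffix sums $s_{j}=i_{j}+\dots+i_{t}$; the following sketch
prints a few values, including the one used later in the proof of Theorem 10.

```python
# A sketch of the generalized multinomial of Definition 3.
from math import comb

def gbinom(n, i):
    if n >= i >= 0:
        return comb(n, i)
    if -1 >= n >= i:
        return (-1) ** ((i + n) % 2) * comb(-i - 1, -n - 1)
    return 0

def gmultinom(idx):
    # product of generalized binomials of consecutive suffix sums
    s = [sum(idx[j:]) for j in range(len(idx))]
    out = 1
    for a, b in zip(s, s[1:]):
        out *= gbinom(a, b)
    return out

print(gmultinom((2, 1, 1)))     # ordinary multinomial: 4!/(2!1!1!) = 12
print(gmultinom((0, 0, -1)))    # 1, used later in the proof of Theorem 10
print(gmultinom((1, 0, -3)))    # case 2 of Theorem 8: -(1, 0, 1) = -2
```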
In the following theorem, we define an auxiliary sequence
$\\{X_{n}\\}_{n\in\mathbb{Z}}$. Note that $X$ is a member of the linear space
$\mathsf{Fibonacci}^{(k)}$.
###### Theorem 9.
Let $k\geq 2$, $c\in\mathbb{Z}$ any constant, and
$\displaystyle X_{n}$
$\displaystyle=\sum_{a_{1}+2a_{2}+\dots+ka_{k}=n+c}{\left\langle(a_{1},a_{2},\dots,a_{k})\right\rangle}$
$\displaystyle=\sum_{s_{1}+s_{2}+\dots+s_{k}=n+c}{\left\langle{s_{1}\choose
s_{2}}\right\rangle\left\langle{s_{2}\choose
s_{3}}\right\rangle\dots\left\langle{s_{k-1}\choose s_{k}}\right\rangle}.$
Then $\\{X_{n}\\}_{n\in\mathbb{Z}}$ is a Fibonacci sequence of order $k$.
Proof. The two formulas on the RHS are equivalent by using the variables
$s_{1}=a_{1}+\dots+a_{k}$, $s_{2}=a_{2}+\dots+a_{k}$, …,
$s_{k-1}=a_{k-1}+a_{k}$ and $s_{k}=a_{k}$.
Note that the summation only has a finite number of non-zero terms. This is
because $\left\langle(a_{1},a_{2},\dots,a_{k})\right\rangle$ is non-zero only
if $s_{1}\geq s_{2}\geq\dots\geq s_{k}\geq 0$ or $-1\geq s_{1}\geq
s_{2}\geq\dots\geq s_{k}$, and there are only a finite number of choices for
$s_{1},s_{2},\dots,s_{k}$ that have the same sign whose sum
$s_{1}+s_{2}+\dots+s_{k}=n+c$ is fixed.
By Pascal Recursion relation (10),
$\displaystyle X_{n}=$
$\displaystyle\sum_{a_{1}+2a_{2}+\dots+ka_{k}=n+c}{\left\langle(a_{1}-1,a_{2},\dots,a_{k})\right\rangle}$
$\displaystyle+\sum_{a_{1}+2a_{2}+\dots+ka_{k}=n+c}{\left\langle(a_{1},a_{2}-1,\dots,a_{k})\right\rangle}$
$\displaystyle+\dots+\sum_{a_{1}+2a_{2}+\dots+ka_{k}=n+c}{\left\langle(a_{1},a_{2},\dots,a_{k}-1)\right\rangle}.$
Let $a_{1}^{\prime}=a_{1}-1$, $a_{2}^{\prime}=a_{2}-1$, …,
$a_{k}^{\prime}=a_{k}-1$. We have
$\displaystyle X_{n}=$
$\displaystyle\sum_{a_{1}^{\prime}+2a_{2}+\dots+ka_{k}=n+c-1}{\left\langle(a_{1}^{\prime},a_{2},\dots,a_{k})\right\rangle}$
$\displaystyle+\sum_{a_{1}+2a_{2}^{\prime}+\dots+ka_{k}=n+c-2}{\left\langle(a_{1},a_{2}^{\prime},\dots,a_{k})\right\rangle}$
$\displaystyle+\dots+\sum_{a_{1}+2a_{2}+\dots+ka_{k}^{\prime}=n+c-k}{\left\langle(a_{1},a_{2},\dots,a_{k}^{\prime})\right\rangle}$
$\displaystyle=X_{n-1}+X_{n-2}+\dots+X_{n-k},$
therefore, $\\{X_{n}\\}$ is a Fibonacci sequence of order $k$. $\blacksquare$
###### Theorem 10.
Let $k\geq 2$. Then
$B_{n}^{(0)}=\sum_{a_{1}+2a_{2}+\dots+ka_{k}=n-k}{\left\langle(a_{1},a_{2},\dots,a_{k})\right\rangle},\mbox{
for all }n\in\mathbb{Z}.$ (11)
Proof. Let $B^{\prime}$ denote the RHS, then by Theorem 9, $B^{\prime}$ is a
Fibonacci sequence. We only need to show its initial values match with those
of $B^{(0)}$.
Again, as in the proof of Theorem 9, we use the variables
$s_{1}=a_{1}+\dots+a_{k}$, $s_{2}=a_{2}+\dots+a_{k}$, …,
$s_{k-1}=a_{k-1}+a_{k}$ and $s_{k}=a_{k}$, then $s_{1}+s_{2}+\dots+s_{k}=n-k$.
When $n=0$, $s_{1}+s_{2}+\dots+s_{k}=-k<0$, so
$\left\langle(a_{1},a_{2},\dots,a_{k})\right\rangle$ is non-zero only if
$-1\geq s_{1}\geq s_{2}\geq\dots\geq s_{k}$. The only possibility is
$s_{1}=s_{2}=\dots=s_{k}=-1$ and this gives $a_{1}=a_{2}=\dots=a_{k-1}=0$,
$a_{k}=-1$ and $B^{\prime}_{0}=\left\langle(0,\dots,0,-1)\right\rangle=1$.
When $1\leq n\leq k-1$, $-(k-1)\leq s_{1}+s_{2}+\dots+s_{k}=n-k<0$. There are
no such $-1\geq s_{1}\geq s_{2}\geq\dots\geq s_{k}$ that satisfy this
condition, so the summation is empty and $B^{\prime}_{n}=0$ for $1\leq n\leq
k-1$. $\blacksquare$
###### Theorem 11.
Let $k\geq 2$. Then
$B_{n}^{(k-1)}=\sum_{a_{1}+2a_{2}+\dots+ka_{k}=n-k+1}{\left\langle(a_{1},a_{2},\dots,a_{k})\right\rangle},\mbox{
for all }n\in\mathbb{Z}.$
Proof. By Theorem 3(iii), $B^{(k-1)}=\mathtt{L}(B^{(0)})$, so using the
formula for $B^{(0)}_{n}$ in Theorem 10 we obtain the desired formula for
$B^{(k-1)}_{n}$. $\blacksquare$
The formula in Theorem 11 is proved in Miles [18] for natural number $n\geq
k-1$. Our Theorem 11 extends it to $n<k-1$ and negative integer $n$.
The Tribonacci sequence $\\{T_{n}\\}_{n\geq 0}$ studied in Rabinowitz [20] is
a Fibonacci sequence of order $k=3$ with initial values $T_{0}=0$, $T_{1}=1$,
$T_{2}=1$. Solving for $T_{-1}$, we have $T_{-1}=0$, so
$T=\mathtt{L}(B^{(2)})$. The formula in Theorem 11 is proved in Rabinowitz
[20] for $k=3$ and $n\geq 2$. Our Theorem 11 extends it to all order $k\geq 2$
and all index $n\in\mathbb{Z}$.
The next theorem gives explicit formulas for all the basis Fibonacci sequences
of order $k$.
###### Theorem 12.
Let $k\geq 2$. For any $0\leq j\leq k-1$,
$B_{n}^{(j)}=\sum_{n-k-j\leq a_{1}+2a_{2}+\dots+ka_{k}\leq
n-k}{\left\langle(a_{1},a_{2},\dots,a_{k})\right\rangle},\mbox{ for all
}n\in\mathbb{Z}.$
Proof. By Theorem 3(ii), $B^{(j)}=\sum_{i=0}^{j}\mathtt{R}^{i}(B^{(0)})$, so
using the formula for $B^{(0)}_{n}$ in Theorem 10 we obtain the desired
formula for $B^{(j)}_{n}$. $\blacksquare$
Theorem 11 and Theorem 12 give rise to two different formulas for the sequence
$B^{(k-1)}$. It would be interesting to see a combinatorial proof of the
equality of these two formulas.
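While such a combinatorial proof is open, the equality can at least be checked
numerically. The sketch below evaluates both formulas through the suffix-sum
form of Theorem 9 and compares them with the recursively generated basis
sequence; the small order $k$ and the index window are illustrative choices.

```python
# Numerical comparison of Theorem 11 with Theorem 12 (case j = k-1).
from math import comb
from itertools import product

def gbinom(n, i):
    if n >= i >= 0:
        return comb(n, i)
    if -1 >= n >= i:
        return (-1) ** ((i + n) % 2) * comb(-i - 1, -n - 1)
    return 0

def X_sum(m, k):
    # sum of <(a_1,...,a_k)> over a_1 + 2 a_2 + ... + k a_k = m, enumerated
    # through the suffix sums s_1 >= ... >= s_k >= 0 or -1 >= s_1 >= ... >= s_k
    total = 0
    rng = range(0, m + 1) if m >= 0 else range(m, 0)
    for s in product(rng, repeat=k):
        if sum(s) == m and all(a >= b for a, b in zip(s, s[1:])):
            term = 1
            for a, b in zip(s, s[1:]):
                term *= gbinom(a, b)
            total += term
    return total

def basis(k, j, lo, hi):
    X = {i: int(i == j) for i in range(k)}
    for n in range(k, hi + 1):
        X[n] = sum(X[n - i] for i in range(1, k + 1))
    for n in range(-1, lo - 1, -1):
        X[n] = X[n + k] - sum(X[n + i] for i in range(1, k))
    return X

k = 3
B = basis(k, k - 1, -8, 10)
for n in range(-8, 11):
    t11 = X_sum(n - k + 1, k)                                      # Theorem 11
    t12 = sum(X_sum(m, k) for m in range(n - 2*k + 1, n - k + 1))  # Theorem 12
    assert t11 == t12 == B[n]
print("Theorems 11 and 12 agree with the recursion for k =", k)
```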
## 5 A remark on a tiling problem
It is well known that the classical Fibonacci sequence, $F_{0}=0$, $F_{1}=1$,
$F_{n}=F_{n-1}+F_{n-2}$, has a close relation with the tiling problem. The
value $F_{n+1}$ counts the number of tilings of a $1\times n$ board with
square tiles $1\times 1$ and domino tiles $1\times 2$. This is because for
$n\geq 2$, by considering the first tile: if the first tile is a square then
there are $F_{n}$ ways to cover the remaining strip of length $n-1$, and if
the first tile is a domino then there are $F_{n-1}$ ways to cover the
remaining strip of length $n-2$. That is how the recursion equation
$F_{n+1}=F_{n}+F_{n-1}$ arises.
If we allow tiles of length up to $k$, then the corresponding counting
sequence is $\\{C_{n}\\}_{n\geq 0}$, where $C_{n}$ counts the tilings of a
board of length $n-1$. We have $C_{0}=0$, $C_{1}=1$, $C_{2}=C_{0}+C_{1}$,
$C_{3}=C_{0}+C_{1}+C_{2}$,…, $C_{k-1}=C_{0}+C_{1}+\dots+C_{k-2}$, and for
$n\geq k$, $C_{n}=C_{n-1}+C_{n-2}+\dots+C_{n-k}$. Of course, if we extend the
index to negative integers and set $C_{-1}=C_{-2}=\dots=C_{-(k-2)}=0$ then the
Fibonacci recursion equation $C_{n}=C_{n-1}+C_{n-2}+\dots+C_{n-k}$ holds for
all $n\geq 2$. This sequence $C$ is just a left shift of the basis sequence
$B^{(k-1)}$. Indeed, $C=\mathtt{L}^{k-2}(B^{(k-1)})$. Many authors such as
Gabai, Philippou, Muwafi, Benjamin, Heberle, Quinn and Su [19, 9, 1, 2] have
studied this tiling problem, and here we decided to use the letter $C$ to
denote this sequence since it is related to a combinatorial problem.
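The identification of $C$ with a shifted basis sequence can be checked against
a direct dynamic-programming count of the tilings; the interpretation of
$C_{n}$ as the number of tilings of a board of length $n-1$ is the convention
adopted above.

```python
# Checking that C_n equals the number of tilings of a board of length n-1
# with tiles of lengths 1..k (C_1 = 1 counts the empty tiling).
def tilings(length, k):
    f = [1] + [0] * length                 # f[0] = 1: the empty tiling
    for n in range(1, length + 1):
        f[n] = sum(f[n - j] for j in range(1, min(n, k) + 1))
    return f

def C(k, hi):
    c = [0, 1] + [0] * (hi - 1)            # C_0 = 0, C_1 = 1
    for n in range(2, hi + 1):             # C_n = C_{n-1} + ... + C_{n-k}, C_m = 0 for m < 0
        c[n] = sum(c[n - j] for j in range(1, k + 1) if n - j >= 0)
    return c

k, hi = 3, 12
f, c = tilings(hi, k), C(k, hi)
assert all(c[n] == f[n - 1] for n in range(1, hi + 1))
print("C_n counts the tilings of a board of length n-1 for k =", k)
```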
## References
* [1] A. T. Benjamin and C. R. Heberle, Counting on $r$-Fibonacci numbers, Fibonacci Quarterly 52(2), 121–128, 2014.
* [2] A. T. Benjamin, J. J. Quinn and F. E. Su, Phased tilings and generalized Fibonacci identities, Fibonacci Quarterly 38(3), 282–289, 2000.
* [3] M. Bunder and J. Tonien, Generalized Fibonacci numbers and their 2-adic order, Integers, 20, #A105, 2020.
* [4] A. P. Chaves and D. Marques, A Diophantine equation related to the sum of squares of consecutive $k$-generalized Fibonacci numbers, Fibonacci Quarterly 52(1), 70–74, 2014.
* [5] T. W. Cusick, On a certain integer associated with a generalized Fibonacci sequence, Fibonacci Quarterly 6(2), 117–126, 1968.
* [6] M. Ddamulira, C. A. Gomez and F. Luca, On a problem of Pillai with $k$–generalized Fibonacci numbers and powers of 2, Monatshefte fur Mathematik 187, 635–664, 2018.
* [7] T. P. Dence, Ratios of generalized Fibonacci sequences, Fibonacci Quarterly 25(2), 137–143, 1987.
* [8] D. E. Ferguson, An expression for generalized Fibonacci numbers, Fibonacci Quarterly 4(3), 270–272, 1966.
* [9] H. Gabai, Generalized Fibonacci $k$-sequences, Fibonacci Quarterly, 8(1), 31–38, 1970.
* [10] F. T. Howard and C. Cooper, Some identities for $r$-Fibonacci numbers, Fibonacci Quarterly, 49(3), 231–242, 2011.
* [11] D. Kessler and J. Schiff, A combinatoric proof and generalization of Ferguson’s formula for $k$-generalized Fibonacci numbers, Fibonacci Quarterly 42(3), 266–273, 2004.
* [12] I. I. Kolodner, On a generating function associated with generalized Fibonacci sequences, Fibonacci Quarterly 3(4), 272–278, 1965.
* [13] G-Y. Lee, S-G. Lee, J-S. Kim, and H-K. Shin, The Binet formula and representations of $k$-generalized Fibonacci numbers, Fibonacci Quarterly, 39(2), 158–164, 2001.
* [14] T. Lengyel and D. Marques, The 2-adic order of some generalized Fibonacci numbers, Integers, 17, #A5, 2017.
* [15] D. E. Loeb, Sets with a negative number of elements, Advances in Mathematics, 91(1), 64–74, 1992.
* [16] D. E. Loeb, A generalization of the binomial coefficients, Discrete Mathematics, 105(1–3), 143–156, 1992.
* [17] R. S. Melham, Certain classes on finite sums that involve generalized Fibonacci and Lucas numbers, Fibonacci Quarterly 42(1), 47–54, 2004.
* [18] E. P. Miles Jr, Generalized Fibonacci numbers and associated matrices, The American Mathematical Monthly, 67(8), 745–752, 1960.
* [19] A. N. Philippou and A. A. Muwafi, Waiting for the $k$th consecutive success and the Fibonacci sequence of order $k$, Fibonacci Quarterly 20(1), 28–32, 1982.
* [20] S. Rabinowitz, Algorithmic manipulation of third-order linear recurrences, Fibonacci Quarterly 34(5), 447–464, 1996.
* [21] B. Sobolewski, The 2-adic valuation of generalized Fibonacci sequences with an application to certain Diophantine equations, Journal of Number Theory, 180, 730–742, 2017.
# Feasibility of measuring the magnetic dipole moments of the charm baryons
at the LHC using bent crystals
A.S. Fomin<EMAIL_ADDRESS>LAL (Laboratoire de l’Accélérateur Linéaire),
Université Paris-Sud/IN2P3, Orsay, France NSC Kharkiv Institute of Physics
and Technology, 61108 Kharkiv, Ukraine V.N. Karazin Kharkiv National
University, 61022 Kharkiv, Ukraine A.Yu. Korchin<EMAIL_ADDRESS>NSC
Kharkiv Institute of Physics and Technology, 61108 Kharkiv, Ukraine V.N.
Karazin Kharkiv National University, 61022 Kharkiv, Ukraine A. Stocchi
<EMAIL_ADDRESS>LAL (Laboratoire de l’Accélérateur Linéaire), Université
Paris-Sud/IN2P3, Orsay, France O.A. Bezshyyko Taras Shevchenko National
University of Kyiv, 01601 Kyiv, Ukraine L. Burmistrov LAL (Laboratoire de
l’Accélérateur Linéaire), Université Paris-Sud/IN2P3, Orsay, France S.P.
Fomin NSC Kharkiv Institute of Physics and Technology, 61108 Kharkiv, Ukraine
V.N. Karazin Kharkiv National University, 61022 Kharkiv, Ukraine I.V.
Kirillin NSC Kharkiv Institute of Physics and Technology, 61108 Kharkiv,
Ukraine V.N. Karazin Kharkiv National University, 61022 Kharkiv, Ukraine L.
Massacrier IPNO (Institut de Physique Nucléaire), Université Paris-Sud/IN2P3,
Orsay, France A. Natochii Taras Shevchenko National University of Kyiv,
01601 Kyiv, Ukraine LAL (Laboratoire de l’Accélérateur Linéaire), Université
Paris-Sud/IN2P3, Orsay, France P. Robbe LAL (Laboratoire de l’Accélérateur
Linéaire), Université Paris-Sud/IN2P3, Orsay, France W. Scandale LAL
(Laboratoire de l’Accélérateur Linéaire), Université Paris-Sud/IN2P3, Orsay,
France CERN, European Organization for Nuclear Research, CH-1211 Geneva 23,
Switzerland INFN Sezione di Roma, Piazzale Aldo Moro 2, 00185 Rome, Italy
N.F. Shul’ga NSC Kharkiv Institute of Physics and Technology, 61108 Kharkiv,
Ukraine V.N. Karazin Kharkiv National University, 61022 Kharkiv, Ukraine
(May 9, 2017)
###### Abstract
In this paper we revisit the idea of measuring the magnetic dipole moments of
the charm baryons and, in particular, of $\Lambda_{c}^{+}$ by studying the
spin precession induced by the strong effective magnetic field inside the
channels of a bent crystal. We present a detailed sensitivity study showing
the feasibility of such an experiment at the LHC in the coming years.
###### pacs:
13.30.Eg, 13.40.Em, 13.88+e, 14.20.Lq, 61.85.+p
## I Introduction
The magnetic dipole moment (MDM) of a particle is its fundamental
characteristic that determines the torque which the particle experiences in an
external magnetic field. The MDMs of many particles are presently known
PDG:2014 . For the electron, the QED prediction agrees with the experimentally
measured value to very high precision. For the muon, the measurement of the
BNL E821 experiment Bennett:2006fi disagrees with the Standard Model
prediction by 3–4 standard deviations, which may suggest physics beyond the
Standard Model. The disagreement for the muon $g-2$ is the subject of many
studies (see, e.g., the review Jegerlehner:2009 ). The MDM of the
$\tau$-lepton has not been measured so far and is of great interest for
testing calculations in the Standard Model Eidelman:2007 .
For hadrons, the MDMs have been measured for the baryon octet with
$J^{P}={\tfrac{1}{2}}^{+}$. Historically, the reasonable agreement between the
measured MDMs and the predictions of the quark model was important to
substantiate the constituent quark models of the hadrons.
In general, the MDM of the spin-$\tfrac{1}{2}$ particle is expressed as
$\vec{\mu}=\frac{2\mu}{\hbar}\vec{S},\qquad\quad\mu=\frac{q\hbar}{2mc}\,\frac{g}{2},$
(1)
where $\vec{S}=\tfrac{\hbar}{2}\vec{\sigma}$, $m$ is the particle mass, $q$ is
the particle electric charge, $g$ is the gyromagnetic factor. The value $g=2$
corresponds to a Dirac particle without magnetic moment anomaly. Usually, the
MDM of baryons is measured in units of the nuclear magneton $\mu_{N}\equiv
e\hbar/(2m_{p}c)$ PDG:2014 , where $m_{p}$ is the proton mass and $e$ is the
elementary charge.
It would be very important to measure the MDM of the charm baryons
$\Lambda_{c}^{+}(udc)$ and $\Xi_{c}^{+}(usc)$, which have not been measured so
far because of their very short lifetime of the order of $10^{-13}$ s.
There have been many calculations of the MDM of the charm baryons in various
models of their structure Franklin:1981 ; Barik:1984 ; Savage:1994 ;
SilvestreBrac:1996 ; Zhu:1997 ; Aliev:2002 ; Julia-Diaz:2004 ; Albertus:2006 ;
Kumar:2005 ; Faessler:2006 ; Karliner:2006ny ; Patel:2008 ; Majethiya:2008 ;
Aliev:2008_1 ; Aliev:2008_2 ; Sharma:2010 ; Bernotas:2013 . As for the
$\Lambda_{c}^{+}$ baryon, the majority of the calculations predict the MDM and
$g$-factor in the ranges
$\frac{\mu({\Lambda_{c}^{+}})}{\mu_{N}}=0.37\text{--}0.42,\qquad
g({\Lambda_{c}^{+}})=1.80\text{--}2.05.$ (2)
Thus, an experimental study of the MDM of heavy baryons can be useful to
distinguish between different theoretical approaches.
One further motivation for measuring the MDM of the heavy baryons is the study
of the MDM of the charm quark. If this quark behaves as a point-like Dirac
particle, then the corresponding gyromagnetic factor $g_{c}$ is equal or close
to 2, while if the charm quark has a composite structure we can expect a
sizable deviation from this value.
In the quark model the MDM of the heavy baryon is expressed in terms of the
MDMs of the heavy and light quarks. In particular, for the charm baryons, the
spin and flavor structure of the ground-state baryons $\Lambda_{c}^{+}$ and
$\Xi_{c}^{+}$ implies that (see, e.g., Ref. Franklin:1981 )
$\mu({\Lambda_{c}^{+}})=\mu_{c},\qquad\mu({\Xi_{c}^{+}})=\frac{1}{3}\left(2\mu_{u}+2\mu_{s}-\mu_{c}\right).$
(3)
The MDMs in Eq. (3) depend on the MDM of the charm quark. Let us consider
$\Lambda_{c}^{+}$ and take the “effective” mass of the $c$-quark, $m_{c}=1.6$
GeV, as suggested by charmonium spectroscopy Franklin:1981 . Keeping
explicitly the $g$-factor of the charm quark, we can write
$\frac{\mu({\Lambda_{c}^{+}})}{\mu_{N}}=0.39\frac{g_{c}}{2},\qquad
g({\Lambda_{c}^{+}})=1.91\frac{g_{c}}{2}.$ (4)
For $g_{c}=2$ these values are consistent with Eqs. (2).
For $\Xi_{c}^{+}$ one needs to specify also the masses of the light
constituent quarks. Choosing $m_{u}=336$ MeV and $m_{s}=509$ MeV, which
reproduce MDMs of the baryon octet Perkins:2000 , one obtains from (3)
$\frac{\mu({\Xi_{c}^{+}})}{\mu_{N}}=0.83-0.13\frac{g_{c}}{2},\quad
g({\Xi_{c}^{+}})=4.37-0.69\frac{g_{c}}{2},$ (5)
where the first number in each quantity in (5) comes from the $u$ and $s$
quarks, and the second from the $c$ quark.
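As a numerical cross-check of Eqs. (3)–(5), the quark-model numbers can be
reproduced from the quoted constituent masses with
$\mu_{q}/\mu_{N}=(q/e)(m_{p}/m_{q})(g_{q}/2)$; the $\Lambda_{c}^{+}$ mass
$m\approx 2.286$ GeV used for the $g$-factor conversion is an assumed PDG
value, not quoted in the text.

```python
# A sketch reproducing the constituent-quark-model numbers in Eqs. (4) and (5)
# from m_c = 1.6 GeV, m_u = 336 MeV, m_s = 509 MeV; the Lambda_c+ mass is an
# assumed PDG value.
m_p, m_c, m_u, m_s, m_Lc = 0.938, 1.6, 0.336, 0.509, 2.286   # GeV

def mu_over_muN(charge, mass, g=2.0):
    return charge * (m_p / mass) * (g / 2.0)

mu_u = mu_over_muN(+2/3, m_u)
mu_s = mu_over_muN(-1/3, m_s)
mu_c = mu_over_muN(+2/3, m_c)                       # g_c = 2

print(f"mu(Lambda_c+)/mu_N = {mu_c:.3f}")                   # ~0.39, Eq. (4)
print(f"g(Lambda_c+)       = {2 * mu_c * m_Lc / m_p:.3f}")  # ~1.91, Eq. (4)
print(f"u,s part of Eq.(5) = {(2*mu_u + 2*mu_s) / 3:.3f}")  # ~0.83
print(f"c   part of Eq.(5) = {mu_c / 3:.3f}")               # ~0.13
```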
The combined measurements of MDMs of $\Lambda_{c}^{+}$ and $\Xi_{c}^{+}$ may
help to obtain information on the $g$-factor of the charm quark.
In the present paper we discuss the feasibility of the MDM measurement for the
positively charged charm baryons $\Lambda_{c}^{+}$ and $\Xi_{c}^{+}$ at the
LHC. This extends the proposal of the UA9 collaboration Burmistrov:2016 .
## II Principle of measurement
The experimental results on MDMs have all been obtained by a well-established
method that consists of measuring the polarization vector of the incoming
particles and the precession angle when the particle travels through an
intense magnetic field. The polarization is evaluated by analyzing the angular
distribution of the decay products. No measurement of magnetic moments of
charm or beauty baryons (and $\tau$ lepton) has been performed so far. The
main reason is that the lifetimes of charm/beauty baryons are too short to
measure the magnetic moment by standard techniques.
One proposal to meet the challenge of measuring the magnetic moments of
baryons with heavy flavored quarks is to use the strong effective magnetic
field inside the channels of a bent crystal instead of the conventional
magnetic field to induce the precession of the polarization vector and measure
the magnetic moment. Some theoretical aspects of this phenomenon, with
possible applications to the LHC, have recently been discussed in
Baryshevsky:2016 , where the author carried out preliminary estimates of the
possibility of measuring the MDMs of short-lived particles, in particular,
charm baryons at LHC energies. In Ref. Botella:2016 the authors suggested using this
method for studying the electric dipole moments (EDM) of the strange $\Lambda$
baryon and the charm baryons.
The theoretical formalism of the precession of the polarization vector of
spin-$\tfrac{1}{2}$ particle in external electric, $\vec{E}$, and magnetic,
$\vec{H}$, fields has been known for a long time Thomas:1926 ; Thomas:1927 ;
Bargmann:1959 ; Hagedorn:1963 ; Beresteckii:1982 ; Jackson:1999 . In Refs.
Baryshevsky:1979 ; Lyuboshits:1980 ; Kim:1983 ; Biryukov ; Akhiezer ;
grininko1991 ; Greenenko:1992ef this formalism was applied to the case of the
bent crystals.
In the planned fixed-target experiment at the LHC, the high-energy proton beam
produces the polarized charm baryons by interacting with nuclei of a target-
converter
$p+A\to\Lambda_{c}^{+}(\Xi_{c}^{+})+X,$ (6)
which are directed into the bent crystal. The initial polarization vector
$\vec{\xi}_{i}$ of the charm baryon is perpendicular to the reaction plane
spanned by the proton and baryon momenta, $\vec{q}$ and $\vec{p}$,
respectively, because of the space-inversion symmetry of the strong
interaction.
When impinging on a bent crystal, a small fraction of the baryons gets into
the regime of planar channeling (see, e.g., Tsyganov ; Biryukov ; Akhiezer ). Note that
only positively charged particles can be efficiently deflected by a bent
crystal using planar channeling phenomenon. The planar channeling of
negatively charged particles is very unstable due to the enhancement of their
multiple scattering on lattice atoms (see, e.g., fomin1997 ). However,
negatively charged particles can also be deflected using the so-called
stochastic mechanism of multiple scattering by atomic strings of a bent
crystal. This mechanism was proposed in grininko1991 . The possibility to use
it for the MDM measurement was considered in Greenenko:1992ef .
The motion of channeled relativistic baryons in the inter-plane electric field
of a bent crystal imitates the particle motion in a strong magnetic field
directed along the crystal bending axis (axis $Oy$ in Fig. 1). The MDM vector
of baryon rotates around this axis. The gradient of the inter-plane electric
field of a silicon crystal reaches the maximum value about 5 GeV/cm that
corresponds to the value of the induction of effective magnetic field of
thousands of tesla in the rest frame of a TeV baryon. The initial value of the
3D polarization vector can be determined using the non-channeled baryons. The
absolute value of the polarization can be also measured as a by-product of
this experiment. Various aspects of this analysis will be discussed later.
The first experimental realization of such method was carried out in Fermilab
Chen:1992 at the 800 GeV proton beam. The strange $\Sigma^{+}(uus)$ baryons
(with lifetime $0.8\times 10^{-10}\,$s) produced on the Cu target had average
momentum 375 GeV/c and the absolute value of polarization $(12\,\pm\,1)$ %.
After passing 4.5 cm of the bent silicon single crystal the polarization
vector precessed by about $60^{\circ}$. This new technique made it possible to
obtain the MDM of the $\Sigma^{+}$ hyperon, $\mu=(2.40\pm 0.46_{stat}\pm
0.40_{syst})\,\mu_{N}$, which was consistent with the world-average value.
The proposed experiment at the LHC is much more difficult because the
lifetimes of the charm baryons $\Lambda_{c}^{+}$ and $\Xi_{c}^{+}$ are three
orders of magnitude shorter than the lifetime of $\Sigma^{+}$. In order to
measure the angle of MDM precession with sufficient accuracy and
correspondingly extract the MDM at the LHC energies, it is necessary to
optimize the target-converter and bent-crystal parameters by means of detailed
computer simulation, as well as to study the properties of the charm baryons,
as discussed in detail later.
### II.1 Spin precession in a bent crystal.
Master formulas
Because of the extremely short lifetime of charmed baryons in comparison with
the $\Sigma^{+}$ hyperon, in our case it is not possible to prepare a beam of
polarized baryons in advance and to measure the degree of their initial
polarization, as was done in the Fermilab experiment Chen:1992 . In our case,
as explained below, the crystal could be used as a beam collimator.
To be captured into the channeling regime, the incoming particle must have a
very small angle $\theta_{x}$ between its momentum and the crystal plane of
the chosen channel, namely, $|\theta_{x}|<\theta_{\rm{L}}$, where $\theta_{\rm
L}$ is the Lindhard angle Lindhard :
$\theta_{\rm L}=\sqrt{\frac{4\pi\,n\,d\,a_{\rm TF}\,Z\,e^{2}}{\varepsilon}},$
(7)
where $n$ is the crystal atomic density, $d$ is the distance between
neighboring planes, $a_{\rm TF}$ is the Thomas-Fermi screening radius, $Z|e|$
is the charge of the atomic nucleus, and $\varepsilon$ is the energy of the
incoming particle. The Lindhard angle is the critical angle of planar
channeling for an ideal crystal. The axis $Ox$ is perpendicular to the channel
plane (see Fig. 1).
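For orientation, Eq. (7) can be evaluated numerically; the silicon (110)
channel parameters below are assumed representative values (they are not
quoted in this paper), and the result is consistent with the
several-microradian scale quoted below.

```python
# An order-of-magnitude evaluation of the Lindhard angle (7); the silicon
# channel parameters are assumed representative values.
import math

n_at = 5.0e22      # atomic density of silicon, cm^-3   (assumed)
d    = 1.92e-8     # (110) inter-planar distance, cm    (assumed)
a_TF = 0.194e-8    # Thomas-Fermi screening radius, cm  (assumed)
Z    = 14          # silicon
e2   = 1.44e-7     # e^2 in eV*cm (Gaussian units)

for energy_TeV in (1, 3, 7):
    eps = energy_TeV * 1e12                                   # eV
    theta_L = math.sqrt(4 * math.pi * n_at * d * a_TF * Z * e2 / eps)
    print(f"{energy_TeV} TeV: theta_L ~ {theta_L * 1e6:.1f} microrad")
```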
The $\Lambda_{c}^{+}$ baryons emitted from the amorphous target-converter are
polarized and isotropically distributed over the azimuthal angle around the
direction of the initial proton beam. The polar angle $\theta$ that determines
the characteristic cone of the relativistic $\Lambda_{c}^{+}$ baryon emission
in the laboratory frame has a value of the order of $\gamma^{-1}$, where
$\gamma=\varepsilon/m$ is the Lorentz factor of the $\Lambda_{c}^{+}$,
$\varepsilon$ and $m$ are its energy and mass, respectively. In the conditions
of the LHC experiment $\theta\approx 10^{-3}$ rad.
The critical angle of planar channeling (7) for particles with an energy of
several TeV in a silicon crystal is a few microradians, which is at least two
orders of magnitude smaller than the characteristic angular width $\theta$ of
the $\Lambda_{c}^{+}$ beam after the target-converter. Therefore,
only a small part of this beam can be captured in the channeling regime when
entering the crystal. For all channeled particles the angle $\theta_{x}$ is
limited in the interval $(-\theta_{\rm{L}},\,+\theta_{\rm{L}})$. At the same
time, there are no limitations on the value of $\theta_{y}$ of the
$\Lambda_{c}^{+}$ to be channeled.
Thus, the conditions for particle capture into the planar channeling regime
automatically select the region of the $\Lambda_{c}^{+}$ momentum phase space
with a certain direction of the polarization vector, namely, perpendicular to
the channeling plane (up or down in Fig. 1).
After passing the bent crystal the polarization vector rotates by the angle
Lyuboshits:1980 ; Kim:1983
$\Theta_{\mu}=\gamma\left(\frac{g}{2}-1-\frac{g}{2\gamma^{2}}+\frac{1}{\gamma}\right)\Theta\approx\gamma\left(\frac{g}{2}-1\right)\Theta,$
(8)
with respect to the direction of the initial polarization vector. Here
$\Theta=L/R$ is the deflection angle of the channeled baryon momentum after
passing the bent crystal, $L$ and $R$ are the length and bending radius of the
crystal. A simple derivation of Eq. (8) is presented in Appendix A.
In the conditions of the LHC the Lorentz factor $\gamma$ can be quite large,
of the order of $10^{3}$. In this case the approximate equality in (8) holds
(unless, incidentally, $g=2$).
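To get a feeling for the numbers, Eq. (8) can be evaluated for the default
crystal of Sec. III.4 ($L_{\rm crys}=8$ cm, $R=22$ m); the $g$-factor 1.9 is
taken from the quark-model range (2), and the $\Lambda_{c}^{+}$ mass 2.286 GeV
is an assumed input.

```python
# A numerical illustration of the spin-rotation angle of Eq. (8); the g-factor
# and the Lambda_c+ mass are assumed inputs.
import math

g, m = 1.9, 2.286                 # assumed g-factor; mass in GeV
L_crys, R = 0.08, 22.0            # crystal length and bending radius, m
Theta = L_crys / R                # deflection angle, ~3.6 mrad

for energy_TeV in (1.0, 2.0, 4.0):
    gamma = energy_TeV * 1e3 / m
    exact = gamma * (g / 2 - 1 - g / (2 * gamma**2) + 1 / gamma) * Theta
    approx = gamma * (g / 2 - 1) * Theta
    print(f"{energy_TeV} TeV: Theta_mu = {math.degrees(exact):+.2f} deg "
          f"(approx {math.degrees(approx):+.2f} deg)")
```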
The schematic layout of the experiment is shown in Fig. 1. To simplify the
following formulas and to make the experiment layout easier to follow, here we
consider the $\Lambda_{c}^{+}$ baryons to move parallel to the $z$ axis. In
our further calculations we take into account the proper angular distribution
of the baryons at the entrance to the crystal.
In this frame the components of the proton momentum $\vec{q}$, baryon initial
$\vec{p}_{i}$ and final $\vec{p}_{f}$ momenta, effective electric field
$\vec{E}_{i}$ and $\vec{E}_{f}$ in the crystal, rotation axis along
$\vec{E}\times\vec{p}$, and the initial $\vec{\xi}_{i}$ and final
$\vec{\xi}_{f}$ polarization vectors are:
$\displaystyle\vec{q}$ $\displaystyle=$ $\displaystyle(0,\,q_{y},\,q_{z}),$
$\displaystyle\vec{p}_{i}$ $\displaystyle=$ $\displaystyle
p\,(0,\,0,\,1),\qquad\vec{p}_{f}=p\,(-\sin\Theta,\,0,\,\cos\Theta),$
$\displaystyle\vec{E}_{i}$ $\displaystyle=$ $\displaystyle
E\,(-1,\,0,\,0),\quad\vec{E}_{f}=E\,(-\cos\Theta,\,0,\,-\sin\Theta),$
$\displaystyle\vec{E}\times\vec{p}$ $\displaystyle=$ $\displaystyle
E\,p\,(0,\,1,\,0),$ $\displaystyle\vec{\xi}_{i}$ $\displaystyle=$
$\displaystyle\xi\,(1,\,0,\,0),\qquad\vec{\xi}_{f}=\xi\,(\cos\Theta_{\mu},\,0,\,\sin\Theta_{\mu}).$
(9)
The absolute value of polarization $\xi=|\vec{\xi}|$ stays constant and is
determined by the process (6).
Figure 1: Schematic layout of experiment. Effective electric field $\vec{E}$
is orthogonal to the momentum $\vec{p}$. The figure shows the case $g>2$.
### II.2 Basic principles of the angular analysis
The orientation of the baryon polarization vector after the crystal can be
determined from the angular distribution of its decay products. For the weak
decays of the spin-$\tfrac{1}{2}$ baryon into the two-particle final states of
baryon and meson ($\tfrac{1}{2}\to\tfrac{1}{2}+0$,
$\tfrac{1}{2}\to\tfrac{1}{2}+1$, $\tfrac{1}{2}\to\tfrac{3}{2}+0$) the
following relation holds
$\frac{1}{N}\frac{dN}{d\cos\vartheta}=\frac{1}{2}(1+\alpha\,\xi\cos\vartheta),$
(10)
in the rest frame of the baryon (see Appendix B). Here $N$ is the number of
events and $\vartheta$ is the angle between the direction of the final baryon
(the analyzer) and the polarization vector $\vec{\xi}_{f}$. The weak-decay
parameter $\alpha$ characterizes parity violation in the decay.
From the angular analysis one can obtain the expression for the absolute
statistical error of the measured $g$-factor:
$\Delta g=\frac{1}{\ \alpha\,|\xi|\,\gamma\,\Theta~{}}\
\sqrt{\frac{12}{~{}N_{\Lambda_{c}^{+}}~{}}},$ (11)
where $N_{\Lambda_{c}}$ is the number of reconstructed $\Lambda_{c}^{+}$
deflected by a bent crystal. Note that Eq. (11) is obtained for a fixed value
of boost $\gamma$.
The values of absolute polarization $|\xi|$ and weak-decay parameter $\alpha$
are crucial, since the $g$-factor error $\Delta g$ is inversely proportional
to these values.
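An illustrative evaluation of Eq. (11) is given below; the inputs are
representative values, $\alpha\approx 0.91$ (the $\Lambda\pi^{+}$ channel of
Table 1), $|\xi|\approx 0.40$ (the polarization discussed below),
$\gamma=1000$ and $\Theta=3.6$ mrad.

```python
# An illustrative evaluation of the statistical error of Eq. (11); all inputs
# are representative assumed values.
import math

alpha, xi, gamma, Theta = 0.91, 0.40, 1000.0, 3.6e-3

for N in (10**3, 10**4, 10**5):
    dg = math.sqrt(12.0 / N) / (alpha * xi * gamma * Theta)
    print(f"N = {N:>6d}: Delta g ~ {dg:.3f}")
```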
Figure 2: Polarization of $\Lambda_{c}^{+}$ as a function of its transverse
momentum. Experimental data: red crosses E791 , orange rectangular area
PolLambdac ; dashed red curves — experimental data fit by the normal
distribution; solid red curve — theoretical prediction by the so-called hybrid
model Goldstein for the process $\pi^{-}p\to\Lambda_{c}^{+}X$. Channeled
baryons distribution over transverse momentum: blue histogram (simulation
results obtained using Pythia8).
The polarization of the $\Lambda_{c}^{+}$ baryons has been measured in the
reaction of 230 GeV/c protons with a copper target and gives
P($\Lambda_{c}^{+}$) = $-0.65\,^{+0.22}_{-0.18}$ at transverse momentum
$p_{t}>1.2$ GeV/c PolLambdac (the sign of the polarization is defined with
respect to the normal to the production plane, $\vec{q}\times\vec{p}_{i}$).
The E791 experiment E791 finds evidence for an increasingly negative
polarization of $\Lambda_{c}^{+}$ as a function of $p_{t}^{2}$, in agreement
with the model dha ; Goldstein . These data are shown in Fig. 2 together with
fitted curves.
In the same plot we show the theoretical prediction in the so-called hybrid
model Goldstein (for the process $\pi^{-}p\to\Lambda_{c}^{+}X$) describing
the $\Lambda_{c}^{+}$ polarization as a function of transverse momentum.
Using the simulation code Pythia, version 8.1 (Pythia8) Pythia , we show the
transverse momentum distribution of the channeled $\Lambda_{c}^{+}$ baryons
(see the blue histogram in Fig. 2).
By convolving the transverse momentum distribution with the polarization curve
as a function of transverse momentum, we obtain a mean value of the
$\Lambda_{c}^{+}$ polarization of about $-0.37$ and $-0.40\pm 0.05$ for the
theoretical prediction and the experimental data, respectively.
No such measurements exist for the $\Xi_{c}^{+}$ baryons. It is also important
to mention that the absolute polarizations of $\Lambda_{c}^{+}$ and of
$\Xi_{c}^{+}$ as a function of transverse momentum could be measured by the
proposed experiment.
In addition, they could also be measured by using the available data on beam
gas interaction at the LHCb (SMOG data SMOG ).
The weak-decay parameter $\alpha$ is a decay-channel-dependent quantity; its
values for various decay channels of the $\Lambda_{c}^{+}$ baryon are compiled
in Table 1.
For the decay channels containing $\Lambda$ or $\Sigma^{+}$ in the final
state, the parameter $\alpha$ has been measured. The decay channel
$\Lambda_{c}^{+}\to p\,K^{-}\,\pi^{+}$ has a large branching fraction and it
would be interesting to use this decay mode for the MDM measurement. The E791
experiment E791 reports measurements of the amplitudes for $\Lambda_{c}^{+}$
decay into nonresonant $p\,K^{-}\,\pi^{+}$ and into the
$p\,\overline{K}^{*}(892)^{0}$, $\Delta^{++}(1232)\,K^{-}$, and
$\Lambda(1520)\,\pi^{+}$ modes. Using the measured amplitudes, the values of
the weak-decay parameter $\alpha$ can be extracted, with large errors, as in
Botella:2016 . It would be extremely important to perform this analysis using
the LHCb data. On the other hand, no measurement of the $\alpha$ parameters
exists in the case of $\Xi_{c}^{+}$, and it would be important to measure
these parameters in the LHCb experiment.
Table 1: Branching fractions and weak-decay parameters $\alpha$ for different decay modes of $\Lambda_{c}^{+}$.

Channel | Fraction ($\Gamma_{j}/\Gamma$) | $\alpha$ | Source
---|---|---|---
$\Lambda_{c}^{+}\to\Lambda\pi^{+};\,\,\Lambda\to p\pi^{-}$ | $(1.07\pm 0.28)\,\%$ $\times$ $(63.9\pm 0.5)\,\%$ | $-0.91\pm 0.15$ | PDG:2014
$\Lambda_{c}^{+}\to\Lambda e^{+}(\mu^{+})\nu_{e(\mu)};\,\,\Lambda\to p\pi^{-}$ | $(2.0\pm 0.6)\,\%$ $\times$ $(63.9\pm 0.5)\,\%$ | $-0.86\pm 0.04$ | PDG:2014
$\Lambda_{c}^{+}\to pK^{-}\pi^{+}$ | $(5.0\pm 1.3)\,\%$ | – | PDG:2014
$\Lambda_{c}^{+}\to\Delta(1232)^{++}K^{-};\,\,\Delta(1232)^{++}\to p\pi^{+}$ | $(0.86\pm 0.3)\,\%$ $\times$ $99.4\,\%$ | $-0.67\pm 0.30$ | Botella:2016
$\Lambda_{c}^{+}\to p\,\overline{K}^{*}(892)^{0};\,\,\overline{K}^{*}(892)^{0}\to K^{-}\pi^{+}$ | $(1.6\pm 0.5)\,\%$ $\times$ $100\,\%$ | $-0.545\pm 0.345$ | Botella:2016
$\Lambda_{c}^{+}\to\Lambda(1520)\pi^{+};\,\,\Lambda(1520)\to pK^{-}$ | $(1.8\pm 0.6)\,\%$ $\times$ $(45\pm 1)\,\%$ | $-0.105\pm 0.604$ | Botella:2016
## III The sensitivity studies
In this paper we have performed a sensitivity study for measuring the MDM of
$\Lambda_{c}^{+}$ produced by the strong interaction of a high-energy proton
beam impinging on a target-converter of dense material. For this analysis we
decided to consider only the $\Lambda_{c}^{+}$ baryons that decay after having
passed the full length of the crystal.
The number of reconstructed $\Lambda_{c}^{+}$ that were deflected by a bent
crystal can be expressed as follows:
$N_{\Lambda_{c}}=\Phi\ t\ \eta_{\rm{det}}\ \frac{\Gamma_{j}}{\Gamma}\
N_{\rm{tar+crys}},$ (12)
where $N_{\rm{tar+crys}}$ is the number of deflected $\Lambda_{c}^{+}$ per
proton:
$N_{\rm{tar+crys}}=\int\frac{\partial N_{\rm tar}}{\partial\varepsilon}\
\eta_{\rm{def}}\ e^{-\frac{L_{\rm{crys}}}{c\tau\gamma}}\,d\varepsilon.$ (13)
Here $\frac{\partial N_{\rm{\rm{tar}}}}{\partial\varepsilon}$ is the
$\Lambda_{c}^{+}$ energy distribution after the target:
$\frac{\partial
N_{\rm{\rm{tar}}}}{\partial\varepsilon}=\rho\,N_{\rm{A}}\,\sigma_{\Lambda_{c}}\,\frac{A_{\rm{tar}}}{M_{\rm{tar}}}\,\frac{\partial
N}{\partial\varepsilon}\,\int\limits_{0}^{L_{\rm{tar}}}e^{-\frac{L}{c\tau\gamma}}\
dL.$ (14)
Then, taking into account the energy distribution of $\Lambda_{c}^{+}$, we
obtain the expression for the absolute statistical error of measured
$g$-factor:
$\Delta g=\frac{1}{\ \alpha\,|\xi|\,\Theta\ }\ \sqrt{\frac{12}{\ \Phi\ t\
\eta_{\rm det}\ \frac{\Gamma_{j}}{\Gamma}\ \int\ \frac{\partial N_{\rm
tar+crys}}{\partial\varepsilon}\,\gamma^{2}\ d\varepsilon\ }}.$ (15)
The definitions of different terms in Eqs. (12)–(15) and their values are
given in Table 2 and discussed in the following sections.
Table 2: List of notations in Eqs. (12)–(15).

Terms in Eqs. (12)–(15) | Values | Units
---|---|---
Proton flux, $\Phi$ | $5\times 10^{8}$ | s$^{-1}$
Time of data taking, $t$ | $\sim 10^{6}$ | s
Detection efficiency, $\eta_{\rm{det}}$ | 0.002–0.03 | –
Deflection efficiency, $\eta_{\rm{def}}$ | (see Sec. III.3) | –
Crystal length, $L_{\rm{crys}}$ | 4–12 | cm
$\Lambda_{c}^{+}$ decay length, $c\tau$ | 60.0 | $\mu$m
Lorentz factor of $\Lambda_{c}^{+}$, $\gamma$ | 500–2000 | –
Normalized production spectra, $\frac{\partial N}{\partial\varepsilon}$ | (see Fig. 3) | TeV$^{-1}$
Cross section ($p$+$N$$\rightarrow$$\Lambda_{c}^{+}$+$\dots$), $\sigma_{\Lambda_{c}}$ | $13.6\pm 2.9$ | $\mu$b
Target density, $\rho$ | 19.25 | g/cm$^{3}$
Avogadro number, $N_{\rm{A}}$ | $6.022\times 10^{23}$ | mol$^{-1}$
Nucleon number of target, $A_{\rm{tar}}$ | 183.84 | –
Molar mass of target, $M_{\rm{tar}}$ | 183.84 | g/mol
Target thickness, $L_{\rm{tar}}$ | 0.5–2 | cm
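As a rough normalization check of Eq. (14), the number of $\Lambda_{c}^{+}$
produced per incident proton in a 1 cm tungsten target can be estimated with
the values of Table 2, ignoring the decay factor inside the target and the
energy dependence (so this is an upper-end estimate).

```python
# A rough estimate of Lambda_c+ produced per incident proton, using Table 2
# values and ignoring the decay factor and energy dependence.
N_A, rho = 6.022e23, 19.25        # mol^-1, g/cm^3
A_tar, M_tar = 183.84, 183.84     # nucleon number; molar mass (g/mol)
sigma = 13.6e-30                  # cm^2 per nucleon (13.6 microbarn)
L_tar = 1.0                       # cm

n_per_proton = rho * N_A / M_tar * A_tar * sigma * L_tar
print(f"~{n_per_proton:.1e} Lambda_c+ per proton")   # ~1.6e-4
```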
### III.1 $\Lambda_{c}^{+}$ production cross section: $\sigma_{\Lambda_{c}}$
The center-of-mass energy for a fixed-target experiment with the 7 TeV LHC
proton beam is $\sqrt{s}$ = 115 GeV, and no measurements of the
$\sigma(\Lambda_{c})$ cross section exist at this center-of-mass energy. For
this study the $\Lambda_{c}$ cross section has been estimated either from the
total charm production cross section or directly from the $\Lambda_{c}$ cross
sections measured at other center-of-mass energies.
The PHENIX experiment in proton-proton collisions at $\sqrt{s}$ = 200 GeV
measured the total charm cross section to be 567 $\pm$ 57 (stat) $\pm$ 224
(syst) $\mu$b PHENIX which is compatible with their previous measurement
$\sigma_{c\bar{c}}$ = 920 $\pm$ 150 $\pm$ 540 $\mu$b in Ref. Adler:2005fy and
the one derived from the analysis of Au-Au collisions Adler:2004ta
($\sigma_{c\bar{c}}$ = 622 $\pm$ 57 $\pm$ 160 $\mu$b). If we rescale the cross
sections to $\sqrt{s}$ = 115 GeV assuming a linear energy dependence, we
obtain $\sigma_{c\bar{c}}$ = 326 $\pm$ 33 $\pm$ 129 $\mu$b,
$\sigma_{c\bar{c}}$ = 529 $\pm$ 86 $\pm$ 311 $\mu$b and $\sigma_{c\bar{c}}$ =
358 $\pm$ 33 $\pm$ 92 $\mu$b, respectively. In the following, we considered
the weighted average of the three experimental results: $\sigma_{c\bar{c}}$ =
357 $\pm$ 77 $\mu$b. The results from the linear interpolation are in
agreement within 1.7$\,\sigma$ with the c$\bar{c}$ cross section obtained with
the Helaconia MC generator Shao:2012iz in Ref. Massacrier:2015qba .
The $\Lambda_{c}$ fragmentation function (7.6 $\pm$ 0.7 ($\pm$ 2 %)) has been
taken from Ref. Gladilin:1999pj , as the average of the results from the CLEO
($f_{c\rightarrow{\Lambda_{c}}}$ = 8.1 $\pm$ 1.2 $\pm$ 1.4 %), ARGUS
($f_{c\rightarrow{\Lambda_{c}}}$ = 7.3 $\pm$ 1.0 $\pm$ 1.0 %), ALEPH
($f_{c\rightarrow{\Lambda_{c}}}$ = 7.8 $\pm$ 0.8 $\pm$ 0.4 %), DELPHI
($f_{c\rightarrow{\Lambda_{c}}}$ = 8.6 $\pm$ 1.8 $\pm$ 1.0 %) and OPAL
($f_{c\rightarrow{\Lambda_{c}}}$ = 4.8 $\pm$ 2.2 $\pm$ 0.8 %) experiments.
Predictions from Pythia8 ($f_{c\rightarrow{\Lambda_{c}}}$ = 7.21 $\pm$ 0.04 %)
and models in Ref. fragm ($f_{c\rightarrow{\Lambda_{c}}}$ = 5.88 $\%$ (LO)
and 5.74 $\%$ (NLO)) are in agreement within the large uncertainties. Finally,
we get $\sigma(\Lambda_{c})$ = 27.1 $\pm$ 9.5 $\mu$b.
On the other hand, we can use the LHCb $\Lambda_{c}$ cross section measurement
in pp collisions at $\sqrt{s}=$ 7 TeV Aaij:2013mga . In this case the cross
section is reported in specific rapidity $y$ and transverse momentum $p_{\rm
t}$ ranges. It is equal to $\sigma_{\Lambda_{c}}\,$(2.0$\,<y<\,$4.5,
0$\,<p_{\rm t}<\,$8 GeV/c) = $233\pm 77$ $\mu$b. We used Pythia8 to
interpolate the cross section to the full $p_{\rm t}$ and rapidity range. The
correction factor is found to be 19.2 $\pm$ 0.3 $\%$. We then extrapolate
linearly the total $\Lambda_{c}$ cross section to the energy of $\sqrt{s}$ =
115 GeV. We obtain $\sigma(\Lambda_{c})$ = 19.9 $\pm$ 6.6 $\mu$b.
Finally, we can use the measurements of the D-meson cross sections performed
in pA collisions at HeraB at a center-of-mass energy of $\sqrt{s}$ = 42 GeV
Abt:2007zg . The measured $D^{0}$, $D^{+}$ and $D^{+}_{s}$ cross sections were
used to calculate the total charm cross section, which is found to be
$\sigma_{c\bar{c}}$ = 49.1 $\pm$ 4.6 $\pm$ 7.4 $\mu$b. After energy
extrapolation, the total charm cross section at $\sqrt{s}=115$ GeV is
$\sigma_{c\bar{c}}$ = 134.4 $\pm$ 12.6 $\pm$ 20.3 $\mu$b. Assuming the
fragmentation function for the $\Lambda_{c}$ given previously, one gets
$\sigma(\Lambda_{c})$ = 10.2 $\pm$ 3.4 $\mu$b.
These three evaluations are compatible within less than 1.7 standard
deviations. The spread of the values is explained by the poorly known total
charm cross section, the poorly known $\Lambda_{c}$ fragmentation function and
the lack of experimental open charm data close to $\sqrt{s}$ = 115 GeV. For
the sensitivity study we took the weighted mean of the three values,
$\sigma(\Lambda_{c})$ = 13.6 $\pm$ 2.9 $\mu$b.
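The quoted combination is straightforward to reproduce; standard
inverse-variance weighting is assumed here.

```python
# Reproducing the weighted mean of the three cross-section estimates;
# inverse-variance weighting is assumed.
values = [(27.1, 9.5), (19.9, 6.6), (10.2, 3.4)]   # (sigma, error), microbarn

w = [1.0 / err**2 for _, err in values]
mean = sum(wi * v for wi, (v, _) in zip(w, values)) / sum(w)
err = sum(w) ** -0.5
print(f"sigma(Lambda_c) = {mean:.1f} +- {err:.1f} microbarn")   # 13.6 +- 2.9
```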
### III.2 $\Lambda_{c}^{+}$ energy distribution: $\frac{\partial
N_{\rm{\rm{tar}}}}{\partial\varepsilon}$
The $\Lambda_{c}^{+}$ produced in the target-converter will have a wide energy
spectrum from zero to the energy of the incident proton. Low-energy
$\Lambda_{c}^{+}$, constituting the majority of the produced particles, cannot
be deflected by a bent crystal at a sufficiently large angle to be used for
measuring the MDM, due to their rapid decay. The normalized energy distributions
of baryons produced by a 7 TeV proton in a tungsten target of zero thickness
are shown in Fig. 3. These results are obtained using Pythia8.
Figure 3: Energy distribution of $\Lambda_{c}^{+}$ baryons produced by 7 TeV
protons in $p\,$-$\,N$ collision in a fixed target normalized to one produced
$\Lambda_{c}^{+}$ baryon. Solid blue curve is for the initial distribution
$(L\,$=$\,0)$, dashed curves are for different distances from the production
point (listed on the right).
The simulation also gives the angular distribution of the produced
$\Lambda_{c}^{+}$, which is important for determining the fraction of the
$\Lambda_{c}^{+}$ beam that can be captured into the channeling regime in a
bent crystal. For energies higher than 0.5 GeV the distribution is very close
to a normal one with a standard deviation $\approx\frac{1}{2}\ \gamma^{-1}$,
which in the case of $\Lambda_{c}^{+}$ baryon energies of several TeV is of
the order of milliradians.
Figure 4 shows the $\Lambda_{c}^{+}$ differential energy distribution after
the target (see Eq. (14)) for different target thicknesses with the parameters
listed in Table 2 and the normalized spectra given in Fig. 3 for $L=0$.
Figure 4: Spectra of $\Lambda_{c}^{+}$ baryons right after the tungsten
targets of different thicknesses $L_{\rm{tar}}$ (listed on the right).
At high energies the number of $\Lambda_{c}^{+}$ is proportional to the target
thickness. Furthermore, the specific ionization losses of TeV baryons in a
tungsten target are about 40 MeV/cm and can therefore be neglected, as can the
multiple scattering of the $\Lambda_{c}^{+}$ in the target, which gives a
correction of the order of a percent of the characteristic angular width
$\gamma^{-1}$ of the $\Lambda_{c}^{+}$ production. The main limitation would
come from secondary particle production in the target; this should be
carefully evaluated. For the present study we decided to use $L_{\rm{tar}}=1$
cm.
### III.3 Deflection efficiency: $\eta_{\rm def}$
The efficiency of particle deflection $\eta_{\rm def}$ is the ratio of the
number of particles which are captured into the channeling regime and
deflected by the full angle $\Theta$ to the total number of particles
impinging on the crystal. It can be expressed as:
$\eta_{\rm def}=\eta_{\rm acc}\,\left(1-\eta_{\rm dech}\right)\ $ (16)
where $\eta_{\rm acc}$ is the acceptance factor which describes the capture of
impinging particle into the channeling regime at the crystal entrance,
$\eta_{\rm{dech}}$ is the dechanneling probability inside the crystal.
The acceptance factor $\eta_{\rm acc}$ is determined first of all by the
angular acceptance factor $\eta_{\rm ang}$, which is the fraction of particles
produced in the target-converter within the narrow interval of angles with
respect to the crystal plane ($zy$). A detailed description of how we obtained
these parameters is presented in Appendix C.
Figure 5: Angular acceptance factor $\eta_{\rm ang}$ (dotted blue curves),
acceptance factor $\eta_{\rm acc}$ (dashed red curves), and deflection efficiency
of an 8 cm bent crystal $\eta_{\rm def}$ (solid black curves) as functions of the
channeled particle energy in germanium (left) and silicon (right)
crystals. The curvature radius is 7.5 m for all crystals.
The results of calculations of the angular acceptance factor $\eta_{\rm ang}$
and acceptance factor $\eta_{\rm acc}$ as functions of $\Lambda_{c}^{+}$
energy are presented by the dotted blue and dashed red curves in Fig. 5,
respectively. Note that these factors have a quite different dependence on
particle energy.
Solid black curves represent the deflection efficiency $\eta_{\rm def}$ of the
crystal of length $L_{\rm crys}=8$ cm. The difference between the solid black
and dashed red curves in Fig. 5 is caused by the dechanneling effect.
Figure 5 shows that a germanium crystal has a better efficiency than a
silicon one and allows one to keep more energetic $\Lambda_{c}^{+}$, which,
in addition, are more efficient for the MDM precession measurement, see
Eq. (15).
### III.4 Crystal parameters optimization
To obtain the optimal crystal parameters and to compare the efficiencies of
silicon and germanium crystals, we introduce the relative efficiency $\eta_{\rm
rel}$ of the MDM precession measurement with respect to the efficiency of a
silicon crystal with $L_{\rm crys}=8$ cm and $R=22$ m (hereafter, the default
crystal). This parameter corresponds to the ratio of the data taking times needed
to measure the $g$-factor with the same absolute error $\Delta g$ (see Eq.
(15)) for two different crystals:
$\eta_{\rm rel}=\frac{t_{0}}{t}=\frac{\Theta^{2}\int\frac{\partial N_{\rm tar}}{\partial\varepsilon}\,\eta_{\rm def}\,\gamma^{2}\,e^{-\frac{L_{\rm crys}}{c\tau\gamma}}\,d\varepsilon}{\Theta_{0}^{2}\int\frac{\partial N_{\rm tar}}{\partial\varepsilon}\,\eta_{\rm def,0}\,\gamma^{2}\,e^{-\frac{L_{\rm crys,0}}{c\tau\gamma}}\,d\varepsilon}.$ (17)
Here quantities with index “0” correspond to the default crystal.
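The ratio in Eq. (17) can be evaluated numerically once the spectrum and the
efficiencies are tabulated. The sketch below is a minimal illustration; the
energy grid, the placeholder inputs `dn_de` and `eta_def`, and the helper
names are our assumptions (in this work the spectrum comes from Pythia8,
Fig. 4, and $\eta_{\rm def}$ from Fig. 5).
```python
import numpy as np

# Hypothetical tabulated inputs on a common energy grid.
eps = np.linspace(0.5, 6.5, 200)       # Lambda_c+ energy grid, TeV
m_lc, c_tau = 2.286e-3, 6.0e-5         # mass in TeV and c*tau in m (approx.)
gamma = eps / m_lc                     # Lorentz factor

def integrate(y, x):
    # simple trapezoidal rule
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def figure_of_merit(dn_de, eta_def, l_crys, r):
    """Numerator (or denominator) of Eq. (17) for one crystal configuration."""
    theta = l_crys / r                 # full bending angle
    weight = dn_de * eta_def * gamma**2 * np.exp(-l_crys / (c_tau * gamma))
    return theta**2 * integrate(weight, eps)

# eta_rel = figure_of_merit(spectrum, eff, 0.08, 15.0) \
#         / figure_of_merit(spectrum, eff0, 0.08, 22.0)
```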
In Fig. 6 the upper plot shows $\eta_{\rm{rel}}$ for silicon and
germanium crystals at room temperature, and for germanium cooled down to
80 K, as a function of the crystal length $L_{\rm crys}$, calculated
for the optimal curvature radius $R$ (shown in the bottom plot).
Figure 6: Relative efficiency of the MDM precession measurement $\eta_{\rm rel}$
with respect to the efficiency of the default crystal as a function of crystal
length $L_{\rm crys}$ (upper plot). Optimal curvature radius $R$ as a function
of crystal length $L_{\rm crys}$ (bottom plot).
The positions of the maxima of the curves in Fig. 6 (upper plot) correspond to
the optimal crystal lengths, while the bottom plot shows the corresponding
optimal curvature radius $R$.
Note that $\eta_{\rm rel}$ depends only on the target and crystal properties as
well as on the baryon energy distribution and decay time. Thus, the optimal
crystal parameters can be found by maximizing this quantity for all decay channels
at once. The applicability limit of this approach is that the detector
efficiency $\eta_{\rm det}$ should not depend strongly on the
$\Lambda_{c}^{+}$ baryon energy. Otherwise, the decay parameters
$\alpha$ and $\Gamma_{j}$ and the detection efficiency $\eta_{\rm det}$ should
be integrated over the energy together with the terms in Eq. (17).
In Table 3 we give the results for the relative efficiency of the MDM
precession measurement $\eta_{\rm rel}$ for three values of $L_{\rm crys}$,
both for silicon and germanium crystals.
Table 3: Optimal crystal parameters

Crystal | $L_{\rm crys}$ | $R$ | $N_{\rm tar+crys}$ | $\eta_{\rm rel}$
---|---|---|---|---
Si @ 293 K | 4 cm | 18 m | $3.2\times 10^{-8}$ | 0.5
Si @ 293 K | 8 cm | 22 m | $1.6\times 10^{-8}$ | 1.0
Si @ 293 K | 12 cm | 25 m | $0.9\times 10^{-8}$ | 1.2
Ge @ 293 K | 4 cm | 12 m | $4.0\times 10^{-8}$ | 1.5
Ge @ 293 K | 8 cm | 15 m | $1.9\times 10^{-8}$ | 2.5
Ge @ 293 K | 12 cm | 18 m | $1.1\times 10^{-8}$ | 2.8
Ge @ 80 K | 4 cm | 10 m | $4.8\times 10^{-8}$ | 2.5
Ge @ 80 K | 8 cm | 13 m | $2.5\times 10^{-8}$ | 4.4
Ge @ 80 K | 12 cm | 16 m | $1.5\times 10^{-8}$ | 4.8
In the table we also give the number of deflected
$\Lambda_{c}^{+}$ per incident proton, $N_{\rm tar+crys}$, which is
obtained by inserting $\eta_{\rm def}$, $\partial N_{\rm
tar}/\partial\varepsilon$ and the decay factor into Eq. (13). Note that there is
no direct relation between $N_{\rm tar+crys}$ and $\eta_{\rm rel}$, as
$\eta_{\rm rel}$ is also proportional to the square of the deflection angle,
$\Theta^{2}$, and to the square of the Lorentz factor, $\gamma^{2}$, of the
$\Lambda_{c}^{+}$. It is important to note that $N_{\rm tar+crys}$ is typically
of the order of $10^{-8}$.
For the sensitivity analysis we choose a silicon crystal at room temperature
with $L_{\rm crys}=8$ cm and $R=22$ m.
As follows from Table 3, the use of a germanium crystal at room temperature
increases the efficiency by a factor of 2.5 (for a germanium crystal cooled down
to 80 K this factor is 4.4).
### III.5 Detector efficiency: $\eta_{\rm det}$
Many decay channels of the $\Lambda_{c}^{+}$ could be used:
$\Lambda(p\pi^{-})\,\pi^{+}$, $\Lambda\ell^{+}\nu_{\ell}$,
$p\,\overline{K}^{*0}(890)$, or $\Delta^{++}(1232)\,K^{-}$. For the first two
decay modes the weak-decay parameters $\alpha$ have been measured with
reasonable accuracy, while for the other decay modes only preliminary
measurements of the branching fractions and estimates of the weak-decay
parameter values are available. A specific analysis should be performed to
evaluate the detector efficiency for each of these channels. For the sensitivity
studies we have decided to select two of these decay modes:
$\Lambda(p\pi^{-})\,\pi^{+}$ and $\Delta^{++}(1232)\,K^{-}$.
For a preliminary evaluation of the detector efficiency we take LHCb as a
reference detector, considering typical trigger, acceptance, tracking and
vertex reconstruction efficiencies. In particular, due to the very energetic
spectrum, the reconstruction of the $\Lambda$ baryon is rather complicated: the
$\Lambda$ present in the final state can be very difficult to detect, since
most of them decay only after passing through the detector tracking
volume. The efficiency of the $\Lambda(p\pi^{-})\,\pi^{+}$ decay channel has
been evaluated to be in the range
$\eta_{\rm{det}}(\Lambda(p\pi^{-})\,\pi^{+})=(1\text{--}3)\times 10^{-3}$. On
the other hand, the decay mode $\Delta^{++}(1232)\,K^{-}$ seems more
promising, and a preliminary evaluation of the efficiency gives
$\eta_{\rm{det}}(\Delta^{++}(1232)\,K^{-})=(2\text{--}4)\,\%$. The other
channels could also be used, and a more precise evaluation of the detector
efficiency should be the object of dedicated studies.
### III.6 Results of the sensitivity studies
The results of the sensitivity studies have been obtained by generating the
$\Lambda_{c}^{+}$ baryons using Pythia8 and an ad hoc parametric Monte Carlo to
take into account the correlation between the kinematic effects and the
efficiency of the channeling processes. As an example, the number of
reconstructed $\Lambda_{c}^{+}$ as a function of their energy after 40 days of
data taking with a proton flux $\Phi=5\times 10^{8}$ s$^{-1}$ is shown in Fig. 7.
The red histogram shows the deflected fraction of $\Lambda_{c}^{+}$ produced
by the 7 TeV proton beam in the tungsten target of thickness $L_{\rm{tar}}=1$
cm and channeled through the silicon crystal at room temperature of length
$L_{\rm crys}=8$ cm and radius of curvature $R=22$ m. The total number of
reconstructed $\Lambda_{c}^{+}$ in this case is expected to be about 6000.
Figure 7: The spectrum of reconstructed $\Lambda_{c}^{+}$ after 40 days of
data taking with a proton flux $\Phi=5\times 10^{8}$ s$^{-1}$. The dotted blue
curve shows the spectrum of $\Lambda_{c}^{+}$ right after the 1 cm thick tungsten
target-converter. The red histogram shows the spectrum of channeled
$\Lambda_{c}^{+}$ after the same target and a silicon crystal at room
temperature with $L_{\rm crys}=8$ cm and $R=22$ m.
The initial polarization of the $\Lambda_{c}^{+}$ is assumed to be known with
high precision from the large sample of non-channeled $\Lambda_{c}^{+}$;
the polarization along the three spatial coordinates is evaluated using the
angular analysis described by Eq. (10). An example of the spin rotation is
given in Fig. 8.
The initial polarization lies in the transverse plane, specifically along
the direction of the $Ox$ axis (see Fig. 1). After the $\Lambda_{c}^{+}$ have
passed through the crystal, the polarization also acquires a longitudinal
component (along the $Oz$ axis). The value of the $g$-factor is obtained from
Eq. (8) using the variation of the polarization components and the values of
the boost and bending angle.
Figure 8: Angular distribution of the polarized $\Lambda_{c}^{+}$ decay
products as a function of $\cos\theta_{x}$, $\cos\theta_{y}$, $\cos\theta_{z}$
(see Eq. (9)). The distributions on the top are for an initial polarization
$\xi_{y}$=$\xi_{z}$=0 and $\xi_{x}$=$-$0.40. The same distributions obtained
for the $\Lambda_{c}^{+}$ after having passed through the crystal are shown at
the bottom.
The polarization rotation angle for $g=1.9$ and the parameters used in this
simulation is of the order of $\Theta_{\mu}\sim 0.2$ rad.
In Fig. 9 we show, in the plane $\Phi\times\eta_{\rm det}$, the number of days
of data taking needed to reach a precision on the $g$-factor of $\pm$ 0.1 for
the two decay modes we have considered. The bands correspond to different
choices of the absolute $\Lambda_{c}^{+}$ polarization, the $\alpha$ parameters
and the $\Lambda_{c}^{+}$ cross section, according to the values and accuracies
given in Tables 1 and 2. As can be seen, the bands are quite large and,
depending on the values of the several parameters entering this evaluation, the
difference in terms of data taking time can be very significant. It is
important to emphasize that the width of these bands mainly comes from two
factors: the value and the uncertainty of the $\alpha$ parameters and of the
$\Lambda_{c}^{+}$ polarization. Thus, it is extremely important to measure
these parameters more accurately using, for instance, the existing LHCb data.
In Fig. 9 the results are shown for a silicon crystal at room temperature. The
horizontal lines in the two plots correspond to a proton flux of
$\Phi=5\times 10^{8}$ s$^{-1}$ and a detector efficiency in the range
$(1\text{--}3)\times 10^{-3}$ for the $\Lambda(p\pi^{-})\,\pi^{+}$ decay mode
and $(2\text{--}4)\,\%$ for the $\Delta^{++}(1232)\,K^{-}$ decay mode.
Figure 9: Flux times detection efficiency $\Phi\times\eta_{\rm{det}}$ as a
function of data taking time for two $\Lambda_{c}^{+}$ decay modes to obtain
an absolute error on the gyromagnetic factor $g$ of $\pm$ 0.1. Considering a
proton flux of $5\times 10^{8}$ s$^{-1}$, the areas between the horizontal lines,
$(0.5\text{--}2)\times 10^{6}$ and $(1\text{--}2)\times 10^{7}$, correspond to
$\eta_{\rm det}=(1\text{--}3)\times 10^{-3}$ (typical for the
$\Lambda_{c}^{+}\to\Lambda\pi^{+}$ decay mode) and $(2\text{--}4)\times 10^{-2}$
(typical for the $\Lambda_{c}^{+}\to\Delta^{++}K^{-}$ decay mode),
respectively.
The most promising channel is $\Lambda_{c}^{+}\to\Delta^{++}(1232)\,K^{-}$.
Using this mode, a precision on the $g$-factor of $\pm\,$0.1 can be obtained
within a data taking time ranging from a few days to 60 days.
In Fig. 10 we show the evolution of the error on the $g$-factor using the
$\Delta^{++}(1232)\,K^{-}$ decay mode once the detector efficiency has been
fixed to the value $\eta_{\rm det}=2\times 10^{-3}$. The data taking time
needed to reach a given precision spans a rather large interval due to
the uncertainties on the polarization, the $\alpha$ parameters and the
$\Lambda_{c}^{+}$ cross section.
As explained in Section III.4 and shown in Table 3, the data taking time can
be reduced by a factor of about 2.5--4.8 if a germanium crystal is
used.
Figure 10: Error of the gyromagnetic factor $g$ as a function of data taking
time $t$ for the $\Delta^{++}(1232)\,K^{-}$ decay mode.
## IV Possible experimental setup for performing this experiment
In the last decade the UA9 Collaboration has developed the technology and more
recently used it to demonstrate that bent silicon crystals can efficiently
steer the diffusive halo surrounding the circulating beam in the LHC, up to
6.5 TeV energy Scandale:2016krl .
A scenario to deflect the halo particles in the vicinity of an interaction
region of the LHC is currently under study. The deflected particles should be
kept in the vacuum pipe and will follow trajectories well distinct from those of
the circulating beam core. By inserting a target in the pipe, the deflected halo
can be efficiently used for fixed-target physics. An additional absorber
should intercept the halo particles not interacting with the target, thereby
allowing the possibility of fixed-target operation in parasitic mode. In
particular, by directing the deflected halo into another bent crystal tightly
packed with a short and dense target, located in the LHC pipe just before an
existing detector, short-lived baryons would be produced and their polarization
could be measured from the analysis of the decay products. As an example, a
preliminary optical layout compatible with the existing installations in IR8
is presented in talkScandale ; talkStocchi, and it is suggested to use the
interaction zone close to the LHCb detector. The LHCb detector would be
particularly well suited to perform this experiment, and preliminary
discussions are ongoing.
In addition, an Expression of Interest Burmistrov:2016 was presented in
October 2016 to the SPSC, proposing preliminary studies of the double
crystal setup at the SPS. In March 2017 this proposal was accepted by the SPSC
for the next two years, and the experiment will be performed in 2017 and 2018.
## V Conclusions
In this paper we have revisited the possibility of measuring the
magnetic dipole moment of charm baryons, and in particular of the
$\Lambda_{c}^{+}$. As shown, the experimental setup would consist in using the
primary protons in the halo of one of the LHC beams, deflecting them by a bent
crystal into the target-crystal pack, just upstream of one of the existing
LHC detectors. This experiment is extremely challenging, but the recent
success of the crystal-collimation tests of the UA9 Collaboration Scandale:2016krl
may provide the necessary technical know-how for such a complex task. The
sensitivity studies presented in this paper show that a precision of $\pm$ 0.1
on the $g$-factor could be reached within a data taking time from a few days to
about one month. The uncertainty on the needed data taking time could be
significantly reduced by measuring more precisely the $\alpha$ parameters and
the absolute value of the $\Lambda_{c}^{+}$ polarization.
## Acknowledgments
This research was partially conducted in the scope of the IDEATE International
Associated Laboratory (LIA). The research of S.P.F., I.V.K. and A.Yu.K. was
partially supported by the Ministry of Education and Science of Ukraine
(projects no. 0117U004866 and 0115U000473).
## Appendix A Aspects of formalism of the polarization precession
The 4-vector of the polarization $a=(0,\,\vec{\xi})$ of the
spin-$\tfrac{1}{2}$ particle is defined in its rest frame in which the
particle 4-momentum is $p=(m,\,0)$. In this frame the axial vector $\vec{\xi}$
is an average of the particle spin,
$\vec{\xi}=\tfrac{2}{\hbar}\langle\,\vec{S}\,\rangle$ Beresteckii:1982 .
After transforming to a frame in which the particle 4-momentum is
$p=(\varepsilon,\,\vec{p})$, it takes the form
$a=(a^{0},\,\vec{a})=(a^{0},\,\vec{a}_{\perp},\,a_{\parallel})=(\gamma
v\xi_{\parallel},\,\vec{\xi}_{\perp},\,\gamma\xi_{\parallel}),$ (18)
where $\vec{v}=\vec{p}/\varepsilon\,$ is the particle velocity,
$\gamma=\varepsilon/m\,$ is the Lorentz factor, and the perpendicular and
parallel components of the 3-vectors are defined with respect to the direction
of motion. Evidently, $a\cdot p=0$ in any frame.
The polarization vector has a clear physical meaning in the rest frame of
the particle; therefore the precession of the vector $\vec{\xi}$ is usually
considered. In the instantaneous rest frame the polarization vector obeys the
classical equation Beresteckii:1982
$\frac{d\vec{\xi}}{d\tau}=-\frac{eg}{2m}\vec{H}^{\star}\times\vec{\xi},$ (19)
where $\vec{H}^{\star}$ is the magnetic field in this frame and $\tau$ is the
proper time (the velocity of light is set to unity). In Eq. (19) the term with
a possible electric dipole moment of the particle is not included (see, for
example, Refs. Bargmann:1959 ; Botella:2016 in which such a contribution is
discussed).
One way to extend Eq. (19) to the laboratory frame is to transform the
magnetic field and the time to the laboratory frame, and include the Thomas
correction Thomas:1926 ; Thomas:1927 . Another commonly used way is based on
the explicitly covariant approach Bargmann:1959 which is analyzed in detail
in Refs. Beresteckii:1982 ; Jackson:1999 . The corresponding equations can be
written as
$\displaystyle\frac{d\vec{\xi}}{dt}=\vec{\omega}\times\vec{\xi},$ (20)
$\displaystyle\vec{\omega}=\vec{\omega}_{\vec{H}}+\vec{\omega}_{\vec{E}},$
$\displaystyle\vec{\omega}_{\vec{H}}=-\frac{e}{m}\left[\left(\frac{g}{2}-1+\frac{1}{\gamma}\right)\,\vec{H}-\left(\frac{g}{2}-1\right)\,\frac{\gamma}{1+\gamma}\,\vec{v}\,(\vec{H}\,\vec{v})\right],$
$\displaystyle\vec{\omega}_{\vec{E}}=-\frac{e}{m}\left(\frac{g}{2}-\frac{\gamma}{1+\gamma}\right)\,\vec{E}\times\vec{v},$
where the electric, $\vec{E}$, and magnetic, $\vec{H}$, fields are defined in
the laboratory frame and $\vec{\omega}$ is the angular velocity of the
polarization precession.
For the purpose of the present paper it is sufficient to keep only the
electric field and choose $\vec{E}\,\vec{v}=0$ at any moment of time, since
the effective electric field in the bent crystal is orthogonal to the particle
momentum. In this case the equations of motion imply that
$\frac{d\vec{v}}{dt}=\frac{e}{m\gamma}\,\vec{E},\qquad\quad\frac{dv}{dt}=0.$
(21)
Choosing the vector $\vec{E}$ in the $(xz)$ plane, one sees that the particle
rotates around the $Oy$ axis with constant speed (neglecting the motion
along the $Oy$ axis). From (21) one obtains the corresponding angular velocity
and the rotation radius
and the rotation radius
$\omega_{0}=\frac{eE}{m\gamma v},\;\qquad R=\frac{v}{\omega_{0}}=\frac{m\gamma
v^{2}}{eE}.$ (22)
The polarization vector, as it is seen from Eqs. (20), also rotates around the
axis $Oy$ with the angular velocity
$\omega=\frac{evE}{m}\left(\frac{g}{2}-\frac{\gamma}{1+\gamma}\right)=\gamma\left(\frac{g}{2}-1-\frac{g}{2\gamma^{2}}+\frac{1}{\gamma}\right)\omega_{0}.$ (23)
We can integrate (23) and arrive at Eq. (8) connecting the angles of
polarization precession and velocity rotation.
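As an illustration of this relation, the following sketch evaluates the spin
rotation angle; the inputs ($g=1.9$, $\gamma\approx 875$ for a $\sim$2 TeV
$\Lambda_{c}^{+}$, the default 8 cm crystal with $R=22$ m) are assumed values
chosen to match the parameters used in Sec. III, and the result, about
$-0.16$ rad in magnitude, is consistent with the $\Theta_{\mu}\sim 0.2$ rad
quoted in Sec. III.6.
```python
def precession_angle(g: float, gamma: float, l_crys: float, r: float) -> float:
    """Spin rotation angle per Eq. (8)/(23): Theta_mu = (omega/omega_0) * Theta,
    where Theta = L_crys / R is the bending angle of the trajectory."""
    theta = l_crys / r
    ratio = gamma * (g / 2 - 1 - g / (2 * gamma**2) + 1 / gamma)
    return ratio * theta

# Assumed inputs: g = 1.9, gamma ~ 875, the default 8 cm crystal, R = 22 m.
print(precession_angle(g=1.9, gamma=875.0, l_crys=0.08, r=22.0))  # ~ -0.16 rad
```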
Note that Eq. (23) was derived earlier Lyuboshits:1980 for an arbitrary
electric field. It was also re-derived in Kim:1983 using a more elaborate
method.
## Appendix B Asymmetry parameter for decay of polarized $\Lambda_{c}^{+}$ to $\Delta(1232)^{++}K^{-}$
Formalism for the polarization effects in the decay
$\Lambda_{c}^{+}\to\Lambda\,\pi^{+}$
($\tfrac{1}{2}^{+}\to\tfrac{1}{2}^{+}+0^{-}$) is well-known Commins:1983 ,
sec. 6.5 (see also PDG:2014 , p. 1515). If $\Lambda_{c}^{+}$ is polarized and
polarization of $\Lambda$ baryon is not measured, then the angular
distribution is given by Eq. (10).
One of the important modes for measuring polarization of $\Lambda_{c}^{+}$
after passing the crystal is the decay
$\Lambda_{c}^{+}\to\Delta(1232)^{++}K^{-}$. This decay involves the transition
$\tfrac{1}{2}^{+}\to\tfrac{3}{2}^{+}+0^{-}$, and we briefly discuss below the
angular distribution and asymmetry parameter.
The amplitude for the decay $\Lambda_{c}^{+}\to\Delta(1232)^{++}K^{-}$ can be
written as (assuming that $\Delta^{++}$ is produced on-mass-shell)
${\cal M}=\bar{u}^{\mu}(p)\,T_{\mu}\,u(Q)\,\varphi_{K}^{*},$ (24)
where $Q$ ($p$) is the 4-momentum of the initial (final) baryon, $u(Q)$ is the
Dirac spinor, $u^{\mu}(p)$ is the Rarita-Schwinger vector-spinor, such that
$p_{\mu}u^{\mu}(p)=0$ and $\gamma_{\mu}u^{\mu}(p)=0$ (see, e.g.
Beresteckii:1982 , sec. 31), and $\varphi_{K}$ is wave function of the kaon.
In Eq. (24) $T_{\mu}$ is the transition operator, which has the general form
Commins:1983 (sec. 4.7): $T_{\mu}=(B-A\gamma^{5})\,Q_{\mu}$, where the constants
$B$ and $A$ generate the parity-conserving and parity-violating amplitudes,
respectively.
The amplitude squared and summed over the final baryon polarizations is
$\overline{|{\cal M}|^{2}}=\frac{1}{2}{\rm Tr}\big[(\not{p}+m_{\Delta})\,S^{\nu\mu}(p)\,T_{\mu}\,(\not{Q}+M_{\Lambda_{c}})(1+\gamma^{5}\not{a})\,\gamma^{0}T_{\nu}^{\dagger}\gamma^{0}\big],$ (25)
where $a$ is the 4-vector of $\Lambda_{c}^{+}$ polarization in Eq. (18),
$\not{a}=a^{\sigma}\gamma_{\sigma}$, and the tensor $S^{\nu\mu}(p)$ is
$S^{\nu\mu}(p)=-g^{\nu\mu}+\frac{1}{3}\gamma^{\nu}\gamma^{\mu}+\frac{2p^{\nu}p^{\mu}}{3m_{\Delta}^{2}}+\frac{p^{\mu}\gamma^{\nu}-p^{\nu}\gamma^{\mu}}{3m_{\Delta}}.$
(26)
From (25) one obtains
$\overline{|{\cal M}|^{2}}=\overline{|{\cal M}_{0}|^{2}}\,\Big(1-\alpha\,\frac{M_{\Lambda_{c}}\,a\cdot p}{[(p\cdot Q)^{2}-m_{\Delta}^{2}M_{\Lambda_{c}}^{2}]^{1/2}}\Big)=\overline{|{\cal M}_{0}|^{2}}\,\big(1+\alpha\,|\vec{\xi}|\cos\vartheta\big)$ (27)
in the rest frame of $\Lambda_{c}^{+}$, where $a=(0,\vec{\xi})$ and $a\cdot
p=-|\vec{p}||\vec{\xi}|\cos\vartheta$. The asymmetry parameter $\alpha$ reads
$\alpha=\frac{2\,{\rm
Re}(AB^{*})\,|\vec{p}|}{|A|^{2}(E-m_{\Delta})+|B|^{2}(E+m_{\Delta})},$ (28)
and the amplitude squared for the unpolarized $\Lambda_{c}^{+}$ is
$\overline{|{\cal
M}_{0}|^{2}}=\frac{4M_{\Lambda_{c}}^{3}\,\vec{p}\,^{2}}{3m_{\Delta}^{2}}\left[\,|A|^{2}(E-m_{\Delta})+|B|^{2}(E+m_{\Delta})\,\right].$
(29)
Here $E=(m_{\Delta}^{2}+\vec{p}\,^{2})^{1/2}$ is the energy of $\Delta^{++}$
in the rest frame of $\Lambda_{c}^{+}$.
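A short numerical sketch of this kinematics follows; the masses are
approximate PDG values, and the amplitudes $A$ and $B$ passed to the function
are purely illustrative assumptions (they are not predicted here).
```python
import math

M_LC, M_DELTA = 2.286, 1.232          # approximate masses, GeV
M_K = 0.494                           # K- mass, GeV

# Two-body momentum and Delta++ energy in the Lambda_c+ rest frame.
p = math.sqrt((M_LC**2 - (M_DELTA + M_K)**2)
              * (M_LC**2 - (M_DELTA - M_K)**2)) / (2 * M_LC)
E = math.sqrt(M_DELTA**2 + p**2)      # E as defined below Eq. (29)

def alpha(A: complex, B: complex) -> float:
    """Asymmetry parameter of Eq. (28) for assumed amplitudes A and B."""
    num = 2.0 * (A * B.conjugate()).real * p
    den = abs(A)**2 * (E - M_DELTA) + abs(B)**2 * (E + M_DELTA)
    return num / den

print(alpha(1.0, 1.0))                # ~0.5 for equal real amplitudes
```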
An analogous consideration applies to the decay
$\Lambda_{c}^{+}\to\Lambda(1520)\,\pi^{+}$
($\tfrac{1}{2}^{+}\to\tfrac{3}{2}^{-}+0^{-}$), with $A$ and $B$ interchanged.
Actually, Eq. (27) is general and valid for other decay modes as well, in
particular for $\Lambda_{c}^{+}\to\Lambda\,\pi^{+}$
($\tfrac{1}{2}^{+}\to\tfrac{1}{2}^{+}+0^{-}$) and $\Lambda_{c}^{+}\to
p\,\overline{K}^{*}(892)^{0}$ ($\tfrac{1}{2}^{+}\to\tfrac{1}{2}^{+}+1^{-}$).
Of course, for these decays the baryon traces differ from (25), but they are
linear in the polarization vector, and the amplitude squared $\overline{|{\cal
M}|^{2}}$ is always linear in $a\cdot p$. The asymmetry parameter in (27)
depends on the specific form of the transition operator $T_{\mu}$.
## Appendix C Details on deflection efficiency: $\eta_{\rm ang},\ \eta_{\rm acc},\ \eta_{\rm def}$
The angular acceptance factor $\eta_{\rm ang}$ is defined as the fraction of
$\Lambda_{c}^{+}$ baryons produced within a narrow interval of angles
with respect to the crystal plane ($zy$):
$\theta_{x}\in(-\theta_{\rm acc},+\theta_{\rm acc}).$ (30)
As the initial angular distribution of the baryons is very close to a normal one
with a standard deviation of $\tfrac{1}{2}\,\gamma^{-1}$, the angular acceptance
factor can be expressed as follows:
$\eta_{\rm ang}=\rm{erf}\left(\sqrt{2}\ \theta_{\rm{acc}}\,\gamma\right)\ $
(31)
where erf($x$) is the error function.
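A minimal sketch of Eq. (31) follows; the values of $\theta_{\rm acc}$ and
$\gamma$ in the example are illustrative assumptions (see Fig. 11 for realistic
acceptance angles).
```python
from math import erf, sqrt

def eta_ang(theta_acc: float, gamma: float) -> float:
    """Eq. (31): fraction of a Gaussian angular distribution with
    sigma = 1/(2*gamma) falling inside (-theta_acc, +theta_acc)."""
    return erf(sqrt(2.0) * theta_acc * gamma)

# Illustrative values: theta_acc ~ 5 microrad, gamma ~ 875; gives ~ 7e-3.
print(eta_ang(theta_acc=5e-6, gamma=875.0))
```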
The acceptance angle $\theta_{\rm acc}$ is the maximal value of the angle
between the $\Lambda_{c}^{+}$ momentum and the crystal plane, at which the
particle can be captured into the channeling regime.
Figure 11: Acceptance angle as a function of energy of channeled particle in
germanium (thick curves) and silicon crystals. Solid blue curves are for
straight crystals, dashed red and dotted green curves are for bent crystals
with radii of curvature $R=$ 7.5 m and 1.5 m, respectively.
This angle is analogous to the Lindhard angle (see Eq. (7)), but takes into
account the thermal vibrations of the lattice atoms and the crystal curvature.
The value of $\theta_{\rm{acc}}$ is determined by the effective potential well
of the planar channel of the bent crystal. The form of this potential well is
obtained by averaging the lattice atom potentials along the chosen crystal
plane (see, e.g., Lindhard ; Biryukov ; Akhiezer ). The dependence of the
acceptance angle on the particle energy for silicon and germanium crystals is
presented in Fig. 11.
As germanium has a rather low Debye temperature, cooling the crystal leads to a
significant decrease of the thermal oscillation amplitude of the atoms in the
crystal nodes. Through this effect, reducing the temperature to that of liquid
nitrogen noticeably increases the deflection efficiency. For this reason, we
also present the results for a germanium crystal cooled down to
80 K (see the upper limit of the thick curves in Fig. 5 and Fig. 11).
Actually, the fulfillment of condition (30) is not sufficient for particles to
be captured into the channeling regime. It is also necessary for the channeled
particle to have a negative energy of transverse motion with respect to the
interplanar potential $U(x)$ (see, e.g., Lindhard ; Biryukov ; Akhiezer ):
$\varepsilon_{t}(\theta_{x},x)=\frac{\ \varepsilon\ \theta_{x}^{2}\ }{2}+\
U_{\rm eff}(x)<0,$ (32)
where
$U_{\rm eff}=U(x)+\frac{\varepsilon}{R}\ x,\ \ \
(-\frac{d}{2}<x<\frac{d}{2}),$ (33)
and $x$ is the impact parameter with respect to the planar channel (see,
e.g., Biryukov ). The second summand in Eq. (33) is the centrifugal term, which
describes the distortion of the interplanar potential caused by the crystal
curvature.
As the characteristic width of the baryon angular distribution, $\gamma^{-1}$,
is at least two orders of magnitude greater than the channeling acceptance
angle $\theta_{\rm acc}$, we can consider the angular distribution of channeled
baryons over $\theta_{x}$ as uniform. It is clear that the distribution over
the impact parameter $x$ is uniform as well. Thus, the acceptance factor can be
written in the following form:
$\eta_{\rm acc}=\frac{\eta_{\rm ang}}{2\,d\,\theta_{\rm acc}}\
\int\limits_{-\theta_{\rm acc}}^{\theta_{\rm acc}}\
\int\limits_{-d/2}^{d/2}\Theta_{\rm
H}\left(-\varepsilon_{t}(\theta_{x},x)\right)\ d\theta_{x}\,dx,$ (34)
where $\Theta_{\rm H}$ is the Heaviside function.
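The double integral in Eq. (34) is straightforward to evaluate on a grid. The
sketch below is an illustration only: the harmonic form of the interplanar well
and all numerical values are our assumptions, whereas in this work the actual
averaged planar potential is used.
```python
import numpy as np

def eta_acc(eta_ang, theta_acc, d, eps, R, U, n=400):
    """Numerical evaluation of Eq. (34) on a uniform grid. U(x) is the
    interplanar potential; the term eps*x/R adds the centrifugal
    distortion of Eq. (33)."""
    theta = np.linspace(-theta_acc, theta_acc, n)
    x = np.linspace(-d / 2.0, d / 2.0, n)
    TH, X = np.meshgrid(theta, x, indexing="ij")
    eps_t = eps * TH**2 / 2.0 + U(X) + (eps / R) * X   # Eq. (32)
    # mean of the Heaviside factor over the uniform (theta_x, x) box
    return eta_ang * float((eps_t < 0.0).mean())

# Assumed harmonic well: depth U0 ~ 20 eV, interplanar distance d ~ 1.92 A;
# eps is the particle energy in eV (2 TeV here), R the bending radius in m.
U0, d = 20.0, 1.92e-10
well = lambda x: U0 * ((2.0 * x / d) ** 2 - 1.0)       # negative inside the well
print(eta_acc(7e-3, 5e-6, d, eps=2.0e12, R=22.0, U=well))
```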
The dechanneling probability $\eta_{\rm dech}$ was calculated by means of a
Monte-Carlo simulation of the particle passage through the crystal, using a
binary collision model of the incident particle's interaction with the atoms of
the crystal lattice (see, e.g., Kudrin ; Andersen ; FominThesis ). The potential
of a single atom was taken as the Molière potential of a screened Coulomb
field. The multiple scattering on the electron subsystem of the crystal lattice
was taken into account using the aggregate collisions model aggregate ;
Bazylev:1986gc . The model was verified by comparing its results with the
experimental data Forster .
## References
* (1) C. Patrignani et al. [Particle Data Group], Chin. Phys. C 40, 100001 (2016).
* (2) G. W. Bennett et al. [Muon g-2 Collaboration], Phys. Rev. D 73, 072003 (2006), [hep-ex/0602035].
* (3) F. Jegerlehner and A. Nyffeler, Phys. Rept. 477, 1 (2009).
* (4) S. Eidelman and M. Passera, Mod. Phys. Lett. A 22, 159 (2007).
* (5) J. Franklin, D. B. Lichtenberg, W. Namgung and D. Carydas, Phys. Rev. D 24, 2910 (1981).
* (6) N. Barik and M. Das, Phys. Rev. D 28, 2823 (1983).
* (7) M. J. Savage, Phys. Lett. B 326, 303 (1994).
* (8) B. Silvestre-Brac, Few Body Syst. 20, 1 (1996).
* (9) S. L. Zhu, W. Y. P. Hwang and Z. S. Yang, Phys. Rev. D 56, 7273 (1997).
* (10) T. M. Aliev, A. Ozpineci and M. Savci, Phys. Rev. D 65, 056008 (2002).
* (11) B. Juliá-Diaz and D. O. Riska, Nucl. Phys. A 739, 69 (2004).
* (12) S. Kumar, R. Dhir and R. C. Verma, J. Phys. G 31, 141 (2005).
* (13) C. Albertus, E. Hernández, J. Nieves and J. M. Verde-Velasco, Eur. Phys. J. A 32, 183 (2007); Erratum: [Eur. Phys. J. A 36, 119 (2008)].
* (14) A. Faessler, T. Gutsche, M. A. Ivanov, J.G. Körner, V. E. Lyubovitskij, D. Nicmorus, and K. Pumsa-ard, Phys. Rev. D 73, 094013 (2006).
* (15) M. Karliner and H. J. Lipkin, Phys. Lett. B 660, 539 (2008), [hep-ph/0611306].
* (16) B. Patel, A. K. Rai and P. C. Vinodkumar, Pramana 70, 797 (2008).
* (17) A. Majethiya, B. Patel and P. C. Vinodkumar, Eur. Phys. J. A 38, 307 (2008).
* (18) T. M. Aliev, K. Azizi and A. Ozpineci, Phys. Rev. D 77, 114006 (2008).
* (19) T. M. Aliev, K. Azizi and A. Ozpineci, Nucl. Phys. B 808, 137 (2009).
* (20) N. Sharma, H. Dahiya, P. K. Chatley, and M. Gupta, Phys. Rev. D 81, 073001 (2010).
* (21) A. Bernotas, V. Šimonis, Lith. J. Phys. 53, 84 (2013).
* (22) D. H. Perkins, Introduction to High-Energy Physics, Cambridge University Press, 4th ed. (2000).
* (23) L. Burmistrov et al., Tech. Rep. CERN-SPSC-2016-030. SPSC-EOI-012, CERN, Geneva, June 2016.
* (24) V. G. Baryshevsky, Phys. Lett. B 757, 426 (2016).
* (25) F. J. Botella, L. M. Garcia Martin, D. Marangotto, F. M. Vidal, A. Merli, N. Neri, A. Oyanguren and J. R. Vidal, Eur. Phys. J. C 77, 181 (2017), [hep-ex/1612.06769].
* (26) L. H. Thomas, Nature 117, 514 (1926).
* (27) L. H. Thomas, Phil. Mag. Ser. 7 3, 1 (1927).
* (28) V. Bargmann, L. Michel, V. L. Telegdi, Phys. Rev. Lett. 2, 435 (1959).
* (29) R. Hagedorn, Relativistic kinematics, Benjamin, New York, (1963).
* (30) V. B. Beresteckii, E. M. Lifshitz, L. P. Pitaevskii, Quantum electrodynamics, Pergamon Press, (1982).
* (31) J. D. Jackson, Classical electrodynamics, sec. 11.11, John Wiley, 3rd ed., (1999).
* (32) V. G. Baryshevsky, Sov. Tech. Phys. Lett. 5, 73 (1979).
* (33) V. L. Lyuboshits, Sov. J. Nucl. Phys. 31, 509 (1980); [Yad. Fiz. 31, 986 (1980)].
* (34) I. J. Kim, Nucl. Phys. B 229, 251 (1983).
* (35) V. M. Biryukov, V. I. Kotov and Y. A. Chesnokov, Phys. Usp. 37, 937 (1994); [Usp. Fiz. Nauk 164, 1017 (1994)].
* (36) A. I. Akhiezer, N. F. Shulga, V. I. Truten, A. A. Grinenko and V. V. Syshchenko, Phys. Usp. 38, 1119 (1995); [Usp. Fiz. Nauk 165, 1165 (1995)].
* (37) A. A. Grinenko, N. F. Shul’ga, JETP Lett. 54, 524 (1991).
* (38) A. A. Greenenko, N.F. Shulga, Nucl. Instr. Meth. B 67, 212 (1992).
* (39) E. N. Tsyganov, Preprint Fermilab TM-682, TM-684 (1976).
* (40) S. P. Fomin et al., Nucl. Instr. Meth. in Phys. Res. B 129, 29 (1997).
* (41) D. Chen, I. F. Albuquerque, V. V. Baublis et al., Phys. Rev. Lett. 69, 3286 (1992).
* (42) J. Lindhard, Danske Vid. Selsk. Mat. Fys. Medd. 34, 14 (1965).
* (43) J. G. Korner and G. Kramer, Z. Phys. C 2, 117 (1979).
* (44) E. M. Aitala et al. [E791 Collaboration], Phys. Lett. B 471, 449 (2000), [hep-ex/9912003].
* (45) W. G. D. Dharmaratna and G. R. Goldstein, Phys. Rev. D 53, 1073 (1996); see also G. R. Goldstein, [hep-ph/9907573].
* (46) G. R. Goldstein (2000), [hep-ph/0001187].
* (47) T. Sjöstrand et al., Comput. Phys. Commun. 178, 852 (2008), [arXiv:0710.3820].
* (48) E. Maurice et al. [The LHCb Collaboration], CERN-LHCb-CONF-2017-001 (2017).
* (49) A. Adare et al. [PHENIX Collaboration], Phys. Rev. Lett. 97, 252002 (2006), [hep-ex/0609010].
* (50) S. S. Adler et al. (PHENIX Collaboration), Phys. Rev. Lett. 96, 032001 (2006).
* (51) S. S. Adler et al. (PHENIX Collaboration), Phys. Rev. Lett. 94, 082301 (2005).
* (52) L. Massacrier et al., Advances in High Energy Physics, 2015, 986348 (2015).
* (53) Hua-Sheng Shao, Comput. Phys. Commun. 184, 2562 (2013).
* (54) L. Gladilin, arXiv:hep-ex/9912064 (1999).
* (55) B. A. Kniehl and G. Kramer, Phys. Rev. D 71, 094013 (2005), [hep-ph/0504058].
* (56) R. Aaij et al., Nucl.Phys. B 871, 1 (2013).
* (57) I. Abt et al., Eur. Phys. J. C 52, 531 (2007).
* (58) W. Scandale et al., Phys. Lett. B 758, 129 (2016).
* (59) W. Scandale et al., talk at the Physics Beyond Collider Workshop, 6--7 September 2016, CERN, indico.cern.ch/event/523655/contributions/2284521/attachments/1332060/2002778/PBC_WalterScandale.pdf
* (60) A. Stocchi et al., talk at the Physics Beyond Collider Workshop, 6--7 September 2016, CERN, indico.cern.ch/event/523655/contributions/2223401/attachments/1332883/2004320/proposal-mu-lambdac–workshop-06-09-2016-CERN.pdf
* (61) E.D. Commins, P.H. Bucksbaum, Weak interactions of leptons and quarks, Cambridge University Press (1983).
* (62) V. V. Kudrin. Yu. A. Timoshnikov and S. A. Vorobev, Phys. Status Solidi B 58, 409 (1973).
* (63) S. K. Andersen, O. Fich, H. Nielsen et al., Nucl. Phys. B 167, 1 (1980).
* (64) A. S. Fomin, PhD Thesis (in preparation), Paris-Sud University (2017).
* (65) V. I. Glebov, V. V. Goloviznin, A. M. Kanloev, Preprint IAE-3905/1. M. (1984).
* (66) V. A. Bazylev, V. I. Glebov and V. V. Goloviznin, Sov. Phys. JETP 64, 14 (1986), [Zh. Eksp. Teor. Fiz. 91, 25 (1986)].
* (67) J. S. Forster et al., Nucl. Phys. B 318, 301 (1989).
# DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models
Chong Mou1 Xintao Wang2 Jiechong Song1 Ying Shan2 Jian Zhang†1
1School of Electronic and Computer Engineering, Shenzhen Graduate School,
Peking University 2ARC Lab, Tencent PCG
###### Abstract
Despite the ability of existing large-scale text-to-image (T2I) models to
generate high-quality images from detailed textual descriptions, they often
lack the ability to precisely edit the generated or real images. In this
paper, we propose a novel image editing method, DragonDiffusion, enabling
Drag-style manipulation on Diffusion Models. Specifically, we construct
classifier guidance based on the strong correspondence of intermediate
features in the diffusion model. It can transform the editing signals into
gradients via feature correspondence loss to modify the intermediate
representation of the diffusion model. Based on this guidance strategy, we
also build a multi-scale guidance to consider both semantic and geometric
alignment. Moreover, a cross-branch self-attention is added to maintain the
consistency between the original image and the editing result. Our method,
through an efficient design, achieves various editing modes for the generated
or real images, such as object moving, object resizing, object appearance
replacement, and content dragging. It is worth noting that all editing and
content preservation signals come from the image itself, and the model does
not require fine-tuning or additional modules. Our source code will be
available at https://github.com/MC-E/DragonDiffusion.
††footnotetext: † Corresponding author.
## 1 Introduction
Thanks to large-scale training data and huge computing power, generative
models have developed rapidly, especially large-scale text-to-image (T2I)
diffusion models [29, 27, 23, 26, 10, 43, 22, 42], which aim to generate
images conditioned on a given text/prompt. However, this generative capability
is often diverse, and it is challenging to design suitable prompts to generate
images consistent with what the user has in mind, let alone to further edit
existing images.
Compared to image generation, image editing has broader application demands.
Methods based on GANs [1, 2, 3] are widely used in the image editing domain
due to the compact and editable latent space (e.g., StyleGAN [17]). Recently,
DragGAN [24] proposes a point-to-point dragging scheme, which can achieve
refined content dragging. However, it is constrained by the capacity and
generalization of GAN models. Compared to GAN models, Diffusion [14] has
higher stability and superior generation quality. In this paper, we aim to
investigate whether the diffusion model can achieve a similar drag-style
ability. This ability should be a more generalized editing capability, not
limited to point dragging, such as object moving, object resizing, and cross-
image content dragging.
In implementation, the primary challenge lies in the lack of a concise and
modifiable latent space amenable to editing. Numerous diffusion-based image
editing methods (e.g., Prompt2Prompt [13], [12], [5]) are built based on the
correspondence between intermediate text and image features. They find that
the cross-attention map between the feature of words and object has a notable
local similarity, which can be used as an editing medium. Recently, self-
guidance [11] proposes a differentiable approach that employs cross-attention
maps to locate and calculate the size of objects within images. Then, gradient
backpropagation is utilized to edit these properties. However, the
correspondence between text and image features is weak, heavily relying on the
design of prompts. Moreover, in complex or multi-object scenarios, text
struggles to build accurate local similarity with a specific object. In this
paper, we aim to explore a more fine-grained editable space than text-image
correspondence for generalized image editing tasks.
In the large-scale T2I diffusion generation process, besides the strong
correspondence between text features and intermediate image features, there is
also a strong correspondence between intermediate image features. This
characteristic has been explored in DIFT [37], which demonstrates that this
feature correspondence is high-level, facilitating point-to-point
correspondence of relevant content in different images. Therefore, we are
intrigued by the possibility of utilizing this strong correspondence between
image features to achieve image editing. In this paper, we present our
solution. Specifically, our method involves two sets of features (i.e.,
guidance features and generation features) during the diffusion process. We
use the guidance features as the target, employing strong image feature
correspondence to constrain and edit the generation features. Additionally,
the content consistency between the edited result and the original image is
also maintained through the strong image feature correspondence. Here, we also
notice that there is a concurrent work, Drag-Diffusion [30], studying this
issue. It utilizes LoRA [28] to maintain consistency with the original image
and optimizes one intermediate step in the diffusion process to perform
editing. Unlike Drag-Diffusion, our method is based on classifier-guidance
[9], and all editing and content consistency signals come from the image
itself, without the need for fine-tuning or training the model. In addition,
we use the intermediate feature correspondence to explore generalized image
editing capabilities, such as object moving, object resizing, object
appearance replacement, and content dragging. In summary, the contributions of
this paper are as follows:
* •
We propose a classifier-guidance image editing strategy based on the strong
correspondence of intermediate features in diffusion models. In this design,
we also study the roles of the feature in different layers and develop a
multi-scale feature matching scheme that considers both semantic and geometric
correspondence.
* •
All content editing and preservation signals in our proposed method come from
the image itself. It allows for a direct translation of T2I generation ability
in diffusion models to image editing tasks without the need for any model
fine-tuning or training.
* •
Extensive experiments demonstrate that our DragonDiffusion can perform various
fine-grained image editing tasks, including object moving, object resizing,
object appearance replacement, and content dragging.
## 2 Related Work
### 2.1 Diffusion Models
In recent years, the diffusion model [14] has achieved great success in the
community of image synthesis. It is designed based on thermodynamics [32, 34],
including a diffusion process and a reverse process. In the diffusion process,
a natural image $\mathbf{x}_{0}$ is converted to a Gaussian distribution
$\mathbf{x}_{T}$ by adding random Gaussian noise over $T$ iterations. Each
step of adding noise is defined as:
$\mathbf{x}_{t}=\sqrt{1-\beta_{t}}\mathbf{x}_{t-1}+\sqrt{\beta_{t}}\bm{\epsilon}_{t-1},\
t\in[1,T],$ (1)
where $\beta_{t}\in[0,1]$ is a gradually increasing hyperparameter.
$\bm{\epsilon}_{t-1}\sim\mathcal{N}(0,\mathbf{I})$ is the random Gaussian
noise. The reverse process is to recover $\mathbf{x}_{0}$ from
$\mathbf{x}_{T}$ by several denoising steps. Therefore, the diffusion model is
training a denoiser, conditioned on the current noisy image and time step:
$L(\theta)=\mathbb{E}_{\mathbf{x}_{0},t,\epsilon\sim\mathcal{N}(0,1)}\left[||\epsilon_{t}-\epsilon_{\theta}(\mathbf{x}_{t},t)||_{2}^{2}\right],$
(2)
where $\theta$ denotes the parameters of the denoiser.
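As a minimal sketch of Eqs. (1) and (2), assuming PyTorch and a denoiser
callable $\epsilon_{\theta}(\mathbf{x}_{t},t)$ (the function names below are
ours), one forward noising step and the noise-prediction loss can be written
as follows. Practical implementations usually sample $\mathbf{x}_{t}$ directly
at step $t$ via the closed-form marginal rather than iterating Eq. (1).
```python
import torch

def diffusion_step(x_prev: torch.Tensor, beta_t: float):
    """One forward noising step of Eq. (1); returns x_t and the noise drawn."""
    eps = torch.randn_like(x_prev)
    x_t = (1.0 - beta_t) ** 0.5 * x_prev + beta_t ** 0.5 * eps
    return x_t, eps

def denoising_loss(denoiser, x_t: torch.Tensor, eps: torch.Tensor, t: int):
    """Noise-prediction objective of Eq. (2) for a denoiser eps_theta(x_t, t)."""
    return ((eps - denoiser(x_t, t)) ** 2).mean()
```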
Recently, some text-conditioned diffusion models (e.g., GLID [23] and SD [27])
have been proposed, which mostly inject text condition into the denoiser
through a cross-attention strategy.
Figure 1: Illustration of our model design. Our proposed method consists of
two branches, i.e., the guidance branch and the generation branch. The
guidance branch provides editing and consistency guidance to the generation
branch through the correspondence of intermediate features. Our
DragonDiffusion is built on Stable Diffusion [27], without model fine-tuning
or training.
### 2.2 Classifier guidance in Diffusion Model
From a continuous perspective [36], diffusion models can be viewed as a score
function, i.e., $\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t})$, that samples
from the corresponding distribution [35] according to Langevin dynamics [32,
34]. The conditional diffusion process, on the other hand, can be seen as
using a joint score function, i.e.,
$\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t},y)$, to sample from a more
enriched distribution, where $y$ is the external condition. The joint score
function can be further decomposed into:
$\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t},y)=\nabla_{\mathbf{x}_{t}}\log q(\mathbf{x}_{t})+\nabla_{\mathbf{x}_{t}}\log q(y|\mathbf{x}_{t}),$ (3)
where the first term is the original unconditional diffusion denoiser, and the
second term corresponds to the classifier guidance to be added to the
diffusion process, also known as the energy function. The energy function can
be selected based on the generation target, such as a classifier [9] to
specify the category of generation results.
Classifier guidance has been applied to numerous controllable image generation
tasks, such as sketch-guided generation [38], mask-guided generation [31],
universal guided generation [41, 6], and image editing [11]. These methods,
based on classifier guidance, inspire us to transform editing signals into
gradients through score functions, achieving fine-grained image editing.
### 2.3 Image Editing
Image editing methods traditionally targeted a translation between image
domains [15, 16, 19]. Numerous editing approaches [1, 2, 3] invert images into
a latent space of StyleGAN [17] and then edit specific content (e.g., hair and
age) by manipulating latent vectors. Recently, DragGAN [24] proposes a point-
to-point dragging scheme, which can achieve more refined content dragging.
Diffusion [14], as a more stable generative model compared to GANs, has led to
several diffusion-based image editing methods [4, 13, 18, 20, 7]. Most of them
use text as the edit signal. For example, Prompt2Prompt [13] achieves specific
object editing by replacing the correspondence between text features and
intermediate features. SDEdit [20] performs image editing by adding noise to
the original image and then denoising under new text conditions.
InstructionP2P [7] achieves image editing by fine-tuning the model and using
text as an editing instruction. Recently, Self-guidance [11] transforms
editing signals into gradients through the correspondence between text and
intermediate features to achieve image editing. However, the correspondence
between image and text is coarse-grained. How to perform fine-grained and
generalized image editing with diffusion models is still an open challenge.
## 3 Method
### 3.1 Preliminary: Stable Diffusion
In this paper, we implement our method based on the recent state-of-the-art
T2I diffusion model (i.e., Stable Diffusion (SD) [27]). SD is a latent
diffusion model (LDM), which contains an autoencoder and a UNet denoiser. The
autoencoder can convert natural images $\mathbf{x}_{0}$ into latent space
$\mathbf{z}_{0}$ and then reconstruct them. The diffusion process of SD is
conducted in the latent space. The training objective of SD is the same as
that of common diffusion models (i.e., Eq. 2), except that the denoiser
operates on the latent $\mathbf{z}_{t}$ instead of the image $\mathbf{x}_{t}$.
During inference, $\mathbf{z}_{T}$ is generated from a random Gaussian
distribution. The final result $\mathbf{z}_{0}$, as the clean latent, is fed
into the decoder of the autoencoder to generate the natural image
$\mathbf{x}_{0}$. In the conditional part, SD utilizes the pre-trained CLIP
[25] text encoder to embed text inputs as embedding sequences $\mathbf{y}$.
### 3.2 Overview
The objective of our DragonDiffusion is to achieve fine-grained image editing
of real images by SD, which involves two issues: changing the content to be
edited and preserving other content. For example, if a user wants to move the
bread in an image, the generated result only needs to change the position of
the bread, while the appearance of the bread and other image content should
not change. In this paper, inspired by DIFT [14], we utilize the strong
correspondence of intermediate features in diffusion models to address both
issues simultaneously. An overview of our design is presented in Fig. 1.
First, we invert the original image $\mathbf{x}_{0}$ to the latent
representation $\mathbf{z}_{T}$ through the reverse diffusion process [33,
21]. Then, we input $\mathbf{z}_{T}$ into two parallel branches, i.e., the
guidance branch and the generation branch. The guidance branch is the standard
diffusion generation process, which can reconstruct $\mathbf{x}_{0}$. The
generation branch needs to generate the corresponding editing result according
to the demand. To preserve the content of the original image, we utilize the
correspondence between the intermediate features of the two branches,
transferring the content information from the guidance branch to the
generation branch through a cross-branch self-attention design. Similarly,
using the strong features correspondence, we design a score function [36, 35]
that transforms the editing signal into gradients through classifier guidance
[9], modifying the intermediate representation $\mathbf{z}_{t}$ of the
generation branch. Our entire editing process only applies the correspondence
of intermediate features in diffusion models, without the need for model fine-
tuning or training.
### 3.3 Classifier-guidance-based Editing Design
In this article, inspired by classifier guidance [9], we aim to update the
intermediate representation (i.e., $\mathbf{z}_{t}$) of the diffusion process
by transforming editing signals into gradients through score functions,
thereby achieving image editing.
Figure 2: Illustration of using features from different layers as guidance to
reconstruct the original image. In this experiment, we set $\mathbf{z}_{T}$ to
random Gaussian noise, $\mathbf{m}^{gen}$ and $\mathbf{m}^{gud}$ to zero
matrices, and $\mathbf{m}^{share}$ to a ones matrix.
#### 3.3.1 Score Function
As illustrated in Eq. 3, to utilize classifier guidance, we first need to
construct a score function that matches the target. The recent work, DIFT
[37], discovers that the intermediate features of diffusion models have a
strong correspondence, which can be used for point-to-point matching between
different images. Inspired by this work, in each iteration, we use the same
denoiser to map the intermediate representations (i.e.,
$\mathbf{z}_{t}^{gen}$, $\mathbf{z}_{t}^{gud}$) of the two branches to the
feature space (i.e., $\mathbf{F}_{t}^{gen}$, $\mathbf{F}_{t}^{gud}$). The
subscripts “gen” and “gud” represent the generation branch and the guidance
branch, respectively. Note that the features here come from the decoder in the
denoiser. $\mathbf{F}_{t}^{gud}$ contains the features of the original image,
and $\mathbf{F}_{t}^{gen}$ contains the features of the edited image. Here, we
use two masks (i.e., $\mathbf{m}^{gud}$ and $\mathbf{m}^{gen}$) to represent
the positions of certain content in the original and edited images,
respectively. Based on the strong correspondence between the features, the two
regions in $\mathbf{F}_{t}^{gen}$ and $\mathbf{F}_{t}^{gud}$ need to have high
similarity. Here, we utilize the cosine distance ($cos(\cdot)\in[-1,1]$) to
measure the similarity and normalize it to $[0,1]$:
$\small\mathcal{S}(\mathbf{m}^{gen},\mathbf{m}^{gud})=\frac{cos(\mathbf{F}_{t}^{gen}[\mathbf{m}^{gen}],\
Sg(\mathbf{F}_{t}^{gud}[\mathbf{m}^{gud}]))+1}{2},$ (4)
where $Sg$ is the gradient clipping operation. The larger the value, the
higher the similarity. $[\cdot]$ represents retrieving values in non-zero
regions. When we want to constrain the content appearing in the position of
$\mathbf{m}^{gud}$ to appear in the target position $\mathbf{m}^{gen}$, our
optimization goal is to make the similarity in Eq. 4 as large as possible.
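A minimal sketch of Eq. (4) is given below, assuming $(C,H,W)$ feature maps and
boolean masks that select the same number of positions; detaching the guidance
features stands in for the $Sg$ operation.
```python
import torch
import torch.nn.functional as F

def masked_similarity(f_gen, f_gud, m_gen, m_gud):
    """Normalized cosine similarity of Eq. (4).

    f_*: (C, H, W) intermediate diffusion features; m_*: (H, W) boolean
    masks selecting the same number of positions."""
    v_gen = f_gen[:, m_gen].reshape(-1)            # (C * N,) selected features
    v_gud = f_gud[:, m_gud].reshape(-1).detach()   # Sg: no gradient here
    return (F.cosine_similarity(v_gen, v_gud, dim=0) + 1.0) / 2.0
```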
In addition to editing, we hope that other areas of the editing result remain
consistent with the original image. Given a mask $\mathbf{m}^{share}$, marking
areas with no editing, the similarity between the editing result and the
original image in these areas can also be defined using the cosine similarity
as $\mathcal{S}(\mathbf{m}^{share},\mathbf{m}^{share})$. Finally, the loss
function, combining editing and content preserving, is defined as:
$\mathcal{L}=\frac{w_{e}}{\alpha+\beta\cdot\mathcal{S}(\mathbf{m}^{gen},\mathbf{m}^{gud})}+\frac{w_{p}}{\alpha+\beta\cdot\mathcal{S}(\mathbf{m}^{share},\mathbf{m}^{share})},$ (5)
where $\alpha$ and $\beta$ are two hyper-parameters, and $w_{e}$ and $w_{p}$ are
two weights balancing the editing and consistency parts. Finally, following
Eq. (3), the joint score function can be written as:
$\nabla_{\mathbf{z}_{t}^{gen}}\log q(\mathbf{z}_{t}^{gen},\mathbf{m}^{gen},\mathbf{m}^{share})=\nabla_{\mathbf{z}_{t}^{gen}}\log q(\mathbf{z}_{t}^{gen})+\nabla_{\mathbf{z}_{t}^{gen}}\log q(\mathbf{m}^{gen},\mathbf{m}^{share}|\mathbf{z}_{t}^{gen}).$ (6)
The classifier guidance term
$\nabla_{\mathbf{z}_{t}^{gen}}\log q(\mathbf{m}^{gen},\mathbf{m}^{share}|\mathbf{z}_{t}^{gen})$
can be computed as $\eta\frac{d\mathcal{L}}{d\mathbf{z}_{t}^{gen}}$, where
$\eta$ is the learning rate.
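In practice this gradient can be obtained by automatic differentiation. The
sketch below is an illustration under the assumption that a closure `loss_fn`
builds the scalar loss of Eq. (5) from $\mathbf{z}_{t}^{gen}$, running the
denoiser internally to produce the features entering Eq. (4).
```python
import torch

def guidance_gradient(z_t: torch.Tensor, loss_fn, eta: float) -> torch.Tensor:
    """Classifier-guidance term of Eq. (6): eta * dL/dz_t via autograd."""
    z = z_t.detach().requires_grad_(True)
    (grad,) = torch.autograd.grad(loss_fn(z), z)
    return eta * grad
```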
Figure 3: Visualization of the roles that contrastive loss and inpainting loss
play in the object movement task. The contrastive loss we designed can
eliminate the multi-object phenomenon, while the inpainting loss can generate
more natural content in the missing areas.
#### 3.3.2 Multi-scale Guidance
The decoder of the UNet denoiser contains four blocks of different scales.
DIFT [37] finds that the second layer contains more semantic information,
while the third layer contains more geometric information. We also studied the
role of features from different layers in editing tasks, as shown in Fig. 2.
In the experiment, we set $\mathbf{z}_{T}$ to random Gaussian noise,
$\mathbf{m}^{gen}$ and $\mathbf{m}^{gud}$ to zero matrices, and
$\mathbf{m}^{share}$ to a ones matrix. In this way, the generation branch is
guided to reconstruct the original image from the random Gaussian
distribution. We find that the features in the first layer are too high-
level to reconstruct the original image accurately. The features in the fourth
layer have weak feature correspondence, resulting in significant differences
between the reconstructed image and the original. The features in the second
and third layers are more suitable for reconstructing the original image, and
each has its own specialty. The second layer of features contains more
semantic information and can reconstruct images that are semantically similar
to the original but with some differences in content details. The features in
the third layer tend to express low-level visual features. The reconstructed
images are closer to the original, but they cannot provide effective
supervision for high-level texture features, resulting in blurry reconstructed
images. In our design, we aim to combine these two levels (i.e., high and low)
of guidance and propose a multi-scale supervision approach based on the second
and third layers of features. The reconstructed results in Fig. 2 also
demonstrate that this combination can balance the generation of low-level and
high-level visual features. Therefore, $\mathbf{F}_{t}^{gen}$ and
$\mathbf{F}_{t}^{gud}$ contain two sets of features from layer 2 and layer 3.
#### 3.3.3 Implementation Details for Each Application
Object moving. In the task of object moving, $\mathbf{m}^{gen}$ and
$\mathbf{m}^{gud}$ locate the same object in different spatial positions.
$\mathbf{m}^{share}$ is the complement of the union of $\mathbf{m}^{gen}$ and
$\mathbf{m}^{gud}$, i.e.,
$\mathbf{m}^{share}=Cu(\mathbf{m}^{gen}\cup\mathbf{m}^{gud})$. We define the
points with a value of 1 in the binary mask as belonging to that mask. Using
only the editing and preserving losses in Eq. 5 can lead to some issues,
especially the multiple-object phenomenon. As shown in the second image of
Fig. 3, although the bread has been moved according to the editing signal,
some of the bread content is still preserved at its original position in the
generated result. Therefore, in the object moving task, we need to constrain
the generated results to avoid previous image content in the original
position. To address this, we added a contrastive loss to Eq. 5 to provide an
additional constraint:
$\mathcal{L}_{c}=w_{c}\cdot\mathcal{S}(\mathbf{m}^{inpaint},\mathbf{m}^{inpaint}),$
(7)
where $\mathbf{m}^{inpaint}=\mathbf{m}^{gud}-\mathbf{m}^{gen}$, i.e.,
$\mathbf{m}^{inpaint}=\{p\,|\,p\in\mathbf{m}^{gud}\ {\rm and}\
p\notin\mathbf{m}^{gen}\}$. $w_{c}$ is a hyper-parameter of the loss weight.
As illustrated in the third image of Fig. 3, although the contrastive loss
can address the multi-object phenomenon, it lacks guidance during the
inpainting process, resulting in somewhat disordered inpainting. Here, we
design an inpainting loss, using the content outside of the object as guidance
to constrain the features of the inpainting region. Mathematically, the loss
function is defined as:
$\mathcal{L}_{i}=\frac{w_{i}}{\alpha+\beta\cdot\mathcal{S}_{glob}},\qquad\mathcal{S}_{glob}=\frac{cos\Big(\frac{\sum\mathbf{F}_{t}^{gen}[\mathbf{m}^{inpaint}]}{\sum\mathbf{m}^{inpaint}},\ Sg\Big(\frac{\sum\mathbf{F}_{t}^{gud}[\mathbb{I}-\mathbf{m}^{gud}]}{\sum(\mathbb{I}-\mathbf{m}^{gud})}\Big)\Big)+1}{2},$ (8)
where $w_{i}$ is a hyper-parameter of the loss weight. Equipped with
$\mathcal{L}_{c}$ and $\mathcal{L}_{i}$, our method can effectively inpaint
the gap left by the object in the original image, as shown in the fourth image
of Fig. 3.
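The mask bookkeeping used in this task can be summarized in a few lines; the
sketch below assumes boolean $(H,W)$ masks and only illustrates the set
operations defining $\mathbf{m}^{share}$ and $\mathbf{m}^{inpaint}$.
```python
import torch

def moving_masks(m_gud: torch.Tensor, m_gen: torch.Tensor):
    """Mask bookkeeping for object moving, with boolean (H, W) masks:
    m_share   = complement of the union (unedited region),
    m_inpaint = in m_gud but not in m_gen (region vacated by the object,
                entering the losses of Eq. (7) and Eq. (8))."""
    m_share = ~(m_gen | m_gud)
    m_inpaint = m_gud & ~m_gen
    return m_share, m_inpaint
```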
Object resizing. In this task, we use interpolation to transform
$\mathbf{m}^{gud}$ and $\mathbf{F}^{gud}_{t}$ to the target size, and then
extract the intermediate feature $\mathbf{F}^{gud}_{t}[\mathbf{m}^{gud}]$ as
the feature of the object after resizing. To guide the generation branch to
produce a target object with the same size, we perform local resizing on
$\mathbf{m}^{gen}$. Then, we use $\mathbf{F}^{gud}_{t}[\mathbf{m}^{gud}]$ to
supervise and guide the features within this region. Local resizing refers to
interpolating the input and then restoring it to its original size with center
cropping/expansion. Finally, in this task, Eq. 4 is reformulated as:
$\mathcal{S}(\mathbf{m}^{gen},\mathbf{m}^{gud})=\frac{cos\big(\mathbf{F}_{t}^{gen}[\mathcal{C}(\mathcal{R}(\mathbf{m}^{gen}))],\ Sg(\mathcal{R}(\mathbf{F}_{t}^{gud})[\mathcal{R}(\mathbf{m}^{gud})])\big)+1}{2},$ (9)
where $\mathcal{R}$ and $\mathcal{C}$ represent the interpolation and center
cropping/expansion operations, respectively. The other constraints remain the
same as in the default setting.
Figure 4: Visualization of the object moving with and without cross-branch
self-attention.
Appearance replacement. This task aims to replace the appearance between
objects of the same category. Similar to the inpainting loss (i.e., Eq. 8) in
object moving, we use the features mean of the corresponding region to
represent the object appearance. Therefore, the guidance branch will involve
the diffusion of two guidance images, the original image and the appearance
reference image. The appearance reference image influences the generation only
through gradients computed from the appearance similarity. We use
$\mathbf{F}_{t}^{app}$ and $\mathbf{m}^{app}$ to represent the intermediate
features of the appearance reference image and the mask corresponding to the
reference object, respectively. Therefore, the appearance similarity is
defined as:
$\displaystyle\begin{split}\small&\mathcal{S}_{app}(\mathbf{m}^{gen},\mathbf{m}^{app})=\\\
&\frac{\cos(\frac{\sum\mathbf{F}_{t}^{gen}[\mathbf{m}^{gen}]}{\sum\mathbf{m}^{gen}},\ Sg(\frac{\sum\mathbf{F}_{t}^{app}[\mathbf{m}^{app}]}{\sum\mathbf{m}^{app}}))+1}{2}.\end{split}$
(10)
The other constraints remain the same as in the default setting.
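In code, Eq. 10 reuses the same mean-pooled masked cosine similarity, now applied between the generation branch and the appearance reference branch (a sketch building on `masked_cosine_sim` above):

```python
def appearance_similarity(f_gen, f_app, m_gen, m_app):
    # Match the mean feature of the edited object to the mean feature of
    # the same-category reference object (Eq. 10); gradients flow only
    # into the generation branch because of the stop-gradient.
    return masked_cosine_sim(f_gen, f_app, m_gen, m_app)
```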
Point dragging. In this task, we want to drag the image content via a specific
point in the image. In this case, $\mathbf{m}^{gen}$ and $\mathbf{m}^{gud}$
denote the destination and starting points, together with the neighboring
points within a small range around them. Unlike the previous tasks, the
$\mathbf{m}^{share}$ here is manually defined. The gradient guidance comes
directly from Eq. 5, without further task-specific designs.
### 3.4 Cross-branch Self-attention
To maintain consistency between the generated result and the original image,
we use two strategies: DDIM inversion [33] and a cross-branch self-attention
design. For DDIM inversion, we can also use the more accurate Null-text
inversion [21] to improve consistency. However, it is still challenging to
maintain high consistency between the editing result and the original image
solely through DDIM inversion. Here, inspired by the consistency preservation
in some video and image editing works [40, 39, 8], we design a cross-branch
self-attention guidance. Specifically, we replace the key and value in the
self-attention module of the denoiser in the generation branch with the
corresponding key and value from the guidance branch. Note that since the
feature correspondence in the denoiser encoder is relatively weak [14], we
only apply this operation in the denoiser decoder. The modified self-attention module
is defined as:
$\displaystyle\begin{split}\small\left\\{\begin{array}[]{ll}\mathbf{Q}=\mathbf{W}_{Q}^{gen}*\mathbf{F}^{gen};\ \mathbf{K}=\mathbf{W}_{K}^{gud}*\mathbf{F}^{gud};\ \mathbf{V}=\mathbf{W}_{V}^{gud}*\mathbf{F}^{gud}\\\
\mathrm{Attention}(\mathbf{Q},\mathbf{K},\mathbf{V})=\mathrm{softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{T}}{\sqrt{d}}\right)\mathbf{V},\end{array}\right.\end{split}$
(11)
where $\mathbf{W}_{Q}$, $\mathbf{W}_{K}$, and $\mathbf{W}_{V}$ are learnable
projection matrices. $*$ refers to the convolution operator. A comparison of
our method with and without cross-branch self-attention is shown in Fig. 4.
One can see that this design effectively narrows the gap between the
generated result and the original image.
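A minimal sketch of the cross-branch self-attention of Eq. 11, writing the projections as generic callables (the paper applies them with the convolution operator $*$; tensor shapes are assumed to be (batch, tokens, channels)):

```python
import torch

def cross_branch_attention(f_gen, f_gud, W_q, W_k, W_v):
    # Queries come from the generation branch; keys and values come from
    # the guidance branch, pulling the output toward the original image.
    q, k, v = W_q(f_gen), W_k(f_gud), W_v(f_gud)
    d = q.shape[-1]
    attn = torch.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return attn @ v
```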
Figure 5: Visualization of our object moving and resizing applications. It can
be seen that our DragonDiffusion is capable of effectively moving objects on
real images, and at the same time, the region of the original object can also
be well inpainted. During the object moving process, we can also selectively
enlarge or shrink the object.
Figure 6: Visualization of object appearance replacement. Our method can
extract the appearance features of objects within the same category from a
reference image, and subsequently replace the appearance of objects in the
edited image accordingly.
Figure 7: Visualization of content dragging. Our method allows dragging image
content using one or multiple points. The results of continuous dragging
demonstrate the promising editing capabilities and stability of our
DragonDiffusion.
## 4 Experiments
Our DragonDiffusion can perform various image editing tasks, including
object moving, object resizing, object appearance replacement, and
content dragging. In Fig. 5, we demonstrate the application of object moving
and resizing. As can be seen, our method can naturally move objects within the
image, and the edited objects can blend well with the other content in the
original image. In Fig. 6, we present the performance of object appearance
replacement. It is obvious that our method can replace the appearance with
that of a same-category object from a reference image while preserving the
original outline. In Fig. 7, we present the content dragging performance of
our method. As can be seen, our method can drag the content within the image
using a single point or multiple points. The dragging results follow the
editing direction consistently and reasonably, while the content remains
faithful to the original image.
## 5 Conclusion
Recent studies have shown that intermediate features in diffusion models
exhibit strong correspondence relationships. Compared to the correspondence
between text and image features, the correspondence between image and image
features is more stable and fine-grained. In this paper, we aim to develop a
fine-grained image editing scheme based on the strong correspondence of
intermediate features in diffusion models. To this end, we design a
classifier-guidance-based method to transform the editing signals into
gradients via feature correspondence loss to modify the intermediate
representation of the diffusion model. The feature correspondence loss is
designed with multiple scales to consider both semantic and geometric
alignment. Moreover, a cross-branch self-attention is added to maintain the
consistency between the original image and the editing result. Extensive
experiments demonstrate that our proposed DragonDiffusion can perform various
image editing applications on generated or real images, including object
moving, object resizing, object appearance replacement, and content dragging.
At the same time, our DragonDiffusion does not require model fine-tuning or
additional modules.
## References
* [1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan: How to embed images into the stylegan latent space? In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 4432–4441, 2019.
* [2] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2stylegan++: How to edit the embedded images? In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 8296–8305, 2020.
* [3] Yuval Alaluf, Omer Tov, Ron Mokady, Rinon Gal, and Amit Bermano. Hyperstyle: Stylegan inversion with hypernetworks for real image editing. In Proceedings of the IEEE/CVF conference on computer Vision and pattern recognition, pages 18511–18521, 2022.
* [4] Omri Avrahami, Dani Lischinski, and Ohad Fried. Blended diffusion for text-driven editing of natural images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18208–18218, 2022.
* [5] Yogesh Balaji, Seungjun Nah, Xun Huang, Arash Vahdat, Jiaming Song, Karsten Kreis, Miika Aittala, Timo Aila, Samuli Laine, Bryan Catanzaro, et al. ediffi: Text-to-image diffusion models with an ensemble of expert denoisers. arXiv preprint arXiv:2211.01324, 2022.
* [6] Arpit Bansal, Hong-Min Chu, Avi Schwarzschild, Soumyadip Sengupta, Micah Goldblum, Jonas Geiping, and Tom Goldstein. Universal guidance for diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 843–852, 2023.
* [7] Tim Brooks, Aleksander Holynski, and Alexei A Efros. Instructpix2pix: Learning to follow image editing instructions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 18392–18402, 2023.
* [8] Mingdeng Cao, Xintao Wang, Zhongang Qi, Ying Shan, Xiaohu Qie, and Yinqiang Zheng. Masactrl: Tuning-free mutual self-attention control for consistent image synthesis and editing. arXiv preprint arXiv:2304.08465, 2023.
* [9] Prafulla Dhariwal and Alexander Nichol. Diffusion models beat gans on image synthesis. Advances in neural information processing systems, 34:8780–8794, 2021.
* [10] Ming Ding, Zhuoyi Yang, Wenyi Hong, Wendi Zheng, Chang Zhou, Da Yin, Junyang Lin, Xu Zou, Zhou Shao, Hongxia Yang, et al. Cogview: Mastering text-to-image generation via transformers. Advances in Neural Information Processing Systems, 34:19822–19835, 2021.
* [11] Dave Epstein, Allan Jabri, Ben Poole, Alexei A Efros, and Aleksander Holynski. Diffusion self-guidance for controllable image generation. arXiv preprint arXiv:2306.00986, 2023.
* [12] Weixi Feng, Xuehai He, Tsu-Jui Fu, Varun Jampani, Arjun Akula, Pradyumna Narayana, Sugato Basu, Xin Eric Wang, and William Yang Wang. Training-free structured diffusion guidance for compositional text-to-image synthesis. arXiv preprint arXiv:2212.05032, 2022.
* [13] Amir Hertz, Ron Mokady, Jay Tenenbaum, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Prompt-to-prompt image editing with cross attention control. arXiv preprint arXiv:2208.01626, 2022.
* [14] Jonathan Ho, Ajay Jain, and Pieter Abbeel. Denoising diffusion probabilistic models. Advances in Neural Information Processing Systems, 33:6840–6851, 2020.
* [15] Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal unsupervised image-to-image translation. In Proceedings of the European conference on computer vision (ECCV), pages 172–189, 2018.
* [16] Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A Efros. Image-to-image translation with conditional adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1125–1134, 2017.
* [17] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 4401–4410, 2019.
* [18] Bahjat Kawar, Shiran Zada, Oran Lang, Omer Tov, Huiwen Chang, Tali Dekel, Inbar Mosseri, and Michal Irani. Imagic: Text-based real image editing with diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6007–6017, 2023.
* [19] Ming-Yu Liu, Xun Huang, Arun Mallya, Tero Karras, Timo Aila, Jaakko Lehtinen, and Jan Kautz. Few-shot unsupervised image-to-image translation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 10551–10560, 2019.
* [20] Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, and Stefano Ermon. Sdedit: Guided image synthesis and editing with stochastic differential equations. arXiv preprint arXiv:2108.01073, 2021.
* [21] Ron Mokady, Amir Hertz, Kfir Aberman, Yael Pritch, and Daniel Cohen-Or. Null-text inversion for editing real images using guided diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6038–6047, 2023.
* [22] Chong Mou, Xintao Wang, Liangbin Xie, Jian Zhang, Zhongang Qi, Ying Shan, and Xiaohu Qie. T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models. arXiv preprint arXiv:2302.08453, 2023.
* [23] Alexander Quinn Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin, Bob Mcgrew, Ilya Sutskever, and Mark Chen. Glide: Towards photorealistic image generation and editing with text-guided diffusion models. In International Conference on Machine Learning, pages 16784–16804. PMLR, 2022.
* [24] Xingang Pan, Ayush Tewari, Thomas Leimkühler, Lingjie Liu, Abhimitra Meka, and Christian Theobalt. Drag your gan: Interactive point-based manipulation on the generative image manifold. arXiv preprint arXiv:2305.10973, 2023.
* [25] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763, 2021.
* [26] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. Zero-shot text-to-image generation. In International Conference on Machine Learning, pages 8821–8831. PMLR, 2021.
* [27] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10684–10695, 2022.
* [28] Simo Ryu. Low-rank adaptation for fast text-to-image diffusion fine-tuning, 2023.
* [29] Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S Sara Mahdavi, Rapha Gontijo Lopes, et al. Photorealistic text-to-image diffusion models with deep language understanding. arXiv preprint arXiv:2205.11487, 2022.
* [30] Yujun Shi, Chuhui Xue, Jiachun Pan, Wenqing Zhang, Vincent YF Tan, and Song Bai. Dragdiffusion: Harnessing diffusion models for interactive point-based image editing. arXiv preprint arXiv:2306.14435, 2023.
* [31] Jaskirat Singh, Stephen Gould, and Liang Zheng. High-fidelity guided image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5997–6006, 2023.
* [32] Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli. Deep unsupervised learning using nonequilibrium thermodynamics. In International Conference on Machine Learning, pages 2256–2265. PMLR, 2015.
* [33] Jiaming Song, Chenlin Meng, and Stefano Ermon. Denoising diffusion implicit models. arXiv preprint arXiv:2010.02502, 2020.
* [34] Yang Song and Stefano Ermon. Generative modeling by estimating gradients of the data distribution. Advances in neural information processing systems, 32, 2019.
* [35] Yang Song and Stefano Ermon. Improved techniques for training score-based generative models. Advances in neural information processing systems, 33:12438–12448, 2020.
* [36] Yang Song, Jascha Sohl-Dickstein, Diederik P Kingma, Abhishek Kumar, Stefano Ermon, and Ben Poole. Score-based generative modeling through stochastic differential equations. arXiv preprint arXiv:2011.13456, 2020.
* [37] Luming Tang, Menglin Jia, Qianqian Wang, Cheng Perng Phoo, and Bharath Hariharan. Emergent correspondence from image diffusion. arXiv preprint arXiv:2306.03881, 2023.
* [38] Andrey Voynov, Kfir Aberman, and Daniel Cohen-Or. Sketch-guided text-to-image diffusion models. arXiv preprint arXiv:2211.13752, 2022.
* [39] Wen Wang, Kangyang Xie, Zide Liu, Hao Chen, Yue Cao, Xinlong Wang, and Chunhua Shen. Zero-shot video editing using off-the-shelf image diffusion models. arXiv preprint arXiv:2303.17599, 2023.
* [40] Jay Zhangjie Wu, Yixiao Ge, Xintao Wang, Weixian Lei, Yuchao Gu, Wynne Hsu, Ying Shan, Xiaohu Qie, and Mike Zheng Shou. Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation. arXiv preprint arXiv:2212.11565, 2022.
* [41] Jiwen Yu, Yinhuai Wang, Chen Zhao, Bernard Ghanem, and Jian Zhang. Freedom: Training-free energy-guided conditional diffusion model. arXiv preprint arXiv:2303.09833, 2023.
* [42] Lvmin Zhang and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models, 2023.
* [43] Yufan Zhou, Ruiyi Zhang, Changyou Chen, Chunyuan Li, Chris Tensmeyer, Tong Yu, Jiuxiang Gu, Jinhui Xu, and Tong Sun. Lafite: Towards language-free training for text-to-image generation. arXiv preprint arXiv:2111.13792, 2021.
|
# Mean-Field Assisted Deep Boltzmann Learning with Probabilistic Computers
Shuvro Chowdhury, Shaila Niazi, Kerem Y. Camsari
Department of Electrical and Computer Engineering
University of California Santa Barbara, Santa Barbara, CA 93106, USA
{schowdhury, sniazi, camsari} @ ucsb.edu
(September 2023)
###### Abstract
Despite their appeal as physics-inspired, energy-based, and generative models,
general Boltzmann Machines (BMs) are considered intractable to train. This
belief led to simplified models of BMs with _restricted_ intralayer
connections or layer-by-layer training of deep BMs. Recent developments in
domain-specific hardware – specifically probabilistic computers (p-computer)
with probabilistic bits (p-bit) – may change established wisdom on the
tractability of deep BMs. In this paper, we show that deep and _unrestricted_
BMs can be trained using p-computers generating hundreds of billions of Markov
Chain Monte Carlo (MCMC) samples per second, on sparse networks developed
originally for use in D-Wave’s annealers. To maximize the efficiency of
learning on the p-computer, we introduce two families of Mean-Field Theory
assisted learning algorithms, or xMFTs (x = Naive and Hierarchical). The xMFTs
are used to estimate the averages and correlations during the _positive phase_
of the contrastive divergence (CD) algorithm and our custom-designed
p-computer is used to estimate the averages and correlations in the negative
phase. A custom Field-Programmable-Gate Array (FPGA) emulation of the
p-computer architecture takes up to 45 billion flips per second, allowing the
implementation of CD-$n$ where $n$ can be of the order of millions, unlike
RBMs where $n$ is typically 1 or 2. Experiments on the full MNIST dataset with
the combined algorithm show that the positive phase can be efficiently
computed by xMFTs without much degradation when the negative phase is computed
by the p-computer. Our algorithm can be used in other scalable Ising machines
and its variants can be used to train BMs, previously thought to be
intractable.
## 1 Introduction
Since their introduction by Hinton and colleagues [1], Boltzmann Machines (BM)
have received sustained interest over the years [2]. Most recently, BMs have
found renewed interest in the representation of quantum many-body
wavefunctions as an alternative to quantum Monte Carlo algorithms (see, for
example, [3]). Meanwhile, the nearing end of Moore’s Law has been driving the
development of domain-specific computers tailored for specific applications
and algorithms. A notable class of such computers is the probabilistic
computer (p-computer) built with probabilistic bits (p-bits) (see, for example, [4,
5]). Probabilistic computers have been implemented at various sizes and in
different physical substrates. Magnetic nanodevices exploiting the ambient
thermal noise to build p-bits have been implemented in small scales (10-80
p-bits, [6, 7]). Custom-made digital _emulators_ using Field Programmable Gate
Arrays (FPGA) have been scaled much further, up to 5,000 - 10,000 p-bits [8].
Despite their small sizes in table-top experiments, nanodevice-based
p-computers are arguably the most scalable option for gaining more performance and
energy-efficiency, with projections up to million p-bit densities [9], given
the success of magnetic memory technology [10]. The high costs of pseudorandom
number generators required for each p-bit make it prohibitively expensive to
get to such million-bit densities using existing CMOS technology [11].
Figure 1: p-computing overview: (a) Analogy between interacting bodies in
nature and interacting p-bit networks we build in this work. In stochastic MTJ
(sMTJ) based implementations of p-bits, a low energy barrier magnet is used to
generate natural noise. (b) Typical output of a p-bit against time fluctuating
randomly between $+1$ and $-1$. (c) Input/output characteristic of a p-bit.
The output (blue curve) is pinned to $\pm 1$ at strong positive and negative
inputs. The average (orange) has a tanh behavior. (d) In this work, we emulate
the p-bit in a digital system (FPGA) with a pseudorandom number generator
(PRNG), a lookup table for the tanh and a comparator. (e) The digital
emulation of the synapse with MUXes is also shown. (f) A p-computer consisting
of a network of such p-bits is then realized in an FPGA.
Nonetheless, CMOS emulators of p-computers are useful in demonstrating the
architectural and algorithmic potential of physics-inspired scalable
p-computers. In this paper, we use such a custom-designed, highly efficient
FPGA-based p-computer emulator (Fig. 1d-f) that can take 45 billion MCMC
samples every second (or flips per second, fps) to train deep Boltzmann
machines. Notably, this sampling rate is about 4-5X higher than that of custom
GPU/TPU implementations running on far simpler networks with $\pm 1$
weights (see, for example, [12, 13, 14, 15]). Boltzmann Machines are trained
by the contrastive divergence algorithm, assuming a quadratic energy function
[16, 17],
$\Delta W_{ij}=\langle m_{i}m_{j}\rangle^{\mbox{data}}-\langle
m_{i}m_{j}\rangle^{\mbox{model}}\qquad\text{and}\qquad\Delta h_{i}=\langle
m_{i}\rangle^{\mbox{data}}-\langle m_{i}\rangle^{\mbox{model}}$ (1)
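For concreteness, a minimal NumPy sketch of the update in Eq. (1), assuming $\pm 1$ spin samples have been collected for both phases; the array and function names are ours, and the default learning rate mirrors the value used later in Fig. 3.

```python
import numpy as np

def cd_update(W, h, m_data, m_model, lr=0.003):
    # m_data, m_model: (num_samples, N) arrays of +/-1 spins from the
    # clamped (data) phase and the free-running (model) phase.
    corr_data = m_data.T @ m_data / len(m_data)
    corr_model = m_model.T @ m_model / len(m_model)
    W += lr * (corr_data - corr_model)             # Delta W_ij
    h += lr * (m_data.mean(0) - m_model.mean(0))   # Delta h_i
    return W, h
```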
One difficulty in training unrestricted Boltzmann machines is the need to
perform explicit MCMC sampling in the positive (data) phase. To perform the
visible-to-hidden layer inference in one shot, BMs typically restrict
connections within a layer, removing the need for MCMC sampling. Removing
these connections, however, hurts the representational ability of the network.
The need for two separate phases is also detrimental to hardware development
[18] where each input in a batch needs to be clamped followed by MCMC
sampling, _serially_. In this paper, we propose a hybrid algorithm to
circumvent the positive phase sampling of _unrestricted_ Boltzmann machines.
Our main contributions are as follows:
(1) We implement a fast FPGA-based digital MCMC sampler emulating physical
probabilistic computers that are able to take up to 45 billion Gibbs samples
per second, communicating with a classical computer in a closed-loop setup
capable of training a deep _unrestricted_ BM with 2560 nodes and 17984
parameters to learn the full MNIST data set entirely in hardware, which is
rarely performed in direct hardware implementations of BMs.
(2) We propose a hybrid mean-field theory (MFT) assisted contrastive
divergence (CD) algorithm to ease the positive phase computation of
_unrestricted_ and _deep_ BMs. Going beyond naive MFTs (NMFT), we also propose
a _hierarchical_ MFT (HMFT), improving correlation estimations at the cost of
making $\mathcal{O}(N^{2})$ more NMFT calls.
(3) We demonstrate that the hybrid algorithm we design does not result in
significant degradation compared to the MCMC method since _positive_ phase
correlations are much more easily handled by MFTs as opposed to _negative_
phase correlations.
Figure 2: Hybrid computing scheme for ML: A hybrid computing scheme with
probabilistic and classical computers is shown. Inside the classical computer,
the positive phase is performed with the help of mean-field theory derivative
algorithms. At the beginning of the negative phase, the classical computer
sends the weights and biases required by our probabilistic computer (PC), where we
perform Gibbs sampling. The probabilistic computer can generate a measured 45
billion Gibbs flips per second (FPGA). The PC returns samples to the CPU
which computes the gradient. This process is repeated until convergence.
## 2 Gibbs Sampling with p-bits and Mean Field Theories
A p-bit randomly fluctuates between two states (say, in between $+1$ and $-1$)
with a continuous-valued input. Mathematically, an interconnected network of
p-bits is represented by the following two equations:
$\displaystyle I_{i}=\sum_{j}{W_{ij}m_{j}}+h_{i}\qquad\text{and}\qquad
m_{i}=\mathrm{sgn}\left(\tanh{\left(\beta I_{i}\right)-r_{[-1,1]}}\right)$ (2)
where $m_{i}\in\\{-1,+1\\}$ and $r_{[-1,1]}$ is a uniform random number drawn
from the interval $[-1,1]$. $\\{W_{ij}\\}$ are the weights, $\\{h_{i}\\}$ are
the biases and $\beta$ is the inverse temperature. When iterated, a procedure
also known as Gibbs sampling [19], Eq. (2) approximately samples from the Boltzmann
distribution [20]:
$\displaystyle p(\\{m\\})=\frac{1}{Z}\,\exp{[-\beta
E(\\{m\\})]}\qquad\text{and}\qquad
E(\\{m\\})=-\sum_{i<j}W_{ij}m_{i}m_{j}-\sum_{i}h_{i}m_{i}$ (3)
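A serial software rendering of one Gibbs sweep over the p-bit network of Eq. (2) can be sketched as below; the FPGA performs the same update massively in parallel on the sparse graph, so this sketch is illustrative only.

```python
import numpy as np

def gibbs_sweep(m, W, h, beta=1.0, rng=None):
    # One full sweep of p-bit updates (Eq. 2) in random order.
    rng = rng or np.random.default_rng()
    for i in rng.permutation(len(m)):
        I = W[i] @ m + h[i]
        m[i] = np.sign(np.tanh(beta * I) - rng.uniform(-1.0, 1.0))
    return m
```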
The second equation in Eq. (2) is also known as the “binary stochastic neuron”
in machine learning. In the present context, the iterated evolution of these
equations represents a _dynamical system_ directly implemented in hardware. As
long as $I_{i}$ computation time is faster than $m_{i}$ computation time in
Eq. (2), the update order does not matter and can be random, supporting
asynchronous and massively parallel designs. In general-purpose computers,
where Gibbs sampling is usually performed in software, it can become
computationally expensive, especially without the help of any accelerator.
Therefore, in many fields of physics and statistics, mean-field theory (MFT)
is widely used instead: one approximates the behavior of a many-body system
by an average field rather than the individual interactions among its
components [21, 22]. This simplification of the system description to a
mere average significantly reduces the computational load. In the present
context, the relevant MFT equations to be solved self-consistently are the
following [23] where $\langle m\rangle\in(-1,1)$:
$\displaystyle\langle I_{i}\rangle=\sum_{j}{W_{ij}\langle
m_{j}\rangle}+h_{i}\qquad\text{and}\qquad\langle
m_{i}\rangle=\tanh{(\beta\langle I_{i}\rangle)}$ (4)
It is also worthwhile to note that although MFT yields a solution of a complex
system with less computational effort, the estimates from MFT are not always
accurate [23].
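A damped fixed-point iteration of Eq. (4), mirroring the update factor $\lambda$ and the relative-change tolerance used in Algorithm 1 below (a sketch; the hyper-parameter defaults are placeholders):

```python
def naive_mft(W, h, beta=1.0, lam=0.5, delta=1e-2, max_iter=10_000, rng=None):
    # Solve <m_i> = tanh(beta * (sum_j W_ij <m_j> + h_i)) self-consistently.
    rng = rng or np.random.default_rng()
    m = 0.01 * rng.uniform(-1.0, 1.0, size=len(h))
    for _ in range(max_iter):
        m_new = np.tanh(beta * (W @ m + h))
        eps = np.abs(m_new - m).sum() / max(np.abs(m_new + m).sum(), 1e-12)
        m = lam * m_new + (1.0 - lam) * m  # damped update
        if eps < delta:
            break
    return m
```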
## 3 Hierarchical Mean Field Assisted CD Algorithm
In the context of Boltzmann machine learning, the idea of replacing the
correlations in Eq. (1) with MFTs was first introduced by Petersen and Anderson
[24]. Improvements to this idea, such as the linear response correction [25, 26,
27] and its higher-order extensions [28], were also made. Welling and Hinton [29]
proposed a deterministic variant of the contrastive divergence algorithm.
Recently, a variational mean-field theory has also been proposed in [30].
Different from all these earlier approaches, in this paper we propose a
hybrid approach (Fig. 2): unlike most MFT approaches, the free-running phase
is performed with Gibbs sampling, but on a fast p-computer; for the positive
phase, in the spirit of [2], we compute the correlations with an alternative
to the vanilla MFT method, which we call
hierarchical mean-field theory (HMFT). In traditional MFT methods, the
correlations are calculated assuming independence between interacting bodies,
i.e., $\langle m_{i}m_{j}\rangle=\langle m_{i}\rangle\langle m_{j}\rangle$
[27]. In our approach, we do not use this assumption. Rather, we start from
the basic definition of correlation, i.e.,
$\displaystyle\langle m_{i}m_{j}\rangle$ $\displaystyle=\sum_{m_{i}=\pm
1,m_{j}=\pm 1}p(m_{i},m_{j})m_{i}m_{j}\qquad\text{with}\qquad
p(m_{i},m_{j})=p(m_{i}|m_{j})\,p(m_{j})$ (5)
When we compute MFT, we get an estimate for $p(m_{j})$. However, to use Eq.
(5), we also need to know or compute $p(m_{i}|m_{j})$. This can be done by
_clamping_ p-bit $j$ to $\pm 1$ and then performing another MFT estimating
$p(m_{i}|m_{j})$. After making $\Theta(2n)$ such MFT calls, we can estimate
the second-order correlations using Eq. (5). Our HMFT approach is presented in
Algorithm 1. HMFT improves correlation estimations by not baking in the
independence assumption $\langle m_{i}m_{j}\rangle=\langle m_{i}\rangle\langle
m_{j}\rangle$ as naive MFT does (see Supplementary Information 1 for a
discussion of how HMFT can capture correlations completely missed by MFT
assuming independence, in a toy example). In fact, this Bayesian trick can be
used in conjunction with other methods that approximate marginals, such as
Loopy Belief Propagation [31], or with corrections to naive MFT (for example,
see [25]), improving the HMFT method. Moreover, higher-order correlations of
the form $\langle m_{i}m_{j}m_{k}\rangle$, $\langle
m_{i}m_{j}m_{k}m_{l}\rangle$, $\ldots$ can be _hierarchically_ estimated.
These can then be used to train higher-order Boltzmann machines [32], trading
off parallelizable MFT computations with the fundamentally serial Gibbs
sampling. In the experiments below, we investigate how the positive phase of
the contrastive divergence algorithm can be performed by the naive MFT and
HMFT methods we propose.
Input : weights and biases $J,h$, update factor $\lambda$, tolerance $\delta$,
max. iteration $T_{\mathrm{max}}$
Output : estimates for averages $\langle m_{i}\rangle$ and correlations
$\langle m_{i}m_{j}\rangle$
$N\leftarrow\text{length}(h)$, $T\leftarrow 1$, $m_{\text{old}}\leftarrow 0.01\,\text{rand}(-1,1)$
for $i\leftarrow 1$ to $N+1$ do
  for $j\leftarrow -1$ to $+1$ by $2$ do
    if $i\neq 1$ then
      $m_{\text{old},i-1}\leftarrow j$ $\triangleright$ clamping to $\pm 1$ to get conditional probability
    $\epsilon\leftarrow 1000$
    while $\epsilon\geq\delta$ do
      $I\leftarrow Jm_{\text{old}}+h$
      $m_{\text{new}}\leftarrow\tanh{(I)}$, $m_{\text{new},i-1}\leftarrow j$
      $\epsilon\leftarrow\left(\sum_{i}|m_{\text{new,i}}-m_{\text{old,i}}|\right)/\left(\sum_{i}|m_{\text{new,i}}+m_{\text{old,i}}|\right)$
      $m_{\text{old}}\leftarrow\lambda m_{\text{new}}+(1-\lambda)m_{\text{old}}$
    $m_{\text{avg}}\leftarrow m_{\text{new}}$ $\triangleright$ spin averages
    $p(m_{k}=\pm 1)\leftarrow 1/\left(1+\exp{(\mp 2I_{k})}\right)$ $\triangleright$ individual probabilities
    $p(m_{k}=\pm 1|m_{i}=j)\leftarrow 1/\left(1+\exp{(\mp 2I_{k})}\right)$ $\triangleright$ conditional probabilities
$\langle m_{i}m_{j}\rangle\leftarrow$ compute correlations from Eq. (5)
Algorithm 1 The Hierarchical Mean-field Algorithm
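A Python sketch of the core of Algorithm 1, where clamping p-bit $j$ is implemented by absorbing its value into the biases of its neighbors before re-running naive MFT; `naive_mft` is the sketch above, and this is a simplified rendering rather than the exact implementation.

```python
def hmft_correlations(W, h, beta=1.0):
    # Eq. (5): <m_i m_j> = sum_s s * p(m_j = s) * <m_i | m_j = s>.
    N = len(h)
    m = naive_mft(W, h, beta)     # unclamped marginals <m_j>
    p_plus = (1.0 + m) / 2.0      # p(m_j = +1)
    corr = np.zeros((N, N))
    for j in range(N):
        cond = {}
        for s in (+1.0, -1.0):
            W_c, h_c = W.copy(), h.copy()
            h_c += W[:, j] * s    # clamp spin j by shifting neighbor biases
            W_c[:, j] = 0.0
            W_c[j, :] = 0.0
            cond[s] = naive_mft(W_c, h_c, beta)
        corr[:, j] = p_plus[j] * cond[+1.0] - (1.0 - p_plus[j]) * cond[-1.0]
        corr[j, j] = 1.0          # <m_j^2> = 1 for +/-1 spins
    return corr
```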
## 4 Experiments
We used the MNIST dataset (handwritten digits, [33]) to train sparse, deep,
and unrestricted Boltzmann networks without any downsampling, which is
typically performed in hardware implementations by D-Wave and others [12, 13,
14, 15]. We used black/white images obtained by thresholding the MNIST
dataset, and we chose a Pegasus graph [34] with up to 2560 p-bits (nodes) as
the sparse DBM network model in this paper. The graph density of this Pegasus
is 0.55% and the maximum number of neighbors is 15. The network has 834
visible p-bits (including 5 sets of labels each containing 10 p-bits) and 1726
hidden p-bits that are arranged in 2 layers as shown in the inset of FIG. 3b.
The total number of connections (network parameters) in this graph is 17984.
Using our fast Gibbs sampler (p-computer), we run the contrastive divergence
algorithm on the MNIST dataset, divided into 1200 mini-batches of 50 images
each. We used $10^{5}$ sweeps in the negative phase of each epoch.
Similarly, we train the full MNIST using our hybrid MFT algorithm with naive
MFT in the positive phase and Gibbs sampling ($10^{5}$ sweeps) in the negative
phase. To find the classification accuracy, we perform a softmax
classification over 50 label p-bits to get the 10 labels. The p-bit with the
highest probability of being ‘1’ indicates the classified digit. FIG. 3a shows
that our sparse DBM with 2560 p-bits reaches around 87% accuracy with Gibbs
sampling and 70% accuracy with the hybrid MFT technique (at an MFT tolerance
of $10^{-2}$; this accuracy may improve further with a lower tolerance) in 100
epochs, despite having significantly fewer parameters than typical RBMs. For
the full MNIST dataset, the computational expense of HMFT prevented us from
comparing it with the results of MFT at this time; in the future, this could
be made possible with more parallel resources such as GPUs. Moreover,
sophisticated techniques like layer-by-layer learning [2] should further
improve the accuracy reported in this work. Although our reported accuracy is
comparable with that of models with fewer parameters (e.g., regression
models), the real value of Boltzmann machines lies in their generative
properties, as shown recently in [18].
To evaluate the efficiency of both the MFT and HMFT methods in the positive
phase, we performed the simpler task of training MNIST/100 (100 images
randomly chosen from the MNIST dataset). Three different schemes used the same
hyper-parameters, with the positive phase of training accomplished with naive
MFT, HMFT (on CPU), or Gibbs sampling (on the p-computer). The negative phase
is performed on our probabilistic computer, where we naturally perform the
persistent CD (PCD) algorithm [35, 36]. This hybrid computing scheme is
illustrated in Fig. 2, and the details of the experimental setup can be found
in [18]. The training accuracy on the 100 images reaches 100% with Gibbs
sampling; naive MFT and HMFT also perform similarly. Supplementary Table S2
indicates that despite a large difference in the training set log-likelihood,
the test set shows roughly similar results, indicating how the hybrid approach
does not degrade the performance significantly. The performance of this
approach on larger datasets and networks remains to be seen.
It is interesting to note here that when Gibbs sampling in the negative phase
is replaced by xMFTs, both training and test set accuracies degrade severely;
in fact, training does not work at all. Supplementary Table S1 shows the poor
performance of xMFTs in the negative phase.
Figure 3: MNIST accuracy with different methods: (a) Full MNIST (60,000
images) is trained on sparse DBM (Pegasus 2560 p-bits) with Gibbs sampling
(CD-$10^{5}$) and naive MFT where batch size = 50, learning rate = 0.003,
momentum = 0.6. Around 87% accuracy is achieved in 100 epochs for Gibbs
sampling and 70% for the naive MFT. Test accuracy represents the accuracy of
all 10,000 images from the MNIST test set, while the training accuracy
corresponds to the accuracy of 10,000 images randomly drawn from the training
set. (b) Training accuracy of MNIST/100 with the three different schemes:
naive MFT, HMFT, and Gibbs sampling where they perform similarly. Here the
batch size = 10, momentum = 0.6 and learning rate varies from 0.06 to 0.006
over 1000 epochs.
## 5 Conclusions and Outlook
The end of Moore’s law is driving the development of physics-based
probabilistic computers that can accelerate MCMC algorithms by orders of
magnitude. In this paper, we showed an FPGA emulation of such a computer that
can take up to 45 billion Gibbs samples per second. Experimentally-validated
projections indicate up to $10^{15}$ samples per second are possible with
truly physical p-computers. Such a large increase in MCMC speeds may allow the
direct training of deep and _unrestricted_ BMs. As an example of this
approach, we trained a 2-layer unrestricted deep BM on a sparse Pegasus graph
used by D-Wave, showing promising results (near 90% classification accuracy
with only $\approx$18k parameters). To aid with the positive phase training of
unrestricted and deep BMs, we also proposed a hierarchical mean-field theory
assisted learning algorithm. In accordance with common wisdom, we found that
MFTs fail to estimate model correlations, especially when the weights become
large. Surprisingly, however, we observed that MFTs accurately approximate
data correlations, during the positive phase, greatly simplifying training in
hardware. With the development of new physical computers, our results may
allow the training of deep and unrestricted BMs previously thought to be
intractable where hybrid MFT approaches could be used during pretraining or as
a supplement to the computationally expensive Gibbs sampling. Extensions of
the model by fully exploiting the parallelism offered by the MFTs in deeper
networks to train harder datasets are left for future study.
## Acknowledgments and Disclosure of Funding
The authors acknowledge support from the Office of Naval Research Young
Investigator Program and National Science Foundation grants.
## References
* Ackley et al. [1985] David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for Boltzmann machines. _Cognitive Science_, 9(1):147–169, 1985.
* Salakhutdinov and Hinton [2009] Ruslan Salakhutdinov and Geoffrey Hinton. Deep boltzmann machines. In David van Dyk and Max Welling, editors, _Proceedings of the Twelth International Conference on Artificial Intelligence and Statistics_ , volume 5 of _Proceedings of Machine Learning Research_ , pages 448–455, Hilton Clearwater Beach Resort, Clearwater Beach, Florida USA, 16–18 Apr 2009. PMLR.
* Carleo et al. [2019] Giuseppe Carleo, Ignacio Cirac, Kyle Cranmer, Laurent Daudet, Maria Schuld, Naftali Tishby, Leslie Vogt-Maranto, and Lenka Zdeborová. Machine learning and the physical sciences. _Reviews of Modern Physics_ , 91(4):045002, 2019.
* Camsari et al. [2017a] Kerem Yunus Camsari, Rafatul Faria, Brian M Sutton, and Supriyo Datta. Stochastic p-bits for invertible logic. _Physical Review X_ , 7(3):031014, 2017a.
* Chowdhury et al. [2023] Shuvro Chowdhury, Andrea Grimaldi, Navid Anjum Aadit, Shaila Niazi, Masoud Mohseni, Shun Kanai, Hideo Ohno, Shunsuke Fukami, Luke Theogarajan, Giovanni Finocchio, et al. A full-stack view of probabilistic computing with p-bits: devices, architectures and algorithms. _IEEE Journal on Exploratory Solid-State Computational Devices and Circuits_ , 2023.
* Borders et al. [2019] William A Borders, Ahmed Z Pervaiz, Shunsuke Fukami, K. Y. Camsari, Hideo Ohno, and Supriyo Datta. Integer factorization using stochastic magnetic tunnel junctions. _Nature_ , 2019.
* Si et al. [2023] Jia Si, Shuhan Yang, Yunuo Cen, Jiaer Chen, Zhaoyang Yao, Dong-Jun Kim, Kaiming Cai, Jerald Yoo, Xuanyao Fong, and Hyunsoo Yang. Energy-efficient superparamagnetic Ising machine and its application to traveling salesman problems. _arXiv preprint arXiv:2306.11572_ , 2023.
* Aadit et al. [2022] Navid Anjum Aadit, Andrea Grimaldi, Mario Carpentieri, Luke Theogarajan, John M Martinis, Giovanni Finocchio, and Kerem Y Camsari. Massively parallel probabilistic computing with sparse Ising machines. _Nature Electronics_ , 5(7):460–468, 2022.
* Sutton et al. [2020] Brian Sutton, Rafatul Faria, Lakshmi Anirudh Ghantasala, Risi Jaiswal, Kerem Yunus Camsari, and Supriyo Datta. Autonomous probabilistic coprocessing with petaflips per second. _IEEE Access_ , 8:157238–157252, 2020.
* Lin et al. [2009] CJ Lin, SH Kang, YJ Wang, K Lee, X Zhu, WC Chen, X Li, WN Hsu, YC Kao, MT Liu, et al. 45nm low power cmos logic compatible embedded stt mram utilizing a reverse-connection 1t/1mtj cell. In _Electron Devices Meeting (IEDM), 2009 IEEE International_ , pages 1–4. IEEE, 2009.
* Kobayashi et al. [2023] Keito Kobayashi, Nihal Singh, Qixuan Cao, Kemal Selcuk, Tianrui Hu, Shaila Niazi, Navid Anjum Aadit, Shun Kanai, Hideo Ohno, Shunsuke Fukami, et al. CMOS+ stochastic nanomagnets: heterogeneous computers for probabilistic inference and learning. _arXiv preprint arXiv:2304.05949_ , 2023.
* Adachi and Henderson [2015] Steven H Adachi and Maxwell P Henderson. Application of quantum annealing to training of deep neural networks. _arXiv preprint arXiv:1510.06356_ , 2015.
* Manukian et al. [2019] Haik Manukian, Fabio L Traversa, and Massimiliano Di Ventra. Accelerating deep learning with memcomputing. _Neural Networks_ , 110:1–7, 2019.
* Dixit et al. [2021] Vivek Dixit, Raja Selvarajan, Muhammad A Alam, Travis S Humble, and Sabre Kais. Training Restricted Boltzmann Machines With a D-Wave Quantum Annealer. _Frontiers in Physics_ , 9:589626, 2021.
* Böhm et al. [2022] Fabian Böhm, Diego Alonso-Urquijo, Guy Verschaffelt, and Guy Van der Sande. Noise-injected analog Ising machines enable ultrafast statistical sampling and machine learning. _Nature Communications_ , 13(1):5847, 2022.
* Hinton [2002] Geoffrey E Hinton. Training products of experts by minimizing contrastive divergence. _Neural computation_ , 14(8):1771–1800, 2002.
* Larochelle et al. [2007] Hugo Larochelle, Dumitru Erhan, Aaron Courville, James Bergstra, and Yoshua Bengio. An empirical evaluation of deep architectures on problems with many factors of variation. In _Proceedings of the 24th international conference on Machine learning_ , pages 473–480, 2007.
* Niazi et al. [2023] Shaila Niazi, Navid Anjum Aadit, Masoud Mohseni, Shuvro Chowdhury, Yao Qin, and Kerem Y Camsari. Training Deep Boltzmann Networks with Sparse Ising Machines. _arXiv preprint arXiv:2303.10728_ , 2023.
* Koller and Friedman [2009] Daphne Koller and Nir Friedman. _Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning_. The MIT Press, 2009. ISBN 0262013193.
* Camsari et al. [2017b] Kerem Yunus Camsari, Rafatul Faria, Brian M Sutton, and Supriyo Datta. Stochastic p-bits for invertible logic. _Physical Review X_ , 7(3):031014, 2017b.
* Chaikin and Lubensky [1995] P. M. Chaikin and T. C. Lubensky. _Mean-field theory_ , page 144–212. Cambridge University Press, 1995. doi: 10.1017/CBO9780511813467.005.
* Kardar [2007] Mehran Kardar. _Statistical Physics of Particles_. Cambridge University Press, 2007. doi: 10.1017/CBO9780511815898.
* Sakthivadivel [2022] Dalton A R Sakthivadivel. Magnetisation and Mean Field Theory in the Ising Model. _SciPost Phys. Lect. Notes_ , page 35, 2022. doi: 10.21468/SciPostPhysLectNotes.35.
* Petersen and Anderson [1987] Carsten Petersen and James R Anderson. A mean field theory learning algorithm for neural networks. _Complex Systems_ , 1:995–1019, 1987.
* Kappen and de Borja Rodríguez Ortiz [1997] Hilbert Kappen and Francisco de Borja Rodríguez Ortiz. Boltzmann machine learning using mean field theory and linear response correction. In M. Jordan, M. Kearns, and S. Solla, editors, _Advances in Neural Information Processing Systems_ , volume 10. MIT Press, 1997.
* Kappen and Rodríguez [1997] H.J. Kappen and F.B. Rodríguez. Mean field approach to learning in Boltzmann Machines. _Pattern Recognition Letters_ , 18(11):1317–1322, 1997. ISSN 0167-8655. doi: https://doi.org/10.1016/S0167-8655(97)00096-2.
* Kappen and Rodríguez [1998] Hilbert J. Kappen and FDB Rodríguez. Efficient learning in Boltzmann machines using linear response theory. _Neural Computation_ , 10(5):1137–1156, 1998.
* Tanaka [1998] Toshiyuki Tanaka. Mean-field theory of Boltzmann machine learning. _Phys. Rev. E_ , 58:2302–2310, Aug 1998. doi: 10.1103/PhysRevE.58.2302.
* Welling and Hinton [2002] Max Welling and Geoffrey E. Hinton. A New Learning Algorithm for Mean Field Boltzmann Machines. In José R. Dorronsoro, editor, _Artificial Neural Networks — ICANN 2002_ , pages 351–357, Berlin, Heidelberg, 2002. Springer Berlin Heidelberg. ISBN 978-3-540-46084-8.
* Huang [2020] Haiping Huang. Variational mean-field theory for training restricted Boltzmann machines with binary synapses. _Phys. Rev. E_ , 102:030301, Sep 2020. doi: 10.1103/PhysRevE.102.030301.
* Mezard and Montanari [2009] Marc Mezard and Andrea Montanari. _Information, physics, and computation_. Oxford University Press, 2009.
* Sejnowski [1986] Terrence J Sejnowski. Higher-order Boltzmann machines. In _AIP Conference Proceedings_ , volume 151, pages 398–403. American Institute of Physics, 1986.
* LeCun [1998] Yann LeCun. The MNIST database of handwritten digits. _http://yann. lecun. com/exdb/mnist/_ , 1998.
* Dattani et al. [2019] Nike Dattani, Szilard Szalay, and Nick Chancellor. Pegasus: The second connectivity graph for large-scale quantum annealing hardware. _arXiv preprint arXiv:1901.07636_ , 2019.
* Hinton [2012] Geoffrey E. Hinton. _A Practical Guide to Training Restricted Boltzmann Machines_ , pages 599–619. Springer Berlin Heidelberg, Berlin, Heidelberg, 2012.
* Tieleman [2008] Tijmen Tieleman. Training restricted Boltzmann machines using approximations to the likelihood gradient. In _Proceedings of the 25th international conference on Machine learning_ , pages 1064–1071, 2008.
## Supplementary Information
## 1 HMFT vs NMFT in a toy 2-spin example
Consider a simple two-spin antiferromagnetic system with weights
$W_{ij}=\delta_{ij}-1$ and no biases. At any temperature, the marginals of
these two spins will be identically zero, i.e., $\langle m_{i}\rangle=0$. As
such, the naive mean-field method, which uses the independence assumption
$\langle m_{i}m_{j}\rangle=\langle m_{i}\rangle\langle m_{j}\rangle$, will
estimate the correlation between these two spins to be identically zero, which
is clearly incorrect, since at any non-zero temperature the spins will be
anti-correlated. On the other hand, the hierarchical mean-field method, which
_does not_ assume this independence and rather uses Eq. (5) to
compute correlations, estimates a non-zero correlation. For example, at $T=1$,
the exact correlation (from the Boltzmann law) between the two spins for the
given weights and biases is $-\tanh(1)\approx-0.7616$. The hierarchical method also estimates the
same correlation value. Higher-order correlations (hierarchically obtained)
can also be shown to be better estimated by the HMFT method.
## 2 Contrastive divergence algorithm
In this section, we briefly outline the contrastive divergence algorithm in
our hybrid approach which implements Eq. (1):
Input : number of samples $N$, batch size $B$, number of batches $N_{B}$,
epochs $N_{\text{L}}$, learning rate $\varepsilon$, mean-field update factor
$\lambda$, mean-field tolerance $\delta$, maximum iteration for mean-field
$T_{\mathrm{max}}$
Output : trained weights $J_{\text{out}}$ and biases $h_{\text{out}}$
$J_{\text{out}}\leftarrow\mathcal{N}(0,0.01)$, $h_{\text{out,hidden}}\leftarrow 0$, $h_{\text{out,visible}}\leftarrow\log{(p_{i}/(1-p_{i}))}$
for $i\leftarrow 1$ to $N_{\text{L}}$ do
  for $j\leftarrow 1$ to $N_{\text{B}}$ do
    /* positive phase */
    for $k\leftarrow 1$ to $B$ do
      $h_{B}\leftarrow$ clamping to batch images
      $\langle m_{i}\rangle^{(k)},\langle m_{i}m_{j}\rangle^{(k)}\leftarrow\text{{x}MFT$\_$module}(J_{\text{out}},h_{B},\lambda,\delta,T_{max})$
    $\langle m_{i}\rangle_{\text{data}}=\text{mean}\left(\\{\langle m_{i}\rangle^{(k)}\\}\right)$, $\langle m_{i}m_{j}\rangle_{\text{data}}=\text{mean}\left(\\{\langle m_{i}m_{j}\rangle^{(k)}\\}\right)$ $\triangleright$ CPU
    /* negative phase */
    $h_{\text{Sampler}}\leftarrow h_{\text{out}}$, $J_{\text{Sampler}}\leftarrow J_{\text{out}}$
    $\\{m\\}\leftarrow\text{GibbsSampler}(N)$ $\triangleright$ p-computer
    $\langle m_{i}\rangle_{\text{model}}=\text{mean}(\\{m\\})$, $\langle m_{i}m_{j}\rangle_{\text{model}}=\\{m\\}\\{m\\}^{\text{T}}/N$ $\triangleright$ CPU
    /* update weights and biases */
    $J_{\text{out},ij}\leftarrow J_{\text{out},ij}+\varepsilon\left(\langle m_{i}m_{j}\rangle_{\text{data}}-\langle m_{i}m_{j}\rangle_{\text{model}}\right)$ $\triangleright$ CPU
    $h_{\text{out},i}\leftarrow h_{\text{out},i}+\varepsilon\left(\langle m_{i}\rangle_{\text{data}}-\langle m_{i}\rangle_{\text{model}}\right)$ $\triangleright$ CPU
Algorithm 2 Mean-field assisted training of sparse DBMs
## 3 Evolution of correlations over epochs
Figure S1: Typical predictions of correlations from different methods: The
correlations predicted by the three different schemes - naive MFT, HMFT, and
Gibbs sampling (the putative exact method since we cannot obtain exact
Boltzmann correlations in general spin-glasses) are shown during a typical
epoch in the training of a sparse DBM. We used a batch size of 10 images and
for the positive phase, we obtained correlations by showing only one batch of
10 images. For Gibbs sampling, we used $10^{4}$ sweeps in both positive and
negative phases. We chose a relative error tolerance of $10^{-2}$ for both MFT
and HMFT. 20 bins were used in both histograms. MFT algorithms do
significantly better in the positive phase than in the negative phase allowing
their use in the positive phase training of deep and unrestricted BMs, instead
of the more expensive Gibbs sampling.
In this section, we discuss the evolution of correlations over epochs for
naive and hierarchical MFT. Typical predictions of correlations from the xMFT
algorithms are shown in Supplementary Fig. S1. It can be clearly seen that in
the positive phase, the xMFT algorithms provide nearly accurate estimations of
the correlations. The clamping of many p-bits during the positive phase helps
the xMFT algorithms boost their performance, but in the absence of such
clamps, their performance degrades severely in the negative phase. In a hybrid
setting, the probabilistic computer also suffers from a communication delay in
the positive phase, since images must be sent repeatedly, whereas in the
negative phase there is no such delay. These two considerations justify
replacing Gibbs sampling in the positive phase with xMFT algorithms.
In order to provide a quantitative measure of the performance of the two
xMFT algorithms discussed in this paper, we define the relative average error
between two correlation matrices estimated by two different approaches
as
$\epsilon=\sqrt{\sum_{ij}{(A_{ij}-B_{ij})^{2}}}/\sqrt{\sum_{ij}{B_{ij}^{2}}}$
where $A$ is the correlation matrix predicted by the xMFT algorithms and $B$
is the corresponding matrix for the Gibbs sampling. We do not include the zero
correlation values in this measure. Since at the 2560 p-bits level, it is
impossible to obtain correlations from the exact distribution, we use Gibbs
sampling as the “putative” reference for comparison. Supplementary Table S1
lists this measure for the xMFT algorithms in both the positive and negative
phases. The performance of the xMFT algorithms in both phases is consistent
with Supplementary Fig. S1. As mentioned in the main text, in our HMFT
implementation we do not modify the average estimations from naive MFT;
therefore, both xMFT algorithms show the same accuracy for averages. For
correlations, the HMFT estimates are slightly better than those of naive MFT.
It is also interesting to note that both methods (MFT/HMFT) perform better at
earlier epochs, when the weights are small, which may suggest their use in
pre-training to supplement the exact Gibbs sampling approach.
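A minimal sketch of this error measure, with `A` from an xMFT algorithm and `B` from Gibbs sampling; the exclusion of zero entries follows the text.

```python
import numpy as np

def relative_error(A, B):
    mask = B != 0  # exclude zero correlation values
    return np.linalg.norm((A - B)[mask]) / np.linalg.norm(B[mask])
```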
Table S1: Relative average error in the positive and negative phases for the two xMFT algorithms. The error is measured with respect to Gibbs sampling in each phase (we take $10^{4}$ sweeps for both positive and negative phases). The tolerance of the MFT algorithms is set to $10^{-2}$. xMFT-predicted averages and correlations are significantly better in the positive phase than in the negative phase.

| epoch | Pos. $\langle m_{i}\rangle$ MFT | Pos. $\langle m_{i}\rangle$ HMFT | Pos. $\langle m_{i}m_{j}\rangle$ MFT | Pos. $\langle m_{i}m_{j}\rangle$ HMFT | Neg. $\langle m_{i}\rangle$ MFT | Neg. $\langle m_{i}\rangle$ HMFT | Neg. $\langle m_{i}m_{j}\rangle$ MFT | Neg. $\langle m_{i}m_{j}\rangle$ HMFT |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.72% | 0.72% | 1.55% | 1.19% | 2.00% | 2.00% | 3.82% | 2.967% |
| 5 | 0.69% | 0.69% | 1.39% | 1.11% | 5.82% | 5.82% | 8.82% | 8.00% |
| 10 | 0.63% | 0.63% | 1.24% | 1.00% | 11.73% | 11.73% | 16.40% | 16.07% |
| 50 | 0.59% | 0.59% | 1.11% | 0.92% | 35.85% | 35.85% | 46.80% | 46.61% |
| 100 | 0.61% | 0.61% | 1.11% | 0.93% | 44.44% | 44.44% | 55.4% | 55.16% |
## 4 Log-likelihood measure for xMFT algorithms
Table S2: Train (100 images) and test set (20 images) log-likelihood for different positive-negative phase sampler combinations, i.e., Gibbs-Gibbs, HMFT-Gibbs, and MFT-Gibbs, for sparse deep Boltzmann machines.

| MNIST/100 | Gibbs-Gibbs | HMFT-Gibbs | MFT-Gibbs |
|---|---|---|---|
| Train set | -35.08 | -71.07 | -89.82 |
| Test set | -33.87 | -37.71 | -37.63 |
In this section, we report the log-likelihood ($\mathcal{L}$) measure defined
as
$\mathcal{L}=\sum_{i=1}^{N}\sum_{c=1}^{C}y_{i,c}\cdot\log(p_{i,c})$ (S.1)
for the xMFT algorithms in both the training and the test set. Here, $y_{i,c}$
is the true label of the $i^{\text{th}}$ sample for class $c$ (1 if the sample
belongs to class $c$ and 0 otherwise), and $p_{i,c}$ is the predicted
probability that the $i^{\text{th}}$ sample belongs to class $c$. We measure
the performance after training MNIST/100, with three different choices of
algorithm for the positive phase, namely Gibbs sampling, naive MFT, and HMFT.
The negative phase is always performed with Gibbs sampling.
Supplementary Table S2 lists this measure.
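A sketch of Eq. (S.1), with a small clipping constant of our own added to guard against $\log 0$:

```python
import numpy as np

def log_likelihood(y_true, p_pred, eps=1e-12):
    # y_true: (N, C) one-hot labels; p_pred: (N, C) predicted probabilities.
    return float(np.sum(y_true * np.log(np.clip(p_pred, eps, 1.0))))
```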
As Table S2 shows, the HMFT-trained model attains a substantially better
training set log-likelihood than the naive MFT-trained model, while on the
test set the difference between the two is minimal.
|
# Measuring Discrimination Abilities of Monk Parakeets Between Discrete and
Continuous Quantities Through a Digital Life Enrichment Application
Jérémy Barbay <EMAIL_ADDRESS> (ORCID 0000-0002-3392-8353), Fabián Jaña Ubal
<EMAIL_ADDRESS>, and Cristóbal Sepulveda Álvarez <EMAIL_ADDRESS>
Departamento de Ciencias de la Computación (DCC), Universidad de Chile,
Avenida Beauchef 851, Santiago, Región Metropolitana, Chile 8370448
###### Abstract.
Al Aïn et al. measured three African Grey (_Psittacus erithacus_) parrots'
discrimination abilities between discrete and continuous quantities. Some
features of their experimental protocol make it difficult to apply to other
subjects and/or species without introducing a risk of bias, as subjects
could read cues from the experimenter (even though the study's subjects
probably did not). Can digital life enrichment techniques permit us to
replicate their results with other species, with less risk of experimental
bias, better precision, and lower cost? Inspired by previous
informal digital life enrichment experiments with parrots, we designed and
tested a web application to digitally replicate and extend Al Aïn et al.'s
experimental setup. We were able to obtain results similar to theirs for two
individuals from a distinct species, Monk Parakeets (_Myiopsitta Monachus_),
with increased guarantees against potential experimental biases, in a way
that should allow such experiments to be replicated at a larger scale and at
a much lower cost.
Animal Computer Interaction, Comparative Cognition Study, Continuous and
Discrete Comparative Abilities, Digital Life Enrichment, Monk Parakeet
CCS Concepts: Applied computing → Computer-assisted instruction; Applied
computing → Interactive learning environments; Human-centered computing →
User interface design; Applied computing → Agriculture; Applied computing →
Computer games
Figure 1. The Monk Parakeet Tina selecting the largest value out of two displayed, in heap mode.
Figure 2. The Monk Parakeet Lorenzo selecting the largest value out of four displayed, in disk mode.
## 1\. Introduction
Al Aïn et al. (2008) measured the discrimination abilities between discrete
and continuous quantities of three African Grey parrots (_Psittacus
erithacus_), showing that their accuracy in choosing between two small
quantities was inversely correlated with the ratio between the difference
between the two quantities and the largest quantity.
Generalizing the experimental protocol described and implemented by Al Aïn et
al. (2008) to other subjects or species presents some difficulties. The fact
that the experimenter knows which answer is expected from the subjects is not
an issue in their study, because it was previously verified that the three
subjects were unable to read such cues from human experimenters; but it means
that the replication of such a protocol is limited to individuals (from the
same or from other species) whose inability to read cues has been previously
demonstrated. Beyond such a weakness, the cost of the experimental set-up and
of the analysis of the video recordings of the experiments reduces the
probability that such a protocol will be replicated with other subjects from
the same species, or with subjects from the many other species of parrots
existing around the world.
Touchscreens have been successfully used for experiments in life enrichment
(Perdue et al., 2012; Kohn, 1994; Coghlan et al., 2021a) and in Comparative
Psychology (Egelkamp and Ross, 2018), with individuals from various nonhuman
species. Could digital life enrichment techniques allow to replicate Al Aïn et
al. (2008)’s results at a lower cost, but with a better precision, and less
potential experimental bias? Which additional advantages could a digital
variant bring?
Figure 4. The Cockatoo Isabelle playing the game “Candy Crush” under the guidance of Jennifer Cunha from Parrot Kindergarten (picture used with the authorisation of the author).
Figure 6. The Monk Parakeet Lorenzo playing the piano music application “Mini Piano Lite” in order to learn to use touchscreen interfaces with a wide active surface.
Figure 8. The Monk Parakeets Tina (left) and Lorenzo (right) playing the steel drum music application “Meditation Drum” to learn how to aim when using touchscreen interfaces.
Inspired by previous informal digital life enrichment experiments, such as a
Cockatoo playing the video game Candy Crush (Figure 4) or Monk Parakeets
learning to use touch interfaces by playing music on them (Figures 6 and 8),
we designed, tested and used a web application, InCA-WhatIsMore, to digitally
replicate and extend Al Aïn et al. (2008)'s experimental setup. We obtained
results similar to those of Al Aïn et al. for two individuals of a distinct
species of parrots, Monk Parakeets (_Myiopsitta Monachus_), using an
experimental protocol with increased guarantees against potential experimental
biases, at a lower set-up cost, and with additional advantages brought by the
digital context, such as automatic logging and increased subject agency.
After describing a selection of concepts and results in the research area of
comparative psychology (Section 2), we describe the application InCA-
WhatIsMore (Section 3), an experimental protocol (including separate
development, training and testing phases) based upon it (Section 4), an
implementation of this protocol and an analysis of its results (Section 5),
and we conclude with a recapitulation of our results, a discussion of their
potential weaknesses and a perspective on future research (Section 6).
## 2. Nonhuman Cognition
The cognitive abilities of nonhuman animals, traditionally less studied than those of humans, have been receiving more attention over the last half century. Such studies started with the animals perceived to be “closest” to humankind, such as apes, and have spread more recently to birds (Pepperberg, 1999; Al Aïn et al., 2008; Cunha and Clubb, 2018). We give a general overview of some projects and results about the cognitive abilities of some ape and bird species (Section 2.1); Al Aïn et al. (2008)'s study of the discrimination abilities of some parrots (Section 2.2); how devices (analogical and digital) were introduced to nonhumans in order to improve their well-being, and often to study their abilities at the same time (Section 2.3); how the distrust of results obtained through improper experimental protocols has plagued scientific research in this area in the past (Section 2.4); and how some general guiding principles in the design of experimental protocols permit scientists to avoid such experimental biases (Section 2.5).
### 2.1. Comparative Psychology
Comparative psychology refers to the scientific study of the behavior and mental processes of non-human animals (referred to as “nonhumans” hereafter), especially as these relate to the phylogenetic history, adaptive significance, and development of behavior in many different species, from insects to primates. Pepperberg (2020) describes the history of the field of Comparative Psychology of Intelligence over the last 30 years. Closer to the topic of this work, Cunha and Clubb (2018) describe studies of reading comprehension skills in a Goffin's cockatoo.
### 2.2. Discrimination Abilities in African Grey parrots
Al Aïn et al. (2008) tested the discrimination abilities of African Grey parrots (_Psittacus erithacus_) on discrete and continuous amounts. More precisely, they investigated the ability of three African Grey parrots to select the largest amount of food between two sets, in two types of experiments. In the first experiment type, the subjects were tested on discrete quantities via the presentation of two quantities of sunflower seeds (Deli Nature, Beyers, Belgium), between 1, 2, 3, 4 and 5 seeds. In the second experiment type, the subjects were tested on continuous quantities via the presentation of two quantities of parrot formula, with amounts between 0.2, 0.4, 0.6, 0.8 and 1 ml. For each experiment, the two amounts were presented simultaneously and were visible at the time of choice. Although the subjects sometimes failed to choose the largest value, they always performed above chance, their performance improving when the difference between amounts was greatest.
The experimental setup was completely analogical. A permanent table was set up in the aviary, and two black pieces of cardboard were used to present the food items (sunflower seeds or parrot formula). For each experiment, different amounts of either seeds or parrot formula were placed on each piece of cardboard. The experimenter held the subject for 5 seconds in a position from which it could observe the two sets, then placed it on the table at equal distances from the two sets, letting it choose one set while the other was removed. The position of the sets (small and large) was pseudo-randomized: the larger set was never presented more than two times on the same side, and was presented as often on the right side as on the left side.
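For illustration, here is a minimal Python sketch (not part of the original setup, whose randomization was performed by hand) of one way to produce such a pseudo-randomized sequence of sides by rejection sampling:

```python
import random

def pseudo_randomized_sides(n_trials, max_run=2):
    """Balanced left/right placement of the larger set, with neither side
    repeated more than `max_run` times in a row, by rejection sampling."""
    assert n_trials % 2 == 0, "an even number of trials is needed to balance sides"
    while True:
        sides = ["left", "right"] * (n_trials // 2)
        random.shuffle(sides)
        # measure the longest run of a same side in the shuffled sequence
        run = longest = 1
        for previous, current in zip(sides, sides[1:]):
            run = run + 1 if current == previous else 1
            longest = max(longest, run)
        if longest <= max_run:
            return sides

print(pseudo_randomized_sides(10))
```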
In the experimental setup described by Al Aïn et al. (2008), subjects could potentially read involuntary cues from the experimenter: even though the experimenter was standing behind the subject, at equal distances from each set, not pointing at either, looking at the subject, and aiming to avoid communicating any cue, the experimenter _knew_ where the largest quantity was. While this was not an issue in Al Aïn et al. (2008)'s study, because the authors had demonstrated in a previous study that the subjects were not able to use any gazing cue, the protocol should not be applied as such to other subjects without verifying their inability to read such cues, adding to the cost of implementing such a protocol.
Avoiding giving cues to the subject is hard even for a professionally trained experimenter (Trestman, 2015). Requiring either such training or a separate study to ensure that the subject cannot read cues from the experimenter restricts the applicability of a protocol to laboratories. For example, in the context of _citizen science_ projects (Gura, 2013), where non-professional experimenters (such as zoo personnel or ordinary citizens) guide the experiments, a masked protocol (defined in Section 2.5), where the experimenters do _not_ know what the correct answer is (because they did not receive the information that the subject did), would be more robust against subjects reading cues from the experimenter. We describe in Section 3 an application allowing for such an alternate experimental setup which, if not exactly equivalent to that of Al Aïn et al. (2008) (e.g. the reward is not proportional to the quantity selected), presents the advantage of being “experimenter-masked”, inspired by some of the life enrichment experiments described in the next section.
### 2.3. Life Enrichment and Cognition studies
One can study the cognitive abilities of nonhumans through life enrichment activities in general, and through digital ones in particular. General concern for the welfare of captive nonhumans is at least 150 years old. Kohn (1994) dates the first legislation about zoo animal welfare to 1876, with the “Cruelty to Animals Act” in the “Criminal Code of Canada”. Since then, the list of duties of such institutions has grown to include not only the basic welfare tenets of adequate feed, water, shelter, sanitation and veterinary care of their nonhuman residents, but also higher-level concerns such as the handling and training of the nonhuman residents, their psychological well-being, the design of their enclosures, the preservation of their species, environmental and conservation issues, and programs to breed captive nonhumans. Kohn (1994) mentions (in 1994) the “_emerging field of psychological well-being in captive animals_”, incorporating physical health, normal and captive behavior, and interactions with the enclosure environments, and mentions how environmental enrichment is an important component of this issue. He goes on to list innovations in life enrichment such as specialized toys and puzzle feed boxes (but no digital applications).
Yet the use of digital applications to measure nonhuman abilities seems to predate Kohn (1994)'s report by at least 10 years. In his discussion of the impact of game-like computerized tasks designed to promote and assess the psychological well-being of captive nonhumans, Washburn (2015) refers to a three-decade-old history in 2015, placing the beginning of such use sometime around 1985. In 1990, Richardson et al. (1990) describe a quite complete Computerized Test System. They tested their system with a population of rhesus monkeys, but defend its potential as a “_rich and robust testing environment for the cognitive and neuro-psychological capacities of great apes, rhesus monkeys, mentally retarded and normally developing children, and adults_”, so that subjects from various populations can be tested under comparable conditions in such a way that “_control is increased over testing conditions_”. They mention that “_the animals readily started to work even when the reward was a small pellet of chow very similar in composition to the chow just removed from the cage_”, and that “_the tasks have some motivating or rewarding [value] of their own_”.
Nonhuman subjects seem to enjoy participating in cognitive studies involving game-like digital applications. Washburn (2015) describes, among various other anecdotes, how game-like applications for apes were developed as early as 1984, and how the subjects “_chose to work on joystick-based tasks, even though they did not need to perform the game-like tests in order to receive food_”, and “_opted for computer task activity over other potential activities that were available to them_”. He goes on to mention how such game-like activities have been used to study various cognitive phenomena such as the ability to learn, memory, attention, perception, categorization, numerical cognition, problem solving, the ability to reason, the ability to make decisions, meta-cognition, social cognition and language. Among the details reported on the methodology, he mentions that incorrect responses typically produced auditory feedback, frequently accompanied by a time-out period, but that no other punitive method was used to promote productivity, accuracy or rapid responding. Lastly, he describes evidence that the nonhumans are motivated not only by food rewards, but also by the enjoyment of the tasks themselves: when given a choice between completing trials for pellets or receiving pellets for free but not being able to play the game-like tasks during the free-pellet period, the monkeys chose to work for their reward.
The use of digital applications might benefit nonhumans in less direct ways too, by raising awareness of and respect for their cognitive abilities among the public. Coghlan et al. (2021b) examined how digital technologies can be used to improve ethical attitudes towards nonhumans (focusing on nonhuman apes kept in zoos) by introducing digital technologies in zoos for both animal enrichment and visitor education. Both analogical and digital setups must be careful to avoid experimental biases: we describe two that are particularly relevant to this work in the next section.
### 2.4. Experimental Biases
The history of Comparative Psychology has been rife with disputes about the validity of methodologies and results: Pepperberg (2016) describes various such tensions between researchers about the cognition of animals, with some accusing other researchers in the field of being “_liars, cheats and frauds_”, and she highlights how sign language researchers were accused of “_cuing their apes by ostensive signals_” and of “_consistently over-interpreting the animals' signs_”. We explore here two issues relevant to the experimentation protocol described in this work, namely _selective reporting bias_ (Section 2.4.1) and the _“Clever Hans” effect_ (Section 2.4.2).
#### 2.4.1. Selective Reporting Bias
Selection biases occur in survey or experimental data when the selection of data points is not sufficiently random to draw a general conclusion. Selective reporting biases are a specific form of selection bias whereby only interesting or relevant examples are cited. Cognitive skills can be particularly hard to study in nonhumans, requiring unconventional approaches which often present the risk of such biases. For example, an experimenter who presents a subject repeatedly with the same exercise could be tempted to omit or exclude bad performances (possibly attributing them to a “bad mood” of the subject, which remains a real possibility) and to report only on good performances, creating a biased representation of the abilities of the subject: a selective reporting bias.
Whereas Bates and Byrne (2007) defend the use of anecdotes in comparative psychology, they do so “_provided certain conditions are met_” in order to avoid such biases, defining an _anecdotal method_ in five steps:
1. Source Material Assembly;
2. Definition of the extraction process;
3. Categorization of extracted records;
4. Labeling of each record with a level of evidence (from ambiguous to highly suggestive);
5. Hypothesis generation to guide future studies.
They emphasise the “_need to use neutral descriptions of behaviour that avoid implicit interpretation, recorded by experienced and knowledgeable observers immediately after the occurrence of the event_”, and that all observations of rare events should be made available for later analysis by the anecdotal method. We describe in Section 3 how a digital application can systematically log the results and easily avoid such biases. Another type of bias is that of the subject reading cues from the experimenter, which we describe in the next section.
#### 2.4.2. “Clever Hans” effect
Among the methodological issues resulting in experimental biases, the most iconic might be that of the horse nicknamed “Clever Hans”, which appeared to be able to perform simple intellectual tasks, but in reality relied on involuntary cues given not only by its human handler, but also by a variety of human experimenters, as related by Trestman (2015):
> “ _In the early years of the 20th century an unlikely and controversial
> celebrity rose to fame in Germany and internationally: a horse, aptly named
> Clever Hans, who apparently displayed a startling aptitude in mathematics as
> well as music theory, not to mention the ability to identify colors and
> individual people by name, read German, and answer a variety of questions in
> normal spoken German. He responded to questions primarily via a code based
> on repeatedly tapping his right hoof, in combination with other responses
> such as nodding or shaking his head to indicate yes or no, and pointing to
> objects or photographs with his nose._ ”
The story is famous, of course, for illustrating how nonhumans can often more easily learn to read cues from the experimenter than to solve the problem asked of them. One ignores such lessons not only at one's own risk, but at the risk of hurting a whole research area: in her recapitulation of the history of animal language studies, Pepperberg (2016) describes how coverage in the public media, in 1980, of issues about the validity of Comparative Psychology methodologies and results moved government agencies to respond to the blow-back by cutting off the funding for all related studies. While issues such as over-interpreting subjects' signs or selectively reporting experimental results can be avoided with the appropriate amount of rigor (possibly with some computerized help, as discussed in Section 6.1.8), preventing subjects from reading the experimenter's cues requires special care when designing the experimental protocol: we describe in the next section some guiding principles which exclude the very possibility of such biases from experimentation results.
### 2.5. Masked Experimental Protocols
It is possible to avoid the confusion between a subject's ability to read cues from the experimenter and its ability to answer the tests presented by such an experimenter. The principle is quite simple: make sure that the experimenter does not know the test, by having a third party, out of reach of the subject's observation, prepare it. Whereas such an experimental setup was historically referred to as a “blind setup” or a “blinded setup”, we follow the recommendations of Morris et al. (2007) and prefer the term “masked” to the term “blind” when describing the temporary and purposeful restriction of the experimenter's access to the testing information.
In an analogical setting, the set-up of a masked experimental protocol is more costly than that of less careful ones. For example, Cunha and Clubb (2018) describe an experimental protocol where an assistant prepares a pile of prompt cards, which the experimenter presents to the subject without looking at their content until after the subject has responded, in order to know whether to praise and reward them or not. We describe our digital set-up for a masked experimental protocol in Section 3.2: the digital device completely replaces the assistant and assists the experimenters, telling them whether the subject answered correctly or not. As long as the device is placed in such a position that the subject can see the visual display but the experimenter cannot, there is physically no way for the subject to read cues from the experimenter, hence avoiding the _“Clever Hans” effect_.
In the next section, we describe an application designed to facilitate this type of “masked” experimental set-up, in which it is guaranteed that the ability of the subject to read cues from the experimenter does not affect the result of the experiment, as the experimenter does not know the question (and hence its correct answer) being asked of the subject.
## 3. Application
We developed a web application, InCA-WhatIsMore, as a combination of JavaScript, CSS and HTML using libraries from the Svelte project, made available on a simple web page. While its structure (described in Section 3.1) was originally developed as a mock-up to visualize how a web application could help set up masked experiments (described in Section 3.2) with extensive logging abilities (described in Section 3.3), it was found complete enough to be used as the final application, and its structure simple enough that even the subjects themselves could navigate it.
### 3.1. Application’s Structure
The web application is composed of four views. The first two, the Main Menu (described in Figures 9 and 11) and the Gaming View (which can be seen in Figures 24 and 26, among others), are especially designed to be navigable by nonhuman subjects. Access to the two others, the settings view (see Figures 13 to 16) and the information view, is tentatively restricted to the experimenters by requiring a long press on a button.
Figure 9. The main menu of the application is designed so that the subject can choose in which visualisation mode it wishes to play, in the hope of supporting a sense of agency.
Figure 11. Both subjects quickly learned to select a display mode to start a game, but did not seem to show a preference for any display mode in particular.
Figure 13. The logs are exported from the top part of the settings page of the application.
Figure 15. The part of the settings page dedicated to the appearance and difficulty of the exercises.
Figure 16. The part of the settings page dedicated to the game features and sound feedback.
#### 3.1.1. Main Menu
The view of the main menu is accessed when the application is opened: see Figure 9 for a screenshot, and Figure 11 for a picture of a subject using it to select a display mode. From this view, the user can navigate to the other views of the application. At the center of the screen are four figures, each one representing a different visualisation mode applied to a random value. Two of these modes are discrete: one represents the value as a number of dots on a dice face, the other as a heap of dots on a 3 by 3 grid (the same grid as that of the dice). The two other modes are more continuous: one represents each value by a container more or less filled with liquid according to the value, the other by a circle whose radius grows with the value. The user (be it the experimenter or the subject) can pick any of the four display modes in order to start the game in that mode. At the bottom of the screen stands a button giving access to the settings section, activated by a long press (whose required length is set in the settings view).
#### 3.1.2. Gaming view
The most important view is the gaming view, which allows the subject to “play”: see Figure 24 for a screenshot, and Figures 1, 2, 22, 23, and 26 for pictures of subjects playing the game. The view displays a set of values in some display mode, requesting the user to choose the largest one. Each action triggers an audio feedback indicating whether it was correct or wrong. After a given number of exercises, the game ends and gives an audio feedback placing the score for this game with respect to two boundaries (boundaries which can be modified in the settings page, as can the words vocalized in each audio feedback). The view also has an exit button in the top left corner, intended to be usable by the subject, and a settings button activated by pressing it for a parameterizable amount of time.
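The following Python sketch abstracts this game logic away from the touch interface; `ask_subject` and `vocalize` are hypothetical stand-ins for the application's input and audio output, and the vocabulary is illustrative (the words actually vocalized are configurable in the settings):

```python
import random

def ask_subject(values):
    # Hypothetical stand-in for the touch interface: here the simulated
    # "subject" answers at random so that the sketch can run on its own.
    return random.choice(values)

def vocalize(word):
    # Hypothetical stand-in for the audio feedback of the application.
    print(f"[audio] {word}")

def run_game(n_questions=20, n_values=2, domain=(1, 2, 3, 4, 5),
             boundaries=(0.5, 0.8)):
    """Core logic of one game: draw distinct values, ask for the largest,
    give per-trial audio feedback, then place the final score with respect
    to two (parameterizable) boundaries."""
    score = 0
    for _ in range(n_questions):
        values = random.sample(domain, n_values)  # distinct values, shuffled
        correct = ask_subject(values) == max(values)
        vocalize("correct" if correct else "wrong")
        score += correct
    accuracy = score / n_questions
    low, high = boundaries
    vocalize("excellent" if accuracy >= high
             else "good" if accuracy >= low else "keep trying")
    return accuracy

run_game()
```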
#### 3.1.3. Settings
Whereas the subject can choose the display mode in which they prefer to play, and exit the game view to change it at any time, other settings are accessible in a more technical view designed for the experimenter to set up other aspects of the software: the settings view (Figures 13 to 16). As the visual and sound outputs of touch screen devices were designed for humans, and as very little is known about the adequateness of such outputs for nonhumans, the software was designed to maximize the control given to each experimenter over the visual and sound output of the application, so that each experimenter can find the settings best adapted to the characteristics of the species and/or subjects with which the software is used. As such, the software permits the experimenter to change, among other things, the color schemes and the number of values displayed, the domain from which such values are randomly chosen, and the number of questions before a game ends: it is hoped that such parameters will be useful in future studies.
#### 3.1.4. About
This last view, a priori accessed only by the experimenters, displays various information about the application, such as its version, an overview of features recently added and soon to be added, usage instructions, references, and acknowledgments to collaborators.
We describe below how to use such a web application to implement a masked experimentation protocol, in which the subject cannot read any cue from the experimenter, because the experimenter does not know the question presented to the subject.
### 3.2. Masked Experimental Setup
Among other features, the web application was designed to facilitate digital experiments similar to those performed by Al Aïn et al. (2008), but in such a way that the experimenter does _not_ know on which side the “correct” answer is: a masked experimental setup. This ensures that the subject cannot receive any voluntary or involuntary cue from the experimenter. Such a purpose is achieved through the extensive audio feedback system, which aims at notifying the experimenter of any event requiring their intervention (e.g. rewarding or encouraging the subject, or acknowledging that the subject does not want to play this game any more), so that they never need to check the screen of the device.
Figure 17. The Masked Setup. The subject (left) can see the display and hear the device (center), but the experimenter (right) can only hear the device and cannot see its display.
Figure 18. Example of a masked experimental set-up: the experimenter can hear the instructions from the device and encourage the subject, but cannot give any cue about correct answers.
Figure 19. The masked experimental set-up as viewed by the experimenter, with two subjects participating in the experiment at the same time, each with its own device.
### 3.3. Logging structure
In non-digital experiments in comparative psychology, the experiment is usually recorded on video, so that the recording can later be processed to generate an extensive log of the interactions of the subject during the experiment. Such a task is long and tedious, and no video processing software is yet able to automate it. An important advantage of a digital experimental set-up such as that allowed by the software InCA-WhatIsMore is the ability to _automatically_ log the interactions of the subject with the application.
The logs are exported from the top part of the settings page of the application (previously described in Figure 13). Two formats are available: the first, .txt, is designed to be easily readable by humans, while the second, .csv, is better suited to machine processing. The software InCA-WhatIsMore generates logs with data to be analyzed by researchers, including information on both the test performed and the subject's performance (see Figure 21 for a short extract, and Figure 27 for a longer one).
Test no, Test Name, Learner, Trainer, C_0, C_1, C_2, C_3, C_4, Value selected, Correction, Date, Answering Time (ms), Other Parameters
1, dice, Subject, Experimenter, 1,4,,,, 4, true, [2022-05-19 17:02(25.981)], 7946, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
(...)
81, rect, Subject, Experimenter, 4,2,3,,, 3, false, [2022-05-19 17:26(55.124)], 4655, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
(...)
180, heap, Subject, Experimenter, 3,2,1,,, 2, false, [2022-05-19 17:35(06.6)], 926, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
Figure 20. A short extract showing four selected lines of the log generated by
the application for the afternoon session of the 19th of May 2022 (deleted
blocks of lines are marked by “(...)”). See Figure 21 for a more readable
reformatting of the same extract. Log entries such as “background black,
foreground green, bg opacity .2” refer to visualisation options, not used in
this work.
Test no | Test Name | C0 | C1 | C2 | C3 | C4 | Value selected | Correction | Date | Other Parameters
---|---|---|---|---|---|---|---|---|---|---
1 | dice | 1 | 4 | | | | 4 | true | [2022-05-19 17:02(25.981)] | Value Set [1,2,3,4,5]
81 | rect | 4 | 2 | 3 | | | 3 | false | [2022-05-19 17:26(55.124)] | Value Set [1,2,3,4,5]
180 | heap | 3 | 2 | 1 | | | 2 | false | [2022-05-19 17:35(06.6)] | Value Set [1,2,3,4,5]
Figure 21. A more readable format of the log extract from Figure 20, with less relevant columns removed for readability. Observe that the subject was asked to choose the largest among 2 values (on the first test) and among 3 values (on the $81$st and $180$th tests), represented as dice (first test), rectangles ($81$st test) and heaps ($180$th test), and that the subject chose correctly once and incorrectly twice, in games where the values were taken from the set $\\{1,2,3,4,5\\}$, with the precise time and date of each answer duly recorded. The columns labeled C3 and C4 are empty because no test in this extract requested the subject to choose the maximal value among 4 or 5 values.
We describe here the format of the logs in versions 2.x of the software (a sketch of how to load and summarize such logs with pandas follows the column descriptions). After the test number, the first three columns of each log entry give general information about the test:
* Test Name: the type of representation used in the test; this can be “dice”, “heap”, “rect” or “disc”.
* Learner: the name of the test subject, used to subsequently run analyses such as performance over time, or to separate statistics per subject.
* Trainer: the name of the experimenter. This could be used in later studies where various experimenters apply the same test to the same subject, to check for variance in performance from one experimenter to the other.
The following columns give quantitative information about the quantities presented within the test, as well as about the test subject's performance.
* C0, C1, C2, C3, C4: the values presented in the test, whether displayed in a discrete or a continuous mode; the order of the columns matches the order in which the values are displayed within the application, from left to right.
* Value Selected: the value chosen by the test subject.
* Correction: the correctness of the value selected by the test subject, “true” if it is the largest amount and “false” otherwise.
The last columns give contextual values about the test, providing information about both the performance of the subject and the setup of the test.
* Date: the date on which the test was performed, in timestamp format, including the precise time in milliseconds.
* Answering Time (ms): the time taken by the test subject to respond, measured from the display of the quantities to be chosen, in milliseconds. Note that this is more precise than the simple difference between two consecutive timestamps, as the application includes (parameterizable) waiting times between tests, pauses between games, breaks to return to the menu, etc.
* Other Parameters: parameters that visually describe the display of the quantities, such as the color of the background and the color in which the representations of the quantities are displayed. These parameters were modified only in the development phase, in order to find an adequate color scheme for the two subjects in particular (see Section 6.2.3 for a discussion of the sensory variability of general test subjects), but could be used in the future to adapt the software to other individuals, potentially from other species with distinct sensory ranges, and to formally study (ethically, see Section 6.3.5 for a discussion of the related challenges) the limits of the sensory range of any subject.
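As an illustration of such machine processing, the following Python sketch loads hypothetical .csv logs with pandas and computes the accuracy per display mode and set size; the file names are placeholders, and the column names follow the header shown in Figure 20:

```python
import pandas as pd

# Hypothetical file names; the real logs are exported from the settings page.
files = ["session_2022-05-19_pm.csv", "session_2022-05-20_am.csv"]
logs = pd.concat((pd.read_csv(f, skipinitialspace=True) for f in files),
                 ignore_index=True)

# "Correction" holds "true"/"false" strings; mapping them to 1/0 makes the
# mean of the column directly equal to the accuracy of the subject.
logs["Correction"] = logs["Correction"].astype(str).str.strip().eq("true").astype(int)

# The number of values shown in a test is the number of non-empty C_i columns.
value_columns = ["C_0", "C_1", "C_2", "C_3", "C_4"]
logs["set_size"] = logs[value_columns].notna().sum(axis=1)

# Accuracy per display mode and set size, as used in Section 4.3.2.
print(logs.groupby(["Test Name", "set_size"])["Correction"].mean())
```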
In the next section, we describe the training and experimental protocol which
was used to generate the data measured by such logs.
## 4. Experimentation Protocol
The experimental protocol was divided into three phases, which we describe in Section 4.1. The precautions taken to protect the well-being of the nonhuman subjects (described in Section 4.2) were validated by the Institutional Animal Care and Use Committee (IACUC) (in Spanish “COMITÉ INSTITUCIONAL de CUIDADO y USO de ANIMALES (CICUA)”) of the researchers' institution. The statistical analysis (described in Section 4.3) was scheduled as part of the experimental protocol, independently from the results of the experiments.
### 4.1. Phases of the protocol
The protocol was implemented in three phases: a phase of _development_ (of the software) with only one subject (the first one), a phase of _training_ with two subjects and a mix of unmasked and masked protocols, and a phase of _testing_ using the masked protocol and collecting data with both subjects.
* During the _development phase_, a single subject (hereafter referred to as “the first subject”) interacted with the various prototypes of the software, in a non-masked experimental setting where the experimenter could observe the display of the screen. Each time the software was modified, it was tested by two human subjects before being used by any of the nonhuman subjects, in order to minimize the potential frustration of the nonhuman subjects when using a malfunctioning application.
* During the _training phase_, both subjects were invited to use the software, each on its own device, in a non-masked experimental setting where the experimenter could observe the display of the screen: see Figures 1, 2, 22, 23, and 26 for pictures of the set-up during the training phase.
* During the _testing phase_, both subjects were invited to use the software, each on its own device, this time in a masked experimental setting where the experimenter could not observe the display of the screen, so that they did not know the question asked of each subject and could not cue them, limiting themselves to encouraging and rewarding each subject according to the feedback vocalized by the application (see Figures 17 to 19 for examples of masked experimental setups).
The subjects’ welfare was cared for during each of those three phases: we
describe some of the precautions taken in the next section.
### 4.2. Ethical Precautions
Various precautions were taken to protect both the physical (Section 4.2.1) and psychological well-being (Sections 4.2.2, 4.2.3 and 4.2.5) of the subjects during the three phases of the project. These precautions follow the recommendations of animal-centered research ethics and of the Animal-Computer Interaction community (Paci, Mancini and Nuseibeh, 2022; Mancini and Nannoni, 2022; Ruge and Mancini, 2019; Mancini, 2017; North and Mancini, 2016; Oliver, 2021), in the interest of both the validity of the experiments and the well-being of the subjects and researchers alike.
#### 4.2.1. Physical settings
The subjects were hosted in a private residence with three aviaries, each large enough to allow some amount of flight: one “laboratory” building with meshed windows containing a “Nest” aviary with a simple door, of size $3\times 1\mbox{m}^{2}$ and $2\mbox{m}$ high, containing a nest, a plant and various toys and nesting material; one “South” aviary, corridor-shaped with two sets of double doors, of $6\times 1\mbox{m}^{2}$ of surface and $2\mbox{m}$ high; and one “North” aviary with one set of double doors, of $6\times 3\mbox{m}^{2}$ of surface and $1\mbox{m}$ high. The subjects were mostly hosted in the “Nest” aviary, but transported to the other aviaries (with their consent) to allow them to fly over slightly larger distances (6m), get sun exposure, access distinct physical games, and more generally vary their stimuli. The sessions of the development, training and testing phases were almost always realized in a training area next to the opening of the “Nest” aviary, and on a few occasions inside the larger “North” aviary. At no point were the subjects food or water deprived: at any time they could fly to their housing space, where food and water were available. The sessions always occurred on one of three similar wood frames (see Figures 22 and 23), so as to offer a familiar setting even when the location of the training changed (e.g. in the “North” aviary). Even though the digital devices had to be replaced at some point, they were always held on the same wood structure (etched with the name of the subject to which it was assigned), so as to facilitate the recognition of which device was assigned to which subject. The subjects were weighed on a weekly basis to detect any variation which could indicate a potential health issue, and brought to a licensed veterinarian twice a year.
Figure 22. Each subject has its own device, placed on wood supports of distinct sizes, each etched with the name of the subject to which it is assigned.
Figure 23. Each device is placed on a wood support (as opposed to being carried by the experimenter), at a height comfortable for the subject, making the subject as autonomous as possible.
#### 4.2.2. Application Usability
Following the method for evaluating animal usability of Ruge and Mancini (2019), in order to minimize the potential frustration of the subjects when facing inadequate answers from the application, each version of the application was systematically tested by two human subjects, and any issue detected during such a phase was corrected before the version was presented to the nonhuman subjects. During the phase of software development, when a feature of the application (whether due to an error or to a setting which proved inadequate) was found to frustrate the subjects, the use of the application was replaced by another activity until the software was corrected, tested and separately validated by two human subjects.
#### 4.2.3. Sense of Agency
Both physical and virtual aspects of the protocol were designed to maintain a sense of agency in the subjects (see Mancini and Nannoni (2022) for a discussion of consent in animal-centered research ethics). The physical setting of the experimentation was designed to ensure that the subjects' participation was voluntary during all three phases of the process: the subjects were invited to come to the training area (but could, and sometimes did, refuse); at any time the subjects could fly from the training area back to their aviary, to a transportation pack with a large amount of seeds suspended above the training area, or to an alternate training area on the side, presenting an alternate choice of training exercises. Concerning the psychological aspects, the main menu of the application was designed so that each subject can choose in which visualisation mode they wish to play (see Figures 9 and 11), and a large orange “exit” button is present on the playing screen, allowing the subject to signal that they do not wish to play this game any more, prompting the experimenter to propose alternatives.
Figure 24. A screenshot of the game view of the application, asking the user to choose the largest disk out of four. Top left is the orange “Exit” button actionable by the subject. Bottom right is the settings button, requiring a long press to be activated. Bottom center is a summary of the game score.
Figure 26. The Monk Parakeet Lorenzo selecting the largest disc out of four.
The page to adjust the parameters controlling the difficulty of the games (e.g. domain and number of values displayed, length of a game, etc.), more complex display and sound choices (e.g. colors and spaces used in the display, words pronounced by the software in various situations, etc.), and the details of the application logs, is accessed via a special button requiring a longer press, making it harder for nonhuman subjects to access.
#### 4.2.4. Nonhuman Privacy issues
The logs collected by the application (described in Section 3.3) record only the tests presented, the subjects' answers and their response times: following Paci, Mancini and Nuseibeh (2022)'s case for animal privacy in the design of technologically supported environments, we consider that such log data does not constitute information which could be inferred to be considered private by the nonhuman subjects.
#### 4.2.5. Approval of the experimental protocol by CICUA
All interactions with animals were governed by a protocol reviewed and
approved by the Institutional Animal Care and Use Committee (IACUC) (in
Spanish “COMITÉ INSTITUCIONAL de CUIDADO y USO de ANIMALES (CICUA)”) of the
researchers’ institution, through a form of Experimentation Protocol of
Management and Care of Animals (“Protocolo de Manejo y Cuidado de Animales”).
### 4.3. Statistical Analysis Process
The statistical analysis of the experimental results was designed as part of the experimental protocol, with the objectives of computing the accuracy of each subject for each display mode and each size of the set of values presented to the subject (Section 4.3.1), comparing it with the accuracy of selecting a value uniformly at random (Section 4.3.2), and searching for correlations between the accuracy of the answers and some measures of the values presented (Section 4.3.3).
#### 4.3.1. Statistical tools used
The statistical analysis was performed in a Python notebook, executed and shared via the collaborative website https://colab.research.google.com: this platform was chosen because it makes it easy to collaborate among peers, as well as to run and replicate statistical analyses. The notebook was developed and tested on the logs generated during the (masked and unmasked) training sessions, to be used later without major modification on the logs generated during the masked experimental sessions of the testing phase. The computation made use of the following libraries:
* pandas is a library built on top of numpy to facilitate the manipulation and analysis of data.
* seaborn and matplotlib are libraries for the visualisation of statistical data; seaborn was used for the creation of correlation graphs, and matplotlib for heat maps.
* scipy is a free and open-source library for Python consisting of mathematical tools and algorithms; from this library we use scipy.stats for the chi-squared and binomial tests.
The Python notebook operates on the log files via the pandas library. These logs can be analyzed individually or concatenated to obtain an overall analysis of each test subject.
#### 4.3.2. Binomial Tests
The average accuracy of each subject, for each display mode and each size of the set of values presented to the subject, is the average of the Correction entry in the log (replacing true by $1$ and false by $0$) over all data points matching the criteria. For each such accuracy, we performed a binomial test in order to decide whether it was significantly better than the accuracy achieved by selecting a value uniformly at random. To calculate the binomial test, we count the successes among all the points of the dataset and apply the binom_test method from scipy:
$p=\mathtt{binom\_test}(k,\,n,\,prob,\,\mathtt{alternative}=\mathtt{'greater'}),$
where $k$ is the total number of successes and $n$ is the total number of attempts; over tests selecting the maximal value out of two, $prob=0.5$; over tests selecting the maximal value out of three, $prob=0.33$; and over tests selecting the maximal value out of four, $prob=0.25$. The 'greater' alternative is used since we are testing for an accuracy greater than $50\%$, $33\%$ and $25\%$, respectively. We performed such statistical analyses on the data of each particular session and on their union, on each particular visualization mode, on each type of visualisation mode (discrete or continuous), and on all visualisation modes together (see Tables 2, 3, 5 and 6).
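For illustration, the following sketch applies such a test to hypothetical counts (newer versions of scipy expose the same test as scipy.stats.binomtest, which returns an object with a pvalue attribute):

```python
from scipy.stats import binom_test

# Illustrative numbers only (not taken from the experimental logs):
# 140 correct answers out of 200 trials, choosing the maximum of 2 values.
k, n, prob = 140, 200, 0.5
p = binom_test(k, n, p=prob, alternative='greater')
print(f"accuracy = {k / n:.2f}, one-sided p-value = {p:.2e}")
```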
#### 4.3.3. Pearson Correlation Analysis
In order to compare our results with those of Al Aïn et al. (2008)'s experiments, we performed a Pearson correlation analysis of the relation between the accuracy of the subjects' answers, when asked to select the maximal out of two values, and the three variables they considered (a sketch of this computation follows the list):
* the _sum_ of the values for each test (e.g. from $1+2=3$ to $4+5=9$),
* the _difference_ between the two extreme values presented within a trial (e.g. from $1$ to $5-1=4$), and
* the _ratio_ of the quantities presented, obtained by dividing the smallest presented value by the largest one (e.g. from $\frac{1}{5}=0.2$ to $\frac{4}{5}=0.8$).
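A minimal sketch of this analysis, assuming the `logs` dataframe built in the log-loading sketch of Section 3.3, could read:

```python
from scipy.stats import pearsonr

# Restrict to the tests presenting exactly two values.
two = logs[logs["set_size"] == 2].copy()
small = two[["C_0", "C_1"]].min(axis=1)
large = two[["C_0", "C_1"]].max(axis=1)
two["sum"] = small + large
two["difference"] = large - small
two["ratio"] = small / large

# Accuracy per distinct pair of values, then Pearson correlation of that
# accuracy with each of the three measures considered by Al Aïn et al. (2008).
per_pair = (two.groupby(["sum", "difference", "ratio"])["Correction"]
               .mean().reset_index())
for measure in ["sum", "difference", "ratio"]:
    r, p = pearsonr(per_pair[measure], per_pair["Correction"])
    print(f"{measure}: r = {r:+.2f}, p = {p:.3f}")
```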
We describe the results of the experiments and their statistical analysis in
the next section.
## 5. Results
After relatively long phases of development and training (15 months) using various domains of values (from $\\{0,1\\}$ to $\\{0,1,\ldots,9\\}$), the experimental phase was quite short (one week), with all experiments performed using a masked setup and a domain of values restricted to the set $\\{1,2,3,4,5\\}$, in order to stay as close as possible to the settings of Al Aïn et al. (2008)'s study. We summarize the number and content of the logs obtained (Section 5.1), perform binomial tests on the experimental results when choosing the maximal value out of two for both subjects (Section 5.2), perform binomial tests on the experimental results when choosing the maximal value out of three and four for the first subject (Section 5.3), and perform various correlation tests between the performance of the subjects and the _sum_, _difference_ and _ratio_ of the values presented (Section 5.2.3).
### 5.1. Log results
A testing session typically consisted of some 5 to 10 games of 20 questions each, resulting in a log of 100 to 200 data points: see Figures 20 and 21 for a shortened example of a log, and Figure 27 for a longer one.
Test no, Test Name, Learner, Trainer, C_0, C_1, C_2, C_3, C_4, Value selected, Correction, Date, Answering Time (ms), Other Parameters
1, dice, Subject, Experimenter, 1,4,,,, 4, true, [2022-05-19 17:02(25.981)], 7946, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
2, dice, Subject, Experimenter, 1,5,,,, 5, true, [2022-05-19 17:02(30.82)], 3095, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
3, dice, Subject, Experimenter, 3,4,,,, 4, true, [2022-05-19 17:02(39.70)], 7981, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
4, dice, Subject, Experimenter, 1,4,,,, 4, true, [2022-05-19 17:02(46.295)], 6217, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
5, dice, Subject, Experimenter, 2,5,,,, 5, true, [2022-05-19 17:02(51.633)], 4331, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
6, dice, Subject, Experimenter, 3,1,,,, 1, false, [2022-05-19 17:03(00.79)], 7440, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
7, dice, Subject, Experimenter, 4,1,,,, 1, false, [2022-05-19 17:03(02.938)], 1852, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
8, dice, Subject, Experimenter, 1,2,,,, 2, true, [2022-05-19 17:03(06.86)], 2141, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
9, dice, Subject, Experimenter, 3,2,,,, 2, false, [2022-05-19 17:03(13.478)], 6383, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
10, dice, Subject, Experimenter, 4,5,,,, 5, true, [2022-05-19 17:03(16.578)], 2094, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
11, dice, Subject, Experimenter, 4,5,,,, 5, true, [2022-05-19 17:03(20.412)], 2826, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
12, dice, Subject, Experimenter, 1,4,,,, 4, true, [2022-05-19 17:03(28.740)], 7321, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
13, dice, Subject, Experimenter, 1,2,,,, 2, true, [2022-05-19 17:03(40.376)], 10629, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
14, dice, Subject, Experimenter, 1,2,,,, 2, true, [2022-05-19 17:03(53.7)], 11624, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
15, dice, Subject, Experimenter, 4,2,,,, 2, false, [2022-05-19 17:03(57.33)], 3018, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
16, dice, Subject, Experimenter, 5,1,,,, 1, false, [2022-05-19 17:04(00.23)], 1984, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
17, dice, Subject, Experimenter, 2,3,,,, 3, true, [2022-05-19 17:04(03.156)], 2127, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
18, dice, Subject, Experimenter, 4,1,,,, 1, false, [2022-05-19 17:04(07.608)], 3443, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
19, dice, Subject, Experimenter, 4,3,,,, 3, false, [2022-05-19 17:04(11.969)], 3354, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
20, dice, Subject, Experimenter, 2,3,,,, 3, true, [2022-05-19 17:04(16.363)], 3386, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
21, rect, Subject, Experimenter, 4,5,,,, 5, true, [2022-05-19 17:04(31.694)], 7191, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
22, rect, Subject, Experimenter, 3,4,,,, 4, true, [2022-05-19 17:04(36.330)], 3629, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
23, rect, Subject, Experimenter, 5,1,,,, 1, false, [2022-05-19 17:04(43.148)], 5810, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
24, rect, Subject, Experimenter, 4,2,,,, 2, false, [2022-05-19 17:04(44.926)], 772, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
25, rect, Subject, Experimenter, 1,5,,,, 5, true, [2022-05-19 17:04(46.731)], 798, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
26, rect, Subject, Experimenter, 1,5,,,, 5, true, [2022-05-19 17:04(51.117)], 3380, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
27, rect, Subject, Experimenter, 1,5,,,, 5, true, [2022-05-19 17:04(55.855)], 3730, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
28, rect, Subject, Experimenter, 1,4,,,, 4, true, [2022-05-19 17:05(00.581)], 3718, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
29, rect, Subject, Experimenter, 3,2,,,, 3, true, [2022-05-19 17:05(05.74)], 3487, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
30, rect, Subject, Experimenter, 2,4,,,, 4, true, [2022-05-19 17:05(08.885)], 2803, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
31, rect, Subject, Experimenter, 3,5,,,, 3, false, [2022-05-19 17:05(16.709)], 6818, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
32, rect, Subject, Experimenter, 4,1,,,, 4, true, [2022-05-19 17:05(18.396)], 678, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
33, rect, Subject, Experimenter, 4,2,,,, 2, false, [2022-05-19 17:05(22.709)], 3306, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
34, rect, Subject, Experimenter, 5,3,,,, 3, false, [2022-05-19 17:05(25.95)], 1380, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
35, rect, Subject, Experimenter, 5,2,,,, 5, true, [2022-05-19 17:05(29.448)], 3347, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
(...)
145, disc, Subject, Experimenter, 4,2,1,,, 2, false, [2022-05-19 17:32(07.287)], 802, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
146, disc, Subject, Experimenter, 5,4,3,,, 4, false, [2022-05-19 17:32(26.796)], 18496, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
147, disc, Subject, Experimenter, 4,5,1,,, 5, true, [2022-05-19 17:32(32.142)], 4334, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
148, disc, Subject, Experimenter, 3,5,4,,, 5, true, [2022-05-19 17:32(43.167)], 10015, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
149, disc, Subject, Experimenter, 5,4,1,,, 5, true, [2022-05-19 17:32(51.628)], 7448, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
150, disc, Subject, Experimenter, 4,2,5,,, 5, true, [2022-05-19 17:32(55.922)], 3283, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
151, disc, Subject, Experimenter, 5,2,3,,, 5, true, [2022-05-19 17:33(00.783)], 3850, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
152, disc, Subject, Experimenter, 1,4,2,,, 4, true, [2022-05-19 17:33(05.553)], 3757, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
153, disc, Subject, Experimenter, 5,1,4,,, 4, false, [2022-05-19 17:33(08.485)], 1921, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
154, disc, Subject, Experimenter, 5,1,3,,, 5, true, [2022-05-19 17:33(10.272)], 778, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
155, disc, Subject, Experimenter, 1,5,3,,, 5, true, [2022-05-19 17:33(15.484)], 4204, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
156, disc, Subject, Experimenter, 5,4,3,,, 4, false, [2022-05-19 17:33(19.152)], 2658, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
157, disc, Subject, Experimenter, 4,2,3,,, 3, false, [2022-05-19 17:33(22.15)], 1853, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
158, disc, Subject, Experimenter, 1,5,3,,, 5, true, [2022-05-19 17:33(24.137)], 1111, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
159, disc, Subject, Experimenter, 2,3,5,,, 5, true, [2022-05-19 17:33(27.386)], 2240, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
160, disc, Subject, Experimenter, 3,1,4,,, 3, false, [2022-05-19 17:33(30.852)], 2456, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
161, heap, Subject, Experimenter, 5,1,3,,, 5, true, [2022-05-19 17:33(40.558)], 3063, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
162, heap, Subject, Experimenter, 4,1,2,,, 4, true, [2022-05-19 17:33(46.447)], 4879, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
163, heap, Subject, Experimenter, 3,4,2,,, 4, true, [2022-05-19 17:33(51.684)], 4225, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
164, heap, Subject, Experimenter, 2,1,4,,, 4, true, [2022-05-19 17:33(54.658)], 1963, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
165, heap, Subject, Experimenter, 2,3,4,,, 4, true, [2022-05-19 17:33(59.410)], 3741, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
166, heap, Subject, Experimenter, 2,1,3,,, 2, false, [2022-05-19 17:34(03.670)], 3247, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
167, heap, Subject, Experimenter, 5,4,3,,, 3, false, [2022-05-19 17:34(05.808)], 1128, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
168, heap, Subject, Experimenter, 2,3,1,,, 3, true, [2022-05-19 17:34(07.846)], 1029, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
169, heap, Subject, Experimenter, 2,3,5,,, 5, true, [2022-05-19 17:34(19.932)], 11078, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
170, heap, Subject, Experimenter, 2,3,4,,, 4, true, [2022-05-19 17:34(25.404)], 4462, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
171, heap, Subject, Experimenter, 1,5,2,,, 5, true, [2022-05-19 17:34(31.359)], 4945, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
172, heap, Subject, Experimenter, 1,4,5,,, 5, true, [2022-05-19 17:34(35.552)], 3182, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
173, heap, Subject, Experimenter, 2,4,3,,, 4, true, [2022-05-19 17:34(39.311)], 2747, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
174, heap, Subject, Experimenter, 2,4,3,,, 4, true, [2022-05-19 17:34(44.188)], 3867, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
175, heap, Subject, Experimenter, 5,3,1,,, 5, true, [2022-05-19 17:34(48.508)], 3308, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
176, heap, Subject, Experimenter, 3,4,2,,, 4, true, [2022-05-19 17:34(53.468)], 3950, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
177, heap, Subject, Experimenter, 3,2,5,,, 5, true, [2022-05-19 17:34(56.511)], 2032, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
178, heap, Subject, Experimenter, 5,2,4,,, 4, false, [2022-05-19 17:35(02.323)], 4802, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
179, heap, Subject, Experimenter, 5,2,4,,, 4, false, [2022-05-19 17:35(04.70)], 737, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
180, heap, Subject, Experimenter, 3,2,1,,, 2, false, [2022-05-19 17:35(06.6)], 926, background black, foreground green, bg opacity .2, Value Set [1,2,3,4,5]
Figure 27. A page-long example of the log generated by the application, obtained by taking the first and last 35 entries of the log of the masked session on the afternoon of the 19th of May 2022. One can observe that the session started with two games selecting the largest of two values, displayed as dice and as rectangles, and ended with two games selecting the largest of three values, displayed as discs and heaps.
The testing phase occurred between the 19th of May 2022 and the 26th of May
2022. These experiments used four different display modes (“Dice”, “Heap”,
“Disc” and “Rectangle”), requesting the subject to select the maximal value
out of a set of 2, 3 or 4 values, randomly chosen among a set of five values
$\\{1,2,3,4,5\\}$, in order to produce a setup relatively similar to that of
Al Aïn et al. (2008), with the vast majority of experiments selecting the
maximal out of two values, and only a few out of three or four values. Each
log corresponds to a separate training session and device, containing between
80 and 400 entries (each entry being a separate question and answer). In
total, 14 logs were collected for the first subject, and 5 logs were collected
for the second subject: the first subject was requested to select the maximal
value out of 2,3 or 4 values, while the second subject was requested to select
the maximal value only out of 2 values. Concerning the selection of the maximal out of 2 values (the setting most similar to that of Al Aïn et al. (2008)), the first subject answered 449 dice tests, 400 heap tests, 262 rectangle tests, and 103 disc tests, making a total of 1214 tests, while the second subject answered 190 dice tests, 26 rectangle tests, and 193 disc tests, making a total of 409 tests. Concerning the selection of the maximal out of 3
values, the first subject answered 249 dice tests, 120 heap tests, 120
rectangle tests, and 99 disc tests, making a total of 588 tests. Concerning
the selection of the maximal out of 4 values, the first subject answered 154
dice tests, 51 heap tests, and 13 disc tests, making a total of 218 tests.
See Table 1 for a summary of the number of data points collected separated by
display modes (“Dice”, “Heap”, “Disc” and “Rectangle”), accumulated by the
type of display mode (“Discrete” or “Continuous”) and accumulated over all
display modes (“Total”). Even though the care taken to respect the agency of the subjects introduced great imbalances between the numbers of data points collected for each display mode and set size, 2429 data points were gathered in only one week through the voluntary participation of the subjects: this is much higher than what could be achieved in the same amount of time via traditional analogue protocols such as that of Al Aïn et al. (2008).
Subject | Set Size | Dice | Heap | Discrete | Disc | Rectangle | Continuous | Total
---|---|---|---|---|---|---|---|---
1 | 2 | 449 | 400 | 849 | 103 | 262 | 365 | 1214
1 | 3 | 249 | 120 | 369 | 99 | 120 | 219 | 588
1 | 4 | 154 | 51 | 205 | 13 | 0 | 13 | 218
2 | 2 | 190 | 0 | 190 | 193 | 26 | 219 | 409
1 | total | 852 | 571 | 1423 | 215 | 382 | 597 | 2020
2 | total | 190 | 0 | 190 | 193 | 26 | 219 | 409
total | total | 1042 | 571 | 1613 | 408 | 408 | 816 | 2429
Table 1. Number of data points collected separated by display modes (“Dice”,
“Heap”, “Disc” and “Rectangle”), accumulated by the type of display mode
(“Discrete” or “Continuous”) and accumulated over all display modes (“Total”).
The imbalance between the frequencies of the display modes and between the
amounts of test results for each subject is explained by the care taken to support
the agency of the subjects: they could interrupt the session at any time, and
had the option to choose the display mode at any time (even though they seldom
did).
We analyze those results statistically in the following sections.
### 5.2. Selecting the maximal value out of two
Both subjects played the game in the four display modes, the first subject
showing much more interest in participating than the second one, but neither of them showing a particular preference for any display mode. The first subject
showed an average accuracy of $81.79\%$ (Section 5.2.1), the second subject an
average accuracy of $74\%$ (Section 5.2.2). Both performed better when the
values were very different and worse when the values were close (Section
5.2.3), exactly as the three African Grey parrots in (Al Aïn et al., 2008)’s
study.
#### 5.2.1. First Subject
The results show a clear ability from the first subject to discriminate the
maximal value out of two quantities. Over all experimentations requesting to
select the maximal value out of two, the first subject responded correctly
$993$ times out of a total of $1214$ trials, corresponding to an average
accuracy of $81.79\%$. A simple binomial test indicates that the probability of achieving such an accuracy or better by answering $1214$ such binary questions uniformly at random is $p=1.95\cdot 10^{-117}$ (see Table 2 for a more detailed
description of the results, by session and by display mode). It is very likely
that the subject did _not_ answer uniformly at random. Barring a bias in the experimental protocol, this seems to indicate a clear ability to discriminate
between the two values being presented.
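Such a test is straightforward to reproduce with standard statistical tools; the following is a minimal sketch (assuming SciPy is available; the counts are those reported above):

```python
# A minimal sketch of the binomial test reported above, assuming SciPy;
# 993 correct answers out of 1214 binary questions, chance level 0.5.
from scipy.stats import binomtest

result = binomtest(k=993, n=1214, p=0.5, alternative="greater")
print(result.pvalue)  # should be of the order of the reported 1.95e-117
```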
Analyzing the results separately for each display mode, or for each type of display mode, corroborates the result and points out some interesting facts. First, one does not observe any relevant improvement over time, which is explained by the relatively long period of training before the relatively short period of testing. Second, overall, the subject performed with a slightly better accuracy for continuous display modes ($88\%$ vs $79\%$) and, surprisingly (because one would expect the reverse for humans, and similar accuracies for nonhumans), a much better accuracy for the “Heap” display mode than for the “Dice” display mode ($85\%$ vs $74\%$).
Session | Dice | Heap | Discrete | Disc | Rectangle | Continuous | Total
---|---|---|---|---|---|---|---
19,17h | $65(1e^{-1})$ | $60(2e^{-1})$ | $62(7e^{-2})$ | $90(2e^{-4})$ | $75(2e^{-2})$ | $82(2e^{-5})$ | $72(3e^{-5})$
21,17h | $80(5e^{-17})$ | $93(1e^{-15})$ | $84(1e^{-29})$ | $91(3e^{-5})$ | $95(3e^{-14})$ | $94(3e^{-18})$ | $86(6e^{-45})$
23,08h | $80(1e^{-5})$ | $84(2e^{-3})$ | $81(8e^{-8})$ | (no data) | $90(8e^{-15})$ | $90(8e^{-15})$ | $86(1e^{-20})$
23,15h | $70(5e^{-3})$ | $86(1e^{-20})$ | $82(8e^{-21})$ | $88(1e^{-8})$ | (no data) | $88(1e^{-8})$ | $83(1e^{-27})$
24,10h | $66(1e^{-4})$ | (no data) | $66(1e^{-4})$ | (no data) | (no data) | (no data) | $66(1e^{-4})$
24,17h | (no data) | $83(3e^{-16})$ | $83(3e^{-16})$ | (no data) | (no data) | (no data) | $83(3e^{-16})$
25,08h | (no data) | (no data) | (no data) | $60(3e^{-1})$ | $86(4e^{-14})$ | $84(1e^{-13})$ | $84(1e^{-13})$
25,13h | $71(3e^{-2})$ | (no data) | $71(3e^{-2})$ | (no data) | (no data) | (no data) | $71(3e^{-2})$
Total | $74(3e^{-25})$ | $85(78e^{-49})$ | $79(6e^{-69})$ | $86(81e^{-15})$ | $89(3e^{-40})$ | $88(2e^{-53})$ | $82(1e^{-117})$
Table 2. Finer analysis of the first subject’s performance on selecting the
maximal value out of two, separated by display modes (“Dice”, “Heap”, “Disc”
and “Rectangle”), accumulated by the type of display mode (“Discrete” or
“Continuous”) and accumulated over all display modes (“Total”). The sessions
occurred during the month of May 2022 and are identified by the date $d$ and
hour $h$ (e.g. the session which occurred at 17:02 on the 19th of May 2022 is
identified by the tag “19,17h”). Each entry is in the format $a(p)$ where $a$
is the accuracy reported, and $p$ is the probability of achieving such
accuracy or better by selecting answers uniformly at random. Note how the
accuracy percentages are mostly above $80\%$, and that the probability of such
accuracy or a better one to be attained by selecting answers uniformly at
random is smaller than $0.001$ in almost all the cases.
#### 5.2.2. Second Subject
The second subject was more reluctant to play, but showed a similar ability. Over all experiments requesting to select the maximal value out of two during the testing phase, the second subject responded correctly $303$ times out of a total of $409$ trials, corresponding to an average accuracy of $74\%$. A simple binomial test indicates that the probability of answering correctly $303$ or more such binary questions out of $409$ by answering uniformly at random is $p=2.24\cdot 10^{-23}$: here again, it is very likely that the subject did _not_ answer uniformly at random. Barring a bias in the experimental protocol, this seems to indicate a clear ability to discriminate
between the two values being presented.
Session | Dice | Heap | Discrete | Disc | Rect | Continuous | Total
---|---|---|---|---|---|---|---
21,10h | $84(5e^{-15})$ | (no data) | $84(5e^{-15})$ | (no data) | $73(1e^{-2})$ | $73(1e^{-2})$ | $82(6e^{-16})$
23,15h | $64(5e^{-2})$ | (no data) | $64(5e^{-2})$ | (no data) | (no data) | (no data) | $64(5e^{-2})$
24,08h | (no data) | (no data) | (no data) | $79(5e^{-13})$ | (no data) | $79(5e^{-13})$ | $79(5e^{-13})$
24,17h | $51(5e^{-1})$ | (no data) | $51(5e^{-1})$ | $57(2e^{-1})$ | (no data) | $57(2e^{-1})$ | $55(2e^{-1})$
Total | $74(3e^{-12})$ | (no data) | $74(3e^{-12})$ | $73(2e^{-11})$ | $73(1e^{-2})$ | $73(1e^{-12})$ | $74(2e^{-23})$
Table 3. Finer analysis of the second subject’s performance on selecting the
maximal value out of two, separated by display mode and combined. Note how the
accuracy percentages are in between 51% (not much better than random, on the
last session) and 84% with an average of 74% (both much better than random),
and that the probability of such accuracy to be attained by selecting answers
uniformly at random is $p<0.001$ in almost all the cases.
#### 5.2.3. Relation between accuracy and variables
When selecting the maximal value out of two, both subjects showed a lower
accuracy when the two values were close (difference or ratio close to $1$):
see Table 4 for the percentages of correct answers for each subject and each
of the $10$ sets of values presented (ignoring the order). Such results
corroborate those of the three African Grey parrots in Al Aïn et al. (2008)’s
study.
Pearson’s correlation tests for the first subject (see Figure 28 for the corresponding heat map and Figure 29 for the corresponding scatter plots)
suggest an inverse correlation between the accuracy of the subject’s selection
and the ratio between the two values: for example, for a combination with
small ratio $\frac{1}{5}=0.2$, the subject is more likely to correctly select
the maximal value.
Value Set | Total | Difference | Ratio | Accuracy
---|---|---|---|---
$(x,y)$ | $x+y$ | $y-x$ | $y/x$ | 1st Subject | 2nd Subject
{1,2} | 3 | 1 | 0.5 | 81% | 69%
{1,3} | 4 | 2 | 0.33 | 90% | 70%
{1,4} | 5 | 3 | 0.25 | 93% | 78%
{1,5} | 6 | 4 | 0.2 | 94% | 94%
{2,3} | 5 | 1 | 0.66 | 82% | 57%
{2,4} | 6 | 2 | 0.5 | 81% | 68%
{2,5} | 7 | 3 | 0.4 | 96% | 76%
{3,4} | 7 | 1 | 0.75 | 67% | 45%
{3,5} | 8 | 2 | 0.6 | 73% | 70%
{4,5} | 9 | 1 | 0.8 | 55% | 71%
Table 4. Both subjects’ accuracy for each pair of values from the domain $\\{1,2,3,4,5\\}$. Total is the total value of the representation shown to the test subject, Difference is the difference between the two values presented, and Ratio is the smallest quantity divided by the largest quantity. For
the first subject, note how the lowest _Accuracy_ ($55\%$) corresponds to the
highest _ratio_ ($0.8$), while for the second subject the lowest _Accuracy_
($45\%$) corresponds to the second highest _ratio_ ($0.75$), suggesting a
trend confirmed by the Pearson’s correlation tests.
Figure 28. Heat map correlation plot between the variables described in Table
4 for the first subject. Notice the strong negative correlation ($-0.9$)
between _Accuracy_ and _Ratio_ on the one hand, and the strong positive
correlation ($0.74$) between _Accuracy_ and _Difference_ on the other hand.
Figure 29. Scatter-plot of the variables described in Table 4 for the first
subject. The diagonal plots show the distribution of the values of each
variable. Note the uniform distribution of the _Total_ and _Ratio_.
There is a strong negative correlation coefficient of $r=-0.9$ between the accuracy and the ratio, and a positive correlation coefficient of $r=0.74$ between the accuracy and the difference (see the heat map in Figure 28). The scatter plots
(in Figure 29) show a decreasing relationship between the accuracy and the
ratio, and an increasing relationship between the accuracy and the difference.
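These coefficients are easy to recompute; a minimal sketch (assuming SciPy, with the ratio, difference and first-subject accuracy columns transcribed from Table 4):

```python
# A minimal sketch of the Pearson correlation analysis, using the values
# transcribed from Table 4 for the first subject.
from scipy.stats import pearsonr

ratio      = [0.5, 0.33, 0.25, 0.2, 0.66, 0.5, 0.4, 0.75, 0.6, 0.8]
difference = [1, 2, 3, 4, 1, 2, 3, 1, 2, 1]
accuracy   = [81, 90, 93, 94, 82, 81, 96, 67, 73, 55]  # in percent

r_ratio, _ = pearsonr(ratio, accuracy)       # approximately -0.9
r_diff, _  = pearsonr(difference, accuracy)  # approximately 0.74
print(f"accuracy vs ratio: r={r_ratio:.2f}; "
      f"accuracy vs difference: r={r_diff:.2f}")
```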
There is a similar correlation between accuracy and ratio in the results of the second subject (see the heat map in Figure 30 and the scatter plots in Figure 31). There is a strong negative correlation coefficient of $r=-0.74$ between the ratio and the accuracy. The correlation coefficient of $r=0.52$ between the difference and the accuracy is much weaker.
Figure 30. Heat map correlation plot between the variables described in Table
4 for the second subject. Notice the negative correlation ($-0.74$) between
_Accuracy_ and _Ratio_.
Figure 31. Scatter plots for the variables described in Table 4 for the second
subject.
### 5.3. Selecting the maximal value out of three and four values
Only the first subject was tested on selecting the maximal value out of three
and four values: the second subject chose to stay in the “Nest” aviary or to
play the digital piano for the remaining sessions. The subject showed a lower accuracy when asked to select the maximal value out of 3 or 4 than out of 2: on average they achieved an accuracy of $70\%$ for selecting the maximal out of three and $62\%$ for the maximal out of four, but still much better than what would be expected ($33\%$ and $25\%$ respectively) if the subject chose
uniformly randomly among the values proposed (see Tables 5 and 6 for the
detailed performances separated by display mode and sessions).
Session | Dice | Heap | Discrete | Disc | Rect | Continuous | Total
---|---|---|---|---|---|---|---
19,17h | $55(4e^{-3})$ | $75(2e^{-4})$ | $61(7e^{-6})$ | $6(1e^{-2})$ | $85(3e^{-6})$ | $72(5e^{-7})$ | $66(3e^{-11})$
22,09h | $51(4e^{-3})$ | $65(4e^{-4})$ | $56(1e^{-5})$ | $78(1e^{-10})$ | (no data) | $78(1e^{-10})$ | $64(2e^{-13})$
22,11h | $57(1e^{-4})$ | $64(9e^{-6})$ | $60(6e^{-9})$ | $90(1e^{-16})$ | $89(6e^{-31})$ | $89(3e^{-46})$ | $77(2e^{-47})$
25,16h | $67(6e^{-12})$ | (no data) | $67(6e^{-12})$ | (no data) | (no data) | (no data) | $67(6e^{-12})$
Total | $59(1e^{-17})$ | $66(1e^{-11})$ | $61(2e^{-27})$ | $80(1e^{-25})$ | $88(7e^{-36})$ | $84(2e^{-59})$ | $70(3e^{-77})$
Table 5. Finer analysis of the first subject’s performance on selecting the
maximal value out of three, separated by display mode and combined. Note that choosing uniformly at random among the 3 options would yield an average accuracy of 33%, so accuracies between 51% and 84% are well above chance, as reflected by the probabilities (in parentheses) of achieving such accuracy or better by choosing uniformly at random.
Two simple binomial tests give a more formal measure of how much better the subject performed compared to someone choosing uniformly at random: the probabilities of obtaining an equivalent or superior accuracy by randomly choosing the same number of answers are $p=3.479\cdot 10^{-77}$ with probability $0.33$ of success (for selecting the maximal out of 3 values $588$ times) and $p=2.549\cdot 10^{-31}$ with probability $0.25$ of success (for selecting the maximal out of 4 values $218$ times): with very high probability, the subject showed their ability to discriminate between three and four values.
Session | Dice | Heap | Discrete | Disc | Rect | Continuous | Total
---|---|---|---|---|---|---|---
25,16h | $59(7e^{-20})$ | (no data) | $59(7e^{-20})$ | (no data) | (no data) | (no data) | $59(7e^{-20})$
26,09h | (no data) | $72(1e^{-12})$ | $72(1e^{-12})$ | $53(2e^{-2})$ | (no data) | $53(2e^{-2})$ | $68(2e^{-13})$
Total | $59(7e^{-20})$ | $72(1e^{-12})$ | $62(2e^{-30})$ | $53(2e^{-2})$ | (no data) | $53(2e^{-2})$ | $62(3e^{-31})$
Table 6. Finer analysis of the first subject’s performance on selecting the
maximal value out of four, separated by display mode and combined. Note that choosing uniformly at random among the 4 options would yield an average accuracy of 25%, so accuracies between 53% and 72% are well above chance, as reflected by the probabilities (in parentheses) of achieving such accuracy or better by choosing uniformly at random.
## 6. Conclusion
We conclude with a summary of what the project has achieved to date (Section
6.1), a discussion of the potential issues with the results presented (Section
6.2) and some perspective for future research (Section 6.3).
### 6.1. Achievements
Whereas Al Aïn et al. (2008)’s protocol requested the subject to choose
between two pieces of cardboard holding distinct amounts of food, for discrete and continuous types of food material, we proposed a protocol which requests
the subject to choose the largest among a set of values (of parameterized
size) on a visual display, using discrete and continuous representations of
values, by touching a touchscreen on the representation of the largest value.
By developing a simple but extensively parameterized web application
requesting the user to select the largest among two to four values chosen at
random, using discrete and continuous representations of values and providing
visual and audio feedback about the correctness of the answer, we achieved a
solution with various advantages, which we tentatively list as follows.
#### 6.1.1. Better guarantees against subjects reading potential cues from
the experimenter
In the context of the measurement of the discrimination abilities between
discrete and continuous quantities of subjects, we designed a variant of Al
Aïn et al. (2008)’s experimental protocol which presents better guarantees
against subjects reading potential cues from the experimenter. Whereas their
protocol is performed in the presence of a human experimenter who knows the complete set-up of the experiment, in our variant the experimenter can remain unaware of the options offered to the subjects, receiving audio feedback indicating whether or not to reward the subject (see Section 2.5 for the definition of a
masked experimental set-up).
#### 6.1.2. Generalization of results to Monk Parakeets
Using such protocol, we replicated and generalized the results obtained by Al
Aïn et al. (2008) on the discrimination abilities of three African Grey
(_Psittacus erithacus_) parrots to two Monk Parakeets (_Myiopsitta monachus_). Concerning the ability to discriminate the largest between 2 values chosen randomly in a domain of 5 distinct values, in discrete or continuous quantities, the two Monk Parakeets performed as well as the
three African Grey parrots from Al Aïn et al. (2008)’s study, with global
accuracies of $82\%$ for the first subject and $74\%$ for the second one (see
Section 5.2 for the detailed results). Similarly to the results described by
Al Aïn et al. (2008), we found a strong correlation between the ratio between
the smallest and largest values and the accuracy of the subject: close values
are harder to discriminate than others.
#### 6.1.3. Increased agency of the subject
A subject’s sense of _agency_, defined as the faculty of the subject to take decisions and to act upon them, has been shown to be an important factor in the
well-being of captive nonhuman animals (Mancini, 2017; Perdue et al., 2012;
Kohn, 1994). In addition to features from the experimental protocol aiming to
promote the subject’s sense of agency, the web application itself provides
various means for the subject to exert its agency, from the ability to choose
the mode of display of the values to the ability to interrupt the game at any
time and to choose a different mode of display.
#### 6.1.4. Extension to tuples
Taking advantage of the extensive parametrization of the web application, we
slightly extended the settings of Al Aïn et al. (2008)’s study from pairs to
tuples: whereas their protocol requested the subject to choose between only
two quantities, we were able to study the discrimination abilities not only
between pairs of values, but also among triples and quadruples of values, showing a reduction of accuracy as the size of such sets increased.
#### 6.1.5. Diversifying Discrete and Continuous Representations
Furthermore, we refined the analysis by diversifying the types of discrete and
continuous representations of values (Section 5.2), again with the subjects
showing an accuracy similar to that of the study of Al Aïn et al. (2008).
#### 6.1.6. Increased Number of Experiments
The web application used in the experiments is similar to other digital life
enrichment applications made available to nonhuman animals by their guardians.
Similarly to the behavior described by Washburn (Washburn, 2015; Richardson et
al., 1990) of apes presented with digital life enrichment applications serving
as cognition tests, the subjects often chose to play with the web application over the other toys made available to them, and often asked to continue playing after the end of a game. This allowed for multiple repetitions of the experiments, and for gathering a large amount of data points without incommoding the subjects: the two subjects of this study voluntarily answered a total of 2429 tests in one
week (see Table 1 for a summary), without any observable negative consequences
during nor after the end of the testing phase.
#### 6.1.7. True Randomness
The web application generates the instances presented to the subjects
uniformly at random, whereas the high organisational cost of Al Aïn et al.
(2008)’s protocol limited it to testing the exhaustive enumeration of pairs
between values from a specific domain, in a random order. The later could
yield some issues if the domain is sufficiently small that a subject could
deduce the answer to some questions by an elimination process, based on
previous answers. As Al Aïn et al. (2008) considered a domain of values of
size $5$, the amount of distinct unordered pairs is $\frac{5\times 4}{2}=10$,
a list which subjects with working memory abilities similar to humans might be
able to manage. Beyond the fact that the web application allows the use of a
domain of size up to $10$ (which brings the amount of distinct unordered pairs
to $\frac{10\times 9}{2}=45$), and of sets of values of size larger than two,
the fact that the sets of values presented to the subject are generated at
random completely removes the possibility for a subject to deduce the answer
to some questions by an elimination process, based on previous answers.
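As an illustration of such a generation procedure, here is a minimal sketch (the application itself is implemented with Svelte, so this Python version is purely illustrative):

```python
# A minimal sketch of uniform instance generation: `set_size` distinct values
# drawn uniformly at random from a domain of consecutive integers.
import random

def generate_instance(domain_size: int = 5, set_size: int = 2) -> list[int]:
    return random.sample(range(1, domain_size + 1), k=set_size)

values = generate_instance(domain_size=5, set_size=3)  # e.g. [2, 5, 1]
answer = max(values)  # the correct answer is the maximal value presented
```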
#### 6.1.8. Automatic generation of the experimental logs
The web application automatically generates locally a log of the subject’s
interactions with it. This greatly reduces the generation cost of such a log,
reduces the probability of errors in it, and increases the amount of
information captured by it, such as the exact time of each answer, allowing for instance the computation of the amount of time taken to answer each question, or studies of the relation between the time of day and/or the weather and performance (albeit we did not take advantage of such information in the present study).
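For instance, computing per-question response times reduces to parsing the log entries; a minimal sketch (with field positions inferred from the example log of Figure 27, and therefore hypothetical):

```python
# A minimal sketch of parsing one log entry; field positions are inferred
# from the example log in Figure 27 and may not match the actual schema.
from dataclasses import dataclass

@dataclass
class LogEntry:
    entry_id: int      # e.g. 158
    mode: str          # "dice", "heap", "disc" or "rectangle"
    values: list[int]  # values presented, e.g. [1, 5, 3]
    answer: int        # value selected by the subject
    correct: bool      # whether the maximal value was selected
    timestamp: str     # e.g. "[2022-05-19 17:33(24.137)]"
    duration_ms: int   # time taken to answer, e.g. 1111

def parse_entry(line: str) -> LogEntry:
    f = [s.strip() for s in line.split(",")]
    return LogEntry(
        entry_id=int(f[0]),
        mode=f[1],
        values=[int(v) for v in f[4:9] if v],  # up to five value slots
        answer=int(f[9]),
        correct=(f[10] == "true"),
        timestamp=f[11],
        duration_ms=int(f[12]),
    )
```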
#### 6.1.9. Reduction of Experimental Cost
As the web application can be run on a simple pocket device, this reduces the
cost of running such experiments to the extreme that it can be run on the
experimenter’s shoulder while the device is held by hand (at the cost of some accuracy in the results of such an experiment). Such a lowered cost might prove to
be key in the design of citizen science projects extending this work.
### 6.2. Discussion
Our digital adaptation of Al Aïn et al. (2008)’s experimental protocol presents some other key differences, which might make the results of our study relatively difficult to compare to those of Al Aïn et al. (2008). We attempt to list such differences as follows:
#### 6.2.1. Non proportional rewards and reward withdrawal
The protocol defined by Al Aïn et al. (2008) instructs to reward the subject
with the content of the container they chose: the magnitude of the reward is proportional to the value being selected. The protocol we defined instructs to
reward the subject with a single type of reward each time it does select the
maximal value of the set, and to withdraw such reward when the subject fails
to do so. Such a difference might alter the experiment in at least two
distinct ways:
* •
The proportionality of rewards could result in a larger incentive to select
the maximal value when the difference between the two values is the largest,
and a reduced incentive when the difference is small, and Al Aïn et al. (2008)
indeed noticed a correlation between the gap between the two values and the
accuracy of the answer from the subjects of their experiment. The absence of
such proportionality in our experiments might have reduced such an incentive,
but we observed the same correlation than they did (described in Section
5.2.3).
* •
The withdrawal of rewards when the subject fails to select the largest value
of the set is likely to affect the motivation of the subject to continue participating in the exercise in the short term, and in the experiment in the long term. To mitigate the frustration caused by such withdrawal, extensive
care was taken to progressively increase the difficulty of the exercises
(first through the size of the domain from which the values were taken, then
through the size of the set of values from which to select the maximal one).
No frustration was observed, with both subjects often choosing to continue
playing at the end of a game.
Implementing the proportionality of rewards is not incompatible with the use
of a digital application. For instance, it would be relatively easy to extend
the web application to vocalize the value selected by the subject, so that the
experimenter could reward the subject with the corresponding amount of food.
Such an extension was not implemented mostly because it would slow down the
experimentation, for relatively meagre benefits.
#### 6.2.2. Irregular pairs and tuples
The web application generates the sets of values presented to the subject uniformly at random (without repetitions) from the domain of values set in the parameter page. While such a random generation yields various advantages, it has a major drawback concerning the statistical analysis of the results, as some sets of values might be under-represented. A balanced representation of each possible set of values is guaranteed only on average and for a large number of exercises, whereas Al Aïn et al. (2008)’s protocol, using a systematic enumeration of the possible sets of values (presented in a random order to the subject), does not yield such issues. Such an issue was deliberately
ignored in order to develop a solution able to measure discrimination
abilities on values taken from large domains (assuming that some nonhuman
species might display abilities superior to that of humans in this regard),
and presenting the subject with a systematic enumeration of the possible sets
of values is practical only for small domains (e.g. values from 1 to 5), not
for large domains. For a domain of size $5$ (as that of Al Aïn et al. (2008)),
enough data points were generated that no pair was under-represented (see Table 4).
#### 6.2.3. Extension to sensory diverse species
The colors displayed by digital displays and the sound frequencies played by
devices are optimized for the majority of humans. It is not always clear how
much and which colours and sounds can be seen and heard by individuals of each species. The web application presents extensive parameters to vary the colours
displayed and the sounds played to the subject. Even less intuitively, species can differ in their Critical Flicker Fusion Frequency (CFFF) (Mankowska et al., 2021), the frequency at which they perceive the world and can react to it (in some species, such frequency even varies depending on the time of the day or on the season (Reas, 2014; Healy et al., 2013)). For instance, dogs have higher CFFFs while cats have lower ones, and the CFFF of reptiles varies with the ambient temperature. Such variations might affect not only their ability to comprehend the visual displays and sounds played by devices, but might also affect how they comprehend some application designs over others. The web application presents extensive parameters to vary the time between each exercise and each game, so that part of the rhythm of the application can be
adjusted by the experimenter to the CFFF of the subject, but more research is
required in order to automatically adapt the rhythm of such applications to
the CFFF of individuals from a variety of species.
### 6.3. Perspective on future work
Some issues with the results presented in this work are not related to any difference with Al Aïn et al. (2008)’s experimental protocol, but rather to limitations of the current one. We list them along with some tentative
solutions, to be implemented in the future.
#### 6.3.1. Random Dice and Heap representations
The discrete representation modes Dice and Heap associate each value with a
fixed representation of a number of points corresponding to the value being
represented. This differs from what happens in Al Aïn et al. (2008)’s experimental protocol, where the seeds are not arranged in any particular configuration on the cardboard. This might affect the results of the experiment in that a subject could learn to select a particular symbol (e.g. the one corresponding to the largest value of the domain) anytime it is present, without any need to compare the presented values. Checking whether value sets including the largest value of the domain yield a better accuracy than others could indicate whether the subjects learned to select the corresponding symbol anytime it is present, without comparing values. The development of randomized representations and the evaluation of their impact on the discrimination abilities of human and nonhuman subjects will be the topic of a future study, once such representations have been added to the web application; a possible placement strategy is sketched below.
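A randomized placement could be as simple as rejection sampling of non-overlapping positions; a minimal sketch (illustrative only, with an arbitrary point radius and retry budget):

```python
# A minimal sketch of randomized point placement for a Heap-like display:
# rejection sampling of non-overlapping point centres in the unit square.
import random

def random_positions(n: int, radius: float = 0.06, tries: int = 1000):
    points: list[tuple[float, float]] = []
    for _ in range(tries):
        if len(points) == n:
            break
        x = random.uniform(radius, 1 - radius)
        y = random.uniform(radius, 1 - radius)
        if all((x - px) ** 2 + (y - py) ** 2 >= (2 * radius) ** 2
               for px, py in points):
            points.append((x, y))
    return points  # may return fewer than n points if space runs out
```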
#### 6.3.2. Systematic logs
The ease with which logs are generated tends to make one forget about them, to the point that the bottleneck could become the transfer of the logs from the device used to perform the experiment to a central repository. As a guardian might be more excited to transfer the logs of sessions where the subjects excelled at the activities than those of less positive sessions, this might create a bias toward positive results in their reports. While not an issue when implemented by personnel with scientific training, such a risk of bias might become more problematic in the context of a citizen science project (Association, 2021). The development of a website serving as a central
repository of experimental data sent by web applications such as the one
presented in this work will be the topic of a future study. The roles of such a central “back-end” website could include the automation of the most frequent statistical tests on the data received; a greater ease of separation
between the roles of experimenter and researcher, which will be an important
step toward a true citizen science generalisation of this project; and the
aggregation of sensory and cognitive data from distinct applications,
individuals and species.
#### 6.3.3. Adaptive Difficulty
The large number of parameters available in the settings page of the web application makes it possible to adapt the difficulty of the activities to the level of abilities of the subject. Such abilities evolve with time, most often advancing and only rarely receding (such as after a long period without using the web application). Choosing which values of the parameters are the most adequate for the current level of abilities of the subject requires an extensive understanding of the mechanisms of the application. An extension of the web application presenting the subject with a sequence of parametrizations of increasing difficulty, along with a mechanism raising or lowering the difficulty of the activities presented to the subject, would greatly simplify the task of the experimenter, and will be the topic of a future study.
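Such a mechanism could follow a simple staircase rule; a minimal sketch (the thresholds are illustrative assumptions, not values from the application):

```python
# A minimal sketch of a staircase-style difficulty adjustment over a sequence
# of parametrizations of increasing difficulty; thresholds are illustrative.
def adjust_level(level: int, recent_accuracy: float, max_level: int) -> int:
    if recent_accuracy >= 0.80 and level < max_level:
        return level + 1   # raise difficulty after sustained good performance
    if recent_accuracy <= 0.50 and level > 0:
        return level - 1   # lower difficulty after sustained poor performance
    return level
```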
#### 6.3.4. Cardinal Discrimination
Pepperberg (2006) recounts how the African Grey parrot (_Psittacus erithacus_)
Alex, after being trained to identify Arabic numerals from 1 to 6 (but not to
associate Arabic numbers with their relevant physical quantities) was able to
label which of two Arabic numerals is the biggest, having inferred the
relationship between the Arabic number and the quantity, and having understood
the ordinal relationship of his numbers. Modifying the web application InCA-WhatIsMore so as to replace the graphical representations of values by ordinal numbers would be easy. Testing ethically the ability or inability of subjects to replicate Pepperberg (2006)’s results without frustrating those subjects might require more sophistication in the design of the experimental protocol. Such protocols, concerning the measurement of skills that a subject might lack, are the topic of the next section.
#### 6.3.5. Ethical Measurement of Inabilities
The frustration potentially caused by the withdrawal of rewards (described in Section 6.2.1) when measuring skills that a subject might lack (an example of which was given in Section 6.3.4) points to another issue, of an ethical dimension: how can one ethically demonstrate the inability of subjects to perform a given action through experimentation, without hurting the well-being of the subjects by exposing them to the frustration of failing to perform the requested action? Note that such an issue is not specific to the action of withdrawing rewards when a subject fails: proportional rewards can also generate frustration. One solution could be to mix potentially “difficult” requests with other, similar but known to be “easy”, requests, in such a way that the proportion and frequency of the former is a fraction of the proportion and frequency of the “easy” requests that the subject fails (through inattention or other reasons). One can hypothesize that 1) the frustration generated by such questions would be minimal; that 2) a statistical analysis of the correctness of the answers to the difficult requests would yield useful information about the ability or inability of the subject to answer them; and that 3) a small proportion of “difficult” requests helps to further motivate the subject, making the exercise more of a challenge.
#### 6.3.6. Citizen Science Extensions
The term “ _Citizen Science_ ” refers to scientific projects conducted, in
whole or in part, by amateur (or nonprofessional) scientists (Gura, 2013). It
is sometimes described as “public participation in scientific research”, with
the dual objectives to improve the scientific community’s capacity, as well as
improving the public’s understanding of science and conscience about the
research’s themes (Association, 2021). Citizen Science has become a means of
encouraging curiosity and greater understanding of science whilst providing an
unprecedented engagement between professional scientists and the general
public.
Such a methodology must be used with care, in particular about the validity of volunteer-generated data. Projects using complex research methods or requiring
a lot of repetitive work may not be suitable for volunteers, and the lack of
proper training in research and monitoring protocols in participants might
introduce bias into the data (Thelen and Thiet, 2008). Nevertheless, in many
cases the low cost per observation can compensate for the lack of accuracy of
the resulting data (Gardiner et al., 2012), especially if using proper data
processing methods (McClure et al., 2020).
Scientific researchers in comparative psychology could definitely benefit from some help, with so many cognitive aspects to explore for so many species. In the
process of defining the _anecdotal method_ of investigation for creative and
cognitive processes, Bates and Byrne (2007) mentioned that “ _collation of
records of rare events into data-sets can illustrate much about animal
behaviour and cognition_ ”. Now that the technology is ready to analyze
extremely large data-sets, what is lacking in comparative psychology are the
means to gather such large data-sets.
Delegating part of the experimental process to citizens without proper
scientific training is not without risk. Given the conflicted history of
Comparative Psychology (Pepperberg, 2020) in general and Animal Language
Studies (Pepperberg, 2016) in particular, the challenge of avoiding “Clever
Hans” biases and related ones will be of tremendous importance. Could
applications and experimental protocols such as the one described in this work
help to design citizen science projects for the study of sensory and cognitive
abilities in nonhumans species living in close contact with humans?
## Contributions
Jérémy Barbay programmed the first versions of the software, managed the
interactions with the subjects during the development, training and testing
phases, obtained the approval of the _Institutional Animal Care and Use
Committee_ , structured the article and supervised the work of Fabián Jaña
Ubal and Cristóbal Sepulveda Álvarez. Fabián Jaña Ubal improved and maintained
the software, and described it (Sections 3.1 and 3.2). Cristóbal Sepulveda
Álvarez reviewed the experimental results, performed their statistical
analysis, and described the log structure (Section 3.3), the statistical
analysis process (Section 4.3) and the results (Section 5). All authors are
aware of the submission of this work and of its content.
###### Acknowledgements.
We wish to thank Joachim Barbay for his suggestion of using Svelte and his
mentoring in the development of the various versions of the application;
Jennifer Cunha from Parrot Kindergarten for sharing images and video of
parrots using touchscreens, suggesting the study of Al Aïn et al. (2008) and
for helping during the design and test of the preliminary versions of the
software InCA-WhatIsMore (as well as other software projects); Corinne
Renguette for her help concerning the bibliography and the ethical aspects of
the experiments and of its description; Cristina Doelling for pointing out
some of the existing literature about the use of touchscreens by apes in zoos;
and Francisco Gutierrez and Jenny Stamm for suggesting alternative names to
the problematic term “blind” in expressions such as “blind setup”, and for
pointing out some bibliography supporting such replacement.
## References
* Al Aïn et al. (2008) Syrina Al Aïn, Nicolas Giret, Marion Grand, Michel Kreutzer, and Dalila Bovet. 2008. The discrimination of discrete and continuous amounts in African grey parrots (Psittacus erithacus). _Animal Cognition_ 12 (09 2008), 145–154. https://doi.org/10.1007/s10071-008-0178-8
* Association (2021) Citizen Science Association. 2021. CitizenScience.org. Website https://citizenscience.org/. Last accessed on [2022-05-27 Fri].
* Bates and Byrne (2007) Lucy Bates and Richard Byrne. 2007. Creative or created: Using anecdotes to investigate animal cognition. _Methods (San Diego, Calif.)_ 42 (06 2007), 12–21. https://doi.org/10.1016/j.ymeth.2006.11.006
* Coghlan et al. (2021a) Simon Coghlan, Sarah Webber, and Marcus Carter. 2021a. Improving ethical attitudes to animals with digital technologies: the case of apes and zoos. _Ethics and Information Technology_ 23 (12 2021), 1–15. https://doi.org/10.1007/s10676-021-09618-7
* Coghlan et al. (2021b) Simon Coghlan, Sarah Webber, and Marcus Carter. 2021b. Improving ethical attitudes to animals with digital technologies: the case of apes and zoos. _Ethics and Information Technology_ 23 (12 2021), 1–15. https://doi.org/10.1007/s10676-021-09618-7
* Cunha and Clubb (2018) Jennifer Cunha and Susan Clubb. 2018. Advancing Communication with Birds: Can They Learn to Read? https://www.academia.edu/45183882/Advancing_Communication_with_Birds_Can_They_Learn_to_Read
* Egelkamp and Ross (2018) Crystal Egelkamp and Stephen Ross. 2018. A review of zoo-based cognitive research using touchscreen interfaces. _Zoo Biology_ 38 (11 2018), 220–235. https://doi.org/10.1002/zoo.21458
* Gardiner et al. (2012) Mary M. Gardiner, Leslie L. Allee, Peter M. J. Brown, John E. Losey, Helen E. Roy, and Rebecca Rice Smyth. 2012. Lessons from lady beetles: accuracy of monitoring data from US and UK citizen-science programs. _Frontiers in Ecology and the Environment_ 10, 9 (2012), 471–476. https://doi.org/10.1890/110185
* Gura (2013) Trisha Gura. 2013. Citizen science: Amateur experts. _Nature_ 496 (2013), 259–261.
* Healy et al. (2013) Kevin Healy, Luke McNally, Graeme D. Ruxton, Natalie Cooper, and Andrew L. Jackson. 2013. Metabolic rate and body size are linked with perception of temporal information. _Animal Behaviour_ 86, 4 (2013), 685–696. https://doi.org/10.1016/j.anbehav.2013.06.018
* Kohn (1994) B. Kohn. 1994. Zoo animal welfare. _Rev Sci Tech_ 13, 1 (1994), 233–245. https://doi.org/10.20506/rst.13.1.764
* Mancini (2017) Clara Mancini. 2017. Towards an animal-centred ethics for Animal–Computer Interaction. _International Journal of Human-Computer Studies_ 98 (2017), 221–233. https://doi.org/10.1016/j.ijhcs.2016.04.008
* Mankowska et al. (2021) N. D. Mankowska, A. B. Marcinkowska, M. Waskow, R. I. Sharma, J. Kot, and P. J. Winklewski. 2021. Critical Flicker Fusion Frequency: A Narrative Review. _Medicina_ 57, 10 (Oct 2021), 1096.
* McClure et al. (2020) Eva C. McClure, Michael Sievers, Christopher J. Brown, Christina A. Buelow, Ellen M. Ditria, Matthew A. Hayes, Ryan M. Pearson, Vivitskaia J. D. Tulloch, Richard K. F. Unsworth, and Rod M. Connolly. 2020. Artificial Intelligence Meets Citizen Science to Supercharge Ecological Monitoring. _Patterns_ 1, 7 (09 Oct 2020). https://doi.org/10.1016/j.patter.2020.100109
* Morris et al. (2007) D. Morris, S. Fraser, and R. Wormald. 2007. Masking is better than blinding. _BMJ: British Medical Journal_ 334, 7597 (Apr 2007).
* Pepperberg (2006) Irene Pepperberg. 2006. Ordinality and inferential abilities of a grey parrot (Psittacus erithacus). _Journal of Comparative Psychology_ 120 (09 2006), 205–216. https://doi.org/10.1037/0735-7036.120.3.205
* Pepperberg (2016) Irene Pepperberg. 2016. Animal language studies: What happened? _Psychonomic Bulletin & Review_ 24 (07 2016). https://doi.org/10.3758/s13423-016-1101-y
* Pepperberg (2020) Irene Pepperberg. 2020. The Comparative Psychology of Intelligence: Some Thirty Years Later. _Frontiers in Psychology_ 11 (05 2020). https://doi.org/10.3389/fpsyg.2020.00973
* Pepperberg (1999) Irene Maxine Pepperberg. 1999. _The Alex Studies: Cognitive and Communicative Abilities of Grey Parrots_. Harvard University Press, Cambridge, Massachusetts and London, England.
* Perdue et al. (2012) Bonnie M. Perdue, Andrea W. Clay, Diann E. Gaalema, Terry L. Maple, and Tara S. Stoinski. 2012. Technology at the Zoo: The Influence of a Touchscreen Computer on Orangutans and Zoo Visitors. _Zoo Biology_ 31, 1 (2012), 27–39. https://doi.org/10.1002/zoo.20378
* Reas (2014) E. Reas. 2014. Small Animals Live in a Slow-Motion World. _Scientific American Mind_ 25, 4 (2014).
* Richardson et al. (1990) W. K. Richardson, D. A. Washburn, W. D. Hopkins, E. S. Savage-Rumbaugh, and D. M. Rumbaugh. 1990. The NASA/LRC computerized test system. _Behavior Research Methods, Instruments, and Computers_ 22 (1990), 127–131. https://doi.org/10.3758/BF03203132
* Thelen and Thiet (2008) Brett Thelen and Rachel Thiet. 2008. Cultivating connection: Incorporating meaningful citizen science into Cape Cod National Seashore’s estuarine research and monitoring programs. _Park Science_ 25 (06 2008).
* Trestman (2015) Michael Trestman. 2015. Clever Hans, Alex the Parrot, and Kanzi: What can Exceptional Animal Learning Teach us About Human Cognitive Evolution? _Biological Theory_ 10 (03 2015). https://doi.org/10.1007/s13752-014-0199-2
* Washburn (2015) David Washburn. 2015. The Four Cs of Psychological Wellbeing: Lessons from Three Decades of Computer-based Environmental Enrichment. _Animal Behavior and Cognition_ 2 (08 2015), 218–232. https://doi.org/10.12966/abc.08.02.2015
# Flow-induced oscillations of pitching swept wings: Stability boundary,
vortex dynamics and force partitioning
Yuanhang Zhu and Kenneth Breuer, Center for Fluid Mechanics, School of Engineering, Brown University, Providence, RI 02912, USA
###### Abstract
We experimentally study the aeroelastic instability boundaries and three-
dimensional vortex dynamics of pitching swept wings, with the sweep angle
ranging from 0 to 25 degrees. The structural dynamics of the wings are
simulated using a cyber-physical control system. With a constant flow speed, a
prescribed high inertia and a small structural damping, we show that the
system undergoes a subcritical Hopf bifurcation to large-amplitude limit-cycle
oscillations (LCOs) for all the sweep angles. The onset of LCOs depends
largely on the static characteristics of the wing. The saddle-node point is
found to change non-monotonically with the sweep angle, which we attribute to
the non-monotonic power transfer between the ambient fluid and the elastic
mount. An optimal sweep angle is observed to enhance the power extraction
performance and thus promote LCOs and destabilize the aeroelastic system. The
frequency response of the system reveals a structural-hydrodynamic oscillation
mode for wings with relatively high sweep angles. Force, moment, and three-
dimensional flow structures measured using multi-layer stereoscopic particle
image velocimetry are analyzed to explain the differences in power extraction
for different swept wings. Finally, we employ a physics-based Force and Moment
Partitioning Method (FMPM) to quantitatively correlate the three-dimensional
vortex dynamics with the resultant unsteady aerodynamic moment.
###### keywords:
flow–structure interactions, vortex dynamics
## 1 Introduction
The fluid-structure interaction (FSI) of elastically mounted pitching wings
can lead to large-amplitude flow-induced oscillations under certain operating
conditions. In extreme cases, these flow-induced oscillations may affect
structural integrity and even cause catastrophic aeroelastic failures (Dowell
et al., 1989). On the other hand, however, hydro-kinetic energy can be
harnessed from these oscillations, providing an alternative solution for next-
generation renewable energy devices (Xiao & Zhu, 2014; Young et al., 2014;
Boudreau et al., 2018; Su & Breuer, 2019). Moreover, the aero-/hydro-elastic
interactions of passively pitching wings/fins have important connections with
animal flight (Wang, 2005; Bergou et al., 2007; Beatus & Cohen, 2015; Wu et
al., 2019) and swimming (Long & Nipper, 1996; Quinn & Lauder, 2021), and
understanding these interactions may further aid the design and development of
flapping-wing micro air vehicles (MAVs) (Shyy et al., 2010; Jafferis et al.,
2019) and oscillating-foil autonomous underwater vehicles (AUVs) (Zhong et
al., 2021b; Tong et al., 2022).
Flow-induced oscillations of pitching wings originate from the two-way
coupling between the structural dynamics of the elastic mount and the fluid
force exerted on the wing. While the dynamics of the elastic mount can be
approximated by a simple spring-mass-damper model, the fluid forcing term is
usually found to be highly nonlinear due to the formation, growth, and
shedding of a strong leading-edge vortex (LEV) (McCroskey, 1982; Dimitriadis &
Li, 2009; Mulleners & Raffel, 2012; Eldredge & Jones, 2019). Onoue et al.
(2015) and Onoue & Breuer (2016) experimentally studied the flow-induced
oscillations of a pitching plate whose structural stiffness, damping and
inertia were defined using a cyber-physical system (§2.1, see also Hover et
al. (1997); Mackowski & Williamson (2011); Zhu et al. (2020)) and, using this
approach, identified a subcritical bifurcation to aeroelastic instability. The
temporal evolution of the LEV associated with the aeroelastic oscillations was
characterized using particle image velocimetry (PIV), and the unsteady flow
structures were correlated with the unsteady aerodynamic moments using a
potential flow model. Menon & Mittal (2019) numerically studied a similar
problem, simulating an elastically mounted two-dimensional NACA-0015 airfoil
at a Reynolds number of 1000. An energy approach, which bridges prescribed
sinusoidal oscillations and passive flow-induced oscillations, was employed to
characterize the dynamics of the aeroelastic system. The energy approach maps
out the energy transfer between the ambient flow and the elastic mount over a
range of prescribed pitching amplitudes and frequencies and unveils the system
stability based on the sign of the energy gradient.
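In its simplest form (a sketch here; sign conventions vary between studies), the quantity mapped by such an energy approach is the net work done by the fluid moment on the elastic mount over one pitching cycle of period $T$,

$E=\int_{0}^{T}M(t)\,\dot{\theta}(t)\,\mathrm{d}t,$

with $E>0$ indicating net energy extraction from the flow (favoring sustained oscillations) and $E<0$ indicating net dissipation into the flow.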
More recently, Zhu et al. (2020) characterized the effect of wing inertia on
the flow-induced oscillations of pitching wings and the corresponding LEV
dynamics. Two distinct oscillation modes were reported: (i) a structural mode,
which occurred via a subcritical bifurcation and was associated with a high-
inertia wing, and (ii) a hydrodynamic mode, which occurred via a supercritical
bifurcation and was associated with a low-inertia wing. The wing was found to
shed one strong LEV during each half-pitching cycle for the hydrodynamic mode,
whereas a weak secondary LEV was also shed in the high-inertia structural mode.
These previous studies have collectively demonstrated that LEV dynamics play
an important role in shaping flow-induced oscillations and thus regulate the
stability characteristics of passively pitching wings. However, these studies
have only focused on studying the structural and flow dynamics of two-
dimensional wings or airfoils. The extent to which these important findings
for two-dimensional wings hold in three dimensions remains unclear.
Swept wings are commonly seen for flapping-wing fliers and swimmers in nature
(Ellington et al., 1996; Lentink et al., 2007; Borazjani & Daghooghi, 2013;
Bottom II et al., 2016; Zurman-Nasution et al., 2021), as well as on many
engineered fixed-wing flying vehicles. It is argued that wing sweep can
enhance lift generation for flapping wings because it stabilizes the LEV by
maintaining its size through spanwise vorticity transport – a mechanism
similar to the lift enhancement mechanism of delta wings (Polhamus, 1971).
Chiereghin et al. (2020) found significant spanwise flow for a high-aspect
ratio plunging swept wing at a sweep angle of 40 degrees. In another study,
for the same sweep angle, attached LEVs and vortex breakdown were observed
just like those on delta wings (Gursul & Cleaver, 2019). Recent works have
shown that the effect of wing sweep on LEV dynamics depends strongly on wing
kinematics. Beem et al. (2012) showed experimentally that for a plunging swept
wing, the strong spanwise flow induced by the wing sweep is not sufficient for
LEV stabilization. Wong et al. (2013) reinforced this argument by comparing
the LEV stability of plunging and flapping swept wings and showed that two-
dimensional (i.e. uniform without any velocity gradient) spanwise flow alone
cannot stabilize LEVs – there must be spanwise gradients in vorticity or
spanwise flow so that vorticity can be convected or stretched. Wong & Rival
(2015) demonstrated both theoretically and experimentally that the wing sweep
improves relative LEV stability of flapping swept wings by enhancing the
spanwise vorticity convection and stretching so as to keep the LEV size below
a critical shedding threshold (Rival et al., 2014). Onoue & Breuer (2017)
experimentally studied elastically mounted pitching unswept and swept wings
and proposed a universal scaling for the LEV formation time and circulation,
which incorporated the effects of the pitching frequency, the pivot location
and the sweep angle. The vortex circulation was demonstrated to be independent
of the three-dimensional vortex dynamics. In addition, they concluded that the
stability of LEV can be improved by moderating the LEV circulation through
vorticity annihilation, which is largely governed by the shape of the leading-
edge sweep, agreeing with the results of Wojcik & Buchholz (2014). More
recently, Visbal & Garmann (2019) numerically studied the effect of wing sweep
on the dynamic stall of pitching three-dimensional wings and reported that the
wing sweep can modify the LEV structures and change the net aerodynamic
damping of the wing. The effect of wing sweep on the LEV dynamics and
stability, as one can imagine, will further affect the unsteady aerodynamic
forces and thereby the aeroelastic response of pitching swept wings.
Another important flow feature associated with unsteady three-dimensional
wings is the behavior of the tip vortex (TV). Although the tip vortex usually
grows distinctly from the leading-edge vortex for rectangular planforms (Taira
& Colonius, 2009; Kim & Gharib, 2010; Hartloper et al., 2013), studies have
suggested that the TV is able to anchor the LEV in the vicinity of the wing
tip, which delays LEV shedding (Birch & Dickinson, 2001; Hartloper et al.,
2013). Moreover, the tip vortex has also been shown to affect the unsteady
wake dynamics of both unswept and swept wings (Taira & Colonius, 2009; Zhang
et al., 2020a, b; Ribeiro et al., 2022; Son et al., 2022a, b). However, it
remains elusive how the interactions between LEVs and TVs change with the wing
sweep, and more importantly, how this change will in turn affect the response
of aeroelastic systems.
To dissect the effects of complex vortex dynamics associated with unsteady
wings/airfoils, a physics-based Force and Moment Partitioning Method (FMPM)
has been proposed (Quartapelle & Napolitano, 1983; Zhang et al., 2015; Moriche
et al., 2017; Menon & Mittal, 2021a, b, c) (also known as the vortex
force/moment map method (Li & Wu, 2018; Li et al., 2020a)). The method has
attracted attention recently due to its high versatility for analyzing a
variety of vortex-dominated flows. Under this framework, the Navier-
Stokes equation is projected onto the gradient of an influence potential to
separate the force contributions from the added-mass, vorticity-induced, and
viscous terms. It is particularly useful for analyzing vortex-dominated flows
because the spatial distribution of the vorticity-induced forces can be
visualized, enabling detailed dissections of aerodynamic loads generated by
individual vortical structures. For two-dimensional airfoils, Menon & Mittal
(2021c) applied FMPM and showed that the strain-dominated region surrounding
the rotation-dominated vortices has an important role to play in the
generation of unsteady aerodynamic forces. For three-dimensional wings, this
method has been implemented to study the contributions of spanwise and cross-
span vortices to the lift generation of rectangular wings (Menon et al.,
2022), the vorticity-induced force distributions on forward- and backward-
swept wings at a fixed angle of attack (Zhang & Taira, 2022), and the
aerodynamic forces on delta wings (Li et al., 2020b). More recently, efforts
have been made to apply FMPM to the analysis of experimental data, in
particular, flow fields obtained using particle image velocimetry. Zhu et al.
(2023) employed FMPM to analyze the vortex dynamics of a two-dimensional wing
pitching sinusoidally in a quiescent flow. Several practical issues in
applying FMPM to PIV data were discussed, including the effect of phase-
averaging and potential error sources.
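For reference, here is a sketch of the decomposition in the general form used by Menon & Mittal (2021a,c); the notation is illustrative and may differ from the exact formulation adopted later in §3.7. An influence potential $\phi_{i}$ is defined for each force or moment component by

$\nabla^{2}\phi_{i}=0,\qquad \frac{\partial\phi_{i}}{\partial n}=\hat{n}\cdot\hat{e}_{i}~\text{on the body}\quad(\hat{n}\cdot(\boldsymbol{r}\times\hat{e}_{i})~\text{for moments}),\qquad \nabla\phi_{i}\rightarrow 0~\text{in the far field},$

and the vorticity-induced contribution then takes the form

$F_{i}^{\omega}=-2\rho\int_{V}\phi_{i}\,Q\,\mathrm{d}V,\qquad Q=\tfrac{1}{2}\left(\|\boldsymbol{\Omega}\|^{2}-\|\boldsymbol{\mathsf{S}}\|^{2}\right),$

where $Q$ is the second invariant of the velocity gradient tensor, $\boldsymbol{\Omega}$ the rotation-rate tensor and $\boldsymbol{\mathsf{S}}$ the strain-rate tensor, so that rotation-dominated regions ($Q>0$) and strain-dominated regions ($Q<0$) contribute to the load with opposite signs.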
In this study, we apply FMPM to three-dimensional flow field data measured
using three-component PIV, and use the results to gain insight into the three-
dimensional vortex dynamics and the corresponding unsteady forces acting on
elastically mounted pitching swept wings. We extend the methodology developed
in Zhu et al. (2020), and employ a layered stereoscopic PIV technique and the
FMPM to quantify the three-dimensional vortex dynamics. In the following
sections, we first introduce the experimental setup and method of analysis
(§2). The static force and moment coefficients of the wings are measured
(§3.1) before we characterize the amplitude response (§3.2) and the frequency
response (§3.3) of the system. Next, we associate the onset of flow-induced
oscillations with the static characteristics of the wing (§3.4) and use an
energy approach to explain the nonlinear stability boundaries (§3.5). The
unsteady force and moment measurements, together with the three-dimensional
flow structures (§3.6) are then analyzed to explain the differences in power
extraction for unswept and swept wings. Finally, we apply the Force and Moment
Partitioning Method to quantitatively correlate the three-dimensional vortex
dynamics with the resultant unsteady aerodynamic moment (§3.7). All the key
findings are summarized in §4.
## 2 Methods
Figure 1: (_a_) A schematic of the experimental setup. (_b_) Sketches of
unswept and swept wings used in the experiments. The pivot axes are indicated
by black dashed lines. The green panels represent volumes traversed by the
laser sheet for three-dimensional phase-averaged stereoscopic PIV
measurements.
### 2.1 Cyber-physical system and wing geometry
We perform all the experiments in the Brown University free-surface water
tunnel, which has a test section of $W\times D\times L=0.8~{}\mathrm{m}\times
0.6~{}\mathrm{m}\times 4.0~{}\mathrm{m}$. The turbulence intensity in the
water tunnel is around 2% over the velocity range tested in the present study.
Free-stream turbulence plays a critical role in shaping small-amplitude
laminar separation flutter (see Yuan et al. (2015)). However, as we will show
later, the flow-induced oscillations and the flow structures observed in the
present study are of high amplitude and large size, and we do not expect the
free-stream turbulence to play any significant role. Figure 1(_a_) shows a
schematic of the experimental setup. Unswept and swept NACA 0012 wings are
mounted vertically in the tunnel, with an endplate on the top as a symmetry
plane. The wing tip at the bottom does not have an endplate. The wings are
connected to a six-axis force/moment transducer (ATI Delta IP65) via a wing
shaft. The shaft further connects the transducer to an optical encoder (US
Digital E3-2500) and a servo motor (Parker SM233AE) coupled with a gearbox
(SureGear PGCN23-0525).
We implement a cyber-physical system (CPS) to facilitate a wide structural
parameter sweep (i.e. stiffness, $k$, damping, $b$, and inertia, $I$) while
simulating real aeroelastic systems with high fidelity. Details of the CPS
have been discussed in Zhu et al. (2020), therefore, only a brief introduction
will be given here. In the CPS, the force/moment transducer measures the fluid
moment, $M$, and feeds the value to the computer via a data acquisition (DAQ)
board (National Instruments PCIe-6353). This fluid moment is then added to the
stiffness moment ($k\theta$) and the damping moment ($b\dot{\theta}$) obtained
from the previous time step to get the total moment. Next, we divide this
total moment by the desired inertia ($I$) to get the acceleration
($\ddot{\theta}$) at the present time step. This acceleration is then
integrated once to get the velocity ($\dot{\theta}$) and twice to get the
pitching angle ($\theta$). This pitching angle signal is output to the servo
motor via the same DAQ board. The optical encoder, which is independent of the
CPS, is used to measure and verify the pitching angle. At the next time step,
the CPS recalculates the total moment based on the measured fluid moment and
the desired stiffness and damping, and thereby continues the loop.
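To make the loop logic concrete, the following minimal sketch implements one
step of such a virtual spring-mass-damper update. It is an illustration only:
the function name, the explicit integration scheme, and the variable names are
our assumptions, not the actual CPS implementation.

```python
def cps_step(M_fluid, theta, theta_dot, k, b, I, dt=1.0 / 4000.0):
    """One step of a virtual torsional spring-mass-damper (hypothetical sketch).

    M_fluid   : fluid moment measured by the force/moment transducer (N m)
    theta     : pitching angle from the previous step (rad)
    theta_dot : pitching velocity from the previous step (rad/s)
    k, b, I   : virtual stiffness, damping and effective inertia
    dt        : control-loop time step (4000 Hz loop rate)
    """
    # Total moment: measured fluid moment plus restoring and damping moments
    M_total = M_fluid - k * theta - b * theta_dot
    theta_ddot = M_total / I           # acceleration at the present step
    theta_dot += theta_ddot * dt       # integrate once  -> velocity
    theta += theta_dot * dt            # integrate twice -> position to command
    return theta, theta_dot            # theta is sent to the servo motor
```

In the experiments, the independent encoder reading provides a check that the
motor tracks the commanded angle at each step of this loop.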
Our CPS control loop runs at a frequency of 4000 Hz, well above the Nyquist
rate associated with the highest frequency of the aeroelastic system. Noise in
the force/moment
measurements can be a potential issue for the CPS. However, because we are
using a position control loop, where the acceleration is integrated twice to
get the desired position, our system is less susceptible to noise. Therefore,
no filter is used within the CPS control loop. The position control loop also
requires the pitching motor to follow the commanded position signal as closely
as possible. This is achieved by carefully tuning the PID
(Proportional–Integral–Derivative) parameters of the pitching motor. The CPS
does not rely on any additional tunable parameters other than the virtual
inertia, damping, and stiffness. We validate the system using ‘ring-down’
experiments, as shown in the appendix of Zhu et al. (2020). Moreover, as we
will show later, the CPS results match remarkably well with prescribed
experiments (§3.5), demonstrating the robustness of the system.
The unswept and swept wings used in the present study are sketched in figure
1(_b_). All the wings have a span of $s=0.3$ m and a chord length of $c=0.1$
m, which results in a physical aspect ratio of $AR=3$. However, the effective
aspect ratio is 6 due to the existence of the symmetry plane (i.e. the
endplate). The minimum distance between the wing tip and the bottom of the
water tunnel is around $1.5c$. The chord-based Reynolds number is defined as
$Re\equiv\rho U_{\infty}c/\mu$, where $U_{\infty}$ is the free-stream
velocity, $\rho$ and $\mu$ are water density and dynamic viscosity,
respectively. We set the free-stream velocity to be $U_{\infty}=0.5$
$\mathrm{m~{}s^{-1}}$ for all the experiments (except for particle image
velocimetry measurements, see §2.2), which results in a constant Reynolds
number of $Re=50~{}000$, matching the $Re$ used in Zhu et al. (2020) to
facilitate direct comparisons. For both unswept and swept wings, the leading
edge (LE) and the trailing edge (TE) are parallel. Their pivot axes,
represented by vertical dashed lines in the figure, pass through the mid-chord
point $x/c=0.5$ of the mid-span plane $z/s=0.5$. We choose the current
location of the pitching axis because it splits the swept wings into two
equal-area sections (fore and aft). Moving the pitching axis or making it
parallel to the leading edge will presumably result in different system
dynamics, which will be investigated in future studies.
The sweep angle, $\Lambda$, is defined as the angle between the leading edge
and the vertical axis. Five wings with $\Lambda=0^{\circ}$ (unswept wing),
$10^{\circ},15^{\circ},20^{\circ}$ and $25^{\circ}$ (swept wings) are used in
the experiments. Further expanding the range of wing sweep would presumably
bring more interesting fluid-structure interaction behaviors. However, as we
will show in the later sections, there is already a series of rich (nonlinear)
flow physics associated with the current set of unswept and swept wings. Our
selection of the sweep angle is also closely related to the location of the
pitching axis. Currently, the pitching axis passes the mid-chord at the mid-
span. For a $\Lambda=25^{\circ}$ wing, the trailing edge is already in front
of the pitching axis at the wing root, and the leading edge is behind the
pitching axis at the wing tip. Further increasing the sweep angle brings
difficulties in physically pitching the wing for our existing setup.
### 2.2 Multi-layer stereoscopic particle image velocimetry
We use multi-layer phase-averaged stereoscopic particle image velocimetry
(SPIV) to measure the three-dimensional (3D) velocity field around the
pitching wings. We lower the free-stream velocity to $U_{\infty}=0.3$
$\mathrm{m~{}s^{-1}}$ to enable higher temporal measurement resolution. The
chord-based Reynolds number is consequently decreased to $Re=30~{}000$. It has
been shown by Zhu et al. (2020, see their appendix) that the variation of $Re$
in the range of 30 000 – 60 000 does not affect the system dynamics, as long
as the parameters of interest are properly non-dimensionalized. The water flow
is seeded using neutrally buoyant 50 $\mu$m silver-coated hollow ceramic
spheres (Potters Industries) and illuminated using a horizontal laser sheet,
generated by a double-pulse Nd:YAG laser (532 nm, Quantel EverGreen) with a
LaVision laser guiding arm and collimator. Two sCMOS cameras (LaVision,
$2560\times 2160$ pixels) with Scheimpflug adapters (LaVision) and 35 mm lenses
(Nikon) are used to capture image pairs of the flow field. These SPIV image
pairs are fed into the LaVision DaVis software (v.10) for velocity vector
calculation using multi-pass cross-correlations (two passes at $64\times 64$
pixels, two passes at $32\times 32$ pixels, both with 50% overlap).
To measure the two-dimensional-three-component (2D3C) velocity field at
different spanwise layers, we use a motorized vertical traverse system with a
range of 120 mm to raise and lower the testing rig (i.e. all the components
connected by the shaft) in the $z$-axis (King et al., 2018; Zhong et al.,
2021a). Due to the limitation of the traversing range, three measurement
volumes (figure 1 _b_ , V1, V2 and V3) are needed to cover the entire wing
span plus the wing tip region. For each measurement volume, the laser sheet is
fixed at the top layer and the rig is traversed upward with a step size of 5
mm. Note that the entire wing stays submerged, even at the highest traversing
position, and for all wing positions, free surface effects are not observed.
The top two layers of V1 are discarded as the laser sheet is too close to the
endplate, which causes reflections. The bottom layer of V1 and the top layer
of V2 overlap with each other. The velocity fields of these two layers are
averaged to smooth the interface between the two volumes. The interface
between V2 and V3 is also smoothed in the same way. For each measurement
layer, we phase-average 1250 instantaneously measured 2D3C velocity fields
over 25 cycles (i.e. 50 measurements per cycle) to eliminate any instantaneous
variations of the flow field while maintaining the key coherent features
across different layers. Finally, 71 layers of 2D3C velocity fields are
stacked together to form a large volume of phase-averaged 3D3C velocity field
($\sim 3c\times 3c\times 3.5c$). The velocity fields of three wing models
($\Lambda=0^{\circ}$, $10^{\circ}$ and $20^{\circ}$) are measured. For the two
swept wings ($\Lambda=10^{\circ}$ and $20^{\circ}$), the laser volumes are
offset horizontally to compensate for the sweep angle (see the bottom
subfigure of figure 1 _b_).
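As a concrete illustration of this procedure, the sketch below phase-averages
the snapshots of one measurement layer and stacks the layers into a volume. The
array shapes and names are hypothetical, assuming the snapshots are already
sorted in acquisition order at a fixed 50 samples per cycle.

```python
import numpy as np

def phase_average(snapshots, n_phases=50):
    """Phase-average 2D3C PIV snapshots acquired at a fixed phase rate.

    snapshots : array (n_snapshots, ny, nx, 3); here 1250 snapshots
                = 25 cycles x 50 phase bins per cycle.
    Returns an array (n_phases, ny, nx, 3) of phase-averaged fields.
    """
    n_cycles = snapshots.shape[0] // n_phases
    # Group by cycle, then average over cycles: random turbulent
    # fluctuations cancel while the coherent, phase-locked motion remains.
    return snapshots[:n_cycles * n_phases].reshape(
        n_cycles, n_phases, *snapshots.shape[1:]).mean(axis=0)

# Stacking 71 layers into a phase-resolved 3D3C volume (one phase shown):
# volume = np.stack([phase_average(layer)[phase] for layer in layers])
```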
### 2.3 Governing equations and non-dimensional parameters
The one-degree-of-freedom aeroelastic system considered in the present study
has a governing equation
$I\ddot{\theta}+b\dot{\theta}+k\theta=M,$ (1)
where $\theta$, $\dot{\theta}$, and $\ddot{\theta}$ are the angular position,
velocity and acceleration, respectively. $I=I_{p}+I_{v}$ is the effective
inertia, where $I_{p}$ is the physical inertia of the wing and $I_{v}$ is the
virtual inertia that we prescribe with the CPS. Because the friction is
negligible in our system, the effective structural damping, $b$, equals the
virtual damping $b_{v}$ in the CPS. $k$ is the effective torsional stiffness
and it equals the virtual stiffness $k_{v}$. Equation 1 resembles a forced
torsional spring-mass-damper system, where the fluid moment, $M$, acts as a
nonlinear forcing term. Following Onoue et al. (2015) and Zhu et al. (2020),
we normalize the effective inertia, damping, stiffness and the fluid moment
using the fluid inertia force to get the non-dimensional governing equation of
the system:
$I^{*}\ddot{\theta}^{*}+b^{*}\dot{\theta}^{*}+k^{*}\theta^{*}=C_{M},$ (2)
where
$\begin{gathered}\theta^{*}=\theta,~{}\dot{\theta}^{*}=\frac{\dot{\theta}c}{U_{\infty}},~{}\ddot{\theta}^{*}=\frac{\ddot{\theta}c^{2}}{U_{\infty}^{2}},\\\
I^{*}=\frac{I}{0.5\rho c^{4}s},~{}b^{*}=\frac{b}{0.5\rho
U_{\infty}c^{3}s},~{}k^{*}=\frac{k}{0.5\rho
U_{\infty}^{2}c^{2}s},~{}C_{M}=\frac{M}{0.5\rho
U_{\infty}^{2}c^{2}s}.\end{gathered}$ (3)
We should note that the inverse of the non-dimensional stiffness is equivalent
to the Cauchy number, $Ca=1/k^{*}$, and the non-dimensional inertia, $I^{*}$,
is analogous to the mass ratio between the wing and the surrounding fluid. We
define the non-dimensional velocity as $U^{*}=U_{\infty}/(2\pi f_{p}c)$, where
$f_{p}$ is the _measured_ pitching frequency. In addition to the aerodynamic
moment, we also measure the aerodynamic forces that are normal and tangential
to the wing chord, $F_{N}$ and $F_{T}$, respectively. The resultant lift and
drag forces are
$\begin{gathered}L=F_{N}\cos{\theta}-F_{T}\sin{\theta},\\\
D=F_{N}\sin{\theta}+F_{T}\cos{\theta}.\end{gathered}$ (4)
We further normalize the normal force, tangential force, lift and drag to get
the corresponding force coefficients
$C_{N}=\frac{F_{N}}{0.5\rho U_{\infty}^{2}cs},~{}C_{T}=\frac{F_{T}}{0.5\rho
U_{\infty}^{2}cs},~{}C_{L}=\frac{L}{0.5\rho
U_{\infty}^{2}cs},~{}C_{D}=\frac{D}{0.5\rho U_{\infty}^{2}cs}.$ (5)
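For reference, the following sketch evaluates the non-dimensional groups of
equations (3)-(5). The function names and the hard-coded chord, span and water
density are illustrative assumptions, not part of the original analysis code.

```python
import numpy as np

RHO, C, S = 1000.0, 0.1, 0.3  # water density (kg/m^3), chord (m), span (m)

def structural_groups(I, b, k, M, U_inf, rho=RHO, c=C, s=S):
    """Non-dimensional inertia, damping, stiffness and moment (equation 3)."""
    I_star = I / (0.5 * rho * c**4 * s)
    b_star = b / (0.5 * rho * U_inf * c**3 * s)
    k_star = k / (0.5 * rho * U_inf**2 * c**2 * s)
    C_M = M / (0.5 * rho * U_inf**2 * c**2 * s)
    return I_star, b_star, k_star, C_M  # note Ca = 1 / k_star

def force_coefficients(F_N, F_T, theta, U_inf, rho=RHO, c=C, s=S):
    """Lift and drag from normal/tangential forces (equations 4 and 5)."""
    L = F_N * np.cos(theta) - F_T * np.sin(theta)
    D = F_N * np.sin(theta) + F_T * np.cos(theta)
    q_area = 0.5 * rho * U_inf**2 * c * s
    return F_N / q_area, F_T / q_area, L / q_area, D / q_area
```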
### 2.4 Force and Moment Partitioning Method
To apply FMPM to three-dimensional PIV data, we first construct an influence
potential that satisfies Laplace’s equation and two different Neumann boundary
conditions on the airfoil and the outer boundary
$\nabla^{2}\phi=0,~{}\text{and}~{}\frac{\partial\phi}{\partial\boldsymbol{\mathrm{n}}}=\begin{cases}[(\boldsymbol{x}-\boldsymbol{x_{p}})\times\boldsymbol{\mathrm{n}}]\cdot\boldsymbol{\mathrm{e_{z}}}&\text{on
airfoil}\\\ 0&\text{on outer boundary}\end{cases},$ (6)
where $\boldsymbol{\mathrm{n}}$ is the unit vector normal to the boundary,
$\boldsymbol{x}-\boldsymbol{x_{p}}$ is the location vector pointing from the
pitching axis $\boldsymbol{x_{p}}$ towards location $\boldsymbol{x}$ on the
airfoil surface, and $\boldsymbol{\mathrm{e_{z}}}$ is the spanwise unit vector
(Menon & Mittal, 2021b). This influence potential quantifies the spatial
influence of any vorticity on the resultant force/moment. It is only a
function of the airfoil geometry and the pitching axis, and does not depend on
the kinematics of the wing. Note that this influence potential should not be
confused with the velocity potential from the potential flow theory. The
boundary conditions of equation 6 are specified for solving the influence
field of the spanwise moment, and they will be different for solving the lift
and drag influence fields. From the three-dimensional velocity data, we can
calculate the $Q$ field (Hunt et al., 1988; Jeong & Hussain, 1995)
$Q=\frac{1}{2}(\|\boldsymbol{\Omega}\|^{2}-\|\boldsymbol{\mathrm{S}}\|^{2}),$
(7)
where $Q$ is the second invariant of the velocity gradient tensor,
$\boldsymbol{\Omega}$ is the vorticity tensor and $\boldsymbol{\mathrm{S}}$ is
the strain-rate tensor. The vorticity-induced moment can be evaluated by
$M_{v}=-2\rho\int_{V}Q\phi~{}\mathrm{d}V,$ (8)
where $\int_{V}$ represents the volume integral within the measurement volume.
The spatial distribution of the vorticity-induced moment near the pitching
wing can thus be represented by the moment density, $-2Q\phi$ (i.e. the moment
distribution field). In the present study, we focus on the vorticity-induced
force (moment) as it has the most important contribution to the overall
unsteady aerodynamic load in vortex-dominated flows. Other contributions,
including the added-mass force, the force due to viscous diffusion, and the
forces associated with irrotational and outer-domain effects, are not
considered here, although they can be estimated using FMPM as well (Menon &
Mittal, 2021b). The contributions from these other forces, along with
experimental
errors, might result in a mismatch in the magnitude of the FMPM-estimated
force and force transducer measurements, as shown by Zhu et al. (2023), and
the exact source of this mismatch is under investigation.
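As a sketch of how equations (7) and (8) can be evaluated on gridded PIV data,
the snippet below computes the $Q$ field with second-order central differences
and integrates the moment density over the measurement volume. The array
layout, uniform grid spacing, and the use of numpy.gradient are our
assumptions.

```python
import numpy as np

def q_field_and_moment(w, v, u, phi, dz, dy, dx, rho=1000.0):
    """Q (equation 7) and vorticity-induced moment M_v (equation 8).

    w, v, u : velocity components on a uniform (nz, ny, nx) grid, passed
              in (w, v, u) order so component i varies along array axis i.
    phi     : influence potential interpolated onto the same grid.
    """
    grads = [np.gradient(f, dz, dy, dx) for f in (w, v, u)]
    Q = np.zeros_like(u)
    for i in range(3):
        for j in range(3):
            S_ij = 0.5 * (grads[i][j] + grads[j][i])   # strain-rate tensor
            W_ij = 0.5 * (grads[i][j] - grads[j][i])   # vorticity tensor
            Q += 0.5 * (W_ij**2 - S_ij**2)             # Frobenius norms
    M_v = -2.0 * rho * np.sum(Q * phi) * dx * dy * dz  # equation (8)
    return Q, M_v
```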
## 3 Results and discussion
### 3.1 Static characteristics of unswept and swept wings
Figure 2: (_a_) Static lift coefficient and (_b_) moment coefficient of
unswept and swept wings. Error bars denote standard deviations of the
measurement over 20 seconds.
The static lift and moment coefficients, $C_{L}$ and $C_{M}$, are measured for
the unswept ($\Lambda=0^{\circ}$) and swept wings ($\Lambda=10^{\circ}$ –
$25^{\circ}$) at $Re=50~{}000$ and the results are shown in figure 2. In
figure 2(_a_), we see that the static lift coefficient, $C_{L}(\theta)$, has
the same behavior for all sweep angles, despite some minor variations for
angles of attack higher than the static stall angle $\theta_{s}=12^{\circ}$
(0.21 rad). The collapse of $C_{L}(\theta)$ across different swept wings
agrees with the classic ‘independence principle’ (Jones, 1947) (i.e.
$C_{L}\sim\cos^{2}\Lambda$) at relatively small sweep angles. Figure 2(_b_)
shows that, for any fixed angle of attack, the static moment coefficient,
$C_{M}$, increases with the sweep angle, $\Lambda$. This trend is most
prominent when the angle of attack exceeds the static stall angle. The inset
shows a zoomed-in view of the static $C_{M}$ for $\theta=0.14$ – 0.26 rad. It is
seen that the $C_{M}$ curves cluster into two groups, with the unswept wing
($\Lambda=0^{\circ}$) being in Group 2 (G2) and all the other swept wings
($\Lambda=10^{\circ}$ – $25^{\circ}$) being in Group 1 (G1). As we will show
later, this grouping behavior is closely related to the onset of flow-induced
oscillations (§3.2 & §3.4) and it is important for understanding the system
stability. No hysteresis is observed for either the static $C_{L}$ or $C_{M}$,
presumably due to free-stream turbulence in the water tunnel.
### 3.2 Subcritical bifurcations to flow-induced oscillations
We conduct bifurcation tests to study the stability boundaries of the
elastically mounted pitching wings. Zhu et al. (2020) have shown that for
unswept wings, the onset of limit-cycle oscillations (LCOs) is independent of
the wing inertia and the bifurcation type (i.e. subcritical or supercritical).
It has also been shown that the extinction of LCOs for subcritical
bifurcations at different wing inertias occurs at a fixed value of the non-
dimensional velocity $U^{*}$. For these reasons, we choose to focus on one
high-inertia case ($I^{*}=10.6$) in the present study. In the experiments, the
free-stream velocity is maintained at $U_{\infty}=0.5$ $\mathrm{m~{}s^{-1}}$.
We fix the structural damping of the system at a small value, $b^{*}=0.13$,
keep the initial angle of attack at zero, and use the Cauchy number, $Ca$, as
the control parameter. To test for the onset of LCOs, we begin the test with a
high-stiffness virtual spring (i.e. low $Ca$) and incrementally increase $Ca$
by decreasing the torsional stiffness, $k^{*}$. We then reverse the operation
to test for the extinction of LCOs and to check for any hysteresis. The
amplitude response of the system, $A$, is measured as the peak absolute
pitching angle (averaged over many pitching cycles). By this definition, $A$
is half of the peak-to-peak amplitude. The divergence angle, $\overline{A}$,
is defined as the mean absolute pitching angle. Although all the divergence
angles are shown to be positive, the wing can diverge to both positive and
negative angles in experiments.
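A minimal sketch of how $A$ and $\overline{A}$ can be extracted from a measured
pitching-angle time series is given below; the peak-detection approach and
function names are our assumptions for illustration.

```python
import numpy as np
from scipy.signal import find_peaks

def amplitude_and_divergence(theta):
    """A: mean of the peak absolute pitching angles (half peak-to-peak);
    A_bar: mean absolute pitching angle (the divergence angle)."""
    abs_theta = np.abs(theta)
    peaks, _ = find_peaks(abs_theta)
    A = abs_theta[peaks].mean() if peaks.size else abs_theta.max()
    return A, abs_theta.mean()
```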
Figure 3: Amplitude response and static divergence for unswept and swept
wings. $\triangleright$: increasing $Ca$, $\triangleleft$: decreasing $Ca$.
The inset illustrates the wing geometry and the pivot axis. The colors of the
wings correspond to the colors of the amplitude and divergence curves in the
figure.
Figure 3 shows the pitching amplitude response and the static divergence angle
for swept wings with $\Lambda=10^{\circ}$ to $25^{\circ}$. Data for the
unswept wing ($\Lambda=0^{\circ}$) are also replotted from Zhu et al. (2020)
for comparison. It can be seen that the system first remains stable without
any noticeable oscillations or divergence (regime 1 in the figure) when $Ca$
is small. In this regime, the high stiffness of the system is able to pull the
system back to a stable fixed point despite any small perturbations. As we
further increase $Ca$, the system diverges to a small static angle, where the
fluid moment is balanced by the virtual spring. This transition is presumably
triggered by free-stream turbulence, and both positive and negative directions
are possible. Due to the existence of random flow disturbances and the
decreasing spring stiffness, some small-amplitude oscillations around the
static divergence angle start to emerge (regime 2). As $Ca$ is further
increased above a critical value (i.e. the Hopf point), the amplitude response
of the system abruptly jumps into large-amplitude self-sustained LCOs and the
static divergence angle drops back to zero, indicating that the oscillations
are symmetric about the zero angle of attack. The large-amplitude LCOs are
observed to be near-sinusoidal and have a dominant characteristic frequency.
After the bifurcation, the amplitude response of the system continues to
increase with $Ca$ (regime 3). We then decrease $Ca$ and find that the large-
amplitude LCOs persist even when $Ca$ is decreased below the Hopf point
(regime 4). Finally, the system drops back to the stable fixed point regime
via a saddle-node (SN) point. A hysteretic bistable region is thus created in
between the Hopf point and the saddle-node point – a hallmark of a subcritical
Hopf bifurcation. In the bistable region, the system features two stable
solutions – a stable fixed point (regime 1) and a stable LCO (regime 4) – as
well as an unstable LCO solution, which is not observable in experiments
(Strogatz, 1994).
We observe that the Hopf points of unswept and swept wings can be roughly
divided into two groups (figure 3, G1 & G2), with the unswept wing
($\Lambda=0^{\circ}$) being in G2 and all the other swept wings
($\Lambda=10^{\circ}$ – $25^{\circ}$) being in G1, which agrees with the trend
observed in figure 2(_b_) for the static moment coefficient. This connection
will be discussed further in §3.4. It is also seen that as the sweep angle
increases, the LCO amplitude at the saddle-node point decreases monotonically.
However, the $Ca$ at which the saddle-node point occurs first shifts towards a
lower value ($\Lambda=0^{\circ}\rightarrow 10^{\circ}$) but then moves back
towards a higher value ($\Lambda=10^{\circ}\rightarrow 25^{\circ}$). This
indicates that increasing the sweep angle first destabilizes the system from
$\Lambda=0^{\circ}$ to $10^{\circ}$ and then re-stabilizes it from
$\Lambda=10^{\circ}$ to $25^{\circ}$. This non-monotonic behavior of the
saddle-node point will be revisited from a perspective of energy in §3.5. The
pitching amplitude response, $A$, follows a similar non-monotonic trend.
Between $\Lambda=0^{\circ}$ and $10^{\circ}$, $A$ is slightly higher at higher
$Ca$ values for the $\Lambda=10^{\circ}$ wing, whereas between
$\Lambda=10^{\circ}$ and $25^{\circ}$, $A$ decreases monotonically, indicating
that a higher sweep angle is not able to sustain LCOs at higher amplitudes.
The non-monotonic behaviors of the saddle-node point and the LCO amplitude
both suggest that there exists an optimal sweep angle, $\Lambda=10^{\circ}$,
which promotes flow-induced oscillations of pitching swept wings.
### 3.3 Frequency response of the system
Figure 4: (_a_) Frequency response of unswept and swept wings. (_b_ , _c_)
Force decomposition of the structural mode and the structural-hydrodynamic
mode. (_b_) and (_c_) correspond to the filled orange triangle and the filled
green diamond shown in (_a_), respectively. Note that $t/T=0$ corresponds to
$\theta=0$.
The characteristic frequencies of the flow-induced LCOs observed in figure 3
provide us with more information about the driving mechanism of the
oscillations. Figure 4(_a_) shows the measured frequency response,
$f_{p}^{*}$, as a function of the calculated natural (structural) frequency,
$f_{s}^{*}$, and sweep angle. In the figure, $f_{p}^{*}=f_{p}c/U_{\infty}$ and
$f_{s}^{*}=f_{s}c/U_{\infty}$, where $f_{p}$ is the measured pitching
frequency and
$f_{s}=\frac{1}{2\pi}\sqrt{\frac{k}{I}-(\frac{b}{2I})^{2}}$ (9)
is the structural frequency of the system (Rao, 1995). We observe that for all
the wings tested in the experiments and over most of the regimes tested, the
measured pitching frequency, $f_{p}^{*}$, locks onto the calculated structural
frequency, $f_{s}^{*}$, indicating that the oscillations are dominated by the
balance between the structural stiffness and inertia. These oscillations,
therefore, correspond to the _structural_ mode reported by Zhu et al. (2020),
and feature characteristics of high-inertial aeroelastic instabilities. We can
decompose the moments experienced by the wing into the inertial moment,
$I^{*}\ddot{\theta}^{*}$, the structural damping moment,
$b^{*}\dot{\theta}^{*}$, the stiffness moment, $k^{*}\theta^{*}$, and the
fluid moment, $C_{M}$. As an example, for the $\Lambda=10^{\circ}$ wing
pitching at $f^{*}_{s}=0.069$ (i.e. the filled orange triangle in figure 4
_a_), these moments are plotted in figure 4(_b_). We see that for the
structural mode, the stiffness moment is mainly balanced by the inertial
moment, while the structural damping moment and the fluid moment remain
relatively small.
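Equation (9) and its non-dimensional form are straightforward to evaluate; a
short helper (with hypothetical names and a hard-coded chord) is shown below.

```python
import numpy as np

def f_s_star(k, I, b, U_inf, c=0.1):
    """Non-dimensional structural frequency f_s* = f_s c / U_inf, eq. (9)."""
    f_s = np.sqrt(k / I - (b / (2.0 * I))**2) / (2.0 * np.pi)
    return f_s * c / U_inf
```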
In addition to the structural mode, Zhu et al. (2020) also observed a
hydrodynamic mode, which corresponds to a low-inertia wing. In the
hydrodynamic mode, the oscillations are dominated by the fluid forcing, so
that the measured pitching frequency, $f^{*}_{p}$, stays relatively constant
for a varying $Ca$. In figure 4(_a_), we see that for the $\Lambda=20^{\circ}$
and $25^{\circ}$ wings, $f^{*}_{p}$ flattens near the saddle-node boundary.
This flattening trend shows an emerging fluid-dominated time scale, resembling
a hydrodynamic mode despite the high wing inertia. Taking
$\Lambda=20^{\circ}$, $f^{*}_{s}=0.068$ (i.e. the filled green diamond in
figure 4 _a_) as an example, we can examine the different contributions to the
pitching moments in figure 4(_c_). It is observed that in this oscillation
mode, the stiffness moment balances both the inertial moment and the fluid
moment. This is different from both the structural mode and the hydrodynamic
mode, and for this reason, we define this hybrid oscillation mode as the
_structural-hydrodynamic_ mode.
There are currently no quantitative descriptions of the structural-
hydrodynamic mode. However, it can be identified qualitatively by the
flattening of the pitching frequency of a (1:1 lock-in) structural mode as the
natural (structural) frequency increases. Based on the observations in the
present
study, we believe this mode is not a fixed fraction of the structural
frequency. Instead, the frequency response shows a mostly flat trend (figure 4
_a_ , green and dark green curves) at high $f_{s}^{*}$, indicating an
increasingly dominating fluid forcing frequency. For a structural mode, the
oscillation frequency locks onto the natural frequency due to the high
inertial moment. However, as the sweep angle increases, the fluid moment also
increases (see also figure 8 _a_). The structural-hydrodynamic mode emerges as
the fluid forcing term starts to dominate in the nonlinear oscillator.
For a fixed structural frequency, $f_{s}^{*}$, as the sweep angle increases,
the measured pitching frequency, $f_{p}^{*}$, deviates from the 1:1 lock-in
curve and moves to lower frequencies. This deviation suggests a growing added-
mass effect, as the pitching frequency $f_{p}\sim\sqrt{1/(I+I_{add})}$.
Because the structural inertia $I$ is prescribed, a decreasing $f_{p}$
suggests an increasing added-mass inertia, $I_{add}$. This is expected because
of the way we pitch the wings in the experiments (see the inset of figure 3).
As $\Lambda$ increases, the accelerated fluid near the wing root and the wing
tip produces a larger moment due to the increased moment arm, which amplifies
the added-mass effect. The peak added-mass moment is estimated to be
around 2%, 3%, and 5% of the peak total moment for the $\Lambda=0^{\circ}$,
$10^{\circ}$, and $20^{\circ}$ wings, respectively. Because this effect is
small compared to the structural and vortex-induced forces, we will not
quantify this added-mass effect further in the present study but will leave it
for future work.
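Under the stated scaling, one could invert the measured frequency shift for a
rough estimate of the added-mass inertia. The sketch below neglects damping and
is our assumption for illustration, not an analysis performed in the study.

```python
import numpy as np

def added_mass_inertia(f_p, k, I):
    """Rough added-mass inertia implied by a measured pitching frequency,
    assuming f_p = sqrt(k / (I + I_add)) / (2*pi), damping neglected."""
    return k / (2.0 * np.pi * f_p)**2 - I
```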
### 3.4 Onset of flow-induced oscillations
Figure 5: Temporal evolution of (_a_) the pitching angle $\theta$, (_b_) the
fluid moment $C_{M}$, and the stiffness moment $k^{*}\theta^{*}$ near the Hopf
point for the $\Lambda=15^{\circ}$ swept wing. The vertical gray dashed line
indicates the time instant ($t=645$ s) at which $Ca$ is increased above the
Hopf point. (_c_) Static moment coefficients of unswept and swept wings.
Inset: The predicted Hopf point based on the static stall angle and the
corresponding moment, $C_{M_{s}}/\theta_{s}^{*}$, versus the measured Hopf
point, $k_{H}^{*}$. The black dashed line shows a 1:1 scaling.
In figure 3, we have observed that the Hopf point of unswept and swept wings
can be roughly divided into two groups (figure 3, G1 & G2). In this section,
we explain this phenomenon. Figures 5(_a_) and (_b_) show the temporal
evolution of the pitching angle, $\theta(t)$, the fluid moment, $C_{M}(t)$,
and the stiffness moment, $k^{*}\theta^{*}(t)$, for the $\Lambda=15^{\circ}$
swept wing as the Cauchy number is increased past the Hopf point. We see that
the wing undergoes small amplitude oscillations around the divergence angle
just prior to the Hopf point ($t<645$ s). The divergence angle is lower than
the static stall angle, $\theta_{s}$, and so we know that the flow stays
mostly attached, and the fluid moment, $C_{M}$, is balanced by the stiffness
moment, $k^{*}\theta^{*}$ (figure 5 _b_). When the Cauchy number,
$Ca=1/k^{*}$, is increased above the Hopf point (figure 5 _a_ , $t>645$ s),
$k^{*}\theta^{*}$ is no longer able to hold the pitching angle below
$\theta_{s}$. Once the pitching angle exceeds $\theta_{s}$, stall occurs and
the wing experiences a sudden drop in $C_{M}$. The stiffness moment,
$k^{*}\theta^{*}$, loses its counterpart and starts to accelerate the wing to
pitch towards the opposite direction. This acceleration introduces
unsteadiness to the system and the small-amplitude oscillations gradually
transition to large-amplitude LCOs over the course of several cycles, until
the inertial moment kicks in to balance $k^{*}\theta^{*}$ (see also figure 4
_b_). This transition process confirms the fact that the onset of large-
amplitude LCOs depends largely on the _static_ characteristics of the wing –
the LCOs are triggered when the static stall angle is exceeded.
The triggering of flow-induced LCOs starts when $\theta$ exceeds the static
stall angle after $k^{*}$ is decreased below the Hopf point, causing $C_{M}$
to drop below $k^{*}\theta^{*}$. At this value of $k^{*}$, the slope of the
line from the origin to the static stall point should equal the stiffness at
the Hopf point, $k^{*}_{H}$ (i.e. $C_{M_{s}}=k^{*}_{H}\theta_{s}^{*}$, where
$C_{M_{s}}$ is the static stall moment). This argument is verified by figure
5(_c_), in which we
replot the static moment coefficients of unswept and swept wings from figure
2(_b_) (error bars omitted for clarity), together with the corresponding
$k^{*}_{H}\theta^{*}$. We see that the $k^{*}_{H}\theta^{*}$ lines all roughly
pass through the static stall points ($\theta_{s}^{*}$, $C_{M_{s}}$) of the
corresponding $\Lambda$. Note that $k^{*}_{H}\theta^{*}$ of
$\Lambda=15^{\circ}$ and $20^{\circ}$ overlap with each other. Similar to the
trend observed for the Hopf point in figure 3, the static stall moment
$C_{M_{s}}$ can also be divided into two groups, with the unswept wing
($\Lambda=0^{\circ}$) being in G2 and all the other wings
($\Lambda=10^{\circ}$ – $25^{\circ}$) being in G1 (see also figure 2 _b_). The
inset compares the predicted Hopf point, $C_{M_{s}}/\theta_{s}^{*}$, with the
measured Hopf point, $k_{H}^{*}$, and we see that the data closely follow a 1:1
relationship. This reinforces the argument that the onset of flow-induced LCOs
is shaped by the static characteristics of the wing, and that this explanation
applies to both unswept and swept wings.
It is worth noting that Negi et al. (2021) performed global linear stability
analysis on an aeroelastic wing and showed that the aeroelastic instability is
triggered by a zero-frequency linear divergence mode. This agrees in part with
our experimental observation that the flow-induced oscillations emerge from
the static divergence state. However, as we have discussed in this section,
the onset of large-amplitude aeroelastic oscillations in our system occurs
when the divergence angle exceeds the static stall angle, whereas no stall is
involved in the study of Negi et al. (2021). In fact, Negi et al. (2021)
focused on laminar separation flutter, where the pitching amplitude is small
($A\sim 6^{\circ}$). In contrast, we focus on large-amplitude
($45^{\circ}<A<120^{\circ}$) flow-induced oscillations.
### 3.5 Power coefficient map and system stability
Figure 6: (_a_ -_e_) Power coefficient maps of prescribed sinusoidal
oscillations overlaid by the bifurcation diagrams of elastically mounted
unswept and swept wings. $\triangleright$: increasing $Ca$, $\triangleleft$:
decreasing $Ca$. (_f_) Neutral power transfer curves for unswept and swept
wings. The black star represents the case $U^{*}=1.87$ ($f_{p}^{*}=0.085$),
$A=1.05$ ($60^{\circ}$), where stereo PIV measurements are taken.
In this section, we analyze the stability of elastically mounted unswept and
swept wings from the perspective of energy transfer. Menon & Mittal (2019) and
Zhu et al. (2020) have shown numerically and experimentally that the flow-
induced oscillations of elastically mounted wings can only sustain when the
net energy transfer between the ambient fluid and the elastic mount equals
zero. To map out this energy transfer for a large range of pitching
frequencies and amplitudes, we _prescribe_ the pitching motion of the wing
using a sinusoidal profile
$\theta=A\sin(2\pi f_{p}t),$ (10)
where $0\leq A\leq 2.5$ rad and $0.15~{}\mathrm{Hz}\leq f_{p}\leq
0.6~{}\mathrm{Hz}$. The fluid moment $C_{M}$ measured with these prescribed
sinusoidal motions can be directly correlated to those measured in the passive
flow-induced oscillations because the flow-induced oscillations are near-
sinusoidal (see §3.2, and figure 5 _a_ , $t>700$ s). By integrating the
governing equation of the passive system 2 over $n=20$ cycles and taking the
cycle average (Zhu et al., 2020), we can get the power coefficient of the
system
$C_{p}=\frac{f_{p}^{*}}{n}\int_{t_{0}}^{t_{0}+nT}(C_{M}\dot{\theta}^{*}-b^{*}\dot{\theta}^{*2})~{}dt^{*},$
(11)
where $t_{0}$ is the starting time, $T$ is the pitching period and
$t^{*}=tU_{\infty}/c$ is the non-dimensional time. In this equation, the
$C_{M}\dot{\theta}^{*}$ term represents the power injected into the system
from the free-stream flow, whereas the $b^{*}\dot{\theta}^{*2}$ term
represents the power dissipated by the structural damping of the elastic
mount. The power coefficient maps of unswept and swept wings are shown in
figure 6(_a_ -_e_). In these maps, orange regions correspond to $C_{p}>0$,
where the power injected by the ambient flow is higher than that dissipated by
the structural damping. On the contrary, $C_{p}<0$ in the blue regions. The
colored dashed lines indicate the $C_{p}=0$ contours, where the power
injection balances the power dissipation, and the system is in equilibrium.
The $C_{p}=0$ equilibrium boundary can be divided into three branches. Zhu et
al. (2020) have shown that for unswept wings, the top branch corresponds to a
stable LCO solution for the structural oscillation mode, the middle branch
represents an unstable LCO solution for the structural mode, but a stable LCO
solution for the hydrodynamic mode, and the bottom branch is a fixed point
solution.
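A sketch of evaluating equation (11) from time-resolved measurements by
trapezoidal integration is given below; the names are hypothetical and the
series is assumed to span exactly $n$ pitching cycles.

```python
import numpy as np

def cycle_averaged_cp(C_M, theta_dot_star, t_star, f_p_star, b_star, n=20):
    """Cycle-averaged power coefficient, equation (11).

    C_M, theta_dot_star : time series spanning exactly n pitching cycles
    t_star              : non-dimensional time t * U_inf / c
    """
    integrand = C_M * theta_dot_star - b_star * theta_dot_star**2
    return (f_p_star / n) * np.trapz(integrand, t_star)
```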
To correlate the power coefficient maps of prescribed oscillations with the
stability boundaries of flow-induced oscillations, we overlay the bifurcation
diagrams of the passive system from figure 3 onto figure 6(_a_ -_e_). The
measured pitching frequencies, $f_{p}$, are used to calculate the non-
dimensional velocity, $U^{*}$, for large-amplitude LCOs (filled triangles).
Because it is difficult to measure frequencies of fixed points and small-
amplitude oscillations, we use the calculated structural frequency, $f_{s}$,
to evaluate $U^{*}$ for non-LCO data points (hollow triangles). Figure 6(_a_
-_e_) show that for all the wings tested, the flow-induced large-amplitude
LCOs match well with the top branch of the $C_{p}=0$ curve, indicating the
broad applicability of the energy approach for both unswept and swept wings,
and confirming that this instability is a structural mode, as seen in the
frequency response (figure 4 _a_). This correspondence was also observed by
Menon & Mittal (2019) and Zhu et al. (2020) and is expected for instabilities
that are well-described by sinusoidal motions (Morse & Williamson, 2009). The
small discrepancies for large sweep angles can be attributed to the low
$C_{p}$ gradient near $C_{p}=0$. The junction between the top and the middle
$C_{p}=0$ branches, which corresponds to the saddle-node point, stays
relatively sharp for $\Lambda=0^{\circ}$ – $15^{\circ}$ and becomes smoother
for $\Lambda=20^{\circ}$ – $25^{\circ}$. These smooth turnings result in a
smooth transition from the structural mode to the hydrodynamic mode, giving
rise to the structural-hydrodynamic mode discussed in §3.3.
The $C_{p}=0$ curves for $\Lambda=0^{\circ}$ – $25^{\circ}$ are summarized in
figure 6(_f_). It is seen that the trend of the top branch is similar to that
observed in figure 3 for large-amplitude LCOs. The location of the junction
between the top branch and the middle branch changes non-monotonically with
$\Lambda$, which accounts for the non-monotonic behavior of the saddle-node
point. In addition, figures 6(_a_ -_e_) show that the maximum power transfer
from the fluid also has a non-monotonic dependency on the sweep angle (see the
shade variation of the positive $C_{p}$ regions as a function of the sweep
angle), with an optimal sweep angle at $\Lambda=10^{\circ}$, which might
inspire future designs of higher-efficiency oscillating-foil energy harvesting
devices.
### 3.6 Force, moment and three-dimensional flow structures
Figure 7: (_a_) Phase-averaged aerodynamic moment coefficients, $C_{M}$, and
(_b,c_) force coefficients, $C_{N}$, $C_{T}$, $C_{L}$ and $C_{D}$, measured at
$f_{p}^{*}=0.085$, $A=1.05$ ($60^{\circ}$) for the $\Lambda=0^{\circ}$,
$10^{\circ}$ and $20^{\circ}$ wings, corresponding to the black star case in
figure 6(_f_). (_d-f_) Phase-averaged moment coefficients, $C_{M}$, and power
coefficients, $C_{P}$, for $\Lambda=0^{\circ}$, $10^{\circ}$ and $20^{\circ}$.
Green panels represent positive power input regions, where $C_{P}>0$. Gray
dashed lines and dotted lines represent the normalized pitching angle,
$\theta/A$, and pitching velocity, $\dot{\theta}/(2\pi f_{p}A)$, respectively.
Note that $t/T=0$ corresponds to $\theta=0$ (see the gray dashed curve).
In the previous section, §3.5, we have established the connection between
prescribed oscillations and flow-induced instabilities using the energy
approach. However, the question remains what causes the differences in the
power coefficients measured for prescribed pitching wings with different sweep
angles (figure 6). In this section, we analyze the aerodynamic force, moment
and the corresponding three-dimensional flow structures to gain more insights.
We focus on one pitching case, $A=1.05$ ($60^{\circ}$) and $f^{*}_{p}=0.085$
(i.e. the black star on figure 6 _f_), and three sweep angles,
$\Lambda=0^{\circ}$, $10^{\circ}$ and $20^{\circ}$. This particular pitching
kinematic is selected because it sits right on the $C_{p}=0$ curve for
$\Lambda=0^{\circ}$ but in the positive $C_{p}$ region for
$\Lambda=10^{\circ}$ and in the negative $C_{p}$ region for
$\Lambda=20^{\circ}$ (see figure 6 _a,b,d,f_).
Phase-averaged coefficients of the aerodynamic moment, $C_{M}$, the normal
force, $C_{N}$, the tangential force, $C_{T}$, the lift force, $C_{L}$, and
the drag force, $C_{D}$, are plotted in figure 7(_a-c_), respectively. Similar
to the three-dimensional velocity fields, the moment and force measurements
are phase-averaged over 25 cycles. We see that the moment coefficient (figure
7 _a_) behaves differently for different sweep angles, whereas the shape of
other force coefficients (figure 7 _b,c_) does not change with sweep angle,
resembling the trend observed in the static measurements (figure 2). The
observation that the wing sweep ($\Lambda=0^{\circ}$ to $25^{\circ}$) has
minimal effects on the aerodynamic force generation is non-intuitive, as one
would assume that the sweep-induced spanwise flow can enhance spanwise
vorticity transport in the leading-edge vortex and thereby alter the LEV
stability as well as the resultant aerodynamic load. However, our measurements
show the opposite, a result which is backed up by the experiments of heaving
(plunging) swept wings by Beem et al. (2012) ($\Lambda=0^{\circ}$ to
$45^{\circ}$) and Wong et al. (2013) ($\Lambda=0^{\circ}$ and $\pm
45^{\circ}$), simulations of pitching swept wings by Visbal & Garmann (2019)
($\Lambda=0^{\circ}$ to $30^{\circ}$), and simulations of fin-like pitch-heave
swept wings by Zurman-Nasution et al. (2021) ($\Lambda=0^{\circ}$ to
$40^{\circ}$), where the spanwise flow has been shown to exist but to have no
effect on the aerodynamic force. We also analyze aerodynamic forces for
different sweep angles and other wing kinematics and observe similar results
(not shown in this manuscript). The collapse of the normal force, $C_{N}$, at
different sweep angles suggests that the wing sweep regulates the aerodynamic
moment, $C_{M}$, by changing the moment arm, $d_{M}$, as $C_{M}=C_{N}d_{M}$.
This argument will be revisited later when we discuss the leading-edge vortex
and tip vortex dynamics.
Figure 7(_a_) shows that as the sweep angle increases, the moment coefficient,
$C_{M}$, peaks at a later time in the cycle, and has an increased maximum
value. To further analyze $C_{M}$ and its effects on the power coefficient,
$C_{P}$, for different wing sweeps, we compare $C_{M}$ and $C_{P}$ for
$\Lambda=0^{\circ}$, $10^{\circ}$ and $20^{\circ}$ in figure 7(_d-f_),
respectively. Note that here we define the power coefficient as
$C_{P}=C_{M}\dot{\theta}^{*}$, which differs from equation 11 in that this
$C_{P}$ is time-dependent rather than cycle-averaged, and the power dissipated
by the structure, $b^{*}\dot{\theta}^{*2}$, is not included (this dissipation
is small because a small $b^{*}$ is used in the experiments). The normalized
pitching angle, $\theta/A$, and pitching
velocity, $\dot{\theta}/(2\pi f_{p}A)$, are also plotted for reference. We see
that at the beginning of the cycle ($0\leq t/T<0.15$), $C_{M}(t/T)$ grows
near-linearly for all three wings. Because $\dot{\theta}>0$ for the first
quarter cycle, the $x$-intercept of $C_{M}$ determines the starting point of
the positive $C_{P}(t/T)$ region, corresponding to the left edge of the green
panels in the figures. The $C_{P}>0$ region starts at $t/T=0$ for the unswept
wing as $C_{M}$ has a near-zero $y$-intercept. For the $\Lambda=10^{\circ}$
swept wing, because $C_{M}$ has a small positive $y$-intercept, the $C_{P}>0$
region starts even before $t/T=0$. On the contrary, the $C_{P}>0$ region
starts after $t/T=0$ for the $\Lambda=20^{\circ}$ swept wing due to a small
negative $y$-intercept of $C_{M}$. Owing to the combined effect of an
increasing $C_{M}$ and a decreasing $\dot{\theta}$, the power coefficient
peaks around $t/T=0.125$ for all the wings. The maximum $C_{P}$ of the
$\Lambda=10^{\circ}$ wing is slightly higher than that of the other two wings,
due to a slightly higher $C_{M}$.
As the pitching cycle continues, $C_{M}(t/T)$ peaks around $t/T=0.15$, 0.17
and 0.28 for $\Lambda=0^{\circ}$, $10^{\circ}$ and $20^{\circ}$, respectively.
The pitch reversal occurs at $t/T=0.25$, where $\theta$ reaches its maximum
and $\dot{\theta}$ switches its sign to negative. Because the pitching
velocity is now negative, the green panels terminate as $C_{P}$ drops below
zero, suggesting that $C_{M}$ starts to dissipate energy into the ambient
fluid. However, because $C_{M}$ continues to grow after $t/T=0.25$ for the
$\Lambda=20^{\circ}$ wing, it generates a much more negative $C_{P}$ as
compared to the wings with a lower sweep angle. Figure 7(_a_) shows that
$C_{M}$ decreases faster for the $\Lambda=10^{\circ}$ wing than the unswept
wing at $0.25\leq t/T<0.5$. This difference results in a less negative $C_{P}$
for the $\Lambda=10^{\circ}$ wing as compared to the $\Lambda=0^{\circ}$ wing.
The faster decrease of $C_{M}$ for the $\Lambda=10^{\circ}$ wing also makes it
the first to switch back to positive power generation, where $C_{M}$ and
$\dot{\theta}$ are both negative. The same story repeats after $t/T=0.5$ due
to the symmetry of the pitching cycle. In summary, we see that subtle
differences in the alignment of $C_{M}$ and $\dot{\theta}$ can result in
considerable changes of $C_{P}$ for different sweep angles. The start of the
$C_{P}>0$ region is determined by the phase of $C_{M}$, whereas the
termination of the $C_{P}>0$ region depends on $\dot{\theta}$. A non-monotonic
duration of the $C_{P}>0$ region (i.e. the size of the green panels) is
observed as the sweep angle increases. The cycle-averaged power coefficient,
which dictates the stability of aeroelastic systems (see §3.5), is regulated
by both the amplitude and phase of the aerodynamic moment.
Figure 8: (_a_) Moment coefficients replotted from figure 7(_a_) for half
pitching cycle. Three representative time instants $t_{1}/T=0.14$,
$t_{2}/T=0.22$ and $t_{3}/T=0.30$ are selected for studying the evolution of
the leading-edge vortex (LEV) and tip vortex (TV). (_b-d_) Phase-averaged
three-dimensional flow structures for the $\Lambda=0^{\circ}$ unswept wing,
and the $\Lambda=10^{\circ}$ and $\Lambda=20^{\circ}$ swept wings. The flow
structures are visualized with iso-$Q$ surfaces ($Q=50~{}\mathrm{s}^{-2}$) and
colored by the non-dimensional spanwise vorticity, $\omega_{z}c/U_{\infty}$.
All the flow fields are rotated by the pitching angle to keep the wing at a
zero angle of attack for better visualization of the flow structures. A video
capturing the three-dimensional flow structures for the entire pitching cycle
can be found in the supplementary material. (_e-g_) Side views and front views
of the corresponding three-dimensional LEV and TV geometries. Solid curves
represent LEVs and dotted lines represent TVs.
Next, we analyze the effect of wing sweep on the leading-edge vortex and tip
vortex dynamics and the resultant impact on the aerodynamic moment. Figure 8
shows (_a_) the moment measurements, (_b-d_) the phase-averaged three-
dimensional flow structures at $t_{1}/T=0.14$, $t_{2}/T=0.22$ and
$t_{3}/T=0.30$, and (_e-g_) the corresponding leading-edge vortex and tip
vortex geometries for the $\Lambda=0^{\circ}$, $10^{\circ}$ and $20^{\circ}$
wings. The three equally spaced time instants $t_{1}/T=0.14$, $t_{2}/T=0.22$
and $t_{3}/T=0.30$ are selected because they correspond to the times of the
formation, growth and shedding of the leading-edge vortex. The three-
dimensional flow structures are visualized using iso-$Q$ surfaces with a value
of $50~{}\mathrm{s}^{-2}$ and colored by the non-dimensional spanwise
vorticity, $\omega_{z}c/U_{\infty}$. In this view, the leading edge of the
wing is pitching towards us, but for clarity, the flow field is always plotted
with the coordinate system oriented so that the chord line is aligned with the
$x-$axis.
The initial linear growth of the moment coefficient before $t_{1}/T$ for all
three wings corresponds to the formation of a strong leading-edge vortex, as
depicted in figure 8(_b-d_) at $t_{1}/T=0.14$, which brings the lift and
moment coefficients above the static stall limit. At this stage, we see that
the structure of the leading-edge vortex is similar across different wing
sweeps, despite some minor variations near the wing tip. For the unswept wing,
the LEV stays mostly attached along the wing span, whereas for the two swept
wings, the LEV starts to detach near the tip region (see the small holes on
the feeding shear layer near the wing tip). A positive vortex tube on the
surface near the trailing edge is observed for all three wings, along with the
negative vortex tubes shed from the trailing edge. We also observe a
streamwise-oriented tip vortex wrapping over the wing tip, and this tip vortex
grows stronger with the sweep angle, presumably due to the higher tip velocity
associated with the larger wing sweep. Another possible cause for a stronger
TV at a higher sweep angle is that the effective angle of attack becomes
higher at the wing tip as the wing sweep increases.
The tracking of the vortex geometry (figure 8 _e-g_) provides a more
quantitative measure to analyze the LEV and TV dynamics. We see that at
$t_{1}/T=0.14$, the LEVs for all three wings are mostly aligned with the
leading edge except for the tip region ($z/c=0$). For the two swept wings, the
LEV also stays closer to the leading edge near the wing root ($z/c=3$). Due to
the high wing sweep of the $\Lambda=20^{\circ}$ wing, a small portion of the
LEV falls behind the pivot axis, presumably contributing to a negative moment.
However, the mean distance between the LEV and the pivot axis (i.e. the LEV
moment arm) stays roughly constant across different wing sweeps, potentially
explaining the agreement between the $C_{M}$ for different wings during the
linear growth region. On the other hand, the tip vortex moves downstream as
the wing sweep increases due to the wing geometry. For the unswept wing and
the $\Lambda=10^{\circ}$ swept wing, the majority of the tip vortex stays
behind the pivot axis. For the $\Lambda=20^{\circ}$ swept wing, the TV stays
entirely behind the pivot axis. As a result, the TV mostly contributes to the
generation of negative moments, which counteracts the LEV moment contribution.
At $t_{2}/T=0.22$, figure 8(_b_) and the front view of figure 8(_e_) show that
the LEV mostly detaches from the wing surface for the unswept wing except for
a small portion near the wing tip, which stays attached. A similar flow
structure was observed by Yilmaz & Rockwell (2012) for finite-span wings
undergoing linear pitch-up motions, and by Son et al. (2022a) for high-aspect-
ratio plunging wings. For the $\Lambda=10^{\circ}$ wing, this small portion of
the attached LEV shrinks (see the front view of figure 8 _f_). The top portion
of the LEV near the wing root is also observed to stay attached to the wing
surface as compared to the $\Lambda=0^{\circ}$ case. For the
$\Lambda=20^{\circ}$ wing, as shown by the front view of figure 8(_g_), the
attached portion of the LEV near the wing tip further shrinks and almost
detaches, while the top portion of the LEV also attaches to the wing surface,
similar to that observed for $\Lambda=10^{\circ}$. The shrinking of the LEV
attached region near the wing tip as a function of the wing sweep is
presumably caused by the decreased anchoring effect of the tip vortex. The
shrinking of the attached LEV could also be a result of an increased effective
angle of attack. The side views of figure 8(_e-g_) show that the LEV moves
towards the pivot axis at this time instant. The swept wing LEVs have slightly
longer mean moment arms due to their attached portions near the wing root.
This is more prominent for the $\Lambda=20^{\circ}$ wing, potentially
explaining the $C_{M}$ of $\Lambda=20^{\circ}$ exceeding the other two wings
at $t_{2}/T$. The tip vortex moves upwards and outwards with respect to the
wing surface from $t_{1}/T$ to $t_{2}/T$.
During the pitch reversal ($t_{3}/T=0.30$), the LEV further detaches from the
wing surface, and the TV also starts to detach. For the unswept wing, the LEV
mostly aligns with the pivot axis except for the tip portion, which still
remains attached. For the $\Lambda=10^{\circ}$ swept wing, the LEV also
roughly aligns with the pivot axis, with both the root and the tip portions
staying near the wing surface, forming a more prominent arch-like shape (see
the front view of figure 8 _f_) as compared to the previous time step. For the
$\Lambda=20^{\circ}$ wing, the root portion of the LEV stays attached and
remains far in front of the pivot axis. The LEV detaches near the wing tip and
joins with the detached TV, as shown by figure 8(_d_) and the front and top
views of figure 8(_g_). The attachment of the LEV near the wing root and the
detachment of the TV near the wing tip both contribute to a more positive
$C_{M}$, as compared to the other two wings with lower sweep. The change of
the LEV geometry as a function of the sweep angle can be associated with the
arch vortices reported by Visbal & Garmann (2019). In their numerical study,
it has been shown that for pitching unswept wings with free tips on both ends,
an arch-type vortical structure begins to form as the pitch reversal starts
(see their figure 6 _c_). In our experiments, the wings have a free tip and an
endplate (i.e. a wing-body junction, or symmetry plane). Therefore, the
vortical structure shown in figure 8(_b_) is equivalent to one-half of the
arch vortex. If we mirror the flow structures about the wing root (i.e. the
endplate), we can get a complete arch vortex similar to that observed by
Visbal & Garmann (2019). For swept wings, we observe one complete arch vortex
for both $\Lambda=10^{\circ}$ (figure 8 _c_) and $20^{\circ}$ (figure 8 _d_).
Again, if we mirror the flow structures about the wing root, there will be two
arch vortices for each swept wing, agreeing well with the observation of
Visbal & Garmann (2019) (see their figures 10 _c_ and 13 _c_). Moreover,
Visbal & Garmann (2019) reported that for swept wings, as $\Lambda$ increases,
the vortex arch moves towards the wing tip, which is also seen in our
experiments (compare the front views of figure 8 _e-g_).
### 3.7 Insights obtained from moment partitioning
We have shown in the previous section, §3.6, that the aerodynamic moment is
jointly determined by the leading-edge vortex and the tip vortex dynamics.
Specifically, the spatial locations and geometries of the LEV and TV, as well
as the vortex strength, have a combined effect on the unsteady aerodynamic
moment. To obtain further insights into this complex combined effect, we use
the Force and Moment Partitioning Method (FMPM) to analyze the three-
dimensional flow fields.
Figure 9: Iso-surface plots of three-dimensional influence potentials for
(_a_) the $\Lambda=0^{\circ}$ unswept wing, (_b_) the $\Lambda=10^{\circ}$
swept wing, and (_c_) the $\Lambda=20^{\circ}$ swept wing. (_d-f_) The
corresponding side views, with the wing boundaries outlined by yellow dotted
lines and the pitching axes indicated by green dashed lines.
As we discussed in §2.4, the first step of applying FMPM is to construct an
‘influence potential’, $\phi$. We solve equation 6 numerically using the
MATLAB Partial Differential Equation Toolbox (Finite Element Method, code
publicly available on MATLAB File Exchange). We use a 3D domain of $10c\times
10c\times 20c$, and a mesh resolution of $0.02c$ on the surface of the wing
and $0.1c$ on the outer domain. We visualize the calculated three-dimensional
influence field, $\phi$, for the $\Lambda=0^{\circ}$, $10^{\circ}$ and
$20^{\circ}$ wings using iso-$\phi$ surfaces in figure 9(_a-c_). Figure 9(_d-
f_) illustrates the corresponding side views, with the wing boundaries
outlined by yellow dotted lines and the pitching axes indicated by green
dashed lines. We see that for the unswept wing, the iso-$\phi$ surfaces show
symmetry with respect to the pivot axis and the wing chord, resulting in a
quadrant distribution of the influence field. The magnitude of $\phi$ peaks on
the wing surface and decreases towards the far field. The slight asymmetry of
$\phi$ with respect to the pitching axis (see figure 9 _d_) is caused by the
difference between the rounded leading edge and the sharp trailing edge of the
NACA 0012 wing (see also the 2D influence field reported in Zhu et al.
(2023)). The size of the iso-$\phi$ surfaces stays relatively constant along
the wing span, except at the wing tip, where the surfaces wrap around and seal
the tube.
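The full three-dimensional Laplace problem of equation (6) is solved here with
a finite-element method. As a small illustration of the boundary data such a
solve requires, the sketch below evaluates the Neumann condition
$[(\boldsymbol{x}-\boldsymbol{x_{p}})\times\boldsymbol{\mathrm{n}}]\cdot\boldsymbol{\mathrm{e_{z}}}$
on discretized surface points of one chordwise section; this is a hypothetical
helper, not the Toolbox code used in the study.

```python
import numpy as np

def moment_bc(x_surf, y_surf, normals, x_p, y_p):
    """Neumann boundary value of equation (6) for the spanwise-moment
    influence potential at airfoil-surface points.

    x_surf, y_surf : surface point coordinates, shape (N,)
    normals        : outward unit normals at those points, shape (N, 2)
    (x_p, y_p)     : pitching-axis location in the section plane
    """
    rx, ry = x_surf - x_p, y_surf - y_p
    # z-component of the cross product (r x n) for in-plane vectors
    return rx * normals[:, 1] - ry * normals[:, 0]
```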
As the sweep angle is increased to $\Lambda=10^{\circ}$ and $20^{\circ}$, we
see that the quadrant distribution of the influence field persists. However,
the iso-$\phi$ surfaces form funnel-like shapes on the fore wing and teardrop
shapes on the aft wing. This is caused by the variation of the effective pivot
axis along the wing span. Figure 9(_e_) and (_f_) show that, for swept wings,
the negative $\phi$ regions extend over the entire chord near the wing root,
even behind the pitching axis. Similarly, the positive $\phi$ regions (almost)
cover the entire wing tip and even spill over in front of the pitching axis.
As we will show next, this behavior of the $\phi$ field for swept wings will
result in some non-intuitive distributions of the aerodynamic moment. In
addition, the magnitude of the $\phi$ field is observed to increase with the
sweep angle, due to the increase of the effective moment arm (Zhu et al.,
2021).
Figure 10: (_a-c_) Phase-averaged iso-$Q$ surfaces ($Q=50~{}\mathrm{s}^{-2}$)
for the $\Lambda=0^{\circ}$ unswept wing and the $\Lambda=10^{\circ}$ and
$20^{\circ}$ swept wings, colored by the vorticity-induced moment density,
$-2Q\phi$ ($\mathrm{m^{2}~{}s^{-2}}$), at $t_{1}/T=0.14$, $t_{2}/T=0.22$ and
$t_{3}/T=0.30$. Note that the wings and flow fields are rotated in the
spanwise direction to maintain a zero angle of attack, for a better view of
the flow structures. (_d-f_) Spanwise distributions of the vorticity-induced
moment for the three wings at the three representative time instants, obtained
by integrating $-2Q\phi$ at different spanwise locations.
We multiply the three-dimensional $Q$ field by the influence field, $\phi$,
and get the spanwise moment (density) distribution field, $-2Q\phi$. To
visualize the moment distributions, we recolor the same iso-$Q$ surface plots
shown in figure 8 with the moment density, $-2Q\phi$, which are shown in
figure 10(_a-c_). As before, the wings and flow fields are rotated by $\theta$
so that we are always looking from a viewpoint normal to the chord line,
giving a better view of the flow structures. In these iso-$Q$ surface plots,
red regions indicate that the vortical structure induces a positive spanwise
moment, whereas blue regions represent the generation of a negative spanwise
moment. In between red and blue regions, white regions have zero contribution
to the spanwise moment.
At $t_{1}/T=0.14$ (figure 10 _a_), as expected, we see that the entire LEV on
the unswept wing is generating a positive moment. For the $\Lambda=10^{\circ}$
swept wing, however, the LEV generates a near-zero moment near the wing tip,
and for the $\Lambda=20^{\circ}$ swept wing, the tip region of the LEV
contributes a negative moment due to the non-conventional distribution of the
$\phi$ field. The TV generates almost no moment for the unswept wing, but
contributes a negative moment for the swept wings. The vortex tube formed near
the trailing edge of the wing surface contributes entirely to negative moments
for the unswept wing, but its top portion starts to generate positive moments
as the sweep angle increases. The contributions of each vortical structure to
the moment generation for the three wings become clearer if we plot the
spanwise distribution of the vorticity-induced moment.
By integrating the moment distribution field $-2Q\phi$ over the horizontal
($x,y$)-plane at each spanwise location, $z$, we are able to obtain the
spanwise distribution of the vorticity-induced moment, shown in figure 10(_d-
f_). For the unswept wing, $\Lambda=0^{\circ}$, figure 10(_d_) shows that the
LEV generates a near-uniform positive moment across the span. As the sweep
angle increases ($\Lambda=10^{\circ}$), the LEV generates a higher positive
moment near the wing root, and the TV starts to generate a negative moment.
For the $\Lambda=20^{\circ}$ wing, this trend persists. It is also interesting
to see that the spanwise moment distribution curves for the three wings
intersect around the mid span, where the effective pivot axis coincides at the
mid chord. For the two swept wings, the more positive moments near the wing
root counteract the negative LEV and TV contributions near the wing tip,
resulting in a similar overall moment as compared to the unswept wing. The
FMPM thus quantitatively explains why the three wings generate similar
unsteady moments at this time instant (figure 8 _a_).
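In code, this partitioning step is straightforward. The sketch below assumes the phase-averaged $Q$ field and the influence field $\phi$ have already been interpolated onto a common Cartesian grid; the array names, shapes, and spacings are placeholders rather than the paper's actual data pipeline.

```python
import numpy as np

def spanwise_moment(Q, phi, dx, dy):
    """Spanwise distribution of the vorticity-induced moment.

    Q, phi : arrays of shape (nz, ny, nx) on a common grid.
    Returns a length-nz array: the integral of -2*Q*phi over each (x, y)-plane.
    """
    density = -2.0 * Q * phi                   # vorticity-induced moment density
    return density.sum(axis=(1, 2)) * dx * dy  # integrate over each z-slice

# The total vorticity-induced moment is then the integral over the span:
#   M = spanwise_moment(Q, phi, dx, dy).sum() * dz
```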
At $t_{2}/T=0.22$ (figure 10 _b_), the LEV starts to detach and moves towards
the pitching axis. As discussed in the previous section, §3.6, the LEV forms a
half-arch for the unswept wing, with only the tip region staying attached, and
a complete arch for swept wings, with both the root and tip regions staying
attached. These arch-like LEV geometries, together with the special shapes of
the three-dimensional influence field, lead to some special distributions of
the aerodynamic moments. For the unswept wing, the color of the LEV becomes
lighter as compared to the $t_{1}/T$ case, indicating a decreasing
contribution to positive moments. However, the attached portion of the LEV
still generates a positive moment as it remains attached, close to the wing,
and in front of the pitching axis. Comparing the two swept wing cases, the LEV
for the $\Lambda=20^{\circ}$ wing generates more positive moments near the
wing root as compared to the $\Lambda=10^{\circ}$ wing due to the magnitude of
the $\phi$ field (figure 9). The TVs for the three wings behave similarly to
the cases at $t_{1}/T$. The aft wing vortex tube on the wing surface breaks
into two smaller tubes. Because of their small volumes, we do not expect them
to affect the total moment generation. Figure 10(_e_) shows that a large
part of the LEV does not contribute to any moment generation for the unswept
wing – only the tip region ($0\leq z/c\leq 1$) generates positive moments. As
compared to $t_{1}/T$, the LEV generates more positive moments near the wing
root for the two swept wings, especially for the $\Lambda=20^{\circ}$ wing,
and the TV generates slightly more negative moments. The overall trend
observed in figure 10(_e_) further explains the moment measurements shown in
figure 8(_a_), where the $\Lambda=20^{\circ}$ wing produces the highest
$C_{M}$, followed by the $\Lambda=10^{\circ}$ wing and then the unswept wing
at $t_{2}/T$.
At $t_{3}/T=0.30$ (figure 10 _c_), the LEV further detaches from the wing
surface. For the unswept wing, the LEV color becomes even lighter. Comparing
the temporal evolution of the LEV color for the unswept wing, we see that the
LEV progressively generates lower positive moments, agreeing well with the
decreasing moment measurement shown in figure 8(_a_). The LEV continues to
generate positive moments near the root region and negative moments near the
tip region for the $\Lambda=10^{\circ}$ swept wing, although it is largely
aligned with the pivot axis (see also the side view of figure 8 _f_). This is
again a result of the non-conventional funnel-shaped $\phi$ field near the
wing root and the teardrop-like $\phi$ field near the wing tip (figure 9 _b_
and _e_). This trend persists for the $\Lambda=20^{\circ}$ wing. However, the
LEV generates more positive moments due to its shorter distance from the
leading edge and the wing surface near the wing root. Moreover, the size of
the LEV iso-$Q$ surface also becomes larger for the $\Lambda=20^{\circ}$ wing
as compared to the previous time steps, indicating a stronger LEV and thus a
higher aerodynamic moment, which explains why the $C_{M}$ of
$\Lambda=20^{\circ}$ peaks around $t_{3}/T$ in figure 8(_a_). This is also
reflected in the spanwise moment plot in figure 10(_f_), where the LEV
generates more positive moments for the $\Lambda=20^{\circ}$ wing than the
$\Lambda=10^{\circ}$ wing. The tip vortex again behaves similarly to the
previous time steps for all three wings, although it becomes less coherent and
detaches from the wing surface.
It is worth noting that the integral of $-2Q\phi$ over the ($x,y$)-plane (i.e.
figure 10 _d-f_) also includes contributions from other vortical structures.
In figure 10(_a-c_), we can see that there are four main structures on each
wing: the LEV, the TV, the TEV, and the vortex tube on the aft wing surface.
Figure 9 shows that the amplitude of the influence field, $\phi$, is zero near
the trailing edge due to symmetry. This means that the contribution of the
TEV to the moment is negligible, because $-2Q\phi$ vanishes in this region.
The aft wing vortex tube is
small in size compared to the LEV and TV. In addition, it is not as coherent,
because it breaks down at $t_{2}/T=0.22$. Therefore, we would expect its
contribution to the integral to be small as well.
In summary, the Force and Moment Partitioning Method enables us to associate
the complex three-dimensional vortex dynamics with the corresponding
vorticity-induced moments, and quantitatively explains the mechanisms behind
the observed differences in the unsteady moment generation, which further
drives the pitching motion of these swept wings. These insightful analyses
would not have been possible without the FMPM.
## 4 Conclusion
In this experimental study, we have explored the nonlinear flow-induced
oscillations and three-dimensional vortex dynamics of cyber-physically mounted
pitching unswept and swept wings, with the pitching axis passing through the
mid-chord point at the mid-span plane, and with the sweep angle varied from
$0^{\circ}$ to $25^{\circ}$. With a constant flow speed, a prescribed high
inertia, and a small structural damping, we adjusted the wing stiffness to
systematically study the onset and extinction of large-amplitude flow-induced
oscillations. For the current selections of the pitching axis location and the
range of the sweep angle, the amplitude response revealed subcritical Hopf
bifurcations for all the unswept and swept wings, with a clustering behavior
for the Hopf point and a non-monotonic saddle-node point as a function of the
sweep angle. The flow-induced oscillations have been correlated with the
structural oscillation mode, where the oscillations are dominated by the
inertial behavior of the wing. For swept wings with high sweep angles, a
hybrid oscillation mode, namely the structural-hydrodynamic mode, has been
observed and characterized, in which the oscillations were regulated by both
the inertial moment and the fluid moment. The onset of flow-induced
oscillations (i.e. the Hopf point) has been shown to depend on the static
characteristics of the wing. The non-monotonic trend of the saddle-node point
against the sweep angle can be attributed to the non-monotonic power transfer
between the ambient fluid and the elastic mount, which further depends on the
amplitude and phase of the unsteady aerodynamic moment. Force and moment
measurements have shown that, perhaps surprisingly, the wing sweep has a
minimal effect on the aerodynamic forces and it was therefore inferred that
the wing sweep modulates the aerodynamic moment by affecting the moment arm.
Phase-averaged three-dimensional flow structures measured using stereoscopic
PIV have been analyzed to characterize the dynamics of the leading-edge vortex
and tip vortex. Finally, by employing the Force and Moment Partitioning Method
(FMPM), we have successfully correlated the complex LEV and TV dynamics with
the resultant aerodynamic moment in a quantitative manner.
In addition to reporting new observations and providing physical insights on
the effects of moderate wing sweep in large-amplitude aeroelastic
oscillations, the present study can serve as a source of validation data for
future theoretical/computational models. Furthermore, the optimal sweep angle
($\Lambda=10^{\circ}$) observed for promoting flow-induced oscillations may
have engineering implications. For example, one should avoid this sweep angle
for aero-structure designs to stay away from aeroelastic instabilities. On the
other hand, this angle could potentially be employed for developing higher-
efficiency flapping-foil energy-harvesting devices. Lastly, the use of FMPM to
analyze (especially three-dimensional) flow fields obtained from PIV
experiments has shown great utility, and the results further demonstrated the
powerful capability of this emerging method to provide valuable physical
insights into vortex-dominated flows, paving the way for more applications of
this method to data from future experimental and numerical studies.
## Acknowledgments
This work is funded by the Air Force Office of Scientific Research, Grant
FA9550-21-1-0462, managed by Dr. Gregg Abate. We acknowledge the helpful
discussions with Rajat Mittal, Karthik Menon, and Sushrut Kumar.
## Declaration of interests
The authors report no conflict of interest.
## References
* Beatus & Cohen (2015) Beatus, T. & Cohen, I. 2015 Wing-pitch modulation in maneuvering fruit flies is explained by an interplay between aerodynamics and a torsional spring. Phys. Rev. E 92 (2), 022712.
* Beem et al. (2012) Beem, H. R., Rival, D. E. & Triantafyllou, M. S. 2012 On the stabilization of leading-edge vortices with spanwise flow. Exp. Fluids 52 (2), 511–517.
* Bergou et al. (2007) Bergou, A. J., Xu, S. & Wang, Z. J. 2007 Passive wing pitch reversal in insect flight. J. Fluid Mech. 591, 321–337.
* Birch & Dickinson (2001) Birch, J. M. & Dickinson, M. H. 2001 Spanwise flow and the attachment of the leading-edge vortex on insect wings. Nature 412 (6848), 729–733.
* Borazjani & Daghooghi (2013) Borazjani, I. & Daghooghi, M. 2013 The fish tail motion forms an attached leading edge vortex. Proc. Royal Soc. B. 280 (1756), 20122071.
* Bottom II et al. (2016) Bottom II, R. G., Borazjani, I., Blevins, E. L. & Lauder, G. V. 2016 Hydrodynamics of swimming in stingrays: numerical simulations and the role of the leading-edge vortex. J. Fluid Mech. 788, 407–443.
* Boudreau et al. (2018) Boudreau, M., Dumas, G., Rahimpour, M. & Oshkai, P. 2018 Experimental investigation of the energy extraction by a fully-passive flapping-foil hydrokinetic turbine prototype. J. Fluids Struct. 82, 446–472.
* Chiereghin et al. (2020) Chiereghin, N., Bull, S., Cleaver, D. J. & Gursul, I. 2020 Three-dimensionality of leading-edge vortices on high aspect ratio plunging wings. Phys. Rev. Fluids 5 (6), 064701.
* Dimitriadis & Li (2009) Dimitriadis, G. & Li, J. 2009 Bifurcation behavior of airfoil undergoing stall flutter oscillations in low-speed wind tunnel. AIAA J. 47 (11), 2577–2596.
* Dowell et al. (1989) Dowell, E. H., Curtiss, H. C., Scanlan, R. H. & Sisto, F. 1989 A modern course in aeroelasticity. Springer.
* Eldredge & Jones (2019) Eldredge, J. D. & Jones, A. R. 2019 Leading-edge vortices: mechanics and modeling. Annu. Rev. Fluid Mech. 51, 75–104.
* Ellington et al. (1996) Ellington, C. P., van den Berg, C., Willmott, A. P. & Thomas, A. L. R. 1996 Leading-edge vortices in insect flight. Nature 384 (6610), 626.
* Gursul & Cleaver (2019) Gursul, I. & Cleaver, D. 2019 Plunging oscillations of airfoils and wings: Progress, opportunities, and challenges. AIAA J. 57 (9), 3648–3665.
* Hartloper et al. (2013) Hartloper, C., Kinzel, M. & Rival, D. E. 2013 On the competition between leading-edge and tip-vortex growth for a pitching plate. Exp. Fluids 54 (1), 1447.
* Hover et al. (1997) Hover, F. S., Miller, S. N. & Triantafyllou, M. S. 1997 Vortex-induced vibration of marine cables: experiments using force feedback. J. Fluids Struct. 11 (3), 307–326.
* Hunt et al. (1988) Hunt, J. C. R., Wray, A. A. & Moin, P. 1988 Eddies, streams, and convergence zones in turbulent flows. Center for Turbulence Research Report CTR-S88 pp. 193–208.
* Jafferis et al. (2019) Jafferis, N. T., Helbling, E. F., Karpelson, M. & Wood, R. J. 2019 Untethered flight of an insect-sized flapping-wing microscale aerial vehicle. Nature 570 (7762), 491–495.
* Jeong & Hussain (1995) Jeong, J. & Hussain, F. 1995 On the identification of a vortex. J. Fluid Mech. 285, 69–94.
* Jones (1947) Jones, R. T. 1947 Effects of sweepback on boundary layer and separation. Tech. Rep. 1042. NACA.
* Kim & Gharib (2010) Kim, D. & Gharib, M. 2010 Experimental study of three-dimensional vortex structures in translating and rotating plates. Exp. Fluids 49, 329–339.
* King et al. (2018) King, J. T., Kumar, R. & Green, M. A. 2018 Experimental observations of the three-dimensional wake structures and dynamics generated by a rigid, bioinspired pitching panel. Phys. Rev. Fluids 3 (3), 034701.
* Lentink et al. (2007) Lentink, D., Müller, U. K., Stamhuis, E. J., de Kat, R., van Gestel, W., Veldhuis, L. L. M., Henningsson, P., Hedenström, A., Videler, J. J. & van Leeuwen, J. L. 2007 How swifts control their glide performance with morphing wings. Nature 446 (7139), 1082–1085.
* Li et al. (2020a) Li, J., Wang, Y., Graham, M. & Zhao, X. 2020a Vortex moment map for unsteady incompressible viscous flows. J. Fluid Mech. 891, A13.
* Li & Wu (2018) Li, J. & Wu, Z.-N. 2018 Vortex force map method for viscous flows of general airfoils. J. Fluid Mech. 836, 145–166.
* Li et al. (2020b) Li, J., Zhao, X. & Graham, M. 2020b Vortex force maps for three-dimensional unsteady flows with application to a delta wing. J. Fluid Mech. 900.
* Long & Nipper (1996) Long, J. H. & Nipper, K. S. 1996 The importance of body stiffness in undulatory propulsion. Am. Zool. 36 (6), 678–694.
* Mackowski & Williamson (2011) Mackowski, A. W. & Williamson, C. H. K. 2011 Developing a cyber-physical fluid dynamics facility for fluid-structure interaction studies. J. Fluids Struct. 27 (5-6), 748–757.
* McCroskey (1982) McCroskey, W. J. 1982 Unsteady airfoils. Annu. Rev. Fluid Mech. 14 (1), 285–311.
* Menon et al. (2022) Menon, K., Kumar, S. & Mittal, R. 2022 Contribution of spanwise and cross-span vortices to the lift generation of low-aspect-ratio wings: Insights from force partitioning. Phys. Rev. Fluids 7 (11), 114102.
* Menon & Mittal (2019) Menon, K. & Mittal, R. 2019 Flow physics and dynamics of flow-induced pitch oscillations of an airfoil. J. Fluid Mech. 877, 582–613.
* Menon & Mittal (2021a) Menon, K. & Mittal, R. 2021a On the initiation and sustenance of flow-induced vibration of cylinders: insights from force partitioning. J. Fluid Mech. 907.
* Menon & Mittal (2021b) Menon, K. & Mittal, R. 2021b Quantitative analysis of the kinematics and induced aerodynamic loading of individual vortices in vortex-dominated flows: a computation and data-driven approach. J. Comput. Phys. 443, 110515.
* Menon & Mittal (2021c) Menon, K. & Mittal, R. 2021c Significance of the strain-dominated region around a vortex on induced aerodynamic loads. J. Fluid Mech. 918, R3.
* Moriche et al. (2017) Moriche, M., Flores, O. & García-Villalba, M. 2017 On the aerodynamic forces on heaving and pitching airfoils at low reynolds number. J. Fluid Mech. 828, 395–423.
* Morse & Williamson (2009) Morse, T. L. & Williamson, C. H. K. 2009 Prediction of vortex-induced vibration response by employing controlled motion. J. Fluid Mech. 634, 5–39.
* Mulleners & Raffel (2012) Mulleners, K. & Raffel, M. 2012 The onset of dynamic stall revisited. Exp. Fluids 52 (3), 779–793.
* Negi et al. (2021) Negi, P. S., Hanifi, A. & Henningson, D. S. 2021 On the onset of aeroelastic pitch-oscillations of a naca0012 wing at transitional reynolds numbers. J. Fluids Struct. 105, 103344.
* Onoue & Breuer (2016) Onoue, K. & Breuer, K. S. 2016 Vortex formation and shedding from a cyber-physical pitching plate. J. Fluid Mech. 793, 229–247.
* Onoue & Breuer (2017) Onoue, K. & Breuer, K. S. 2017 A scaling for vortex formation on swept and unswept pitching wings. J. Fluid Mech. 832, 697–720.
* Onoue et al. (2015) Onoue, K., Song, A., Strom, B. & Breuer, K. S. 2015 Large amplitude flow-induced oscillations and energy harvesting using a cyber-physical pitching plate. J. Fluids Struct. 55, 262–275.
* Polhamus (1971) Polhamus, E. C. 1971 Predictions of vortex-lift characteristics by a leading-edge suction analogy. J. Aircr. 8 (4), 193–199.
* Quartapelle & Napolitano (1983) Quartapelle, L. & Napolitano, M. 1983 Force and moment in incompressible flows. AIAA J. 21 (6), 911–913.
* Quinn & Lauder (2021) Quinn, D. & Lauder, G. 2021 Tunable stiffness in fish robotics: mechanisms and advantages. Bioinspir. Biomim. 17 (1), 011002.
* Rao (1995) Rao, S. S. 1995 Mechanical Vibrations. Addison-Wesley.
* Ribeiro et al. (2022) Ribeiro, J. H. M., Yeh, C.-A., Zhang, K. & Taira, K. 2022 Wing sweep effects on laminar separated flows. J. Fluid Mech. 950, A23.
* Rival et al. (2014) Rival, D. E., Kriegseis, J., Schaub, P., Widmann, A. & Tropea, C. 2014 Characteristic length scales for vortex detachment on plunging profiles with varying leading-edge geometry. Exp. Fluids 55 (1), 1–8.
* Shyy et al. (2010) Shyy, W., Aono, H., Chimakurthi, S. K., Trizila, P., Kang, C.-K., Cesnik, C. E. S. & Liu, H. 2010 Recent progress in flapping wing aerodynamics and aeroelasticity. Prog. Aerosp. Sci. 46 (7), 284–327.
* Son et al. (2022a) Son, O., Gao, A.-K., Gursul, I., Cantwell, C. D., Wang, Z. & Sherwin, S. J. 2022a Leading-edge vortex dynamics on plunging airfoils and wings. J. Fluid Mech. 940, A28.
* Son et al. (2022b) Son, O., Wang, Z. & Gursul, I. 2022b Dynamics of tip vortices on plunging wings. Aerosp. Sci. Technol. 128, 107761.
* Strogatz (1994) Strogatz, S. H. 1994 Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Perseus Books.
* Su & Breuer (2019) Su, Y & Breuer, K. S. 2019 Resonant response and optimal energy harvesting of an elastically mounted pitching and heaving hydrofoil. Phys. Rev. Fluids 4 (6), 064701.
* Taira & Colonius (2009) Taira, K. & Colonius, T. 2009 Three-dimensional flows around low-aspect-ratio flat-plate wings at low reynolds numbers. J. Fluid Mech. 623, 187–207.
* Tong et al. (2022) Tong, R., Wu, Z., Chen, D., Wang, J., Du, S., Tan, M. & Yu, J. 2022 Design and optimization of an untethered high-performance robotic tuna. IEEE/ASME Trans. Mechatron. 27 (5), 4132–4142.
* Visbal & Garmann (2019) Visbal, M. R. & Garmann, D. J. 2019 Effect of sweep on dynamic stall of a pitching finite-aspect-ratio wing. AIAA J. 57 (8), 3274–3289.
* Wang (2005) Wang, Z. J. 2005 Dissecting insect flight. Annu. Rev. Fluid Mech. 37, 183–210.
* Wojcik & Buchholz (2014) Wojcik, C. J. & Buchholz, J. H. J. 2014 Vorticity transport in the leading-edge vortex on a rotating blade. J. Fluid Mech. 743, 249.
* Wong et al. (2013) Wong, J. G., Kriegseis, J. & Rival, D. E. 2013 An investigation into vortex growth and stabilization for two-dimensional plunging and flapping plates with varying sweep. J. Fluids Struct. 43, 231–243.
* Wong & Rival (2015) Wong, J. G. & Rival, D. E. 2015 Determining the relative stability of leading-edge vortices on nominally two-dimensional flapping profiles. J. Fluid Mech. 766, 611.
* Wu et al. (2019) Wu, K. S., Nowak, J. & Breuer, K. S. 2019 Scaling of the performance of insect-inspired passive-pitching flapping wings. J. R. Soc. Interface 16 (161), 20190609.
* Xiao & Zhu (2014) Xiao, Q. & Zhu, Q. 2014 A review on flow energy harvesters based on flapping foils. J. Fluids Struct. 46, 174–191.
* Yilmaz & Rockwell (2012) Yilmaz, T. O. & Rockwell, D. 2012 Flow structure on finite-span wings due to pitch-up motion. J. Fluid Mech. 691, 518.
* Young et al. (2014) Young, J., Lai, J. C. S. & Platzer, M. F. 2014 A review of progress and challenges in flapping foil power generation. Prog. Aerosp. Sci. 67, 2–28.
* Yuan et al. (2015) Yuan, W., Poirel, D., Wang, B. & Benaissa, A. 2015 Effect of freestream turbulence on airfoil limit-cycle oscillations at transitional reynolds numbers. J. Aircr. 52 (4), 1214–1225.
* Zhang et al. (2015) Zhang, C., Hedrick, T. L. & Mittal, R. 2015 Centripetal acceleration reaction: an effective and robust mechanism for flapping flight in insects. PloS One 10 (8), e0132093.
* Zhang et al. (2020a) Zhang, K., Hayostek, S., Amitay, M., Burtsev, A., Theofilis, V. & Taira, K. 2020a Laminar separated flows over finite-aspect-ratio swept wings. J. Fluid Mech. 905, R1.
* Zhang et al. (2020b) Zhang, K., Hayostek, S., Amitay, M., He, W., Theofilis, V. & Taira, K. 2020b On the formation of three-dimensional separated flows over wings under tip effects. J. Fluid Mech. 895, A9.
* Zhang & Taira (2022) Zhang, K. & Taira, K. 2022 Laminar vortex dynamics around forward-swept wings. Phys. Rev. Fluids 7 (2), 024704.
* Zhong et al. (2021a) Zhong, Q., Han, T., Moored, K. W. & Quinn, D. B. 2021a Aspect ratio affects the equilibrium altitude of near-ground swimmers. J. Fluid Mech. 917.
* Zhong et al. (2021b) Zhong, Q., Zhu, J., Fish, F. E., Kerr, S. J., Downs, A. M., Bart-Smith, H. & Quinn, D. B. 2021b Tunable stiffness enables fast and efficient swimming in fish-like robots. Sci. Robot. 6 (57), eabe4088.
* Zhu et al. (2023) Zhu, Y., Lee, H., Kumar, S., Menon, K., Mittal, R. & Breuer, K. 2023 Force moment partitioning and scaling analysis of vortices shed by a 2d pitching wing in quiescent fluid. Exp. Fluids 64 (10), 158.
* Zhu et al. (2021) Zhu, Y., Mathai, V. & Breuer, K. 2021 Nonlinear fluid damping of elastically mounted pitching wings in quiescent water. J. Fluid Mech. 923, R2.
* Zhu et al. (2020) Zhu, Y., Su, Y. & Breuer, K. 2020 Nonlinear flow-induced instability of an elastically mounted pitching wing. J. Fluid Mech. 899, A35.
* Zurman-Nasution et al. (2021) Zurman-Nasution, A. N., Ganapathisubramani, B. & Weymouth, G. D. 2021 Fin sweep angle does not determine flapping propulsive performance. J. R. Soc. Interface 18 (178), 20210174.
# Product of exponentials concentrates
around the exponential of the sum
Michael Anshelevich, Austin Pritchett Department of Mathematics, Texas A&M
University, College Station, TX 77843-3368<EMAIL_ADDRESS><EMAIL_ADDRESS>
###### Abstract.
For two matrices $A$ and $B$, and large $n$, we show that most products of $n$
factors of $e^{A/n}$ and $n$ factors of $e^{B/n}$ are close to $e^{A+B}$. This
extends the Lie-Trotter formula. The elementary proof is based on the relation
between words and lattice paths, asymptotics of binomial coefficients, and
matrix inequalities. The result holds for more than two matrices.
###### 2010 Mathematics Subject Classification:
Primary 15A16; Secondary 05A16
This work was supported in part by a Simons Foundation Collaboration Grant.
## 1\. Introduction.
Matrix products do not commute. One familiar consequence is that in general,
$e^{A}e^{B}\neq e^{A+B}.$
(Here for a square matrix $A$, the expression $e^{A}$ can be defined, for
example, using the power series expansion of the exponential function.)
However, a vestige of the “product of exponentials is the exponential of the
sum” property remains, as long as we take the factors in a very special
_alternating_ order.
###### Theorem (Lie-Trotter product formula).
Let $A$ and $B$ be complex square matrices. Then
$\lim_{n\rightarrow\infty}\left(e^{A/n}e^{B/n}\right)^{n}=e^{A+B},$
where the convergence is with respect to any matrix norm.
This result goes back to Sophus Lie, see [1] or Proposition 16(b) below for an
elementary proof. Clearly, if we take $n$ factors $e^{A/n}$ and $n$ factors
$e^{B/n}$ but multiply them in a different order, the result will not always
converge to $e^{A+B}$. For example,
$\left(e^{A/n}\right)^{n}\left(e^{B/n}\right)^{n}=e^{A}e^{B},\quad\left(e^{B/n}\right)^{n}\left(e^{A/n}\right)^{n}=e^{B}e^{A}.$
The reader is invited to plot all such products for their preferred choices
of (real) matrices at
https://austinpritchett.shinyapps.io/nexpm_visualization/
Nevertheless, in this article we show that, for large $n$, the overwhelming
majority of products of $n$ factors $e^{A/n}$ and $n$ factors $e^{B/n}$ will
be close to $e^{A+B}$. In other words, such products _concentrate_ around
$e^{A+B}$. To give a precise formulation, we introduce some notation.
###### Definition 1.
Denote by $\mathcal{W}_{n}$ the set of all words in $A$ and $B$ which contain
exactly $n$ $A$’s and $n$ $B$’s. Denote by $w[i]$ the $i$’th letter in $w$.
###### Theorem 2.
Let $A$ and $B$ be complex square matrices. Consider all $\binom{2n}{n}$
products of $e^{A/n}$ and $e^{B/n}$ of the form $\prod_{i=1}^{2n}e^{w[i]/n}$
for $w\in\mathcal{W}_{n}$. Among these products, the proportion of those which
differ from $e^{A+B}$ in norm by less than
$\sqrt{\frac{\ln n}{n}}$
goes to $1$ as $n\rightarrow\infty$.
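Before turning to the proof, the reader may find it instructive to test the statement numerically. The following sketch (a sanity check, not part of the argument; the matrices and the sample size are arbitrary choices) samples uniformly random words from $\mathcal{W}_{n}$ and compares the resulting products with $e^{A+B}$.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])         # note AB != BA

n = 200
EA, EB, target = expm(A / n), expm(B / n), expm(A + B)

dists = []
for _ in range(100):
    word = rng.permutation([0] * n + [1] * n)  # uniform random word in W_n
    P = np.eye(2)
    for letter in word:
        P = P @ (EA if letter == 0 else EB)
    dists.append(np.linalg.norm(P - target))

# typical distance vs. the sqrt(ln n / n) scale appearing in Theorem 2
print(np.median(dists), np.sqrt(np.log(n) / n))
```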
Along the way to the proof of this result, we discuss several metrics on the
space of words, which are interesting in their own right.
This expanded version also contains an appendix, which does not appear in the
published version. In it, we provide several figures illustrating possible
shapes of the set of products.
## 2\. Words and paths.
We define three metrics on the set of words $\mathcal{W}_{n}$.
###### Definition 3.
Let $w$ be a word. A _swap_ is an interchange of two neighboring letters in
$w$. The _swap distance_ $\operatorname{\rho_{\mathit{swap}}}(w,v)$ between
two words $w,v\in\mathcal{W}_{n}$ is the minimal number of swaps needed to
transform $w$ into $v$.
This metric may remind some readers of the bubble-sort algorithm.
###### Example 4.
We may swap
$AABB\mapsto ABAB\mapsto ABBA\mapsto BABA\mapsto BBAA.$
It is not hard to check that this is the minimal number of swaps needed, so
$\operatorname{\rho_{\mathit{swap}}}(AABB,BBAA)=4.$
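For short words, the swap distance can be verified by brute force; a minimal breadth-first search over adjacent transpositions (an illustration only) confirms the value above.

```python
from collections import deque

def swap_distance(w, v):
    """Minimal number of adjacent transpositions taking word w to word v."""
    seen, frontier = {w}, deque([(w, 0)])
    while frontier:
        u, d = frontier.popleft()
        if u == v:
            return d
        for i in range(len(u) - 1):            # try every adjacent swap
            s = u[:i] + u[i + 1] + u[i] + u[i + 2:]
            if s not in seen:
                seen.add(s)
                frontier.append((s, d + 1))

print(swap_distance("AABB", "BBAA"))           # prints 4, as in Example 4
```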
To define the other two metrics, it is convenient to represent a word by a
_lattice path_. To be able to consider words of different length on equal
footing, our lattice paths will be normalized.
###### Definition 5.
A lattice path connects the origin $(0,0)$ to the point $(1,1)$ by a path
consisting of $n$ horizontal and $n$ vertical segments of length $1/n$.
We may identify words in $\mathcal{W}_{n}$ with such paths. For a word $w$,
denote
$w_{A}[j]=\\#\left\\{i\leq j:w[i]=A\right\\}$
the number of $A$’s among the first $j$ letters, and the same for $w_{B}[j]$.
Then the path corresponding to $w$ consists of points
$\left\\{\frac{1}{n}(w_{A}[j],w_{B}[j]):1\leq j\leq 2n\right\\}$
connected by straight line segments, with $A$ corresponding to a horizontal
step, and $B$ to a vertical step. See Figure 1.
Figure 1. The path corresponding to the word $AABBBBAAABAABBBAABAB$.
###### Definition 6.
For two words $w,v\in\mathcal{W}_{n}$, define the distance $\rho_{1}(w,v)$ to
be the (unsigned) area of the region between the paths. See Figure 2.
Figure 2. $\rho_{1}(w,v)=\frac{24}{100}$, while
$\rho_{\infty}(w,v)=\frac{4}{10}$. In the second plot,
$\left|w_{A}[5]-v_{A}[5]\right|=2$.
###### Lemma 7.
We can express $\rho_{1}(w,v)$ directly in terms of the words $w,v$ as
follows:
$\begin{split}\rho_{1}\left(w,v\right)&=\frac{1}{2n^{2}}\sum_{j=1}^{2n}\Bigl{|}\Bigl{(}w_{A}[j]-w_{B}[j]\Bigr{)}-\Bigl{(}v_{A}[j]-v_{B}[j]\Bigr{)}\Bigr{|}\\\
&=\frac{1}{n^{2}}\sum_{j=1}^{2n}\Bigl{|}w_{A}[j]-v_{A}[j]\Bigr{|}\\\
&=\frac{1}{2n^{2}}\sum_{j=1}^{2n}\left(\Bigl{|}w_{A}[j]-v_{A}[j]\Bigr{|}+\Bigl{|}w_{B}[j]-v_{B}[j]\Bigr{|}\right).\end{split}$
Here the first representation compares the excess of the number of $A$’s over
the number of $B$’s in $w$ and $v$.
###### Proof.
To obtain the second expression, we slice the region between the paths into
NW-SE diagonal regions. For each $j$,
(1) $w_{A}[j]+w_{B}[j]=v_{A}[j]+v_{B}[j]=j,$
and there are exactly $\Bigl{|}w_{A}[j]-v_{A}[j]\Bigr{|}$ squares located on
the diagonal between the points $\frac{1}{n}(w_{A}[j],w_{B}[j])$ and
$\frac{1}{n}(v_{A}[j],v_{B}[j])$. See Figure 2. For the first and third
expressions, again using the identity (1),
$\begin{split}\Bigl{|}\Bigl{(}w_{A}[j]-w_{B}[j]\Bigr{)}-\Bigl{(}v_{A}[j]-v_{B}[j]\Bigr{)}\Bigr{|}&=2\Bigl{|}w_{A}[j]-v_{A}[j]\Bigr{|}\\\
&=\Bigl{|}w_{A}[j]-v_{A}[j]\Bigr{|}+\Bigl{|}w_{B}[j]-v_{B}[j]\Bigr{|}.\qed\end{split}$
###### Definition 8.
The third distance we will consider is
$\begin{split}\rho_{\infty}\left(w,v\right)&=\frac{1}{n}\max_{1\leq j\leq
2n}\Bigl{|}\Bigl{(}w_{A}[j]-w_{B}[j]\Bigr{)}-\Bigl{(}v_{A}[j]-v_{B}[j]\Bigr{)}\Bigr{|}\\\
&=\frac{2}{n}\max_{1\leq j\leq 2n}\Bigl{|}w_{A}[j]-v_{A}[j]\Bigr{|}\\\
&=\frac{1}{n}\max_{1\leq j\leq
2n}\left(\Bigl{|}w_{A}[j]-v_{A}[j]\Bigr{|}+\Bigl{|}w_{B}[j]-v_{B}[j]\Bigr{|}\right).\end{split}$
It can be interpreted as the maximal difference between the corresponding
points on the paths, measured in the NW-SE direction, with appropriate
normalization.
Clearly
(2) $\rho_{1}\leq\rho_{\infty}.$
We now observe that the swap metric and the path metric are related in a
simple way.
###### Theorem 9.
$\rho_{1}(w,v)=\dfrac{1}{n^{2}}\operatorname{\rho_{\mathit{swap}}}(w,v)$.
###### Proof.
Each swap of neighboring letters changes the area between the paths by
$\dfrac{1}{n^{2}}$. So
$\rho_{1}(w,v)\leq\dfrac{1}{n^{2}}\operatorname{\rho_{\mathit{swap}}}(w,v)$.
On the other hand, unless the words are equal, we can find an $A$ immediately
followed by a $B$ in one of the words such that, at that point, this word has
more $A$'s than the other one. Swapping these $A$ and $B$ then decreases
$\rho_{1}$ by $\dfrac{1}{n^{2}}$. So one can always transform a word $w$ into
$v$ by $n^{2}\rho_{1}(w,v)$ swaps. ∎
###### Proposition 10.
Let $M\in\left\\{1,\ldots,n\right\\}$. The number of words
$w\in\mathcal{W}_{n}$ for which the $\rho_{\infty}$ distance to the _standard
word_ $\operatorname{\overline{\mathrm{w}}}=ABAB\ldots AB\in\mathcal{W}_{n}$
is at least $M/n$ is at most $2\binom{2n}{n-M+1}$.
###### Proof.
Note first that
$\begin{split}\rho_{\infty}(w,\operatorname{\overline{\mathrm{w}}})&=\frac{1}{n}\max_{1\leq
j\leq
2n}\Bigl{|}\Bigl{(}w_{A}[j]-w_{B}[j]\Bigr{)}-\Bigl{(}\operatorname{\overline{\mathrm{w}}}_{A}[j]-\operatorname{\overline{\mathrm{w}}}_{B}[j]\Bigr{)}\Bigr{|}\\\
&\leq\frac{1}{n}\max_{1\leq j\leq
2n}\Bigl{|}w_{A}[j]-w_{B}[j]\Bigr{|}+\frac{1}{n}.\end{split}$
Suppose $\rho_{\infty}(w,\operatorname{\overline{\mathrm{w}}})\geq M/n$, so
that
$\max_{1\leq j\leq 2n}\Bigl{|}w_{A}[j]-w_{B}[j]\Bigr{|}\geq M-1.$
We now apply the Reflection Principle, see [4] or Section 10.3 of [2]. Let $j$
be the smallest index such that
(3) $\Bigl{|}w_{A}[j]-w_{B}[j]\Bigr{|}=M-1;$
note that such a $j$ exists. Let $\tilde{w}$ be a word of length $2n$ (not
necessarily in $\mathcal{W}_{n}$) such that
* •
for $i\leq j$, $\tilde{w}[i]=w[i]$,
* •
for $i>j$, $\tilde{w}[i]=A$ if $w[i]=B$, and $\tilde{w}[i]=B$ if $w[i]=A$.
Since $w_{A}[j]=w_{B}[j]\pm(M-1)$, $\tilde{w}$ contains
$w_{A}[j]+(n-w_{B}[j])=n\pm(M-1)$
$A$’s. Conversely, because $j$ is the _smallest_ index satisfying equation
(3), this procedure can be reversed, and each word $v$ with $n\pm(M-1)$ $A$’s
arises as $\tilde{w}$ for a unique $w\in\mathcal{W}_{n}$. It remains to note
that the total number of words of length $2n$ with $n\pm(M-1)$ $A$’s is
$2\binom{2n}{n-M+1}.\qed$
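For small $n$ the bound can be checked exhaustively; the sketch below (an illustration only) enumerates all of $\mathcal{W}_{n}$ and compares the exact count with $2\binom{2n}{n-M+1}$.

```python
from itertools import combinations
from math import comb

n, M = 6, 4
std = [i % 2 for i in range(2 * n)]            # ABAB...AB, with A = 0, B = 1

def rho_inf(w, v):
    dw = dv = m = 0
    for a, b in zip(w, v):
        dw += 1 if a == 0 else -1              # running w_A[j] - w_B[j]
        dv += 1 if b == 0 else -1
        m = max(m, abs(dw - dv))
    return m / n

count = 0
for pos in combinations(range(2 * n), n):      # choose the positions of the A's
    s = set(pos)
    w = [0 if i in s else 1 for i in range(2 * n)]
    if rho_inf(w, std) >= M / n:
        count += 1
print(count, 2 * comb(2 * n, n - M + 1))       # the count never exceeds the bound
```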
###### Remark 11.
Recall the little-o, big-O, and asymptotic notation. For two positive
sequences $(a_{n})$ and $(b_{n})$, we write
* •
$a_{n}=o(b_{n})$ if $a_{n}/b_{n}\rightarrow 0$
* •
$a_{n}=O(b_{n})$ if $a_{n}/b_{n}$ is bounded
* •
$a_{n}\sim b_{n}$ if $a_{n}/b_{n}\rightarrow 1$
###### Corollary 12.
Let $p(n)$ be a positive sequence such that $p(n)\rightarrow\infty$ and
$p(n)=o(n^{1/6})$. Then for large $n$, the proportion of words
$w\in\mathcal{W}_{n}$ for which
$\rho_{\infty}(w,\operatorname{\overline{\mathrm{w}}})\geq\frac{p(n)}{\sqrt{n}}$
is asymptotically at most $2e^{-p(n)^{2}}$.
###### Proof.
The desired proportion is at most
$2\frac{\binom{2n}{n-[\sqrt{n}p(n)]+1}}{\binom{2n}{n}},$
where $[\sqrt{n}p(n)]$ denotes the integer part. The asymptotics of this
expression can be found using Stirling’s formula, see equation (5.43) in [5].
∎
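The quality of this estimate is easy to probe numerically; the sketch below (an illustration only, with the admissible choice $p(n)=\sqrt{\ln\ln n}$) compares the exact binomial ratio with $2e^{-p(n)^{2}}$.

```python
from math import comb, exp, log, sqrt

for n in (10**3, 10**4, 10**5):
    p = sqrt(log(log(n)))              # p(n) -> infinity, with p(n) = o(n^(1/6))
    M = int(sqrt(n) * p)
    ratio = 2 * comb(2 * n, n - M + 1) / comb(2 * n, n)
    print(n, ratio, 2 * exp(-p * p))   # the two columns should be comparable
```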
###### Remark 13.
$\rho_{\infty}$ is closely related to the notion of “span” from [3], where its
asymptotic expected value is computed (more precisely, “span” is the $\tau$
statistic in the final section below). Related analysis for paths which lie
entirely above the main diagonal is sometimes called the Sock Counting
Problem, for reasons we invite the reader to discover.
## 3\. Matrix estimates.
We now recall that $A,B$ are actually matrices in
$M_{d}(\mathbb{C})=\mathbb{C}^{d\times d}$. Denote by $\left\|{\cdot}\right\|$
some sub-multiplicative norm on this matrix space, such as the operator norm
or the Frobenius norm.
###### Lemma 14.
For every word $w\in\mathcal{W}_{n}$
$\left\|{e^{w[1]/n}e^{w[2]/n}\cdots e^{w[2n]/n}}\right\|\leq
e^{\left\|{A}\right\|+\left\|{B}\right\|},$
with a uniform bound which does not depend on the word or on $n$.
###### Proof.
Since the norm is sub-multiplicative,
$\left\|{e^{C}}\right\|=\left\|{\sum_{k=0}^{\infty}\frac{1}{k!}C^{k}}\right\|\leq\sum_{k=0}^{\infty}\frac{1}{k!}\left\|{C}\right\|^{k}=e^{\left\|{C}\right\|}.$
Therefore
$\begin{split}\left\|{e^{w[1]/n}e^{w[2]/n}\cdots
e^{w[2n]/n}}\right\|&\leq\prod_{i=1}^{2n}\left\|{e^{w[i]/n}}\right\|\leq\prod_{i=1}^{2n}e^{\left\|{w[i]}\right\|/n}\\\
&=e^{\sum_{i=1}^{2n}\left\|{w[i]}\right\|/n}=e^{\left\|{A}\right\|+\left\|{B}\right\|}.\qed\end{split}$
The following estimates can be improved (with a longer argument), but suffice
for our purposes.
###### Lemma 15.
For large $n$,
$\left\|{e^{A/n}e^{B/n}-e^{(A+B)/n}}\right\|\leq\frac{1}{n^{2}}\left\|{AB-
BA}\right\|$
and
$\left\|{e^{A/n}e^{B/n}-e^{B/n}e^{A/n}}\right\|\leq\frac{2}{n^{2}}\left\|{AB-
BA}\right\|.$
###### Proof.
If $AB=BA$, the result is immediate, so we assume that $AB\neq BA$. Then
$\begin{split}&\left\|{e^{A/n}e^{B/n}-e^{(A+B)/n}-\frac{1}{2n^{2}}(AB-
BA)}\right\|\\\
&\quad=\left\|{\sum_{k=0}^{\infty}\frac{1}{n^{k}}\sum_{\ell=0}^{k}\frac{1}{\ell!(k-\ell)!}A^{\ell}B^{k-\ell}-\sum_{k=0}^{\infty}\frac{1}{n^{k}}\frac{1}{k!}(A+B)^{k}-\frac{1}{2n^{2}}(AB-
BA)}\right\|\\\
&\quad=\left\|{\sum_{k=3}^{\infty}\frac{1}{n^{k}}\frac{1}{k!}\left(\sum_{\ell=0}^{k}\binom{k}{\ell}A^{\ell}B^{k-\ell}-(A+B)^{k}\right)}\right\|\\\
&\quad\leq
2\sum_{k=3}^{\infty}\frac{1}{n^{k}}\frac{1}{k!}(\left\|{A}\right\|+\left\|{B}\right\|)^{k}\\\
&\quad\leq\frac{2}{n^{3}}e^{\left\|{A}\right\|+\left\|{B}\right\|}.\end{split}$
Therefore
$\left\|{e^{A/n}e^{B/n}-e^{(A+B)/n}}\right\|\leq\frac{1}{2n^{2}}\left\|{AB-
BA}\right\|+\frac{2}{n^{3}}e^{\left\|{A}\right\|+\left\|{B}\right\|}\leq\frac{1}{n^{2}}\left\|{AB-
BA}\right\|$
for large $n$. The second estimate follows from the first. ∎
###### Proposition 16.
1. (a)
Swapping two neighboring letters in a word changes the corresponding product
by $O(1/n^{2})$. More precisely,
$\Bigl{\|}e^{w[1]/n}\cdots e^{w[i]/n}e^{w[i+1]/n}\cdots e^{w[2n]/n}\\\
-e^{w[1]/n}\cdots e^{w[i+1]/n}e^{w[i]/n}\cdots
e^{w[2n]/n}\Bigr{\|}\leq\frac{2}{n^{2}}\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}.$
2. (b)
The Lie-Trotter formula:
$\left\|{\left(e^{A/n}e^{B/n}\right)^{n}-e^{A+B}}\right\|\leq\frac{1}{n}\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}.$
###### Proof.
For (a), using the two preceding lemmas,
$\begin{split}&\Bigl{\|}e^{w[1]/n}\cdots e^{w[i]/n}e^{w[i+1]/n}\cdots
e^{w[2n]/n}\\\ &\qquad-e^{w[1]/n}\cdots e^{w[i+1]/n}e^{w[i]/n}\cdots
e^{w[2n]/n}\Bigr{\|}\\\ &\quad=\Bigl{\|}e^{w[1]/n}\cdots e^{w[i-1]/n}\\\
&\quad\qquad\times\Bigl{(}e^{w[i]/n}e^{w[i+1]/n}-e^{w[i+1]/n}e^{w[i]/n}\Bigr{)}e^{w[i+2]/n}\cdots
e^{w[2n]/n}\Bigr{\|}\\\ &\quad\leq
e^{\left\|{A}\right\|+\left\|{B}\right\|}\left\|{e^{A/n}e^{B/n}-e^{B/n}e^{A/n}}\right\|\\\
&\quad\leq\frac{2}{n^{2}}e^{\left\|{A}\right\|+\left\|{B}\right\|}\left\|{AB-
BA}\right\|.\end{split}$
Similarly, for (b),
$\begin{split}\left\|{\left(e^{A/n}e^{B/n}\right)^{n}-e^{A+B}}\right\|&=\left\|{\left(e^{A/n}e^{B/n}\right)^{n}-\left(e^{(A+B)/n}\right)^{n}}\right\|\\\
&\leq
n\left\|{e^{A/n}e^{B/n}-e^{(A+B)/n}}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}\\\
&\leq\frac{1}{n}\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}.\qed\end{split}$
###### Theorem 17.
For fixed matrices $A,B$, the map
$F:(\mathcal{W}_{n},\rho_{1})\rightarrow(M_{d}(\mathbb{C}),\left\|{\cdot}\right\|)$
given by
$F(w)=e^{w[1]/n}e^{w[2]/n}\cdots e^{w[2n]/n}$
is Lipschitz continuous, with the Lipschitz constant independent of $n$.
###### Proof.
By Proposition 16(a) and Theorem 9, for any two words $w,v\in\mathcal{W}_{n}$,
(4)
$\begin{split}\left\|{\prod_{i=1}^{2n}e^{w[i]/n}-\prod_{i=1}^{2n}e^{v[i]/n}}\right\|&\leq\frac{\operatorname{\rho_{\mathit{swap}}}(w,v)}{n^{2}}2\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}\\\
&=\rho_{1}(w,v)2\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}.\end{split}$
So the map $F$ is Lipschitz continuous, with the constant depending only on
$A$ and $B$. ∎
###### Remark 18.
One can identify the lattice paths discussed above with non-decreasing step
functions from $[0,1]$ to $[0,1]$ which (for some $n$) take values in
$\left\\{\frac{k}{n}:0\leq k\leq n\right\\}$ and are constant on the intervals
in the uniform partition of $[0,1]$ into $n$ subintervals. It is easy to check
that the closure of this space, with respect to the metric $\rho_{1}$, is the
space of _all_ increasing functions from $[0,1]$ to $[0,1]$. By Theorem 17,
the map $F$ extends continuously to a map from the space of all such
increasing functions (with the $\rho_{1}$ metric) to $M_{d}(\mathbb{C})$.
## 4\. The main result.
###### Proof of Theorem 2.
Fix $c>0$. Applying Corollary 12 with $p(n)=\sqrt{c\ln n}$, the proportion of
words $w\in\mathcal{W}_{n}$ for which
$\rho_{\infty}(w,\operatorname{\overline{\mathrm{w}}})\geq\sqrt{c\frac{\ln
n}{n}}$
is asymptotically at most
$2e^{-c\ln n}=\frac{2}{n^{c}}.$
On the other hand, by inequality (2), for $w$ with
$\rho_{\infty}(w,\operatorname{\overline{\mathrm{w}}})<\sqrt{c\frac{\ln
n}{n}}$
we also have
$\rho_{1}(w,\operatorname{\overline{\mathrm{w}}})<\sqrt{c\frac{\ln n}{n}}.$
By equation (4), for such $w$,
$\left\|{F(w)-F(\operatorname{\overline{\mathrm{w}}})}\right\|\leq\sqrt{c\frac{\ln
n}{n}}2\left\|{AB-BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}.$
Finally, by Proposition 16(b), for such $w$,
(5) $\begin{split}\left\|{F(w)-e^{A+B}}\right\|&\leq\left(\sqrt{c\frac{\ln
n}{n}}+\frac{1}{2n}\right)2\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}\\\ &\leq
4\sqrt{c\frac{\ln n}{n}}\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}\end{split}$
for large $n$.
If $AB=BA$, then $F(w)=e^{A+B}$ for each $w\in\mathcal{W}_{n}$. If $AB\neq
BA$, set
$c=\frac{1}{\left(4\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}\right)^{2}}.$
It follows that the proportion of words $w\in\mathcal{W}_{n}$ with
$\left\|{F(w)-e^{A+B}}\right\|<\sqrt{\frac{\ln n}{n}}$ is at least
$1-\frac{2}{n^{c}},$
and so goes to one as $n\rightarrow\infty$. ∎
For readers familiar with probability theory, we can state a cleaner result.
We will need the following device.
###### Lemma (Borel-Cantelli).
Denote by $P(E)$ the probability of an event. If a family of events
$\left\\{E_{n}:n\in\mathbb{N}\right\\}$ has the property that the series
$\sum_{n=1}^{\infty}P(E_{n})<\infty$, then almost surely, an element $x$ lies
in at most finitely many $E_{n}$’s.
###### Corollary 19.
Let $\mathcal{W}_{n}$ and $F$ be as before. Put on $\mathcal{W}_{n}$ the
uniform measure, so that each word has probability $\dfrac{1}{\binom{2n}{n}}$.
Let $\mathcal{W}=\prod_{n=1}^{\infty}\mathcal{W}_{n}$ be the collection of all
sequences of words of progressively longer length, with the usual product
probability measure. Then for $\mathbf{w}=(w_{1},w_{2},\ldots)\in\mathcal{W}$,
almost surely with respect to this product measure,
$F(w_{n})\rightarrow e^{A+B}$
in the matrix norm as $n\rightarrow\infty$.
###### Proof.
In the proof of Theorem 2, take $c>1$. Since the series $\sum\frac{1}{n^{c}}$
converges, combining equation (5) and the Borel-Cantelli lemma, almost surely
$\left\|{F(w_{n})-e^{A+B}}\right\|\leq 4\sqrt{c\frac{\ln n}{n}}\left\|{AB-
BA}\right\|e^{\left\|{A}\right\|+\left\|{B}\right\|}$
for all but finitely many $n$. This implies that $F(w_{n})\rightarrow
e^{A+B}$. ∎
###### Remark 20.
We don’t need the full power of Theorem 2 to prove the preceding corollary.
Indeed, all we need is that for any $\varepsilon>0$,
$\left\|{F(w_{n})-e^{A+B}}\right\|<\varepsilon$ for all but finitely many
terms. This corresponds to the asymptotics of the binomial coefficient
$\binom{2n}{n-[n\varepsilon]}$ for $\varepsilon<1$. These asymptotics
(describing the large rather than moderate deviations from the mean) are both
easier and better known, namely
$\binom{2n}{n-[n\varepsilon]}\sim\sqrt{1-\varepsilon^{2}}e^{-H(\varepsilon)n}\binom{2n}{n},$
see for example Section 5.3 in [5]. Here
$H(\varepsilon)=(1+\varepsilon)\ln(1+\varepsilon)+(1-\varepsilon)\ln(1-\varepsilon).$
Since the function $x\ln x$ is strictly convex, $H(\varepsilon)>0$ for
$0<\varepsilon<1$. So the series $\sum_{n}e^{-H(\varepsilon)n}$ converges, and
the Borel-Cantelli lemma still implies the result.
## 5\. The case of several matrices.
Similar results hold if instead of $A$ and $B$, we start with an $N$-tuple of
matrices $A_{1},\ldots,A_{N}\in M_{d}(\mathbb{C})$. Several parts of the
argument require modification, while others are almost the same (and so are
only outlined).
###### Definition 21.
Let $\mathcal{W}_{n}^{(N)}$ be the collection of all words of length $Nn$
containing exactly $n$ of each $A_{j}$, $1\leq j\leq N$. Define the standard
word $\operatorname{\overline{\mathrm{w}}}$ to be the word $A_{1}A_{2}\ldots
A_{N}$ repeated $n$ times. Define the swap distance exactly as before,
$\rho_{1}\left(w,v\right)=\frac{1}{N^{2}n^{2}}\sum_{j=1}^{Nn}\sum_{k=1}^{N}\Bigl{|}w_{A_{k}}[j]-v_{A_{k}}[j]\Bigr{|},$
and
$\rho_{\infty}\left(w,v\right)=\frac{2}{n}\max_{\begin{subarray}{c}1\leq k\leq
N\\\ 1\leq j\leq Nn\end{subarray}}\Bigl{|}w_{A_{k}}[j]-v_{A_{k}}[j]\Bigr{|}.$
We also define $F$ by exactly the same formula as in Theorem 17.
###### Example 22.
$\operatorname{\rho_{\mathit{swap}}}$ and $\rho_{1}$ no longer determine each
other. Indeed, omitting the normalization factor,
$\rho_{1}(ACB,BCA)=2+2+0=4\quad\text{while}\quad\operatorname{\rho_{\mathit{swap}}}(ACB,BCA)=3.$
On the other hand,
$\rho_{1}(ABC,BCA)=2+1+1=4\quad\text{while}\quad\operatorname{\rho_{\mathit{swap}}}(ABC,BCA)=2.$
However, all we really need is an inequality between
$\operatorname{\rho_{\mathit{swap}}}$ and $\rho_{\infty}$, which still holds.
###### Proposition 23.
$\operatorname{\rho_{\mathit{swap}}}\leq\frac{1}{2}N^{2}n^{2}\rho_{\infty}$.
###### Proof.
Suppose the $i$’th position is the first one where $w$ and $v$ differ, and
$w[i]=A_{k}$. Then the next $A_{k}$ appears in $v$ no later than
$N\frac{n}{2}\rho_{\infty}(w,v)$ positions away. So no more than $(Nn)\cdot
N\frac{n}{2}\rho_{\infty}(w,v)$ swaps are necessary to transform $v$ into $w$.
∎
Counting exactly the number of words which lie within a given $\rho_{\infty}$
distance from the standard word is a difficult question, see Section 10.17 in
[2]. For our needs, the following slightly different estimate suffices.
###### Definition 24.
Let $w\in\mathcal{W}_{n}^{(N)}$. Denote
$\tau(w)=\frac{1}{n}\max_{\begin{subarray}{c}1\leq k,\ell\leq N\\\ 1\leq j\leq
Nn\end{subarray}}\Bigl{|}w_{A_{k}}[j]-w_{A_{\ell}}[j]\Bigr{|}.$
Just like $\rho_{\infty}(w,\operatorname{\overline{\mathrm{w}}})$, $\tau(w)$
measures how far the path corresponding to $w$ is from the straight path
connecting the origin to $(n,n,\ldots,n)$. In fact,
###### Lemma 25.
For any $w\in\mathcal{W}_{n}^{(N)}$,
$\frac{1}{2}\rho_{\infty}(w,\operatorname{\overline{\mathrm{w}}})\leq\tau(w)+\frac{1}{n}.$
###### Proof.
Note that
$\operatorname{\overline{\mathrm{w}}}_{A_{k}}[j]=\left[\frac{j+N-k}{N}\right]$
and $\sum_{\ell=1}^{N}w_{A_{\ell}}[j]=j$. So
$\begin{split}\Bigl{|}w_{A_{k}}[j]-\operatorname{\overline{\mathrm{w}}}_{A_{k}}[j]\Bigr{|}&\leq\left|w_{A_{k}}[j]-\frac{j}{N}\right|+\left|\left[\frac{j+N-k}{N}\right]-\frac{j}{N}\right|\\\
&\leq\frac{1}{N}\sum_{\ell=1}^{N}\Bigl{|}w_{A_{k}}[j]-w_{A_{\ell}}[j]\Bigr{|}+1\\\
&\leq\max_{k,\ell}\Bigl{|}w_{A_{k}}[j]-w_{A_{\ell}}[j]\Bigr{|}+1.\qed\end{split}$
###### Proposition 26.
1. (a)
The number of words $w\in\mathcal{W}_{n}^{(N)}$ for which the $\rho_{\infty}$
distance to the standard word is at least $(2M+2)/n$ is at most
$2N^{2}\binom{Nn}{n-M,n+M,n,\ldots,n}$.
2. (b)
Let $p(n)\rightarrow\infty$ with $p(n)=o(n^{1/6})$. Then the proportion of words $w\in\mathcal{W}_{n}^{(N)}$ for
which the $\rho_{\infty}$ distance to the standard word is at least
$\frac{p(n)}{\sqrt{n}}$ goes to zero as $n\rightarrow\infty$.
###### Proof.
For part (a), by the preceding lemma, it suffices to consider words with
$n\tau(w)\geq M$. Let $j$ be the smallest index such that for some $k,\ell$,
$\Bigl{|}w_{A_{k}}[j]-w_{A_{\ell}}[j]\Bigr{|}=M,$
and let $k$ and $\ell$ be the indices for which this occurs. Then applying the
reflection principle as in Proposition 10 just to the letters $A_{k}$ and
$A_{\ell}$ (keeping all the other letters in their places), the number of such
paths is at most the multinomial coefficient
$2\binom{Nn}{n-M,n+M,n,\ldots,n}$. Since there are $N^{2}$ choices for the
pair $(k,\ell)$, the result follows.
For part (b), we note that the ratio of multinomial coefficients
$\frac{\binom{Nn}{n-M,n+M,n,\ldots,n}}{\binom{Nn}{n,\ldots,n}}=\frac{n!n!}{(n-M)!(n+M)!}=\frac{\binom{2n}{n-M}}{\binom{2n}{n}}.$
So the direct application of Corollary 12 gives the result. ∎
###### Corollary 27.
Put on $\mathcal{W}_{n}^{(N)}$ the uniform measure, and let
$\mathcal{W}^{(N)}=\prod_{n=1}^{\infty}\mathcal{W}_{n}^{(N)}$, with the usual
product probability measure. Then for
$\mathbf{w}=(w_{1},w_{2},\ldots)\in\mathcal{W}^{(N)}$, almost surely with
respect to this product measure,
$F(w_{n})\rightarrow e^{A_{1}+\ldots+A_{N}}$
in the matrix norm as $n\rightarrow\infty$.
Acknowledgements. The first author would like to thank Matthew Junge, who
reminded him that words can be treated as random walks. The authors are
grateful to Harold Boas for numerous comments, which led to a substantial
improvement of the article (the remaining errors are, of course, our own), and
to the reviewers for a careful reading of the manuscript and helpful comments.
## References
* [1] Herzog, G. (2014). A proof of Lie’s product formula. _Amer. Math. Monthly._ 121(3): 254–257.
* [2] Krattenthaler, C. (2015). Lattice path enumeration. In: Bóna, M., ed. _Handbook of enumerative combinatorics_. Discrete Math. Appl. (Boca Raton). Boca Raton, FL: CRC Press, pp. 589–678.
* [3] Panny, W., Prodinger, H. (1985). The expected height of paths for several notions of height. _Studia Sci. Math. Hungar._ 20(1-4): 119–132.
* [4] Renault, M. (2008). Lost (and found) in translation: André’s actual method and its application to the generalized ballot problem. _Amer. Math. Monthly._ 115(4): 358–363.
* [5] Spencer, J. (2014). _Asymptopia_. Student Mathematical Library, vol. 71. Providence, RI: American Mathematical Society. With Laura Florescu.
## Appendix A The set of products.
We now return to the case of two matrices. The results earlier in the paper
are concerned with the asymptotic _density_ of the sets $F(\mathcal{W}_{n})$.
A quite different question is to give an asymptotic description of these sets
themselves, or perhaps of the closure
$\mathcal{S}=\overline{\bigcup_{n=1}^{\infty}F(\mathcal{W}_{n})}\subset
M_{d}(\mathbb{C}).$
It is clear that $\mathcal{S}$ is connected, closed, and bounded. It is also
easy to check that all of its elements have the same determinant, and so lie in a
hypersurface. Beyond these elementary properties, we do not in general have a
good description of this set. Several examples are included in Figure 3. The
two-dimensional images may be hard to read, so the reader is invited to take
advantage of the three-dimensional functionality at
https://austinpritchett.shinyapps.io/nexpm_visualization/
Figure 3. The sets $F(\mathcal{W}_{8})$ for several choices of $A$ and $B$. We
plot the $(1,1),(1,2)$, and $(2,1)$ entries, and indicate the value of the
$(2,2)$ entry via the color.
We finish with an example where the set $\mathcal{S}$, and in fact the
function $F:\mathcal{W}_{n}\rightarrow M_{2}(\mathbb{C})$, can be described
completely. Denote by $E_{ij}$ the matrix with a $1$ in the $(i,j)$ position,
and $0$’s elsewhere.
###### Remark 28.
Recall that in Remark 18 we identified a word with a non-decreasing step
function from $[0,1]$ to $[0,1]$. Here is an explicit description of this
correspondence. For the $i$’th $A$ in $w$, denote by $h_{i}(w)$ the number of
$B$’s which have appeared in $w$ before it. Equivalently, the $i$’th $A$
appears in position $i+h_{i}(w)$ in $w$. Then the function
$L_{w}:[0,1]\rightarrow[0,1]$ corresponding to $w$ takes the value $h_{i}(w)/n$
on the interval $\left(\frac{i-1}{n},\frac{i}{n}\right)$. See Figure 1.
###### Theorem 29.
Let $A=E_{12}$ and $B=E_{11}$ in $M_{2}(\mathbb{C})$. Using the notation just
above,
1. (a)
For $w\in\mathcal{W}_{n}$,
$F(w)=\begin{pmatrix}e&\frac{1}{n}\left(e^{h_{1}(w)/n}+\ldots+e^{h_{n}(w)/n}\right)\\\
0&1\end{pmatrix}.$
2. (b)
For a general non-decreasing function $L:[0,1]\rightarrow[0,1]$ as in Remark 18,
$F(L)=\begin{pmatrix}e&\int_{0}^{1}e^{L(x)}\,dx\\\ 0&1\end{pmatrix}.$
###### Proof.
It suffices to prove recursively that for any $1\leq j\leq 2n$,
$\prod_{i=1}^{j}e^{w[i]/n}=\begin{pmatrix}e^{w_{B}[j]/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[j]}e^{h_{i}(w)/n}\\\
0&1\end{pmatrix}.$
First consider $j=1$. If the first letter of $w$ is $A$,
$e^{A/n}=\begin{pmatrix}1&\frac{1}{n}\\\
0&1\end{pmatrix}=\begin{pmatrix}e^{w_{B}[1]/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[1]}e^{h_{i}(w)/n}\\\
0&1\end{pmatrix}$
since $h_{1}(w)=0$. If the first letter of $w$ is $B$,
$e^{B/n}=\begin{pmatrix}e^{1/n}&0\\\
0&1\end{pmatrix}=\begin{pmatrix}e^{w_{B}[1]/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[1]}e^{h_{i}(w)/n}\\\
0&1\end{pmatrix}$
since the latter sum is empty. Now recursively, if $w[j+1]=A$,
$\begin{split}&\begin{pmatrix}e^{w_{B}[j]/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[j]}e^{h_{i}(w)/n}\\\
0&1\end{pmatrix}\ \begin{pmatrix}1&\frac{1}{n}\\\ 0&1\end{pmatrix}\\\
&\qquad=\begin{pmatrix}e^{w_{B}[j]/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[j]}e^{h_{i}(w)/n}+\frac{1}{n}e^{w_{B}[j]/n}\\\
0&1\end{pmatrix}\\\
&\qquad=\begin{pmatrix}e^{w_{B}[j+1]/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[j+1]}e^{h_{i}(w)/n}\\\
0&1\end{pmatrix}\end{split}$
Indeed, since $w[j+1]$ is the $w_{A}[j+1]$’th $A$ in $w$, we have
$w_{B}[j+1]=w_{B}[j]=h_{i}(w)$ for $i=w_{A}[j+1]$. Similarly, if $w[j+1]=B$,
$\begin{split}&\begin{pmatrix}e^{w_{B}[j]/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[j]}e^{h_{i}(w)/n}\\\
0&1\end{pmatrix}\ \begin{pmatrix}e^{1/n}&0\\\ 0&1\end{pmatrix}\\\
&\qquad=\begin{pmatrix}e^{(w_{B}[j]+1)/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[j]}e^{h_{i}(w)/n}\\\
0&1\end{pmatrix}\\\
&\qquad=\begin{pmatrix}e^{w_{B}[j+1]/n}&\frac{1}{n}\sum_{i=1}^{w_{A}[j+1]}e^{h_{i}(w)/n}\\\
0&1\end{pmatrix}\end{split}$
since $w_{A}[j+1]=w_{A}[j]$.
Part (b) follows: the expression in part (a) is the Riemann sum for the
integral $\int_{0}^{1}e^{L(x)}\,dx$, and since the function $L$ is increasing,
it is Riemann integrable. ∎
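The closed form in part (a) is easy to confirm numerically; a minimal sketch (the word and the value of $n$ are arbitrary choices):

```python
import numpy as np
from scipy.linalg import expm

n, w = 5, "ABBABAABBA"                    # any word with n A's and n B's
A = np.array([[0.0, 1.0], [0.0, 0.0]])    # E_12
B = np.array([[1.0, 0.0], [0.0, 0.0]])    # E_11

P, h, b_count = np.eye(2), [], 0
for ch in w:
    if ch == "A":
        h.append(b_count)                 # h_i(w): number of B's before the i-th A
    else:
        b_count += 1
    P = P @ expm((A if ch == "A" else B) / n)

top_right = np.exp(np.array(h) / n).sum() / n
print(P)                                  # matches [[e, top_right], [0, 1]]
print(np.array([[np.e, top_right], [0.0, 1.0]]))
```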
###### Remark 30.
In the example above, $\mathcal{S}$ is a (one-dimensional) curve from
$e^{A}e^{B}$ to $e^{B}e^{A}$. There are several general situations where this
behavior also occurs. Denoting $[A,B]=AB-BA$ the commutator of $A$ and $B$,
these include
* •
Quasi-commuting matrices [1] for which the commutator is non-zero but commutes
with both $A$ and $B$,
* •
Matrices which satisfy $[A,B]=sB$,
* •
$A=\begin{pmatrix}x&1\\\ 0&x\end{pmatrix},\quad B=\begin{pmatrix}a&0\\\
0&b\end{pmatrix}$
for $a\neq b$.
## References
* [1] Neal H. McCoy, _On quasi-commutative matrices_ , Trans. Amer. Math. Soc. 36 (1934), no. 2, 327–340. MR 1501746
# Gradient estimates for a nonlinear diffusion equation on complete manifolds
Jia-Yong Wu Department of Mathematics, East China Normal University,
Shanghai, China 200241<EMAIL_ADDRESS>
(Date: January 1, 2009.)
###### Abstract.
Let $(M,g)$ be a complete non-compact Riemannian manifold with the
$m$-dimensional Bakry-Émery Ricci curvature bounded below by a non-positive
constant. In this paper, we give a localized Hamilton-type gradient estimate
for the positive smooth bounded solutions to the following nonlinear diffusion
equation
$u_{t}=\Delta u-\nabla\phi\cdot\nabla u-au\log u-bu,$
where $\phi$ is a $C^{2}$ function, and $a\neq 0$ and $b$ are two real
constants. This work generalizes the results of Souplet and Zhang (Bull.
London Math. Soc., 38 (2006), pp. 1045-1053) and Wu (Preprint, 2008).
###### Key words and phrases:
local gradient estimate; nonlinear diffusion equation; Bakry-Émery Ricci
curvature
###### 2000 Mathematics Subject Classification:
Primary 58J35; Secondary 58J35, 58J05. Chinese Library Classification:
O175.26; O186.12
## 1\. Introduction
Let $(M,g)$ be an $n$-dimensional non-compact Riemannian manifold with the
$m$-dimensional Bakry-Émery Ricci curvature bounded below. Consider the
following diffusion equation:
(1.1) $u_{t}=\Delta u-\nabla\phi\cdot\nabla u-au\log u-bu$
in $B(x_{0},R)\times[t_{0}-T,t_{0}]\subset M\times(-\infty,\infty)$, where
$\phi$ is a $C^{2}$ function, and $a\neq 0$ and $b$ are two real constants.
Eq. (1.1) is closely linked with the gradient Ricci solitons, which are the
self-similar solutions to the Ricci flow introduced by Hamilton [3]. Ricci
solitons have inspired the entropy and Harnack estimates, the space-time
formulation of the Ricci flow, and the reduced distance and reduced volume.
Below we recall the definition of Ricci solitons (see also Chapter 4 of [4]).
###### Definition 1.1.
A Riemannian manifold $(M,g)$ is called a _gradient Ricci soliton_ if there
exists a smooth function $f:M\rightarrow\mathbb{R}$, sometimes called
_potential function_ , such that for some constant $c\in\mathbb{R}$, it
satisfies
(1.2) $Ric(g)+\nabla^{g}\nabla^{g}f=cg$
on $M$, where $Ric(g)$ is the Ricci curvature of manifold $M$ and
$\nabla^{g}\nabla^{g}f$ is the Hessian of $f$. A soliton is said to be
_shrinking_ , _steady_ or _expanding_ if the constant $c$ is respectively
positive, zero or negative.
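For instance (a standard example, recalled here only for illustration), the flat metric on $\mathbb{R}^{n}$ with potential function $f(x)=\frac{c}{2}|x|^{2}$ satisfies (1.2): here $Ric(g)=0$ and $\nabla^{g}\nabla^{g}f=cg$, so
$Ric(g)+\nabla^{g}\nabla^{g}f=cg.$
This _Gaussian soliton_ is shrinking, steady or expanding according to whether $c>0$, $c=0$ or $c<0$.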
Suppose that $(M,g)$ is a gradient Ricci soliton, with $c$ and $f$ as described
in Definition 1.1. Letting $u=e^{f}$, under some curvature assumptions, we can
derive from (1.2) that (cf. [5], Eq. (7))
(1.3) $\Delta u+2cu\log u=(A_{0}-nc)u,$
for some constant $A_{0}$. Eq. (1.3) is a nonlinear elliptic equation and a
special case of Eq. (1.1). For this kind of equations, Ma (see Theorem 1 in
[5]) obtained the following result.
Theorem A ([5]). _Let $(M,g)$ be a complete non-compact Riemannian manifold of
dimension $n\geq 3$ with Ricci curvature bounded below by the constant
$-K:=-K(2R)$, where $R>0$ and $K(2R)\geq 0$, in the metric ball $B_{2R}(p)$.
Let $u$ be a positive smooth solution to the elliptic equation_
(1.4) $\Delta u-au\log u=0$
_with $a>0$. Let $f=\log u$ and let $(f,2f)$ denote the maximum of $f$ and
$2f$. Then there are two uniform positive constants $c_{1}$ and $c_{2}$ such
that_
(1.5) $\displaystyle|\nabla f|^{2}-a(f,2f)$ $\displaystyle\leq$
$\displaystyle\frac{n\Big{[}(n+2)c^{2}_{1}+(n-1)c^{2}_{1}(1+R\sqrt{K})+c_{2}\Big{]}}{R^{2}}+2n\Big{(}|a|+K\Big{)}$
_in $B_{R}(p)$._
Then Yang (see Theorem 1.1 in [6]) extended the above result and obtained the
following local gradient estimate for the nonlinear equation (1.1) with
$\phi\equiv c_{0}$, where $c_{0}$ is a fixed constant.
Theorem B ([6]). _Let $M$ be an $n$-dimensional complete non-compact
Riemannian manifold. Suppose the Ricci curvature of $M$ is bounded below by
$-K:=-K(2R)$, where $R>0$ and $K(2R)\geq 0$, in the metric ball $B_{2R}(p)$.
If $u$ is a positive smooth solution to Eq. (1.1) with $\phi\equiv c_{0}$ on
$M\times[0,\infty)$ and $f=\log u$, then for any $\alpha>1$ and $0<\delta<1$_
(1.6) $\displaystyle|\nabla f|^{2}(x,t)-\alpha af(x,t)-\alpha b-\alpha
f_{t}(x,t)$ $\displaystyle\leq$ $\displaystyle\frac{n\alpha^{2}}{2\delta
t}+\frac{n\alpha^{2}}{2\delta}\Bigg{\\{}\frac{2\epsilon^{2}}{R^{2}}+\frac{\nu}{R^{2}}+\sigma+\frac{\epsilon^{2}}{R^{2}}(n-1)\left(1+R\sqrt{K(2R)}\right)$
$\displaystyle+\frac{K(2R)}{\alpha-1}+\frac{n\alpha^{2}\epsilon^{2}}{8(1-\delta)(\alpha-1)R^{2}}\Bigg{\\}}$
_in $B_{R}(p)\times(0,\infty)$, where $\epsilon>0$ and $\nu>0$ are some
constants and where $\sigma=a/2$ if $a>0$; $\sigma=-a$ if $a<0$._
Recently, the author (see Theorem 1.1 in [2]) used Souplet-Zhang’s method in
[1] and obtained a localized Hamilton-type gradient estimate for the positive
smooth bounded solutions of the equation (1.1) with $\phi\equiv c_{0}$.
Theorem C ([2]). _Let $(M,g)$ be an $n$-dimensional non-compact Riemannian
manifold with $Ric(M)\geq-K$ for some constant $K\geq 0$. Suppose that
$u(x,t)$ is a positive smooth solution to the parabolic equation (1.1) with
$\phi\equiv c_{0}$ in $Q_{R,T}\equiv B(x_{0},R)\times[t_{0}-T,t_{0}]\subset
M\times(-\infty,\infty)$. Let $f:=\log u$. We also assume that there exist
non-negative constants $\alpha$ and $\delta$ such that $\alpha-f\geq\delta>0$.
Then there exist three constants $\tilde{c}$, $c(\delta)$ and
$c(\alpha,\delta)$, depending only on the dimension and the indicated
parameters, such that_
(1.7) $\frac{|\nabla
u|}{u}\leq\left(\frac{\tilde{c}}{R}\beta{+}\frac{c(\alpha,\delta)}{R}{+}\frac{c(\delta)}{\sqrt{T}}{+}c(\delta)\left(|a|+K\right)^{1/2}\kern-2.0pt{+}c(\delta)|a|^{1/2}\beta^{1/2}\right)\left(\alpha{-}\frac{b}{a}{-}\log
u\right)$
_in $Q_{R/2,T/2}$, where $\beta:=\max\left\\{1,|\alpha/\delta-1|\right\\}$._
The purpose of this paper is to extend Theorem C to the general nonlinear
diffusion equation (1.1) via the $m$-dimensional Bakry-Émery Ricci curvature.
Let us first recall some facts about the $m$-dimensional Bakry-Émery Ricci
curvature (please see [7, 8, 9, 10] for more details). Given an
$n$-dimensional Riemannian manifold $(M,g)$ and a $C^{2}$ function $\phi$, we
may define a symmetric diffusion operator $L:=\Delta-\nabla\phi\cdot\nabla,$
which is the infinitesimal generator of the Dirichlet form
$\mathcal{E}(f,g)=\int_{M}(\nabla f,\nabla g)\mathrm{d}\mu,\,\,\,\forall
f,g\in C_{0}^{\infty}(M),$
where $\mu$ is an invariant measure of $L$ given by
$\mathrm{d}\mu=e^{-\phi}\mathrm{d}x.$ It is well known that $L$ is self-adjoint
with respect to the weighted measure $\mathrm{d}\mu$.
The $\infty$-dimensional Bakry-Émery Ricci curvature $Ric(L)$ is defined by
$Ric(L):=Ric+Hess(\phi),$
where $Ric$ and $Hess$ denote the Ricci curvature of the metric $g$ and the
Hessian respectively. Following the notation used in [10], we also define the
$m$-dimensional Bakry-Émery Ricci curvature of $L$ on an $n$-dimensional
Riemannian manifold as follows
$Ric_{m,n}(L):=Ric(L)-\frac{\nabla\phi\otimes\nabla\phi}{m-n},$
where $m:=\mathrm{dim}_{BE}(L)$ is called the Bakry-Émery dimension of $L$.
Note that the number $m$ is not necessarily an integer, and that $m\geq
n=\mathrm{dim}\,M$.
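As a simple illustration (a standard example, sketched here under the definitions above), take $M=\mathbb{R}^{n}$ with the Euclidean metric and $\phi(x)=|x|^{2}/2$. Then $L=\Delta-x\cdot\nabla$ is the Ornstein-Uhlenbeck operator, $\mathrm{d}\mu=e^{-|x|^{2}/2}\mathrm{d}x$ is the (unnormalized) Gaussian measure, and
$Ric(L)=Hess(\phi)=g,\qquad Ric_{m,n}(L)=g-\frac{x\otimes x}{m-n}\quad(m>n).$
In particular $Ric(L)\geq g$ globally, whereas $Ric_{m,n}(L)$ is bounded below only on bounded sets; this illustrates the difference between the two notions of curvature lower bound.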
The main result of this paper can be stated as follows:
###### Theorem 1.2.
Let $(M,g)$ be an $n$-dimensional non-compact Riemannian manifold with
$Ric_{m,n}(L)\geq-K$ for some constant $K\geq 0$. Suppose that $u(x,t)$ is a
positive smooth solution to the diffusion equation (1.1) in $Q_{R,T}\equiv
B(x_{0},R)\times[t_{0}-T,t_{0}]\subset M\times(-\infty,\infty)$. Let $f:=\log
u$. We also assume that there exist non-negative constants $\alpha$ and
$\delta$ such that $\alpha-f\geq\delta>0$. Then there exist three constants
$\tilde{c}$, $c(\delta)$ and $c(\alpha,\delta,m)$, depending only on the
dimension and the indicated parameters, such that
(1.8) $\frac{|\nabla
u|}{u}\leq\left(\frac{\tilde{c}}{R}\beta{+}\frac{c(\alpha,\delta,m)}{R}{+}\frac{c(\delta)}{\sqrt{T}}{+}c(\delta)\left(|a|+K\right)^{1/2}\kern-3.0pt{+}c(\delta)|a|^{1/2}\beta^{1/2}\right)\left(\alpha{-}\frac{b}{a}{-}\log
u\right)$
in $Q_{R/2,T/2}$, where $\beta:=\max\left\\{1,|\alpha/\delta-1|\right\\}$.
We make some remarks on the above theorem below.
###### Remark 1.3.
(i). In Theorem 1.2, the assumption $\alpha-f\geq\delta>0$ is reasonable,
since it implies the upper bound $u\leq e^{\alpha-\delta}$, and such an upper
bound on $u$ does occur in natural settings. For example, by Corollary 1.2 in
[6], positive smooth solutions to the elliptic equation (1.4) with $a<0$
satisfy $u(x)\leq e^{n/2}$ for all $x\in M$ provided the Ricci curvature of
$M$ is non-negative.
(ii). Note that the theorem still holds if the $m$-dimensional Bakry-Émery
Ricci curvature is replaced by the $\infty$-dimensional Bakry-Émery Ricci
curvature; this follows from (2.10) in Section 2.
(iii). Theorem 1.2 generalizes the above-mentioned Theorem C: choosing
$\phi\equiv c_{0}$, we recover Theorem C. The proof of our main theorem is
based on Souplet-Zhang's gradient estimate and the trick used in [2], with
some modifications.
In particular, if $u(x,t)\leq 1$ is a positive smooth solution to the
diffusion equation (1.1) with $a<0$, then we have a simple estimate.
###### Corollary 1.4.
Let $(M,g)$ be an $n$-dimensional non-compact Riemannian manifold with
$Ric_{m,n}(L)\geq-K$ for some constant $K\geq 0$. Suppose that $u(x,t)\leq 1$
is a positive smooth solution to the diffusion equation (1.1) with $a<0$ in
$Q_{R,T}\equiv B(x_{0},R)\times[t_{0}-T,t_{0}]\subset
M\times(-\infty,\infty)$. Then there exist two constants $c$ (depending only
on $n$) and $c(m)$ (depending only on $n$ and $m$) such that
(1.9) $\frac{|\nabla
u|}{u}\leq\left(\frac{c(m)}{R}+\frac{c}{\sqrt{T}}+c\sqrt{K+|a|}\right)\left(1-\frac{b}{a}+\log{\frac{1}{u}}\right)$
in $Q_{R/2,T/2}$.
###### Remark 1.5.
We point out that our localized Hamilton-type gradient estimate can also be
regarded as a generalization of the result of Souplet-Zhang [1] for the heat
equation on complete manifolds. In fact, the above Corollary 1.4 is similar to
the result of Souplet-Zhang (see Theorem 1.1 of [1]). From the inequality
(4.4) below, we can conclude that if $\phi\equiv c_{0}$ and $a=0$, then our
result reduces to theirs.
The method of proving Theorem 1.2 is the gradient estimate, which originated
with Yau [11] (see also Cheng-Yau [12]) and was developed further by Li-Yau
[13], Li [14] and Negrin [15]. Later, R. S. Hamilton [16] gave an elliptic
type gradient estimate for the heat equation; however, his estimate is a
global result which requires the heat equation to be defined on a closed
manifold.
Recently, a localized Hamilton-type gradient estimate was proved by Souplet
and Zhang [1], which can be viewed as a combination of Li-Yau’s Harnack
inequality [13] and Hamilton’s gradient estimate [16]. In this paper, we
obtain a localized Hamilton-type gradient estimate for a general diffusion
equation (1.1) as Souplet and Zhang in [1] did for the heat equation on
complete manifolds. To prove Theorem 1.2, we mainly follow the arguments of
Souplet-Zhang in [1], together with some facts about Bakry-Émery Ricci
curvature. Note that the diffusion equation (1.1) is nonlinear, so our case is
a little more complicated than theirs.
The structure of this paper is as follows. In Section 2, we will give a basic
lemma to prepare for proving Theorem 1.2. Section 3 is devoted to the proof of
Theorem 1.2. In Section 4, we will prove Corollary 1.4 in the case $0<u\leq 1$
with $a<0$.
## 2. A basic lemma
In this section, we will prove the following lemma which is essential in the
derivation of the gradient estimate of the equation (1.1). Replacing $u$ by
$e^{-b/a}u$, we only need to consider positive smooth solutions of the
following diffusion equation:
(2.1) $u_{t}=\Delta u-\nabla\phi\cdot\nabla u-au\log u.$
Suppose that $u(x,t)$ is a positive smooth solution to the diffusion equation
(2.1) in $Q_{R,T}\equiv B(x_{0},R)\times[t_{0}-T,t_{0}]$. Define a smooth
function
$f(x,t):=\log u(x,t)$
in $Q_{R,T}$. By (2.1), we have
(2.2) $\left(L-\frac{\partial}{\partial t}\right)f+|\nabla f|^{2}-af=0.$
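For the reader's convenience, here is the short computation behind (2.2), using only the chain rule. Since $f=\log u$, we have $f_{t}=u_{t}/u$, $\nabla f=\nabla u/u$ and $\Delta f=\Delta u/u-|\nabla u|^{2}/u^{2}$, so that $\Delta u/u=\Delta f+|\nabla f|^{2}$. Dividing (2.1) by $u$ therefore gives
$f_{t}=\Delta f+|\nabla f|^{2}-\nabla\phi\cdot\nabla f-af=Lf+|\nabla f|^{2}-af,$
which is exactly (2.2).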
Then we have the following lemma, which is a generalization of the computation
carried out in [1, 2].
###### Lemma 2.1.
Let $(M,g)$ be an $n$-dimensional non-compact Riemannian manifold with
$Ric_{m,n}(L)\geq-K$ for some constant $K\geq 0$, and let $f(x,t)$ be a smooth
function defined on $Q_{R,T}$ satisfying the equation (2.2). We also
assume that there exist non-negative constants $\alpha$ and $\delta$ such that
$\alpha-f\geq\delta>0$. Then for all $(x,t)$ in $Q_{R,T}$ the function
(2.3) $\omega:=\left|\nabla\log(\alpha-f)\right|^{2}=\frac{|\nabla
f|^{2}}{(\alpha-f)^{2}}$
satisfies the following inequality
(2.4) $\displaystyle\left(L-\frac{\partial}{\partial t}\right)\omega$
$\displaystyle\geq$
$\displaystyle\frac{2(1-\alpha)+2f}{\alpha-f}\left\langle\nabla
f,\nabla\omega\right\rangle+2(\alpha-f)\omega^{2}+2(a-K)\omega+\frac{2af}{\alpha-f}\omega.$
###### Proof.
By (2.3), we have
(2.5)
$\displaystyle\omega_{j}=\frac{2f_{i}f_{ij}}{(\alpha-f)^{2}}+\frac{2f^{2}_{i}f_{j}}{(\alpha-f)^{3}},$
(2.6)
$\displaystyle\Delta\omega=\frac{2|f_{ij}|^{2}}{(\alpha-f)^{2}}+\frac{2f_{i}f_{ijj}}{(\alpha-f)^{2}}+\frac{8f_{i}f_{j}f_{ij}}{(\alpha-f)^{3}}+\frac{2f^{2}_{i}f_{jj}}{(\alpha-f)^{3}}+\frac{6f^{2}_{i}f^{2}_{j}}{(\alpha-f)^{4}}$
and
(2.7) $\displaystyle L\omega$ $\displaystyle=\Delta\omega-\phi_{j}\omega_{j}$
$\displaystyle=\frac{2|f_{ij}|^{2}}{(\alpha{-}f)^{2}}+\frac{2f_{i}f_{ijj}}{(\alpha{-}f)^{2}}+\frac{8f_{i}f_{j}f_{ij}}{(\alpha{-}f)^{3}}+\frac{2f^{2}_{i}f_{jj}}{(\alpha{-}f)^{3}}+\frac{6f^{4}_{i}}{(\alpha{-}f)^{4}}-\frac{2f_{ij}f_{i}\phi_{j}}{(\alpha{-}f)^{2}}-\frac{2f^{2}_{i}f_{j}\phi_{j}}{(\alpha{-}f)^{3}}$
$\displaystyle=\frac{2|f_{ij}|^{2}}{(\alpha{-}f)^{2}}+\frac{2f_{i}(Lf)_{i}}{(\alpha{-}f)^{2}}+\frac{2(R_{ij}+\phi_{ij})f_{i}f_{j}}{(\alpha{-}f)^{2}}+\frac{8f_{i}f_{j}f_{ij}}{(\alpha{-}f)^{3}}+\frac{2f^{2}_{i}\cdot
Lf}{(\alpha{-}f)^{3}}+\frac{6f^{4}_{i}}{(\alpha{-}f)^{4}},$
where $f_{i}:=\nabla_{i}f$ and $f_{ijj}:=\nabla_{j}\nabla_{j}\nabla_{i}f$,
etc. By (2.3) and (2.2), we also have
(2.8) $\displaystyle\omega_{t}$
$\displaystyle=\frac{2\nabla_{i}f\cdot\nabla_{i}\Big{[}Lf+|\nabla
f|^{2}-af\Big{]}}{(\alpha-f)^{2}}+\frac{2|\nabla f|^{2}\Big{[}Lf+|\nabla
f|^{2}-af\Big{]}}{(\alpha-f)^{3}}$ $\displaystyle=\frac{2\nabla f\nabla
Lf}{(\alpha-f)^{2}}+\frac{4f_{i}f_{j}f_{ij}}{(\alpha-f)^{2}}-\frac{2a|\nabla
f|^{2}}{(\alpha-f)^{2}}+\frac{2f^{2}_{i}Lf}{(\alpha-f)^{3}}+\frac{2|\nabla
f|^{4}}{(\alpha-f)^{3}}-\frac{2af|\nabla f|^{2}}{(\alpha-f)^{3}}.$
Combining (2.7) with (2.8), we can get
(2.9) $\displaystyle\left(L-\frac{\partial}{\partial t}\right)\omega$
$\displaystyle=\frac{2|f_{ij}|^{2}}{(\alpha-f)^{2}}+\frac{2(R_{ij}+\phi_{ij})f_{i}f_{j}}{(\alpha-f)^{2}}+\frac{8f_{i}f_{j}f_{ij}}{(\alpha-f)^{3}}+\frac{6f^{4}_{i}}{(\alpha-f)^{4}}$
$\displaystyle\,\,\,\,\,\,-\frac{4f_{i}f_{j}f_{ij}}{(\alpha-f)^{2}}-\frac{2f^{4}_{i}}{(\alpha-f)^{3}}+\frac{2af^{2}_{i}}{(\alpha-f)^{2}}+\frac{2aff^{2}_{i}}{(\alpha-f)^{3}}.$
Noting that $Ric_{m,n}(L)\geq-K$ for some constant $K\geq 0$, we have
(2.10) $(R_{ij}+\phi_{ij})f_{i}f_{j}\geq\frac{|\nabla\phi\cdot\nabla
f|^{2}}{m-n}-K|\nabla f|^{2}\geq-K|\nabla f|^{2}.$
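Indeed, (2.10) is just the curvature assumption contracted twice with $\nabla f$: the lower bound $Ric_{m,n}(L)\geq-K$ means that $R_{ij}+\phi_{ij}-\frac{\phi_{i}\phi_{j}}{m-n}\geq-Kg_{ij}$ as quadratic forms, and pairing with $f_{i}f_{j}$ gives the first inequality in (2.10); the second inequality holds because the discarded term $\frac{|\nabla\phi\cdot\nabla f|^{2}}{m-n}$ is non-negative (recall that $m\geq n$, with $m>n$ whenever $\phi$ is non-constant).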
By (2.5), we have
(2.11)
$\displaystyle\omega_{j}f_{j}=\frac{2f_{i}f_{j}f_{ij}}{(\alpha-f)^{2}}+\frac{2f^{2}_{i}f^{2}_{j}}{(\alpha-f)^{3}},$
and consequently,
(2.12) $\displaystyle
0=-2\omega_{j}f_{j}+\frac{4f_{i}f_{j}f_{ij}}{(\alpha-f)^{2}}+\frac{4f^{4}_{i}}{(\alpha-f)^{3}},$
(2.13) $\displaystyle
0=\frac{1}{\alpha-f}\left[2\omega_{j}f_{j}-\frac{4f^{4}_{i}}{(\alpha-f)^{3}}\right]-\frac{4f_{i}f_{j}f_{ij}}{(\alpha-f)^{3}}.$
Substituting (2.10) into (2.9) and then adding (2.9) with (2.12) and (2.13),
we can get
(2.14) $\displaystyle\left(L-\frac{\partial}{\partial t}\right)\omega$
$\displaystyle\geq\frac{2|f_{ij}|^{2}}{(\alpha-f)^{2}}-\frac{2K|\nabla
f|^{2}}{(\alpha-f)^{2}}+\frac{4f_{i}f_{j}f_{ij}}{(\alpha-f)^{3}}+\frac{2f^{4}_{i}}{(\alpha-f)^{4}}+\frac{2f^{4}_{i}}{(\alpha-f)^{3}}$
$\displaystyle\,\,\,\,\,\,+\frac{2(1-\alpha)+2f}{\alpha-f}f_{i}\omega_{i}+\frac{2af^{2}_{i}}{(\alpha-f)^{2}}+\frac{2aff^{2}_{i}}{(\alpha-f)^{3}}.$
Note that $\alpha-f\geq\delta>0$ implies
$\displaystyle\frac{2|f_{ij}|^{2}}{(\alpha-f)^{2}}+\frac{4f_{i}f_{j}f_{ij}}{(\alpha-f)^{3}}+\frac{2f^{4}_{i}}{(\alpha-f)^{4}}\geq
0.$
This, together with (2.14), yields the desired estimate (2.4).
∎
## 3. Proof of Theorem 1.2
In this section, we will use Lemma 2.1 and the localization technique of
Souplet-Zhang [1] to give the elliptic type gradient estimates on the positive
and bounded smooth solutions of the diffusion equation (1.1).
###### Proof.
First we recall the well-known cut-off function of Li-Yau [13] (see also [1]).
We caution the reader that the calculation below is not the same as that in
[13], due to the difference in the first-order term.
Let $\psi=\psi(x,t)$ be a smooth cut-off function supported in $Q_{R,T}$ and
satisfying the following properties (one concrete construction is sketched
after the list):
1. (1) $\psi=\psi(d(x,x_{0}),t)\equiv\psi(r,t)$; $\psi(x,t)=1$ in $Q_{R/2,T/2}$, $0\leq\psi\leq 1$;
2. (2) $\psi$ is decreasing as a radial function in the spatial variables;
3. (3) $\frac{|\partial_{r}\psi|}{\psi^{\epsilon}}\leq\frac{C_{\epsilon}}{R}$, $\frac{|\partial^{2}_{r}\psi|}{\psi^{\epsilon}}\leq\frac{C_{\epsilon}}{R^{2}}$, when $0<\epsilon<1$;
4. (4) $\frac{|\partial_{t}\psi|}{\psi^{1/2}}\leq\frac{C}{T}$.
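One standard construction satisfying (1)-(4), sketched here only for concreteness (see [13] and [1] for the details), is
$\psi(x,t)=\eta\left(\frac{d(x,x_{0})}{R}\right)\theta\left(\frac{t_{0}-t}{T}\right),$
where $\eta:[0,\infty)\rightarrow[0,1]$ is smooth and non-increasing with $\eta\equiv 1$ on $[0,1/2]$, $\eta\equiv 0$ on $[1,\infty)$ and $|\eta^{\prime}|+|\eta^{\prime\prime}|\leq C_{\epsilon}\eta^{\epsilon}$ for each $0<\epsilon<1$, and $\theta:[0,\infty)\rightarrow[0,1]$ is smooth with $\theta\equiv 1$ on $[0,1/2]$, $\theta\equiv 0$ on $[1,\infty)$ and $|\theta^{\prime}|\leq C\theta^{1/2}$ (take, for instance, $\theta=\tilde{\theta}^{2}$ for an ordinary smooth cut-off $\tilde{\theta}$).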
From Lemma 2.1, by a straightforward calculation, we have
(3.1) $\displaystyle L(\psi\omega)-\frac{2(1-\alpha)+2f}{\alpha-f}\nabla
f\cdot\nabla(\psi\omega)-2\frac{\nabla\psi}{\psi}\cdot\nabla(\psi\omega)-(\psi\omega)_{t}$
$\displaystyle\geq$ $\displaystyle
2\psi(\alpha-f)\omega^{2}-\left[\frac{2(1-\alpha)+2f}{\alpha-f}\nabla
f\cdot\nabla\psi\right]\omega-2\frac{|\nabla\psi|^{2}}{\psi}\omega$
$\displaystyle+(L\psi)\omega-\psi_{t}\omega+2(a-K)\psi\omega+2\frac{af}{\alpha-f}\psi\omega.$
Let $(x_{1},t_{1})$ be a point where $\psi\omega$ achieves its maximum. By
Li-Yau [13], without loss of generality we may assume that $x_{1}$ is not in
the cut-locus of $M$. Then at this point, we have
$\displaystyle L(\psi\omega)\leq 0,\,\,\,\,\,\,(\psi\omega)_{t}\geq
0,\,\,\,\,\,\,\nabla(\psi\omega)=0.$
Hence at $(x_{1},t_{1})$, by (3.1), we get
(3.2) $\displaystyle 2\psi(\alpha-f)\omega^{2}(x_{1},t_{1})$
$\displaystyle\leq\Bigg{\\{}\left[\frac{2(1-\alpha)+2f}{\alpha-f}\nabla
f\cdot\nabla\psi\right]\omega+2\frac{|\nabla\psi|^{2}}{\psi}\omega-(L\psi)\omega$
$\displaystyle\,\,\,\,\,\,\,\,\,+\psi_{t}\omega-2(a-K)\psi\omega-2\frac{af}{\alpha-f}\psi\omega\Bigg{\\}}(x_{1},t_{1}).$
In the following, we estimate each term on the right-hand side (RHS) of
(3.2). Following arguments similar to those of Souplet-Zhang ([1], pp.
1050-1051), for the first term of the RHS of (3.2) we have
(3.3) $\displaystyle\left[\frac{2f}{\alpha-f}\nabla
f\cdot\nabla\psi\right]\omega$ $\displaystyle\leq$ $\displaystyle
2|f|\cdot|\nabla\psi|\cdot\omega^{3/2}=2\left[\psi(\alpha-f)\omega^{2}\right]^{3/4}\cdot\frac{|f|\cdot|\nabla\psi|}{[\psi(\alpha-f)]^{3/4}}$
$\displaystyle\leq$
$\displaystyle\psi(\alpha-f)\omega^{2}+\tilde{c}\frac{(f|\nabla\psi|)^{4}}{[\psi(\alpha-f)]^{3}}\leq\psi(\alpha-f)\omega^{2}+\tilde{c}\frac{f^{4}}{R^{4}(\alpha-f)^{3}}$
and
(3.4) $\displaystyle\left[\frac{2(1-\alpha)}{\alpha-f}\nabla
f\cdot\nabla\psi\right]\omega$ $\displaystyle\leq$ $\displaystyle
2|1-\alpha||\nabla\psi|\omega^{3/2}=(\psi\omega^{2})^{3/4}\cdot\frac{2|1-\alpha||\nabla\psi|}{\psi^{3/4}}$
$\displaystyle\leq$
$\displaystyle\frac{\delta}{12}\psi\omega^{2}+c(\alpha,\delta)\left(\frac{|\nabla\psi|}{\psi^{3/4}}\right)^{4}\leq\frac{\delta}{12}\psi\omega^{2}+\frac{c(\alpha,\delta)}{R^{4}}.$
For the second term of the RHS of (3.2), we have
(3.5) $\displaystyle 2\frac{|\nabla\psi|^{2}}{\psi}\omega$
$\displaystyle=2\psi^{1/2}\omega\cdot\frac{|\nabla\psi|^{2}}{\psi^{3/2}}\leq\frac{\delta}{12}\psi\omega^{2}+c(\delta)\left(\frac{|\nabla\psi|^{2}}{\psi^{3/2}}\right)^{2}$
$\displaystyle\leq\frac{\delta}{12}\psi\omega^{2}+\frac{c(\delta)}{R^{4}}.$
For the third term of the RHS of (3.2), since $Ric_{m,n}(L)\geq-K$, by the
generalized Laplacian comparison theorem (see [9] or [10]),
$Lr\leq(m-1)\sqrt{K}\coth(\sqrt{K}r).$
Consequently, we have
(3.6) $\displaystyle-(L\psi)\omega$
$\displaystyle=-\left[(\partial_{r}\psi)Lr+(\partial^{2}_{r}\psi)\cdot|\nabla
r|^{2}\right]\omega$
$\displaystyle\leq-\left[\partial_{r}\psi(m-1)\sqrt{K}\coth(\sqrt{K}r)+\partial^{2}_{r}\psi\right]\omega$
$\displaystyle\leq-\left[\partial_{r}\psi(m-1)\left(\frac{1}{r}+\sqrt{K}\right)+\partial^{2}_{r}\psi\right]\omega$
$\displaystyle\leq\left[|\partial^{2}_{r}\psi|+2(m-1)\frac{|\partial_{r}\psi|}{R}+(m-1)\sqrt{K}|\partial_{r}\psi|\right]\omega$
$\displaystyle\leq\psi^{1/2}\omega\frac{|\partial^{2}_{r}\psi|}{\psi^{1/2}}+\psi^{1/2}\omega
2(m-1)\frac{|\partial_{r}\psi|}{R\psi^{1/2}}+\psi^{1/2}\omega(m-1)\frac{\sqrt{K}|\partial_{r}\psi|}{\psi^{1/2}}$
$\displaystyle\leq\frac{\delta}{12}\psi\omega^{2}+c(\delta,m)\left[\left(\frac{|\partial^{2}_{r}\psi|}{\psi^{1/2}}\right)^{2}+\left(\frac{|\partial_{r}\psi|}{R\psi^{1/2}}\right)^{2}+\left(\frac{\sqrt{K}|\partial_{r}\psi|}{\psi^{1/2}}\right)^{2}\right]$
$\displaystyle\leq\frac{\delta}{12}\psi\omega^{2}+\frac{c(\delta,m)}{R^{4}}+\frac{c(\delta,m)K}{R^{2}}.$
Now we estimate the fourth term:
(3.7) $\displaystyle|\psi_{t}|\omega$
$\displaystyle=\psi^{1/2}\omega\frac{|\psi_{t}|}{\psi^{1/2}}\leq\frac{\delta}{12}\left(\psi^{1/2}\omega\right)^{2}+c(\delta)\left(\frac{|\psi_{t}|}{\psi^{1/2}}\right)^{2}$
$\displaystyle\leq\frac{\delta}{12}\psi\omega^{2}+\frac{c(\delta)}{T^{2}}.$
Notice that we have used the following form of Young's inequality in obtaining
(3.3)-(3.7):
$ab\leq\frac{a^{p}}{p}+\frac{b^{q}}{q},\,\,\,\,\,\,\forall\,\,\,a,b\geq 0,\,\,\,p,q>1\,\,\,\mathrm{with}\,\,\,\frac{1}{p}+\frac{1}{q}=1.$
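As a worked instance (with the particular constant $12/\delta$ offered only as one admissible choice of $c(\delta)$), the weighted form $ab\leq\varepsilon a^{2}+\frac{b^{2}}{4\varepsilon}$ with $\varepsilon=\delta/12$, $a=\psi^{1/2}\omega$ and $b=2|\nabla\psi|^{2}/\psi^{3/2}$ gives
$2\frac{|\nabla\psi|^{2}}{\psi}\omega\leq\frac{\delta}{12}\psi\omega^{2}+\frac{12}{\delta}\left(\frac{|\nabla\psi|^{2}}{\psi^{3/2}}\right)^{2},$
which is precisely the first inequality in (3.5).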
Finally, we estimate the last two terms:
(3.8) $\displaystyle-2(a-K)\psi\omega\leq
2(|a|+K)\psi\omega\leq\frac{\delta}{12}\psi\omega^{2}+c(\delta)(|a|+K)^{2};$
and
(3.9) $-2\frac{af}{\alpha-f}\psi\omega\leq
2\frac{|a|\cdot|f|}{\alpha-f}\psi\omega\leq\frac{\delta}{12}\psi\omega^{2}+c(\delta)a^{2}\frac{f^{2}}{(\alpha-f)^{2}}.$
Substituting (3.3)-(3.9) into the RHS of (3.2) at $(x_{1},t_{1})$, we
get
(3.10) $\displaystyle 2\psi(\alpha-f)\omega^{2}$
$\displaystyle\leq\psi(\alpha-f)\omega^{2}+\frac{\tilde{c}f^{4}}{R^{4}(\alpha-f)^{3}}+\frac{\delta}{2}\psi\omega^{2}+\frac{c(\alpha,\delta)}{R^{4}}+\frac{c(\delta)}{R^{4}}+\frac{c(\delta,m)}{R^{4}}$
$\displaystyle\,\,\,\,\,\,+\frac{c(\delta,m)K}{R^{2}}+\frac{c(\delta)}{T^{2}}+c(\delta)(|a|+K)^{2}+c(\delta)a^{2}\frac{f^{2}}{(\alpha-f)^{2}}.$
Recalling that $\alpha-f\geq\delta>0$, we see that (3.10) implies
(3.11) $\displaystyle\psi\omega^{2}(x_{1},t_{1})$
$\displaystyle\leq\tilde{c}\frac{f^{4}}{R^{4}(\alpha-f)^{4}}+\frac{1}{2}\psi\omega^{2}(x_{1},t_{1})+\frac{c(\alpha,\delta)}{R^{4}}+\frac{c(\delta,m)}{R^{4}}+\frac{c(\delta,m)K}{R^{2}}$
$\displaystyle\,\,\,\,\,\,+\frac{c(\delta)}{T^{2}}+c(\delta)(|a|+K)^{2}+c(\delta)a^{2}\frac{f^{2}}{(\alpha-f)^{2}}.$
Furthermore, we need to estimate the RHS of (3.11). If $f\leq 0$ and
$\alpha\geq 0$, then we have
(3.12) $\displaystyle\frac{f^{4}}{(\alpha-f)^{4}}\leq
1,\,\,\,\,\,\,\,\,\,\,\,\,\frac{f^{2}}{(\alpha-f)^{2}}\leq 1;$
if $f>0$, by the assumption $\alpha-f\geq\delta>0$, we know that
(3.13)
$\displaystyle\frac{f^{4}}{(\alpha-f)^{4}}\leq\frac{(\alpha-\delta)^{4}}{\delta^{4}}=\left(\frac{\alpha}{\delta}-1\right)^{4},\,\,\,\,\,\,\,\,\,\,\,\,\frac{f^{2}}{(\alpha-f)^{2}}\leq\left(\frac{\alpha}{\delta}-1\right)^{2}.$
Plugging (3.12) (or (3.13)) into (3.11), we obtain
(3.14)
$\displaystyle(\psi\omega^{2})(x_{1},t_{1})\leq\frac{\tilde{c}\beta^{4}+c(\alpha,\delta,m)}{R^{4}}+\frac{c(\delta,m)K}{R^{2}}+\frac{c(\delta)}{T^{2}}+c(\delta)(|a|+K)^{2}+c(\delta)a^{2}\beta^{2},$
where $\beta:=\max\left\\{1,|\alpha/\delta-1|\right\\}$.
The above inequality implies, for all $(x,t)$ in $Q_{R,T}$
(3.15) $\displaystyle(\psi^{2}\omega^{2})(x,t)$
$\displaystyle\leq\psi^{2}(x_{1},t_{1})\omega^{2}(x_{1},t_{1})\leq\psi(x_{1},t_{1})\omega^{2}(x_{1},t_{1})$
$\displaystyle\leq\frac{\tilde{c}\beta^{4}+c(\alpha,\delta,m)}{R^{4}}+\frac{c(\delta,m)K}{R^{2}}+\frac{c(\delta)}{T^{2}}+c(\delta)(|a|+K)^{2}+c(\delta)a^{2}\beta^{2}.$
Note that $\psi(x,t)=1$ in $Q_{R/2,T/2}$ and $\omega={|\nabla
f|^{2}}/{(\alpha-f)^{2}}$. Therefore we have
(3.16) $\displaystyle\frac{|\nabla
f|}{\alpha-f}\leq\left(\frac{\tilde{c}\beta^{4}+c(\alpha,\delta,m)}{R^{4}}+\frac{c(\delta,m)K}{R^{2}}+\frac{c(\delta)}{T^{2}}+c(\delta)(|a|+K)^{2}+c(\delta)a^{2}\beta^{2}\right)^{1/4}.$
Since $f=\log u$, and absorbing the term $c(\delta,m)K/R^{2}$ in (3.16) via
$2K/R^{2}\leq K^{2}+R^{-4}$, we get the following estimate for Eq. (2.1)
(3.17) $\displaystyle\frac{|\nabla
u|}{u}\leq\left(\frac{\tilde{c}\beta^{4}+c(\alpha,\delta,m)}{R^{4}}+\frac{c(\delta)}{T^{2}}+c(\delta)(|a|+K)^{2}+c(\delta)a^{2}\beta^{2}\right)^{1/4}\Big{(}\alpha-\log
u\Big{)}.$
Replacing $u$ by $e^{b/a}u$ gives the desired estimate (1.8). This completes
the proof of Theorem 1.2. ∎
## 4. Proof of Corollary 1.4
###### Proof.
The proof is similar to that of Theorem 1.2; we again use a cut-off function
on a local neighborhood of the manifold. For $0<u\leq 1$, we let $f=\log u$,
so that $f\leq 0$. Set
$\omega:=|\nabla\log(1-f)|^{2}=\frac{|\nabla f|^{2}}{(1-f)^{2}}.$
By Lemma 2.1, we have
(4.1) $\displaystyle\left(L-\frac{\partial}{\partial
t}\right)\omega\geq\frac{2f}{1-f}\left\langle\nabla
f,\nabla\omega\right\rangle+2(1-f)\omega^{2}-2(|a|+K)\omega.$
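Indeed, (4.1) follows from (2.4) with $\alpha=1$ (so that $1-f\geq 1$ for $f\leq 0$, and one may take $\delta=1$): the coefficient $2(1-\alpha)$ vanishes, $2(a-K)\omega\geq-2(|a|+K)\omega$, and since $a<0$ and $f\leq 0$ the term $\frac{2af}{1-f}\omega$ is non-negative, so it can simply be dropped from the lower bound.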
We define a smooth cut-off function $\psi=\psi(x,t)$ in the same way as in
Section 3. Following all the steps of the last section (see also pp. 1050-1051
in [1]), we easily get the following inequality
(4.2) $\displaystyle 2(1-f)\psi\omega^{2}$
$\displaystyle\leq(1-f)\psi\omega^{2}+\frac{cf^{4}}{R^{4}(1-f)^{3}}+\frac{\psi\omega^{2}}{2}+\frac{c}{R^{4}}$
$\displaystyle\,\,\,\,\,\,+\frac{c(m)}{R^{4}}+\frac{c(m)K}{R^{2}}+\frac{c}{T^{2}}+c(|a|+K)^{2},$
where we used estimates similar to (3.3)-(3.9), with the difference that
these estimates do not contain the parameter $\delta$. Using the same method
as in the proof of Theorem 1.2, for all $(x,t)$ in $Q_{R/2,T/2}$ we get
(4.3) $\displaystyle\omega^{2}(x,t)$
$\displaystyle\leq\frac{c(m)}{R^{4}}+\frac{c(m)K}{R^{2}}+\frac{c}{T^{2}}+c(|a|+K)^{2}$
$\displaystyle\leq\frac{c(m)}{R^{4}}+\frac{c(m)}{R^{2}}(|a|+K)+\frac{c}{T^{2}}+c(|a|+K)^{2}$
$\displaystyle\leq\frac{c(m)}{R^{4}}+\frac{c}{T^{2}}+c(|a|+K)^{2}.$
Again, using the same argument as in the proof of Theorem 1.2 gives
(4.4) $\frac{|\nabla
f|}{1-f}\leq\frac{c(m)}{R}+\frac{c}{\sqrt{T}}+c\sqrt{K+|a|},$
where $c$ is a constant depending only on $n$, and $c(m)$ is a constant
depending only on $n$ and $m$.
Since $f=\log u$, we get
(4.5) $\frac{|\nabla
u|}{u}\leq\left(\frac{c(m)}{R}+\frac{c}{\sqrt{T}}+c\sqrt{K+|a|}\right)\cdot\left(1+\log{\frac{1}{u}}\right).$
Finally, replacing $u$ by $e^{b/a}u$ above yields (1.9). ∎
## Acknowledgment
The author would like to thank Professor Yu Zheng for his helpful suggestions
on this problem and for his encouragement. He would also like to thank the
referees for their useful suggestions. This work is partially supported by
NSFC Grant No. 10871069.
## References
* [1] Souplet P., Zhang Q S. Sharp gradient estimate and Yau’s Liouville theorem for the heat equation on noncompact manifolds. _Bull. London Math. Soc_ , 2006, 38: 1045-1053.
* [2] Wu J Y. Elliptic type gradient estimates for a nonlinear parabolic equation on complete manifolds. 2008, preprint.
* [3] Hamilton R S. Three-manifolds with positive Ricci curvature. _J Diff Geom_ , 1982, 17: 255-306.
* [4] Chow B, Lu P, Ni L. Hamilton’s Ricci flow, Lectures in Contemporary Mathematics 3, Science Press and American Mathematical Society, 2006.
* [5] Ma L. Gradient estimates for a simple elliptic equation on non-compact Riemannian manifolds. _J Funct Anal_ , 2006, 241: 374-382.
* [6] Yang Y Y. Gradient estimates for a nonlinear parabolic equation on Riemannian manifold. _Proc Amer Math Soc_ , 2008, 136: 4095-4102.
* [7] Bakry D. L'hypercontractivité et son utilisation en théorie des semigroupes, 1-114, Lect Notes in Math, vol. 1581, Springer-Verlag, Berlin/New York, 1994.
* [8] Bakry D, Emery M. Diffusions hypercontractives, in Séminaire de Probabilités XIX, 1983/84, 177-206, Lect Notes in Math 1123, Berlin: Springer, 1985.
* [9] Bakry D, Qian Z M. Volume comparison theorems without Jacobi fields. Current trends in potential theory, 115-122, Theta Ser Adv Math, 4, Theta, Bucharest, 2005.
* [10] Li X D. Liouville theorems for symmetric diffusion operators on complete Riemannian manifolds. _J Math Pure Appl_ , 2005, 84: 1295-1361.
* [11] Yau S T. Harmonic functions on complete Riemannian manifolds. _Comm Pure Appl Math_ , 1975, 28: 201-228.
* [12] Cheng S Y, Yau S T. Differential equations on Riemannian manifolds and their geometric applications. _Comm Pure Appl Math_ , 1975, 28: 333-354.
* [13] Li P, Yau S T. On the parabolic kernel of the Schrödinger operator. _Acta Math_ , 1986, 156: 153-201.
* [14] Li J Y. Gradient estimates and Harnack inequalities for nonlinear parabolic and nonlinear elliptic equations on Riemannian manifolds. _J Funct Anal_ , 1991, 100: 233-256.
* [15] Negrin E. Gradient estimates and a Liouville type theorem for the Schrödinger operator. _J Funct Anal_ , 1995, 127: 198-203.
* [16] Hamilton R S. A matrix Harnack estimate for the heat equation. _Comm Anal Geom_ , 1993, 1: 113-126.
* [254] R. H. Maudslay, H. Gonen, R. Cotterell, and S. Teufel, “It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution,” in _EMNLP-IJCNLP_ , 2019, pp. 5266–5274.
* [255] H. Thakur, A. Jain, P. Vaddamanu, P. P. Liang, and L. Morency, “Language models get a gender makeover: Mitigating gender bias with few-shot data interventions,” in _ACL_ , 2023, pp. 340–351.
* [256] C. N. dos Santos, I. Melnyk, and I. Padhi, “Fighting offensive language on social media with unsupervised text style transfer,” in _ACL_ , 2018, pp. 189–194.
* [257] L. Laugier, J. Pavlopoulos, J. Sorensen, and L. Dixon, “Civil rephrases of toxic texts with self-supervised transformers,” in _EACL_ , 2021, pp. 1442–1461.
* [258] V. Logacheva, D. Dementieva, S. Ustyantsev, D. Moskovskiy, D. Dale, I. Krotova, N. Semenov, and A. Panchenko, “Paradetox: Detoxification with parallel data,” in _ACL_ , 2022, pp. 6804–6818.
* [259] J. Zhao, Y. Zhou, Z. Li, W. Wang, and K. Chang, “Learning gender-neutral word embeddings,” in _EMNLP_ , 2018, pp. 4847–4853.
* [260] X. Peng, S. Li, S. Frazier, and M. O. Riedl, “Reducing non-normative text generation from language models,” in _INLG_ , 2020, pp. 374–383.
* [261] S. Dev, T. Li, J. M. Phillips, and V. Srikumar, “Oscar: Orthogonal subspace correction and rectification of biases in word embeddings,” in _EMNLP_ , 2021, pp. 5034–5050.
* [262] Z. Xie and T. Lukasiewicz, “An empirical analysis of parameter-efficient methods for debiasing pre-trained language models,” in _ACL_ , 2023, pp. 15 730–15 745.
* [263] X. He, S. Zannettou, Y. Shen, and Y. Zhang, “You only prompt once: On the capabilities of prompt learning on large language models to tackle toxic content,” _CoRR_ , vol. abs/2308.05596, 2023.
* [264] L. Ranaldi, E. S. Ruzzetti, D. Venditti, D. Onorati, and F. M. Zanzotto, “A trip towards fairness: Bias and de-biasing in large language models,” _CoRR_ , vol. abs/2305.13862, 2023.
* [265] A. Glaese, N. McAleese, M. Trebacz, J. Aslanides, V. Firoiu, T. Ewalds, M. Rauh, L. Weidinger, M. J. Chadwick, and P. T. et al., “Improving alignment of dialogue agents via targeted human judgements,” _CoRR_ , vol. abs/2209.14375, 2022.
* [266] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, and S. B. et al., “Llama 2: Open foundation and fine-tuned chat models,” _CoRR_ , vol. abs/2307.09288, 2023\.
* [267] A. Abbas, K. Tirumala, D. Simig, S. Ganguli, and A. S. Morcos, “Semdedup: Data-efficient learning at web-scale through semantic deduplication,” _CoRR_ , vol. abs/2303.09540, 2023.
* [268] Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu, X. Huang, E. Zhao, Y. Zhang, Y. Chen, L. Wang, A. T. Luu, W. Bi, F. Shi, and S. Shi, “Siren’s song in the AI ocean: A survey on hallucination in large language models,” _CoRR_ , vol. abs/2309.01219, 2023.
* [269] Z. Sun, S. Shen, S. Cao, H. Liu, C. Li, Y. Shen, C. Gan, L. Gui, Y. Wang, Y. Yang, K. Keutzer, and T. Darrell, “Aligning large multimodal models with factually augmented RLHF,” _CoRR_ , vol. abs/2309.14525, 2023.
* [270] T. Shen, R. Jin, Y. Huang, C. Liu, W. Dong, Z. Guo, X. Wu, Y. Liu, and D. Xiong, “Large language model alignment: A survey,” _CoRR_ , vol. abs/2309.15025, 2023.
* [271] K. Huang, H. P. Chan, and H. Ji, “Zero-shot faithful factual error correction,” in _ACL_ , 2023, pp. 5660–5676.
* [272] A. Chen, P. Pasupat, S. Singh, H. Lee, and K. Guu, “PURR: efficiently editing language model hallucinations by denoising language model corruptions,” _CoRR_ , vol. abs/2305.14908, 2023.
* [273] R. Zhao, X. Li, S. Joty, C. Qin, and L. Bing, “Verify-and-edit: A knowledge-enhanced chain-of-thought framework,” in _ACL_ , 2023, pp. 5823–5840.
* [274] W. Yu, Z. Zhang, Z. Liang, M. Jiang, and A. Sabharwal, “Improving language models via plug-and-play retrieval feedback,” _CoRR_ , vol. abs/2305.14002, 2023.
* [275] Z. Feng, X. Feng, D. Zhao, M. Yang, and B. Qin, “Retrieval-generation synergy augmented large language models,” _CoRR_ , vol. abs/2310.05149, 2023.
* [276] Z. Shao, Y. Gong, Y. Shen, M. Huang, N. Duan, and W. Chen, “Enhancing retrieval-augmented large language models with iterative retrieval-generation synergy,” in _Findings of EMNLP_ , 2023, pp. 9248–9274.
* [277] S. Ahn, H. Choi, T. Pärnamaa, and Y. Bengio, “A neural knowledge language model,” _CoRR_ , vol. abs/1608.00318, 2016.
* [278] R. L. L. IV, N. F. Liu, M. E. Peters, M. Gardner, and S. Singh, “Barack’s wife hillary: Using knowledge graphs for fact-aware language modeling,” in _ACL_ , 2019, pp. 5962–5971.
* [279] Y. Wen, Z. Wang, and J. Sun, “Mindmap: Knowledge graph prompting sparks graph of thoughts in large language models,” _CoRR_ , vol. abs/2308.09729, 2023\.
* [280] Z. Gou, Z. Shao, Y. Gong, Y. Shen, Y. Yang, N. Duan, and W. Chen, “CRITIC: large language models can self-correct with tool-interactive critiquing,” _CoRR_ , vol. abs/2305.11738, 2023.
* [281] N. Varshney, W. Yao, H. Zhang, J. Chen, and D. Yu, “A stitch in time saves nine: Detecting and mitigating hallucinations of llms by validating low-confidence generation,” _CoRR_ , vol. abs/2307.03987, 2023.
* [282] Y. Chuang, Y. Xie, H. Luo, Y. Kim, J. R. Glass, and P. He, “Dola: Decoding by contrasting layers improves factuality in large language models,” _CoRR_ , vol. abs/2309.03883, 2023.
* [283] K. Li, O. Patel, F. B. Viégas, H. Pfister, and M. Wattenberg, “Inference-time intervention: Eliciting truthful answers from a language model,” _CoRR_ , vol. abs/2306.03341, 2023.
* [284] X. L. Li, A. Holtzman, D. Fried, P. Liang, J. Eisner, T. Hashimoto, L. Zettlemoyer, and M. Lewis, “Contrastive decoding: Open-ended text generation as optimization,” in _ACL_ , 2023, pp. 12 286–12 312.
* [285] S. Willison, “Reducing sycophancy and improving honesty via activation steering,” https://www.alignmentforum.org/posts/zt6hRsDE84HeBKh7E/reducing-sycophancy-and-improving-honesty-via-activation, 2023\.
* [286] Y. Du, S. Li, A. Torralba, J. B. Tenenbaum, and I. Mordatch, “Improving factuality and reasoning in language models through multiagent debate,” _CoRR_ , vol. abs/2305.14325, 2023.
* [287] R. Cohen, M. Hamri, M. Geva, and A. Globerson, “LM vs LM: detecting factual errors via cross examination,” in _EMNLP_ , 2023, pp. 12 621–12 640.
* [288] N. Akhtar and A. S. Mian, “Threat of adversarial attacks on deep learning in computer vision: A survey,” _IEEE Access_ , vol. 6, pp. 14 410–14 430, 2018.
* [289] M. Jagielski, N. Carlini, D. Berthelot, A. Kurakin, and N. Papernot, “High-fidelity extraction of neural network models,” _CoRR_ , vol. abs/1909.01838, 2019.
* [290] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, “Stealing machine learning models via prediction apis,” in _USENIX Security_ , 2016, pp. 601–618.
* [291] T. Orekondy, B. Schiele, and M. Fritz, “Prediction poisoning: Towards defenses against DNN model stealing attacks,” in _ICLR_ , 2020.
* [292] I. M. Alabdulmohsin, X. Gao, and X. Zhang, “Adding robustness to support vector machines against adversarial reverse engineering,” in _CIKM_ , 2014, pp. 231–240.
* [293] V. Chandrasekaran, K. Chaudhuri, I. Giacomelli, S. Jha, and S. Yan, “Model extraction and active learning,” _CoRR_ , vol. abs/1811.02054, 2018.
* [294] T. Lee, B. Edwards, I. M. Molloy, and D. Su, “Defending against neural network model stealing attacks using deceptive perturbations,” in _S &P Workshop_, 2019, pp. 43–49.
* [295] M. Juuti, S. Szyller, S. Marchal, and N. Asokan, “PRADA: protecting against DNN model stealing attacks,” in _EuroS &P_, 2019, pp. 512–527.
* [296] H. Jia, C. A. Choquette-Choo, V. Chandrasekaran, and N. Papernot, “Entangled watermarks as a defense against model extraction,” in _USENIX Security_ , 2021, pp. 1937–1954.
* [297] A. B. Kahng, J. C. Lach, W. H. Mangione-Smith, S. Mantik, I. L. Markov, M. Potkonjak, P. Tucker, H. Wang, and G. Wolfe, “Watermarking techniques for intellectual property protection,” in _DAC_ , 1998, pp. 776–781.
* [298] M. Abadi, A. Chu, I. J. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in _SIGSAC_ , 2016, pp. 308–318.
* [299] C. Dwork, “Differential privacy: A survey of results,” in _TAMC_ , 2008, pp. 1–19.
* [300] D. Chen, N. Yu, and M. Fritz, “Relaxloss: Defending membership inference attacks without losing utility,” in _ICLR_ , 2022.
* [301] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger, “On calibration of modern neural networks,” in _ICML_ , 2017, pp. 1321–1330.
* [302] G. Pereyra, G. Tucker, J. Chorowski, L. Kaiser, and G. E. Hinton, “Regularizing neural networks by penalizing confident output distributions,” in _ICLR workshop_ , 2017.
* [303] M. Nasr, R. Shokri, and A. Houmansadr, “Machine learning with membership privacy using adversarial regularization,” in _CCS_ , 2018, pp. 634–646.
* [304] J. Jia and N. Z. Gong, “Attriguard: A practical defense against attribute inference attacks via adversarial machine learning,” in _USENIX Security_ , 2018, pp. 513–529.
* [305] S. Awan, B. Luo, and F. Li, “CONTRA: defending against poisoning attacks in federated learning,” in _ESORICS_ , 2021, pp. 455–475.
* [306] F. Qi, M. Li, Y. Chen, Z. Zhang, Z. Liu, Y. Wang, and M. Sun, “Hidden killer: Invisible textual backdoor attacks with syntactic trigger,” in _ACL/IJCNLP_ , 2021, pp. 443–453.
* [307] W. Yang, Y. Lin, P. Li, J. Zhou, and X. Sun, “Rethinking stealthiness of backdoor attack against NLP models,” in _ACL/IJCNLP_ , 2021, pp. 5543–5557.
* [308] B. Wang, Y. Yao, S. Shan, H. Li, B. Viswanath, H. Zheng, and B. Y. Zhao, “Neural cleanse: Identifying and mitigating backdoor attacks in neural networks,” in _S &P_, 2019, pp. 707–723.
* [309] Y. Liu, W. Lee, G. Tao, S. Ma, Y. Aafer, and X. Zhang, “ABS: scanning neural networks for back-doors by artificial brain stimulation,” in _CCS_ , 2019, pp. 1265–1282.
* [310] J. Lu, T. Issaranon, and D. A. Forsyth, “Safetynet: Detecting and rejecting adversarial examples robustly,” in _ICCV_ , 2017, pp. 446–454.
* [311] J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff, “On detecting adversarial perturbations,” in _ICLR_ , 2017, p. 105978.
* [312] S. Gu and L. Rigazio, “Towards deep neural network architectures robust to adversarial examples,” in _ICLR workshop_ , 2015.
* [313] D. Meng and H. Chen, “Magnet: A two-pronged defense against adversarial examples,” in _Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, CCS 2017, Dallas, TX, USA, October 30 \- November 03, 2017_ , 2017, pp. 135–147.
* [314] G. Katz, C. W. Barrett, D. L. Dill, K. Julian, and M. J. Kochenderfer, “Reluplex: An efficient SMT solver for verifying deep neural networks,” in _CAV_ , 2017, pp. 97–117.
* [315] D. Gopinath, G. Katz, C. S. Pasareanu, and C. W. Barrett, “Deepsafe: A data-driven approach for checking adversarial robustness in neural networks,” _CoRR_ , vol. abs/1710.00486, 2017.
* [316] N. Papernot, P. D. McDaniel, X. Wu, S. Jha, and A. Swami, “Distillation as a defense to adversarial perturbations against deep neural networks,” in _S &P_, 2016, pp. 582–597.
* [317] G. E. Hinton, O. Vinyals, and J. Dean, “Distilling the knowledge in a neural network,” _CoRR_ , vol. abs/1503.02531, 2015.
* [318] R. Huang, B. Xu, D. Schuurmans, and C. Szepesvári, “Learning with a strong adversary,” _CoRR_ , vol. abs/1511.03034, 2015.
* [319] OWASP, “Owasp top 10 for llm applications,” https://owasp.org/www-project-top-10-for-large-language-model-applications/assets/PDF/OWASP-Top-10-for-LLMs-2023-v1_0_1.pdf, 2023\.
* [320] E. Göktas, E. Athanasopoulos, H. Bos, and G. Portokalidis, “Out of control: Overcoming control-flow integrity,” in _SP_ , 2014, pp. 575–589.
* [321] N. Carlini, A. Barresi, M. Payer, D. A. Wagner, and T. R. Gross, “Control-flow bending: On the effectiveness of control-flow integrity,” in _USENIX Security_ , 2015, pp. 161–176.
* [322] C. Zhang, T. Wei, Z. Chen, L. Duan, L. Szekeres, S. McCamant, D. Song, and W. Zou, “Practical control flow integrity and randomization for binary executables,” in _SP_ , 2013, pp. 559–573.
* [323] R. T. Gollapudi, G. Yuksek, D. Demicco, M. Cole, G. Kothari, R. Kulkarni, X. Zhang, K. Ghose, A. Prakash, and Z. Umrigar, “Control flow and pointer integrity enforcement in a secure tagged architecture,” in _SP_ , 2023, pp. 2974–2989.
* [324] W. U. Hassan, M. Lemay, N. Aguse, A. Bates, and T. Moyer, “Towards scalable cluster auditing through grammatical inference over provenance graphs,” in _NDSS_ , 2018.
* [325] X. Han, T. F. J. Pasquier, A. Bates, J. Mickens, and M. I. Seltzer, “Unicorn: Runtime provenance-based detector for advanced persistent threats,” in _NDSS_ , 2020.
* [326] Q. Wang, W. U. Hassan, D. Li, K. Jee, X. Yu, K. Zou, J. Rhee, Z. Chen, W. Cheng, C. A. Gunter, and H. Chen, “You are what you do: Hunting stealthy malware via data provenance analysis,” in _NDSS_ , 2020.
* [327] L. Yu, S. Ma, Z. Zhang, G. Tao, X. Zhang, D. Xu, V. E. Urias, H. W. Lin, G. F. Ciocarlie, V. Yegneswaran, and A. Gehani, “Alchemist: Fusing application and audit logs for precise attack provenance without instrumentation,” in _NDSS_ , 2021.
* [328] H. Ding, J. Zhai, D. Deng, and S. Ma, “The case for learned provenance graph storage systems,” in _USENIX Security_ , 2023.
* [329] F. Yang, J. Xu, C. Xiong, Z. Li, and K. Zhang, “PROGRAPHER: an anomaly detection system based on provenance graph embedding,” in _USENIX Security_ , 2023.
* [330] A. Tabiban, H. Zhao, Y. Jarraya, M. Pourzandi, M. Zhang, and L. Wang, “Provtalk: Towards interpretable multi-level provenance analysis in networking functions virtualization (NFV),” in _NDSS_ , 2022.
* [331] A. Bates, D. Tian, K. R. B. Butler, and T. Moyer, “Trustworthy whole-system provenance for the linux kernel,” in _USENIX Security_ , 2015, pp. 319–334.
* [332] S. M. Milajerdi, R. Gjomemo, B. Eshete, R. Sekar, and V. N. Venkatakrishnan, “HOLMES: real-time APT detection through correlation of suspicious information flows,” in _SP_ , 2019, pp. 1137–1152.
* [333] A. Alsaheel, Y. Nan, S. Ma, L. Yu, G. Walkup, Z. B. Celik, X. Zhang, and D. Xu, “ATLAS: A sequence-based learning approach for attack investigation,” in _USENIX Security_ , 2021, pp. 3005–3022.
* [334] L. Yu, S. Ma, Z. Zhang, G. Tao, X. Zhang, D. Xu, V. E. Urias, H. W. Lin, G. F. Ciocarlie, V. Yegneswaran, and A. Gehani, “Alchemist: Fusing application and audit logs for precise attack provenance without instrumentation,” in _NDSS_ , 2021.
* [335] X. Han, T. F. J. Pasquier, A. Bates, J. Mickens, and M. I. Seltzer, “Unicorn: Runtime provenance-based detector for advanced persistent threats,” in _NDSS_ , 2020.
* [336] K. Mukherjee, J. Wiedemeier, T. Wang, J. Wei, F. Chen, M. Kim, M. Kantarcioglu, and K. Jee, “Evading provenance-based ML detectors with adversarial system actions,” in _USENIX Security_ , 2023, pp. 1199–1216.
* [337] Q. Wang, W. U. Hassan, D. Li, K. Jee, X. Yu, K. Zou, J. Rhee, Z. Chen, W. Cheng, C. A. Gunter, and H. Chen, “You are what you do: Hunting stealthy malware via data provenance analysis,” in _NDSS_ , 2020.
* [338] M. A. Inam, Y. Chen, A. Goyal, J. Liu, J. Mink, N. Michael, S. Gaur, A. Bates, and W. U. Hassan, “Sok: History is a vast early warning system: Auditing the provenance of system intrusions,” in _SP_ , 2023, pp. 2620–2638.
* [339] C. Fu, Q. Li, M. Shen, and K. Xu, “Realtime robust malicious traffic detection via frequency domain analysis,” in _CCS_ , 2021, pp. 3431–3446.
* [340] D. Barradas, N. Santos, L. Rodrigues, S. Signorello, F. M. V. Ramos, and A. Madeira, “Flowlens: Enabling efficient flow classification for ml-based network security applications,” in _NDSS_ , 2021.
* [341] G. Zhou, Z. Liu, C. Fu, Q. Li, and K. Xu, “An efficient design of intelligent network data plane,” in _USENIX Security_ , 2023.
* [342] S. Panda _et al._ , “Smartwatch: accurate traffic analysis and flow-state tracking for intrusion prevention using smartnics,” in _CoNEXT_ , 2021, pp. 60–75.
* [343] G. Siracusano _et al._ , “Re-architecting traffic analysis with neural network interface cards,” in _NSDI_ , 2022, pp. 513–533.
* [344] Y. Mirsky, T. Doitshman, Y. Elovici, and A. Shabtai, “Kitsune: An ensemble of autoencoders for online network intrusion detection,” in _NDSS_ , 2018.
* [345] J. Holland, P. Schmitt, N. Feamster, and P. Mittal, “New directions in automated traffic analysis,” in _CCS_ , 2021, pp. 3366–3383.
* [346] C. Fu, Q. Li, and K. Xu, “Detecting unknown encrypted malicious traffic in real time via flow interaction graph analysis,” in _NDSS_ , 2023.
* [347] M. Tran _et al._ , “On the feasibility of rerouting-based ddos defenses,” in _SP_ , 2019, pp. 1169–1184.
* [348] D. Wagner _et al._ , “United we stand: Collaborative detection and mitigation of amplification ddos attacks at scale,” in _CCS_ , 2021, pp. 970–987.
* [349] M. Wichtlhuber _et al._ , “IXP scrubber: learning from blackholing traffic for ml-driven ddos detection at scale,” in _SIGCOMM_ , 2022, pp. 707–722.
* [350] VirusTotal, “Virustotal,” https://www.virustotal.com/gui/home/upload, 2023\.
* [351] S. Thirumuruganathan, M. Nabeel, E. Choo, I. Khalil, and T. Yu, “Siraj: a unified framework for aggregation of malicious entity detectors,” in _SP_ , 2022, pp. 507–521.
* [352] T. Scholte, W. Robertson, D. Balzarotti, and E. Kirda, “Preventing input validation vulnerabilities in web applications through automated type analysis,” in _CSA_ , 2012, pp. 233–243.
* [353] A. Blankstein and M. J. Freedman, “Automating isolation and least privilege in web services,” in _SP_ , 2014, pp. 133–148.
* [354] D. Sánchez, M. Batet, and A. Viejo, “Automatic general-purpose sanitization of textual documents,” _IEEE Transactions on Information Forensics and Security_ , vol. 8, no. 6, pp. 853–862, 2013.
* [355] Y. Guo, J. Liu, W. Tang, and C. Huang, “Exsense: Extract sensitive information from unstructured data,” _Computers & Security_, vol. 102, p. 102156, 2021\.
* [356] F. Hassan, D. Sánchez, J. Soria-Comas, and J. Domingo-Ferrer, “Automatic anonymization of textual documents: detecting sensitive information via word embeddings,” in _TrustCom/BigDataSE_ , 2019, pp. 358–365.
* [357] W. G. D. Note, “Ethical principles for web machine learning,” https://www.w3.org/TR/webmachinelearning-ethics, 2023.
* [358] G. AI, “Guardrails ai,” https://www.guardrailsai.com/docs/, 2023.
* [359] Laiyer.ai, “Llm guard - the security toolkit for llm interactions,” https://github.com/laiyer-ai/llm-guard/, 2023.
* [360] Azure, “Content filtering,” https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/content-filter?tabs=warning%2Cpython, 2023\.
* [361] K. Gémes and G. Recski, “Tuw-inf at germeval2021: Rule-based and hybrid methods for detecting toxic, engaging, and fact-claiming comments,” in _GermEval KONVENS_ , 2021, pp. 69–75.
* [362] K. Gémes, Á. Kovács, and G. Recski, “Offensive text detection across languages and datasets using rule-based and hybrid methods,” in _CIKM workshop_ , 2022.
* [363] P. Nakov, V. Nayak, K. Dent, A. Bhatawdekar, S. M. Sarwar, M. Hardalov, Y. Dinkov, D. Zlatkova, G. Bouchard, and I. Augenstein, “Detecting abusive language on online platforms: A critical analysis,” _CoRR_ , vol. abs/2103.00153, 2021.
* [364] F. Alam, S. Cresci, T. Chakraborty, F. Silvestri, D. Dimitrov, G. D. S. Martino, S. Shaar, H. Firooz, and P. Nakov, “A survey on multimodal disinformation detection,” _CoRR_ , vol. abs/2103.12541, 2021.
* [365] P. Nakov, H. T. Sencar, J. An, and H. Kwak, “A survey on predicting the factuality and the bias of news media,” _CoRR_ , vol. abs/2103.12506, 2021\.
* [366] T. Hartvigsen, S. Gabriel, H. Palangi, M. Sap, D. Ray, and E. Kamar, “Toxigen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection,” _CoRR_ , vol. abs/2203.09509, 2022.
* [367] A. V. Lilian Weng, Vik Goel, “Using gpt-4 for content moderation,” https://searchengineland.com/openai-ai-classifier-no-longer-available-429912/, 2023\.
* [368] M. AI, “Llama 2 responsible use guide,” https://ai.meta.com/llama/responsible-use-guide/, 2023.
* [369] J. Chen, G. Kim, A. Sriram, G. Durrett, and E. Choi, “Complex claim verification with evidence retrieved in the wild,” _CoRR_ , vol. abs/2305.11859, 2023.
* [370] B. A. Galitsky, “Truth-o-meter: Collaborating with llm in fighting its hallucinations,” 2023.
* [371] S. Min, K. Krishna, X. Lyu, M. Lewis, W.-t. Yih, P. W. Koh, M. Iyyer, L. Zettlemoyer, and H. Hajishirzi, “Factscore: Fine-grained atomic evaluation of factual precision in long form text generation,” _CoRR_ , vol. abs/2305.14251, 2023.
* [372] F. Nan, R. Nallapati, Z. Wang, C. N. d. Santos, H. Zhu, D. Zhang, K. McKeown, and B. Xiang, “Entity-level factual consistency of abstractive text summarization,” _CoRR_ , vol. abs/2102.09130, 2021.
* [373] J. Maynez, S. Narayan, B. Bohnet, and R. McDonald, “On faithfulness and factuality in abstractive summarization,” _CoRR_ , vol. abs/2005.00661, 2020\.
* [374] A. Agrawal, L. Mackey, and A. T. Kalai, “Do language models know when they’re hallucinating references?” _CoRR_ , vol. abs/2305.18248, 2023.
* [375] R. Cohen, M. Hamri, M. Geva, and A. Globerson, “Lm vs lm: Detecting factual errors via cross examination,” _CoRR_ , vol. abs/2305.13281, 2023.
* [376] T. Scialom, P.-A. Dray, P. Gallinari, S. Lamprier, B. Piwowarski, J. Staiano, and A. Wang, “Questeval: Summarization asks for fact-based evaluation,” _CoRR_ , vol. abs/2103.12693, 2021.
* [377] O. Honovich, L. Choshen, R. Aharoni, E. Neeman, I. Szpektor, and O. Abend, “$q^{2}$: Evaluating factual consistency in knowledge-grounded dialogues via question generation and question answering,” _CoRR_ , vol. abs/2104.08202, 2021.
* [378] A. R. Fabbri, C.-S. Wu, W. Liu, and C. Xiong, “Qafacteval: Improved qa-based factual consistency evaluation for summarization,” _CoRR_ , vol. abs/2112.08542, 2021.
* [379] Z. Guo, M. Schlichtkrull, and A. Vlachos, “A survey on automated fact-checking,” _Transactions of the Association for Computational Linguistics_ , vol. 10, pp. 178–206, 2022.
* [380] R. Zhao, X. Li, S. Joty, C. Qin, and L. Bing, “Verify-and-edit: A knowledge-enhanced chain-of-thought framework,” _CoRR_ , vol. abs/2305.03268, 2023.
* [381] L. Gao, Z. Dai, P. Pasupat, A. Chen, A. T. Chaganty, Y. Fan, V. Zhao, N. Lao, H. Lee, D.-C. Juan _et al._ , “Rarr: Researching and revising what language models say, using language models,” in _ACL_ , 2023, pp. 16 477–16 508.
* [382] Z. Gou, Z. Shao, Y. Gong, Y. Shen, Y. Yang, N. Duan, and W. Chen, “Critic: Large language models can self-correct with tool-interactive critiquing,” _CoRR_ , vol. abs/2305.11738, 2023.
* [383] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou, “Self-consistency improves chain of thought reasoning in language models,” _CoRR_ , vol. abs/2203.11171, 2022.
* [384] R. Tang, Y.-N. Chuang, and X. Hu, “The science of detecting llm-generated texts,” _CoRR_ , vol. abs/2303.07205, 2023.
* [385] J. Kirchenbauer, J. Geiping, Y. Wen, J. Katz, I. Miers, and T. Goldstein, “A watermark for large language models,” _CoRR_ , vol. abs/2301.10226, 2023\.
* [386] J. Fang, Z. Tan, and X. Shi, “Cosywa: Enhancing semantic integrity in watermarking natural language generation,” in _NLPCC_ , 2023, pp. 708–720.
* [387] M. J. Atallah, V. Raskin, M. Crogan, C. Hempelmann, F. Kerschbaum, D. Mohamed, and S. Naik, “Natural language watermarking: Design, analysis, and a proof-of-concept implementation,” in _Information Hiding_ , 2001, pp. 185–200.
* [388] Z. Jalil and A. M. Mirza, “A review of digital watermarking techniques for text documents,” in _ICIMT_ , 2009, pp. 230–234.
* [389] U. Topkara, M. Topkara, and M. J. Atallah, “The hiding virtues of ambiguity: quantifiably resilient watermarking of natural language text through synonym substitutions,” in _MM &Sec_, 2006, pp. 164–174.
* [390] J. T. Brassil, S. Low, N. F. Maxemchuk, and L. O’Gorman, “Electronic marking and identification techniques to discourage document copying,” _IEEE Journal on Selected Areas in Communications_ , vol. 13, no. 8, pp. 1495–1504, 1995\.
* [391] S. Abdelnabi and M. Fritz, “Adversarial watermarking transformer: Towards tracing text provenance with data hiding,” in _S &P_, 2021, pp. 121–140.
* [392] V. S. Sadasivan, A. Kumar, S. Balasubramanian, W. Wang, and S. Feizi, “Can ai-generated text be reliably detected?” _CoRR_ , vol. abs/2303.11156, 2023\.
* [393] G. Li, Y. Chen, J. Zhang, J. Li, S. Guo, and T. Zhang, “Warfare:breaking the watermark protection of ai-generated content,” _CoRR_ , vol. abs/2310.07726, 2023.
* [394] B. Huang, B. Zhu, H. Zhu, J. D. Lee, J. Jiao, and M. I. Jordan, “Towards optimal statistical watermarking,” _CoRR_ , vol. abs/2312.07930, 2023.
* [395] C. Chen, Y. Li, Z. Wu, M. Xu, R. Wang, and Z. Zheng, “Towards reliable utilization of AIGC: blockchain-empowered ownership verification mechanism,” _IEEE Open J. Comput. Soc._ , vol. 4, pp. 326–337, 2023.
* [396] A. Chakraborty, M. Alam, V. Dey, A. Chattopadhyay, and D. Mukhopadhyay, “A survey on adversarial attacks and defences,” _CAAI Trans. Intell. Technol._ , vol. 6, no. 1, pp. 25–45, 2021.
* [397] K. Zhu, J. Wang, J. Zhou, Z. Wang, H. Chen, Y. Wang, L. Yang, W. Ye, N. Z. Gong, Y. Zhang _et al._ , “Promptbench: Towards evaluating the robustness of large language models on adversarial prompts,” _CoRR_ , vol. abs/2306.04528, 2023.
* [398] B. Wang, C. Xu, S. Wang, Z. Gan, Y. Cheng, J. Gao, A. H. Awadallah, and B. Li, “Adversarial glue: A multi-task benchmark for robustness evaluation of language models,” _CoRR_ , vol. abs/2111.02840, 2021.
* [399] Y. Nie, A. Williams, E. Dinan, M. Bansal, J. Weston, and D. Kiela, “Adversarial nli: A new benchmark for natural language understanding,” _CoRR_ , vol. abs/1910.14599, 2019.
* [400] L. Yang, S. Zhang, L. Qin, Y. Li, Y. Wang, H. Liu, J. Wang, X. Xie, and Y. Zhang, “Glue-x: Evaluating natural language understanding models from an out-of-distribution generalization perspective,” _CoRR_ , vol. abs/2211.08073, 2022.
* [401] L. Yuan, Y. Chen, G. Cui, H. Gao, F. Zou, X. Cheng, H. Ji, Z. Liu, and M. Sun, “Revisiting out-of-distribution robustness in nlp: Benchmark, analysis, and llms evaluations,” _CoRR_ , vol. abs/2306.04618, 2023.
* [402] N. Vaghani and M. Thummar, “Flipkart product reviews with sentiment dataset,” https://www.kaggle.com/dsv/4940809, 2023.
* [403] T. Liu, Y. Zhang, C. Brockett, Y. Mao, Z. Sui, W. Chen, and B. Dolan, “A token-level reference-free hallucination detection benchmark for free-form text generation,” _CoRR_ , vol. abs/2104.08704, 2021.
* [404] P. Manakul, A. Liusie, and M. J. F. Gales, “Selfcheckgpt: Zero-resource black-box hallucination detection for generative large language models,” in _EMNLP_ , H. Bouamor, J. Pino, and K. Bali, Eds., 2023, pp. 9004–9017.
* [405] L. K. Umapathi, A. Pal, and M. Sankarasubbu, “Med-halt: Medical domain hallucination test for large language models,” _CoRR_ , vol. abs/2307.15343, 2023.
* [406] J. Li, X. Cheng, W. X. Zhao, J.-Y. Nie, and J.-R. Wen, “Halueval: A large-scale hallucination evaluation benchmark for large language models,” in _EMNLP_ , 2023, pp. 6449–6464.
* [407] J. Luo, C. Xiao, and F. Ma, “Zero-resource hallucination prevention for large language models,” _CoRR_ , vol. abs/2309.02654, 2023.
* [408] S. Casper, J. Lin, J. Kwon, G. Culp, and D. Hadfield-Menell, “Explore, establish, exploit: Red teaming language models from scratch,” _CoRR_ , vol. abs/2306.09442, 2023.
* [409] B. Mathew, P. Saha, S. M. Yimam, C. Biemann, P. Goyal, and A. Mukherjee, “Hatexplain: A benchmark dataset for explainable hate speech detection,” in _AAAI_ , 2021, pp. 14 867–14 875.
* [410] Y. Huang, Q. Zhang, L. Sun _et al._ , “Trustgpt: A benchmark for trustworthy and responsible large language models,” _CoRR_ , vol. abs/2306.11507, 2023.
* [411] J. Deng, J. Zhou, H. Sun, C. Zheng, F. Mi, H. Meng, and M. Huang, “Cold: A benchmark for chinese offensive language detection,” _CoRR_ , vol. abs/2201.06025, 2022.
* [412] G. Xu, J. Liu, M. Yan, H. Xu, J. Si, Z. Zhou, P. Yi, X. Gao, J. Sang, R. Zhang _et al._ , “Cvalues: Measuring the values of chinese large language models from safety to responsibility,” _CoRR_ , vol. abs/2307.09705, 2023\.
* [413] J. Zhang, K. Bao, Y. Zhang, W. Wang, F. Feng, and X. He, “Is chatgpt fair for recommendation? evaluating fairness in large language model recommendation,” _CoRR_ , vol. abs/2305.07609, 2023.
* [414] J. Dhamala, T. Sun, V. Kumar, S. Krishna, Y. Pruksachatkun, K.-W. Chang, and R. Gupta, “Bold: Dataset and metrics for measuring biases in open-ended language generation,” in _FAccT_ , 2021, pp. 862–872.
* [415] E. M. Smith, M. Hall, M. Kambadur, E. Presani, and A. Williams, ““i’m sorry to hear that”: Finding new biases in language models with a holistic descriptor dataset,” in _EMNLP_ , 2022, pp. 9180–9211.
* [416] J. Zhou, J. Deng, F. Mi, Y. Li, Y. Wang, M. Huang, X. Jiang, Q. Liu, and H. Meng, “Towards identifying social bias in dialog systems: Frame, datasets, and benchmarks,” _CoRR_ , vol. abs/2202.08011, 2022.
* [417] J. D. Blom, _A dictionary of hallucinations_. Springer, 2010.
* [418] A. P. Parikh, X. Wang, S. Gehrmann, M. Faruqui, B. Dhingra, D. Yang, and D. Das, “Totto: A controlled table-to-text generation dataset,” _CoRR_ , vol. abs/2004.14373, 2023.
* [419] E. Durmus, H. He, and M. Diab, “Feqa: A question answering evaluation framework for faithfulness assessment in abstractive summarization,” _CoRR_ , vol. abs/2005.03754, 2020.
* [420] B. Dhingra, M. Faruqui, A. Parikh, M.-W. Chang, D. Das, and W. W. Cohen, “Handling divergent reference texts when evaluating table-to-text generation,” _CoRR_ , vol. abs/1906.01081, 2019.
* [421] B. Goodrich, V. Rao, P. J. Liu, and M. Saleh, “Assessing the factual accuracy of generated text,” in _SIGKDD_ , 2019, pp. 166–175.
* [422] T. Falke, L. F. Ribeiro, P. A. Utama, I. Dagan, and I. Gurevych, “Ranking generated summaries by correctness: An interesting but challenging application for natural language inference,” in _ACL_ , 2019, pp. 2214–2220.
* [423] J. Pfeiffer, F. Piccinno, M. Nicosia, X. Wang, M. Reid, and S. Ruder, “mmt5: Modular multilingual pre-training solves source language hallucinations,” _CoRR_ , vol. abs/2305.14224, 2023.
* [424] K. Filippova, “Controlled hallucinations: Learning to generate faithfully from noisy data,” _CoRR_ , vol. abs/2010.05873, 2020.
* [425] F. Nie, J.-G. Yao, J. Wang, R. Pan, and C.-Y. Lin, “A simple recipe towards reducing hallucination in neural surface realisation,” in _ACL_ , 2019, pp. 2673–2679.
* [426] Y. Wang, Y. Zhao, and L. Petzold, “Are large language models ready for healthcare? a comparative study on clinical language understanding,” _CoRR_ , vol. abs/2304.05368, 2023.
* [427] OpenAI, “Open AI Privacy Policy,” https://openai.com/policies/privacy-policy, 2023.
* [428] S. A. Khowaja, P. Khuwaja, and K. Dev, “Chatgpt needs spade (sustainability, privacy, digital divide, and ethics) evaluation: A review,” _CoRR_ , vol. abs/2305.03123, 2023.
* [429] B. Wang, W. Chen, H. Pei, C. Xie, M. Kang, C. Zhang, C. Xu, Z. Xiong, R. Dutta, R. Schaeffer, S. T. Truong, S. Arora, M. Mazeika, D. Hendrycks, Z. Lin, Y. Cheng, S. Koyejo, D. Song, and B. Li, “Decodingtrust: A comprehensive assessment of trustworthiness in GPT models,” _CoRR_ , vol. abs/2306.11698, 2023.
* [430] L. Reynolds and K. McDonell, “Prompt programming for large language models: Beyond the few-shot paradigm,” in _CHI Extended Abstracts_ , 2021, pp. 1–7.
* [431] H. Brown, K. Lee, F. Mireshghallah, R. Shokri, and F. Tramèr, “What does it mean for a language model to preserve privacy?” in _FAccT_ , 2022, pp. 2280–2292.
* [432] X. Li, Y. Li, L. Liu, L. Bing, and S. Joty, “Is gpt-3 a psychopath? evaluating large language models from a psychological perspective,” _CoRR_ , vol. abs/2212.10529, 2022.
* [433] J. Rutinowski, S. Franke, J. Endendyk, I. Dormuth, and M. Pauly, “The self-perception and political biases of chatgpt,” _CoRR_ , vol. abs/2304.07333, 2023.
* [434] M. Das, S. K. Pandey, and A. Mukherjee, “Evaluating chatgpt’s performance for multilingual and emoji-based hate speech detection,” _CoRR_ , vol. abs/2305.13276, 2023.
* [435] D. Hendrycks, C. Burns, S. Basart, A. Critch, J. Li, D. Song, and J. Steinhardt, “Aligning ai with shared human values,” _CoRR_ , vol. abs/2008.02275, 2020.
* [436] F. Huang, H. Kwak, and J. An, “Is chatgpt better than human annotators? potential and limitations of chatgpt in explaining implicit hate speech,” _CoRR_ , vol. abs/2302.07736, 2023.
* [437] E. Sheng, K.-W. Chang, P. Natarajan, and N. Peng, “Societal biases in language generation: Progress and challenges,” _CoRR_ , vol. abs/2105.04054, 2021\.
* [438] M. Nadeem, A. Bethke, and S. Reddy, “Stereoset: Measuring stereotypical bias in pretrained language models,” _CoRR_ , vol. abs/2004.09456, 2020.
* [439] J. Hartmann, J. Schwenzow, and M. Witte, “The political ideology of conversational ai: Converging evidence on chatgpt’s pro-environmental, left-libertarian orientation,” _CoRR_ , vol. abs/2301.01768, 2023.
* [440] Y. Cao, L. Zhou, S. Lee, L. Cabello, M. Chen, and D. Hershcovich, “Assessing cross-cultural alignment between chatgpt and human societies: An empirical study,” _CoRR_ , vol. abs/2303.17466, 2023.
* [441] A. Ramezani and Y. Xu, “Knowledge of cultural moral norms in large language models,” _CoRR_ , vol. abs/2306.01857, 2023.
* [442] Y. Wan, W. Wang, P. He, J. Gu, H. Bai, and M. Lyu, “Biasasker: Measuring the bias in conversational ai system,” _CoRR_ , vol. abs/2305.12434, 2023.
* [443] Q. Luo, M. J. Puett, and M. D. Smith, “A perspectival mirror of the elephant: Investigating language bias on google, chatgpt, wikipedia, and youtube,” _CoRR_ , vol. abs/2303.16281, 2023.
* [444] Y. Tian, X. Yang, J. Zhang, Y. Dong, and H. Su, “Evil geniuses: Delving into the safety of llm-based agents,” _arXiv preprint arXiv:2311.11855_ , 2023\. |
# On forced periodicity of perfect colorings
Pyry Herva and Jarkko Kari
###### Abstract
We study forced periodicity of two-dimensional configurations under certain
constraints and use an algebraic approach to multidimensional symbolic
dynamics in which $d$-dimensional configurations and finite patterns are
presented as formal power series and Laurent polynomials, respectively, in $d$
variables. We consider perfect colorings that are configurations such that the
number of points of a given color in the neighborhood of any point depends
only on the color of the point for some fixed relative neighborhood, and we
show that by choosing the alphabet suitably any perfect coloring has a non-
trivial annihilator, that is, there exists a Laurent polynomial whose formal
product with the power series presenting the perfect coloring is zero. Using
known results we obtain a sufficient condition for forced periodicity of two-
dimensional perfect colorings. As corollaries of this result we get simple new
proofs for known results of forced periodicity on the square and the
triangular grids. Moreover, we obtain a new result concerning forced
periodicity of perfect colorings in the king grid. We also consider perfect
colorings of a particularly simple type: configurations that have low abelian
complexity with respect to some shape, and we generalize a result that gives a
sufficient condition for such configurations to be necessarily periodic. Also,
some algorithmic aspects are considered.
## 1 Introduction
We say that a $d$-dimensional configuration $c\in\mathcal{A}^{\Z^{d}}$, that
is, a coloring of the $d$-dimensional integer grid $\Z^{d}$ using colors from
a finite set $\mathcal{A}$, is a perfect coloring with respect to some finite
relative neighborhood $D\subseteq\Z^{d}$ if the number of occurrences of any
given color of $\mathcal{A}$ in the pattern $c|_{\mathbf{u}+D}$ depends only
on the color $c(\mathbf{u})$ for any $\mathbf{u}\in\Z^{d}$. There is a similar version of
this definition for general graphs: a vertex coloring $\varphi\colon
V\rightarrow\mathcal{A}$ of a graph $G=(V,E)$ with a finite set $\mathcal{A}$
of colors is a perfect coloring of radius $r$ if the number of vertices of any given color
in the $r$-neighborhood of a vertex $u\in V$ depends only on the color
$\varphi(u)$ of $u$ [28, 29]. More generally, the definition of perfect
colorings is a special case of the definition of equitable partitions [8].
If $\varphi\colon V\rightarrow\\{0,1\\}$ is a binary vertex coloring of a
graph $G=(V,E)$ then we can define a subset $C\subseteq V$ of the vertex set –
a code – such that it contains all the vertices with color $1$. If $\varphi$
is a perfect coloring of radius $r$, then the code $C$ has the property that
the number of codewords of $C$ in the $r$-neighborhood of a vertex $u\in V$ is
$a$ if $u\not\in C$ and $b$ if $u\in C$ for some fixed non-negative integers
$a$ and $b$. This kind of code is called a perfect $(r,b,a)$-covering or
simply just a perfect multiple covering [1, 5]. This definition is related to
domination in graphs and covering codes [11, 5].
Let $D\subseteq\Z^{d}$ be a finite set and $\mathcal{A}$ a finite set of
colors. Two finite patterns $p,q\in\mathcal{A}^{D}$ are abelian equivalent if
the number of occurrences of each symbol in $\mathcal{A}$ is the same in them.
The abelian complexity of a configuration $c\in\mathcal{A}^{\Z^{d}}$ with
respect to a finite shape $D$ is the number of abelian equivalence classes of
patterns of shape $D$ in $c$ [30]. We note that if $c\in\mathcal{A}^{\Z^{d}}$
is a perfect coloring with respect to $D$ and $|\mathcal{A}|=n$, then the
abelian complexity of $c$ with respect to $D$ is at most $n$. Abelian
complexity is a widely studied concept in one-dimensional symbolic dynamics
and combinatorics on words [22].
In this paper we study forced periodicity of two-dimensional perfect
colorings, that is, we study conditions under which all the colorings are
necessarily periodic. We give a general condition for forced periodicity. As
corollaries of this result we get new proofs for known results [1, 28, 29]
concerning forced periodicity of perfect colorings in the square and the
triangular grids, and a new result for forced periodicity of perfect colorings
in the king grid. Moreover, we study two-dimensional configurations of low
abelian complexity, that is, configurations that have abelian complexity 1
with respect to some shape: we generalize a statement of forced periodicity
concerning this type of configurations. We use an algebraic approach [17] to
multidimensional symbolic dynamics, i.e., we present configurations as formal
power series and finite patterns as Laurent polynomials. This approach was
developed to make progress in a famous open problem in symbolic dynamics –
Nivat’s conjecture [27] – which asserts that a two-dimensional configuration
with at most $mn$ distinct $m\times n$ rectangular patterns, for some $m,n$,
is necessarily periodic. Nivat’s conjecture is thus a two-dimensional
generalization of the Morse-Hedlund theorem [24].
This article is an extended version of the conference paper [12] where we
considered forced periodicity of perfect coverings, that is, perfect colorings
with only two colors.
### The structure of the paper
We begin in Section 2 by introducing the basic concepts of symbolic dynamics,
cellular automata and graphs, and defining perfect colorings formally. In
Section 3 we present the relevant algebraic concepts and the algebraic
approach to multidimensional symbolic dynamics, and in Section 4 we describe
an algorithm to find the line polynomial factors of a given two-dimensional
Laurent polynomial. In Section 5 we consider forced periodicity of perfect
coverings, i.e., perfect colorings with only two colors and then in Section 6
we extend the results from the previous section to concern perfect colorings
using arbitrarily large alphabets. After this we prove a statement concerning
forced periodicity of two-dimensional configurations of low abelian complexity
in Section 7. In Section 8 we consider some algorithmic questions concerning
perfect colorings.
## 2 Preliminaries
### Basics on symbolic dynamics
Let us review briefly some basic concepts of symbolic dynamics relevant to us.
For a reference see _e.g._ [4, 19, 21]. Although our results concern mostly
two-dimensional configurations, we state our definitions in an arbitrary
dimension.
Let $\mathcal{A}$ be a finite set (the _alphabet_) and let $d$ be a positive
integer (the _dimension_). A $d$-dimensional _configuration_ over
$\mathcal{A}$ is a coloring of the infinite grid $\Z^{d}$ using colors from
$\mathcal{A}$, that is, an element of $\mathcal{A}^{\Z^{d}}$ – the _$d$
-dimensional configuration space_ over the alphabet $\mathcal{A}$. We denote
by $c_{\mathbf{u}}=c(\mathbf{u})$ the symbol or color that a configuration
$c\in\mathcal{A}^{\Z^{d}}$ has in cell $\mathbf{u}$. The _translation_
$\tau^{\mathbf{t}}$ by a vector $\mathbf{t}\in\Z^{d}$ shifts a configuration
$c$ such that $\tau^{\mathbf{t}}(c)_{\mathbf{u}}=c_{\mathbf{u}-\mathbf{t}}$
for all $\mathbf{u}\in\Z^{d}$. A configuration $c$ is _$\mathbf{t}$ -periodic_
if $\tau^{\mathbf{t}}(c)=c$, and it is _periodic_ if it is
$\mathbf{t}$-periodic for some non-zero $\mathbf{t}\in\Z^{d}$. Moreover, we
say that a configuration is _periodic in direction_
$\mathbf{v}\in\Q^{d}\setminus\\{\mathbf{0}\\}$ if it is $k\mathbf{v}$-periodic
for some $k\in\Z$. A $d$-dimensional configuration $c$ is _strongly periodic_
if it has $d$ linearly independent vectors of periodicity. A strongly periodic
configuration is periodic in every rational direction. Two-dimensional
strongly periodic configurations are called _two-periodic_.
A finite _pattern_ is an assignment of symbols on some finite shape
$D\subseteq\Z^{d}$, that is, an element of $\mathcal{A}^{D}$. In particular,
the finite patterns in $\mathcal{A}^{D}$ are called _$D$ -patterns_. Let us
denote by $\mathcal{A}^{*}$ the set of all finite patterns over $\mathcal{A}$
where the dimension $d$ is known from the context. We say that a finite
pattern $p\in\mathcal{A}^{D}$ _appears_ in a configuration
$c\in\mathcal{A}^{\Z^{d}}$ or that $c$ _contains_ $p$ if
$\tau^{\mathbf{t}}(c)|_{D}=p$ for some $\mathbf{t}\in\Z^{d}$. For a fixed
shape $D$, the set of all $D$-patterns of $c$ is the set
$\mathcal{L}_{D}(c)=\\{\tau^{\mathbf{t}}(c)|_{D}\mid\mathbf{t}\in\Z^{d}\\}$
and the set of all finite patterns of $c$ is denoted by $\mathcal{L}(c)$ which
is called the _language of $c$_. For a set
$\mathcal{S}\subseteq\mathcal{A}^{\Z^{d}}$ of configurations we define
$\mathcal{L}_{D}(\mathcal{S})$ and $\mathcal{L}(\mathcal{S})$ as the unions of
$\mathcal{L}_{D}(c)$ and $\mathcal{L}(c)$ over all $c\in\mathcal{S}$,
respectively.
The _pattern complexity_ $P(c,D)$ of a configuration
$c\in\mathcal{A}^{\Z^{d}}$ with respect to a shape $D$ is the number of
distinct $D$-patterns that $c$ contains. For any $a\in\mathcal{A}$ we denote
by $|p|_{a}$ the number of occurrences of the color $a$ in a finite pattern
$p$. Two finite patterns $p,q\in\mathcal{A}^{D}$ are called _abelian
equivalent_ if $|p|_{a}=|q|_{a}$ for all $a\in\mathcal{A}$, that is, if the
number of occurrences of each color is the same in both $p$ and $q$. The
_abelian complexity_ $A(c,D)$ of a configuration $c\in\mathcal{A}^{\Z^{d}}$
_with respect to a finite shape $D$_ is the number of different $D$-patterns
in $c$ up to abelian equivalence [30]. Clearly $A(c,D)\leq P(c,D)$. We say
that $c$ has _low complexity_ with respect to $D$ if
$P(c,D)\leq|D|$
and that $c$ has _low abelian complexity_ with respect to $D$ if
$A(c,D)=1.$
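For example, let $D=\\{(0,0),(1,0)\\}\subseteq\Z^{2}$ and let $c\in\\{0,1\\}^{\Z^{2}}$ be the checkerboard configuration defined by $c_{(u,v)}=(u+v)\bmod 2$. Then $c$ contains exactly two $D$-patterns, $01$ and $10$, so $P(c,D)=2$, but these two patterns are abelian equivalent, so $A(c,D)=1$ and $c$ has low abelian complexity with respect to $D$.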
The configuration space $\mathcal{A}^{\Z^{d}}$ can be made a compact
topological space by endowing $\mathcal{A}$ with the discrete topology and
considering the product topology it induces on $\mathcal{A}^{\Z^{d}}$ – the
_prodiscrete topology_. This topology is induced by a metric where two
configurations are close if they agree on a large area around the origin. So,
$\mathcal{A}^{\Z^{d}}$ is a compact metric space.
A subset $\mathcal{S}\subseteq\mathcal{A}^{\Z^{d}}$ of the configuration space
is a _subshift_ if it is topologically closed and translation-invariant
meaning that if $c\in\mathcal{S}$, then for all $\mathbf{t}\in\Z^{d}$ also
$\tau^{\mathbf{t}}(c)\in\mathcal{S}$. Equivalently, subshifts can be defined
using forbidden patterns: Given a set $F\subseteq\mathcal{A}^{*}$ of
_forbidden_ finite patterns, the set
$X_{F}=\\{c\in\mathcal{A}^{\Z^{d}}\mid\mathcal{L}(c)\cap F=\emptyset\\}$
of configurations that avoid all forbidden patterns is a subshift. Moreover,
every subshift is obtained by forbidding some set of finite patterns. If
$F\subseteq\mathcal{A}^{*}$ is finite, then we say that $X_{F}$ is a _subshift
of finite type_ (SFT).
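For example, in dimension $d=1$ with $\mathcal{A}=\\{0,1\\}$ and $F=\\{11\\}$, where $11$ denotes the pattern of shape $\\{0,1\\}$ with symbol $1$ in both cells, the SFT $X_{F}$ is the well-known golden mean shift consisting of all binary configurations with no two adjacent $1$s.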
The _orbit_ of a configuration $c$ is the set
$\mathcal{O}(c)=\\{\tau^{\mathbf{t}}(c)\mid\mathbf{t}\in\Z^{d}\\}$ of its
every translate. The _orbit closure_ $\overline{\mathcal{O}(c)}$ is the
topological closure of its orbit under the prodiscrete topology. The orbit
closure of a configuration $c$ is the smallest subshift that contains $c$. It
consists of all configurations $c^{\prime}$ such that
$\mathcal{L}(c^{\prime})\subseteq\mathcal{L}(c)$.
### Cellular automata
Let us describe briefly an old result of cellular automata theory that we use
in Section 6. See [13] for a more thorough survey on the topic.
A $d$-dimensional _cellular automaton_ or a _CA_ for short over a finite
alphabet $\mathcal{A}$ is a map
$F\colon\mathcal{A}^{\Z^{d}}\longrightarrow\mathcal{A}^{\Z^{d}}$ determined by
a neighborhood vector $N=(\mathbf{t}_{1},\ldots,\mathbf{t}_{n})$ and a local
rule $f\colon\mathcal{A}^{n}\longrightarrow\mathcal{A}$ such that
$F(c)(\mathbf{u})=f(c(\mathbf{u}+\mathbf{t}_{1}),\ldots,c(\mathbf{u}+\mathbf{t}_{n})).$
A CA is _additive_ or _linear_ if its local rule is of the form
$f(x_{1},\ldots,x_{n})=a_{1}x_{1}+\ldots+a_{n}x_{n}$
where $a_{1},\ldots,a_{n}\in R$ are elements of some finite ring $R$ and
$\mathcal{A}$ is an $R$-module.
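To make the definition concrete, the following minimal Python sketch (our illustration; the function name and the finite wrap-around representation of a periodic configuration are assumptions made only for this example) applies one step of a one-dimensional additive CA over $R=\Z_{2}$ with neighborhood vector $N=(0,1)$ and local rule $f(x_{1},x_{2})=x_{1}+x_{2}$.

```python
# A minimal sketch (illustration only): one step of a one-dimensional additive
# CA over Z_2 with neighborhood vector N = (0, 1) and local rule
# f(x1, x2) = x1 + x2 mod 2. A periodic configuration is represented by one
# period; wrap-around indexing models the periodicity.

def additive_ca_step(config, neighborhood=(0, 1), coeffs=(1, 1), modulus=2):
    """F(c)(u) = a_1*c(u + t_1) + ... + a_n*c(u + t_n)  mod m."""
    n = len(config)
    return [
        sum(a * config[(u + t) % n] for a, t in zip(coeffs, neighborhood)) % modulus
        for u in range(n)
    ]

c = [0, 1, 0, 0, 1, 1]      # one period of a 6-periodic binary configuration
print(additive_ca_step(c))  # [1, 1, 0, 1, 0, 1]
```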
In Section 6 we consider the surjectivity of cellular automata and use a
classic result called the _Garden-of-Eden theorem_ proved by Moore and Myhill
that gives a characterization for surjectivity in terms of injectivity on
“finite” configurations. Two configurations $c_{1}$ and $c_{2}$ are called
_asymptotic_ if the set $\text{diff}(c_{1},c_{2})=\\{\mathbf{u}\mid
c_{1}(\mathbf{u})\neq c_{2}(\mathbf{u})\\}$ of cells where they differ is
finite. A cellular automaton $F$ is _pre-injective_ if $F(c_{1})\neq F(c_{2})$
for any distinct asymptotic configurations $c_{1}$ and $c_{2}$. Clearly
injective CA are pre-injective. The Garden-of-Eden theorem states that pre-
injectivity of a CA is equivalent to surjectivity:
###### Theorem (Garden-of-Eden theorem, [23, 25]).
A CA is surjective if and only if it is pre-injective.
In the one-dimensional setting the Garden-of-Eden theorem yields the following
corollary:
###### Corollary.
For a one-dimensional surjective CA every configuration has only a finite
number of pre-images.
### Graphs
In this paper we consider graphs that are _simple_ , _undirected_ and
_connected_. A graph $G$ that has vertex set $V$ and edge set $E$ is denoted
by $G=(V,E)$. The _distance_ $d(u,v)$ of two vertices $u\in V$ and $v\in V$ of
a graph $G=(V,E)$ is the length of a shortest path between them in $G$. The
$r$-neighborhood of $u\in V$ in a graph $G=(V,E)$ is the set $N_{r}(u)=\\{v\in
V\mid d(v,u)\leq r\\}$. The graphs we consider have vertex set $V=\Z^{2}$ and a
translation invariant edge set
$E\subseteq\\{\\{\mathbf{u},\mathbf{v}\\}\mid\mathbf{u},\mathbf{v}\in\Z^{2},\mathbf{u}\neq\mathbf{v}\\}$.
This implies that for all $r$ and for any two points $\mathbf{u}\in\Z^{2}$ and
$\mathbf{v}\in\Z^{2}$ their $r$-neighborhoods are the same up to translation,
that is, $N_{r}(\mathbf{u})=N_{r}(\mathbf{v})+\mathbf{u}-\mathbf{v}$.
Moreover, we assume that all the vertices of $G$ have only finitely many
neighbors, i.e., we assume that the _degree_ of $G$ is finite. We call these
graphs two-dimensional _(infinite) grid graphs_ or just _(infinite) grids_. In
a grid graph $G$, let us call the $r$-neighborhood of $\mathbf{0}$ the
_relative $r$-neighborhood_ of $G$ since it determines the $r$-neighborhood of
any vertex in $G$. Indeed, for all $\mathbf{u}\in\Z^{2}$ we have
$N_{r}(\mathbf{u})=N_{r}+\mathbf{u}$ where $N_{r}$ is the relative
$r$-neighborhood of $G$. Given the edge set of a grid graph, the relative
$r$-neighborhood is determined for every $r$. We specify three two-dimensional
infinite grid graphs:
* •
The _square grid_ is the infinite grid graph $(\Z^{2},E_{\mathcal{S}})$ with
$E_{\mathcal{S}}=\\{\\{\mathbf{u},\mathbf{v}\\}\mid\mathbf{u}-\mathbf{v}\in\\{(\pm
1,0),(0,\pm 1)\\}\\}.$
* •
The _triangular grid_ is the infinite grid graph $(\Z^{2},E_{\mathcal{T}})$
with
$E_{\mathcal{T}}=\\{\\{\mathbf{u},\mathbf{v}\\}\mid\mathbf{u}-\mathbf{v}\in\\{(\pm
1,0),(0,\pm 1),(1,1),(-1,-1)\\}\\}.$
* •
The _king grid_ is the infinite grid graph $(\Z^{2},E_{\mathcal{K}})$ with
$E_{\mathcal{K}}=\\{\\{\mathbf{u},\mathbf{v}\\}\mid\mathbf{u}-\mathbf{v}\in\\{(\pm
1,0),(0,\pm 1),(\pm 1,\pm 1)\\}\\}.$
The relative $2$-neighborhoods of these grid graphs are pictured in Figure 1.
Figure 1: The relative $2$-neighborhoods of the square grid, the triangular
grid and the king grid, respectively.
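Since the relative $r$-neighborhood of a grid graph is determined by its set of edge offsets, it can be computed by a breadth-first search from the origin. The following Python sketch (an illustration we add here; the names are ours) does this for the three grids above and reproduces the sizes of the relative $2$-neighborhoods pictured in Figure 1.

```python
# A sketch (illustration only): compute the relative r-neighborhood N_r of a
# translation-invariant grid graph on Z^2 from its set of edge offsets by
# breadth-first search from the origin.

def relative_neighborhood(offsets, r):
    frontier = {(0, 0)}
    reached = {(0, 0)}
    for _ in range(r):
        frontier = {(x + dx, y + dy)
                    for (x, y) in frontier for (dx, dy) in offsets} - reached
        reached |= frontier
    return reached

SQUARE = [(1, 0), (-1, 0), (0, 1), (0, -1)]
TRIANGULAR = SQUARE + [(1, 1), (-1, -1)]
KING = SQUARE + [(1, 1), (1, -1), (-1, 1), (-1, -1)]

print(len(relative_neighborhood(SQUARE, 2)))      # 13
print(len(relative_neighborhood(TRIANGULAR, 2)))  # 19
print(len(relative_neighborhood(KING, 2)))        # 25
```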
### Perfect colorings
Let $\mathcal{A}=\\{a_{1},\ldots,a_{n}\\}$ be a finite alphabet of $n$ colors
and let $D\subseteq\Z^{d}$ be a finite shape. A configuration
$c\in\mathcal{A}^{\Z^{d}}$ is a _perfect coloring with respect to
$D\subseteq\Z^{d}$_ or a _$D$ -perfect coloring_ if for all
$i,j\in\\{1,\ldots,n\\}$ there exist numbers $b_{ij}$ such that for all
$\mathbf{u}\in\Z^{d}$ with $c_{\mathbf{u}}=a_{j}$ the number of occurrences of
color $a_{i}$ in the $D$-neighborhood of $\mathbf{u}$, i.e., in the pattern
$c|_{\mathbf{u}+D}$ is exactly $b_{ij}$. The _matrix of a $D$-perfect
coloring_ $c$ is the matrix $\mathbf{B}=(b_{ij})_{n\times n}$ where the
numbers $b_{ij}$ are as above. A $D$-perfect coloring with matrix $\mathbf{B}$
is called a (perfect) _$(D,\mathbf{B})$ -coloring_. Any $D$-perfect coloring
is called simply a perfect coloring. In other words, a configuration is a
perfect coloring if the number of cells of a given color in the given
neighborhood of a vertex $\mathbf{u}$ depends only on the color of
$\mathbf{u}$.
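For example, let $D=\\{(0,0),(\pm 1,0),(0,\pm 1)\\}$ be the relative $1$-neighborhood of the square grid and let $c$ be the checkerboard configuration over $\mathcal{A}=\\{a_{1},a_{2}\\}$ that has color $a_{1}$ exactly in the cells whose coordinate sum is even. Every cell of color $a_{1}$ sees one occurrence of $a_{1}$ (itself) and four occurrences of $a_{2}$ in its $D$-neighborhood, and symmetrically for $a_{2}$. Hence $c$ is a $(D,\mathbf{B})$-coloring with $b_{11}=b_{22}=1$ and $b_{12}=b_{21}=4$.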
Perfect colorings are defined also for arbitrary graphs $G=(V,E)$. Again, let
$\mathcal{A}=\\{a_{1},\ldots,a_{n}\\}$ be a finite set of $n$ colors. A vertex
coloring $\varphi\colon V\rightarrow\mathcal{A}$ of $G$ is an $r$-perfect
coloring with matrix $\mathbf{B}=(b_{ij})_{n\times n}$ if the number of
vertices of color $a_{i}$ in the $r$-neighborhood of a vertex of color $a_{j}$
is exactly $b_{ij}$. Clearly if $G$ is a translation invariant graph with
vertex set $\Z^{d}$, then the $r$-perfect colorings of $G$ are exactly the
$D$-perfect colorings in $\mathcal{A}^{\Z^{d}}$ where $D$ is the relative
$r$-neighborhood of the graph $G$.
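The defining condition is straightforward to verify computationally for strongly periodic configurations. The following Python sketch (our illustration; the representation of a configuration by one $m\times n$ fundamental domain with wrap-around is an assumption that is valid precisely for strongly periodic configurations) computes the matrix $\mathbf{B}$ of a $D$-perfect coloring, or reports failure.

```python
# A sketch (illustration only): given one m x n fundamental domain of a
# strongly periodic configuration (with periods (m,0) and (0,n), modeled by
# wrap-around indexing), check whether it is a D-perfect coloring and, if so,
# return the numbers b_ij as a dictionary keyed by (i, j).

def perfect_coloring_matrix(domain, D):
    m, n = len(domain), len(domain[0])
    colors = sorted({a for row in domain for a in row})
    counts = {}
    for u in range(m):
        for v in range(n):
            j = domain[u][v]   # color of the current cell
            row = tuple(sum(domain[(u + du) % m][(v + dv) % n] == i
                            for (du, dv) in D) for i in colors)
            if counts.setdefault(j, row) != row:
                return None    # counts depend on more than the color: not perfect
    return {(i, j): counts[j][k] for j in counts for k, i in enumerate(colors)}

# The checkerboard with the relative 1-neighborhood of the square grid:
D = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
print(perfect_coloring_matrix([[1, 2], [2, 1]], D))
# {(1, 1): 1, (2, 1): 4, (1, 2): 4, (2, 2): 1}
```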
## 3 Algebraic concepts
We review the basic concepts and some results relevant to us concerning an
algebraic approach to multidimensional symbolic dynamics introduced and
studied in [17]. See also [14] for a short survey of the topic.
Let $c\in\mathcal{A}^{\Z^{d}}$ be a $d$-dimensional configuration. The power
series presenting $c$ is the formal power series
$c(X)=c(x_{1},\ldots,x_{d})=\sum_{\mathbf{u}=(u_{1},\ldots,u_{d})\in\Z^{d}}c_{\mathbf{u}}x_{1}^{u_{1}}\cdots
x_{d}^{u_{d}}=\sum_{\mathbf{u}\in\Z^{d}}c_{\mathbf{u}}X^{\mathbf{u}}$
in $d$ variables $X=(x_{1},\ldots,x_{d})$. We denote the set of all formal
power series in $d$ variables $X=(x_{1},\ldots,x_{d})$ over a domain $M$ by
$M[[X^{\pm 1}]]=M[[x_{1}^{\pm 1},\ldots,x_{d}^{\pm 1}]]$. If $d=1$ or $d=2$,
we denote $x=x_{1}$ and $y=x_{2}$. A power series is _finitary_ if it has only
finitely many distinct coefficients and _integral_ if its coefficients are all
integers, i.e., if it belongs to the set $\Z[[X^{\pm 1}]]$. A configuration is
always presented by a finitary power series and a finitary power series always
presents a configuration. So, from now on we may call any finitary power
series a configuration.
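For example, the one-dimensional configuration $c\in\\{0,1\\}^{\Z}$ with $c_{u}=1$ for even $u$ and $c_{u}=0$ for odd $u$ is presented by the finitary power series $c(x)=\sum_{k\in\Z}x^{2k}$.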
We also consider Laurent polynomials, which we may call simply polynomials. We
denote the set of Laurent polynomials in $d$ variables
$X=(x_{1},\ldots,x_{d})$ over a ring $R$ by $R[X^{\pm 1}]=R[x_{1}^{\pm
1},\ldots,x_{d}^{\pm 1}]$. We use the term “proper” when we talk about proper
(i.e., non-Laurent) polynomials, and we denote the proper polynomial ring over
$R$ by $R[X]$ as usual.
We say that two Laurent polynomials have no common factors if all their common
factors are units in the polynomial ring under consideration and that they
have a common factor if they have a non-unit common factor. For example, in
$\C[X^{\pm 1}]$ two polynomials have no common factors if all their common
factors are constants or monomials, and two proper polynomials in $\C[X]$ have
no common factors if all their common factors are constants. The _support_ of
a power series $c=c(X)=\sum_{\mathbf{u}\in\Z^{d}}c_{\mathbf{u}}X^{\mathbf{u}}$
is the set $\text{\rm supp}(c)=\\{\mathbf{u}\in\Z^{d}\mid c_{\mathbf{u}}\neq
0\\}$. Clearly a polynomial is a power series with a finite support. The $k$th
dilation of a polynomial $f(X)$ is the polynomial $f(X^{k})$. See Figure 2 for
an illustration of dilations.
Figure 2: The supports of the polynomial
$f(X)=1+x^{-1}y^{-1}+x^{-1}y^{1}+x^{1}y^{-1}+x^{1}y^{1}$ and its dilations
$f(X^{2})$ and $f(X^{3})$.
The $x_{i}$-resultant $\text{\rm Res}_{x_{i}}(f,g)$ of two proper polynomials
$f,g\in R[x_{1},\ldots,x_{d}]$ is the determinant of the _Sylvester matrix_ of
$f$ and $g$ with respect to the variable $x_{i}$. We omit the details, which the
reader can find in [6], and instead we consider the resultant $\text{\rm
Res}_{x_{i}}(f,g)\in R[x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{d}]$ for every
$i\in\\{1,\ldots,d\\}$ as a certain proper polynomial that has the following
two properties:
* •
$\text{\rm Res}_{x_{i}}(f,g)$ is in the ideal generated by $f$ and $g$, i.e.,
there exist proper polynomials $h$ and $l$ such that
$hf+lg=\text{\rm Res}_{x_{i}}(f,g).$
* •
If two proper polynomials $f$ and $g$ have no common factors in
$R[x_{1},\ldots,x_{d}]$, then $\text{\rm Res}_{x_{i}}(f,g)\neq 0$.
Let $R$ be a ring and $M$ a (left) $R$-module. The formal product of a
polynomial $f=f(X)=\sum_{i=1}^{m}a_{i}X^{\mathbf{u}_{i}}\in R[X^{\pm 1}]$ and
a power series
$c=c(X)=\sum_{\mathbf{u}\in\Z^{d}}c_{\mathbf{u}}X^{\mathbf{u}}\in M[[X^{\pm
1}]]$ is well-defined as the formal power series
$fc=f(X)c(X)=\sum_{\mathbf{u}\in\Z^{d}}(fc)_{\mathbf{u}}X^{\mathbf{u}}\in
M[[X^{\pm 1}]]$
where
$(fc)_{\mathbf{u}}=\sum_{i=1}^{m}a_{i}c_{\mathbf{u}-\mathbf{u}_{i}}.$
We say that a polynomial $f=f(X)$ _annihilates_ (or _is an annihilator of_) a
power series $c=c(X)$ if $fc=0$, that is, if their product is the zero power
series.
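As a concrete illustration, here is a minimal Python sketch (our own notation, not code from [17]) of this formal product: a Laurent polynomial is stored as a dict mapping exponent vectors to coefficients, a configuration as a function $\Z^{2}\rightarrow\Z$, and annihilation is checked on a finite window.

```python
# A minimal sketch of the formal product (fc)_u = sum_i a_i c_{u - u_i}.
# f is a dict {exponent vector: coefficient}; c is a function Z^2 -> Z.
def poly_times_config(f, c, u):
    """Coefficient of X^u in the product f(X) c(X)."""
    return sum(a * c((u[0] - v[0], u[1] - v[1])) for v, a in f.items())

# A (2,0)-periodic configuration: c_u depends only on u_1 mod 2 ...
c = lambda u: u[0] % 2

# ... is annihilated by the polynomial X^{(2,0)} - 1, reflecting its
# periodicity in direction (2,0).
f = {(2, 0): 1, (0, 0): -1}
assert all(poly_times_config(f, c, (i, j)) == 0
           for i in range(-5, 5) for j in range(-5, 5))
```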
In a typical setting, we assume that $\mathcal{A}\subseteq\Z$ and hence
consider any configuration $c\in\mathcal{A}^{\Z^{d}}$ as a finitary and
integral power series $c(X)$. Since multiplying $c(X)$ by the monomial
$X^{\mathbf{u}}$ produces the power series presenting the translation
$\tau^{\mathbf{u}}(c)$ of $c$ by $\mathbf{u}$, we have that $c$ is
$\mathbf{u}$-periodic if and only if $c(X)$ is annihilated by the _difference
polynomial_ $X^{\mathbf{u}}-1$. (By a difference polynomial we mean a
polynomial $X^{\mathbf{u}}-1$ for any $\mathbf{u}\neq 0$.) This means that it
is natural to consider multiplication of $c$ by polynomials in $\C[X^{\pm
1}]$. However, note that the product of $c$ and a polynomial $f\in\C[X^{\pm
1}]$ may not be integral, but it is still finitary, hence a configuration. We
say that a polynomial $f$ _periodizes_ (or _is a periodizer of_) a
configuration $c$ if $fc$ is strongly periodic, that is, periodic in $d$
linearly independent directions. We denote the set of all periodizers with
complex coefficients of a configuration $c$ by $\text{\rm Per}(c)$ which is an
ideal of $\C[X^{\pm 1}]$ and hence we call it the _periodizer ideal_ of $c$.
Note that annihilators are periodizers. Note also that if $c$ has a periodizer
$f$, then $(X^{\mathbf{u}}-1)f$ is an annihilator of $c$ for some
$\mathbf{u}$. Thus, $c$ has a non-trivial (= non-zero) annihilator if and only
if it has a non-trivial periodizer. The following theorem states that if a
configuration has a non-trivial periodizer, then it has in fact an annihilator
of a particularly simple form: a product of difference polynomials.
###### Theorem 1 ([17]).
Let $c\in\Z[[X^{\pm 1}]]$ be a configuration in any dimension and assume that
it has a non-trivial periodizer. Then there exist $m\geq 1$ and pairwise
linearly independent vectors $\mathbf{t}_{1},\ldots,\mathbf{t}_{m}$ such that
$(X^{\mathbf{t}_{1}}-1)\cdots(X^{\mathbf{t}_{m}}-1)$
annihilates $c$.
A _line polynomial_ is a polynomial whose support contains at least two points
and the points of the support lie on a unique line. In other words, a
polynomial $f$ is a line polynomial if it is not a monomial and there exist
vectors $\mathbf{u},\mathbf{v}\in\Z^{d}$ such that $\text{\rm
supp}(f)\subseteq\mathbf{u}+\Q\mathbf{v}$. In this case we say that $f$ is a
line polynomial in direction $\mathbf{v}$. We say that non-zero vectors
$\mathbf{v},\mathbf{v}^{\prime}\in\Z^{d}$ are _parallel_ if
$\mathbf{v}^{\prime}\in\Q\mathbf{v}$, and clearly then a line polynomial in
direction $\mathbf{v}$ is also a line polynomial in any parallel direction. A
vector $\mathbf{v}\in\Z^{d}$ is _primitive_ if the greatest common divisor of
its components is 1. If $\mathbf{v}$ is primitive, then
$\Q\mathbf{v}\cap\Z^{d}=\Z\mathbf{v}$. For any non-zero $\mathbf{v}\in\Z^{d}$
there exists a parallel primitive vector $\mathbf{v}^{\prime}\in\Z^{d}$. Thus,
we may assume the vector $\mathbf{v}$ in the definition of a line polynomial
$f$ to be primitive so that $\text{\rm
supp}(f)\subseteq\mathbf{u}+\Z\mathbf{v}$. In the following our preferred
presentations of directions are in terms of primitive vectors.
Any line polynomial $\phi$ in a (primitive) direction $\mathbf{v}$ can be
written uniquely in the form
$\phi=X^{\mathbf{u}}(a_{0}+a_{1}X^{\mathbf{v}}+\ldots+a_{n}X^{n\mathbf{v}})=X^{\mathbf{u}}(a_{0}+a_{1}t+\ldots+a_{n}t^{n})$
where $\mathbf{u}\in\Z^{d},n\geq 1,a_{0}\neq 0,a_{n}\neq 0$ and
$t=X^{\mathbf{v}}$. Let us call the single variable proper polynomial
$a_{0}+a_{1}t+\ldots+a_{n}t^{n}\in\C[t]$ the _normal form_ of $\phi$.
Moreover, for a monomial $aX^{\mathbf{u}}$ we define its normal form to be
$a$. So, two line polynomials in the direction $\mathbf{v}$ have the same
normal form if and only if they are the same polynomial up to multiplication
by $X^{\mathbf{u}}$, for some $\mathbf{u}\in\Z^{d}$.
Difference polynomials are line polynomials and hence the annihilator provided
by Theorem 1 is a product of line polynomials. Annihilation by a difference
polynomial means periodicity. More generally, annihilation of a configuration
$c$ by a line polynomial in a primitive direction $\mathbf{v}$ can be
understood as the annihilation of the one-dimensional _$\mathbf{v}$ -fibers_
$\sum_{k\in\Z}c_{\mathbf{u}+k\mathbf{v}}X^{\mathbf{u}+k\mathbf{v}}$ of $c$ in
direction $\mathbf{v}$, and since annihilation in the one-dimensional setting
implies periodicity with a bounded period, we conclude that a configuration is
periodic if and only if it is annihilated by a line polynomial. It is known
that if $c$ has a periodizer with line polynomial factors in at most one
primitive direction, then $c$ is periodic:
###### Theorem 2 ([18]).
Let $c\in\Z[[x^{\pm 1},y^{\pm 1}]]$ be a two-dimensional configuration and let
$f$ be a periodizer of $c$. Then the following conditions hold.
* •
If $f$ does not have any line polynomial factors, then $c$ is two-periodic.
* •
If all line polynomial factors of $f$ are in the same primitive direction,
then $c$ is periodic in this direction.
_Proof sketch._ The periodizer ideal $\text{\rm Per}(c)=\\{g\in\C[x^{\pm
1},y^{\pm 1}]\mid gc\text{ is two-periodic}\\}$ of $c$ is a principal ideal
generated by a polynomial $g=\phi_{1}\cdots\phi_{m}$ where
$\phi_{1},\ldots,\phi_{m}$ are line polynomials in pairwise non-parallel
directions [18]. Because $f\in\text{\rm Per}(c)$, we know that $g$ divides
$f$. If $f$ does not have any line polynomial factors, then $g=1$ and hence
$c=gc$ is two-periodic. If $f$ has line polynomial factors, and they are in
the same primitive direction $\mathbf{v}$, then $g$ is a line polynomial in
this direction. Since $gc$ is two-periodic, it is annihilated by
$(X^{k\mathbf{v}}-1)$ for some non-zero $k\in\Z$. This implies that the configuration
$c$ is annihilated by the line polynomial $(X^{k\mathbf{v}}-1)g$ in direction
$\mathbf{v}$. We conclude that $c$ is periodic in direction $\mathbf{v}$. ∎
The proof of the previous theorem sketched above relies heavily on the
structure of the ideal $\text{\rm Per}(c)$ developed in [17]. We give an
alternative proof sketch that mimics the usage of resultants in [16]:
_Second proof sketch of Theorem 2._ The existence of a non-trivial periodizer
$f$ implies by Theorem 1 that $c$ has a special annihilator
$g=\phi_{1}\cdots\phi_{m}$ that is a product of (difference) line polynomials
$\phi_{1},\ldots,\phi_{m}$ in pairwise non-parallel directions. All
irreducible factors of $g$ are line polynomials. If $f$ does not have any line
polynomial factors, then the periodizers $f$ and $g$ do not have common
factors. We can assume that both are proper polynomials as they can be
multiplied by a suitable monomial if needed. Because $f,g\in\text{\rm
Per}(c)$, also their resultant $\text{\rm Res}_{x}(f,g)\in\text{\rm Per}(c)$,
implying that $c$ has a non-trivial periodizer containing only the variable
$y$: indeed, $\text{\rm Res}_{x}(f,g)\neq 0$ because $f$ and $g$ have no common
factors. This means that $c$ is periodic in the vertical direction.
Analogously, the _$y$ -resultant_ $\text{\rm Res}_{y}(f,g)$ shows that $c$ is
horizontally periodic, and hence two-periodic.
The proof for the case that $f$ has line polynomial factors only in one
direction $\mathbf{v}$ goes analogously by considering $\phi c$ instead of
$c$, where $\phi$ is the greatest common line polynomial factor of $f$ and $g$
in the direction $\mathbf{v}$. We get that $\phi c$ is two-periodic, implying
that $c$ is periodic in direction $\mathbf{v}$. ∎
In this paper we also consider configurations over alphabets $\mathcal{A}$
that are finite subsets of $\Z^{n}$, that is, the set of length $n$ integer
vectors, and hence study finitary formal power series from the set
$\Z^{n}[[X^{\pm 1}]]$ for $n\geq 2$. In particular, we call this kind of
configurations _integral vector configurations_. Also in this setting we
consider multiplication of power series by polynomials. The coefficients of
the polynomials are $n\times n$ integer matrices, i.e., elements of the ring
$\Z^{n\times n}$. Since $\Z^{n}$ is a (left) $\Z^{n\times n}$-module where we
consider the vectors of $\Z^{n}$ as column vectors, the product of a
polynomial $f=f(X)\in\Z^{n\times n}[X^{\pm 1}]$ and a power series
$c=c(X)\in\Z^{n}[[X^{\pm 1}]]$ is well-defined. Consequently, we say that
$c(X)\in\Z^{n}[[X^{\pm 1}]]$ is $\mathbf{t}$-periodic if it is annihilated by
the polynomial $\mathbf{I}X^{\mathbf{t}}-\mathbf{I}$ and that it is periodic
if it is $\mathbf{t}$-periodic for some non-zero $\mathbf{t}$.
There is a natural way to present configurations over arbitrary alphabets as
integral vector configurations. Let $\mathcal{A}=\\{a_{1},\ldots,a_{n}\\}$ be
a finite alphabet with $n$ elements. The _vector presentation_ of a
configuration $c\in\mathcal{A}^{\Z^{d}}$ is the configuration
$c^{\prime}\in\\{\mathbf{e}_{1},\ldots,\mathbf{e}_{n}\\}^{\Z^{d}}$ (or the
power series $c^{\prime}(X)\in\Z^{n}[[X^{\pm 1}]]$ presenting $c^{\prime}$)
defined such that $c^{\prime}_{\mathbf{u}}=\mathbf{e}_{i}$ if and only if
$c_{\mathbf{u}}=a_{i}$. Here by $\mathbf{e}_{i}\in\Z^{n}$ we denote the $i$th
natural base vector, i.e., the vector whose $i$th component is 1 while all the
other components are 0. Clearly $c$ is $\mathbf{t}$-periodic if and only if
its vector presentation is $\mathbf{t}$-periodic. Thus, to study the
periodicity of a configuration it is sufficient to study the periodicity of
its vector presentation.
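A minimal Python sketch of this encoding (our own illustration, not from the original sources):

```python
# Vector presentation: replace the color a_i at each cell by the i-th
# standard basis vector e_i of Z^n.
def vector_presentation(c, alphabet):
    index = {a: i for i, a in enumerate(alphabet)}
    def cp(u):
        e = [0] * len(alphabet)
        e[index[c(u)]] = 1
        return e
    return cp

c = lambda u: 'ab'[(u[0] + u[1]) % 2]      # a two-color checkerboard
cp = vector_presentation(c, ['a', 'b'])
print(cp((0, 0)), cp((1, 0)))              # [1, 0] [0, 1]
```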
The $i$th _layer_ of
$c=\sum\mathbf{c}_{\mathbf{u}}X^{\mathbf{u}}\in\Z^{n}[[X^{\pm 1}]]$ is the
power series
$\text{layer}_{i}(c)=\sum c_{\mathbf{u}}^{(i)}X^{\mathbf{u}}\in\Z[[X^{\pm
1}]]$
where $c_{\mathbf{u}}^{(i)}$ is the $i$th component of
$\mathbf{c}_{\mathbf{u}}$. Clearly $c\in\Z^{n}[[X^{\pm 1}]]$ is periodic in
direction $\mathbf{v}$ if and only if for all $i\in\\{1,\ldots,n\\}$ the $i$th
layer of $c$ is periodic in direction $\mathbf{v}$.
Finally, let $R$ be a finite ring and $\mathcal{A}$ a finite $R$-module. A
polynomial $f(X)=\sum_{i=1}^{n}a_{i}X^{-\mathbf{u}_{i}}\in R[x_{1}^{\pm
1},\ldots,x_{d}^{\pm 1}]$ defines an additive CA that has neighborhood vector
$(\mathbf{u}_{1},\ldots,\mathbf{u}_{n})$ and local rule
$f^{\prime}(y_{1},\ldots,y_{n})=a_{1}y_{1}+\ldots+a_{n}y_{n}$. More precisely,
the image of a configuration $c$ under the CA determined by $f$ is the
configuration $fc$.
## 4 Finding the line polynomial factors of a given two-variate Laurent
polynomial
In this section we have $d=2$ and hence all our polynomials are in two
variables $x$ and $y$. The open and closed _discrete half planes_ determined
by a non-zero vector $\mathbf{v}\in\Z^{2}$ are the sets
$H_{\mathbf{v}}=\\{\mathbf{u}\in\Z^{2}\mid\langle\mathbf{u},\mathbf{v}^{\perp}\rangle>0\\}$
and
$\overline{H}_{\mathbf{v}}=\\{\mathbf{u}\in\Z^{2}\mid\langle\mathbf{u},\mathbf{v}^{\perp}\rangle\geq
0\\}$, respectively, where $\mathbf{v}^{\perp}=(v_{2},-v_{1})$ is orthogonal
to $\mathbf{v}=(v_{1},v_{2})$. Let us also denote by
$l_{\mathbf{v}}=\overline{H}_{\mathbf{v}}\setminus H_{\mathbf{v}}$ the
discrete line parallel to $\mathbf{v}$ that goes through the origin. In other
words, the half plane determined by $\mathbf{v}$ is the half plane “to the
right” of the line $l_{\mathbf{v}}$ when moving along the line in the
direction of $\mathbf{v}$. We say that a finite set $D\subseteq\Z^{2}$ has an
_outer edge_ in direction $\mathbf{v}$ if there exists a vector
$\mathbf{t}\in\Z^{2}$ such that
$D\subseteq\overline{H}_{\mathbf{v}}+\mathbf{t}$ and
$|D\cap(l_{\mathbf{v}}+\mathbf{t})|\geq 2$. We call
$D\cap(l_{\mathbf{v}}+\mathbf{t})$ the outer edge of $D$ in direction
$\mathbf{v}$. An outer edge corresponding to $\mathbf{v}$ means that the
convex hull of $D$ has an edge in direction $\mathbf{v}$ in the clockwise
orientation around $D$.
If a finite non-empty set $D$ does not have an outer edge in direction
$\mathbf{v}$, then there exists a vector $\mathbf{t}\in\Z^{2}$ such that
$D\subseteq\overline{H}_{\mathbf{v}}+\mathbf{t}$ and
$|D\cap(l_{\mathbf{v}}+\mathbf{t})|=1$, and then we say that $D$ has a vertex
in direction $\mathbf{v}$. We call $D\cap(l_{\mathbf{v}}+\mathbf{t})$ the
vertex of $D$ in direction $\mathbf{v}$. We say that a polynomial $f$ has an
outer edge or a vertex in direction $\mathbf{v}$ if its support has an outer
edge or a vertex in direction $\mathbf{v}$, respectively. Note that every non-
empty finite shape $D$ has either an outer edge or a vertex in any non-zero
direction. Note also that in this context directions $\mathbf{v}$ and
$-\mathbf{v}$ are not the same: a shape may have an outer edge in direction
$\mathbf{v}$ but no outer edge in direction $-\mathbf{v}$. The following lemma
shows that a polynomial can have line polynomial factors only in the
directions of its outer edges.
###### Lemma 3 ([16]).
Let $f$ be a non-zero polynomial with a line polynomial factor in direction
$\mathbf{v}$. Then $f$ has outer edges in directions $\mathbf{v}$ and
$-\mathbf{v}$.
Let $\mathbf{v}\in\Z^{2}\setminus\\{\mathbf{0}\\}$ be a non-zero primitive
vector and let $f=\sum f_{\mathbf{u}}X^{\mathbf{u}}$ be a polynomial. Recall
that a _$\mathbf{v}$ -fiber_ of $f$ is a polynomial of the form
$\sum_{k\in\Z}f_{\mathbf{u}+k\mathbf{v}}X^{\mathbf{u}+k\mathbf{v}}$
for some $\mathbf{u}\in\Z^{2}$. Thus, a non-zero $\mathbf{v}$-fiber of a
polynomial is either a line polynomial or a monomial. Let us denote by
$\mathcal{F}_{\mathbf{v}}(f)$ the set of different normal forms of all non-
zero $\mathbf{v}$-fibers of a polynomial $f$, which is hence a finite set of
one-variate proper polynomials. The following simple example illustrates the
concept of fibers and their normal forms.
###### Example 4.
Let us determine the set $\mathcal{F}_{\mathbf{v}}(f)$ for
$f=f(X)=f(x,y)=3x+y+xy^{2}+xy+x^{3}y^{3}+x^{4}y^{4}$ and $\mathbf{v}=(1,1)$.
By grouping the terms we can write
$f=3x+y(1+xy)+xy(1+x^{2}y^{2}+x^{3}y^{3})=X^{(1,0)}\cdot
3+X^{(0,1)}(1+t)+X^{(1,1)}(1+t^{2}+t^{3})$
where $t=X^{(1,1)}=xy$. Hence,
$\mathcal{F}_{\mathbf{v}}(f)=\\{3,1+t,1+t^{2}+t^{3}\\}$. See Figure 3 for a
pictorial illustration. ∎
Figure 3: The support of $f=3x+y+xy^{2}+xy+x^{3}y^{3}+x^{4}y^{4}$ and its different $(1,1)$-fibers.
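The computation in Example 4 is mechanical, and the following Python sketch (our own helper, with polynomials stored as dicts from exponent vectors to coefficients) reproduces it: two exponents $\mathbf{u},\mathbf{u}^{\prime}$ lie in the same $\mathbf{v}$-fiber exactly when $u_{1}v_{2}-u_{2}v_{1}=u^{\prime}_{1}v_{2}-u^{\prime}_{2}v_{1}$.

```python
from collections import defaultdict

def v_fibers_normal_forms(f, v):
    """Normal forms, as coefficient lists [a_0, ..., a_n], of the non-zero
    v-fibers of a polynomial f given as {exponent vector: coefficient};
    the direction v is assumed to be primitive."""
    groups = defaultdict(dict)
    for u, a in f.items():
        groups[u[0] * v[1] - u[1] * v[0]][u] = a   # constant on each fiber
    forms = []
    step = v[0] * v[0] + v[1] * v[1]               # <u+v, v> - <u, v>
    for fib in groups.values():
        proj = lambda u: u[0] * v[0] + u[1] * v[1] # position along the line
        supp = sorted(fib, key=proj)
        k0 = proj(supp[0])
        coeffs = [0] * ((proj(supp[-1]) - k0) // step + 1)
        for u in supp:
            coeffs[(proj(u) - k0) // step] = fib[u]
        forms.append(coeffs)
    return forms

# Example 4: f = 3x + y + xy^2 + xy + x^3y^3 + x^4y^4 and v = (1,1).
f = {(1, 0): 3, (0, 1): 1, (1, 2): 1, (1, 1): 1, (3, 3): 1, (4, 4): 1}
print(v_fibers_normal_forms(f, (1, 1)))
# [[3], [1, 1], [1, 0, 1, 1]] in some order: the set {3, 1+t, 1+t^2+t^3}
```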
As noticed in the example above, polynomials are linear combinations of their
fibers: for any polynomial $f$ and any non-zero primitive vector $\mathbf{v}$
we can write
$f=X^{\mathbf{u}_{1}}\psi_{1}+\ldots+X^{\mathbf{u}_{n}}\psi_{n}$
for some $\mathbf{u}_{1},\ldots,\mathbf{u}_{n}\in\Z^{2}$ where
$\psi_{1},\ldots,\psi_{n}\in\mathcal{F}_{\mathbf{v}}(f)$. We use this in the
proof of the next theorem.
###### Theorem 5.
A polynomial $f$ has a line polynomial factor in direction $\mathbf{v}$ if and
only if the polynomials in $\mathcal{F}_{\mathbf{v}}(f)$ have a common factor.
###### Proof.
For any line polynomial $\phi$ in direction $\mathbf{v}$, and for any
polynomial $g$, the $\mathbf{v}$-fibers of the product $\phi g$ have a common
factor $\phi$. In other words, if a polynomial $f$ has a line polynomial
factor $\phi$ in direction $\mathbf{v}$, then the polynomials in
$\mathcal{F}_{\mathbf{v}}(f)$ have the normal form of $\phi$ as a common
factor.
For the converse direction, assume that the polynomials in
$\mathcal{F}_{\mathbf{v}}(f)$ have a common factor $\phi$. Then there exist
vectors $\mathbf{u}_{1},\ldots,\mathbf{u}_{n}\in\Z^{2}$ and polynomials
$\phi\psi_{1},\ldots,\phi\psi_{n}\in\mathcal{F}_{\mathbf{v}}(f)$ such that
$f=X^{\mathbf{u}_{1}}\phi\psi_{1}+\ldots+X^{\mathbf{u}_{n}}\phi\psi_{n}.$
Hence, $\phi$ is a line polynomial factor of $f$ in direction $\mathbf{v}$. ∎
Note that Lemma 3 actually follows immediately from Theorem 5: A vertex
instead of an outer edge in direction $\mathbf{v}$ or $-\mathbf{v}$ provides a
non-zero monomial $\mathbf{v}$-fiber, which implies that the polynomials in
$\mathcal{F}_{\mathbf{v}}(f)$ have no common factors.
So, to find out the line polynomial factors of $f$ we first need to find out
the possible directions of the line polynomials, that is, the directions of
the (finitely many) outer edges of $f$, and then we need to check for which of
these possible directions $\mathbf{v}$ the polynomials in
$\mathcal{F}_{\mathbf{v}}(f)$ have a common factor. There are clearly
algorithms to find the outer edges of a given polynomial and to determine
whether finitely many one-variable polynomials have a common factor. If such a factor
exists, then by Theorem 5 the polynomial $f$ has a line polynomial factor in
this direction. We have proved the following theorem.
###### Theorem 6.
There is an algorithm to find the line polynomial factors of a given (Laurent)
polynomial in two variables.
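The following Python sketch assembles the pieces of this algorithm, reusing `v_fibers_normal_forms` from the sketch above and using sympy for the one-variable gcds. It is a rough illustration under our dict-based representation, not a reference implementation; the convex hull computation is the standard monotone chain.

```python
from math import gcd
from sympy import Poly, symbols
from sympy import gcd as poly_gcd

t = symbols('t')

def convex_hull(points):
    """Counterclockwise convex hull (Andrew's monotone chain)."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    cross = lambda o, a, b: (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])
    lower, upper = [], []
    for chain, seq in ((lower, pts), (upper, reversed(pts))):
        for p in seq:
            while len(chain) >= 2 and cross(chain[-2], chain[-1], p) <= 0:
                chain.pop()
            chain.append(p)
    return lower[:-1] + upper[:-1]

def edge_directions(support):
    """Primitive outer edge directions of a finite set of integer points."""
    hull = convex_hull(support)
    dirs = set()
    for a, b in zip(hull, hull[1:] + hull[:1]):
        d = (b[0] - a[0], b[1] - a[1])
        if d != (0, 0):
            g = gcd(d[0], d[1])
            dirs.add((d[0] // g, d[1] // g))
    return dirs

def line_factor_directions(f):
    """Directions v (with a witness gcd) in which f has a line polynomial
    factor, combining Lemma 3 and Theorem 5."""
    found = []
    for v in edge_directions(f):
        forms = [Poly(list(reversed(cs)), t)
                 for cs in v_fibers_normal_forms(f, v)]
        common = forms[0]
        for q in forms[1:]:
            common = poly_gcd(common, q)
        if common.degree() > 0:
            found.append((v, common.as_expr()))
    return found

# (xy - 1)(x + y) = x^2 y + x y^2 - x - y has line polynomial factors in
# the directions (1,1) and (1,-1); each direction is reported for both of
# its orientations.
f = {(2, 1): 1, (1, 2): 1, (1, 0): -1, (0, 1): -1}
print(line_factor_directions(f))
```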
## 5 Forced periodicity of perfect colorings with two colors
In this section we consider forced periodicity of two-dimensional perfect
colorings with only two colors. Without loss of generality we may assume that
$\mathcal{A}=\\{a_{1},a_{2}\\}=\\{0,1\\}$ ($a_{1}=0,a_{2}=1$) and consider
perfect colorings $c\in\mathcal{A}^{\Z^{2}}$ since the names of the colors do
not matter in our considerations. So, let $c\in\\{0,1\\}^{\Z^{2}}$ be a
perfect coloring with respect to $D\subseteq\Z^{2}$ and let
$\mathbf{B}=(b_{ij})_{2\times 2}$ be the matrix of $c$. Let us define a set
$C=\\{\mathbf{u}\in\Z^{2}\mid c_{\mathbf{u}}=1\\}$. This set has the property
that the neighborhood $\mathbf{u}+D$ of a point $\mathbf{u}$ contains exactly
$a=b_{21}$ points of color $1$ if $\mathbf{u}\not\in C$ and exactly $b=b_{22}$
points of color $1$ if $\mathbf{u}\in C$. In fact, $C$ is a _perfect
(multiple) covering_ of the infinite grid $G$ determined by the relative
neighborhood $D$. More precisely, the set $C$ is a (perfect) _$(D,b,a)$
-covering_ of $G$. This is a variant of the following definition: in any graph
a subset $C$ of its vertex set is an _$(r,b,a)$ -covering_ if the number of
vertices of $C$ in the $r$-neighborhood of a vertex $u$ is $a$ if $u\not\in C$
and $b$ if $u\in C$. See [1] for a reference. Clearly in translation invariant
graphs the $(r,b,a)$-coverings correspond to $(D,b,a)$-coverings where $D$ is
the relative $r$-neighborhood of the graph. Thus, it is natural to call any
perfect coloring with only two colors a perfect covering. Note that a
$(D,b,a)$-covering is a $D$-perfect coloring with the matrix
$\mathbf{B}=\begin{pmatrix}|D|-a&|D|-b\\\ a&b\end{pmatrix}.$
The following theorem by Axenovich states that “almost every”
$(1,b,a)$-covering in the square grid is two-periodic.
###### Theorem 7 ([1]).
If $b-a\neq 1$, then every $(1,b,a)$-covering in the square grid is two-
periodic.
For a finite set $D\subseteq\Z^{2}$ we define its _characteristic polynomial_
to be the polynomial $f_{D}(X)=\sum_{\mathbf{u}\in D}X^{-\mathbf{u}}$. We
denote by $\mathbbm{1}(X)$ the constant power series
$\sum_{\mathbf{u}\in\Z^{2}}X^{\mathbf{u}}$. If $c\in\\{0,1\\}^{\Z^{2}}$ is a
$(D,b,a)$-covering, then from the definition we get that
$f_{D}(X)c(X)=(b-a)c(X)+a\mathbbm{1}(X)$ which is equivalent to
$\left(f_{D}(X)-(b-a)\right)c(X)=a\mathbbm{1}(X)$. Thus, if $c$ is a
$(D,b,a)$-covering, then $f_{D}(X)-(b-a)$ is a periodizer of $c$. Hence, by
Theorem 2 the condition that the polynomial $f_{D}(X)-(b-a)$ has no line
polynomial factors is a sufficient condition for forced periodicity of a
$(D,b,a)$-covering. Hence, we have the following corollary of Theorem 2:
###### Corollary 8.
Let $D\subseteq\Z^{2}$ be a finite shape and let $b$ and $a$ be non-negative
integers. If $g=f_{D}-(b-a)$ has no line polynomial factors, then every
$(D,b,a)$-covering is two-periodic.
Using our formulation and the algebraic approach we get a simple proof for
Theorem 7:
###### Reformulation of Theorem 7.
Let $D$ be the relative 1-neighborhood of the square grid and assume that
$b-a\neq 1$. Then every $(D,b,a)$-covering is two-periodic.
###### Proof.
Let $c$ be an arbitrary $(D,b,a)$-covering. The outer edges of
$g=f_{D}-(b-a)=x^{-1}+y^{-1}+1-(b-a)+x+y$ are in directions
$(1,1),(-1,-1),(1,-1)$ and $(-1,1)$ and hence by Lemma 3 any line polynomial
factor of $g$ is either in direction $(1,1)$ or $(1,-1)$. For
$\mathbf{v}\in\\{(1,1),(1,-1)\\}$ we have
$\mathcal{F}_{\mathbf{v}}(g)=\\{1+t,1-(b-a)\\}$. See Figure 4 for an
illustration. Since $b-a\neq 1$, the constant $1-(b-a)$ is a non-zero monomial, and hence by Theorem 5 the periodizer $g\in\text{\rm Per}(c)$ has no line polynomial factors. The claim follows by Corollary 8. ∎
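For a concrete check, the `line_factor_directions` sketch from Section 4 can be run on the proper polynomial $xyg$ with, say, $b-a=3$ (an arbitrary value different from 1, chosen only for illustration):

```python
# xy * (x^{-1} + y^{-1} + 1 - (b-a) + x + y) with b - a = 3:
g = {(0, 1): 1, (1, 0): 1, (1, 1): 1 - 3, (2, 1): 1, (1, 2): 1}
print(line_factor_directions(g))   # [] : no line polynomial factors
```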
We also get a similar proof for the following known result concerning the
forced periodicity of perfect coverings in the square grid with radius $r\geq 2$.
###### Theorem 9 ([29]).
Let $r\geq 2$ and let $D$ be the relative $r$-neighborhood of the square grid.
Then every $(D,b,a)$-covering is two-periodic. In other words, all
$(r,b,a)$-coverings in the square grid are two-periodic for all $r\geq 2$.
###### Proof.
Let $c$ be an arbitrary $(D,b,a)$-covering. By Lemma 3 any line polynomial
factor of $g=f_{D}-(b-a)$ has direction $(1,1)$ or $(1,-1)$. So, assume that
$\mathbf{v}\in\\{(1,1),(1,-1)\\}$. We have
$\phi_{1}=1+t+\ldots+t^{r}\in\mathcal{F}_{\mathbf{v}}(g)$ and
$\phi_{2}=1+t+\ldots+t^{r-1}\in\mathcal{F}_{\mathbf{v}}(g)$. See Figure 4 for
an illustration in the case $r=2$. Since $\phi_{1}-\phi_{2}=t^{r}$, the
polynomials $\phi_{1}$ and $\phi_{2}$ have no common factors, and hence by
Theorem 5 the periodizer $g$ has no line polynomial factors. Corollary 8 gives
the claim. ∎
Figure 4: Pictorial illustrations for the proofs of Theorems 7, 9, 10, 11 and
12. The constellation on the left of the upper row illustrates the proof of
Theorem 7. The constellation in the center of the upper row illustrates the
proof of Theorem 9 with $r=2$. The constellation on the right of the upper row
illustrates the proof of Theorem 12 with $r=2$. The constellation on the left
of the lower row illustrates the proof of Theorem 10. The constellation on the
right of the lower row illustrates the proof of Theorem 11 with $r=2$. In each
of the constellations we have pointed out two normal forms with no common
factors in $\mathcal{F}_{\mathbf{v}}(g)$ from the points of $\text{\rm
supp}(g)$ for one of the outer edges $\mathbf{v}$ of $\text{\rm supp}(g)$.
There are analogous results in the triangular grid, and we can prove them
similarly using Corollary 8.
###### Theorem 10 ([29]).
Let $D$ be the relative 1-neighborhood of the triangular grid and assume that
$b-a\neq-1$. Then every $(D,b,a)$-covering in the triangular grid is two-
periodic. In other words, all $(1,b,a)$-coverings in the triangular grid are
two-periodic whenever $b-a\neq-1$.
###### Proof.
Let $c$ be an arbitrary $(D,b,a)$-covering. The outer edges of
$g=f_{D}-(b-a)=x^{-1}y^{-1}+x^{-1}+y^{-1}+1-(b-a)+x+y+xy$ have directions
$(1,1),(-1,-1),(1,0),(-1,0)$, $(0,1)$ and $(0,-1)$ and hence by Lemma 3 any
line polynomial factor of $g$ has direction $(1,1)$, $(1,0)$ or $(0,1)$. So,
let $\mathbf{v}\in\\{(1,1),(1,0),(0,1)\\}$. We have
$\mathcal{F}_{\mathbf{v}}(g)=\\{1+t,1+(1-(b-a))t+t^{2}\\}$. See Figure 4 for
an illustration. Polynomials $\phi_{1}=1+t$ and $\phi_{2}=1+(1-(b-a))t+t^{2}$
satisfy $\phi_{1}^{2}-\phi_{2}=(1+b-a)t$. Thus, they do not have any common
factors if $b-a\neq-1$ and hence by Theorem 5 the polynomial $g$ has no line
polynomial factors. The claim follows by Corollary 8. ∎
###### Theorem 11 ([29]).
Let $r\geq 2$ and let $D$ be the relative $r$-neighborhood of the triangular
grid. Then every $(D,b,a)$-covering is two-periodic. In other words, every
$(r,b,a)$-covering in the triangular grid is two-periodic for all $r\geq 2$.
###### Proof.
Let $c$ be an arbitrary $(D,b,a)$-covering. The outer edges of $g=f_{D}-(b-a)$
have directions $(1,1)$, $(-1,-1)$, $(1,0)$, $(-1,0)$, $(0,1)$ and $(0,-1)$,
and hence by Lemma 3 any line polynomial factor of $g$ has direction $(1,1)$,
$(1,0)$ or $(0,1)$. So, let $\mathbf{v}\in\\{(1,1),(1,0),(0,1)\\}$. There
exists $n\geq 1$ such that $1+t+\ldots+t^{n}\in\mathcal{F}_{\mathbf{v}}(g)$
and $1+t+\ldots+t^{n+1}\in\mathcal{F}_{\mathbf{v}}(g)$. See Figure 4 for an
illustration with $r=2$. Since these two polynomials have no common factors,
by Theorem 5 the polynomial $g$ has no line polynomial factors. Again,
Corollary 8 yields the claim. ∎
If $a\neq b$, then for all $r\geq 1$ any $(r,b,a)$-covering in the king grid
is two-periodic:
###### Theorem 12.
Let $r\geq 1$ be arbitrary and let $D$ be the relative $r$-neighborhood of the
king grid and assume that $a\neq b$. Then any $(D,b,a)$-covering is two-
periodic. In other words, all $(r,b,a)$-coverings in the king grid are two-
periodic whenever $a\neq b$.
###### Proof.
Let $c$ be an arbitrary $(D,b,a)$-covering. The outer edges of $g=f_{D}-(b-a)$
are in directions $(1,0),(-1,0),(0,1)$ and $(0,-1)$. Hence, by Lemma 3 any
line polynomial factor of $g$ has direction $(1,0)$ or $(0,1)$. Let
$\mathbf{v}\in\\{(1,0),(0,1)\\}$. We have
$\phi_{1}=1+t+\ldots+t^{r-1}+(1-(b-a))t^{r}+t^{r+1}+\ldots+t^{2r}\in\mathcal{F}_{\mathbf{v}}(g)$
and $\phi_{2}=1+t+\ldots+t^{2r}\in\mathcal{F}_{\mathbf{v}}(g)$. See Figure 4
for an illustration in the case $r=2$. Since $\phi_{2}-\phi_{1}=(b-a)t^{r}$ is
a non-trivial monomial, $\phi_{1}$ and $\phi_{2}$ have no common factors.
Thus, by Theorem 5 the polynomial $g$ has no line polynomial factors and the
claim follows by Corollary 8. ∎
In the above proofs we used the fact that two Laurent polynomials in one
variable have no common factors if and only if the ideal they generate is all
of $\C[t^{\pm 1}]$, and that this is the case if and only if they generate a
non-zero monomial. This is known as the _weak Nullstellensatz_ [6].
A shape $D\subseteq\Z^{2}$ is _convex_ if it is the intersection $D=\text{\rm
conv}(D)\cap\Z^{2}$ where $\text{\rm conv}(D)\subseteq\R^{2}$ is the real
convex hull of $D$. Above all our shapes were convex. Next we generalize the
above theorems and give a sufficient condition for forced periodicity of
$(D,b,a)$-coverings for convex $D$.
So, let $D\subseteq\Z^{2}$ be a finite convex shape. Any $(D,b,a)$-covering
has a periodizer $g=f_{D}-(b-a)$. As earlier, we study whether $g$ has any
line polynomial factors since if it does not, then Corollary 8 guarantees
forced periodicity. For any $\mathbf{v}\neq\mathbf{0}$ the set
$\mathcal{F}_{\mathbf{v}}(f_{D})$ contains only polynomials
$\phi_{n}=1+t+\ldots+t^{n-1}$ for different $n\geq 1$ since $D$ is convex: if
$D$ contains two points, then $D$ contains every point between them. Thus,
$\mathcal{F}_{\mathbf{v}}(g)$ contains only polynomials $\phi_{n}$ for
different $n\geq 1$ and, if $b-a\neq 0$, it may also contain a polynomial
$\phi_{n_{0}}-(b-a)t^{m_{0}}$ for some $n_{0}\geq 1$ such that
$\phi_{n_{0}}\in\mathcal{F}_{\mathbf{v}}(f_{D})$ and for some $m_{0}\geq 0$.
If $b-a=0$, then $g=f_{D}$ and thus
$\mathcal{F}_{\mathbf{v}}(g)=\mathcal{F}_{\mathbf{v}}(f_{D})$.
Two polynomials $\phi_{m}$ and $\phi_{n}$ have a common factor if and only if
$\gcd(m,n)>1$. More generally, the polynomials
$\phi_{n_{1}},\ldots,\phi_{n_{r}}$ have a common factor if and only if
$d=\gcd(n_{1},\ldots,n_{r})>1$ and, in fact, their greatest common factor is
$\phi_{d}=1+t+\ldots+t^{d-1}=\frac{t^{d}-1}{t-1},$
which is the product of the $e$th _cyclotomic polynomials_
$\prod_{\begin{subarray}{c}1\leq k\leq e\\\ \gcd(k,e)=1\end{subarray}}(t-e^{i\cdot\frac{2\pi k}{e}})$
over all divisors $e>1$ of $d$. (The $d$th cyclotomic polynomial alone accounts
only for the primitive $d$th roots of unity, so for composite $d$ it is a
proper factor of $\phi_{d}$.)
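This gcd fact is easy to test mechanically; the following sympy snippet (an independent sanity check, not part of the original argument) verifies it for small $m$ and $n$:

```python
from math import gcd
from sympy import expand, symbols
from sympy import gcd as poly_gcd

t = symbols('t')
phi = lambda n: sum(t**k for k in range(n))    # phi_n = 1 + t + ... + t^{n-1}

# gcd(phi_m, phi_n) = phi_{gcd(m,n)}; e.g. gcd(phi_8, phi_12) = phi_4.
for m in range(1, 13):
    for n in range(1, 13):
        assert expand(poly_gcd(phi(m), phi(n)) - phi(gcd(m, n))) == 0
```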
Let us introduce the following notation. For any polynomial $f$, we denote by
$\mathcal{F}^{\prime}_{\mathbf{v}}(f)$ the set of normal forms of the non-zero
fibers $\sum_{k\in\Z}f_{\mathbf{u}+k\mathbf{v}}X^{\mathbf{u}+k\mathbf{v}}$ for
all $\mathbf{u}\not\in\Z\mathbf{v}$. In other words, we exclude the fiber
through the origin. Let us also denote $\text{\rm fib}_{\mathbf{v}}(f)$ for
the normal form of the fiber $\sum_{k\in\Z}f_{k\mathbf{v}}X^{k\mathbf{v}}$
through the origin. We have
$\mathcal{F}_{\mathbf{v}}(f)=\mathcal{F}^{\prime}_{\mathbf{v}}(f)\cup\\{\text{\rm
fib}_{\mathbf{v}}(f)\\}$ if $\text{\rm fib}_{\mathbf{v}}(f)\neq 0$ and
$\mathcal{F}_{\mathbf{v}}(f)=\mathcal{F}^{\prime}_{\mathbf{v}}(f)$ if
$\text{\rm fib}_{\mathbf{v}}(f)=0$.
Applying Theorems 2 and 5 we have the following theorem that gives sufficient
conditions for every $(D,b,a)$-covering to be periodic for a finite and convex
$D$. This theorem generalizes the results proved above. In fact, they are
corollaries of the theorem. The first part of the theorem was also mentioned
in [7] in a slightly different context and in a more general form.
###### Theorem 13.
Let $D$ be a finite convex shape, $g=f_{D}-(b-a)$ and let $E$ be the set of
the outer edge directions of $g$.
* •
Assume that $b-a=0$. For any $\mathbf{v}\in E$ denote
$d_{\mathbf{v}}=\gcd(n_{1},\ldots,n_{r})$ where
$\mathcal{F}_{\mathbf{v}}(g)=\\{\phi_{n_{1}},\ldots,\phi_{n_{r}}\\}$. If
$d_{\mathbf{v}}=1$ holds for all $\mathbf{v}\in E$, then every
$(D,b,a)$-covering is two-periodic. If $d_{\mathbf{v}}=1$ holds for all but
some parallel $\mathbf{v}\in E$, then every $(D,b,a)$-covering is periodic.
* •
Assume that $b-a\neq 0$. For any $\mathbf{v}\in E$ denote
$d_{\mathbf{v}}=\gcd(n_{1},\ldots,n_{r})$ where
$\mathcal{F}^{\prime}_{\mathbf{v}}(g)=\\{\phi_{n_{1}},\ldots,\phi_{n_{r}}\\}$.
If $\phi_{d_{\mathbf{v}}}$ and $\text{\rm fib}_{\mathbf{v}}(g)$ have no common
factors for any $\mathbf{v}\in E$, then
every $(D,b,a)$-covering is two-periodic. If the condition holds for all but
some parallel $\mathbf{v}\in E$, then every $(D,b,a)$-covering is periodic.
(Note that the condition is satisfied, in particular, if $d_{\mathbf{v}}=1$.)
###### Proof.
Assume first that $b-a=0$. If $d_{\mathbf{v}}=1$ for all $\mathbf{v}\in E$,
then the $\mathbf{v}$-fibers of $g$ have no common factors and hence by
Theorem 5 $g$ has no line polynomial factors. If $d_{\mathbf{v}}=1$ holds for
all but some parallel $\mathbf{v}\in E$, then all the line polynomial factors
of $g$ are in parallel directions. Thus, the claim follows by Theorem 2.
Assume then that $b-a\neq 0$. If $\phi_{d_{\mathbf{v}}}$ and $\text{\rm
fib}_{\mathbf{v}}(g)$ have no common factors for all $\mathbf{v}\in E$, then
the polynomials in $\mathcal{F}_{\mathbf{v}}(g)$ have no common factors, so by
Theorem 5 $g$ has no line polynomial factors. If
the condition holds for all but some parallel $\mathbf{v}\in E$, then all the
line polynomial factors of $g$ are in parallel directions. Thus, by Theorem 2
the claim holds also in this case. ∎
## 6 Forced periodicity of perfect colorings over arbitrarily large alphabets
In this section we prove a theorem that gives a sufficient condition for
forced periodicity of two-dimensional perfect colorings over an arbitrarily
large alphabet. As corollaries of the theorem and theorems from the previous
section we obtain conditions for forced periodicity of perfect colorings in
two-dimensional infinite grid graphs.
We start by proving some lemmas that work in any dimension. We consider the
vector presentations of perfect colorings because this way we get a non-
trivial annihilator for any such vector presentation:
###### Lemma 14.
Let $c$ be the vector presentation of a $D$-perfect coloring over an alphabet
of size $n$ with matrix $\mathbf{B}=(b_{ij})_{n\times n}$. Then $c$ is
annihilated by the polynomial
$f(X)=\sum_{\mathbf{u}\in D}\mathbf{I}X^{-\mathbf{u}}-\mathbf{B}.$
_Remark._ Note the similarity of the above annihilator to the periodizer
$\sum_{\mathbf{u}\in D}X^{-\mathbf{u}}-(b-a)$ of a $(D,b,a)$-covering.
###### Proof.
Let $\mathbf{v}\in\Z^{d}$ be arbitrary and assume that
$c_{\mathbf{v}}=\mathbf{e}_{j}$. Then
$(\mathbf{B}c)_{\mathbf{v}}=\mathbf{B}\mathbf{e}_{j}$ is the $j$th column of
$\mathbf{B}$. On the other hand, from the definition of $\mathbf{B}$ we have
$((\sum_{\mathbf{u}\in
D}\mathbf{I}X^{-\mathbf{u}})c)_{\mathbf{v}}=\sum_{\mathbf{u}\in
D}c_{\mathbf{v}+\mathbf{u}}=\sum_{i=1}^{n}b_{ij}\mathbf{e}_{i}$ which is also
the $j$th column of $\mathbf{B}$. Thus, $(fc)_{\mathbf{v}}=0$ and hence $fc=0$
since $\mathbf{v}$ was arbitrary. ∎
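As a sanity check (our own example, not from the paper), the checkerboard coloring of the square grid with the relative 1-neighborhood $D$ is a $D$-perfect coloring with matrix $\mathbf{B}=\begin{pmatrix}1&4\\\ 4&1\end{pmatrix}$, and the annihilation claimed in Lemma 14 can be verified numerically on a window:

```python
# Lemma 14 for the checkerboard on the square grid: D is the relative
# 1-neighborhood and B = [[1, 4], [4, 1]] (column j lists the colors seen
# in the neighborhood of a cell of color a_j).
D = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]
B = [[1, 4], [4, 1]]

def c(u):
    """Vector presentation of the checkerboard: e_1 or e_2."""
    return [1, 0] if (u[0] + u[1]) % 2 == 0 else [0, 1]

def fc(v):
    """((sum_{u in D} I X^{-u}) c - B c)_v, which Lemma 14 claims is 0."""
    s = [sum(c((v[0] + u[0], v[1] + u[1]))[i] for u in D) for i in range(2)]
    Bc = [sum(B[i][j] * c(v)[j] for j in range(2)) for i in range(2)]
    return [s[i] - Bc[i] for i in range(2)]

assert all(fc((i, j)) == [0, 0] for i in range(-4, 5) for j in range(-4, 5))
```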
The following lemma shows that as in the case of integral configurations with
non-trivial annihilators, also the vector presentation of a perfect coloring
has a special annihilator which is a product of difference polynomials. By
congruence of two polynomials with integer matrices as coefficients (mod $p$)
we mean that their corresponding coefficients are congruent (mod $p$) and by
congruence of two integer matrices (mod $p$) we mean that their corresponding
components are congruent (mod $p$).
###### Lemma 15.
Let $c$ be the vector presentation of a $D$-perfect coloring over an alphabet
of size $n$ with matrix $\mathbf{B}=(b_{ij})_{n\times n}$. Then $c$ is
annihilated by the polynomial
$g(X)=(\mathbf{I}X^{\mathbf{v}_{1}}-\mathbf{I})\cdots(\mathbf{I}X^{\mathbf{v}_{m}}-\mathbf{I})$
for some vectors $\mathbf{v}_{1},\ldots,\mathbf{v}_{m}$.
###### Proof.
By Lemma 14 the power series $c$ is annihilated by $f(X)=\sum_{\mathbf{u}\in
D}\mathbf{I}X^{-\mathbf{u}}-\mathbf{B}$. Let $p$ be a prime larger than
$|D|c_{\text{max}}$, where $c_{\text{max}}$ is the maximum absolute value of the
components of the coefficients of $c$. Since the coefficients of $f$ commute
with each other, we have for any positive integer $k$ using the binomial
theorem that
$f^{p^{k}}=f^{p^{k}}(X)\equiv\sum_{\mathbf{u}\in
D}\mathbf{I}X^{-p^{k}\mathbf{u}}-\mathbf{B}^{p^{k}}\ \ (\text{mod }p).$
We have $f^{p^{k}}(X)c(X)\equiv 0\ \ (\text{mod }p)$. There are only finitely
many distinct matrices $\mathbf{B}^{p^{k}}\ \ (\text{mod }p)$. So, let $k$ and
$k^{\prime}$ be distinct and such that
$\mathbf{B}^{p^{k}}\equiv\mathbf{B}^{p^{k^{\prime}}}\ \ (\text{mod }p)$. Then
the coefficients of $f^{\prime}=f^{p^{k}}-f^{p^{k^{\prime}}}\ \ (\text{mod
}p)$ are among $\mathbf{I}$ and $-\mathbf{I}$. Since $f^{p^{k}}c\equiv 0\ \
(\text{mod }p)$ and $f^{p^{k^{\prime}}}c\equiv 0\ \ (\text{mod }p)$, also
$f^{\prime}c\equiv 0\ \ (\text{mod }p).$
The components of the configuration $f^{\prime}c$ are bounded in absolute
value by $|D|c_{\text{max}}$. Since we chose $p$ larger than $|D|c_{\text{max}}$,
this implies that
$f^{\prime}c=0.$
Because $f^{\prime}=\sum_{\mathbf{u}\in
P_{1}}\mathbf{I}X^{\mathbf{u}}-\sum_{\mathbf{u}\in
P_{2}}\mathbf{I}X^{\mathbf{u}}$ for some finite subsets $P_{1}$ and $P_{2}$ of
$\Z^{d}$, the annihilation of $c$ by $f^{\prime}$ is equivalent to the
annihilation of every layer of $c$ by $f^{\prime\prime}=\sum_{\mathbf{u}\in
P_{1}}X^{\mathbf{u}}-\sum_{\mathbf{u}\in P_{2}}X^{\mathbf{u}}$. Thus, every
layer of $c$ has a non-trivial annihilator and hence by Theorem 1 every layer
of $c$ has a special annihilator which is a product of difference polynomials.
Let
$g^{\prime}=(X^{\mathbf{v}_{1}}-1)\cdots(X^{\mathbf{v}_{m}}-1)$
be the product of all these special annihilators. Since $g^{\prime}$
annihilates every layer of $c$, the polynomial
$g=(\mathbf{I}X^{\mathbf{v}_{1}}-\mathbf{I})\cdots(\mathbf{I}X^{\mathbf{v}_{m}}-\mathbf{I})$
annihilates $c$. ∎
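The pigeonhole step, namely that the matrices $\mathbf{B}^{p^{k}}\ (\text{mod }p)$ repeat, is easy to watch in practice. Below is a small numpy illustration (ours, not the paper's) using the matrix $\mathbf{B}$ of the checkerboard sketch above and the arbitrarily chosen prime $p=5$, which satisfies the size requirement of the proof here:

```python
import numpy as np

def mat_pow_mod(B, e, p):
    """B^e modulo p by binary exponentiation."""
    result, base = np.eye(B.shape[0], dtype=np.int64), B % p
    while e:
        if e & 1:
            result = result @ base % p
        base = base @ base % p
        e >>= 1
    return result

p, B = 5, np.array([[1, 4], [4, 1]], dtype=np.int64)
seen, k = {}, 1
while True:
    key = mat_pow_mod(B, p**k, p).tobytes()
    if key in seen:
        print(f"B^(5^{seen[key]}) = B^(5^{k})  (mod 5)")
        break
    seen[key], k = k, k + 1
# For this B the repetition appears immediately: B^(5^1) = B^(5^2) (mod 5).
```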
###### Lemma 16.
Let $p$ be a prime and let $H$ be an additive CA over $\Z_{p}^{n}$ determined
by a polynomial
$h=\sum_{i=0}^{k}\mathbf{A}_{i}X^{\mathbf{u}_{i}}\in\Z_{p}^{n\times n}[X^{\pm
1}]$ whose coefficients $\mathbf{A}_{i}$ commute with each other. Assume that
there exist $M\in\Z_{p}\setminus\\{0\\}$ and matrices
$\mathbf{C}_{0},\ldots,\mathbf{C}_{k}$ that commute with each other and with
every $\mathbf{A}_{i}$ such that
$\mathbf{C}_{0}\mathbf{A}_{0}+\ldots+\mathbf{C}_{k}\mathbf{A}_{k}=M\cdot\mathbf{I}$
holds in $\Z_{p}^{n\times n}$. Then $H$ is surjective.
###### Proof.
Assume to the contrary that $H$ is not surjective. By the Garden-of-Eden theorem
$H$ is not pre-injective and hence there exist two distinct asymptotic
configurations $c_{1}$ and $c_{2}$ such that $H(c_{1})=H(c_{2})$, that is,
$h(X)c_{1}(X)=h(X)c_{2}(X)$. Thus, $h$ is an annihilator of $e=c_{1}-c_{2}$.
Without loss of generality we may assume that $c_{1}(\mathbf{0})\neq
c_{2}(\mathbf{0})$, i.e., that $e(\mathbf{0})=\mathbf{v}\neq\mathbf{0}$. Let
$l$ be such that the support $\text{\rm supp}(e)=\\{\mathbf{u}\in\Z^{d}\mid
e(\mathbf{u})\neq\mathbf{0}\\}$ of $e$ is contained in a $d$-dimensional
$p^{l}\times\ldots\times p^{l}$ hypercube. Note that over $\Z_{p}^{n\times n}$
we have
$h^{p^{l}}=\sum_{i=0}^{k}\mathbf{A}_{i}^{p^{l}}X^{p^{l}\mathbf{u}_{i}},$
which is also an annihilator of $e$. Hence, by the choice of $l$ we have
$\mathbf{A}_{i}^{p^{l}}\mathbf{v}=\mathbf{0}$ for all $i\in\\{0,\ldots,k\\}$.
By raising the identity
$\mathbf{C}_{0}\mathbf{A}_{0}+\ldots+\mathbf{C}_{k}\mathbf{A}_{k}=M\cdot\mathbf{I}$
to power $p^{l}$ and multiplying the result by the vector $\mathbf{v}$ from
the right we get
$M^{p^{l}}\cdot\mathbf{v}=\mathbf{C}_{0}^{p^{l}}\mathbf{A}_{0}^{p^{l}}\mathbf{v}+\ldots+\mathbf{C}_{k}^{p^{l}}\mathbf{A}_{k}^{p^{l}}\mathbf{v}=\mathbf{0}+\ldots+\mathbf{0}=\mathbf{0}.$
However, this is a contradiction because $M^{p^{l}}\mathbf{v}\neq\mathbf{0}$.
Thus, $H$ must be surjective as claimed. ∎
###### Theorem 17.
Let $D\subseteq\Z^{2}$ be a finite shape and assume that there exists an
integer $t_{0}$ such that the polynomial $f_{D}-t=\sum_{\mathbf{u}\in
D}X^{-\mathbf{u}}-t$ has no line polynomial factors whenever $t\neq t_{0}$.
Then any $D$-perfect coloring with matrix $\mathbf{B}$ is two-periodic
whenever $\det(\mathbf{B}-t_{0}\mathbf{I})\neq 0$. If $f_{D}-t$ has no line
polynomial factors for any $t$, then every $D$-perfect coloring is two-
periodic.
###### Proof.
Let $c$ be the vector presentation of a $D$-perfect coloring with matrix
$\mathbf{B}$. By Lemmas 14 and 15 it has two distinct annihilators:
$f=\sum_{\mathbf{u}\in D}\mathbf{I}X^{-\mathbf{u}}-\mathbf{B}$ and
$g=(\mathbf{I}X^{\mathbf{v}_{1}}-\mathbf{I})\cdots(\mathbf{I}X^{\mathbf{v}_{m}}-\mathbf{I})$.
Let us replace $\mathbf{I}$ by 1 and $\mathbf{B}$ by a variable $t$ and
consider the corresponding integral polynomials
$f^{\prime}=\sum_{\mathbf{u}\in D}X^{-\mathbf{u}}-t=f_{D}-t$ and
$g^{\prime}=(X^{\mathbf{v}_{1}}-1)\cdots(X^{\mathbf{v}_{m}}-1)$ in
$\C[x,y,t]$. Here $X=(x,y)$.
Without loss of generality we may assume that $f^{\prime}$ and $g^{\prime}$
are proper polynomials. Indeed, we can multiply $f^{\prime}$ and $g^{\prime}$
by monomials such that the obtained polynomials $f^{\prime\prime}$ and
$g^{\prime\prime}$ are proper polynomials and that they have a common factor
if and only if $f^{\prime}$ and $g^{\prime}$ have a common factor. So, we may
consider $f^{\prime\prime}$ and $g^{\prime\prime}$ instead of $f^{\prime}$ and
$g^{\prime}$ if they are not proper polynomials.
We consider the $y$-resultant $\text{\rm Res}_{y}(f^{\prime},g^{\prime})$ of
$f^{\prime}$ and $g^{\prime}$, and write
$\text{\rm
Res}_{y}(f^{\prime},g^{\prime})=f_{0}(t)+f_{1}(t)x+\ldots+f_{k}(t)x^{k}.$
By the properties of resultants $\text{\rm Res}_{y}(f^{\prime},g^{\prime})$ is
in the ideal generated by $f^{\prime}$ and $g^{\prime}$, and it can be the
zero polynomial only if $f^{\prime}$ and $g^{\prime}$ have a common factor.
Since $g^{\prime}$ is a product of line polynomials, any common factor of
$f^{\prime}$ and $g^{\prime}$ is also a product of line polynomials. In
particular, if $f^{\prime}$ and $g^{\prime}$ have a common factor, then they
have a common line polynomial factor. However, by the assumption $f^{\prime}$
has no line polynomial factors if $t\neq t_{0}$. Thus, $f^{\prime}$ and
$g^{\prime}$ may have a common factor only if $t=t_{0}$ and hence $\text{\rm
Res}_{y}(f^{\prime},g^{\prime})$ can be zero only if $t=t_{0}$. On the other
hand, $\text{\rm Res}_{y}(f^{\prime},g^{\prime})=0$ if and only if
$f_{0}(t)=\ldots=f_{k}(t)=0$. We conclude that
$\gcd(f_{0}(t),\ldots,f_{k}(t))=(t-t_{0})^{m}$ for some $m\geq 0$. Thus,
$\text{\rm
Res}_{y}(f^{\prime},g^{\prime})=(t-t_{0})^{m}(f^{\prime}_{0}(t)+f^{\prime}_{1}(t)x+\ldots+f^{\prime}_{k}(t)x^{k})$
where the polynomials $f^{\prime}_{0}(t),\ldots,f^{\prime}_{k}(t)$ have no
common factors.
By the Euclidean algorithm there are polynomials $a_{0}(t),\ldots,a_{k}(t)$
such that
$a_{0}(t)f_{0}^{\prime}(t)+\ldots+a_{k}(t)f_{k}^{\prime}(t)=1.$ (1)
Moreover, the coefficients of the polynomials $a_{0}(t),\ldots,a_{k}(t)$ are
rational numbers because the polynomials
$f^{\prime}_{0}(t),\ldots,f^{\prime}_{k}(t)$ are integral. Note that if
$f^{\prime}$ has no line polynomial factors for any $t$, then $m=0$ and hence
$f_{i}^{\prime}(t)=f_{i}(t)$ for every $i\in\\{0,\ldots,k\\}$.
Let us now consider the polynomial
$(\mathbf{B}-t_{0}\mathbf{I})^{m}(f_{0}^{\prime}(\mathbf{B})+f_{1}^{\prime}(\mathbf{B})x+\ldots+f^{\prime}_{k}(\mathbf{B})x^{k})$
which is obtained from $\text{\rm Res}_{y}(f^{\prime},g^{\prime})$ by plugging
back $\mathbf{I}$ and $\mathbf{B}$ in the place of $1$ and $t$, respectively.
Since $\text{\rm Res}_{y}(f^{\prime},g^{\prime})$ is in the ideal generated by
$f^{\prime}$ and $g^{\prime}$, the above polynomial is in the ideal generated
by $f$ and $g$. Thus, it is an annihilator of $c$ because both $f$ and $g$ are
annihilators of $c$.
Assume that $\det(\mathbf{B}-t_{0}\mathbf{I})\neq 0$ or that $m=0$. Now also
$h=f_{0}^{\prime}(\mathbf{B})+f_{1}^{\prime}(\mathbf{B})x+\ldots+f^{\prime}_{k}(\mathbf{B})x^{k}$
is an annihilator of $c$. Since $f^{\prime}_{0}(t),\ldots,f^{\prime}_{k}(t)$
have no common factors, $h$ is non-zero: otherwise we would have
$f_{0}^{\prime}(\mathbf{B})=\ldots=f_{k}^{\prime}(\mathbf{B})=0$, and then the
minimal polynomial of $\mathbf{B}$ would be a common factor of
$f^{\prime}_{0}(t),\ldots,f^{\prime}_{k}(t)$, a contradiction.
Plugging $t=\mathbf{B}$ into Equation (1) we get
$a_{0}(\mathbf{B})f_{0}^{\prime}(\mathbf{B})+\ldots+a_{k}(\mathbf{B})f_{k}^{\prime}(\mathbf{B})=\mathbf{I}.$
Let us multiply the above equation by a common multiple $M$ of all the
denominators of the rational numbers appearing in the equation and let us
consider it (mod $p$) where $p$ is a prime that does not divide $M$. We obtain
the following identity
$a_{0}^{\prime}(\mathbf{B})f_{0}^{\prime}(\mathbf{B})+\ldots+a_{k}^{\prime}(\mathbf{B})f_{k}^{\prime}(\mathbf{B})=M\cdot\mathbf{I}\not\equiv
0\ \ (\text{mod }p)$
where all the coefficients in the equation are integer matrices.
By Lemma 16 the additive CA determined by
$h=\sum_{i=0}^{k}f_{i}^{\prime}(\mathbf{B})x^{i}$ is surjective. Since $h$ is
a polynomial in variable $x$ only, it defines a 1-dimensional CA $H$ which is
surjective and which maps every horizontal fiber of $c$ to 0. Hence, every
horizontal fiber of $c$ is a pre-image of 0. Let $c^{\prime}$ be a horizontal
fiber of $c$. The Garden-of-Eden theorem implies that $0$ has finitely many,
say $N$, pre-images under $H$. Since also every translation of $c^{\prime}$ is
a pre-image of $0$, the $N+1$ configurations
$c^{\prime},\tau(c^{\prime}),\ldots,\tau^{N}(c^{\prime})$ cannot all be
distinct, and hence $c^{\prime}=\tau^{i}(c^{\prime})$ for some
$i\in\\{1,\ldots,N\\}$. Thus, $N!$ is a common period of all the
horizontal fibers of $c$ and hence $c$ is horizontally periodic.
Repeating the same argument for the $x$-resultant of $f^{\prime}$ and
$g^{\prime}$ we can show that $c$ is also vertically periodic. Thus, $c$ is
two-periodic. ∎
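The resultant step of the proof can be reproduced with sympy. The sketch below (our own illustration, not the paper's code) takes the square grid 1-neighborhood, for which $t_{0}=1$, multiplies $f_{D}-t$ by $xy$ to get a proper polynomial, pairs it with a product $g^{\prime}$ of difference polynomials chosen to share line polynomial factors with it at $t=1$, and extracts the factor $(t-t_{0})^{m}$ from the coefficients of $\text{\rm Res}_{y}(f^{\prime},g^{\prime})$:

```python
from sympy import Poly, factor, gcd_list, resultant, symbols

x, y, t = symbols('x y t')

# f' = xy (f_D - t) for the square grid 1-neighborhood; at t = 1 it
# factors into the line polynomials (x + y)(xy + 1).
f = x**2*y + x*y**2 + (1 - t)*x*y + x + y
# g' = (X^{(2,2)} - 1) * y^2 (X^{(2,-2)} - 1): a product of difference
# polynomials (up to a monomial) sharing the factors (xy + 1) and (x + y)
# with f' at t = 1.
g = (x**2*y**2 - 1) * (x**2 - y**2)

res = resultant(f, g, y)                    # a polynomial in x and t
coeffs = Poly(res, x).all_coeffs()          # the f_i(t) of the proof
print(factor(gcd_list(coeffs)))             # a power of (t - 1)
```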
As corollaries of the above theorem and theorems from the previous section, we
obtain new proofs for forced periodicity of perfect colorings in the square
and the triangular grids, and a new result for forced periodicity of perfect
colorings in the king grid:
###### Corollary 18 ([29]).
Let $D$ be the relative 1-neighborhood of the square grid. Then any
$D$-perfect coloring with matrix $\mathbf{B}$ is two-periodic whenever
$\det(\mathbf{B}-\mathbf{I})\neq 0$. In other words, any $1$-perfect coloring
with matrix $\mathbf{B}$ in the square grid is two-periodic whenever
$\det(\mathbf{B}-\mathbf{I})\neq 0$.
###### Proof.
In the proof of Theorem 7 it was shown that the polynomial $f_{D}-t$ has no
line polynomial factors if $t\neq 1$. Thus, by Theorem 17 any
$(D,\mathbf{B})$-coloring is two-periodic whenever
$\det(\mathbf{B}-\mathbf{I})\neq 0$. ∎
###### Corollary 19 ([29]).
Let $D$ be the relative 1-neighborhood of the triangular grid. Then any
$D$-perfect coloring with matrix $\mathbf{B}$ is two-periodic whenever
$\det(\mathbf{B}+\mathbf{I})\neq 0$. In other words, any $1$-perfect coloring
with matrix $\mathbf{B}$ in the triangular grid is two-periodic whenever
$\det(\mathbf{B}+\mathbf{I})\neq 0$.
###### Proof.
In the proof of Theorem 10 it was shown that the polynomial $f_{D}-t$ has no
line polynomial factors if $t\neq-1$. Thus, by Theorem 17 any
$(D,\mathbf{B})$-coloring is two-periodic whenever
$\det(\mathbf{B}+\mathbf{I})\neq 0$. ∎
###### Corollary 20 ([29]).
Let $r\geq 2$ and let $D$ be the relative $r$-neighborhood of the square grid.
Then every $D$-perfect coloring is two-periodic. In other words, any
$r$-perfect coloring in the square grid is two-periodic for all $r\geq 2$.
###### Proof.
In the proof of Theorem 9 it was shown that the polynomial $f_{D}-t$ has no
line polynomial factors for any $t$. Thus, by Theorem 17 every $D$-perfect
coloring is two-periodic. ∎
###### Corollary 21 ([29]).
Let $r\geq 2$ and let $D$ be the relative $r$-neighborhood of the triangular
grid. Then every $D$-perfect coloring is two-periodic. In other words, any
$r$-perfect coloring in the triangular grid is two-periodic for all $r\geq 2$.
###### Proof.
In the proof of Theorem 11 it was shown that the polynomial $f_{D}-t$ has no
line polynomial factors for any $t$. Thus, by Theorem 17 every $D$-perfect
coloring is two-periodic. ∎
###### Corollary 22.
Let $r\geq 1$ and let $D$ be the relative $r$-neighborhood of the king grid.
Then every $D$-perfect coloring with matrix $\mathbf{B}$ is two-periodic
whenever $\det(\mathbf{B})\neq 0$. In other words, every $r$-perfect coloring
with matrix $\mathbf{B}$ in the king grid is two-periodic whenever
$\det(\mathbf{B})\neq 0$.
###### Proof.
In the proof of Theorem 12 we showed that the polynomial $f_{D}-t$ has no line
polynomial factors if $t\neq 0$. Thus, by Theorem 17 any
$(D,\mathbf{B})$-coloring is two-periodic whenever $\det(\mathbf{B})\neq 0$. ∎
_Remark._ Note that the results in Corollaries 18, 19, 20 and 21 were stated
and proved in [29] in a slightly more general form. Indeed, in [29] it was
proved that if an integral vector configuration $c\in(\Z^{n})^{\Z^{2}}$ is annihilated by
$\sum_{\mathbf{u}\in D}\mathbf{I}X^{-\mathbf{u}}-\mathbf{B}$
where $\mathbf{B}\in\Z^{n\times n}$ is an arbitrary integer matrix whose
determinant satisfies the conditions in the four corollaries and $D$ is as in
the corollaries, then $c$ is necessarily periodic. This kind of configuration
was called a _generalized centered function_. However, in Lemma 14 we proved
that the vector presentation of any $D$-perfect coloring with matrix
$\mathbf{B}$ is annihilated by this polynomial, that is, we proved that the
vector presentation of a perfect coloring is a generalized centered function.
By analyzing the proof of Theorem 17 we see that the theorem holds also for
generalized centered functions and hence the corollaries following it hold
also for generalized centered functions, and thus we have the same results as
in [29].
## 7 Forced periodicity of configurations of low abelian complexity
In this section we prove a statement concerning forced periodicity of two-
dimensional configurations of low abelian complexity which generalizes a
result in [7]. In fact, as in [7] we generalize the definition of abelian
complexity from finite patterns to polynomials and prove a statement of forced
periodicity under this more general definition of abelian complexity.
Let $c\in\\{\mathbf{e}_{1},\ldots,\mathbf{e}_{n}\\}^{\Z^{d}}$ and let
$D\subseteq\Z^{d}$ be a finite shape. Consider the polynomial
$f=\mathbf{I}\cdot f_{D}(X)=\sum_{\mathbf{u}\in
D}\mathbf{I}X^{-\mathbf{u}}\in\Z^{n\times n}[X^{\pm 1}]$. The $i$th
component of $(fc)_{\mathbf{v}}=\sum_{\mathbf{u}\in
D}\mathbf{c}_{\mathbf{v}+\mathbf{u}}$ gives the number of cells
of color $\mathbf{e}_{i}$ in the $D$-neighborhood of $\mathbf{v}$ in $c$ and
hence the abelian complexity of $c$ with respect to $D$ is exactly the number
of distinct coefficients of $fc$.
More generally, we define the abelian complexity $A(c,f)$ of an integral
vector configuration $c\in\mathcal{A}^{\Z^{d}}$, where $\mathcal{A}$ is a
finite set of length-$n$ integer vectors, _with respect to a polynomial $f\in\Z^{n\times
n}[X^{\pm 1}]$_ as
$A(c,f)=|\\{(fc)_{\mathbf{v}}\mid\mathbf{v}\in\Z^{d}\\}|.$
This definition can be extended to integral configurations and polynomials.
Indeed, we define the abelian complexity $A(c,f)$ of a configuration
$c\in\mathcal{A}^{\Z^{d}}$ where $\mathcal{A}\subseteq\Z$ with respect to a
polynomial $f=\sum f_{i}X^{\mathbf{u}_{i}}\in\Z[X^{\pm 1}]$ to be the abelian
complexity $A(c^{\prime},f^{\prime})$ of the vector presentation $c^{\prime}$
of $c$ with respect to the polynomial $f^{\prime}=\mathbf{I}\cdot f=\sum
f_{i}\cdot\mathbf{I}\cdot X^{\mathbf{u}_{i}}$. Consequently, we say that $c$
has low abelian complexity with respect to a polynomial $f$ if $A(c,f)=1$.
Clearly this definition is consistent with the definition of low abelian
complexity of a configuration with respect to a finite shape since if $c$ is
an integral configuration, then $A(c,D)=1$ if and only if $A(c,f_{D})=1$, and
if $c$ is an integral vector configuration, then $A(c,D)=1$ if and only if
$A(c,\mathbf{I}\cdot f_{D})=1$.
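For a binary integral configuration and $f=f_{D}$ the count vector $(f^{\prime}c^{\prime})_{\mathbf{v}}$ is determined by the number of 1s in $\mathbf{v}+D$, so on a finite window $A(c,f_{D})$ can be estimated by counting distinct window sums. A small sketch (our own example: the tiling of $\Z^{2}$ by a horizontal domino, which has abelian complexity 1):

```python
# Distinct values of (f_D c)_v = sum_{u in D} c_{v+u} over a finite window.
def distinct_counts(c, D, window):
    return {sum(c((v[0] + u[0], v[1] + u[1])) for u in D) for v in window}

# The "even columns" tiling of Z^2 by the domino F = {(0,0), (1,0)}:
# every point has a unique representation, so (f_F c)_v = 1 everywhere.
F = [(0, 0), (1, 0)]
c = lambda u: 1 if u[0] % 2 == 0 else 0
window = [(i, j) for i in range(-5, 6) for j in range(-5, 6)]
print(distinct_counts(c, F, window))   # {1}, i.e. low abelian complexity
```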
We study forced periodicity of two-dimensional configurations of low abelian
complexity. Note that a configuration of low abelian complexity is not
necessarily periodic. Indeed, in [30] it was shown that there exist non-
periodic two-dimensional configurations that have abelian complexity
$A(c,D)=1$ for some finite shape $D$. However, in [7] it was shown that if
$A(c,f)=1$ and if the polynomial $f$ has no line polynomial factors, then $c$
is two-periodic assuming that the support of $f$ is convex. The following
theorem strengthens this result and shows that the convexity assumption of the
support of the polynomial is not needed. We obtain this result as a corollary
of Theorem 2.
###### Theorem 23.
Let $c$ be a two-dimensional integral configuration over an alphabet of size
$n$ and assume that it has low abelian complexity with respect to a polynomial
$f\in\Z[x^{\pm 1},y^{\pm 1}]$. If $f$ has no line polynomial factors, then $c$
is two-periodic. If $f$ has line polynomial factors in a unique primitive
direction $\mathbf{v}$, then $c$ is $\mathbf{v}$-periodic. Thus, if $f_{D}$
has no line polynomial factors or its line polynomial factors are in a unique
primitive direction, then any configuration that has low abelian complexity
with respect to $D$ is two-periodic or periodic, respectively.
###### Proof.
By the assumption that $A(c,f)=1$ we have
$f^{\prime}c^{\prime}=\mathbf{c}_{0}\mathbbm{1}$ for some
$\mathbf{c}_{0}\in\Z^{n}$ where $c^{\prime}$ is the vector presentation of $c$
and $f^{\prime}=\mathbf{I}\cdot f$. Thus, $f$ periodizes every layer of
$c^{\prime}$. If $f$ has no line polynomial factors, then by Theorem 2 every
layer of $c^{\prime}$ is two-periodic and hence $c^{\prime}$ is two-periodic.
If $f$ has line polynomial factors in a unique primitive direction
$\mathbf{v}$, then by Theorem 2 every layer of $c^{\prime}$ is
$\mathbf{v}$-periodic and hence also $c^{\prime}$ is $\mathbf{v}$-periodic.
Since $c$ is periodic if and only if its vector presentation $c^{\prime}$ is
periodic, the claim follows. ∎
_Remark._ In [7] a polynomial $f\in\Z[X^{\pm 1}]$ is called abelian rigid if
an integral configuration $c$ having low abelian complexity with respect to
$f$ implies that $c$ is strongly periodic. In the above theorem we proved that
if a polynomial $f\in\Z[x^{\pm 1},y^{\pm 1}]$ has no line polynomial factors
then it is abelian rigid. Also, the converse holds as proved in [7], that is,
if a polynomial $f\in\Z[x^{\pm 1},y^{\pm 1}]$ has a line polynomial factor
then it is not abelian rigid. This means that if $f$ has a line polynomial
factor then there exists a configuration which is not two-periodic but has low
abelian complexity with respect to $f$. In fact this direction holds for all
$d$, not just for $d=2$ as reported in [7].
In the following example we introduce an open problem related to
configurations of low abelian complexity.
###### Example 24 (Periodic tiling problem).
This example concerns _translational tilings_ by a single tile. In this
context by a tile we mean any finite subset $F\subseteq\Z^{d}$ and by a tiling
by the tile $F$ we mean such subset $C\subseteq\Z^{d}$ that every point of the
grid $\Z^{d}$ has a unique presentation as a sum of an element of $F$ and an
element of $C$. Presenting the tiling $C$ as its indicator function we obtain
a $d$-dimensional binary configuration $c\in\\{0,1\\}^{\Z^{d}}$ defined by
$c_{\mathbf{u}}=\begin{cases}1,\text{ if }\mathbf{u}\in C\\\ 0,\text{ if
}\mathbf{u}\not\in C\end{cases}.$
The configuration $c$ has exactly $|F|$ different patterns of shape $-F$,
namely the patterns with exactly one symbol 1. In other words, it has low
complexity with respect to $-F$. Let $f=f_{F}=\sum_{\mathbf{u}\in
F}X^{-\mathbf{u}}$ be the characteristic polynomial of $F$. Since $C$ is a
tiling by $F$, we have $fc=\mathbbm{1}$. In fact, $c$ has low abelian
complexity with respect to $f$ and $-F$. Thus, by Theorem 23 any tiling by
$F\subset\Z^{2}$ is two-periodic if $f_{F}$ has no line polynomial factors.
The periodic tiling problem claims that if there exists a tiling by a tile
$F\subseteq\Z^{d}$, then there exists also a periodic tiling by $F$ [20, 31].
By a simple pigeonholing argument it can be seen that in dimension $d=1$ all
translational tilings by a single tile are periodic and hence the periodic
tiling problem holds in dimension 1 [26]. For $d\geq 2$ the conjecture is much
trickier, and only recently it was proved by Bhattacharya that it holds for
$d=2$ [3]. A slightly different proof for the case $d=2$, with some
generalizations, was presented in [9]. For $d\geq 3$ the conjecture is still partly
open. However, very recently it has been proved that for some sufficiently
large $d$ the periodic tiling conjecture is false [10].
## 8 Algorithmic aspects
Trivially, all configurations in a subshift are periodic if there are no
configurations in the subshift at all! It is therefore useful to be able to
detect such trivial cases.
The set
$\mathcal{S}(D,b,a)=\\{c\in\\{0,1\\}^{\Z^{2}}\mid(f_{D}-(b-a))c=a\mathbbm{1}(X)\\}$
of all $(D,b,a)$-coverings is an SFT for any given finite shape $D$ and non-
negative integers $b$ and $a$. Hence, the question whether there exist any
$(D,b,a)$-coverings for a given neighborhood $D$ and covering constants $b$
and $a$ is equivalent to the question whether the SFT $\mathcal{S}(D,b,a)$ is
non-empty. The question of emptiness of a given SFT is undecidable in general,
but if the SFT is known not to be aperiodic, then the problem becomes
decidable, as a classic argument by Hao Wang shows:
###### Lemma 25 ([32]).
If an SFT is either the empty set or it contains a strongly periodic
configuration, then its emptiness problem is decidable, that is, there is an
algorithm to determine whether there exist any configurations in the SFT.
In particular, if $g=f_{D}-(b-a)$ has line polynomial factors in at most one
direction, then the question whether there exist any $(D,b,a)$-coverings is
decidable:
###### Theorem 26.
Let a finite $D\subseteq\Z^{2}$ and non-negative integers $b$ and $a$ be given
such that the polynomial $g=f_{D}-(b-a)\in\Z[x^{\pm 1},y^{\pm 1}]$ has line
polynomial factors in at most one primitive direction. Then there exists an
algorithm to determine whether there exist any $(D,b,a)$-coverings.
###### Proof.
Let $\mathcal{S}=\mathcal{S}(D,b,a)$ be the SFT of all $(D,b,a)$-coverings.
Since $g$ has line polynomial factors in at most one primitive direction, by
Theorem 2 every element of $\mathcal{S}$ is periodic. Any two-dimensional SFT
that contains periodic configurations contains also two-periodic
configurations. Thus, $\mathcal{S}$ is either empty or contains a two-periodic
configuration and hence by Lemma 25 there is an algorithm to determine whether
$\mathcal{S}$ is non-empty. ∎
One may also want to design a perfect $(D,b,a)$-covering for given $D$, $b$
and $a$. This can be done effectively under the assumptions of Theorem 26: as
we have seen, if $\mathcal{S}=\mathcal{S}(D,b,a)$ is non-empty, it contains a
two-periodic configuration. For any two-periodic configuration $c$ it is easy
to check whether $c$ contains a forbidden pattern. By enumerating two-periodic
configurations one by one, one is guaranteed to eventually find one that is in
$\mathcal{S}$, as illustrated by the sketch below.
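The following brute-force sketch (our illustration; the helper names are ours) searches fundamental domains of increasing size and tests the $(D,b,a)$-covering condition directly, i.e. that every $1$-cell sees exactly $b$ ones and every $0$-cell exactly $a$ ones in its $D$-neighborhood. As a test case we use $b=a=1$ with the king grid's $1$-neighborhood, for which a classic perfect code exists.

```python
# Sketch: enumerate two-periodic 0/1 configurations with a p-by-q
# fundamental domain and test the (D, b, a)-covering condition.
from itertools import product

def is_covering(cells, p, q, D, b, a):
    for x in range(p):
        for y in range(q):
            s = sum(cells[(x + dx) % p][(y + dy) % q] for dx, dy in D)
            if s != (b if cells[x][y] else a):
                return False
    return True

def find_two_periodic_covering(D, b, a, max_period=6):
    # Enumerate fundamental domains in increasing size; if a two-periodic
    # covering with periods up to max_period exists, this finds one.
    for p in range(1, max_period + 1):
        for q in range(1, max_period + 1):
            for bits in product((0, 1), repeat=p * q):
                cells = [list(bits[i * q:(i + 1) * q]) for i in range(p)]
                if is_covering(cells, p, q, D, b, a):
                    return p, q, cells
    return None

# King grid 1-neighborhood; perfect code: b = a = 1.
D = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
print(find_two_periodic_covering(D, b=1, a=1))
```

For this input the search returns a $3\times 3$ fundamental domain containing a single $1$: the well-known perfect code of the king grid, with codewords on a sublattice of index $9$.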
If the polynomial $g$ has no line polynomial factors, then the following
stronger result holds:
###### Theorem 27.
If the polynomial $g=f_{D}-(b-a)$ has no line polynomial factors for given
finite shape $D\subseteq\Z^{2}$ and non-negative integers $b$ and $a$, then
the SFT $\mathcal{S}=\mathcal{S}(D,b,a)$ is finite. One can then effectively
construct all the finitely many elements of $\mathcal{S}$.
The proof of the first part of the above theorem relies on the fact that a
two-dimensional subshift is finite if and only if it contains only two-periodic
configurations [2]. If $g$ has no line polynomial factors, then every
configuration that $g$ periodizes (including every configuration in
$\mathcal{S}$) is two-periodic by Theorem 2, and hence $\mathcal{S}$ is finite.
The second part of the theorem, i.e., the fact that one can effectively produce
all the finitely many elements of $\mathcal{S}$, holds generally for finite
SFTs in any dimension:
###### Lemma 28.
Given a finite $F\subseteq\mathcal{A}^{*}$ such that $X_{F}$ is finite, one
can effectively construct the elements of $X_{F}$.
###### Proof.
Given a finite $F\subseteq\mathcal{A}^{*}$ and a pattern
$p\in\mathcal{A}^{D}$, assuming that strongly periodic configurations are
dense in $X_{F}$, one can effectively check whether $p\in\mathcal{L}(X_{F})$.
Indeed, we have a semi-algorithm for the positive instances that guesses a
strongly periodic configuration $c$ and verifies that $c\in X_{F}$ and
$p\in\mathcal{L}(c)$. A semi-algorithm for the negative instances exists for
any SFT $X_{F}$ and is a standard compactness argument: guess a finite
$E\subseteq\Z^{d}$ such that $D\subseteq E$ and verify that every
$q\in\mathcal{A}^{E}$ such that $q|_{D}=p$ contains a forbidden subpattern.
Consequently, given finite $F,G\subseteq\mathcal{A}^{*}$, assuming that
strongly periodic configurations are dense in $X_{F}$ and $X_{G}$, one can
effectively determine whether $X_{F}=X_{G}$. Indeed, $X_{F}\subseteq X_{G}$ if
and only if no $p\in G$ is in $\mathcal{L}(X_{F})$, a condition that we have
shown above to be decidable. Analogously we can test $X_{G}\subseteq X_{F}$.
Finally, let a finite $F\subseteq\mathcal{A}^{*}$ be given such that $X_{F}$
is known to be finite. All elements of $X_{F}$ are strongly periodic so that
strongly periodic configurations are certainly dense in $X_{F}$. One can
effectively enumerate all finite sets $P$ of strongly periodic configurations.
For each $P$ that is translation invariant (and hence a finite SFT) one can
construct a finite set $G\subseteq\mathcal{A}^{*}$ of forbidden patterns such
that $X_{G}=P$. As shown above, there is an algorithm to test whether
$X_{F}=X_{G}=P$. Since $X_{F}$ is finite, a set $P$ is eventually found such
that $X_{F}=P$. ∎
Let us now turn to the more general question of existence of perfect colorings
over alphabets of arbitrary size. Let $D\subseteq\Z^{2}$ be a finite shape and
let $\mathbf{B}$ be an $n\times n$ integer matrix. To determine whether there
exist any $(D,\mathbf{B})$-colorings is equivalent to asking whether the SFT
$\mathcal{S}(D,\mathbf{B})=\\{c\in\\{\mathbf{e}_{1},\ldots,\mathbf{e}_{n}\\}^{\Z^{2}}\mid
gc=0\\}$
is non-empty where $g=\sum_{\mathbf{u}\in
D}\mathbf{I}X^{-\mathbf{u}}-\mathbf{B}$ since it is exactly the set of the
vector presentations of all $(D,\mathbf{B})$-colorings.
###### Theorem 29.
Let a finite shape $D\subseteq\Z^{2}$, a non-negative integer matrix
$\mathbf{B}$ and an integer $t_{0}$ be given such that the polynomial
$f_{D}(x,y)-t\in\Z[x^{\pm 1},y^{\pm 1}]$ has no line polynomial factors
whenever $t\neq t_{0}$ and $\det(\mathbf{B}-t_{0}\mathbf{I})\neq 0$. Then
there are only finitely many $(D,\mathbf{B})$-colorings and one can
effectively construct them. In particular, there is an algorithm to determine
whether there exist any $(D,\mathbf{B})$-colorings.
###### Proof.
Let $\mathcal{S}=\mathcal{S}(D,\mathbf{B})$ be the SFT of the vector
presentations of all $(D,\mathbf{B})$-colorings. By Theorem 17 all elements of
$\mathcal{S}$ are two-periodic. Hence, $\mathcal{S}$ is finite, and the claim
follows by Lemma 28. ∎
Corollaries 18, 19, 20, 21 and 22 together with the above theorem yield the
following corollary.
###### Corollary 30.
The following decision problems are decidable for a given matrix $\mathbf{B}$
satisfying the given conditions.
* •
The existence of $(D,\mathbf{B})$-colorings where $D$ is the relative
1-neighborhood of the square grid and $\det(\mathbf{B}-\mathbf{I})\neq 0$.
* •
The existence of $(D,\mathbf{B})$-colorings where $D$ is the relative
1-neighborhood of the triangular grid and $\det(\mathbf{B}+\mathbf{I})\neq 0$.
* •
The existence of $(D,\mathbf{B})$-colorings where $D$ is the relative
$r$-neighborhood of the square grid and $\mathbf{B}$ is arbitrary.
* •
The existence of $(D,\mathbf{B})$-colorings where $D$ is the relative
$r$-neighborhood of the triangular grid and $\mathbf{B}$ is arbitrary.
* •
The existence of $(D,\mathbf{B})$-colorings where $D$ is the relative
$r$-neighborhood of the king grid and $\det(\mathbf{B})\neq 0$.
###### Theorem 31.
Given a polynomial $f$ in two variables with line polynomial factors in at
most one parallel direction there is an algorithm to determine whether there
exist any two-dimensional configurations over an alphabet of size $n$ that
have low abelian complexity with respect to $f$. In fact, there are only
finitely many such configurations and one can effectively construct all of
them.
###### Proof.
The set
$\\{c\in\\{\mathbf{e}_{1},\ldots,\mathbf{e}_{n}\\}^{\Z^{2}}\mid\mathbf{I}fc=0\\}$
of the vector presentations of all configurations over an alphabet of size $n$
with low abelian complexity with respect to $f$ is an SFT. By Theorem 23 it
contains only two-periodic configurations and hence it is finite. Thus, by
Lemma 28 we have the claim. ∎
## 9 Conclusions
We studied two-dimensional perfect colorings and proved a general condition
(Theorem 17) for their forced periodicity using an algebraic approach to
multidimensional symbolic dynamics. As corollaries of this theorem we obtained
new proofs for known results of forced periodicity in the square and the
triangular grid and a new result in the king grid. Moreover, we generalized a
statement of forced periodicity of two-dimensional configurations of low
abelian complexity. Also, some observations on algorithmic decidability were
made in the context of forced periodicity.
All our results of forced periodicity of perfect colorings used Theorem 2 and
hence concerned only two-dimensional configurations. However, a
$d$-dimensional version of Theorem 2 exists [15], and so we wonder whether an
analogous result to Theorem 17 exists that would give a sufficient condition
for forced periodicity of $d$-dimensional perfect colorings for arbitrary
dimension $d$. Note that clearly every one-dimensional perfect coloring is
necessarily periodic.
## References
##
* [1] M. A. Axenovich. On multiple coverings of the infinite rectangular grid with balls of constant radius. Discrete Mathematics, 268(1):31 – 48, 2003.
* [2] A. Ballier, B. Durand, and E. Jeandel. Structural aspects of tilings. In Susanne Albers and Pascal Weil, editors, 25th International Symposium on Theoretical Aspects of Computer Science, volume 1 of Leibniz International Proceedings in Informatics (LIPIcs), pages 61–72, Dagstuhl, Germany, 2008. Schloss Dagstuhl–Leibniz-Zentrum für Informatik.
* [3] S. Bhattacharya. Periodicity and decidability of tilings of $\mathbb{Z}^{2}$. American Journal of Mathematics, 142(1):255–266, 2020.
* [4] T. Ceccherini-Silberstein and M. Coornaert. Cellular Automata and Groups. Springer Monographs in Mathematics. Springer Berlin Heidelberg, 2010.
* [5] G. Cohen, I. Honkala, S. Litsyn, and A. Lobstein. Covering Codes. Elsevier, 1997.
* [6] D. A. Cox, J. Little, and D. O’Shea. Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra. Springer, 2015.
* [7] N. Geravker and S. A. Puzynina. Abelian Nivat’s conjecture for non-rectangular patterns. arXiv:2111.04690, December 2021.
* [8] C. Godsil. Equitable partitions. Paul Erdös is Eighty Vol. 1, pages 173–192, 1993.
* [9] R. Greenfeld and T. Tao. The structure of translational tilings in $\mathbb{Z}^{d}$. Discrete Analysis, 2021.
* [10] R. Greenfeld and T. Tao. A counterexample to the periodic tiling conjecture, 2022.
* [11] T. W. Haynes, S. Hedetniemi, and P. Slater. Fundamentals of Domination in Graphs. CRC Press, 1 edition, 1997.
* [12] E. Heikkilä, P. Herva, and J. Kari. On perfect coverings of two-dimensional grids. In Volker Diekert and Mikhail Volkov, editors, Developments in Language Theory, pages 152–163, Cham, 2022. Springer International Publishing.
* [13] J. Kari. Theory of cellular automata: A survey. Theoretical Computer Science, 334(1):3–33, 2005.
* [14] J. Kari. Low-complexity tilings of the plane. In Descriptional Complexity of Formal Systems - 21st IFIP WG 1.02 International Conference, DCFS 2019, volume 11612 of Lecture Notes in Computer Science, pages 35–45. Springer, 2019.
* [15] J. Kari. Expansivity and periodicity in algebraic subshifts. Submitted for publication, 2022.
* [16] J. Kari and E. Moutot. Nivat’s conjecture and pattern complexity in algebraic subshifts. Theoretical Computer Science, 777:379 – 386, 2019.
* [17] J. Kari and M. Szabados. An algebraic geometric approach to Nivat’s conjecture. In Proceedings of ICALP 2015, part II, volume 9135 of Lecture Notes in Computer Science, pages 273–285, 2015.
* [18] J. Kari and M. Szabados. An algebraic geometric approach to Nivat’s conjecture. Information and Computation, 271:104481, 2020.
* [19] P. Kurka. Topological and Symbolic Dynamics. Collection SMF. Société mathématique de France, 2003.
* [20] J. C. Lagarias and Y. Wang. Tiling the line with translates of one tile. Inventiones Mathematicae, 124:341–365, 1996.
* [21] D. Lind and B. Marcus. An Introduction to Symbolic Dynamics and Coding. Cambridge University Press, 1995.
* [22] M. Lothaire. Combinatorics on Words. Cambridge Mathematical Library. Cambridge University Press, 2 edition, 1997.
* [23] E. F. Moore. Machine models of self-reproduction. Proceedings of Symposia in Applied Mathematics, 14:17–33, 1962.
* [24] M. Morse and G. A. Hedlund. Symbolic dynamics. American Journal of Mathematics, 60(4):815–866, 1938.
* [25] J. R. Myhill. The converse of Moore’s Garden-of-Eden theorem. Proceedings of the American Mathematical Society, 14:685–686, 1963.
* [26] D. Newman. Tesselation of integers. J. Number Theory, 9(1):107–111, 1977.
* [27] M. Nivat. Invited talk at the 24th International Colloquium on Automata, Languages, and Programming (ICALP 1997), 1997.
* [28] S. A. Puzynina. Perfect colorings of radius $r>1$ of the infinite rectangular grid. Èlektron. Mat. Izv., 5:283–292, 2008.
* [29] S. A. Puzynina. On periodicity of generalized two-dimensional infinite words. Information and Computation, 207(11):1315–1328, 2009.
* [30] S. A. Puzynina. Aperiodic two-dimensional words of small abelian complexity. The Electronic Journal of Combinatorics, 26(4), 2019.
* [31] M. Szegedy. Algorithms to tile the infinite grid with finite clusters. Proceedings 39th Annual Symposium on Foundations of Computer Science (Cat. No.98CB36280), pages 137–145, 1998.
* [32] H. Wang. Proving theorems by pattern recognition – II. The Bell System Technical Journal, 40(1):1–41, 1961.
$100$ disjoint samples. This results in each partition being a sample of ~$2$
billion symbols for n-grams and ~$130$ million for tokens. We then calculate
the entropy for each partition and the KL divergence between the entropy of
the $0.05$, $0.50$, and $0.95$ quantile points and a uniform distribution.
These quantiles are then plotted on Fig. 9 to illustrate sampling noise—$90\%$
of sampled entropies fall within these bounds. The log scaling of Fig. 9 hides
some of the noise trends, namely that the noise grows with $n$ and that
settings like GZip and EqualInfoAC are noisier than AC and RNG. These trends
are seen in Fig. 12 where the entropy has been normalized based on the mean
entropy calculated across the partitions.
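The procedure can be summarized by the following sketch (ours; the names and the stand-in random bitstream are illustrative, not the actual C4-derived data):

```python
# Sketch of the partitioning procedure: split a bitstream into disjoint
# partitions, estimate the plug-in entropy of n-bit symbols per partition,
# and report the 0.05 / 0.50 / 0.95 quantiles.
import numpy as np

def plugin_entropy(bits, n):
    """Plug-in entropy (in bits) of non-overlapping n-bit symbols."""
    symbols = bits[: len(bits) // n * n].reshape(-1, n)
    ids = symbols.dot(1 << np.arange(n))          # pack bits into integers
    counts = np.bincount(ids, minlength=2 ** n)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
stream = rng.integers(0, 2, size=10_000_000, dtype=np.int8)  # stand-in for the RNG baseline

n, n_parts = 8, 100
parts = np.array_split(stream, n_parts)
ents = np.array([plugin_entropy(p, n) for p in parts])
q05, q50, q95 = np.quantile(ents, [0.05, 0.50, 0.95])
# KL to uniform over 2^n symbols is log2|V| - H, so high entropy -> low KL.
print(f"KL quantiles: {n - q95:.3g} / {n - q50:.3g} / {n - q05:.3g}")
```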
Figure 13 (panels: (a) N-Grams, (b) Tokens): Bias corrected KL divergence
between the observed and uniform distributions for different segmentations of
the bitstream. This plot is similar to Fig. 9; however, the KL divergence
calculations use the entropy of the observed distribution after applying the
Miller-Madow bias correction. After applying bias correction, we see that the
expected $0$ KL divergence for the RNG baseline is now within the 90th
percentile bounds. However, this can result in an incorrect negative KL
divergence, which is removed from the graph. Thus the RNG 50th percentile is
shown as a scatter plot rather than a broken line. In this setting it is clear
that the 50th percentile for AC$[v\mathord{=}65\text{k}]$ is above the 50th
percentile for RNG; however, it is hard to disentangle the two as their 5th
percentile lines are similar.
The maximum likelihood, or plug-in, estimator of entropy,
$\hat{H}=-\sum_{x\in\mathcal{X}}\hat{p}(x)\log_{2}\hat{p}(x)$, is negatively
biased—in fact, all entropy estimators are biased [48]. The Miller-Madow
estimator attempts to correct for this bias by adding the approximate bias,
caused by sampling, to the plug-in estimator. (There are other methods for
entropy bias correction, such as [15] based on bootstrapping [20]; however,
with the size of the C4 training data, the required resampling was not
possible. Thus, we use Miller-Madow in this work.) The Miller-Madow estimator
is given by $\hat{H}_{MM}=\hat{H}+\frac{\hat{|V|}-1}{2m}$, where $m$ is the
size of the sample used to estimate entropy and $\hat{|V|}$ is the estimated
vocabulary size. In some applications the vocabulary may need to be
estimated—for example, new words may be added to languages—but in our case the
vocabulary size is always $2^{n}$, where $n$ is the size of the current
segmentation.
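A minimal implementation of the two estimators (our sketch; note that we express the classical Miller-Madow correction, which is derived in nats, in bits by dividing by $\ln 2$):

```python
# Sketch of the plug-in and Miller-Madow entropy estimators and the KL
# divergence to the uniform distribution, with |V| fixed at 2^n as above.
import numpy as np

def entropies(counts, vocab_size):
    m = counts.sum()                       # sample size
    p = counts[counts > 0] / m
    h_plugin = -(p * np.log2(p)).sum()     # negatively biased
    h_mm = h_plugin + (vocab_size - 1) / (2 * m * np.log(2))  # Miller-Madow, in bits
    kl_uniform = np.log2(vocab_size) - h_mm  # may come out negative
    return h_plugin, h_mm, kl_uniform

# Example: entropies(np.bincount(ids, minlength=2 ** n), 2 ** n),
# with ids and n as in the previous sketch.
```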
When we plot the KL divergence between the Miller-Madow estimated entropy and
the uniform distribution, we see that the percentile interval for the RNG
baseline now includes $0$, the KL divergence we expect given the data was
generated from random and independent bits. As bias correction is approximate,
it is possible that, for a given sample, the correction will result in an
entropy greater than the maximum entropy possible for a given vocabulary size.
Given that KL divergence between a distribution $P$ and the uniform
distribution $U$ simplifies to the entropy of $U$ minus the entropy of $P$,
$\text{KL}(P||U)=H[U]-\hat{H}[P]=\log_{2}|V|-\hat{H}[P]$, this results in a
negative KL divergence, which is not allowed. These points get removed from
the graph during log scaling and the resulting $50\%$ percentile line for RNG
data looks strange. Therefore, we only plot points with positive KL divergence
in Fig. 13. The Miller-Madow estimation of entropy makes it clear that the
$0.5$ KL divergence quantile for AC compressed data is much higher than the
$50\%$ percentile for RNG data. Additionally, for $n>2$, the AC entropy is
statistically significantly less than the RNG entropy; however, differences in
the mean entropy only start to appear after ~$8$ decimal places. This slight
difference in mean, coupled with the fact that the $5\%$ percentiles are
similar, means we cannot confidently assert the model will be able to easily
distinguish the AC compressed data from random data. Given that we care about
the differences between the entropy of data compressed with different
methods—which are invariant to bias—and given the strange plots when values
are less than $0$, we opt to plot the plug-in estimator in Fig. 9 instead of
the Miller-Madow estimator.
## Appendix K Analysis Implementation
Matplotlib [33] and Seaborn [71] were used to make all the included graphs.
Statistical significance tests were done using Welch’s t-test [72] using the
function scipy.stats.ttest_ind_from_stats from SciPy [69]. We used $p<0.05$ as
the statistical significance threshold.
## Appendix L Corner Cases of Tokenization lead to Unstable Mappings
There are some cases where SentencePiece does not have stable text
$\rightarrow$ token mappings when looking at various substrings. This
generally occurs when a singular and plural version of a noun are both common
enough to be tokenized into a single token. An example from the T5 vocabulary
[52] is “chair” $\rightarrow$ [3533] and “chairs” $\rightarrow$ [6406]. When
you look at the surface text substring “chair”, it seems to map to multiple
tokens, however when you look at the full surface term “chairs” the stability
returns. This is in contrast to a byte-level vocabulary where the text “chair”
always maps to [102, 107, 100, 108, 117], even as part of the text “chairs”
where an extra [118] is appended to the end. While the loss of shared
representations of clearly related concepts is unfortunate, the performance of
modern models based on this kind of tokenization shows that it is well handled
by the model. While these edge cases exist, they are rare enough that the
SentencePiece tokenizer should be considered stable.
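For instance, the byte-level ids quoted above are reproduced by mapping each UTF-8 byte to its value plus an offset of $3$ (a sketch; the offset convention for reserved special-token ids is inferred from the quoted ids):

```python
# Byte-level "tokenizer": each UTF-8 byte maps to byte_value + 3
# (ids 0-2 assumed reserved for special tokens, matching the quoted ids).
def byte_tokenize(text: str) -> list[int]:
    return [b + 3 for b in text.encode("utf-8")]

assert byte_tokenize("chair") == [102, 107, 100, 108, 117]
assert byte_tokenize("chairs") == [102, 107, 100, 108, 117, 118]
# The mapping for "chair" is a stable prefix of the mapping for "chairs".
```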
Similarly, there are cases where the initial token $\rightarrow$ text mapping
in a EqualInfoAC window can be unstable. In the case where there is a
character whose bitstream crosses the token boundary—the purple characters in
Fig. 7—only the prefix that is part of the initial token will determine the
value of that token. It is possible that there are other places in the input
text where the characters wholly contained within the initial token match, but
the character that crosses the token boundary differs. If the prefix of that
character’s bitstream, which is part of the initial token, matches the
previous case, but the rest of the bitstream, which is in the following token,
does not, it is possible to have the same initial token while the underlying
text is different. When this happens, the text prefix is still stable, and the
notion of mapping a compressed token to exact characters is not well defined,
as there are always cases where a character is spread across two tokens. Note
that this only occurs at token boundaries;
EqualInfoAC$[b\mathord{=}16,\,v\mathord{=}65\text{k}]$ is stable as no
characters cross windows. Therefore, we consider EqualInfoAC stable enough to
enable learnability by M2.
Interestingly, [40] point out this same issue, where a fixed size view of a
variable length stream can cause false equivalencies when prefixes match.
Similar to our findings, they find the models do have some limited ability to
deal with these situations.
## Appendix M Window Text Patterns and Token Positions
We tokenize $20$ documents of length $1{,}024$ with
EqualInfoAC$[b\mathord{=}16,\,v\mathord{=}256]$ and find that all $256$
possible token values occur multiple times, both as the first and as the
second token within the window. When tokenized with
EqualInfoAC$[b\mathord{=}16,\,v\mathord{=}65\text{k}]$, $34.5\%$ of attested
tokens appear more than once. Table 16 shows all the window text for repeated
tokens.
Table 16: The deduplicated window text from all instances of tokens that appear multiple times when we tokenized $20$ documents of length $1{,}024$ ($20{,}480$ compressed tokens) with EqualInfoAC$[b\mathord{=}16,\,v\mathord{=}256]$.
Token | Window Position | Window Text
---|---|---
$185$ | $1$ | [or ] / [or a ] / [or ac] / [or al] / [or cr] / [or d] / [or f] / [or h]
| | [or hi] / [or i] / [or k] / [or ma] / [or pr] / [or r] / [or s] / [or se]
| | [or su] / [or t] / [or to] / [or v] / [or wha] / [or y] / [or yo] / [or, t]
| | [or-] / [or.] / [ora] / [orc] / [orce ] / [ord] / [ord a] / [order]
| | [ore a] / [ore e] / [ore ev] / [ore g] / [ore i]
| $2$ | [ 4] / [ of F] / [ records ] / [. Lo] / [Alt] / [OI] / [ase ] / [at y]
| | [cian] / [cri] / [d. I] / [ery] / [h de] / [hen s] / [ides] / [n ne]
| | [oft] / [om i] / [onte] / [opp] / [pir] / [rev] / [reve] / [s may]
| | [tion a] / [y do] / [y t]
$151$ | $1$ | [le] / [le s] / [le t] / [le. ] / [lea] / [lec] / [led] / [led ]
| | [led t] / [leg] / [lege] / [leh] / [lem ] / [leme] / [lems] / [len]
| | [ler] / [les] / [less] / [let] / [lett] / [level] / [lew ] / [ley] / [lf ]
| $2$ | [ all ] / [ nut] / [ this] / [ un] / [. I w] / [Ni] / [as t] / [ceed ]
| | [choos] / [e Mi] / [e-li] / [etti] / [imag] / [ion a] / [k a] / [ne a]
| | [ng up] / [niversi] / [npo] / [nt pr] / [pi] / [rvices] / [s T] / [s your]
| | [s?] / [so c] / [stag] / [thou] / [thoug] / [ust] / [ust ]
# Essential m-dissipativity and hypocoercivity of Langevin dynamics with
multiplicative noise
Alexander Bertram (corresponding author) and Martin Grothaus, Department of
Mathematics, TU Kaiserslautern, PO box 3049, 67653 Kaiserslautern, Germany
###### Abstract
We provide a complete elaboration of the $L^{2}$-Hilbert space hypocoercivity
theorem for the degenerate Langevin dynamics with multiplicative noise,
studying the longtime behavior of the strongly continuous contraction
semigroup solving the abstract Cauchy problem for the associated backward
Kolmogorov operator. Hypocoercivity for the Langevin dynamics with constant
diffusion matrix was proven previously by Dolbeault, Mouhot and Schmeiser in
the corresponding Fokker-Planck framework, and made rigorous in the Kolmogorov
backwards setting by Grothaus and Stilgenbauer. We extend these results to
weakly differentiable diffusion coefficient matrices, introducing
multiplicative noise for the corresponding stochastic differential equation.
The rate of convergence is explicitly computed depending on the choice of
these coefficients and the potential giving the outer force. In order to
obtain a solution to the abstract Cauchy problem, we first prove essential
self-adjointness of non-degenerate elliptic Dirichlet operators on Hilbert
spaces, using prior elliptic regularity results and techniques from Bogachev,
Krylov and Röckner. We apply operator perturbation theory to obtain essential
m-dissipativity of the Kolmogorov operator, extending the m-dissipativity
results from Conrad and Grothaus. We emphasize that the chosen Kolmogorov
approach is natural, as the theory of generalized Dirichlet forms implies a
stochastic representation of the Langevin semigroup as the transition kernel
of a diffusion process which provides a martingale solution to the Langevin
equation with multiplicative noise. Moreover, we show that even a weak
solution is obtained this way.
Keywords: Langevin equation, multiplicative noise, hypocoercivity, essential
m-dissipativity, essential self-adjointness, Fokker-Planck equation
MSC (2020): 37A25, 47D07, 35Q84, 47B44, 47B25
### Acknowledgment
This version of the article has been accepted for publication, after peer
review, but is not the Version of Record and does not reflect post-acceptance
improvements or any corrections.
The Version of Record is available online at:
https://doi.org/10.1007/s00028-022-00773-y
## 1 Introduction
We study the exponential decay to equilibrium of Langevin dynamics with
multiplicative noise. The corresponding evolution equation is given by the
following stochastic differential equation on $\mathbb{R}^{2d}$,
$d\in\mathbb{N}$, as
$dX_{t}=V_{t}\,\mathrm{d}t,\qquad dV_{t}=b(V_{t})\,\mathrm{d}t-\nabla\Phi(X_{t})\,\mathrm{d}t+\sqrt{2}\sigma(V_{t})\,\mathrm{d}B_{t},$ (1.1)
where $\Phi:\mathbb{R}^{d}\to\mathbb{R}$ is a suitable potential whose
properties are specified later, $B=(B_{t})_{t\geq 0}$ is a standard
$d$-dimensional Brownian motion, $\sigma:\mathbb{R}^{d}\to\mathbb{R}^{d\times
d}$ a variable diffusion matrix with at least weakly differentiable
coefficients, and $b:\mathbb{R}^{d}\to\mathbb{R}^{d}$ is given by
$b_{i}(v)=\sum_{j=1}^{d}\left(\partial_{j}a_{ij}(v)-a_{ij}(v)v_{j}\right),$
where $a_{ij}=\Sigma_{ij}$ with $\Sigma=\sigma\sigma^{T}$.
This equation describes the evolution of a particle via its position
$(X_{t})_{t\geq 0}$ and velocity $(V_{t})_{t\geq 0}$ coordinates; the particle
is subject to friction, stochastic perturbation depending on its velocity, and
some outer force $\nabla\Phi$. To simplify notation, we split
$\mathbb{R}^{2d}$ into the two components $x,v\in\mathbb{R}^{d}$ corresponding
to position and velocity respectively. This extends to differential operators
$\nabla_{x},\nabla_{v}$, and the Hessian matrix $H_{v}$.
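Before deriving the generator, a quick numerical illustration may be helpful. The following sketch (ours, not from the paper) simulates (1.1) for $d=1$ with an Euler-Maruyama scheme and toy coefficients chosen to satisfy the assumptions introduced later: $a(v)=2+\sin v$ is bounded, uniformly elliptic and has a bounded derivative, and $\Phi(x)=x^{2}/2$.

```python
# Euler-Maruyama sketch for (1.1) with d = 1 and toy coefficients (ours):
#   Sigma(v) = a(v) = 2 + sin(v)  -> uniformly elliptic, bounded, C^1,
#   b(v) = a'(v) - a(v)*v = cos(v) - (2 + sin(v))*v,
#   Phi(x) = x^2 / 2, so grad Phi(x) = x, and sigma(v) = sqrt(a(v)).
import numpy as np

rng = np.random.default_rng(1)
dt, steps, n_paths = 1e-3, 20_000, 4_000
x = np.zeros(n_paths)
v = rng.standard_normal(n_paths)  # start velocities from nu

for _ in range(steps):
    a = 2.0 + np.sin(v)
    b = np.cos(v) - a * v
    dB = rng.standard_normal(n_paths) * np.sqrt(dt)
    x, v = x + v * dt, v + (b - x) * dt + np.sqrt(2.0 * a) * dB

# The invariant measure mu (introduced below) has standard Gaussian
# marginals in x and v regardless of a; both variances should be near 1.
print(np.var(x), np.var(v))
```

The point of the drift correction $b$ is exactly that the invariant measure keeps the Gaussian $\nu$ in the velocity variable, independently of the chosen diffusion coefficient.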
Using Itô’s formula, we obtain the associated Kolmogorov operator $L$ as
$L=\operatorname{tr}\left(\Sigma
H_{v}\right)+b(v)\cdot\nabla_{v}+v\cdot\nabla_{x}-\nabla\Phi(x)\cdot\nabla_{v}.$
(1.2)
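For the reader's convenience, here is a sketch of that computation (we suppress the local martingale part): for $f\in C_{c}^{\infty}(\mathbb{R}^{2d})$, Itô's formula along (1.1) gives
$\mathrm{d}f(X_{t},V_{t})=\nabla_{x}f\cdot\mathrm{d}X_{t}+\nabla_{v}f\cdot\mathrm{d}V_{t}+\frac{1}{2}\sum_{i,j=1}^{d}\partial_{v_{i}}\partial_{v_{j}}f\,\mathrm{d}\langle V_{i},V_{j}\rangle_{t}.$
Since $\mathrm{d}X_{t}=V_{t}\,\mathrm{d}t$ and $\mathrm{d}\langle V_{i},V_{j}\rangle_{t}=2\Sigma_{ij}(V_{t})\,\mathrm{d}t$, the drift part equals $\left(v\cdot\nabla_{x}f-\nabla\Phi(x)\cdot\nabla_{v}f+b(v)\cdot\nabla_{v}f+\operatorname{tr}\left(\Sigma H_{v}f\right)\right)\mathrm{d}t=Lf\,\mathrm{d}t$, evaluated at $(X_{t},V_{t})$.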
Here $a\cdot b$ or alternatively $(a,b)_{\mathrm{euc}}$ denotes the standard
inner product of $a,b\in\mathbb{R}^{d}$. We introduce the measure
$\mu=\mu_{\Sigma,\Phi}$ on $(\mathbb{R}^{2d},\mathcal{B}(\mathbb{R}^{2d}))$ as
$\mu_{\Sigma,\Phi}=(2\pi)^{-\frac{d}{2}}\mathrm{e}^{-\Phi(x)-\frac{v^{2}}{2}}\,\mathrm{d}x\otimes\mathrm{d}v=:\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x\otimes\nu,$
i.e. $\nu$ is the normalized standard Gaussian measure on $\mathbb{R}^{d}$. We
consider the operator $L$ on the Hilbert space
$H:=L^{2}(\mathbb{R}^{2d},\mu)$.
We note that the results below on exponential convergence to equilibrium can
also be translated to a corresponding Fokker-Planck setting, with the
differential operator $L^{\mathrm{FP}}$ given as the adjoint, restricted to
sufficiently smooth functions, of $L$ in
$L^{2}(\mathbb{R}^{2d},\mathrm{d}(x,v))$. The considered Hilbert space there
is $\tilde{H}:=L^{2}(\mathbb{R}^{2d},\tilde{\mu})$, where
$\tilde{\mu}:=(2\pi)^{-\frac{d}{2}}\mathrm{e}^{\Phi(x)+\frac{v^{2}}{2}}\,\mathrm{d}x\otimes\mathrm{d}v.$
Indeed, this is the space in which hypocoercivity of the kinetic Fokker-Planck
equation associated with the classical Langevin dynamics was proven in [1].
The rigorous connection to the Kolmogorov backwards setting considered
throughout this paper and convergence behaviour of solutions to the abstract
Cauchy problem $\partial_{t}f(t)=L^{\mathrm{FP}}f(t)$ are discussed in Section
5.3.
The concept of hypocoercivity was first introduced in the memoirs of Cédric
Villani ([2]), which is recommended as further literature to the interested
reader. The approach we use here was introduced algebraically by Dolbeault,
Mouhot and Schmeiser (see [3] and [1]), and then made rigorous including
domain issues in [4] by Grothaus and Stilgenbauer, where it was applied to
show exponential convergence to equilibrium of a Fiber laydown process on the
unit sphere. This setting was further generalized by Wang and Grothaus in [5],
where the coercivity assumptions, which in part involve the classical Poincaré
inequality for Gaussian measures, were replaced by weak Poincaré inequalities,
allowing for more general measures for both the spatial and the velocity
component. In this case, the authors still obtained explicit, but
subexponential rates of convergence. On the other hand, the stronger notion of
hypercontractivity was explored in [6] on general separable Hilbert spaces
without the necessity to explicitly state the invariant measure. The specific
case of hypocoercivity for Langevin dynamics on the position space
$\mathbb{R}^{d}$ has been further explored in [7] and serves as the basis for
our hypocoercivity result. However, all of these prior results assume the
diffusion matrix to be constant, while we allow for velocity-dependent
coefficients.
In contrast to [7], we do not know if our operator
$(L,C_{c}^{\infty}(\mathbb{R}^{2d}))$ is essentially m-dissipative, and are
therefore left to prove that first. This property of the Langevin operator has
been shown by Helffer and Nier in [8] for smooth potentials and generalized to
locally Lipschitz-continuous potentials by Conrad and Grothaus in [9,
Corollary 2.3]. However, a corresponding result for a non-constant second
order coefficient matrix $\Sigma$ is not known to the authors.
Moreover, the symmetric part $S$ of our operator $L$ does not commute with the
linear operator $B$ as in [7], hence the boundedness of the auxiliary operator
$BS$ needs to be shown in a different way, which we do in Proposition 3.10.
In Theorem 3.4, we show under fairly light assumptions on the coefficients and
the potential that the operator $(L,C_{c}^{\infty}(\mathbb{R}^{2d}))$ is
essentially m-dissipative and therefore generates a strongly continuous
contraction semigroup on $H$. The proof is given in Section 4 and follows the
main ideas as in the proof of [9, Theorem 2.1], where a corresponding result
for $\Sigma=I$ was obtained.
For that proof we rely on perturbation theory of m-dissipative operators,
starting with essential m-dissipativity of the symmetric part of $L$. To that
end, we state an essential self-adjointness result for a set of non-degenerate
elliptic Dirichlet differential operators $(S,C_{c}^{\infty}(\mathbb{R}^{d}))$
on $L^{2}$-spaces where the measure is absolutely continuous w.r.t. the Lebesgue
measure. This result is stated in Theorem 4.5 and combines regularity results
from [10] and [11] with the approach to show essential self-adjointness from
[12].
Finally, our main hypocoercivity result reads as follows:
###### Theorem 1.1.
Let $d\in\mathbb{N}$. Assume that $\Sigma:\mathbb{R}^{d}\to\mathbb{R}^{d\times
d}$ is a symmetric matrix of coefficients $a_{ij}:\mathbb{R}^{d}\to\mathbb{R}$
which is uniformly strictly elliptic with ellipticity constant $c_{\Sigma}$.
Moreover, let each $a_{ij}$ be bounded and locally Lipschitz-continuous, hence
$a_{ij}\in H_{\mathrm{loc}}^{1,p}(\mathbb{R}^{d},\nu)\cap
L^{\infty}(\mathbb{R}^{d})$ for each $p\geq 1$. Assume the growth behaviour of
$\partial_{k}a_{ij}$ for all $1\leq k\leq d$ to be bounded either by
$|\partial_{k}a_{ij}(v)|\leq M(1+|v|)^{\beta}$
for $\nu$-almost all $v\in\mathbb{R}^{d}$ and some $M<\infty$,
$\beta\in(-\infty,0]$ or by
$|\partial_{k}a_{ij}(v)|\leq M(\mathds{1}_{B_{1}(0)}(v)+|v|^{\beta})$
for $\nu$-almost all $v\in\mathbb{R}^{d}$ and some $M<\infty$,
$\beta\in(0,1)$. Define $N_{\Sigma}$ in the first case as
$N_{\Sigma}:=\sqrt{M_{\Sigma}^{2}+(B_{\Sigma}\vee M)^{2}}$ and in the second
case as $N_{\Sigma}:=\sqrt{M_{\Sigma}^{2}+B_{\Sigma}^{2}+dM^{2}}$, where
$M_{\Sigma}:=\max\\{\|a_{ij}\|_{\infty}\mid 1\leq i,j\leq d\\}\quad\text{ and }\quad B_{\Sigma}:=\max\left\\{|\partial_{j}a_{ij}(v)|:v\in\overline{B_{1}(0)},\ 1\leq i,j\leq d\right\\}.$
Let further $\Phi:\mathbb{R}^{d}\to\mathbb{R}$ be bounded from below, satisfy
$\Phi\in C^{2}(\mathbb{R}^{d})$, and be such that
$\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x$ is a probability measure on
$(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$
which satisfies a Poincaré inequality of the form
$\|\nabla f\|_{L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}^{2}\geq\Lambda\left\|f-\int_{\mathbb{R}^{d}}f\,\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x\right\|_{L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}^{2}$
for some $\Lambda\in(0,\infty)$ and all $f\in C_{c}(\mathbb{R}^{d})$.
Furthermore assume the existence of a constant $c<\infty$ such that
$|H\Phi(x)|\leq c(1+|\nabla\Phi(x)|)\quad\text{ for all }x\in\mathbb{R}^{d},$
where $H$ denotes the Hessian matrix and $|H\Phi|$ the Euclidean matrix norm.
If $\beta>-1$, then also assume that there are constants $N<\infty$,
$\gamma<\frac{2}{1+\beta}$ such that
$|\nabla\Phi(x)|\leq N(1+|x|^{\gamma})\qquad\text{ for all
}x\in\mathbb{R}^{d}.$
Then the Langevin operator $(L,C_{c}^{\infty}(\mathbb{R}^{2d}))$ as defined in
(1.2) is closable on $H$ and its closure $(L,D(L))$ generates a strongly
continuous contraction semigroup $(T_{t})_{t\geq 0}$ on $H$. Further, it holds
that for each $\theta_{1}\in(1,\infty)$, there is some
$\theta_{2}\in(0,\infty)$ such that
$\left\|T_{t}g-(g,1)_{H}\right\|_{H}\leq\theta_{1}\mathrm{e}^{-\theta_{2}t}\left\|g-(g,1)_{H}\right\|_{H}$
for all $g\in H$ and all $t\geq 0$. In particular, $\theta_{2}$ can be
specified as
$\theta_{2}=\frac{\theta_{1}-1}{\theta_{1}}\frac{c_{\Sigma}}{n_{1}+n_{2}N_{\Sigma}+n_{3}N_{\Sigma}^{2}},$
and the coefficients $n_{i}\in(0,\infty)$ only depend on the choice of $\Phi$.
Finally, our main results may be summarized by the following list:
* •
Essential m-dissipativity (equivalently essential self-adjointness) of non-
degenerate elliptic Dirichlet differential operators with domain
$C_{c}^{\infty}(\mathbb{R}^{d})$ on Hilbert spaces with measure absolutely
continuous w.r.t. the $d$-dimensional Lebesgue measure is proved, see Theorem
4.5.
* •
Essential m-dissipativity of the backwards Kolmogorov operator
$(L,C_{c}^{\infty}(\mathbb{R}^{2d}))$ associated with the Langevin equation
with multiplicative noise (1.1) on the Hilbert space $H$ under weak
assumptions on the coefficient matrix $\Sigma$ and the potential $\Phi$, in
particular not requiring smoothness, is shown, see Theorem 3.4.
* •
Exponential convergence to a stationary state of the corresponding solutions
to the abstract Cauchy problem $\partial_{t}u(t)=Lu(t)$, see (5.1) on the
Hilbert space $H$ with explicitly computable rate of convergence, as stated in
Theorem 1.1, is proved.
* •
Adaptation of this convergence result to the equivalent formulation as a
Fokker-Planck PDE on the appropriate Hilbert space
$\tilde{H}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=L^{2}(\mathbb{R}^{2d},\tilde{\mu})$
is provided. In particular, this yields exponential convergence of the
solutions to the abstract Fokker-Planck Cauchy problem
$\partial_{t}u(t)=L^{\mathrm{FP}}u(t)$, with $L^{\mathrm{FP}}$ given by (5.3),
to a stationary state, see Section 5.3.
* •
A stochastic interpretation of the semigroup as a transition kernel for a
diffusion process is worked out. Moreover, we prove this diffusion process to
be a weak solution to the Langevin SDE (1.1) and derive for it strong mixing
properties with explicit rates of convergence, see Section 5.2.
## 2 The abstract hypocoercivity setting
We start by recalling some basic facts about closed unbounded operators on
Hilbert spaces:
###### Lemma 2.1.
Let $(T,D(T))$ be a densely defined linear operator on $H$ and let $L$ be a
bounded linear operator with domain $H$.
1. (i)
The adjoint operator $(T^{*},D(T^{*}))$ exists and is closed. If $D(T^{*})$ is
dense in $H$, then $(T,D(T))$ is closable and for the closure
$(\overline{T},D(\overline{T}))$ it holds $\overline{T}=T^{**}$.
2. (ii)
$L^{*}$ is bounded and $\|L^{*}\|=\|L\|$.
3. (iii)
If $(T,D(T))$ is closed, then $D(T^{*})$ is automatically dense in $H$.
Consequently by (i), $T=T^{**}$.
4. (iv)
Let $(T,D(T))$ be closed. Then the operator $TL$ with domain
$D(TL)=\\{f\in H\mid Lf\in D(T)\\}$
is also closed.
5. (v)
$LT$ with domain $D(T)$ need not be closed, however
$(LT)^{*}=T^{*}L^{*}.$
Let us now briefly state the abstract setting for the hypocoercivity method as
in [4].
###### Data conditions (D).
We require the following conditions which are henceforth assumed without
further mention.
1. (D1)
_The Hilbert space:_ Let $(E,\mathcal{F},\mu)$ be some probability space and
define $H$ to be $H=L^{2}(E,\mu)$ equipped with the standard inner product
$(\cdot,\cdot)_{H}$.
2. (D2)
_The $C_{0}$-semigroup and its generator:_ $(L,D(L))$ is some linear operator
on $H$ generating a strongly continuous contraction semigroup $(T_{t})_{t\geq
0}$.
3. (D3)
_Core property of $L$:_ Let $D\subset D(L)$ be a dense subspace of $H$ which
is a core for $(L,D(L))$.
4. (D4)
_Decomposition of $L$:_ Let $(S,D(S))$ be symmetric, $(A,D(A))$ be closed and
antisymmetric on $H$ such that $D\subset D(S)\cap D(A)$ as well as
$L|_{D}=S-A$.
5. (D5)
_Orthogonal projections:_ Let $P:H\to H$ be an orthogonal projection
satisfying $P(H)\subset D(S),\,SP=0$ as well as $P(D)\subset
D(A),\,AP(D)\subset D(A)$. Moreover, let $P_{S}:H\to H$ be defined as
$P_{S}f:=Pf+(f,1)_{H},\qquad f\in H.$
6. (D6)
_Invariant measure:_ Let $\mu$ be invariant for $(L,D)$ in the sense that
$(Lf,1)_{H}=\int_{E}Lf\,\mathrm{d}\mu=0\qquad\text{ for all }f\in D.$
7. (D7)
_Conservativity:_ It holds that $1\in D(L)$ and $L1=0$.
Since $(A,D(A))$ is closed, $(AP,D(AP))$ is also closed and densely defined.
Hence by von Neumann’s theorem, the operator
$I+(AP)^{*}(AP):D((AP)^{*}AP)\to H,$
where $D((AP)^{*}AP)=\\{f\in D(AP)\mid APf\in D((AP)^{*})\\}$, is bijective
and admits a bounded inverse. We therefore define the operator
$(B,D((AP)^{*}))$ via
$B:=(I+(AP)^{*}AP)^{-1}(AP)^{*}.$
Then $B$ extends to a bounded operator on $H$.
As in the given source, we also require the following assumptions:
###### Assumption (H1).
_Algebraic relation:_ It holds that $PAP|_{D}=0$.
###### Assumption (H2).
_Microscopic coercivity:_ There exists some $\Lambda_{m}>0$ such that
$-(Sf,f)_{H}\geq\Lambda_{m}\|(I-P_{S})f\|^{2}\qquad\text{ for all }f\in D.$
###### Assumption (H3).
_Macroscopic coercivity:_ Define $(G,D)$ via $G=PA^{2}P$ on $D$. Assume that
$(G,D)$ is essentially self-adjoint on $H$. Moreover, assume that there is
some $\Lambda_{M}>0$ such that
$\|APf\|^{2}\geq\Lambda_{M}\|Pf\|^{2}\qquad\text{ for all }f\in D.$
###### Assumption (H4).
_Boundedness of auxiliary operators:_ The operators $(BS,D)$ and $(BA(I-P),D)$
are bounded and there exist constants $c_{1},c_{2}<\infty$ such that
$\|BSf\|\leq c_{1}\|(I-P)f\|\quad\text{ and }\quad\|BA(I-P)f\|\leq
c_{2}\|(I-P)f\|$
hold for all $f\in D$.
We now state the central abstract hypocoercivity theorem as in [4]:
###### Theorem 2.2.
Assume that (D) and (H1)-(H4) hold. Then there exist strictly positive
constants $\kappa_{1},\kappa_{2}<\infty$ which are explicitly computable in
terms of $\Lambda_{m},\Lambda_{M},c_{1}$ and $c_{2}$ such that for all $g\in
H$ we have
$\|T_{t}g-(g,1)_{H}\|\leq\kappa_{1}\mathrm{e}^{-\kappa_{2}t}\|g-(g,1)_{H}\|\quad\text{
for all }t\geq 0.$
More specifically, if there exist $\delta>0$, $\varepsilon\in(0,1)$ and
$0<\kappa<\infty$ such that for all $g\in D(L)$, $t\geq 0$, it holds
$\kappa\|f_{t}\|^{2}\leq\left(\Lambda_{m}-\varepsilon(1+c_{1}+c_{2})\left(1+\frac{1}{2\delta}\right)\right)\|(I-P)f_{t}\|^{2}+\varepsilon\left(\frac{\Lambda_{M}}{1+\Lambda_{M}}-(1+c_{1}+c_{2})\frac{\delta}{2}\right)\|Pf_{t}\|^{2},$ (2.1)
where $f_{t}:=T_{t}g-(g,1)_{H}$,
then the constants $\kappa_{1}$ and $\kappa_{2}$ are given by
$\kappa_{1}=\sqrt{\frac{1+\varepsilon}{1-\varepsilon}},\qquad\kappa_{2}=\frac{\kappa}{1+\varepsilon}.$
In order to prove (H4), we will make use of the following result:
###### Lemma 2.3.
Assume (H3). Let $(T,D(T))$ be a linear operator with $D\subset D(T)$ and
assume $AP(D)\subset D(T^{*})$. Then
$(I-G)(D)\subset D((BT)^{*})\quad\text{ with
}\quad(BT)^{*}(I-G)f=T^{*}APf,\quad f\in D.$
If there exists some $C<\infty$ such that
$\|(BT)^{*}g\|\leq C\|g\|\qquad\text{ for all }g=(I-G)f,\quad f\in D,$ (2.2)
then $(BT,D(T))$ is bounded and its closure $(\overline{BT})$ is a bounded
operator on $H$ with $\|\overline{BT}\|=\|(BT)^{*}\|$.
In particular, if $(S,D(S))$ and $(A,D(A))$ satisfy these assumptions, the
corresponding inequalities in (H4) are satisfied with $c_{1}=\|(BS)^{*}\|$ and
$c_{2}=\|(BA)^{*}\|$.
###### Proof:
Let $h\in D((AP)^{*})$ and $f\in D$. Set $g=(I-G)f$. By the representation of
$B$ on $D((AP)^{*})$ together with self-adjointness of $(I+(AP)^{*}AP)^{-1}$
and $D\subset D(AP)$, we get
$(h,B^{*}g)_{H}=(Bh,(I-G)f)_{H}=((AP)^{*}h,f)_{H}=(h,APf)_{H}.$
So $B^{*}g=APf\in D(T^{*})$. By Lemma 2.1 (v),
$((BT)^{*},D((BT)^{*}))=(T^{*}B^{*},D(T^{*}B^{*}))$, which implies
$(BT)^{*}g=T^{*}B^{*}g=T^{*}APf$.
By essential self-adjointness and hence essential m-dissipativity of $G$,
$(I-G)(D)$ is dense in $H$. Therefore by (2.2), the closed operator
$((BT)^{*},D((BT)^{*}))$ is a bounded operator on $H$. Since $(BT,D(T))$ is
densely defined, by Lemma 2.1 (i) and (ii), it is closable with
$\overline{BT}=(BT)^{**}$, which is a bounded operator on $H$ with the stated
norm.
The last part follows directly by $Sf=S(I-P)f$ for $f\in D$. $\square$
## 3 Hypocoercivity for Langevin dynamics with multiplicative noise
As stated in the introduction, the aim of this section is to prove exponential
convergence to equilibrium of the semigroup solving the abstract Kolmogorov
equation corresponding to the Langevin equation with multiplicative noise
(1.1).
We remark that most of the conditions are verified analogously to [7], the
main difference being the proof of essential m-dissipativity for the operator
$(L,C_{c}^{\infty}(\mathbb{R}^{2d}))$ as well as the first inequality in (H4).
Nevertheless, some care has to be taken whenever $S$ is involved, as it does
not preserve regularity to the same extent as in the given reference.
### 3.1 The data conditions
We start by introducing the setting and verifying the data conditions (D). The
notations introduced in this part will be used for the remainder of the
section without further mention.
Let $d\in\mathbb{N}$ and set the state space as $E=\mathbb{R}^{2d}$,
$\mathcal{F}=\mathcal{B}(\mathbb{R}^{2d})$. In the following, the first $d$
components of $E$ will be written as $x$, the latter $d$ components as $v$.
Let $\nu$ be the normalised Gaussian measure on $\mathbb{R}^{d}$ with mean
zero and covariance matrix $I$, i.e.
$\nu(A)=\int_{A}(2\pi)^{-\frac{d}{2}}\
\mathrm{e}^{-\frac{x^{2}}{2}}\,\mathrm{d}x.$
###### Assumption (P).
The potential $\Phi:\mathbb{R}^{d}\to\mathbb{R}$ is assumed to depend only on
the position variable $x$ and to be locally Lipschitz-continuous. We further
assume $\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x$ to be a probability measure on
$(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$.
Note that the first part implies $\Phi\in
H_{\text{loc}}^{1,\infty}(\mathbb{R}^{d})$. Moreover, $\Phi$ is differentiable
$\mathrm{d}x$-a.e. on $\mathbb{R}^{d}$, such that the weak gradient and the
derivative of $\Phi$ coincide $\mathrm{d}x$-a.e. on $\mathbb{R}^{d}$. In the
following, we fix a version of $\nabla\Phi$.
The probability measure $\mu$ on $(E,\mathcal{F})$ is then given by
$\mu=\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x\otimes\nu$, and we set
$H\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=L^{2}(E,\mu)$, which
satisfies condition (D1). Next we assume the following about
$\Sigma=(a_{ij})_{1\leq i,j\leq d}$ with $a_{ij}:\mathbb{R}^{d}\to\mathbb{R}$:
###### Assumption ($\Sigma$1).
$\Sigma$ is symmetric and uniformly strictly elliptic, i.e. there is some
$c_{\Sigma}>0$ such that
$(y,\Sigma(v)y)\geq c_{\Sigma}\cdot|y|^{2}\quad\text{ for all
}y,v\in\mathbb{R}^{d}.$
###### Assumption ($\Sigma$2).
There is some $p>d$ such that for all $1\leq i,j\leq d$, it holds that
$a_{ij}\in H_{\text{loc}}^{1,p}(\mathbb{R}^{d},\nu)\cap
L^{\infty}(\mathbb{R}^{d})$. Additionally, $a_{ij}$ is locally Lipschitz-
continuous for all $1\leq i,j\leq d$.
Additionally, we will consider one of the following conditions on the growth
of the partial derivatives:
###### Assumption ($\Sigma$3).
There are constants $0\leq M<\infty$, $-\infty<\beta\leq 0$ such that for all
$1\leq i,j,k\leq d$
$|\partial_{k}a_{ij}(v)|\leq M(1+|v|)^{\beta}\quad\text{ for $\nu$-almost all
}v\in\mathbb{R}^{d}.$
###### Assumption ($\Sigma$3′).
There are constants $0\leq M<\infty$, $0<\beta<1$ such that for all $1\leq
i,j,k\leq d$
$|\partial_{k}a_{ij}(v)|\leq
M(\mathds{1}_{B_{1}(0)}(v)+|v|^{\beta})\quad\text{ for $\nu$-almost all
}v\in\mathbb{R}^{d}.$
We note that any of these imply $\partial_{j}a_{ij}\in
L^{2}(\mathbb{R}^{d},\nu)$ for all $1\leq i,j\leq d$.
###### Definition 3.1.
Let $\Sigma$ satisfy ($\Sigma$2). Then we set
$M_{\Sigma}:=\max\\{\|a_{ij}\|_{\infty}:1\leq i,j\leq d\\}\qquad\text{ and }\qquad B_{\Sigma}:=\max\\{|\partial_{j}a_{ij}(v)|:v\in\overline{B_{1}(0)},\ 1\leq i,j\leq d\\}.$
If $\Sigma$ additionally satisfies ($\Sigma$3), then we define
$N_{\Sigma}:=\sqrt{M_{\Sigma}^{2}+(B_{\Sigma}\vee M)^{2}}.$
If instead ($\Sigma$3′) is fulfilled, then we set
$N_{\Sigma}:=\sqrt{M_{\Sigma}^{2}+B_{\Sigma}^{2}+dM^{2}}.$
###### Definition 3.2.
Let $D=C_{c}^{\infty}(E)$ be the space of compactly supported smooth functions
on $E$. We define the linear operators $S,A$ and $L$ on $D$ via
$Sf=\sum_{i,j=1}^{d}a_{ij}\partial_{v_{j}}\partial_{v_{i}}f+\sum_{i=1}^{d}b_{i}\partial_{v_{i}}f,\qquad\text{ where }b_{i}(v)=\sum_{j=1}^{d}(\partial_{j}a_{ij}(v)-a_{ij}(v)v_{j}),$
$Af=\nabla\Phi(x)\cdot\nabla_{v}f-v\cdot\nabla_{x}f,$
$Lf=(S-A)f,\qquad\text{ for }f\in D.$
Integration by parts shows that $(S,D)$ is symmetric and non-positive definite
on $H$, and $(A,D)$ is antisymmetric on $H$. Hence, all three operators with
domain $D$ are dissipative and therefore closable. We denote their closure
respectively by $(S,D(S)),(A,D(A))$ and $(L,D(L))$.
For $f\in D$ and $g\in H^{1,2}(E,\mu)$, integration by parts yields
$(Lf,g)_{H}=-\int_{E}\left(\nabla f,\begin{pmatrix}0&-I\\ I&\Sigma\end{pmatrix}\nabla g\right)_{\mathrm{euc}}\,\mathrm{d}\mu.$
In particular, (D6) is obviously fulfilled. Next we provide an estimate which
will be needed later:
###### Proposition 3.3.
Let ($\Sigma$2) and either ($\Sigma$3) or ($\Sigma$3′) hold respectively and
recall Definition 3.1. Then for all $1\leq i,j\leq d$, it holds that
$\|\partial_{j}a_{ij}-a_{ij}v_{j}\|_{L^{2}(\nu)}\leq N_{\Sigma}.$
###### Proof:
Due to integration by parts, it holds that
$\int_{\mathbb{R}^{d}}a_{ij}^{2}v_{j}^{2}\,\mathrm{d}\nu=\int_{\mathbb{R}^{d}}a_{ij}^{2}+2a_{ij}v_{j}\partial_{j}a_{ij}\,\mathrm{d}\nu.$
Hence we obtain in the case ($\Sigma$3′)
$\int_{\mathbb{R}^{d}}(\partial_{j}a_{ij}-a_{ij}v_{j})^{2}\,\mathrm{d}\nu=\int_{\mathbb{R}^{d}}(\partial_{j}a_{ij})^{2}+a_{ij}^{2}\,\mathrm{d}\nu\leq\int_{B_{1}(0)}(\partial_{j}a_{ij})^{2}\,\mathrm{d}\nu+\int_{\mathbb{R}^{d}\setminus B_{1}(0)}(\partial_{j}a_{ij})^{2}\,\mathrm{d}\nu+M_{\Sigma}^{2}\leq B_{\Sigma}^{2}+\int_{\mathbb{R}^{d}\setminus B_{1}(0)}(M|v|^{\beta})^{2}\,\mathrm{d}\nu+M_{\Sigma}^{2}\leq B_{\Sigma}^{2}+M_{\Sigma}^{2}+\sum_{k=1}^{d}M^{2}\int_{\mathbb{R}^{d}}v_{k}^{2}\,\mathrm{d}\nu=B_{\Sigma}^{2}+M_{\Sigma}^{2}+M^{2}d.$
The case ($\Sigma$3) follows from $(\partial_{j}a_{ij})^{2}\leq(B_{\Sigma}\vee M)^{2}$. $\square$
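To make the bound tangible, here is a quick Monte Carlo sanity check (our sketch) for the toy coefficient $a(v)=2+\sin v$ in $d=1$: there $M_{\Sigma}=3$, and ($\Sigma$3) holds with $M=1$, $\beta=0$ and $B_{\Sigma}=1$, so $N_{\Sigma}^{2}=M_{\Sigma}^{2}+(B_{\Sigma}\vee M)^{2}=10$.

```python
# Monte Carlo check (our sketch) of Proposition 3.3 for a(v) = 2 + sin(v),
# d = 1: the proposition gives E_nu[(a'(v) - a(v)*v)^2] <= N_Sigma^2 = 10.
import numpy as np

rng = np.random.default_rng(2)
v = rng.standard_normal(2_000_000)            # samples from nu
val = np.mean((np.cos(v) - (2.0 + np.sin(v)) * v) ** 2)
print(f"{val:.3f} <= 10: {val <= 10.0}")      # approx. 5.0 in practice
```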
We now state the essential m-dissipativity result, which will be proven in the
next section.
###### Theorem 3.4.
Let ($\Sigma$1), ($\Sigma$2) and either ($\Sigma$3) or ($\Sigma$3′) be
fulfilled, and let $\Phi$ be as in (P). Assume further that $\Phi$ is bounded
from below and that $|\nabla\Phi|\in
L^{2}(\mathbb{R}^{d},\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)$. If $\beta$ is
larger than $-1$, then assume additionally that there is some $N<\infty$ such
that
$|\nabla\Phi(x)|\leq N(1+|x|^{\gamma}),\quad\text{ where
}\gamma<\frac{2}{1+\beta}.$
Then the linear operator $(L,C_{c}^{\infty}(\mathbb{R}^{2d}))$ is essentially
m-dissipative and hence its closure $(L,D(L))$ generates a strongly continuous
contraction semigroup on $H$. In particular, the conditions (D2)-(D4) are
satisfied.
Let us now introduce the orthogonal projections $P_{S}$ and $P$:
###### Definition 3.5.
Define $P_{S}:H\to H$ as
$P_{S}f=\int_{\mathbb{R}^{d}}f\,\mathrm{d}\nu(v),\qquad f\in H,$
where integration is understood w.r.t. the velocity variable $v$. By Fubini’s
theorem and the fact that $\nu$ is a probability measure on
$(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$, it follows that $P_{S}$ is a
well-defined orthogonal projection on $H$ with
$P_{S}f\in
L^{2}(\mathbb{R}^{d},\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x),\quad\|P_{S}f\|_{L^{2}(\mathbb{R}^{d},\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}=\|P_{S}f\|_{H},\quad
f\in H,$
where $L^{2}(\mathbb{R}^{d},\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)$ is
interpreted as embedded in $H$.
Then define $P:H\to H$ via $Pf=P_{S}f-(f,1)_{H}$ for $f\in H$. Again, $P$ is
an orthogonal projection on $H$ with
$Pf\in
L^{2}(\mathbb{R}^{d},\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x),\quad\|Pf\|_{L^{2}(\mathbb{R}^{d},\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}=\|Pf\|_{H},\quad
f\in H.$
Additionally, for each $f\in D$, $P_{S}f$ admits a unique representation in
$C_{c}^{\infty}(\mathbb{R}^{d})$, which we will denote by $f_{S}\in
C_{c}^{\infty}(\mathbb{R}^{d})$.
In order to show the last remaining conditions (D5) and (D7), we will make use
of a standard sequence of cutoff functions as specified below:
###### Definition 3.6.
Let $\varphi\in C_{c}^{\infty}(\mathbb{R}^{d})$ such that $0\leq\varphi\leq
1$, $\varphi=1$ on $B_{1}(0)$ and $\varphi=0$ outside of $B_{2}(0)$. Define
$\varphi_{n}(z):=\varphi(\frac{z}{n})$ for each $z\in\mathbb{R}^{d}$,
$n\in\mathbb{N}$. Then there exists a constant $C<\infty$ independent of
$n\in\mathbb{N}$ such that
$|\partial_{i}\varphi_{n}(z)|\leq\frac{C}{n},\ |\partial_{ij}\varphi_{n}(z)|\leq\frac{C}{n^{2}}\quad\text{ for all }z\in\mathbb{R}^{d},1\leq i,j\leq d.$
Moreover $0\leq\varphi_{n}\leq 1$ for all $n\in\mathbb{N}$ and $\varphi_{n}\to
1$ pointwise on $\mathbb{R}^{d}$ as $n\to\infty$.
###### Lemma 3.7.
Let ($\Sigma$2) and either ($\Sigma$3) or ($\Sigma$3′) be fulfilled, and let
$\Phi$ be as in (P). Then the operator $L$ satisfies the following:
1. (i)
$P(H)\subset D(S)$ with $SPf=0$ for all $f\in H$,
2. (ii)
$P(D)\subset D(A)$ and $APf=-v\cdot\nabla_{x}(P_{S}f)$,
3. (iii)
$AP(D)\subset D(A)$ with $A^{2}Pf=\langle v,\nabla_{x}^{2}(P_{S}f)v\rangle-\nabla\Phi\cdot\nabla_{x}(P_{S}f)$.
4. (iv)
It holds $1\in D(L)$ and $L1=0$.
In particular, (D5) and (D7) are fulfilled.
###### Proof:
We only show (i), as the other parts can be shown exactly as in [7]. First,
let $f\in C_{c}^{\infty}(\mathbb{R}^{d})$ and define $f_{n}\in D$ via
$f_{n}(x,v):=f(x)\varphi_{n}(v)$.
Then by Lebesgue’s dominated convergence theorem and the inequalities in the
previous definition,
$Sf_{n}=f\cdot\left(\sum_{i,j=1}^{d}a_{ij}\partial_{ij}\varphi_{n}+\sum_{i=1}^{d}b_{i}\partial_{i}\varphi_{n}\right)\to
0\quad\text{ in $H$ as }n\to\infty,$
since $a_{ij}\in L^{\infty}(\mathbb{R}^{d})\subset L^{2}(\mathbb{R}^{d},\nu)$,
$|v|\in L^{2}(\mathbb{R}^{d},\nu)$ and $\partial_{j}a_{ij}\in
L^{2}(\mathbb{R}^{d},\nu)$ for all $1\leq i,j\leq d$.
Since $f_{n}\to f$ in $H$ and by closedness of $(S,D(S))$, this implies $f\in
D(S)$ with $Sf=0$, where $f$ is interpreted as an element of $H$.
Now let $g\in P(H)$ and identify $g$ as an element of
$L^{2}(\mathbb{R}^{d},\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)$. Then there exist
$g_{n}\in C_{c}^{\infty}(\mathbb{R}^{d})$ with $g_{n}\to g$ in
$L^{2}(\mathbb{R}^{d},\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)$ as $n\to\infty$.
Identifying all $g_{n}$ and $g$ with elements in $H$ then yields $g_{n}\to g$
in $H$ as $n\to\infty$ and $g_{n}\in D(S)$, $Sg_{n}=0$ for all
$n\in\mathbb{N}$. Therefore, again by closedness of $(S,D(S))$, $g\in D(S)$
and $Sg=0$. $\square$
### 3.2 The hypocoercivity conditions
Now we verify the hypocoercivity conditions (H1)-(H4) for the operator $L$.
From here on, we will assume $\Sigma$ to satisfy ($\Sigma$1), ($\Sigma$2) and
either ($\Sigma$3) or ($\Sigma$3′), with $N_{\Sigma}$ referring to the
appropriate constant as in Definition 3.1. Analogously to [7] we introduce the
following conditions:
###### Hypocoercivity assumptions (C1)-(C3).
We require the following assumptions on $\Phi:\mathbb{R}^{d}\to\mathbb{R}$:
1. (C1)
The potential $\Phi$ is bounded from below, is an element of
$C^{2}(\mathbb{R}^{d})$ and $\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x$ is a
probability measure on $(\mathbb{R}^{d},\mathcal{B}(\mathbb{R}^{d}))$.
2. (C2)
The probability measure $\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x$ satisfies a
Poincaré inequality of the form
$\|\nabla
f\|_{L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}^{2}\geq\Lambda\|f-(f,1)_{L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}\|_{L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}^{2}$
for some $\Lambda\in(0,\infty)$ and all $f\in C_{c}^{\infty}(\mathbb{R}^{d})$.
3. (C3)
There exists a constant $c<\infty$ such that
$|\nabla^{2}\Phi(x)|\leq c(1+|\nabla\Phi(x)|)\quad\text{ for all
}x\in\mathbb{R}^{d}.$
Note that in particular, (C1) implies (P). As shown in [2, Lemma A.24],
conditions (C3) and (C1) imply $\nabla\Phi\in
L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)$.
Since we only change the operator $(S,D(S))$ in comparison to the framework of
[7], the results stated there involving only $(A,D(A))$ and the projections
also hold here and are collected as follows:
###### Proposition 3.8.
Let $\Phi$ satisfy (P). Then the following hold:
1. (i)
Assume additionally $\nabla\Phi\in L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)$.
Then (H1) is fulfilled.
2. (ii)
Assume that $\Phi$ satisfies (C1) and that $\nabla\Phi\in
L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)$. Then the operator $(G,D)$ defined
by $G:=PA^{2}P$ is
essentially self-adjoint, equivalently essentially m-dissipative. For $f\in
D$, it holds
$Gf=PAAPf=\Delta f_{S}-\nabla\Phi\cdot\nabla f_{S}.$
3. (iii)
Assume that $\Phi$ satisfies (C1) and (C2) as well as $\nabla\Phi\in
L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)$. Then (H3) holds with
$\Lambda_{M}=\Lambda$.
4. (iv)
Assume that $\Phi$ satisfies (C1)-(C3). Then the second estimate in (H4) is
satisfied, and the constant there is given as $c_{2}=c_{\Phi}\in[0,\infty)$,
which only depends on the choice of $\Phi$.
It remains to show (H2) and the first half of (H4):
###### Proposition 3.9.
Let $\Phi$ be as in (P). Then Condition (H2) is satisfied with
$\Lambda_{m}=c_{\Sigma}$.
###### Proof:
Let $g\in C_{c}^{\infty}(\mathbb{R}^{d})$. The Poincaré inequality for
Gaussian measures, see for example [13], states
$\|\nabla
g\|_{L^{2}(\nu)}^{2}\geq\left\|g-\int_{\mathbb{R}^{d}}g(v)\,\mathrm{d}\nu(v)\right\|_{L^{2}(\nu)}^{2}.$
Therefore, integration by parts yields for all $f\in D$:
$(-Sf,f)_{H}=\int_{E}\langle\nabla_{v}f,\Sigma\nabla_{v}f\rangle\,\mathrm{d}\mu\geq c_{\Sigma}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}|\nabla_{v}f(x,v)|^{2}\,\mathrm{d}\nu\,\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x\geq c_{\Sigma}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(f-P_{S}f)^{2}\,\mathrm{d}\nu\,\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x=c_{\Sigma}\|(I-P_{S})f\|_{H}^{2}.$
$\square$
Finally, we verify the first part of (H4):
###### Proposition 3.10.
Assume that $\Phi$ satisfies (C1) and (C2) as well as $\nabla\Phi\in
L^{2}(\mathrm{e}^{-\Phi}\,\mathrm{d}x)$. Then the first inequality of (H4) is
also satisfied with $c_{1}=d_{\Sigma}:=\sqrt{2d^{3}}N_{\Sigma}$.
###### Proof:
For $f\in D$, define $Tf\in H$ by
$Tf:=\sum_{i=1}^{d}b_{i}\partial_{i}(f_{S})=\sum_{i,j=1}^{d}(\partial_{j}a_{ij}-a_{ij}v_{j})\partial_{x_{i}}(P_{S}f).$
We want to apply Lemma 2.3 to the operator $(S,D(S))$. Let $f\in D$, $h\in
D(S)$ and $h_{n}\in D$ such that $h_{n}\to h$ and $Sh_{n}\to Sh$ in $H$ as
$n\to\infty$. Then, by integration by parts,
$(Sh,APf)_{H}=\lim_{n\to\infty}(Sh_{n},-v\cdot\nabla_{x}(P_{S}f))_{H}=\lim_{n\to\infty}(h_{n},-Tf)_{H}=(h,-Tf)_{H}.$
This shows $APf\in D(S^{*})$ and by the first part of Lemma 2.3, $(I-G)f\in
D((BS)^{*})$ and $(BS)^{*}(I-G)f=S^{*}APf=-Tf$. Now set $g=(I-G)f$, then, via
Proposition 3.3,
$\|(BS)^{*}g\|_{H}^{2}=\|Tf\|_{H}^{2}=\int_{E}\left(\sum_{i=1}^{d}b_{i}\partial_{i}f_{S}\right)^{2}\,\mathrm{d}\mu\leq d^{2}\sum_{i,j=1}^{d}\int_{\mathbb{R}^{d}}\int_{\mathbb{R}^{d}}(\partial_{j}a_{ij}(v)-a_{ij}(v)v_{j})^{2}\,\mathrm{d}\nu(v)\,(\partial_{x_{i}}(P_{S}f)(x))^{2}\,\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x\leq d^{3}N_{\Sigma}^{2}\sum_{i=1}^{d}\int_{\mathbb{R}^{d}}\partial_{x_{i}}(Pf)\cdot\partial_{x_{i}}(P_{S}f)\,\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x.$
A final integration by parts then yields
$\|(BS)^{*}g\|_{H}^{2}\leq-d^{3}N_{\Sigma}^{2}\int_{\mathbb{R}^{d}}Pf\cdot(\Delta_{x}P_{S}f-\nabla\Phi\cdot\nabla_{x}(P_{S}f))\,\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x=-d^{3}N_{\Sigma}^{2}\int_{\mathbb{R}^{d}}Pf\cdot Gf\,\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x\leq d^{3}N_{\Sigma}^{2}\,\|Pf\|_{L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}\cdot\|Gf\|_{L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}\leq d^{3}N_{\Sigma}^{2}\,\|Pf\|_{H}(\|(I-G)f\|_{H}+\|f\|_{H})\leq 2d^{3}N_{\Sigma}^{2}\,\|g\|_{H}^{2},$
where the last inequality is due to dissipativity of $(G,D)$. $\square$
###### Proof (of Theorem 1.1):
Under the given assumptions, all conditions (C1)-(C3), ($\Sigma$1),
($\Sigma$2) and either ($\Sigma$3) or ($\Sigma$3′) are satisfied. Therefore
hypocoercivity follows by the previous propositions and Theorem 2.2. It
remains to show the stated convergence rate, which will be done as in [7] or
[14] using the determined values for $c_{1}$, $c_{2}$, $\Lambda_{M}$ and
$\Lambda_{m}$. Fix
$\delta\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\frac{\Lambda}{1+\Lambda}\frac{1}{1+c_{\Phi}+d_{\Sigma}}.$
Then the coefficients on the right hand side of (2.1) can be written as
$c_{\Sigma}-\varepsilon r_{\Phi}(N_{\Sigma})$ and $\varepsilon s_{\Phi}$
respectively, where
$\displaystyle r_{\Phi}(N_{\Sigma})$
$\displaystyle\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=(1+c_{\Phi}+\sqrt{2d^{3}}N_{\Sigma})\left(1+\frac{1+\Lambda}{2\Lambda}(1+c_{\Phi}+\sqrt{2d^{3}}N_{\Sigma})\right)\quad\text{
and }$ $\displaystyle s_{\Phi}$
$\displaystyle\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\frac{1}{2}\frac{\Lambda}{1+\Lambda},$
while $\varepsilon=\varepsilon_{\Phi}(\Sigma)\in(0,1)$ still needs to be
determined. Write $r_{\Phi}(N_{\Sigma})+s_{\Phi}$ as the polynomial
$r_{\Phi}(N_{\Sigma})+s_{\Phi}=a_{1}+a_{2}N_{\Sigma}+a_{3}N_{\Sigma}^{2},$
where all $a_{i}\in(0,\infty)$, $i=1,\dots,3$ depend on $\Phi$. Then define
$\tilde{\varepsilon}_{\Phi}(N_{\Sigma})\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\frac{N_{\Sigma}}{r_{\Phi}(N_{\Sigma})+s_{\Phi}}=\frac{N_{\Sigma}}{a_{1}+a_{2}N_{\Sigma}+a_{3}N_{\Sigma}^{2}}.$
Some rough estimates show $\tilde{\varepsilon}_{\Phi}(N_{\Sigma})\in(0,1)$.
Now let $v>0$ be arbitrary and set
$\varepsilon\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\frac{v}{1+v}\frac{c_{\Sigma}}{N_{\Sigma}}\tilde{\varepsilon}_{\Phi}(N_{\Sigma})\in(0,1).$
Then $\varepsilon r_{\Phi}(N_{\Sigma})+\varepsilon
s_{\Phi}=\frac{v}{1+v}c_{\Sigma}<c_{\Sigma}$, hence we get the estimate
$c_{\Sigma}-\varepsilon r_{\Phi}(N_{\Sigma})>\varepsilon
s_{\Phi}=\frac{v}{1+v}\frac{2c_{\Sigma}}{n_{1}+n_{2}N_{\Sigma}+n_{3}N_{\Sigma}^{2}}=\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}\kappa,$
where all $n_{i}\in(0,\infty)$ depend on $\Phi$ and are given by
$n_{i}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\frac{2}{s_{\Phi}}a_{i},\qquad\text{
for each }i=1,\dots,3.$
Clearly, $\kappa$, $\varepsilon$ and $\delta$ now solve (2.1) and the
convergence rate coefficients are given via Theorem 2.2 by
$\displaystyle\kappa_{1}$
$\displaystyle=\sqrt{\frac{1+\varepsilon}{1-\varepsilon}}=\sqrt{\frac{1+v+\frac{c_{\Sigma}}{N_{\Sigma}}\tilde{\varepsilon}_{\Phi}(N_{\Sigma})v}{1+v-\frac{c_{\Sigma}}{N_{\Sigma}}\tilde{\varepsilon}_{\Phi}(N_{\Sigma})v}}\leq\sqrt{1+2v+v^{2}}=1+v\quad\text{
and }$ $\displaystyle\kappa_{2}$
$\displaystyle=\frac{\kappa}{1+\varepsilon}>\frac{1}{2}\kappa.$
Hence, by choosing $\theta_{1}=1+v$ and
$\theta_{2}=\frac{1}{2}\kappa=\frac{\theta_{1}-1}{\theta_{1}}\frac{c_{\Sigma}}{n_{1}+n_{2}N_{\Sigma}+n_{3}N_{\Sigma}^{2}}$,
the rate of convergence claimed in the theorem is shown. $\square$
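To make the rate computation concrete, the following sketch evaluates $\delta$, $\varepsilon$, $\kappa$ and the resulting coefficients $\theta_{1}$, $\theta_{2}$; all parameter values ($\Lambda$, $c_{\Phi}$, $c_{\Sigma}$, $N_{\Sigma}$, $d$, $v$) are placeholders and not derived from a specific potential.

```python
import math

# Illustrative evaluation of the convergence rate coefficients from the
# proof above; all parameter values below are placeholders.
Lam, c_Phi, c_Sig, N_Sig, d, v = 1.0, 2.0, 0.5, 3.0, 3, 0.1

d_Sig = math.sqrt(2 * d**3) * N_Sig
delta = Lam / (1 + Lam) / (1 + c_Phi + d_Sig)

r_Phi = (1 + c_Phi + d_Sig) * (1 + (1 + Lam) / (2 * Lam) * (1 + c_Phi + d_Sig))
s_Phi = 0.5 * Lam / (1 + Lam)

eps_tilde = N_Sig / (r_Phi + s_Phi)             # lies in (0, 1)
eps = v / (1 + v) * c_Sig / N_Sig * eps_tilde   # lies in (0, 1)
kappa = eps * s_Phi                             # = v/(1+v) * c_Sig * s_Phi / (r_Phi + s_Phi)

theta1, theta2 = 1 + v, 0.5 * kappa
print(f"delta = {delta:.4f}, theta1 = {theta1:.3f}, theta2 = {theta2:.3e}")
```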
###### Remark 3.11.
We remark here that all previous considerations up to the explicit rate of
convergence can also be applied to the formal adjoint operator $(L^{*},D)$
with $L^{*}=S+A$, the closure of which generates the adjoint semigroup
$(T_{t}^{*})_{t\geq 0}$ on $H$. For example, the perturbation procedure to
prove essential m-dissipativity is exactly the same as for $L$, since the sign
of $A$ does not matter due to antisymmetry. We can use this to construct
solutions to the corresponding Fokker-Planck PDE associated with our Langevin
dynamics, see Section 5.3.
## 4 Essential m-dissipativity of the Langevin operator
The goal of this section is to prove Theorem 3.4. We start by giving some
basics on perturbation of semigroup generators.
### 4.1 Basics on generators and perturbation
###### Definition 4.1.
Let $(A,D(A))$ and $(B,D(B))$ be linear operators on $H$. Then $B$ is said to
be _$A$-bounded_ if $D(A)\subset D(B)$ and there exist constants $a,b<\infty$
such that
$\|Bf\|_{H}\leq a\|Af\|_{H}+b\|f\|_{H}$ (4.1)
holds for all $f\in D(A)$. The number $\inf\\{a\in\mathbb{R}\mid\text{(4.1)
holds for some }b<\infty\\}$ is called the _$A$-bound_ of $B$.
###### Theorem 4.2.
Let $D\subset H$ be a dense linear subspace, $(A,D)$ be an essentially
m-dissipative linear operator on $H$ and let $(B,D)$ be dissipative and
$A$-bounded with $A$-bound strictly less than $1$. Then $(A+B,D)$ is
essentially m-dissipative and its closure is given by
$(\overline{A}+\overline{B},D(\overline{A}))$.
A useful criterion for verifying $A$-boundedness is given by:
###### Lemma 4.3.
Let $D\subset H$ be a dense linear subspace, $(A,D)$ be essentially
m-dissipative and $(B,D)$ be dissipative. Assume that there exist constants
$c,d<\infty$ such that
$\|Bf\|_{H}^{2}\leq c(-Af,f)_{H}+d\|f\|_{H}^{2}$
holds for all $f\in D$. Then $B$ is $A$-bounded with $A$-bound $0$.
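For the reader's convenience, the standard argument behind Lemma 4.3 can be sketched in one display: by Cauchy-Schwarz and Young's inequality, for any $\varepsilon>0$,
$\|Bf\|_{H}^{2}\leq c(-Af,f)_{H}+d\|f\|_{H}^{2}\leq c\|Af\|_{H}\|f\|_{H}+d\|f\|_{H}^{2}\leq\varepsilon^{2}\|Af\|_{H}^{2}+\left(\frac{c^{2}}{4\varepsilon^{2}}+d\right)\|f\|_{H}^{2},$
hence $\|Bf\|_{H}\leq\varepsilon\|Af\|_{H}+\left(\frac{c^{2}}{4\varepsilon^{2}}+d\right)^{1/2}\|f\|_{H}$, and letting $\varepsilon\to 0$ shows that the $A$-bound of $B$ is $0$.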
We also require the following generalization of the perturbation method:
###### Lemma 4.4.
Let $D\subset H$ be a dense linear subspace, $(A,D)$ be essentially
m-dissipative and $(B,D)$ be dissipative on $H$. Assume that there exists a
complete orthogonal family $(P_{n})_{n\in\mathbb{N}}$, i.e. each $P_{n}$ is an
orthogonal projection, $P_{n}P_{m}=0$ for all $n\neq m$ and
$\sum_{n\in\mathbb{N}}P_{n}=I$ strongly, such that
$P_{n}(D)\subset D,\qquad P_{n}A=AP_{n},\quad\text{ and }\quad P_{n}B=BP_{n}$
for all $n\in\mathbb{N}$. Set
$A_{n}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=AP_{n}$,
$B_{n}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=BP_{n}$, both
with domain
$D_{n}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=P_{n}(D)$, as
operators on $P_{n}(H)$. Assume that each $B_{n}$ is $A_{n}$-bounded with
$A_{n}$-bound strictly less than $1$. Then $(A+B,D)$ is essentially
m-dissipative.
### 4.2 The symmetric part
We first prove essential self-adjointness, equivalently essential
m-dissipativity, for a certain class of symmetric differential operators on
specific Hilbert spaces. This is essentially a combination of two results by
Bogachev, Krylov, and Röckner, namely [10, Corollary 2.10] and [12, Theorem
7]; however, the combined statement does not seem to be well known and might
hold interest as the basis for similar m-dissipativity proofs. We use the
slightly more general statement from [11, Theorem 5.1] in order to relax the
assumptions.
###### Theorem 4.5.
Let $d\geq 2$ and consider $H=L^{2}(\mathbb{R}^{d},\mu)$ where
$\mu=\rho\,\mathrm{d}x$, $\rho=\varphi^{2}$ for some $\varphi\in
H_{\mathrm{loc}}^{1,2}(\mathbb{R}^{d})$ such that $\frac{1}{\rho}\in
L_{\mathrm{loc}}^{\infty}(\mathbb{R}^{d})$. Let $A=(a_{ij})_{1\leq i,j\leq
d}:\mathbb{R}^{d}\to\mathbb{R}^{d\times d}$ be symmetric and locally strictly
elliptic with $a_{ij}\in L^{\infty}(\mathbb{R}^{d})$ for all $1\leq i,j\leq
d$. Assume there is some $p>d$ such that $a_{ij}\in
H_{\mathrm{loc}}^{1,p}(\mathbb{R}^{d})$ for all $1\leq i,j\leq d$ and that
$|\nabla\rho|\in L_{\mathrm{loc}}^{p}(\mathbb{R}^{d})$. Consider the bilinear
form $(B,D)$ given by $D=C_{c}^{\infty}(\mathbb{R}^{d})$ and
$B(f,g)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=(\nabla
f,A\nabla g)_{H}=\int_{\mathbb{R}^{d}}(\nabla f(x),A(x)\nabla
g(x))_{\mathrm{euc}}\,\rho(x)\,\mathrm{d}x,\qquad f,g\in D.$
Define further the linear operator $(S,D)$ via
$Sf\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\sum_{i,j=1}^{d}a_{ij}\partial_{j}\partial_{i}f+\sum_{i=1}^{d}b_{i}\partial_{i}f,\qquad
f\in D,$
where
$b_{i}=\sum_{j=1}^{d}(\partial_{j}a_{ij}+a_{ij}\frac{\partial_{j}\rho}{\rho})\in
L_{\mathrm{loc}}^{p}(\mathbb{R}^{d})$, so that $B(f,g)=(-Sf,g)_{H}$. Then
$(S,D)$ is essentially self-adjoint on $H$.
###### Proof:
Analogously to the proof of [12, Theorem 7], it can be shown that $\rho$ is
continuous, hence locally bounded. Assume that there is some $g\in H$ such
that
$\int_{\mathbb{R}^{d}}(S-I)f(x)\cdot
g(x)\cdot\rho(x)\,\mathrm{d}x=0\quad\text{ for all }f\in D.$ (4.2)
Define the locally finite signed Borel measure $\nu$ via
$\nu=g\rho\,\mathrm{d}x$, which is then absolutely continuous with respect to
the Lebesgue measure. By definition it holds that
$\int_{\mathbb{R}^{d}}\left(\sum_{i,j=1}^{d}a_{ij}\partial_{j}\partial_{i}f+\sum_{i=1}^{d}b_{i}\partial_{i}f-f\right)\,\mathrm{d}\nu=0\quad\text{
for all }f\in D,$
so by [11, Theorem 5.1], the density $g\cdot\rho$ of $\nu$ is in
$H_{\mathrm{loc}}^{1,p}(\mathbb{R}^{d})$ and locally Hölder continuous, hence
locally bounded. This implies $g=g\rho\cdot\frac{1}{\rho}\in
L_{\mathrm{loc}}^{p}(\mathbb{R}^{d})\cap
L_{\mathrm{loc}}^{\infty}(\mathbb{R}^{d})$ and $\nabla
g=\nabla(g\rho)\cdot\frac{1}{\rho}-(g\rho)\frac{\nabla\rho}{\rho^{2}}\in
L_{\mathrm{loc}}^{p}(\mathbb{R}^{d})$. Hence $g\in
H_{\mathrm{loc}}^{1,p}(\mathbb{R}^{d})$, is locally bounded, and $g\cdot
b_{i}\in L_{\mathrm{loc}}^{p}(\mathbb{R}^{d})$ for all $1\leq i\leq d$.
Therefore, we can apply integration by parts to (4.2) and get for every $f\in
D$:
$\displaystyle 0$
$\displaystyle=-\sum_{i,j=1}^{d}(a_{ij}\partial_{i}f,\partial_{j}g)_{H}-\sum_{i=1}^{d}(\partial_{i}f,b_{i}g)_{H}+\sum_{i=1}^{d}(\partial_{i}f,b_{i}g)_{H}-(f,g)_{H}$
(4.3) $\displaystyle=-\int_{\mathbb{R}^{d}}(\nabla f,A\nabla
g)_{\mathrm{euc}}\,\mathrm{d}\mu-(f,g)_{H}.$
Note that this equation can then be extended to all $f\in
H^{1,2}(\mathbb{R}^{d})$ with compact support, since $p>2$ by definition. Now
let $\psi\in C_{c}^{\infty}(\mathbb{R}^{d})$ and set $\eta=\psi g\in
H^{1,2}(\mathbb{R}^{d})$, which has compact support. The same then holds for
$f\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\psi\eta\in
H^{1,2}(\mathbb{R}^{d})$. Elementary application of the product rule yields
$(\nabla\eta,A\nabla(\psi g))_{\mathrm{euc}}=(\nabla f,A\nabla
g)_{\mathrm{euc}}-\eta(\nabla\psi,A\nabla
g)_{\mathrm{euc}}+g(\nabla\eta,A\nabla\psi)_{\mathrm{euc}}.$ (4.4)
From now on, for vector fields $a,b:\mathbb{R}^{d}\to\mathbb{R}^{d}$, we write
$(a,b)$ for the pointwise evaluation of the Euclidean inner product
$(a,b)_{\mathrm{euc}}$.
By using (4.4) and applying (4.3) to $f$, we get
$\displaystyle\int_{\mathbb{R}^{d}}$ $\displaystyle(\nabla(\psi
g),A\nabla(\psi g))\,\mathrm{d}\mu+\int_{\mathbb{R}^{d}}(\psi
g)^{2}\,\mathrm{d}\mu=\int_{\mathbb{R}^{d}}(\nabla\eta,A\nabla(\psi
g))\,\mathrm{d}\mu+\int_{\mathbb{R}^{d}}\eta\psi g\,\mathrm{d}\mu$
$\displaystyle=\int_{\mathbb{R}^{d}}(\nabla f,A\nabla
g)\,\mathrm{d}\mu-\int_{\mathbb{R}^{d}}\eta(\nabla\psi,A\nabla
g)\,\mathrm{d}\mu+\int_{\mathbb{R}^{d}}g(\nabla\eta,A\nabla\psi)\,\mathrm{d}\mu+\int_{\mathbb{R}^{d}}fg\,\mathrm{d}\mu$
$\displaystyle=-\int_{\mathbb{R}^{d}}\psi g(\nabla\psi,A\nabla
g)\,\mathrm{d}\mu+\int_{\mathbb{R}^{d}}g(\nabla(\psi
g),A\nabla\psi)\,\mathrm{d}\mu$
$\displaystyle=\int_{\mathbb{R}^{d}}g^{2}(\nabla\psi,A\nabla\psi)\,\mathrm{d}\mu,$
where the last step follows from the product rule and symmetry of $A$. Since
$A$ is locally strictly elliptic, there is some $c>0$ such that
$0\leq\int_{\mathbb{R}^{d}}c(\nabla(\psi g),\nabla(\psi
g))\,\mathrm{d}\mu\leq\int_{\mathbb{R}^{d}}(\nabla(\psi g),A\nabla(\psi
g))\,\mathrm{d}\mu$
and therefore it follows that
$\int_{\mathbb{R}^{d}}(\psi
g)^{2}\,\mathrm{d}\mu\leq\int_{\mathbb{R}^{d}}g^{2}(\nabla\psi,A\nabla\psi)\,\mathrm{d}\mu.$
(4.5)
Let $(\psi_{n})_{n\in\mathbb{N}}$ be as in Definition 3.6. Then (4.5) holds
for all $\psi=\psi_{n}$. By dominated convergence, the left-hand side converges
to $\|g\|_{H}^{2}$ as $n\to\infty$. The integrand on the right-hand side is
dominated by $d^{2}C^{2}M\cdot g^{2}\in L^{1}(\mu)$, where $C$ is from
Definition 3.6 and $M\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\max_{1\leq
i,j\leq d}\|a_{ij}\|_{\infty}$. By definition of the $\psi_{n}$, that
integrand converges pointwise to zero as $n\to\infty$, so again by dominated
convergence it follows that $g=0$ in $H$.
This implies that $(S-I)(D)$ is dense in $H$ and therefore that $(S,D)$ is
essentially self-adjoint. $\square$
###### Remark 4.6.
The above theorem also holds for $d=1$, as long as $p\geq 2$. Indeed,
continuity of $\rho$ follows from similar regularity estimates, see [12,
Remark 2]. The proof of [11, Theorem 5.1] mirrors the proof of [10, Theorem
2.8], where $d\geq 2$ is used to apply [10, Theorem 2.7]. However, in the
cases where it is applied, this distinction is not necessary (since
$p^{\prime}<q$ always holds). Finally, the extension of (4.3) requires $p\geq
2$.
We use this result to prove essential m-dissipativity of the symmetric part
$(S,D)$ of our operator $L$:
###### Theorem 4.7.
Let $H,D$ and the operator $S$ be defined as in Section 3.1. Then $(S,D)$ is
essentially m-dissipative on $H$. Its closure $(S,D(S))$ generates a sub-
Markovian strongly continuous contraction semigroup on $H$.
###### Proof:
Define the operator $(\tilde{S},C_{c}^{\infty}(\mathbb{R}^{d}))$ on
$L^{2}(\mathbb{R}^{d},\nu)$ by
$\tilde{S}f\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\sum_{i,j=1}^{d}a_{ij}\partial_{j}\partial_{i}f+\sum_{i=1}^{d}b_{i}\partial_{i}f,\quad
f\in C_{c}^{\infty}(\mathbb{R}^{d}).$
The density $\rho$ of $\nu$ with respect to the Lebesgue measure is given by
$\rho(v)=\mathrm{e}^{-v^{2}/2}=(\mathrm{e}^{-v^{2}/4})^{2}$. Due to the
conditions ($\Sigma$1), ($\Sigma$2) and either ($\Sigma$3) or ($\Sigma$3′),
all assumptions from Theorem 4.5 are fulfilled and therefore,
$(\tilde{S},C_{c}^{\infty}(\mathbb{R}^{d}))$ is essentially m-dissipative in
$L^{2}(\nu)$. Let $g=g_{1}\otimes g_{2}\in
C_{c}^{\infty}(\mathbb{R}^{d})\otimes C_{c}^{\infty}(\mathbb{R}^{d})$ be a
pure tensor. Then there is a sequence $(\tilde{f}_{n})_{n\in\mathbb{N}}$ in
$C_{c}^{\infty}(\mathbb{R}^{d})$ such that $(I-\tilde{S})\tilde{f}_{n}\to
g_{2}$ in $L^{2}(\nu)$ as $n\to\infty$. Define $f_{n}\in D$ for each
$n\in\mathbb{N}$ by
$f_{n}(x,v)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=g_{1}(x)\tilde{f}_{n}(v).$
Then
$\|(I-S)f_{n}-g\|_{H}=\|g_{1}\otimes((I-\tilde{S})\tilde{f}_{n}-g_{2})\|_{H}=\|g_{1}\|_{L^{2}(\mathrm{e}^{-\Phi(x)}\,\mathrm{d}x)}\cdot\|(I-\tilde{S})\tilde{f}_{n}-g_{2}\|_{L^{2}(\nu)},$
which converges to zero as $n\to\infty$. By taking linear combinations, this
shows that the closure of $(I-S)(D)$ in the $H$-norm contains
$C_{c}^{\infty}(\mathbb{R}^{d})\otimes C_{c}^{\infty}(\mathbb{R}^{d})$. Since
$C_{c}^{\infty}(\mathbb{R}^{d})\otimes C_{c}^{\infty}(\mathbb{R}^{d})$ is
dense in $H$, $(S,D)$ is essentially m-dissipative and its closure $(S,D(S))$
generates a strongly continuous contraction semigroup.
It can easily be shown that $(Sf,f^{+})_{H}\leq 0$ for all $f\in D$.
Analogously to the proof of (D7), it holds that $1\in D(S)$ and $S1=0$. This
together implies that $(S,D(S))$ is a Dirichlet operator and the generated
semigroup is sub-Markovian. $\square$
### 4.3 Perturbation of the symmetric part for nice coefficients
Now we extend the essential m-dissipativity stepwise to the non-symmetric
operator $L$ by perturbation. This follows and is mostly based on the method
seen in the proof of [15, Theorem 6.3.1], which proved that result for
$\Sigma=I$.
Since $S$ is dissipative on
$D_{1}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=L_{0}^{2}(\mathrm{e}^{-\Phi}\,\mathrm{d}x)\otimes
C_{c}^{\infty}(\mathbb{R}^{d})\supset D$, the operator $(S,D_{1})$ is
essentially m-dissipative as well. The unitary transformation
$T:L^{2}(\mathbb{R}^{2d},\mathrm{d}(x,v))\to H$ given by
$Tf(x,v)=\mathrm{e}^{\frac{v^{2}}{4}+\frac{\Phi(x)}{2}}f(x,v)$ leaves $D_{1}$
invariant. This implies that the operator $(S_{1},D_{1})$ on
$L^{2}(\mathbb{R}^{2d},\mathrm{d}(x,v))$, where $S_{1}=T^{-1}ST$, is again
essentially m-dissipative. Note that $S_{1}$ is explicitly given by
$S_{1}f=\sum_{i,j=1}^{d}a_{ij}\partial_{v_{j}}\partial_{v_{i}}f-\frac{1}{4}(v,\Sigma
v)f+\frac{1}{2}\operatorname{tr}(\Sigma)f+\sum_{i,j=1}^{d}\partial_{j}a_{ij}(\frac{v_{i}}{2}f+\partial_{v_{i}}f).$
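The explicit formula for $S_{1}$ can be verified symbolically. The following sketch checks the conjugation $S_{1}=T^{-1}ST$ in dimension one, writing $a(v)$ for $\Sigma$ and dropping the $x$-dependence, since $S$ acts only in the $v$-variable; it is an illustrative check, not part of the argument.

```python
import sympy as sp

v = sp.symbols('v', real=True)
a = sp.Function('a')(v)              # stand-in for Sigma in dimension one
f = sp.Function('f')(v)

T = sp.exp(v**2 / 4)                 # v-part of the ground-state transform
S = lambda g: a * sp.diff(g, v, 2) + (sp.diff(a, v) - a * v) * sp.diff(g, v)

lhs = S(T * f) / T                   # T^{-1} S T f
rhs = (a * sp.diff(f, v, 2) - v**2 / 4 * a * f + a / 2 * f
       + sp.diff(a, v) * (v / 2 * f + sp.diff(f, v)))
print(sp.simplify(sp.expand(lhs - rhs)))   # prints 0
```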
Now consider the operator $(iv\cdot xI,D_{1})$, i.e. multiplication by the
function $i(v,x)_{\mathrm{euc}}$, which is dissipative as
$\operatorname{Re}(iv\cdot xf,f)_{L^{2}(\mathbb{R}^{2d},\mathrm{d}(x,v))}=0$ for
$f\in D_{1}$. We show the following perturbation result:
###### Proposition 4.8.
Let $\Sigma$ satisfy ($\Sigma$3) with $\beta\leq-1$. Then the operator
$(S_{1}+iv\cdot xI,D_{1})$ is essentially m-dissipative on
$L^{2}(\mathbb{R}^{2d},\mathrm{d}(x,v))$.
###### Proof:
Define the orthogonal projections $P_{n}$ via
$P_{n}f(x,v)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\xi_{n}(x)f(x,v)$,
where $\xi_{n}$ is given by $\xi_{n}=\mathds{1}_{[n-1,n)}(|x|)$; these
projections leave $D_{1}$ invariant. Then the conditions of Lemma 4.4 are
fulfilled, and we are
left to show the $A_{n}$-bounds. Note that due to the restriction on $\beta$,
there is some constant $C<\infty$ such that $\partial_{j}a_{ij}(v)v_{i}\leq C$
for all $1\leq i,j\leq d$, $v\in\mathbb{R}^{d}$. For each fixed
$n\in\mathbb{N}$ it holds for all $f\in P_{n}D_{1}$:
$\displaystyle\|iv\cdot xf\|_{L^{2}}^{2}$ $\displaystyle\leq
n^{2}\int_{\mathbb{R}^{2d}}|v|^{2}f^{2}\,\mathrm{d}(x,v)\leq
4c_{\Sigma}^{-1}n^{2}\int_{\mathbb{R}^{2d}}\frac{(v,\Sigma
v)}{4}f^{2}\,\mathrm{d}(x,v)$ $\displaystyle\leq
4c_{\Sigma}^{-1}n^{2}\int_{\mathbb{R}^{2d}}\frac{(v,\Sigma
v)}{4}f^{2}+(\nabla_{v}f,\Sigma\nabla_{v}f)\,\mathrm{d}(x,v)$
$\displaystyle=4c_{\Sigma}^{-1}n^{2}\int_{\mathbb{R}^{2d}}\left(-\sum_{i,j=1}^{d}a_{ij}\partial_{v_{j}}\partial_{v_{i}}f-\sum_{i,j=1}^{d}\partial_{j}a_{ij}\partial_{v_{i}}f+\frac{(v,\Sigma
v)}{4}f\right)f\,\mathrm{d}(x,v)$
$\displaystyle=4c_{\Sigma}^{-1}n^{2}\left((-P_{n}S_{1}f,f)+\int_{\mathbb{R}^{2d}}\frac{1}{2}\operatorname{tr}(\Sigma)f^{2}+\sum_{i,j=1}^{d}\partial_{j}a_{ij}\frac{v_{i}}{2}f^{2}\,\mathrm{d}(x,v)\right)$
$\displaystyle\leq
4c_{\Sigma}^{-1}n^{2}\left((-S_{1}f,f)+(d^{2}C+\frac{dM_{\Sigma}}{2})\|f\|_{L^{2}}^{2}\right).$
Hence by Lemma 4.3, $(iv\cdot xIP_{n},P_{n}D_{1})$ is $S_{1}P_{n}$-bounded with
$S_{1}P_{n}$-bound zero. Application of Lemma 4.4 yields the statement. $\square$
Since $C_{c}^{\infty}(\mathbb{R}^{d})\otimes C_{c}^{\infty}(\mathbb{R}^{d})$
is dense in $D_{1}$ with respect to the graph norm of $S_{1}+iv\cdot xI$, we
obtain essential m-dissipativity of
$(S_{1}+iv\cdot xI,C_{c}^{\infty}(\mathbb{R}^{d})\otimes
C_{c}^{\infty}(\mathbb{R}^{d}))$ and therefore also of its dissipative
extension $(S_{1}+iv\cdot xI,D_{2})$ with
$D_{2}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\mathcal{S}(\mathbb{R}^{d})\otimes
C_{c}^{\infty}(\mathbb{R}^{d})$, where $\mathcal{S}(\mathbb{R}^{d})$ denotes
the space of smooth functions of rapid decrease on $\mathbb{R}^{d}$. Applying
the Fourier transform in the $x$-component leaves $D_{2}$ invariant and shows
that $(L_{2},D_{2})$ is essentially m-dissipative, where
$L_{2}=S_{1}+v\cdot\nabla_{x}$.
Now we add the part depending on the potential $\Phi$.
###### Proposition 4.9.
Let $\Sigma$ satisfy ($\Sigma$3) with $\beta\leq-1$ and $\Phi$ be Lipschitz-
continuous. Then the operator $(L^{\prime},D_{2})$ with
$L^{\prime}=L_{2}-\nabla\Phi\cdot\nabla_{v}$ is essentially m-dissipative on
$L^{2}(\mathbb{R}^{2d},\mathrm{d}(x,v))$.
###### Proof:
It holds due to antisymmetry of $v\nabla_{x}$ that
$\displaystyle\|\nabla\Phi\nabla_{v}f\|_{L^{2}}^{2}$
$\displaystyle\leq\||\nabla\Phi|\|_{\infty}^{2}c_{\Sigma}^{-1}\left((\nabla_{v}f,\Sigma\nabla_{v}f)_{L^{2}}+\left(\frac{(v,\Sigma
v)}{4}f,f\right)_{L^{2}}-(v\nabla_{x}f,f)_{L^{2}}\right)$
$\displaystyle\leq\||\nabla\Phi|\|_{\infty}^{2}c_{\Sigma}^{-1}\left((-L_{2}f,f)_{L^{2}}+(d^{2}C+\frac{dM_{\Sigma}}{2})\|f\|_{L^{2}}^{2}\right),$
analogously to the proof of Proposition 4.8, which again implies that the
antisymmetric, hence dissipative operator $(\nabla\Phi\nabla_{v},D_{2})$ is
$L_{2}$-bounded with $L_{2}$-bound zero. This shows the claim. $\square$
Denote by $H_{c}^{1,\infty}(\mathbb{R}^{d})$ the space of functions in
$H^{1,\infty}(\mathbb{R}^{d})$ with compact support and set
$D^{\prime}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=H_{c}^{1,\infty}(\mathbb{R}^{d})\otimes
C_{c}^{\infty}(\mathbb{R}^{d})$. As $(L^{\prime},D^{\prime})$ is dissipative
and its closure extends $(L^{\prime},D_{2})$, it is itself essentially
m-dissipative. The unitary transformation $T$ from the beginning of this
section leaves $D^{\prime}$ invariant, and it holds that $TL^{\prime}T^{-1}=L$
on $D^{\prime}$. This brings us to the first m-dissipativity result for the
complete Langevin operator:
###### Theorem 4.10.
Let $\Sigma$ satisfy ($\Sigma$3) with $\beta\leq-1$ and $\Phi$ be Lipschitz-
continuous. Then $(L,D)$ is essentially m-dissipative on $H$.
###### Proof:
By the previous considerations, $(L,D^{\prime})$ is essentially m-dissipative
on $H$. Let $f\in D^{\prime}$ with $f=g\otimes h$. It holds $g\in
H_{c}^{1,\infty}(\mathbb{R}^{d})\subset H^{1,2}(\mathbb{R}^{d})$. Choose a
sequence $(g_{n})_{n\in\mathbb{N}}$ with $g_{n}\in
C_{c}^{\infty}(\mathbb{R}^{d})$, such that $g_{n}\to g$ in
$H^{1,2}(\mathbb{R}^{d})$ as $n\to\infty$. Due to boundedness of
$\mathrm{e}^{-\Phi}$ and $v_{j}\mathrm{e}^{-v^{2}/2}$ for all $1\leq j\leq d$,
it follows immediately that $g_{n}\otimes h\to f$ and $L(g_{n}\otimes h)\to
Lf$ in $H$ as $n\to\infty$. This extends to arbitrary $f\in D^{\prime}$ via
linear combinations and therefore shows that
$C_{c}^{\infty}(\mathbb{R}^{d})\otimes C_{c}^{\infty}(\mathbb{R}^{d})$ and
hence also $D$, is a core for $(L,D(L))$. $\square$
### 4.4 Proof of Theorem 3.4
It is now left to relax the assumptions on $\Sigma$ and $\Phi$ by
approximation. Let the assumptions of Theorem 3.4 hold and assume without loss
of generality that $\Phi\geq 0$.
For $n\in\mathbb{N}$ we define $\Sigma_{n}$ via
$\Sigma_{n}=(a_{ij,n})_{1\leq i,j\leq d},\quad
a_{ij,n}(v)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=a_{ij}\left(\left(\frac{n}{|v|}\wedge
1\right)v\right).$
Then each $\Sigma_{n}$ also satisfies ($\Sigma$1)-($\Sigma$3) with $\beta=-1$,
since $\partial_{k}a_{ij,n}=\partial_{k}a_{ij}$ on $B_{n}(0)$ and
$|\partial_{k}a_{ij,n}|\leq\frac{(1+\sqrt{d})nL_{\Sigma,n}}{|v|}$ outside of
$\overline{B_{n}(0)}$, where $L_{\Sigma,n}$ denotes the supremum of
$\max_{1\leq k\leq d}|\partial_{k}a_{ij}|$ on $\overline{B_{n}(0)}$. Let
further $\eta_{m}\in C_{c}^{\infty}(\mathbb{R}^{d})$ for each $m\in\mathbb{N}$
with $\eta_{m}=1$ on $B_{m}(0)$ and set $\Phi_{m}=\eta_{m}\Phi$, which is
Lipschitz-continuous. Define $H_{m}$ as
$L^{2}(\mathbb{R}^{2d},\mathrm{e}^{-\frac{v^{2}}{2}-\Phi_{m}(x)}\,\mathrm{d}(x,v))$
and $(L_{n,m},D)$ via
$L_{n,m}f=\sum_{i,j=1}^{d}a_{ij,n}\partial_{v_{j}}\partial_{v_{i}}f+\sum_{i=1}^{d}\sum_{j=1}^{d}(\partial_{j}a_{ij,n}(v)-a_{ij,n}(v)v_{j})\partial_{v_{i}}f+v\cdot\nabla_{x}f-\nabla\Phi_{m}\cdot\nabla_{v}f.$
Then Theorem 4.10 shows that for each $n,m\in\mathbb{N}$, $(L_{n,m},D)$ is
essentially m-dissipative on $H_{m}$, and it holds that $L_{n,m}f=Lf$ for all
$f\in D$ on $B_{m}(0)\times B_{n}(0)$. Note further that
$\|\cdot\|_{H}\leq\|\cdot\|_{H_{m}}$.
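The truncation $\Sigma_{n}$ is also easy to realize computationally. The following sketch shows the construction; the concrete matrix function `sigma` below is an arbitrary placeholder.

```python
import numpy as np

def sigma(v):
    """Placeholder for Sigma = (a_ij); any callable returning a
    (d, d) array could be used here."""
    return np.eye(len(v)) * (1.0 + v @ v) ** 0.25

def sigma_n(v, n):
    """Sigma_n(v) = Sigma(min(n/|v|, 1) * v): Sigma frozen along the
    ray outside the closed ball of radius n."""
    r = np.linalg.norm(v)
    scale = min(n / r, 1.0) if r > 0 else 1.0
    return sigma(scale * v)

v = np.array([3.0, 4.0])                            # |v| = 5
print(np.allclose(sigma_n(v, 5), sigma(v)))         # True: inside the ball
print(np.allclose(sigma_n(v, 2), sigma(0.4 * v)))   # True: radially projected
```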
We need the following estimates:
###### Lemma 4.11.
Let $n,m\in\mathbb{N}$ and $\Sigma_{n}$, $\Phi_{m}$ as defined above. Then
there is a constant $D_{1}<\infty$ independent of $n,m$ such that for each
$1\leq j\leq d$, the following hold for all $f\in D$:
$\displaystyle\|v_{j}f\|_{H_{m}}$ $\displaystyle\leq
D_{1}n^{\frac{1+\beta}{2}}\|(I-L_{n,m})f\|_{H_{m}},$
$\displaystyle\|\partial_{v_{j}}f\|_{H_{m}}$ $\displaystyle\leq
D_{1}n^{\frac{1+\beta}{2}}\|(I-L_{n,m})f\|_{H_{m}}.$
###### Proof:
Recall the unitary transformations
$T_{m}:L^{2}(\mathbb{R}^{2d},\mathrm{d}(x,v))\to H_{m}$ defined by
$T_{m}f=\mathrm{e}^{\frac{v^{2}}{4}+\frac{\Phi_{m}(x)}{2}}f$, as well as the
operator $L_{n,m}^{\prime}=T_{m}^{-1}L_{n,m}T_{m}$, and let $f\in
T_{m}^{-1}D$. Then
$\displaystyle
L_{n,m}^{\prime}f=\sum_{i,j=1}^{d}a_{ij,n}\partial_{v_{j}}\partial_{v_{i}}f$
$\displaystyle-\frac{1}{4}(v,\Sigma_{n}v)f+\frac{1}{2}\operatorname{tr}(\Sigma_{n})f+\sum_{i,j=1}^{d}\partial_{j}a_{ij,n}(\frac{v_{i}}{2}f+\partial_{v_{i}}f)$
$\displaystyle-v\nabla_{x}f+\nabla\Phi_{m}\nabla_{v}f.$
Analogously to the proof of Proposition 4.8 and due to antisymmetry of
$v\nabla_{x}$ and $\nabla\Phi_{m}\nabla_{v}$ on $L^{2}(\mathrm{d}(x,v))$, it
holds that
$\displaystyle\|v_{j}T_{m}f\|_{H_{m}}^{2}$
$\displaystyle=\|v_{j}f\|_{L^{2}(\mathrm{d}(x,v))}^{2}\leq
4c_{\Sigma}^{-1}\int_{\mathbb{R}^{2d}}\frac{1}{4}(v,\Sigma_{n}v)f^{2}\,\mathrm{d}(x,v)$
$\displaystyle\leq
4c_{\Sigma}^{-1}\left((-L_{n,m}^{\prime}f,f)_{L^{2}(\mathrm{d}(x,v))}+\int_{\mathbb{R}^{2d}}\frac{f^{2}}{2}\left(\operatorname{tr}(\Sigma_{n})+\sum_{i,j=1}^{d}\partial_{j}a_{ij,n}v_{i}\right)\mathrm{d}(x,v)\right).$
Since $|\operatorname{tr}(\Sigma_{n})|\leq|\operatorname{tr}(\Sigma)|\leq
d\cdot M_{\Sigma}$ and
$|\partial_{j}a_{ij,n}(v)v_{i}|\leq|\partial_{j}a_{ij}(v)|\cdot|v_{i}|\leq\max\\{B_{\Sigma},M\cdot
n^{\beta+1}\\}\quad\text{ for all }v\in B_{n}(0),$
as well as
$|\partial_{j}a_{ij,n}(v)v_{i}|\leq(1+\sqrt{d})n\frac{|v_{i}|}{|v|}\max_{1\leq
k\leq d}\sup_{y\in B_{n}(0)}|\partial_{k}a_{ij}(y)|\leq
2\sqrt{d}Mn^{\beta+1}\quad\text{ for all }v\notin B_{n}(0),$
and assuming without loss of generality that $B_{\Sigma}\leq M\cdot n^{\beta+1}$, it follows that
$\|v_{j}T_{m}f\|_{H_{m}}^{2}\leq
4c_{\Sigma}^{-1}(-L_{n,m}^{\prime}f,f)_{L^{2}(\mathrm{d}(x,v))}+2c_{\Sigma}^{-1}(dM_{\Sigma}+2d^{5/2}Mn^{\beta+1})\|f\|_{L^{2}(\mathrm{d}(x,v))}^{2}.$
Further, it clearly holds that
$\displaystyle(-L_{n,m}^{\prime}f,f)_{L^{2}(\mathrm{d}(x,v))}$
$\displaystyle\leq\frac{1}{4}\left(\|L_{n,m}^{\prime}f\|_{L^{2}(\mathrm{d}(x,v))}+\|f\|_{L^{2}(\mathrm{d}(x,v))}\right)^{2}\quad\text{
and }$ $\displaystyle\|f\|_{L^{2}(\mathrm{d}(x,v))}^{2}$
$\displaystyle\leq\left(\|L_{n,m}^{\prime}f\|_{L^{2}(\mathrm{d}(x,v))}+\|f\|_{L^{2}(\mathrm{d}(x,v))}\right)^{2}.$
Dissipativity of $(L_{n,m}^{\prime},T_{m}^{-1}D)$ on $L^{2}(\mathrm{d}(x,v))$
implies
$\|L_{n,m}^{\prime}f\|_{L^{2}(\mathrm{d}(x,v))}+\|f\|_{L^{2}(\mathrm{d}(x,v))}\leq\|(I-L_{n,m}^{\prime})f\|_{L^{2}(\mathrm{d}(x,v))}+2\|(I-L_{n,m}^{\prime})f\|_{L^{2}(\mathrm{d}(x,v))}.$
Overall, we get
$\displaystyle\|v_{j}T_{m}f\|_{H_{m}}^{2}$ $\displaystyle\leq
2c_{\Sigma}^{-1}(1+2(dM_{\Sigma}+2d^{5/2}Mn^{\beta+1}))\|(I-L_{n,m}^{\prime})f\|_{L^{2}(\mathrm{d}(x,v))}^{2}$
$\displaystyle\leq
18c_{\Sigma}^{-1}d^{3}n^{\beta+1}\max\\{M_{\Sigma},M\\}\|(I-L_{n,m}^{\prime})f\|_{L^{2}(\mathrm{d}(x,v))}^{2}.$
Since
$\|(I-L_{n,m}^{\prime})f\|_{L^{2}(\mathrm{d}(x,v))}^{2}=\|T_{m}^{-1}(I-L_{n,m})T_{m}f\|_{L^{2}(\mathrm{d}(x,v))}^{2}=\|(I-L_{n,m})T_{m}f\|_{H_{m}}^{2},$
this proves the first statement with
$D_{1}=\sqrt{18c_{\Sigma}^{-1}d^{3}\max\\{M_{\Sigma},M\\}}$.
For the second part, note that
$\partial_{v_{j}}T_{m}f=T_{m}\partial_{v_{j}}f+\frac{v_{j}}{2}T_{m}f$ and that
$\displaystyle\|T_{m}\partial_{v_{j}}f\|_{H_{m}}^{2}$
$\displaystyle=(\partial_{v_{j}}f,\partial_{v_{j}}f)_{L^{2}(\mathrm{d}(x,v))}\leq
c_{\Sigma}^{-1}\int_{\mathbb{R}^{2d}}(\nabla_{v}f,\Sigma_{n}\nabla_{v}f)_{\mathrm{euc}}\,\mathrm{d}(x,v)$
$\displaystyle\leq
c_{\Sigma}^{-1}\left((-L_{n,m}^{\prime}f,f)_{L^{2}}+\int_{\mathbb{R}^{2d}}\frac{1}{2}\operatorname{tr}(\Sigma_{n})f^{2}+\sum_{i,j=1}^{d}\partial_{j}a_{ij,n}\frac{v_{i}}{2}f^{2}\,\mathrm{d}(x,v)\right).$
Repeating all calculations of the first part yields
$\|\partial_{v_{j}}T_{m}f\|_{H_{m}}\leq\left(\frac{D_{1}}{2}+\frac{D_{1}}{2}\right)n^{\frac{1+\beta}{2}}\|(I-L_{n,m})T_{m}f\|_{H_{m}}.$
$\square$
Fix some pure tensor $g\in C_{c}^{\infty}(\mathbb{R}^{d})\otimes
C_{c}^{\infty}(\mathbb{R}^{d})$. We prove that for every $\varepsilon>0$, we
can find some $f\in D$ such that $\|(I-L)f-g\|_{H}<\varepsilon$. This then
extends to arbitrary $g\in C_{c}^{\infty}(\mathbb{R}^{d})\otimes
C_{c}^{\infty}(\mathbb{R}^{d})$ via linear combinations and therefore implies
essential m-dissipativity of $(L,D)$ on $H$, since
$C_{c}^{\infty}(\mathbb{R}^{d})\otimes C_{c}^{\infty}(\mathbb{R}^{d})$ is
dense in $H$. If $\beta\leq-1$, then the proof is easier and follows
analogously to the proof of [15, Theorem 6.3.1]. Therefore we will assume
$\beta>-1$. Recall that in this case, we have $|\nabla\Phi(x)|\leq
N(1+|x|^{\gamma})$ for all $x\in\mathbb{R}^{d}$, where
$\gamma<\frac{2}{1+\beta}$, see the assumptions of Theorem 3.4.
Denote the support of $g$ by $K_{x}\times K_{v}$, where $K_{x}$ and $K_{v}$
are compact sets in $\mathbb{R}^{d}$. By a standard construction, for each
$\delta_{x},\delta_{v}>0$, there are smooth cutoff functions
$\phi_{\delta_{x}},\psi_{\delta_{v}}\in C_{c}^{\infty}(\mathbb{R}^{d})$ taking
values in $[0,1]$ with
$\operatorname{supp}(\phi_{\delta_{x}})\subset B_{\delta_{x}}(K_{x})$,
$\operatorname{supp}(\psi_{\delta_{v}})\subset B_{\delta_{v}}(K_{v})$,
$\phi_{\delta_{x}}=1$ on $K_{x}$, $\psi_{\delta_{v}}=1$ on $K_{v}$. Moreover,
there are constants $C_{\phi},C_{\psi}$ independent of $\delta_{x}$ and
$\delta_{v}$ such that
$\|\partial^{s}\phi_{\delta_{x}}\|_{\infty}\leq
C_{\phi}\delta_{x}^{-|s|}\quad\text{ and
}\quad\|\partial^{s}\psi_{\delta_{v}}\|_{\infty}\leq
C_{\psi}\delta_{v}^{-|s|}$
for all multi-indices $s\in\mathbb{N}^{d}$. Fix $\alpha$ such that
$\frac{1+\beta}{2}<\alpha<\frac{1}{\gamma}$. For any $\delta>0$, we set
$\delta_{x}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\delta^{\alpha}$
and
$\delta_{v}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\delta$,
and then define
$\chi_{\delta}(x,v)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\phi_{\delta_{x}}(x)\psi_{\delta_{v}}(v)=\phi_{\delta^{\alpha}}(x)\psi_{\delta}(v)$.
For $f\in D$, $\delta>0$, consider
$f_{\delta}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\chi_{\delta}f$,
which is an element of $D$, as $\chi_{\delta}\in D$. Without loss of
generality, we consider $\delta$ and hence $\delta^{\alpha}$ sufficiently
large such that $\operatorname{supp}(\phi_{\delta^{\alpha}})\subset
B_{2\delta^{\alpha}}(0)$, $\operatorname{supp}(\psi_{\delta})\subset
B_{2\delta}(0)$ and that there are $n,m\in\mathbb{N}$ that satisfy
$\operatorname{supp}(\phi_{\delta^{\alpha}})\times\operatorname{supp}(\psi_{\delta})\subset
B_{m}(0)\times B_{n}(0)\subset B_{2\delta^{\alpha}}(0)\times B_{2\delta}(0).$
(4.6)
The following then holds:
###### Lemma 4.12.
Let $g\in C_{c}^{\infty}(\mathbb{R}^{d})\otimes
C_{c}^{\infty}(\mathbb{R}^{d})$ and $\phi,\psi$ as above. Then there is a
constant $D_{2}<\infty$ and a function $\rho:\mathbb{R}\to\mathbb{R}$
satisfying $\rho(s)\to 0$ as $s\to\infty$, such that for any $\delta$, $n$ and
$m$ satisfying (4.6),
$\|(I-L)f_{\delta}-g\|_{H}\leq\|(I-L_{n,m})f-g\|_{H_{m}}+D_{2}\cdot\rho(\delta)\|(I-L_{n,m})f\|_{H_{m}}$
holds for all $f\in D$.
###### Proof:
By the product rule,
$\displaystyle\|(I-L)f_{\delta}-g\|_{H}$
$\displaystyle\leq\|\chi_{\delta}((I-L)f-g)\|_{H}+\sum_{i,j=1}^{d}\|a_{ij}\phi_{\delta^{\alpha}}(x)\partial_{j}\partial_{i}\psi_{\delta}(v)f\|_{H}$
$\displaystyle+2\sum_{i,j=1}^{d}\|a_{ij}\phi_{\delta^{\alpha}}(x)\partial_{i}\psi_{\delta}(v)\partial_{v_{j}}f\|_{H}+\sum_{i,j=1}^{d}\|\partial_{j}a_{ij}\phi_{\delta^{\alpha}}(x)\partial_{i}\psi_{\delta}(v)f\|_{H}$
$\displaystyle+\sum_{i,j=1}^{d}\|a_{ij}v_{j}\phi_{\delta^{\alpha}}(x)\partial_{i}\psi_{\delta}(v)f\|_{H}+\sum_{i=1}^{d}\|v_{i}\partial_{i}\phi_{\delta^{\alpha}}(x)\psi_{\delta}(v)f\|_{H}$
$\displaystyle+\sum_{i=1}^{d}\|\partial_{i}\Phi\phi_{\delta^{\alpha}}(x)\partial_{i}\psi_{\delta}(v)f\|_{H}.$
Due to the choice of $n$ and $m$, every $\|\cdot\|_{H}$ on the right hand side
can be replaced with $\|\cdot\|_{H_{m}}$, $a_{ij}$ by $a_{ij,n}$, and $\Phi$
by $\Phi_{m}$, hence $L$ by $L_{n,m}$.
We now give estimates for each summand of the right hand side, in their order
of appearance:
1. (1)
$\|\chi_{\delta}((I-L)f-g)\|_{H}\leq\|(I-L_{n,m})f-g\|_{H_{m}}$,
2. (2)
$\|a_{ij}\phi_{\delta^{\alpha}}(x)\partial_{j}\partial_{i}\psi_{\delta}(v)f\|_{H}\leq
M_{\Sigma}C_{\psi}\delta^{-2}\|f\|_{H_{m}}$,
3. (3)
$\|a_{ij}\phi_{\delta^{\alpha}}(x)\partial_{i}\psi_{\delta}(v)\partial_{v_{j}}f\|_{H}\leq
M_{\Sigma}C_{\psi}\delta^{-1}\|\partial_{v_{j}}f\|_{H_{m}}$,
4. (4)
$\|\partial_{j}a_{ij}\phi_{\delta^{\alpha}}(x)\partial_{i}\psi_{\delta}(v)f\|_{H}\leq\max\\{B_{\Sigma},M\cdot(2\delta)^{\beta\vee
0}\\}C_{\psi}\delta^{-1}\|f\|_{H_{m}}$,
5. (5)
$\|a_{ij}v_{j}\phi_{\delta^{\alpha}}(x)\partial_{i}\psi_{\delta}(v)f\|_{H}\leq
M_{\Sigma}C_{\psi}\delta^{-1}\|v_{j}f\|_{H_{m}}$,
6. (6)
$\|v_{i}\partial_{i}\phi_{\delta^{\alpha}}(x)\psi_{\delta}(v)f\|_{H}\leq
C_{\phi}\delta^{-\alpha}\|v_{i}f\|_{H_{m}}$,
7. (7)
$\|\partial_{i}\Phi\phi_{\delta^{\alpha}}(x)\partial_{i}\psi_{\delta}(v)f\|_{H}\leq
N(1+(2\delta^{\alpha})^{\gamma})C_{\psi}\delta^{-1}\|f\|_{H_{m}}$,
where the last inequality is due to $|\partial_{i}\Phi(x)|\leq
N(1+|x|^{\gamma})$ for all $x\in\mathbb{R}^{d}$ and the support of the cutoff
as in (4.6). Application of Lemma 4.11 shows the existence of $D_{2}$
independent of $n,m$, such that
$\|(I-L)f_{\delta}-g\|_{H}\leq\|(I-L_{n,m})f-g\|_{H_{m}}+D_{2}\cdot\rho(\delta)\|(I-L_{n,m})f\|_{H_{m}}$
where
$\rho(\delta)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\delta^{-2}+2^{\frac{1+\beta}{2}}\delta^{\frac{1+\beta}{2}-1}+2^{\beta\vee
0}\delta^{(\beta\vee
0)-1}+2^{\frac{1+\beta}{2}}\delta^{\frac{1+\beta}{2}-\alpha}+\delta^{-1}+2^{\gamma}\delta^{\alpha\gamma-1}.$
Clearly $\rho(\delta)\to 0$ as $\delta\to\infty$ due to $\beta<1$ and the
definition of $\alpha$. $\square$
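As a quick sanity check, the decay of $\rho(\delta)$ can be observed numerically; the parameter values below are illustrative and chosen to satisfy $\beta<1$, $\gamma<\frac{2}{1+\beta}$ and $\frac{1+\beta}{2}<\alpha<\frac{1}{\gamma}$.

```python
# Illustrative decay check for rho(delta) from Lemma 4.12.
beta, gamma = 0.5, 1.0       # placeholders with gamma < 2 / (1 + beta)
alpha = 0.9                  # (1 + beta)/2 = 0.75 < alpha < 1/gamma = 1.0
e = (1 + beta) / 2

def rho(delta):
    return (delta**-2 + 2**e * delta**(e - 1)
            + 2**max(beta, 0.0) * delta**(max(beta, 0.0) - 1)
            + 2**e * delta**(e - alpha) + delta**-1
            + 2**gamma * delta**(alpha * gamma - 1))

for delta in (1e1, 1e2, 1e3, 1e4):
    print(f"rho({delta:g}) = {rho(delta):.5f}")   # decreasing to zero
```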
Now finally we show that for each $\varepsilon>0$, we can find some
$f_{\delta}\in D$ such that
$\|(I-L)f_{\delta}-g\|_{H}<\varepsilon.$
Choose $\delta>0$ large enough such that
$\rho(\delta)<\frac{\varepsilon}{4D_{2}\|g\|_{H}}$ (where $\rho$ and $D_{2}$
are provided by Lemma 4.12) and that there exist $n,m$ satisfying (4.6).
Then choose $f\in D$ via Theorem 4.10 such that
$\|(I-L_{n,m})f-g\|_{H_{m}}<\min\\{\frac{\varepsilon}{2},\|g\|_{H}\\}$ and
define $f_{\delta}$ as before. Note that due to the choice of the cutoffs, it
holds $\|g\|_{H}=\|g\|_{H_{m}}$, therefore
$\|(I-L)f_{\delta}-g\|_{H}<\frac{\varepsilon}{2}+\frac{\varepsilon}{4\|g\|_{H_{m}}}(\|(I-L_{n,m})f-g\|_{H_{m}}+\|g\|_{H_{m}})<\varepsilon.$
As mentioned earlier, this shows essential m-dissipativity of the operator
$(L,D)$ on $H$ and therefore concludes the proof of Theorem 3.4.
## 5 Applications
### 5.1 The associated Cauchy problem
We consider the abstract Cauchy problem associated with the operator $L$.
Given the initial condition $u_{0}\in H$, $u:[0,\infty)\to H$ should satisfy
$\partial_{t}u(t)=\left(\operatorname{tr}\left(\Sigma
H_{v}\right)+b\cdot\nabla_{v}+v\cdot\nabla_{x}-\nabla\Phi\cdot\nabla_{v}\right)u(t)\quad\text{
and }\quad u(0)=u_{0}.$ (5.1)
If we set
$u(t)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=T_{t}u_{0}$,
where $(T_{t})_{t\geq 0}$ is the semigroup on $H$ generated by the closure
$(L,D(L))$ of $(L,D)$, then the map $t\mapsto u(t)$ is continuous in $H$. For
all $t\geq 0$, it holds that $\int_{0}^{t}u(s)\,\mathrm{d}s\in D(L)$ with
$L\int_{0}^{t}u(s)\,\mathrm{d}s=T_{t}u_{0}-u_{0}=u(t)-u_{0}$, hence $u$ is the
unique mild solution to the abstract Cauchy problem.
If $u_{0}\in D(L)$, then $u(t)\in D(L)$ for all $t\geq 0$, and
$\partial_{t}u(t)=LT_{t}u_{0}=Lu(t)$, so $u$ is even a classical solution to
the abstract Cauchy problem associated to $L$. In particular, this holds for
all $u_{0}\in C_{c}^{2}(\mathbb{R}^{2d})$: $L$ is dissipative on
$C_{c}^{2}(\mathbb{R}^{2d})$ and this domain contains $D$, which implies
$C_{c}^{2}(\mathbb{R}^{2d})\subset D(L)$.
In this context, Theorem 1.1 shows exponential convergence of the unique
solution $u(t)$ to a constant as $t\to\infty$. More precisely, for each
$\theta_{1}>1$ we can calculate $\theta_{2}\in(0,\infty)$ depending on the
choice of $\Sigma$ and $\Phi$ such that for all $t\geq 0$,
$\left\|u(t)-\int_{E}u_{0}\,\mathrm{d}\mu\right\|_{H}\leq\theta_{1}\mathrm{e}^{-\theta_{2}t}\left\|u_{0}-\int_{E}u_{0}\,\mathrm{d}\mu\right\|_{H}.$
### 5.2 Connection to Langevin dynamics with multiplicative noise
So far, our considerations have been purely analytical, giving results about
the core property of $D$ for $L$ and rate of convergence for the generated
semigroup $(T_{t})_{t\geq 0}$ in $H$. However, this approach is still quite
natural in the context of the Langevin SDE (1.1), as the semigroup has a
meaningful stochastic representation. The connection is achieved via the
powerful theory of generalized Dirichlet forms as developed by Stannat in
[16], which gives the following:
Assume the context of Theorem 3.4. There exists a Hunt process
$\mathbf{M}=\left(\Omega,\mathcal{F},(\mathcal{F}_{t})_{t\geq
0},(X_{t},V_{t}),(P_{(x,v)})_{(x,v)\in\mathbb{R}^{d}\times\mathbb{R}^{d}}\right)$
with state space $E=\mathbb{R}^{d}\times\mathbb{R}^{d}$, infinite lifetime and
continuous sample paths ($P_{(x,v)}$-a.s. for all $(x,v)\in E$), which is
properly associated in the resolvent sense with $(T_{t})_{t\geq 0}$. In
particular (see [15, Lemma 2.2.8]), this means that for each bounded
measurable $f$ which is also square-integrable with respect to the invariant
measure $\mu$ and all $t>0$, $T_{t}f$ is a $\mu$-version of $p_{t}f$, where
$(p_{t})_{t\geq 0}$ is the transition semigroup of $\mathbf{M}$ with
$p_{t}f:\mathbb{R}^{d}\times\mathbb{R}^{d}\to\mathbb{R},\qquad(x,v)\mapsto\mathbb{E}_{(x,v)}\left[f(X_{t},V_{t})\right].$
This representation can be further extended to all $f\in H$, see for example
[17, Exercise IV.2.9]. Moreover, if $\mu$-versions of $\Sigma$ and $\Phi$ are
fixed, then $P_{(x,v)}$ solves the martingale problem for $L$ on
$C_{c}^{2}(E)$ for $L$-quasi all $(x,v)\in E$, i.e. for each $f\in
C_{c}^{2}(E)$, the stochastic process $(M_{t}^{[f]})_{t\geq 0}$ defined by
$M_{t}^{[f]}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=f(X_{t},V_{t})-f(X_{0},V_{0})-\int_{0}^{t}Lf(X_{s},V_{s})\,\mathrm{d}s,$
is a martingale with respect to $P_{(x,v)}$. If $h\in L^{2}(\mu)$ is a
probability density with respect to $\mu$, then the law
$P_{h}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\int_{E}P_{(x,v)}h(x,v)\,\mathrm{d}\mu$
solves the martingale problem for $(L,D(L))$, without the need to fix specific
versions of $\Sigma$ and $\Phi$. In particular, this holds for $h=1$. As in
[15, Lemma 2.1.8], for $f\in D(L)$ with $f^{2}\in D(L)$ and $Lf\in
L^{4}(\mu)$, a martingale is also defined via
$N_{t}^{[f]}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=(M_{t}^{[f]})^{2}-\int_{0}^{t}L(f^{2})(X_{s},V_{s})-(2fLf)(X_{s},V_{s})\,\mathrm{d}s,\qquad
t\geq 0,$
which may serve as a way to verify that $\mathbf{M}$ is already a weak
solution of (1.1), as it allows a representation of the quadratic variation
process. Indeed, if we set
$f_{n}^{i}(x,v)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\varphi_{n}(x_{i})x_{i}$
for a suitable sequence $(\varphi_{n})_{n\in\mathbb{N}}$ of cutoff functions
as in Definition 3.6, evaluation of $N_{t}^{[f_{n}^{i}]}$ shows that the
quadratic variation $[M^{[f_{n}^{i}]}]_{t}$ of $M_{t}^{[f_{n}^{i}]}$ is
constantly zero, which implies the same for $M_{t}^{[f_{n}^{i}]}$. Hence, by
introducing appropriate stopping times, it follows that
$X_{t}^{i}-X_{0}^{i}=\int_{0}^{t}V_{s}^{i}\,\mathrm{d}s$, so the first line of
the SDE (1.1) is satisfied.
In an analogous procedure, using
$g_{n}^{i}(x,v)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\varphi_{n}(v_{i})v_{i}$,
we can see that the quadratic covariation $[V^{i},V^{j}]_{t}$ is given by
$2\int_{0}^{t}a_{ij}(V_{s})\,\mathrm{d}s$. Since $\Sigma$ is strictly
elliptic, the diffusion matrix $\sigma$ is invertible and by Lévy’s
characterization, the process
$B_{t}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\int_{0}^{t}\frac{1}{\sqrt{2}}\sigma^{-1}(V_{s})\,\mathrm{d}M_{s}$
is a standard $d$-dimensional Brownian motion, where
$M_{t}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=(M_{t}^{[v_{1}]},\dots,M_{t}^{[v_{d}]})$,
which is a local martingale. Moreover, it holds that
$\mathrm{d}V_{t}=\mathrm{d}M_{t}+\left(b(V_{t})-\nabla\Phi(X_{t})\right)\mathrm{d}t=\sqrt{2}\sigma(V_{t})\,\mathrm{d}B_{t}+\left(b(V_{t})-\nabla\Phi(X_{t})\right)\mathrm{d}t,$
so $(X_{t},V_{t})$ is a weak solution to the SDE (1.1) with initial
distribution $h\mu$ under $P_{h}$.
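The stochastic representation invites a direct numerical illustration. The following Euler-Maruyama sketch simulates $\mathrm{d}X_{t}=V_{t}\,\mathrm{d}t$, $\mathrm{d}V_{t}=\sqrt{2}\sigma(V_{t})\,\mathrm{d}B_{t}+(b(V_{t})-\nabla\Phi(X_{t}))\,\mathrm{d}t$ for placeholder choices of $\Sigma$ and $\Phi$; here $\Sigma(v)=s(v)I$ with a scalar $s$, so that $\sigma=\sqrt{s}\,I$ and $b_{i}=\partial_{v_{i}}s-sv_{i}$.

```python
import numpy as np

rng = np.random.default_rng(1)
d, dt, n_steps = 2, 1e-3, 50_000

def grad_Phi(x):                     # placeholder potential Phi(x) = |x|^2 / 2
    return x

def s(v):                            # scalar diffusion: Sigma(v) = s(v) * I
    return 1.0 + np.exp(-v @ v)

def b(v):                            # b_i = d_{v_i} s - s v_i for Sigma = s I
    return -2.0 * v * np.exp(-v @ v) - s(v) * v

x, v = np.zeros(d), np.ones(d)
for _ in range(n_steps):             # Euler-Maruyama step for the SDE (1.1)
    dB = np.sqrt(dt) * rng.standard_normal(d)
    x, v = x + v * dt, v + (b(v) - grad_Phi(x)) * dt + np.sqrt(2.0 * s(v)) * dB
print("final (x, v):", x, v)
```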
Finally, in this context, the statement on hypocoercivity (Theorem 1.1) shows
that for every $\theta_{1}>1$, there is an explicitly computable
$\theta_{2}\in(0,\infty)$ depending on the choice of $\Sigma$ and $\Phi$, such
that the transition semigroup $(p_{t})_{t\geq 0}$ satisfies
$\|p_{t}g-\int_{E}g\,\mathrm{d}\mu\|_{L^{2}(\mu)}\leq\theta_{1}\mathrm{e}^{-\theta_{2}t}\|g-\int_{E}g\,\mathrm{d}\mu\|_{L^{2}(\mu)}$
(5.2)
for all $g\in L^{2}(\mu)$ and $t\geq 0$. In particular, this implies that the
probability law $P_{\mu}$ on the space of continuous paths on $E$ with initial
distribution (and invariant measure) $\mu$ has the strong mixing property,
i.e. for any Borel sets $A_{1},A_{2}$ on the path space, it holds that
$P_{\mu}(\varphi_{t}A_{1}\cap A_{2})\to
P_{\mu}(A_{1})P_{\mu}(A_{2})\quad\text{ as }t\to\infty,$
where
$\varphi_{t}A_{1}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\\{(Z_{s})_{s\geq
0}\in C([0,\infty),E)\mid(Z_{s+t})_{s\geq 0}\in A_{1}\\}$. This follows from
(5.2) and associatedness of the semigroups to the probability law $P_{\mu}$,
see for example [15, Remark 2.1.13].
### 5.3 Corresponding Fokker-Planck equation
In this part we give a reformulation of the convergence rate result detailed
in Section 5.1 for readers who are more familiar with the classical Fokker-
Planck formulation for probability densities. In the current literature,
Fokker-Planck equations are more often expressed as equations on measures,
rather than functions. For example, in the non-degenerate case, exponential
convergence in total variation to a stationary solution is studied in [18],
which includes further references to related works. Our goal here, however, is
simply to make the convergence result immediately applicable to less
specialized readers in the form of the estimate (5.4) for solutions to the
Cauchy problem associated with the operator defined in (5.3), hence we stick
to the expression via probability densities.
Given a Kolmogorov backwards equation of the form
$-\partial_{t}u(x,t)=L^{\mathrm{K}}u(x,t)$, the corresponding Fokker-Planck
equation is given by $\partial_{t}f(x,t)=L^{\mathrm{FP}}f(x,t)$, where
$L^{\mathrm{FP}}=(L^{\mathrm{K}})^{\prime}$ is the adjoint operator of
$L^{\mathrm{K}}$ in $L^{2}(\mathbb{R}^{d},\mathrm{d}x)$, restricted to smooth
functions. In our setting, $L^{\mathrm{K}}=L$ produces via integration by
parts for $f\in D$:
$L^{\mathrm{FP}}f=\sum_{i,j=1}^{d}\partial_{v_{i}}(a_{ij}\partial_{v_{j}}f+v_{j}a_{ij}f)-v\cdot\nabla_{x}f+\nabla\Phi\nabla_{v}f.$
(5.3)
Consider the Fokker-Planck Hilbert space
$\widetilde{H}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=L^{2}(E,\widetilde{\mu})$,
where
$\widetilde{\mu}\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=(2\pi)^{-\frac{d}{2}}\mathrm{e}^{\Phi(x)+\frac{v^{2}}{2}}\,\mathrm{d}x\otimes\mathrm{d}v.$
Then a unitary Hilbert space transformation between $H$ and $\widetilde{H}$ is
given by
$T:H\to\widetilde{H},\quad Tg=\rho g\quad\text{ with
}\quad\rho(x,v)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\mathrm{e}^{-\Phi(x)-\frac{v^{2}}{2}}.$
Let $(T_{t})_{t\geq 0}$ be the semigroup on $H$ generated by $(L,D(L))$ and
denote by $(T_{t}^{*})_{t\geq 0}$ and $L^{*}$ the adjoint semigroup on $H$ and
its generator, respectively. It is evident that for $f\in D$, $L^{*}$ is given
as $L^{*}f=(S+A)f$, where $S$ and $A$ refer to the symmetric and antisymmetric
components of $L$, respectively, as defined in Definition 3.2. As mentioned in
Remark 3.11, we achieve the exact same results for the equation corresponding to
$L^{*}$ as for the one corresponding to $L$, which we considered in Section 3.
In particular, $(L^{*},D)$ is essentially m-dissipative and its closure
$(L^{*},D(L^{*}))$ generates $(T_{t}^{*})_{t\geq 0}$, which converges
exponentially to equilibrium with the same rate as $(T_{t})_{t\geq 0}$.
Let
$\widetilde{T}_{t}g\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=T(T_{t}^{*})T^{-1}g$
for $t\geq 0$, $g\in\widetilde{H}$. Then $(\widetilde{T}_{t})_{t\geq 0}$ is a
strongly continuous contraction semigroup on $\widetilde{H}$ with the
generator $(TL^{*}T^{-1},T(D(L^{*})))$. It is easy to see that
$L^{\mathrm{FP}}=TL^{*}T^{-1}$, so for each initial condition
$u_{0}\in\widetilde{H}$,
$u(t)\mathrel{\vbox{\hbox{\scriptsize.}\hbox{\scriptsize.}}}=\widetilde{T}_{t}u_{0}$
is a mild solution to the Fokker-Planck Cauchy problem. Note that for $\Phi\in
C^{\infty}(\mathbb{R}^{d})$, the transformation $T$ leaves $D$ invariant,
which implies $D\subset T(D(L^{*}))$ and essential m-dissipativity of
$(L^{\mathrm{FP}},D)$ on $\widetilde{H}$.
If $u_{0}\in T(D(L^{*}))$, then
$\partial_{t}\widetilde{T}_{t}u_{0}=T(L^{*}T_{t}^{*})T^{-1}u_{0},$
and therefore
$\displaystyle\int_{E}\partial_{t}u(t)f\,\mathrm{d}(x,v)$
$\displaystyle=\int_{E}L^{*}T_{t}^{*}T^{-1}u_{0}f\,\mathrm{d}\mu=\int_{E}T_{t}^{*}T^{-1}u_{0}Lf\,\mathrm{d}\mu$
$\displaystyle=\int_{E}TT_{t}^{*}T^{-1}u_{0}Lf\,\mathrm{d}(x,v)=\int_{E}L^{\mathrm{FP}}u(t)f\,\mathrm{d}(x,v),$
so $u(t)$ is also a classical solution. Due to the invariance of $\mu$ for
$L$, a stationary solution is given by $\rho$ and by Theorem 1.1, for every
$\theta_{1}>1$ and the appropriate $\theta_{2}$ it holds that
$\displaystyle\left\|u(t)-\rho(u_{0},\rho)_{\widetilde{H}}\right\|_{\widetilde{H}}$
$\displaystyle=\left\|T_{t}^{*}T^{-1}u_{0}-(T^{-1}u_{0},1)_{H}\right\|_{H}$
(5.4)
$\displaystyle\leq\theta_{1}\mathrm{e}^{-\theta_{2}t}\left\|T^{-1}u_{0}-(T^{-1}u_{0},1)_{H}\right\|_{H}$
$\displaystyle=\theta_{1}\mathrm{e}^{-\theta_{2}t}\left\|u_{0}-\rho(u_{0},\rho)_{\widetilde{H}}\right\|_{\widetilde{H}}.$
This shows exponential convergence to a stationary state for solutions to the
Fokker-Planck equation.
## References
* [1] J. Dolbeault, C. Mouhot, C. Schmeiser, Transactions of the American Mathematical Society pp. 3807–3828 (2015). URL https://hal.archives-ouvertes.fr/hal-00482286
* [2] C. Villani, Mem. Amer. Math. Soc. 202 (2006). 10.1090/S0065-9266-09-00567-5
* [3] J. Dolbeault, C. Mouhot, C. Schmeiser, Comptes Rendus Mathematique 347(9), 511 (2009). https://doi.org/10.1016/j.crma.2009.02.025. URL http://www.sciencedirect.com/science/article/pii/S1631073X09000880
* [4] M. Grothaus, P. Stilgenbauer, Journal of Functional Analysis 267(10), 3515 (2014). https://doi.org/10.1016/j.jfa.2014.08.019. URL http://www.sciencedirect.com/science/article/pii/S0022123614003462
* [5] M. Grothaus, F.Y. Wang, The Annals of Probability 47 (2017). 10.1214/18-AOP1328
* [6] F.Y. Wang, Journal of Functional Analysis 272(12), 5360 (2017). https://doi.org/10.1016/j.jfa.2017.03.015. URL https://www.sciencedirect.com/science/article/pii/S0022123617301404
* [7] M. Grothaus, P. Stilgenbauer, Methods Funct. Anal. Topology 22(2), 152 (2016). URL http://mfat.imath.kiev.ua/article/?id=847
* [8] B. Helffer, F. Nier, _Hypoelliptic Estimates and Spectral Theory for Fokker-Planck Operators and Witten Laplacians_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 2005). 10.1007/b104762
* [9] F. Conrad, M. Grothaus, Journal of Evolution Equations 10, 623 (2010). 10.1007/s00028-010-0064-0
* [10] V. Bogachev, N. Krylov, M. Röckner, Communications in Partial Differential Equations 26(11-12), 2037 (2001). 10.1081/PDE-100107815
* [11] B. Baur, M. Grothaus, P. Stilgenbauer, Potential Analysis 38, 1233 (2013)
* [12] V.I. Bogachev, N.V. Krylov, M. Röckner, Annali della Scuola Normale Superiore di Pisa - Classe di Scienze Ser. 4, 24(3), 451 (1997). URL http://www.numdam.org/item/ASNSP_1997_4_24_3_451_0
* [13] W. Beckner, Proceedings of the American Mathematical Society 105(2), 397 (1989). URL http://www.jstor.org/stable/2046956
* [14] J. Dolbeault, A. Klar, C. Mouhot, C. Schmeiser, Applied Mathematics Research eXpress 2013, 165 (2013). 10.1093/amrx/abs015. URL https://hal.archives-ouvertes.fr/hal-00658343
* [15] F. Conrad, Construction and analysis of langevin dynamics in continuous particle systems. Ph.D. thesis, TU Kaiserslautern (2011)
* [16] W. Stannat, Mem. Amer. Math. Soc. 142(678) (1999)
* [17] Z.M. Ma, M. Röckner, _Introduction to the Theory of (Non-Symmetric) Dirichlet Forms_ (Springer Berlin Heidelberg, Berlin, Heidelberg, 1992). 10.1007/978-3-642-77739-4
* [18] V.I. Bogachev, M. Röckner, S.V. Shaposhnikov, Journal of Mathematical Sciences 242(1), 69 (2019). 10.1007/s10958-019-04467-8
# Text in the Dark: Extremely Low-Light Text Image Enhancement
Che-Tsung Lin†, Chun Chet Ng†, Zhi Qin Tan, Wan Jun Nah, Xinyu Wang, Jie Long
Kew, Pohao Hsu, Shang Hong Lai, Chee Seng Chan, Christopher Zach
†These authors contributed equally to this work.
###### Abstract
Text extraction in extremely low-light images is challenging. Although
existing low-light image enhancement methods can enhance images as pre-
processing before text extraction, they do not focus on scene text. Further
research is also hindered by the lack of extremely low-light text datasets.
Thus, we propose a novel extremely low-light image enhancement framework with
an edge-aware attention module to focus on scene text regions. Our method is
trained with text detection and edge reconstruction losses to emphasize low-
level scene text features. Additionally, we present a Supervised Deep Curve
Estimation model to synthesize extremely low-light images based on the public
ICDAR15 (IC15) dataset. We also labeled texts in the extremely low-light See
In the Dark (SID) and ordinary LOw-Light (LOL) datasets to benchmark extremely
low-light scene text tasks. Extensive experiments prove our model outperforms
state-of-the-art methods on all datasets. Code and dataset will be released
publicly at https://github.com/chunchet-ng/Text-in-the-Dark.
###### keywords:
Extremely Low-Light Image Enhancement, Edge Attention, Text Aware
Augmentation, Scene Text Detection, Scene Text Recognition
Journal: Signal Processing: Image Communication
Affiliations: Chalmers University of Technology, Gothenburg, Sweden; Universiti Malaya, Kuala Lumpur, Malaysia; National Tsing Hua University, Hsinchu, Taiwan; The University of Adelaide, Adelaide, Australia
* We present a new method to enhance low-light images, especially scene text regions.
* We developed a novel Supervised-DCE model to synthesize extremely low-light images.
* We create 3 new low-light text datasets: SID-Sony-Text, SID-Fuji-Text, and LOL-Text.
* Our new datasets assess enhanced low-light images with scene text extraction tasks.
* Our method achieves the best results on all datasets quantitatively and qualitatively.
## 1 Introduction
Figure 1: From left to right: (a) Original images; (b) Enhanced results with
our proposed method; (c-d) Zoomed-in (2x) regions of the blue and green
bounding boxes. Top row: SID-Sony-Text; Middle row: SID-Fuji-Text; Bottom row:
LOL-Text. Extremely low-light images in the SID dataset are significantly
darker than those in the LOL dataset, and our model enhances the images to the
extent that texts are clearly visible with sharp edges.
Scene text understanding involves extracting text information from images
through text detection and recognition, which is a fundamental task in
computer vision. However, performance drops sharply when images are captured
under low-light conditions. The main difficulty in detecting text in low-light
images is that low-level features, such as edges and character strokes, are no
longer prominent or hardly visible. On the other hand, enhancing images
captured in extremely low-light conditions poses a greater challenge than
enhancing ordinary low-light images due to the higher noise levels and greater
information loss. For instance, we show the difference in darkness level in
Figure 1, where it is evident that the See In the Dark (SID) datasets [1] are
darker and, in theory, more difficult to enhance than the LOw-Light (LOL)
dataset [2]. Quantitatively, Table 1 reports the PSNR and SSIM values for the
two subsets of SID (SID-Sony and SID-Fuji) and for LOL, computed by comparing
each image against a pure black image. Based on each dataset’s average perceptual
lightness (L* in the CIELAB color space), images in SID are at least 15 times
darker than those in LOL. Hence, low-light image enhancement is a necessary
pre-processing step for scene text extraction under such conditions.
Over the years, many general or low-light image enhancement models have been
proposed to improve the interpretability and extraction of information in
images by providing better input for subsequent image content analysis. Early
methods [3, 18, 19] typically attempted to restore the statistical properties
of low-light images to those of long-exposure images from a mathematical
perspective. On the other hand, deep learning-based methods [2, 1, 23, 25] aim
to learn the mapping between low-light images and their corresponding long-
exposure versions via regression. To the best of our knowledge, most existing
low-light image enhancement works have not explicitly addressed the restored
image quality in terms of downstream scene text tasks.
| Dataset | PSNR $\uparrow$ | SSIM $\uparrow$ | Avg. L* $\downarrow$ |
| --- | --- | --- | --- |
| SID-Sony [1] | 44.350 | 0.907 | 0.009 |
| SID-Fuji [1] | 41.987 | 0.820 | 0.004 |
| LOL [2] | 23.892 | 0.195 | 0.142 |
| Pure Black | $\infty$ | 1.000 | 0.000 |
Table 1: The difference between the extremely low-light dataset, SID, and the
ordinary low-light dataset, LOL, is shown in terms of PSNR and SSIM values,
computed by comparing short-exposure images against pure black images. Avg. L*
is the average perceptual lightness in the CIELAB color space, calculated
based on short-exposure images. Scores are averaged across training and test
sets. Higher PSNR and SSIM values, along with lower Avg. L*, indicate darker
images that are more challenging for image enhancement and scene text
extraction.
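To make the darkness metric concrete, the following is a minimal sketch of how Avg. L* can be computed with scikit-image; normalizing L* from [0, 100] down to [0, 1] is our assumption to match the scale of the values reported above.

```python
import numpy as np
from skimage import color, io

def average_lightness(image_path: str) -> float:
    """Average perceptual lightness (L* in CIELAB), scaled to [0, 1]."""
    rgb = io.imread(image_path).astype(np.float64) / 255.0  # HxWx3 in [0, 1]
    lab = color.rgb2lab(rgb)                                # L* channel in [0, 100]
    return float(lab[..., 0].mean() / 100.0)
```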
Recent advancements in visual attention mechanisms have demonstrated their
effectiveness in identifying and boosting salient features in images. Channel-
only attention [11, 12, 13], spatial attention [14, 15] or the subsequent
channel-spatial attention [16, 17] modules were proposed to emphasize the most
informative areas. However, these methods cannot preserve texture details,
especially fine-grained edge information that is intuitively needed to enhance
extremely low-light images with complex textures. To overcome this limitation,
we introduce Edge-Aware Attention (Edge-Att). This novel attention module
simultaneously performs channel and spatial attention-based feature learning
on high-level image and edge features. Our model also considers text
information in the image through a text-aware loss function. This way, our
model can effectively enhance low-light images while preserving fine-grained
edge information, texture details, and legibility of text.
The scarcity of extremely low-light text datasets presents a hurdle for
further research. To address this, we annotated all text instances in both the
training and testing sets of the SID and LOL datasets, creating three new low-
light text datasets: SID-Sony-Text, SID-Fuji-Text, and LOL-Text. We then
proposed a novel Supervised Deep Curve Estimation (Supervised-DCE) model to
synthesize extremely low-light scene text images based on the commonly used
ICDAR15 (IC15) scene text dataset. It allows researchers to easily translate
naive scene text datasets into extremely low-light text datasets. In addition
to the previously published conference version of this work [45], we have made
four significant extensions. Firstly, we propose a novel dual encoder-decoder
framework that can achieve superior performance on low-light scene text tasks
(Section 3.1). Secondly, we introduce a new image synthesis method capable of
generating more realistic extremely low-light text images (Section 4.1).
Thirdly, we have further annotated texts in the Fuji and LOL datasets, thereby
forming the largest low-light scene text datasets to date (Section 5).
Fourthly, comprehensive experiments and analyses are carried out to study the
latest methods along with our proposed methods on all synthetic and real low-
light text datasets. The main contributions of our work are as follows:
* 1.
We present a novel scene text-aware extremely low-light image enhancement
framework with dual encoders and decoders to enhance extremely low-light
images, especially scene text regions within them. Our proposed method is
equipped with Edge-Aware Attention modules and trained with new Text-Aware
Copy-Paste (Text-CP) augmentation. Our model can restore images in challenging
lighting conditions without losing low-level features.
* 2.
We developed a Supervised-DCE model to synthesize extremely low-light images.
This allows us to use existing publicly available scene text datasets such as
IC15 to train our model alongside genuine ones for scene text research under
such extreme lighting conditions.
* 3.
We labeled the texts in the SID-Sony, SID-Fuji, and LOL datasets and named
them SID-Sony-Text, SID-Fuji-Text, and LOL-Text, respectively. This provides a
new perspective for objectively assessing enhanced extremely low-light images
through scene text tasks.
## 2 Related Works
Low-light Image Enhancement. Retinex theory assumes that an image can be
decomposed into illumination and reflectance. Most Retinex-based methods
enhance results by removing the illumination part [3], while others such as
LIME [18] keep a portion of the illumination to preserve naturalness. BIMEF
[19] further designs a dual-exposure fusion framework for accurate contrast
and lightness enhancement. RetinexNet [2] combines deep learning and Retinex
theory, adjusting illumination for enhancement after image decomposition. The
recent successes of generative adversarial networks (GANs) [20] have attracted
attention from low-light image enhancement because GANs have proven successful
in image translation. Pix2pix [21] and CycleGAN [22] have shown good image-
translation results in paired and unpaired image settings, respectively. To
overcome the complexity of CycleGAN, EnlightenGAN [23] proposed an
unsupervised one-path GAN structure. Besides general image translation, [1]
proposed learning-based low-light image enhancement on raw sensor data to
replace much of the traditional image processing pipeline, which tends to
perform poorly on such data. EEMEFN [24] also attempted to enhance images
using multi-exposure raw data that is not always available.
Zero-Reference Deep Curve Estimation (Zero-DCE) [25] designed a light-weight
CNN to estimate pixel-wise high-order curves for dynamic range adjustment of a
given image without needing paired images. [26] designed a novel Self-
Calibrated Illumination (SCI) learning with an unsupervised training loss to
constrain the output at each stage under the effects of a self-calibrated
module. ChebyLighter [27] learns to estimate an optimal pixel-wise adjustment
curve under the paired setting. Recently, the Transformer [28] architecture
has become the de-facto standard for Natural Language Processing (NLP) tasks.
ViT [29] applied the attention mechanism in the vision task by splitting the
image into tokens before sending it into Transformer. Illumination Adaptive
Transformer (IAT) [30] uses attention queries to represent and adjust ISP-
related parameters. Most existing models enhance images in the spatial domain.
Fourier-based Exposure Correction Network (FECNet) [31] presents a new
perspective for exposure correction with spatial-frequency interaction and has
shown that their model can be extended to low-light image enhancement.
Scene Text Extraction. Deep neural networks have been widely used for scene
text detection. CRAFT [32] predicts two heatmaps: the character region score
map and the affinity score map. The region score map localizes individual
characters in the image, while the affinity score map groups each character
into a single word instance. Another notable scene text detection method is
Pixel Aggregation Network (PAN) [33] which is trained to predict text regions,
kernels, and similarity vectors. Both text segmentation models have proven to
work well on commonly used scene text datasets such as IC15 [34] and TotalText
[35]. Inspired by them, we introduced a text detection loss in our proposed
model to focus on scene text regions during extremely low-light image
enhancement. Furthermore, state-of-the-art text recognition methods such as
ASTER [36] and TRBA [37] are known to perform well on images captured in
complex scenarios. ASTER [36] employs a flexible rectification module to
straighten the word images before passing them to a sequence-to-sequence model
with the bi-directional decoder. The experimental results of ASTER showed that
the rectification module could achieve superior performance on multiple scene
text recognition datasets, including the likes of IC15 and many more. Besides,
TRBA [37] provided interesting insights by breaking down the scene text
recognition framework into four main stages: spatial transformation, character
feature extraction, followed by sequence modeling, and the prediction of
character sequences. Given these methods’ robustness on difficult texts, they
are well-suited to recognize texts from enhanced low-light images.
## 3 Extremely Low-Light Text Image Enhancement
### 3.1 Problem Formulations
Let $x\in\mathbb{R}^{W\times H\times 3}$ be a short-exposure image of width $W$ and
height $H$. An ideal image enhancement expects that a neural network
$LE(x;\theta)$ parameterized by $\theta$ can restore this image to its
corresponding long-exposure image, $y\in\mathbb{R}^{W\times H\times 3}$, i.e.,
$LE(x;\theta)\simeq y$. However, previous works normally pursued the lowest
per-pixel intensity difference, which should not be the goal for image
enhancement because we usually expect that some high-level computer vision
tasks can work reasonably well on those enhanced images. For example, in terms
of text detection, the goal of the neural network can be the lowest detection
bounding boxes discrepancy, i.e., $B(LE(x;\theta))\simeq B(y)$.
Our novel image enhancement model consists of a U-Net accommodating extremely
low-light images and edge maps using two independent encoders. During model
training, instead of channel attention, the encoded edges guide the spatial
attention sub-module in the proposed Edge-Att to attend to edge pixels related
to text representations. Besides the image enhancement losses, our model
incorporates text detection and edge reconstruction losses into the training
process. This integration effectively guides the model’s attention towards
text-related features and regions, facilitating improved image textual content
analysis. As a pre-processing step, we introduced a novel augmentation
technique called Text-CP to increase the presence of non-overlapping and
unique text instances in training images, thereby promoting comprehensive
learning of text representations.
### 3.2 Network Design
Figure 2: Illustration of the architecture of our proposed framework, designed
to enhance extremely low-light images while incorporating scene text
awareness. Figure 3: (a) Visual representation of our edge decoder, wherein A
and B represent the output from the corresponding convolution blocks in Figure
2 and S denotes the scaling of the image. (b) Illustration of the proposed
Edge-Aware Attention module.
Our model was inspired by U-Net[1] with some refinements. Firstly, the network
expects heterogeneous inputs, i.e., extremely low-light images, $x$, and the
corresponding RCF [38] edge maps, $e$. Secondly, input-edge pairs are handled
by two separate encoders with edge-aware attention modules between them. The
attended features are then bridged with the decoder through skip connections.
Finally, our multi-tasking network predicts the enhanced image, $x^{\prime}$,
and the corresponding reconstructed edge, $e^{\prime}$. The overall
architecture of our network can be seen in Figure 2 and modeled as:
$x^{\prime},e^{\prime}=LE(x,e;\theta).$ (1)
### 3.3 Objectives
Our proposed model is trained to optimize four loss functions. The first two,
Smooth L1 loss and multi-scale SSIM loss focus on enhancing the overall image
quality. The third, text detection loss, targets the enhancement of scene text
regions specifically. The fourth, edge reconstruction loss, focuses on crucial
low-level edge features.
Firstly, we employ smooth L1 loss as the reconstruction loss to better enforce
low-frequency correctness [21] between $x^{\prime}$ and $y$ as:
$\displaystyle\mathcal{L}_{recons}=\begin{cases}0.5\cdot(x^{\prime}-y)^{2}/\delta,&\text{if }|x^{\prime}-y|<\delta\\ |x^{\prime}-y|-0.5\cdot\delta,&\text{otherwise}\end{cases}$ (2)
where we empirically found that $\delta=1$ achieves good results. The authors
of Pix2Pix [21] showed that by utilizing L1 loss, the model can achieve better
results as the generated images are less blurry and proved that L1 loss can
better enforce the learning of low-frequency details, which is also essential
for OCR tasks. On the other hand, the L1 norm is less sensitive to outliers
than the L2 norm, thus resulting in a more robust model towards extreme pixel
intensities.
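As a concrete illustration, the reconstruction loss of Eq. (2) with $\delta=1$ coincides with PyTorch's built-in smooth L1 loss; the sketch below assumes batched image tensors in $[0,1]$.

```python
import torch
import torch.nn.functional as F

def reconstruction_loss(x_hat: torch.Tensor, y: torch.Tensor,
                        delta: float = 1.0) -> torch.Tensor:
    # Eq. (2): quadratic below the threshold delta, linear above it.
    return F.smooth_l1_loss(x_hat, y, beta=delta)
```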
Secondly, the multi-scale SSIM metric was proposed in [39] for reference-based
image quality assessment, focusing on image structure consistency. An
$M$-scale SSIM between the enhanced image $x^{\prime}$ and ground truth image
$y$ is:
$SSIM_{MS}(x^{\prime},y)=[l_{M}(x^{\prime},y)]^{\tau}\cdot\prod\nolimits^{M}_{j=1}[c_{j}(x^{\prime},y)]^{\phi}[s_{j}(x^{\prime},y)]^{\psi},$
(3)
where $l_{M}$ is the luminance at M-scale; $c_{j}$ and $s_{j}$ represent the
contrast and the structure similarity measures at the $j$-th scale; $\tau$,
$\phi$, and $\psi$ are parameters to adjust the importance of the three
components. Inspired by [39], we adopted the $M$-scale SSIM loss function in
our work to enforce the image structure of $x^{\prime}$ to be close to that of
$y$:
$\mathcal{L}_{SSIM_{MS}}=1-{SSIM_{MS}}(x^{\prime},y).$ (4)
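A minimal sketch of Eq. (4); we assume the third-party pytorch-msssim package here, whose ms_ssim helper implements the multi-scale measure of [39].

```python
import torch
from pytorch_msssim import ms_ssim  # third-party package (assumption)

def msssim_loss(x_hat: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Eq. (4): one minus the multi-scale SSIM between prediction and target.
    return 1.0 - ms_ssim(x_hat, y, data_range=1.0)
```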
Thirdly, a well-enhanced extremely low-light image implies that we could
obtain similar text detection results on both the enhanced and ground truth
images. As such, we propose to employ CRAFT [32] to localize texts in images
through its region score heatmap. To implicitly enforce our model to focus on
scene text regions, we define the text detection loss, $\mathcal{L}_{text}$
as:
$\mathcal{L}_{text}=\|R(x^{\prime})-R(y)\|_{1},$ (5)
where $R(x^{\prime})$ and $R(y)$ denote the region score heatmaps of the
enhanced and ground truth images, respectively.
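The following is a sketch of Eq. (5) under the assumption that `craft` is a frozen, pre-trained CRAFT network whose output exposes the region score heatmap as its first channel; the exact output layout varies across implementations, and the mean reduction of the L1 distance is also an assumption.

```python
import torch

def text_detection_loss(craft, x_hat: torch.Tensor,
                        y: torch.Tensor) -> torch.Tensor:
    with torch.no_grad():
        r_y = craft(y)[..., 0]     # region score of the ground truth image
    r_x = craft(x_hat)[..., 0]     # region score of the enhanced image
    # Eq. (5): L1 distance between the two region score heatmaps.
    return torch.mean(torch.abs(r_x - r_y))
```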
Fourthly, the edge reconstruction decoder in our model is designed to extract
edges better, which are essential for text pixels. Figure 3(a) shows an
overview of the edge decoder. The loss at pixel $i$ of detected edge, $e_{i}$,
with respect to the ground truth edge, $g_{i}$ is defined as:
$\displaystyle l(e_{i})=\begin{cases}\alpha\cdot\log(1-P(e_{i})),&\text{if }g_{i}=0\\ \beta\cdot\log P(e_{i}),&\text{if }g_{i}=1\end{cases}$ (6)
where
$\displaystyle\alpha=\lambda\cdot\frac{\left|Y^{+}\right|}{\left|Y^{+}\right|+\left|Y^{-}\right|},\qquad\beta=\frac{\left|Y^{-}\right|}{\left|Y^{+}\right|+\left|Y^{-}\right|}.$ (7)
Here, $Y^{+}$ and $Y^{-}$ denote the positive and negative sample sets,
respectively. $\lambda$ is set to 1.1 to balance both types of samples. The
ground truth edge is generated using a Canny edge detector [40], and
P($e_{i}$) is the sigmoid function. Then, the overall edge reconstruction loss
can be formulated as:
$\mathcal{L}_{edge}=\sum_{i=1}^{|I|}\sum_{j=1}^{J}l(e^{j}_{i})+l(e^{{}^{\prime}}_{i}),$
(8)
where $l(e_{i}^{j})$ is the predicted edge at pixel $i$ and level $j$. $J=3$
is the number of side edge outputs in our model. $e_{i}^{{}^{\prime}}$ is the
final predicted edge map from the concatenation of side outputs. $|I|$ is the
number of pixels in a cropped image during training.
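A minimal sketch of the class-balanced term of Eqs. (6)-(7) for a single edge map; the explicit minus sign (standard when minimizing a cross-entropy) and the sum reduction are our assumptions, and the multi-level accumulation of Eq. (8) simply applies this function to each side output and the final concatenated map.

```python
import torch

def balanced_edge_loss(logits: torch.Tensor, gt: torch.Tensor,
                       lam: float = 1.1, eps: float = 1e-6) -> torch.Tensor:
    p = torch.sigmoid(logits)                    # P(e_i) in Eq. (6)
    num_pos = gt.sum()
    num_neg = gt.numel() - num_pos
    alpha = lam * num_pos / (num_pos + num_neg)  # weight on non-edge pixels
    beta = num_neg / (num_pos + num_neg)         # weight on edge pixels
    loss = -(alpha * (1 - gt) * torch.log(1 - p + eps)
             + beta * gt * torch.log(p + eps))
    return loss.sum()
```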
Finally, the total joint loss function, $\mathcal{L}_{total\\_en}$ of our
proposed model is:
$\mathcal{L}_{total\\_en}=\omega_{recons}\mathcal{L}_{recons}+\omega_{text}\mathcal{L}_{text}+\omega_{SSIM_{MS}}\mathcal{L}_{SSIM_{MS}}+\omega_{edge}\mathcal{L}_{edge},$
(9)
where $\omega_{recons}$, $\omega_{text}$, $\omega_{SSIM_{MS}}$, and
$\omega_{edge}$ are the weights to address the importance of each loss term
during training.
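Putting the four terms together, Eq. (9) reduces to a weighted sum; the sketch below plugs in the weights reported later in Section 6.1.

```python
def total_enhancement_loss(l_recons, l_text, l_ssim_ms, l_edge):
    # Eq. (9) with the weights from Section 6.1.
    return (0.2125 * l_recons + 0.425 * l_text
            + 0.15 * l_ssim_ms + 0.2125 * l_edge)
```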
### 3.4 Edge-Aware Attention
Polarized Self-Attention (PSA) [41] is one of the first works to propose an
attention mechanism catered to high-quality pixel-wise regression tasks.
However, we found that the original PSA module that only considers a single
source of feature map for both channel and spatial attention is ineffective
for extremely low-light image enhancement. Under low-light conditions, content
details such as text edges are barely discernible, making them less effective
in guiding the network to attend to spatial details.
Therefore, we designed our Edge-Aware Attention (Edge-Att) module to take in
feature maps from two encoders and process them differently, i.e., the feature
maps of extremely low-light images from the image encoder are attended
channel-wise, whereas the spatial attention submodule attends to feature maps
from the edge encoder. By doing so, we can ensure that Edge-Att can attend to
rich images and edge features simultaneously. The proposed attention module is
illustrated in Figure 3(b).
Firstly, the feature map from the image encoder, $F$ is fed into the channel
attention, $A^{ch}(F)\in\mathbb{R}^{C\times 1\times 1}$ with calculation as
follows:
$A^{ch}(F)=\sigma_{3}\left[F_{SG}(W_{z}(\sigma_{1}(W_{v}(F))\times
F_{SM}(\sigma_{2}(W_{q}(F)))))\right],$ (10)
where $W_{q}$, $W_{v}$, and $W_{z}$ are 1x1 convolution layers, $\sigma_{1}$,
$\sigma_{2}$ and $\sigma_{3}$ are tensor reshape operators. $F_{SM}(.)$ and
$F_{SG}(.)$ refer to softmax and sigmoid operators. The output of this branch
is $A^{ch}(F)\bigodot^{ch}F\in\mathbb{R}^{C\times H\times W}$, where
$\bigodot^{ch}$ is a channel-wise multiplication operator.
Secondly, given the edge-branch feature map $E$, the edge-aware spatial
attention, $A^{sp}(E)\in\mathbb{R}^{1\times H\times W}$, is defined as:
$A^{sp}(E)=F_{SG}\left[\sigma_{3}(F_{SM}(\sigma_{1}(F_{GP}(W_{q}(E))))\times\sigma_{2}(W_{v}(E)))\right],$
(11)
where $W_{q}$ and $W_{v}$ are 1x1 convolution layers, $\sigma_{1}$,
$\sigma_{2}$, and $\sigma_{3}$ are three tensor reshape operators. $F_{SM}(.)$
is a softmax operator, $F_{GP}(.)$ is a global pooling operator, and
$F_{SG}(.)$ is a sigmoid operator. The output of this branch is
$A^{sp}(E)\bigodot^{sp}F\in\mathbb{R}^{C\times H\times W}$, where
$\bigodot^{sp}$ is a spatial-wise multiplication operator, and $F$ is the
image enhancement branch’s feature map. Finally, output of the proposed Edge-
Att module is the composition of two submodules:
$Edge{\text{-}}Att(F,E)=A^{ch}(F)\odot^{ch}F+A^{sp}(E)\odot^{sp}F.$ (12)
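The sketch below is a minimal PyTorch rendering of Eqs. (10)-(12) in the polarized self-attention style; the $C/2$ bottleneck width and the single-head layout are assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn

class EdgeAwareAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        c = channels
        # channel branch, applied to image-encoder features F (Eq. 10)
        self.ch_q = nn.Conv2d(c, 1, kernel_size=1)
        self.ch_v = nn.Conv2d(c, c // 2, kernel_size=1)
        self.ch_z = nn.Conv2d(c // 2, c, kernel_size=1)
        # spatial branch, applied to edge-encoder features E (Eq. 11)
        self.sp_q = nn.Conv2d(c, c // 2, kernel_size=1)
        self.sp_v = nn.Conv2d(c, c // 2, kernel_size=1)

    def forward(self, f: torch.Tensor, e: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f.shape
        # channel attention A^ch(F): (b, c, 1, 1)
        q = torch.softmax(self.ch_q(f).view(b, 1, h * w), dim=-1)
        v = self.ch_v(f).view(b, c // 2, h * w)
        z = torch.bmm(v, q.transpose(1, 2)).view(b, c // 2, 1, 1)
        a_ch = torch.sigmoid(self.ch_z(z))
        # edge-aware spatial attention A^sp(E): (b, 1, h, w)
        sq = torch.softmax(self.sp_q(e).mean(dim=(2, 3)), dim=1).unsqueeze(1)
        sv = self.sp_v(e).view(b, c // 2, h * w)
        a_sp = torch.sigmoid(torch.bmm(sq, sv)).view(b, 1, h, w)
        # Eq. (12): both attentions modulate the image features F
        return a_ch * f + a_sp * f
```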
### 3.5 Text-Aware Copy-Paste Augmentation
Figure 4: Illustration of the Text-Aware Copy-Paste (Text-CP) data
augmentation. Compared with the original Copy-Paste, our method generates
images with non-overlapping text instances that allow the detection of texts
outside their usual context.
This work aims to enhance extremely low-light images to improve text detection
and recognition. However, the dataset’s limited number of text instances could
hinder the model’s ability. Although Copy-Paste Augmentation [42] can increase
the number of text instances, overlapping texts introduced by random placement
might confuse CRAFT in text detection loss since CRAFT is not trained to
detect such texts. In the commonly used scene text datasets such as ICDAR15
[34], overlapping texts are marked as “do not care” regions, which are excluded
from models’ training and evaluation. Thus, to adhere to ICDAR’s standard and
to address overlapping text issues, we propose a novel approach called Text-
Aware Copy-Paste Augmentation (Text-CP). Text-CP considers each text box’s
location and size by leveraging uniform and Gaussian distributions derived
from the dataset. For a training image $t$ of width $w_{t}$ and height $h_{t}$
to be augmented, we initialize a set of labeled text boxes in the training set
as $C$, which is:
$C=\left\\{(u_{1},v_{1},w_{1},h_{1}),\ldots,(u_{\left|C\right|},v_{\left|C\right|},w_{\left|C\right|},h_{\left|C\right|})\right\\},$ (13)
(13)
where each tuple represents the top left position of a text located at $u_{k}$
and $v_{k}$ with width, $w_{k}$, and height, $h_{k}$ with $k$ representing the
index of the current text’s box in the set. We then sample a target number of
text instances, $n_{\text{target}}$, from the set of $C$ to form $C_{t}$,
defined as the set of text boxes to be pasted on that training image, $t$. The
next step is to crop and paste the sampled texts without overlapping. For each
$c_{k}\in C_{t}$, we adopt two uniform distributions in modeling the position
of the texts, $\hat{u}_{k}$ and $\hat{v}_{k}$:
$\displaystyle\hat{u}_{k}\sim U(0,w_{t}),$ (14) $\displaystyle\hat{v}_{k}\sim
U(0,h_{t}).$
As for $w_{k}$ and $h_{k}$, they are sampled from Gaussian distribution as:
$\displaystyle\hat{w}_{k}\sim\mathcal{N}(\mu_{W},\sigma_{W}^{2}),$ (15)
$\displaystyle\hat{h}_{k}\sim\mathcal{N}(\mu_{H},\sigma_{H}^{2}),$
where $\mu$ and $\sigma^{2}$ are estimated means and variances of width $W$
and height $H$ from all the labeled texts in the training set. We illustrate
the overall data augmentation process of Text-CP and its augmented results in
Figure 4. The pseudocode of Text-CP is detailed in the supplementary material.
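For illustration, the sampling step of Eqs. (14)-(15) can be sketched as follows; the rejection of overlapping boxes and the actual crop-and-paste are omitted, and clamping sizes to at least one pixel is our assumption.

```python
import numpy as np

def sample_text_box(w_t, h_t, mu_w, mu_h, sigma_w, sigma_h, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    u = rng.uniform(0.0, w_t)                 # Eq. (14): top-left x
    v = rng.uniform(0.0, h_t)                 # Eq. (14): top-left y
    w = max(1.0, rng.normal(mu_w, sigma_w))   # Eq. (15): box width
    h = max(1.0, rng.normal(mu_h, sigma_h))   # Eq. (15): box height
    return u, v, w, h
```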
## 4 Extremely Low-Light Image Synthesis
### 4.1 Problem Formulations
To the best of our knowledge, the research community has not extensively
explored extremely low-light image synthesis, mainly due to the limited
availability of datasets designed explicitly for extremely low-light scene
text. While extremely low-light dataset, SID, and low-light dataset, LOL,
exist, they are not primarily collected with scene text in mind. This scarcity
of dedicated datasets for extremely low-light scene text poses challenges for
evaluating the performance of existing image enhancement methods in terms of
image quality and scene text metrics. In order to address this issue, we
define the extremely low-light image synthesis problem as follows:
$\hat{x}=LS(y;\theta_{s}),$ (16)
where given a long-exposure image $y$, a low-light image synthesis neural
network, $LS(y;\theta_{s})$ parameterized by $\theta_{s}$, will synthesize a
set of images $\hat{x}$, such that $B(LS(y;\theta_{s}))\simeq B(x)$. We want
the synthesized extremely low-light images to be as realistic as possible to
genuine low-light images, $x$.
Therefore, we introduce a Supervised-DCE model focusing on synthesizing a set
of realistic extremely low-light images, enabling existing image enhancement
techniques to leverage publicly available scene text datasets. Consequently,
existing low-light image enhancement methods can benefit from training with
synthetic data to the extent that they can perform better on the downstream
scene text detection task, as detailed in Section 6.5.
### 4.2 Network Design
Figure 5: Illustration of the proposed Supervised-DCE model for extremely low-
light image synthesis.
Zero-DCE [25] was originally proposed to perform image enhancement through
curve estimation. However, its network can only adjust brightness slightly
since the per-pixel trainable curve parameter, $\alpha$, in the quadratic
curve limits the pixel variation. The advantage of performing intensity
adjustment in terms of the quadratic curve is that the pixel range can be
better constrained. In this work, we propose a Supervised-DCE model that
learns to provide reconstructable extremely low-light images with paired
short- and long-exposure images. The framework of our image synthesis network,
Supervised-DCE, can be seen in Figure 5. Our goal is to push most values
closer to zero in the context of synthesizing extremely low-light images.
Accordingly, we propose a reformulation of the DCE model as follows:
$\hat{x}=-(H(y)+U(y))y^{2}+(1+H(y))y,$ (17)
where $y$ is the input (i.e., long-exposure image); $\hat{x}$ is the
synthesized low-light image; $H(y)$ and $U(y)$ are the output of Tanh and ReLU
branches, respectively. By introducing the second $U(y)$ branch, we eliminate
the need for iteratively applying the model to produce the desired output, and
drastic intensity adjustment can be done with only a single iteration.
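Concretely, given the two per-pixel curve maps, Eq. (17) is a single elementwise expression; the sketch assumes $y$ is normalized to $[0,1]$ and that $H(y)$ and $U(y)$ have the same shape as $y$.

```python
import torch

def apply_supervised_dce_curve(y: torch.Tensor, h: torch.Tensor,
                               u: torch.Tensor) -> torch.Tensor:
    # Eq. (17): one pass suffices, unlike the iterative Zero-DCE curve.
    return -(h + u) * y ** 2 + (1.0 + h) * y
```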
In the original Zero-DCE model, image enhancement is learned by setting the
exposure value to 0.6 in the exposure control loss. However, manually setting
an exposure value to synthesize extremely low-light images is too heuristic
and inefficient. In our proposed synthesis framework, the overall learning is
done by training SID long-exposure images to be similar to their short-
exposure counterparts. Most importantly, these images are translated so that
their text information deteriorates in the same way as in genuine extremely
low-light images and cannot be easily reconstructed. Then,
the trained model can be used to transform scene text datasets in the public
domain to boost the performance of extremely low-light image enhancement in
terms of text detection.
### 4.3 Objectives
During extremely low-light image synthesis, we expect the output to maintain
spatial consistency while reducing the overall proximity loss:
$\mathcal{L}_{prox}=\|\hat{x}-x\|_{1}+\mathcal{L}_{entropy}(\hat{x},x)+\mathcal{L}_{smoothness}(\hat{x},x),$
(18)
where $\hat{x}$ is the synthesized extremely low-light image given the long-
exposure image $y$, and $x$, is the genuine low-light image, i.e., ground
truth for $\hat{x}$. Entropy loss, $\mathcal{L}_{entropy}$, and smoothness
loss, $\mathcal{L}_{smoothness}$ [43], are also used to encourage the
differences to be both sparse and local. With the introduction of
$\mathcal{L}_{prox}$, we removed the color constancy loss of the original
Zero-DCE model since color constancy can be enforced through the supervised
loss.
The spatial consistency loss, $\mathcal{L}_{spa}$ encourages spatial coherence
of the synthesized image by preserving the difference of neighboring regions
between the input image and its synthesized low-light version:
$\mathcal{L}_{spa}=\frac{1}{\mathcal{M}}\sum_{i=1}^{\mathcal{M}}\sum_{j\in\omega(i)}(|\hat{X}_{i}-\hat{X}_{j}|-\alpha_{s}\log_{10}(9|Y_{i}-Y_{j}|+1))^{2},$
(19)
where $\mathcal{M}$ is the number of local regions, and $\omega(i)$ is four
neighboring regions (top, down, left, right) centered at region $i$. $\hat{X}$
and $Y$ are the averaged intensity values of local regions of the synthesized
images and the long-exposure images, respectively. We introduced logarithm
operation and $\alpha_{s}$ parameter to reduce the large spatial difference of
$Y$ where $\alpha_{s}$ is set to 0.05. We set the local region size to
$4\times 4$, following the original setting of Zero-DCE.
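A sketch of Eq. (19) using 4x4 average pooling for the local regions; the toroidal shifts used for the four-neighbor differences are a simplification at image borders.

```python
import torch
import torch.nn.functional as F

def spatial_consistency_loss(x_hat, y, alpha_s: float = 0.05):
    # mean intensity per 4x4 local region, averaged over color channels
    xh = F.avg_pool2d(x_hat.mean(1, keepdim=True), 4)
    yy = F.avg_pool2d(y.mean(1, keepdim=True), 4)
    loss = 0.0
    for dx, dy in [(0, 1), (0, -1), (1, 0), (-1, 0)]:
        d_x = xh - torch.roll(xh, shifts=(dy, dx), dims=(2, 3))
        d_y = yy - torch.roll(yy, shifts=(dy, dx), dims=(2, 3))
        loss = loss + ((d_x.abs()
                        - alpha_s * torch.log10(9 * d_y.abs() + 1)) ** 2).mean()
    return loss / 4
```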
Besides spatial consistency, we also expect the monotonicity relation between
neighboring pixels to be preserved. To achieve this, we reused the
illumination smoothness loss:
$\mathcal{L}_{tv_{Z}}=\sum_{\forall
c\in\xi}(|\nabla_{x}Z^{c}|+|\nabla_{y}Z^{c}|)^{2},\xi=\left\\{R,G,B\right\\},$
(20)
where $\nabla_{x}$ and $\nabla_{y}$ are gradient operations on the x-axis and
y-axis, respectively. Illumination smoothness loss, $\mathcal{L}_{tv_{Z}}$, is
applied on both $H(y)$ and $U(y)$, i.e., the curve parameter maps of the two
branches, respectively, by substituting $Z$ with $H$ and $U$, resulting in
$\mathcal{L}_{tv_{H}}$ and $\mathcal{L}_{tv_{U}}$.
In summary, the overall learning objective, $\mathcal{L}_{total\\_syn}$ to
train our extremely low-light image synthesis network is defined as:
$\mathcal{L}_{total\\_syn}=\omega_{prox}\mathcal{L}_{prox}+\omega_{spa}\mathcal{L}_{spa}+\omega_{tv_{H}}\mathcal{L}_{tv_{H}}+\omega_{tv_{U}}\mathcal{L}_{tv_{U}}.$
(21)
## 5 New Low-Light Text Datasets
| Dataset | Train GT Img. | Train Leg. | Train Illeg. | $\mu_{W}$ | $\mu_{H}$ | $\sigma_{W}$ | $\sigma_{H}$ | Test GT Img. | Test Leg. | Test Illeg. |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| SID-Sony-Text | 161 | 5937 | 2128 | 79.270 | 34.122 | 123.635 | 50.920 | 50 | 611 | 359 |
| SID-Fuji-Text | 135 | 6213 | 4534 | 128.579 | 57.787 | 183.199 | 68.466 | 41 | 1018 | 1083 |
| LOL-Text | 485 | 613 | 1423 | 23.017 | 14.011 | 21.105 | 17.542 | 15 | 28 | 45 |
| IC15 | 1000 | 4468 | 7418 | 78.410 | 29.991 | 55.947 | 24.183 | 500 | 2077 | 3153 |
Table 2: Statistics reported based on long-exposure images for all datasets.
GT Img. stands for ground truth image count, while Leg. and Illeg. stand for
legible and illegible text count, respectively.
In this work, we annotated all text instances in the extremely low-light
dataset, SID [1], and the ordinary low-light dataset, LOL [2]. SID has two
subsets: SID-Sony, captured by Sony $\alpha$7S II, and SID-Fuji, captured by
Fujifilm X-T2. For this work, we included 878/810 short-exposure images and
211/176 long-exposure images at a resolution of 4240×2832/6000×4000 from SID-
Sony and SID-Fuji, respectively. The short-exposure times are 1/30, 1/25, and
1/10 seconds, while the corresponding reference (long-exposure) images were
captured with 100 to 300 times longer exposure, i.e., 10 to 30 seconds. In our
experiments, we converted short- and long-exposure SID images to RGB format.
The LOL dataset provides low/normal-light image pairs taken from real scenes
by controlling exposure time and ISO. There are 485 and 15 images at a
resolution of 600×400 in the training and test sets, respectively. We closely
annotated text instances in the SID and LOL datasets following the common IC15
standard. We show some samples in Figure 6. The newly annotated datasets are
named SID-Sony-Text, SID-Fuji-Text, and LOL-Text to differentiate them from
their low-light counterparts.
(a) SID-Sony-Text
(b) SID-Fuji-Text
(c) LOL-Text
Figure 6: Green boxes represent legible texts, and blue boxes represent
illegible texts.
IC15 dataset was introduced in the ICDAR 2015 Robust Reading Competition for
incidental scene text detection and recognition. It contains 1500 scene text
images at a resolution of $1280\times 720$. In this study, IC15 is primarily
used to synthesize extremely low-light scene text images. Detailed statistics
of the text annotations for SID-Sony-Text, SID-Fuji-Text, LOL-Text, and IC15
are shown in Table 2, where we included the statistics for long-exposure
images only for the sake of brevity. In this table, we also report relevant
statistics of the mean and standard deviation of labeled texts’ width and
height to be used by the proposed Text-Aware Copy-Paste augmentation. The text
annotations for SID-Sony-Text, SID-Fuji-Text, and LOL-Text datasets will be
released at https://github.com/chunchet-ng/Text-in-the-Dark.
Moreover, we synthesized extremely low-light images based on IC15 by using
U-Net and our proposed Supervised-DCE model, respectively. To study the
difference between these two variations of image synthesis methods, we
generated a total of four sets of images by using the aforementioned two
models trained on SID-Sony and SID-Fuji, individually. Naming convention of
such synthetic datasets follows the format of “{Syn-
IC15}-{Sony/Fuji}-{v1/v2}”. “{Sony/Fuji}” is an indication of which dataset
the image synthesis model is trained on, while “{v1/v2}” differentiates the
image synthesis models where v1 is U-Net and v2 is our proposed Supervised-DCE
model. For instance, the synthetic images generated by a U-Net trained on SID-
Sony and SID-Fuji are named Syn-IC15-Sony-v1 and Syn-IC15-Fuji-v1, respectively.
Similarly, synthetic images generated by our proposed Supervised-DCE model are denoted as
Syn-IC15-Sony-v2 and Syn-IC15-Fuji-v2.
## 6 Experimental Results
### 6.1 Experiment Setup
Datasets and Metrics. All low-light image enhancement methods are trained and
tested on the datasets detailed in Section 5. They are then evaluated in terms
of intensity metrics (PSNR, SSIM), perceptual similarity (LPIPS), and text
detection (H-Mean). For the SID-Sony-Text, SID-Fuji-Text, and LOL-Text
datasets, which are annotated with text bounding boxes only, we used well-
known and commonly used scene text detectors (CRAFT [32] and PAN [33]) to
analyze the enhanced images. For IC15, which provides both text detection and
text recognition labels, we conducted a two-stage text spotting experiment
using the aforementioned text detectors (CRAFT, PAN) and two robust text
recognizers (TRBA [37] and ASTER [36]) on the synthesized IC15 images after
enhancement. The metric for text spotting is case-insensitive word accuracy.
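For clarity, case-insensitive word accuracy can be sketched as below, assuming predictions have already been matched one-to-one with ground truth words.

```python
def word_accuracy(preds: list[str], gts: list[str]) -> float:
    # A prediction counts as correct if it matches the ground truth word
    # when both are lowercased.
    correct = sum(p.lower() == g.lower() for p, g in zip(preds, gts))
    return correct / max(len(gts), 1)
```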
Implementation Details. We trained our image enhancement model for 4000 epochs
using the Adam optimizer [44] with a batch size of 2. The initial learning
rate is set to $1e^{-4}$ and decreased to $1e^{-5}$ after 2000 epochs. At each
training iteration, we randomly cropped a $512\times 512$ patch with at least
one labeled text box inside and applied random flipping and image transpose as
data augmentation strategies. The weightings of each loss term, i.e.,
$\omega_{recons}$, $\omega_{text}$, $\omega_{SSIM_{MS}}$, and $\omega_{edge}$,
were empirically set to 0.2125, 0.425, 0.15, and 0.2125 respectively,
following the work of ELIE_STR [45]. For other image enhancement methods, we
re-trained them on all datasets using the best set of hyperparameters
specified in their respective code repositories or papers.
As for the Supervised-DCE model, we used a batch size of 8 and trained for 200
epochs using the Adam optimizer with default parameters and a fixed learning
rate of $1e^{-4}$. It was trained on $256\times 256$ image patches with loss
weightings of $\omega_{prox}$, $\omega_{spa}$, $\omega_{tv_{H}}$, and
$\omega_{tv_{U}}$, set to 1, 20, 10, and 10, respectively.
### 6.2 Results on SID-Sony-Text and SID-Fuji-Text Datasets
Our model’s performance is demonstrated in Table 3, achieving the highest
H-Mean scores on all datasets with CRAFT and PAN. Following [45], we
illustrate the CRAFT text detection results on SID-Sony-Text in Figure 7.
Qualitative results of existing methods on SID-Fuji-Text are presented in the
supplementary material. The effectiveness of our model in enhancing extremely
low-light images to a level where text can be accurately detected is readily
apparent. In Figure 7, only the images enhanced by our proposed model yield
accurate text detection results. On the other hand, existing methods generally
produce noisier images, resulting in inferior text detection results. While
GAN-enhanced images tend to be less noisy, the text regions are blurry, making
text detection challenging. Moreover, our model achieves the highest PSNR and
SSIM scores on both SID-Sony-Text and SID-Fuji-Text datasets, showing that our
enhanced images are the closest to the image quality of ground truth images.
In short, better text detection is achieved on our enhanced images through the
improvement of overall image quality and preservation of fine details within
text regions.
Figure 7 panels (left to right, top to bottom): (a) Low-Light; (b) LIME [18]; (c) BIMEF [19]; (d) Zero-DCE [25]; (e) Zero-DCE++ [46]; (f) SCI [26]; (g) CycleGAN [22]; (h) EnlightenGAN [23]; (i) RetinexNet [2]; (j) Pix2Pix [21]; (k) ChebyLighter [27]; (l) FECNet [31]; (m) IAT [30]; (n) ELIE_STR [45]; (o) Ours; (p) Ground Truth.
Figure 7: Comparison with state-of-the-art methods on the SID-Sony-Text
dataset is shown in the following manner: for each column, the first row
displays enhanced images marked with blue boxes as regions of interest. The
second row displays zoomed-in regions of enhanced images overlaid with red
text detection boxes from CRAFT [32]. Column 7(a) displays the low-light
image. Columns 7(b) to 7(o) show image enhancement results from all related
methods. The last cell displays ground truth images.
SID-Sony-Text:

| Type | Method | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | CRAFT $\uparrow$ | PAN $\uparrow$ |
| --- | --- | --- | --- | --- | --- | --- |
|  | Input | - | - | - | 0.057 | 0.026 |
| TRAD | LIME [18] | 13.870 | 0.135 | 0.873 | 0.127 | 0.057 |
| TRAD | BIMEF [19] | 12.870 | 0.110 | 0.808 | 0.136 | 0.079 |
| ZSL | Zero-DCE [25] | 10.495 | 0.080 | 0.999 | 0.196 | 0.157 |
| ZSL | Zero-DCE++ [46] | 12.368 | 0.076 | 0.982 | 0.218 | 0.162 |
| ZSL | SCI [26] | 11.814 | 0.100 | 1.000 | 0.201 | 0.151 |
| UL | CycleGAN [22] | 15.340 | 0.453 | 0.832 | 0.090 | 0.053 |
| UL | EnlightenGAN [23] | 14.590 | 0.426 | 0.793 | 0.146 | 0.075 |
| SL | RetinexNet [2] | 15.490 | 0.368 | 0.785 | 0.115 | 0.040 |
| SL | Pix2Pix [21] | 21.070 | 0.662 | 0.837 | 0.266 | 0.190 |
| SL | ChebyLighter [27] | 15.418 | 0.381 | 0.787 | 0.260 | 0.184 |
| SL | FECNet [31] | 22.575 | 0.648 | 0.788 | 0.245 | 0.188 |
| SL | IAT [30] | 19.234 | 0.562 | 0.778 | 0.244 | 0.176 |
| SL | ELIE_STR [45] | 25.507 | 0.716 | 0.789 | 0.324 | 0.266 |
| SL | Ours | 25.596 | 0.751 | 0.751 | 0.368 | 0.298 |
|  | GT | - | - | - | 0.842 | 0.661 |

SID-Fuji-Text:

| Type | Method | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | CRAFT $\uparrow$ | PAN $\uparrow$ |
| --- | --- | --- | --- | --- | --- | --- |
|  | Input | - | - | - | 0.048 | 0.005 |
| ZSL | Zero-DCE [25] | 8.992 | 0.035 | 1.228 | 0.249 | 0.061 |
| ZSL | Zero-DCE++ [46] | 11.539 | 0.047 | 1.066 | 0.262 | 0.077 |
| ZSL | SCI [26] | 10.301 | 0.056 | 1.130 | 0.300 | 0.073 |
| UL | CycleGAN [22] | 17.832 | 0.565 | 0.735 | 0.277 | 0.191 |
| UL | EnlightenGAN [23] | 18.834 | 0.572 | 0.822 | 0.310 | 0.277 |
| SL | Pix2Pix [21] | 19.601 | 0.599 | 0.803 | 0.353 | 0.296 |
| SL | ChebyLighter [27] | 20.313 | 0.616 | 0.791 | 0.412 | 0.318 |
| SL | FECNet [31] | 18.863 | 0.365 | 0.829 | 0.382 | 0.185 |
| SL | IAT [30] | 19.647 | 0.537 | 0.844 | 0.445 | 0.277 |
| SL | ELIE_STR [45] | 19.816 | 0.614 | 0.801 | 0.426 | 0.333 |
| SL | Ours | 21.880 | 0.649 | 0.788 | 0.487 | 0.356 |
|  | GT | - | - | - | 0.775 | 0.697 |

LOL-Text:

| Type | Method | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | CRAFT $\uparrow$ | PAN $\uparrow$ |
| --- | --- | --- | --- | --- | --- | --- |
|  | Input | - | - | - | 0.333 | 0.133 |
| ZSL | Zero-DCE [25] | 14.928 | 0.587 | 0.328 | 0.421 | 0.229 |
| ZSL | Zero-DCE++ [46] | 15.829 | 0.537 | 0.408 | 0.389 | 0.242 |
| ZSL | SCI [26] | 14.835 | 0.549 | 0.335 | 0.421 | 0.171 |
| UL | CycleGAN [22] | 19.826 | 0.734 | 0.288 | 0.250 | 0.133 |
| UL | EnlightenGAN [23] | 15.800 | 0.654 | 0.300 | 0.343 | 0.125 |
| SL | Pix2Pix [21] | 20.581 | 0.771 | 0.247 | 0.353 | 0.129 |
| SL | ChebyLighter [27] | 19.820 | 0.769 | 0.199 | 0.353 | 0.176 |
| SL | FECNet [31] | 20.432 | 0.787 | 0.231 | 0.378 | 0.229 |
| SL | IAT [30] | 20.437 | 0.772 | 0.234 | 0.421 | 0.188 |
| SL | ELIE_STR [45] | 19.782 | 0.824 | 0.167 | 0.462 | 0.235 |
| SL | Ours | 21.330 | 0.828 | 0.163 | 0.474 | 0.294 |
|  | GT | - | - | - | 0.439 | 0.205 |
Table 3: Quantitative results of PSNR, SSIM, LPIPS, and text detection H-Mean
for low-light image enhancement methods on SID-Sony-Text, SID-Fuji-Text, and
LOL-Text datasets. Please note that TRAD, ZSL, UL, and SL stand for
traditional methods, zero-shot learning, unsupervised learning, and supervised
learning respectively. Scores in bold are the best of all.
### 6.3 Results on LOL-Text Dataset
To demonstrate the effectiveness of our model in enhancing low-light images
with varying levels of darkness, we conducted experiments on the widely used
LOL dataset, which is relatively brighter than the SID dataset, as depicted in
Table 1. Interestingly, we found that our enhanced images achieved the best
detection results on LOL-Text among existing methods, as shown in Table 3.
Surprisingly, despite the lower resolution (600×400) of the images in LOL, our
method’s enhanced images, with sharper and crisper low-level details, surpassed
the ground truth images’ H-Mean scores. Qualitative results on the LOL-Text
dataset are illustrated in the supplementary material. Although certain
methods yielded output images with acceptable image quality (i.e., bright
images without color shift), their text detection results were inferior to
ours. Furthermore, our superior results on the LOL-Text dataset emphasize our
method’s ability to generalize well on both ordinary and extremely low-light
images, effectively enhancing a broader range of low-light images while making
the text clearly visible.
### 6.4 Effectiveness of the Proposed Supervised-DCE Model
The goal of image synthesis in our work is to translate images captured in
well-illuminated scenarios to extremely low light. In this work, we choose the
commonly used IC15 scene text dataset as our main synthesis target. The
synthesized dataset then serves as additional data to train better scene text-
aware image enhancement models, which are studied in Section 6.5.
Intuitively, realistic synthesized images should possess similar
characteristics to genuine extremely low-light images. To verify the
effectiveness of our synthesis model, we compared our proposed Supervised-DCE
model (v2) with the U-Net proposed in SID [1] (v1). Specifically, we trained
the synthesizing models on the training set and synthesized the images based
on the corresponding test set. Then, we calculated the PSNR and SSIM of the
synthesized images by comparing them with the genuine ones along with the
average perceptual lightness in CIELAB color space. The comparison was made on
two SID datasets, SID-Sony and SID-Fuji.
In Table 4, we show that v2’s PSNR and SSIM are higher than v1’s, indicating
higher similarity between our synthesized and genuine images. Our new method
(v2) also exhibits closer Avg. L* values and H-Mean scores to the genuine
images than v1, indicating darker and more accurate deterioration of fine text
details. In addition, qualitative results for the proposed Supervised-DCE
model and results of synthetic IC15 datasets including Syn-IC15-Sony-v1, Syn-
IC15-Sony-v2, Syn-IC15-Fuji-v1, and Syn-IC15-Fuji-v2 are presented in the
supplementary material for comprehensive analyses.
| Dataset | PSNR | SSIM | Avg. L* | CRAFT | PAN |
| --- | --- | --- | --- | --- | --- |
| Syn-SID-Sony-v1 | 41.095 | 0.809 | 0.176 | 0.294 | 0.083 |
| Syn-SID-Sony-v2 | 45.442 | 0.942 | 0.003 | 0.135 | 0.014 |
| Genuine SID-Sony | - | - | 0.008 | 0.057 | 0.026 |
| Syn-SID-Fuji-v1 | 39.187 | 0.784 | 0.172 | 0.402 | 0.042 |
| Syn-SID-Fuji-v2 | 41.881 | 0.863 | 0.002 | 0.093 | 0.002 |
| Genuine SID-Fuji | - | - | 0.004 | 0.048 | 0.005 |
Table 4: The difference between genuine extremely low-light dataset, SID, and
synthetic extremely low-light images generated using U-Net (v1) and
Supervised-DCE (v2). Please note that synthetic images’ PSNR and SSIM values
are based on comparison against genuine low-light images in the test set
instead of the pure black images used in Table 1. Additionally, the higher PSNR
and SSIM values of the v2 images, together with their closer Avg. L*, indicate
that they are darker and more realistic, i.e., closer to genuine extremely
low-light images.
### 6.5 Results on Training with Mixed Datasets
We trained top-performing models from Section 6.2 using a mixture of genuine
(SID) and synthetic low-light (IC15) datasets to test whether extremely low-
light image enhancement can benefit from synthesized images. The trained
models were evaluated on their respective genuine low-light datasets. Results
in Table 5 showed a significant increase in H-Mean, and we found that both
versions (v1 and v2) can fill the gap caused by the scarcity of genuine low-
light images. This justifies the creation of a synthetic IC15 dataset for such
a purpose. Furthermore, v2-images, i.e., extremely low-light images
synthesized by our proposed Supervised-DCE, further pushed the limit of H-mean
scores on genuine extremely low-light images, and our enhancement model
benefited the most because it could learn more from text instances and
reconstruct necessary details to represent texts. Despite our method’s
success, a noticeable gap exists between our results and the ground truth,
emphasizing the need for further research and development to achieve even more
accurate and reliable scene text extraction in low-light conditions.
| Type | Method | SID-Sony-Text + Syn-IC15-Sony-v1: CRAFT $\uparrow$ | PAN $\uparrow$ | SID-Sony-Text + Syn-IC15-Sony-v2: CRAFT $\uparrow$ | PAN $\uparrow$ | SID-Fuji-Text + Syn-IC15-Fuji-v1: CRAFT $\uparrow$ | PAN $\uparrow$ | SID-Fuji-Text + Syn-IC15-Fuji-v2: CRAFT $\uparrow$ | PAN $\uparrow$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Input | 0.057 | 0.026 | 0.057 | 0.026 | 0.048 | 0.005 | 0.048 | 0.005 |
| ZSL | Zero-DCE++ [46] | 0.230 | 0.159 | 0.242 | 0.153 | 0.274 | 0.080 | 0.281 | 0.076 |
| ZSL | SCI [26] | 0.240 | 0.154 | 0.243 | 0.160 | 0.307 | 0.076 | 0.313 | 0.084 |
| UL | CycleGAN [22] | 0.180 | 0.071 | 0.219 | 0.143 | 0.297 | 0.284 | 0.310 | 0.277 |
| UL | EnlightenGAN [23] | 0.205 | 0.146 | 0.237 | 0.163 | 0.329 | 0.246 | 0.342 | 0.282 |
| SL | ELIE_STR [45] | 0.348 | 0.278 | 0.361 | 0.296 | 0.444 | 0.359 | 0.466 | 0.375 |
| SL | Ours | 0.383 | 0.311 | 0.395 | 0.319 | 0.515 | 0.392 | 0.549 | 0.416 |
|  | GT | 0.842 | 0.661 | 0.842 | 0.661 | 0.775 | 0.697 | 0.775 | 0.697 |
Table 5: Text detection H-Mean on genuine extremely low-light datasets when
trained on a combination of genuine and synthetic datasets. Scores in bold are
the best of all.
### 6.6 Ablation Study of Proposed Modules
| Text-CP | Dual Encoder | Edge-Att | Edge Decoder | PSNR $\uparrow$ | SSIM $\uparrow$ | LPIPS $\downarrow$ | CRAFT $\uparrow$ | PAN $\uparrow$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| - | - | - | - | 21.847 | 0.698 | 0.783 | 0.283 | 0.205 |
| ✓ | - | - | - | 21.263 | 0.658 | 0.771 | 0.304 | 0.252 |
| ✓ | ✓ | - | - | 20.597 | 0.655 | 0.780 | 0.335 | 0.261 |
| ✓ | ✓ | ✓ | - | 21.440 | 0.669 | 0.776 | 0.342 | 0.256 |
| ✓ | ✓ | - | ✓ | 21.588 | 0.674 | 0.779 | 0.353 | 0.285 |
| ✓ | - | ✓ | ✓ | 23.074 | 0.712 | 0.783 | 0.350 | 0.281 |
| - | ✓ | ✓ | ✓ | 24.192 | 0.738 | 0.784 | 0.356 | 0.292 |
| ✓ | ✓ | ✓ | ✓ | 25.596 | 0.751 | 0.751 | 0.368 | 0.298 |
Table 6: Ablation study of proposed modules in terms of PSNR, SSIM, LPIPS, and
text detection H-Mean on the SID-Sony-Text dataset. Scores in bold are the
best of all.
To understand the effect of each component of our model, we conducted several
ablation experiments by either adding or removing them one at a time. Results
are presented in Table 6. The baseline was a plain U-Net without any proposed
modules. We initiated the ablation study by adding Text-CP data augmentation,
which improved CRAFT H-Mean from 0.283 to 0.304, indicating that involving
more text instances during training is relevant to text-aware image
enhancement for models to learn text representation. Moreover, scores
increased steadily by gradually stacking the baseline with more modules. For
instance, with the help of the dual encoder structure and Edge-Att module in
our proposed framework, CRAFT H-Mean increased from 0.304 to 0.342. This shows
that they can extract image features better and attend to edges that shape
texts in enhanced images. The edge reconstruction loss calculated based on
predictions from the edge decoder helped strengthen the learning of edge
features and empowered encoders in our model. Interestingly, we found that
removing one of the two most representative modules (i.e., dual encoder or
Edge-Att module) led to significant differences in H-Mean because these two
modules’ designs allow them to extract and attend to significant image
features independently. We concluded the experiment by showing that combining
all proposed modules led to the highest scores, as each module played an
integral role in our final network. Further analysis of Edge-Att and Text-CP
are included in the supplementary material to study their effectiveness as
compared to the original versions.
## 7 Conclusion
This paper presents a novel scene text-aware extremely low-light image
enhancement framework consisting of a Text-Aware Copy-Paste augmentation
method as a pre-processing step, followed by a new dual-encoder-decoder
architecture armed with Edge-Aware attention modules. With further assistance
from text detection and edge reconstruction losses, our model can enhance
images to the extent that high-level perceptual reasoning tasks can be better
fulfilled. Extremely low-light image synthesis has rarely been discussed over
the years. Thus, we proposed a novel Supervised-DCE model to provide better
synthesized extremely low-light images so that extremely low-light image
enhancement can benefit from publicly available scene text datasets such as
IC15. Furthermore, our proposed extremely low-light image enhancement model
has been rigorously evaluated against various competing methods, including
traditional techniques and deep learning-based approaches, on challenging
datasets such as SID-Sony-Text, SID-Fuji-Text, LOL-Text, and synthetic IC15.
Through extensive experimentation, our findings consistently demonstrate our
model’s superiority in extremely low-light image enhancement and text
extraction tasks.
## References
* [1] C. Chen, Q. Chen, J. Xu, V. Koltun, Learning to see in the dark, in: CVPR, 2018\.
* [2] C. Wei, W. Wang, W. Yang, J. Liu, Deep retinex decomposition for low-light enhancement, arXiv preprint arXiv:1808.04560 (2018).
* [3] D. J. Jobson, Z.-u. Rahman, G. A. Woodell, Properties and performance of a center/surround retinex, IEEE Transactions on Image Processing 6 (3) (1997) 451–462.
* [4] T. Çelik, T. Tjahjadi, Contextual and variational contrast enhancement, IEEE Transactions on Image Processing 20 (2011) 3431–3441.
* [5] C. Lee, C. Lee, C.-S. Kim, Contrast enhancement based on layered difference representation of 2d histograms, IEEE Transactions on Image Processing 22 (12) (2013) 5372–5384.
* [6] L. Tao, C. Zhu, J. Song, T. Lu, H. Jia, X. Xie, Low-light image enhancement using cnn and bright channel prior, in: ICIP, 2017.
* [7] L. Tao, C. Zhu, G. Xiang, Y. Li, H. Jia, X. Xie, LLCNN: A convolutional neural network for low-light image enhancement, in: VCIP, 2017.
* [8] K. G. Lore, A. Akintayo, S. Sarkar, LLNet: A deep autoencoder approach to natural low-light image enhancement, Pattern Recognition 61 (2017) 650–662.
* [9] M. Gharbi, J. Chen, J. Barron, S. W. Hasinoff, F. Durand, Deep bilateral learning for real-time image enhancement, ACM Transactions on Graphics 36 (4) (2017) 1–12.
* [10] M. Liu, L. Tang, S. Zhong, H. Luo, J. Peng, Learning noise-decoupled affine models for extreme low-light image enhancement, Neurocomputing 448 (2021) 21–29.
* [11] J. Hu, L. Shen, G. Sun, Squeeze-and-excitation networks, in: CVPR, 2018.
* [12] J. Hu, L. Shen, S. Albanie, G. Sun, A. Vedaldi, Gather-excite: Exploiting feature context in convolutional neural networks, in: NIPS, 2018.
* [13] C. Chen, B. Li, An interpretable channelwise attention mechanism based on asymmetric and skewed gaussian distribution, Pattern Recognition 139 (2023) 109467\.
* [14] L. Ju, J. Kittler, M. A. Rana, W. Yang, Z. Feng, Keep an eye on faces: Robust face detection with heatmap-assisted spatial attention and scale-aware layer attention, Pattern Recognition 140 (2023) 109553.
* [15] X. Hou, M. Liu, S. Zhang, P. Wei, B. Chen, Canet: Contextual information and spatial attention based network for detecting small defects in manufacturing industry, Pattern Recognition 140 (2023) 109558.
* [16] S. Woo, J. Park, J.-Y. Lee, I. S. Kweon, Cbam: Convolutional block attention module, in: ECCV, 2018.
* [17] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, H. Lu, Dual attention network for scene segmentation, in: CVPR, 2019.
* [18] X. Guo, Y. Li, H. Ling, Lime: Low-light image enhancement via illumination map estimation, IEEE Transactions on Image Processing 26 (2) (2016) 982–993.
* [19] Z. Ying, G. Li, W. Gao, A bio-inspired multi-exposure fusion framework for low-light image enhancement, arXiv preprint arXiv:1711.00591 (2017).
* [20] I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, Y. Bengio, Generative adversarial nets, in: NIPS, 2014.
* [21] P. Isola, J.-Y. Zhu, T. Zhou, A. A. Efros, Image-to-image translation with conditional adversarial networks, in: CVPR, 2017.
* [22] J.-Y. Zhu, T. Park, P. Isola, A. A. Efros, Unpaired image-to-image translation using cycle-consistent adversarial networkss, in: ICCV, 2017.
# Spectral Turán Type Problems on Cancellative Hypergraphs
Zhenyu Ni, Department of Mathematics, Hainan University, Haikou 570228, P.R. China (995264@hainanu.edu.cn). Lele Liu, College of Science, University of Shanghai for Science and Technology, Shanghai 200093, P.R. China (ahhylau@outlook.com); this author is supported by the National Natural Science Foundation of China (No. 12001370). Corresponding author: Liying Kang, Department of Mathematics, Shanghai University, Shanghai 200444, P.R. China (lykang@shu.edu.cn); this author is supported by the National Natural Science Foundation of China (Nos. 11871329, 11971298).
###### Abstract
A $3$-uniform hypergraph is cancellative if the symmetric difference of any
two of its edges is not contained in a third one. Equivalently, a
$3$-uniform hypergraph $G$ is cancellative if and only if $G$ is
$\\{F_{4},F_{5}\\}$-free, where $F_{4}=\\{abc,abd,bcd\\}$ and
$F_{5}=\\{abc,abd,cde\\}$. A classical result in extremal combinatorics states
that the maximum size of a cancellative hypergraph is achieved by the balanced
complete tripartite $3$-uniform hypergraph; this was first proved by
Bollobás and later reproved by Keevash and Mubayi. In this paper, we consider spectral
extremal problems for cancellative hypergraphs. More precisely, we determine
the maximum $p$-spectral radius of cancellative $3$-uniform hypergraphs, and
characterize the extremal hypergraph. As a by-product, we give an alternative
proof of Bollobás’ result from spectral viewpoint.
Keywords: Hypergraph; Spectral radius; Spectral Turán problem.
AMS Classification: 05C35; 05C50; 05C65.
## 1 Introduction
Consider an $r$-uniform hypergraph (or $r$-graph for brevity) $G$ and a family
of $r$-graphs $\mathcal{F}$. We say $G$ is _$\mathcal{F}$-free_ if $G$ does
not contain any member of $\mathcal{F}$ as a subhypergraph. The _Turán number_
$\operatorname{ex}(n,\mathcal{F})$ is the maximum number of edges of an
$\mathcal{F}$-free hypergraph on $n$ vertices. Determining Turán numbers of
graphs and hypergraphs is one of the central problems in extremal
combinatorics. For graphs, the problem was asymptotically solved for all non-
bipartite graphs by the celebrated Erdős-Stone-Simonovits Theorem. By contrast
with the graph case, there is comparatively little understanding of the
hypergraph Turán number. We refer the reader to the surveys [6, 9, 12].
In this paper we consider spectral analogues of Turán type problems for
$r$-graphs. For $r=2$, the picture is relatively complete, due in large part
to a longstanding project of Nikiforov, see e.g., [13] for details. However,
for $r\geq 3$ there are very few known results. In [10], Keevash-Lenz-Mubayi
determine the maximum $p$-spectral radius of any $3$-graph on $n$ vertices not
containing the Fano plane when $n$ is sufficiently large. They also obtain a
$p$-spectral version of the Erdős-Ko-Rado theorem on $t$-intersecting
$r$-graphs. Recently, Ellingham-Lu-Wang [4] show that the $n$-vertex
outerplanar $3$-graph of maximum spectral radius is the unique 3-graph whose
shadow graph is the join of an isolated vertex and the path $P_{n-1}$. Gao-
Chang-Hou [7] study the extremal problem for $K_{r+1}^{+}$-free $r$-graphs
among linear hypergraphs, where $K_{r+1}^{+}$ is obtained from the complete
graph $K_{r+1}$ by enlarging each edge of $K_{r+1}$ with $r-2$ new vertices
disjoint from $V(K_{r+1})$ such that distinct edges of $K_{r+1}$ are enlarged
by distinct vertices.
To state our results precisely, we need some basic definitions and notations.
A $3$-graph is _tripartite_ or _$3$-partite_ if it has a vertex partition
into three parts such that every edge has exactly one vertex in each part. Let
$T_{3}(n)$ be the complete $3$-partite $3$-graph on $n$ vertices with part
sizes $\lfloor n/3\rfloor$, $\lfloor(n+1)/3\rfloor$, $\lfloor(n+2)/3\rfloor$,
and $t_{3}(n)$ be the number of edges of $T_{3}(n)$. That is,
$t_{3}(n)=\Big{\lfloor}\frac{n}{3}\Big{\rfloor}\cdot\Big{\lfloor}\frac{n+1}{3}\Big{\rfloor}\cdot\Big{\lfloor}\frac{n+2}{3}\Big{\rfloor}.$
We call an $r$-graph $G$ _cancellative_ if $G$ has the property that for any
edges $A$, $B$, $C$ whenever $A\cup B=A\cup C$, we have $B=C$. Equivalently,
$G$ is cancellative if $G$ has no three distinct triples $A$, $B$, $C$
satisfying $B\triangle C\subset A$, where $\triangle$ is the symmetric
difference. For graphs, the condition is equivalent to saying that $G$ is
triangle-free. Moving on to $3$-graphs, we observe that $B\Delta C\subset A$
can only occur when $|B\cap C|=2$ for $B\neq C$. This leads us to identify the
two non-isomorphic configurations that are forbidden in a cancellative
$3$-graph: $F_{4}=\\{abc,abd,bcd\\}$ and $F_{5}=\\{abc,abd,cde\\}$.
It is well-known that the study of Turán numbers dates back to Mantel’s
theorem, which states that $\operatorname{ex}(n,K_{3})=\lfloor
n^{2}/4\rfloor$. As an extension of the problem to hypergraphs, Katona
conjectured, and Bollobás [1] proved the following result.
###### Theorem 1.1 ([1]).
A cancellative $3$-graph on $n$ vertices has at most $t_{3}(n)$ edges, with
equality only for $T_{3}(n)$.
In [8], Keevash and Mubayi presented a new proof of Bollobás’ result, and
further proved a stability theorem for cancellative hypergraphs. The main
result of this paper is the following $p$-spectral analogues of Bollobás’
result.
###### Theorem 1.2.
Let $p\geq 1$ and $G$ be a cancellative $3$-graph on $n$ vertices.
1. $(1)$
If $p\geq 3$, then $\lambda^{(p)}(G)\leq\lambda^{(p)}(T_{3}(n))$, with
equality if and only if $G=T_{3}(n)$.
2. $(2)$
If $p=1$, then $\lambda^{(1)}(G)=1/9$.
## 2 Preliminaries
In this section we introduce definitions and notation that will be used
throughout the paper, and give some preliminary lemmas.
Given an $r$-graph $G=(V(G),E(G))$ and a vertex $v$ of $G$, the _link_
$L_{G}(v)$ is the $(r-1)$-graph consisting of all $S\subset V(G)$ with
$|S|=r-1$ and $S\cup\\{v\\}\in E(G)$. The _degree_ $d_{G}(v)$ of $v$ is the
size of $L_{G}(v)$. As usual, we denote by $N_{G}(v)$ the neighborhood of a
vertex $v$, i.e., the set of all vertices that form an edge with $v$. We will
omit the subscript $G$ in the above notation whenever $G$ is understood from
the context.
The _shadow graph_ of $G$, denoted by $\partial(G)$, is the graph with
$V(\partial(G))=V(G)$ and $E(\partial(G))$ consisting of all pairs of vertices
that belong to an edge of $G$, i.e., $E(\partial(G))=\\{e:|e|=2,\,e\subseteq
f\ \text{for some}\ f\in E(G)\\}$. For more definitions and notation from
hypergraph theory, see e.g., [2].
For any real number $p\geq 1$, the $p$-spectral radius was introduced by
Keevash, Lenz and Mubayi [10] and subsequently studied by Nikiforov [14, 15].
Let $G$ be an $r$-graph of order $n$, the polynomial form of $G$ is a multi-
linear function $P_{G}(\bm{x}):\mathbb{R}^{n}\to\mathbb{R}$ defined for any
vector $\bm{x}=(x_{1},x_{2},\ldots,x_{n})^{\mathrm{T}}\in\mathbb{R}^{n}$ as
$P_{G}(\bm{x})=r\sum_{\\{i_{1},i_{2},\ldots,i_{r}\\}\in
E(G)}x_{i_{1}}x_{i_{2}}\cdots x_{i_{r}}.$
The _$p$-spectral radius_ of $G$ (we modify the definition from [10] by
removing a constant factor $(r-1)!$, so that for $p=r$ it agrees with the
spectral radius of [3]; this is not essential and does not affect the results
at all) is defined as
$\lambda^{(p)}(G):=\max_{\|\bm{x}\|_{p}=1}P_{G}(\bm{x}),$ (2.1)
where $\|\bm{x}\|_{p}:=(|x_{1}|^{p}+\cdots+|x_{n}|^{p})^{1/p}$.
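Since $P_{G}$ is homogeneous of degree $r$, we also have $\lambda^{(p)}(G)=\max_{\bm{y}\neq 0}P_{G}(|\bm{y}|)/\|\bm{y}\|_{p}^{r}$, which makes (2.1) easy to estimate numerically. The following is a minimal multi-start optimization sketch (our own illustrative code, not from any package; the name `p_spectral_radius` and the edge-list representation are ours):

```python
import numpy as np
from scipy.optimize import minimize

def p_spectral_radius(n, edges, p, restarts=20, seed=0):
    """Estimate lambda^(p)(G) = max_{||x||_p = 1} P_G(x) for an r-graph on
    n vertices by multi-start local optimization. A heuristic sketch, not
    a certified computation."""
    r = len(edges[0])
    rng = np.random.default_rng(seed)

    def poly_form(x):
        # P_G(x) = r * sum over edges of the product of the entries
        return r * sum(np.prod(x[list(e)]) for e in edges)

    def objective(y):
        # P_G is r-homogeneous, so P_G(y / ||y||_p) = P_G(y) / ||y||_p ** r
        y = np.abs(y)
        return -poly_form(y) / np.linalg.norm(y, ord=p) ** r

    best = 0.0
    for _ in range(restarts):
        res = minimize(objective, rng.random(n) + 0.1, method="Nelder-Mead")
        best = max(best, -res.fun)
    return best

# K_3^(3) (a single triple) with p = 1: the value should approach 1/9,
# in line with Theorem 1.2(2).
print(p_spectral_radius(3, [(0, 1, 2)], p=1))
```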
For any real number $p\geq 1$, we denote by $\mathbb{S}_{p,+}^{n-1}$ the set
of all nonnegative real vectors $\bm{x}\in\mathbb{R}^{n}$ with
$\|\bm{x}\|_{p}=1$. If $\bm{x}\in\mathbb{R}^{n}$ is a vector with
$\|\bm{x}\|_{p}=1$ such that $\lambda^{(p)}(G)=P_{G}(\bm{x})$, then $\bm{x}$
is called an _eigenvector_ corresponding to $\lambda^{(p)}(G)$. Note that
$P_{G}(\bm{x})$ always attains its maximum at some nonnegative vector. By
Lagrange’s method, we have the _eigenequations_ for $\lambda^{(p)}(G)$ and
$\bm{x}\in\mathbb{S}_{p,+}^{n-1}$ as follows:
$\lambda^{(p)}(G)x_{i}^{p-1}=\sum_{\\{i,i_{2},\ldots,i_{r}\\}\in
E(G)}x_{i_{2}}\cdots x_{i_{r}}~{}~{}\text{for}\ x_{i}>0.$ (2.2)
It is worth mentioning that the $p$-spectral radius $\lambda^{(p)}(G)$ shows
remarkable connections with some hypergraph invariants. For instance,
$\lambda^{(1)}(G)/r$ is the Lagrangian of $G$, $\lambda^{(r)}(G)$ is the usual
spectral radius introduced by Cooper and Dutle [3], and
$\lambda^{(\infty)}(G)/r$ is the number of edges of $G$ (see [14, Proposition
2.10]).
Given two vertices $u$ and $v$, we say that $u$ and $v$ are _equivalent_ in
$G$, in writing $u\sim v$, if transposing $u$ and $v$ and leaving the
remaining vertices intact, we get an automorphism of $G$.
###### Lemma 2.1 ([14]).
Let $G$ be a uniform hypergraph on $n$ vertices and $u\sim v$. If $p>1$ and
$\bm{x}\in\mathbb{S}_{p}^{n-1}$ is an eigenvector to $\lambda^{(p)}(G)$, then
$x_{u}=x_{v}$.
## 3 Cancellative hypergraph of maximum $p$-spectral radius
The aim of this section is to give a proof of Theorem 1.2. We split it into
Theorem 3.1 – Theorem 3.3, which deal with $p=3$, $p>3$ and $p=1$,
respectively.
### 3.1 General properties on cancellative hypergraphs
We start this subsection with a basic fact.
###### Lemma 3.1.
Let $G$ be a cancellative hypergraph, and $u,v$ be adjacent vertices. Then
$L(u)$ and $L(v)$ are edge-disjoint graphs.
###### Proof.
Assume by contradiction that $e\in E(L(u))\cap E(L(v))$. Since $u$ and $v$ are
adjacent in $G$, we have $\\{u,v\\}\subset e_{1}\in E(G)$ for some edge
$e_{1}$. Hence, $e_{2}=e\cup\\{u\\}$, $e_{3}=e\cup\\{v\\}$ and $e_{1}$ are
three edges of $G$ such that $e_{2}\Delta e_{3}\subset e_{1}$, a
contradiction. ∎
Let $G$ be a $3$-graph and $v\in V(G)$. We denote by $E_{v}(G)$ the collection
of edges of $G$ containing $v$, i.e., $E_{v}(G)=\\{e:v\in e\in E(G)\\}$. For a
pair of vertices $u$ and $v$ in $G$, we denote by $T_{v}^{u}(G)$ a new
$3$-graph with $V(T_{v}^{u}(G))=V(G)$ and
$E(T_{v}^{u}(G))=\big{(}E(G)\setminus
E_{v}(G)\big{)}\cup\\{(e\setminus\\{u\\})\cup\\{v\\}:e\in E_{u}(G)\setminus
E_{v}(G)\\}.$
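In set notation the switching is straightforward to implement; the following sketch (with edges stored as frozensets; the helper name `switch` is ours, for illustration only) mirrors the definition above:

```python
def switch(edges, u, v):
    """Sketch of the switching T_v^u(G) defined above: discard every edge
    containing v, then re-insert a copy of each edge that contains u but
    not v, with u replaced by v."""
    kept = {e for e in edges if v not in e}
    moved = {(e - {u}) | {v} for e in edges if u in e and v not in e}
    return kept | moved

# F_5 = {abc, abd, cde} with u = 'a', v = 'e'
F5 = {frozenset("abc"), frozenset("abd"), frozenset("cde")}
print(sorted("".join(sorted(e)) for e in switch(F5, "a", "e")))
# -> ['abc', 'abd', 'bce', 'bde']
```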
###### Lemma 3.2.
Let $G$ be a cancellative $3$-graph. Then $T_{v}^{u}(G)$ is also cancellative
for any $u,v\in V(G)$.
###### Proof.
Suppose to the contrary that there exist three edges $e_{1},e_{2},e_{3}\in
T_{v}^{u}(G)$ such that $e_{1}\triangle e_{2}\subset e_{3}$. Recalling the
definition of $T_{v}^{u}(G)$, we deduce that $u$, $v$ are non-adjacent in
$T_{v}^{u}(G)$, and $(e\cup\\{u\\})\setminus\\{v\\}\in E(G)$ for any $e\in
E_{v}(T_{v}^{u}(G))$. On the other hand, since $G$ is cancellative, we have
$v\in e_{1}\cup e_{2}\cup e_{3}$. Denote by $\alpha$ the number of edges
$e_{1}$, $e_{2}$, $e_{3}$ containing $v$. It suffices to consider the
following three cases.
Case 1. $\alpha=3$. We have $v\in e_{1}\cap e_{2}\cap e_{3}$. Hence,
$e_{1}^{\prime}=\left(e_{1}\cup\\{u\\}\right)\setminus\\{v\\}$,
$e_{2}^{\prime}=\left(e_{2}\cup\\{u\\}\right)\setminus\\{v\\}$ and
$e_{3}^{\prime}=\left(e_{3}\cup\\{u\\}\right)\setminus\\{v\\}$ are three edges
in $G$ with $e_{1}^{\prime}\triangle e_{2}^{\prime}\subset e_{3}^{\prime}$.
This contradicts the fact that $G$ is cancellative.
Case 2. $\alpha=2$. Without loss of generality, we assume $v\in(e_{1}\cap
e_{2})\setminus e_{3}$ or $v\in(e_{1}\cap e_{3})\setminus e_{2}$. If
$v\in(e_{1}\cap e_{2})\setminus e_{3}$, then $e_{3}\in E(G)$. It follows that
$e_{1}^{\prime}=(e_{1}\cup\\{u\\})\setminus\\{v\\}$,
$e_{2}^{\prime}=(e_{2}\cup\\{u\\})\setminus\\{v\\}$ and $e_{3}$ are three
edges of $G$ with $e_{1}^{\prime}\triangle e_{2}^{\prime}\subset e_{3}$, which
is a contradiction. If $v\in(e_{1}\cap e_{3})\setminus e_{2}$, then $e_{2}\in
E(G)$. It follows that $e_{1}^{\prime}=(e_{1}\cup\\{u\\})\setminus\\{v\\}$,
$e_{2}$ and $e_{3}^{\prime}=(e_{3}\cup\\{u\\})\setminus\\{v\\}$ are three
edges of $G$ with $e_{1}^{\prime}\triangle e_{2}\subset e_{3}^{\prime}$, a
contradiction.
Case 3. $\alpha=1$. Without loss of generality, we assume $v\in
e_{3}\setminus(e_{1}\cup e_{2})$. Then $e_{1}\in E(G)$ and $e_{2}\in E(G)$. We
immediately obtain that $e_{1}$, $e_{2}$ and
$e_{3}^{\prime}=(e_{3}\cup\\{u\\})\setminus\\{v\\}$ are three edges of $G$
with $e_{1}\triangle e_{2}\subset e_{3}^{\prime}$. This is a contradiction and
proves Lemma 3.2. ∎
###### Lemma 3.3.
Let $p>1$ and $G$ be a complete $3$-partite $3$-graph. Then
$\lambda^{(p)}(G)=\frac{(27\cdot|E(G)|)^{1-1/p}}{9}.$
###### Proof.
Assume that $V_{1}$, $V_{2}$ and $V_{3}$ are the vertex classes of $G$ with
$n_{i}:=|V_{i}|$ and $n_{1}\geq n_{2}\geq n_{3}$. Let
$\bm{x}\in\mathbb{S}_{p,+}^{n-1}$ be an eigenvector corresponding to
$\lambda^{(p)}(G)$. By Lemma 2.1, for $i=1,2,3$ we denote $a_{i}:=x_{v}$ for
$v\in V_{i}$, and set $\lambda:=\lambda^{(p)}(G)$ for short. In light of
eigenequation (2.2), we find that
$\begin{cases}\lambda a_{1}^{p-1}=n_{2}n_{3}a_{2}a_{3},\\\ \lambda
a_{2}^{p-1}=n_{1}n_{3}a_{1}a_{3},\\\ \lambda
a_{3}^{p-1}=n_{1}n_{2}a_{1}a_{2},\end{cases}$
from which we obtain that $a_{i}=(3n_{i})^{-1/p}$, $i=1,2,3$. Therefore,
$\lambda=\frac{(27\cdot
n_{1}n_{2}n_{3})^{1-1/p}}{9}=\frac{(27\cdot|E(G)|)^{1-1/p}}{9}.$
This completes the proof of Lemma 3.3. ∎
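Lemma 3.3 can be sanity-checked numerically on a small example, assuming the sketch `p_spectral_radius` from Section 2 is in scope:

```python
# T_3(6) is the complete 3-partite 3-graph with parts {0,1}, {2,3}, {4,5},
# hence 8 edges; for p = 3 the closed form gives (27 * 8)**(2/3) / 9 = 4.
edges = [(a, b, c) for a in (0, 1) for b in (2, 3) for c in (4, 5)]
p = 3
lam_formula = (27 * len(edges)) ** (1 - 1 / p) / 9
lam_numeric = p_spectral_radius(6, edges, p=p, restarts=50)
print(lam_formula, lam_numeric)  # both should be close to 4
```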
### 3.2 Extremal $p$-spectral radius of cancellative hypergraphs
Let $\operatorname{Ex}_{sp}(n,\\{F_{4},F_{5}\\})$ be the set of all $3$-graphs
attaining the maximum $p$-spectral radius among cancellative hypergraphs on
$n$ vertices. Given a vector $\bm{x}\in\mathbb{R}^{n}$ and a set
$S\subset[n]:=\\{1,2,\ldots,n\\}$, we write $\bm{x}(S):=\prod_{i\in S}x_{i}$
for short. The _support set_ $S$ of a vector $\bm{x}$ is the index of non-zero
elements in $\bm{x}$, i.e., $S=\\{i\in[n]:x_{i}\neq 0\\}$. Also, we denote by
$x_{\min}:=\min\\{|x_{i}|:i\in[n]\\}$ and
$x_{\max}:=\max\\{|x_{i}|:i\in[n]\\}$.
###### Lemma 3.4.
Let $p>1$, $G\in\operatorname{Ex}_{sp}(n,\\{F_{4},F_{5}\\})$, and
$\bm{x}\in\mathbb{S}_{p,+}^{n-1}$ be an eigenvector corresponding to
$\lambda^{(p)}(G)$. If $u,v$ are two non-adjacent vertices, then
$x_{u}=x_{v}$.
###### Proof.
Assume $u$ and $v$ are two non-adjacent vertices in $G$. Since $G$ is a
cancellative $3$-graph, we have $T_{u}^{v}(G)$ is also cancellative by Lemma
3.2. It follows from (2.1) and (2.2) that
$\displaystyle\lambda^{(p)}(T_{u}^{v}(G))$ $\displaystyle\geq 3\sum_{e\in
E(G)}\bm{x}(e)-3\sum_{e\in E_{u}(G)}\bm{x}(e)+3\sum_{e\in
E_{v}(G)}\bm{x}(e\setminus\\{v\\})\cdot x_{u}$
$\displaystyle=\lambda^{(p)}(G)-3\lambda^{(p)}(G)x_{u}^{p}+3\lambda^{(p)}(G)x_{v}^{p-1}x_{u}$
$\displaystyle=\lambda^{(p)}(G)+3\lambda^{(p)}(G)(x_{v}^{p-1}-x_{u}^{p-1})\cdot
x_{u},$
which yields that $x_{u}\geq x_{v}$. Likewise, we also have $x_{v}\geq x_{u}$
by considering $T_{v}^{u}(G)$. Hence, $x_{u}=x_{v}$, completing the proof of
Lemma 3.4. ∎
###### Lemma 3.5.
Let $p>1$, $G\in\operatorname{Ex}_{sp}(n,\\{F_{4},F_{5}\\})$, and $u,v$ be two
non-adjacent vertices. Then there exists a cancellative $3$-graph $H$ such
that
$L_{H}(u)=L_{H}(v),~{}~{}\lambda^{(p)}(H)=\lambda^{(p)}(G),\ \text{and}\
d_{H}(w)\leq d_{G}(w),~{}~{}w\in V(G).$ (3.1)
###### Proof.
Assume that $\bm{x}\in\mathbb{S}_{p,+}^{n-1}$ is an eigenvector corresponding
to $\lambda^{(p)}(G)$. By Lemma 3.4, $x_{u}=x_{v}$. Without loss of
generality, we assume $d_{G}(u)\geq d_{G}(v)$. In view of (2.1) and (2.2), we
have
$\displaystyle\lambda^{(p)}(T_{u}^{v}(G))$ $\displaystyle\geq 3\sum_{e\in
E(G)}\bm{x}(e)-3\sum_{e\in E_{u}(G)}\bm{x}(e)+3\sum_{e\in
E_{v}(G)}\bm{x}(e\setminus\\{v\\})\cdot x_{u}$
$\displaystyle=\lambda^{(p)}(G)+3\lambda^{(p)}(G)(x_{v}^{p-1}-x_{u}^{p-1})\cdot
x_{u}$ $\displaystyle=\lambda^{(p)}(G).$
Observe that $T_{u}^{v}(G)$ is a cancellative $3$-graph and
$G\in\operatorname{Ex}_{sp}(n,\\{F_{4},F_{5}\\})$. We immediately obtain that
$\lambda^{(p)}(T_{u}^{v}(G))=\lambda^{(p)}(G)$. It is straightforward to check
that $H:=T_{u}^{v}(G)$ is a cancellative $3$-graph satisfying (3.1), as
desired. ∎
Next, we give an estimate for the entries of eigenvectors corresponding to
$\lambda^{(p)}(G)$.
###### Lemma 3.6.
Let $G\in\operatorname{Ex}_{sp}(n,\\{F_{4},F_{5}\\})$ and
$\bm{x}\in\mathbb{S}_{p,+}^{n-1}$ be an eigenvector corresponding to
$\lambda^{(p)}(G)$. If $1<p\leq 3$, then
$x_{\min}>\Big{(}\frac{3}{4}\Big{)}^{2/(p-1)}\cdot x_{\max}.$
###### Proof.
Suppose to the contrary that
$x_{\min}\leq\big{(}\frac{3}{4}\big{)}^{2/(p-1)}\cdot x_{\max}$. Let $u$ and
$v$ be two vertices such that $x_{u}=x_{\min}$ and $x_{v}=x_{\max}>0$. Then we
have
$\left(1+\frac{x_{u}}{x_{v}}\right)\left(\frac{x_{u}}{x_{v}}\right)^{p-1}\leq\bigg{(}1+\left(\frac{3}{4}\right)^{2/(p-1)}\bigg{)}\left(\frac{3}{4}\right)^{2}\leq\frac{7}{4}\cdot\frac{9}{16}<1,$
which implies that
$x_{v}^{p}-x_{u}^{p}>x_{u}^{p-1}x_{v}.$ (3.2)
On the other hand, by eigenequations we have
$\sum_{e\in E_{v}(G)\setminus
E_{u}(G)}\bm{x}(e)\geq\lambda^{(p)}(G)(x_{v}^{p}-x_{u}^{p}).$ (3.3)
Now, we consider the cancellative $3$-graph $T_{u}^{v}(G)$. In light of (2.1)
and (3.3), we have
$\displaystyle\lambda^{(p)}(T_{u}^{v}(G))$ $\displaystyle\geq 3\sum_{e\in
E(G)}\bm{x}(e)-3\sum_{e\in E_{u}(G)}\bm{x}(e)+3\sum_{e\in E_{v}(G)\setminus
E_{u}(G)}\bm{x}(e\setminus\\{v\\})\cdot x_{u}$
$\displaystyle\geq\lambda^{(p)}(G)-3\lambda^{(p)}(G)x_{u}^{p}+3\lambda^{(p)}(G)(x_{v}^{p}-x_{u}^{p})\cdot\frac{x_{u}}{x_{v}}$
$\displaystyle>\lambda^{(p)}(G)+3\lambda^{(p)}(G)\Big{(}-x_{u}^{p}+x_{u}^{p-1}x_{v}\cdot\frac{x_{u}}{x_{v}}\Big{)}$
$\displaystyle=\lambda^{(p)}(G),$
where the third inequality is due to (3.2). This contradicts the fact that $G$
has maximum $p$-spectral radius over all cancellative hypergraphs. ∎
Now, we are ready to give a proof of Theorem 1.2 for $p=3$.
###### Theorem 3.1.
Let $G$ be a cancellative $3$-graph on $n$ vertices. Then
$\lambda^{(3)}(G)\leq\lambda^{(3)}(T_{3}(n))$ with equality if and only if
$G=T_{3}(n)$.
###### Proof.
According to Lemma 3.5, we assume that
$G^{*}\in\operatorname{Ex}_{sp}(n,\\{F_{4},F_{5}\\})$ is a $3$-graph such that
$L_{G^{*}}(u)=L_{G^{*}}(v)$ for any non-adjacent vertices $u$ and $v$.
Our first goal is to show $G^{*}=T_{3}(n)$ by Claim 3.1 – Claim 3.3. Assume
that $\bm{x}\in\mathbb{S}_{3,+}^{n-1}$ is an eigenvector corresponding to
$\lambda^{(3)}(G^{*})$; $u_{1}$ is a vertex in $G^{*}$ such that
$x_{u_{1}}=x_{\max}$ and $u_{2}$ is a vertex with $x_{u_{2}}=\max\\{x_{v}:v\in
N_{G^{*}}(u_{1})\\}$. Let $U_{1}:=V(G^{*})\setminus N_{G^{*}}(u_{1})$ and
$U_{2}:=V(G^{*})\setminus N_{G^{*}}(u_{2})$. Since $u_{2}\in V(G^{*})\setminus
U_{1}$, there exists a vertex $u_{3}$ such that $\\{u_{1},u_{2},u_{3}\\}\in
E(G^{*})$. Let $U_{3}=V(G^{*})\setminus N_{G^{*}}(u_{3})$. Recall that for any
non-adjacent vertices $u$ and $v$ we have $L_{G^{*}}(u)=L_{G^{*}}(v)$. Hence,
the sets $U_{1}$, $U_{2}$ and $U_{3}$ are well-defined.
###### Claim 3.1.
The following statements hold:
1. $(1)$
$d_{G^{*}}(u_{1})>n(n-1)/9$;
2. $(2)$
$d_{G^{*}}(u_{2})>n(n-1)/12$;
3. $(3)$
$d_{G^{*}}(v)>n(n-1)/16$, $v\in V(G^{*})$.
Proof of Claim 3.1. Since $T_{3}(n)$ is a cancellative $3$-graph, it follows
from Lemma 3.3 that
$\lambda^{(3)}(G^{*})\geq\lambda^{(3)}(T_{3}(n))=\frac{\big{(}27\cdot
t_{3}(n)\big{)}^{2/3}}{9}.$
By simple algebra we see
$\lambda^{(3)}(G^{*})\geq\frac{\big{(}(n-2)(n+1)^{2}\big{)}^{2/3}}{9}>\frac{n(n-1)}{9}.$
(3.4)
(1). By eigenequation with respect to $u_{1}$, we have
$\lambda^{(3)}(G^{*})x_{u_{1}}^{2}=\sum_{\\{u_{1},i,j\\}\in
E(G^{*})}x_{i}x_{j}\leq d_{G^{*}}(u_{1})x_{u_{1}}^{2}.$
Combining with (3.4), we get
$d_{G^{*}}(u_{1})\geq\lambda^{(3)}(G^{*})>\frac{n(n-1)}{9}.$ (3.5)
(2). By the definition of $U_{1}$ we have $L_{G^{*}}(u)=L_{G^{*}}(v)$ for any
pair $u,v\in U_{1}$, and hence $|(e\setminus\\{u_{2}\\})\cap U_{1}|\leq 1$ for
each $e\in E_{u_{2}}(G^{*})$.
It follows from $x_{u_{2}}=\max\\{x_{v}:v\in V(G^{*})\setminus U_{1}\\}$ that
$\lambda^{(3)}(G^{*})x_{u_{2}}^{2}=\sum_{\\{u_{2},i,j\\}\in
E(G^{*})}x_{i}x_{j}\leq d_{G^{*}}(u_{2})x_{u_{1}}x_{u_{2}},$
which, together with Lemma 3.6 for $p=3$, gives
$\displaystyle d_{G^{*}}(u_{2})$
$\displaystyle\geq\frac{x_{u_{2}}}{x_{u_{1}}}\cdot\lambda^{(3)}(G^{*})$
$\displaystyle\geq\frac{3}{4}\cdot\lambda^{(3)}(G^{*})$
$\displaystyle>\frac{1}{12}n(n-1).$
The last inequality is due to (3.4).
(3). Let $v$ be an arbitrary vertex in $V(G^{*})$. Then
$\lambda^{(3)}(G^{*})x_{v}^{2}=\sum_{\\{v,i,j\\}\in E(G^{*})}x_{i}x_{j}\leq
d_{G^{*}}(v)x_{u_{1}}^{2}.$
Hence, by Lemma 3.6 and (3.4) we have
$d_{G^{*}}(v)\geq\Big{(}\frac{x_{v}}{x_{u_{1}}}\Big{)}^{2}\cdot\lambda^{(3)}(G^{*})>\frac{1}{16}n(n-1),$
as desired. $\Box$
Next, we consider the graph $H=L_{G^{*}}(u_{1})\cup L_{G^{*}}(u_{2})\cup
L_{G^{*}}(u_{3})$. Let $\phi:E(H)\to[3]$ be a mapping such that $\phi(f)=i$ if
$f\in L_{G^{*}}(u_{i})$, $i\in[3]$. By Lemma 3.1, $\phi$ is an edge coloring
of $H$. For convenience, we denote $L:=V(G^{*})\setminus(U_{1}\cup U_{2}\cup
U_{3})$.
###### Claim 3.2.
If $L\neq\emptyset$, then there is no rainbow star $K_{1,3}$ in the induced
subgraph $H[L]$ with the coloring $\phi$.
Proof of Claim 3.2. Suppose to the contrary that there exist
$v_{1},v_{2},v_{3},v_{4}\in L$ with $\phi(v_{1}v_{2})=1$, $\phi(v_{1}v_{3})=2$
and $\phi(v_{1}v_{4})=3$. We first show, by contradiction, that
$\\{v_{1},v_{2},v_{3},v_{4}\\}$ induces a clique in $\partial(G^{*})$. Without loss of
generality, we assume $v_{2}v_{3}\notin E(\partial(G^{*}))$. Then
$L_{G^{*}}(v_{2})=L_{G^{*}}(v_{3})$. Since $\phi(v_{1}v_{2})=1$ and
$\phi(v_{1}v_{3})=2$, we have $\\{u_{1},v_{1},v_{2}\\}\in E(G^{*})$ and
$\\{u_{2},v_{1},v_{3}\\}\in E(G^{*})$. Since
$L_{G^{*}}(v_{2})=L_{G^{*}}(v_{3})$, the edge $\\{u_{2},v_{1},v_{3}\\}$ yields
$\\{u_{2},v_{1},v_{2}\\}\in E(G^{*})$. This implies that
$e_{1}=\\{u_{1},u_{2},u_{3}\\}$, $e_{2}=\\{u_{1},v_{1},v_{2}\\}$ and
$e_{3}=\\{u_{2},v_{1},v_{2}\\}$ are three edges in $G^{*}$ with
$e_{2}\triangle e_{3}\subset e_{1}$, which is impossible.
On the other hand, since $L=V(G^{*})\setminus(U_{1}\cup U_{2}\cup U_{3})$, we
have $v_{i}u_{j}\in E(\partial(G^{*}))$ for any $i\in[4]$, $j\in[3]$.
Therefore, every pair of vertices in
$\\{v_{1},v_{2},v_{3},v_{4},u_{1},u_{2},u_{3}\\}$ is contained in an edge of
$G^{*}$. Consider the graph
$H^{\prime}:=\bigg{(}\bigcup_{i=1}^{3}L_{G^{*}}(u_{i})\bigg{)}\bigcup\bigg{(}\bigcup_{i=1}^{4}L_{G^{*}}(v_{i})\bigg{)}.$
By Claim 3.1, we have
$\displaystyle|E(H^{\prime})|$ $\displaystyle=\sum_{1\leq i\leq
3}d_{G^{*}}(u_{i})+\sum_{1\leq j\leq 4}d_{G^{*}}(v_{j})$
$\displaystyle>\left(1+\frac{3}{4}+5\times\frac{9}{16}\right)\cdot\frac{1}{9}n(n-1)$
$\displaystyle=\frac{73}{144}n(n-1)$ $\displaystyle>\binom{n}{2},$
a contradiction completing the proof of Claim 3.2. $\Box$
###### Claim 3.3.
$L=\emptyset$.
Proof of Claim 3.3. Suppose to the contrary that $L\neq\emptyset$. For
$i=1,2,3$, let $L_{i}$ be the set of vertices in $L$ that are not contained in
an edge of color $i$. By Claim 3.2, we have $L=L_{1}\cup L_{2}\cup
L_{3}$. Without loss of generality, we assume $L_{1}\neq\emptyset$. Let $w$ be
a vertex in $L_{1}$. Then there exists an edge $f$ in $G^{*}$ such that
$f=\\{u_{1},w,w^{\prime}\\}$, where $w^{\prime}\in U_{2}\cup U_{3}$. If
$w^{\prime}\in U_{2}$, then $f^{\prime}=\\{u_{1},u_{3},w^{\prime}\\}\in
E(G^{*})$. Since $G^{*}$ is cancellative, $w$ is not a neighbor of $u_{3}$ in
$G^{*}$. This implies that $w\in U_{3}$, a contradiction to $w\in L$.
Similarly, if $w^{\prime}\in U_{3}$, then $w\in U_{2}$, which is also a
contradiction. $\Box$
Now, we continue our proof. By Claim 3.3, we immediately obtain that $G^{*}$
is a complete $3$-partite $3$-graph with vertex classes $U_{1}$, $U_{2}$ and
$U_{3}$. Hence, $G^{*}=T_{3}(n)$ by Lemma 3.3.
Finally, it is enough to show that $G=T_{3}(n)$ for any
$G\in\operatorname{Ex}_{sp}(n,\\{F_{4},F_{5}\\})$. According to Lemma 3.5 and
Claim 3.3, we can transfer $G$ to the complete $3$-partite $3$-graph
$T_{3}(n)$ by a sequence of switchings $T_{u}^{v}(\,\cdot\,)$ that keep the
spectral radius unchanged. Let $T_{1},\ldots,T_{s}$ be such a sequence of
switchings $T_{u}^{v}(\,\cdot\,)$ which turn $G$ into $T_{3}(n)$. Consider the
$3$-graphs $G=G_{0},G_{1},\ldots,G_{s}=T_{3}(n)$ in which $G_{i}$ is obtained
from $G_{i-1}$ by applying $T_{i}$. Let $\bm{z}\in\mathbb{S}_{3,+}^{n-1}$ be
an eigenvector corresponding to $\lambda^{(3)}(G_{s-1})$ and
$T_{u}^{v}(G_{s-1})=T_{3}(n)$, and denote
$A:=V(G_{s-1})\setminus\big{(}N_{G_{s-1}}(v)\cup\\{u\\}\cup\\{v\\}\big{)}.$
Hence, we have $L_{G_{s-1}}(w)=L_{G_{s-1}}(v)=L_{T_{3}(n)}(v)$ for each $w\in
A$. In what follows, we shall prove $L_{G_{s-1}}(u)=L_{G_{s-1}}(v)$, and
therefore $G_{s-1}=T_{3}(n)$. If $L_{G_{s-1}}(u)\neq L_{G_{s-1}}(v)$, there
exists an edge $e=v_{1}v_{2}\in L_{G_{s-1}}(u)\setminus L_{G_{s-1}}(v)$ since
$z_{u}=z_{v}$ by Lemma 3.4. Let $M_{1}$ and $M_{2}$ be two subsets of
$V(G_{s-1})$ such that $M_{1}\cup M_{2}=N_{G_{s-1}}(v)$ and
$L_{G_{s-1}}(v)=K_{|M_{1}|,|M_{2}|}$. If $\\{v_{1},v_{2}\\}\subset
N_{G_{s-1}}(v)$, then $\\{v_{1},v_{2}\\}\subset M_{1}$ or
$\\{v_{1},v_{2}\\}\subset M_{2}$. It follows that there exists a vertex $w\in
N_{G_{s-1}}(v)$ such that $f_{1}:=\\{v,w,v_{1}\\}\in E(G_{s-1})$ and
$f_{2}:=\\{v,w,v_{2}\\}\in E(G_{s-1})$. However, $f_{1}\Delta
f_{2}\subset\\{u,v_{1},v_{2}\\}\in E(G_{s-1})$, a contradiction. So we obtain
$\\{v_{1},v_{2}\\}\cap A\neq\emptyset$. Without loss of generality, we assume
$v_{1}\in A$. Then $L_{G_{s-1}}(v_{1})=L_{G_{s-1}}(v)$, i.e., $uv_{2}\in
L_{G_{s-1}}(v)$. Thus, $u\in N_{G_{s-1}}(v)$, a contradiction. This implies
that $G_{s-1}=T_{3}(n)$. Likewise, $G_{i-1}=G_{i}$ for each $i\in[s-1]$, and
therefore $G=T_{3}(n)$. This completes the proof of the theorem. ∎
According to Theorem 3.1, we can give an alternative proof of Bollobás’ result
for $n\equiv{0}\pmod{3}$.
###### Corollary 3.1.
Let $G$ be a cancellative $3$-graph on $n$ vertices with $n\equiv{0}\pmod{3}$.
Then $|E(G)|\leq t_{3}(n)$ with equality if and only if $G=T_{3}(n)$.
###### Proof.
Denote by $\bm{z}$ the all-ones vector of dimension $n$. In view of (2.1), we
deduce that
$\lambda^{(3)}(G)\geq\frac{P_{G}(\bm{z})}{\|\bm{z}\|_{3}^{3}}=\frac{3|E(G)|}{n}.$
On the other hand, by Theorem 3.1 we have
$\lambda^{(3)}(G)\leq\lambda^{(3)}(T_{3}(n))=(t_{3}(n))^{2/3}.$
As a consequence, since $t_{3}(n)=(n/3)^{3}$ when $n\equiv{0}\pmod{3}$,
$|E(G)|\leq\frac{n}{3}\cdot(t_{3}(n))^{2/3}=t_{3}(n).$
Equality may occur only if
$\lambda^{(3)}(G)=(t_{3}(n))^{2/3}=\lambda^{(3)}(T_{3}(n))$, and therefore
$G=T_{3}(n)$ by Theorem 3.1. ∎
Next, we will prove Theorem 1.2 for the case $p>3$ as stated in Theorem 3.2.
###### Lemma 3.7 ([14]).
Let $p\geq 1$ and $G$ be an $r$-graph with $m$ edges. Then the function
$f_{G}(p):=\bigg{(}\frac{\lambda^{(p)}(G)}{rm}\bigg{)}^{p}$
is non-increasing in $p$.
###### Theorem 3.2.
Let $p>3$ and $G$ be a cancellative $3$-graph on $n$ vertices. Then
$\lambda^{(p)}(G)\leq\lambda^{(p)}(T_{3}(n))$ with equality if and only if
$G=T_{3}(n)$.
###### Proof.
Assume that $p>3$ and $G$ is a $3$-graph in
$\operatorname{Ex}_{sp}(n,\\{F_{4},F_{5}\\})$ with $m$ edges. It is enough to
show that $G=T_{3}(n)$. By Lemma 3.7, we have
$\bigg{(}\frac{\lambda^{(p)}(G)}{3m}\bigg{)}^{p}\leq\bigg{(}\frac{\lambda^{(3)}(G)}{3m}\bigg{)}^{3},$
which, together with $\lambda^{(3)}(G)\leq(t_{3}(n))^{2/3}$ by Theorem 3.1,
gives
$\lambda^{(p)}(G)\leq(3m)^{1-3/p}\cdot(\lambda^{(3)}(G))^{3/p}\leq(3m)^{1-3/p}\cdot(t_{3}(n))^{2/p}.$
On the other hand, we have
$\lambda^{(p)}(G)\geq\lambda^{(p)}(T_{3}(n))=\frac{\big{(}27\cdot
t_{3}(n)\big{)}^{1-1/p}}{9}.$
We immediately obtain $m\geq t_{3}(n)$. The result follows from Theorem 1.1. ∎
Finally, we shall give a proof of Theorem 1.2 for the remaining case $p=1$. In
what follows, we always assume that $\bm{x}\in\mathbb{S}_{1,+}^{n-1}$ is an
eigenvector such that $\bm{x}$ has the minimum possible number of non-zero
entries among all eigenvectors corresponding to $\lambda^{(1)}(G)$. Before
continuing, we need the following result.
###### Lemma 3.8 ([5]).
Let $G$ be an $r$-graph and $S$ be the support set of $\bm{x}$. Then for each
pair of vertices $u$ and $v$ in $S$, there is an edge in $G[S]$ containing both
$u$ and $v$.
###### Theorem 3.3.
Let $G$ be a cancellative $3$-graph. Then $\lambda^{(1)}(G)=1/9$.
###### Proof.
Assume that $G$ is a cancellative $3$-graph with support set $S$. Let
$H:=G[S]$. By Lemma 3.8, for any $u,v\in S$ there is an edge in $H$ containing
both $u$ and $v$. Hence, each pair of edges of $H$ has at most one common
vertex: if two edges $B$ and $C$ of $H$ shared two vertices, then
$B\triangle C$ would be a pair contained in some edge of $H$ by Lemma 3.8,
contradicting the cancellativity of $H$. So the shadow graph of
$H$ is the complete graph $K_{|S|}$. Since $H$ is cancellative, the link
graphs $L_{H}(u)$ and $L_{H}(v)$ are edge-disjoint graphs for any distinct
vertices $u,v\in S$. It follows from (2.2) that
$|S|\cdot\lambda^{(1)}(G)=\sum_{uv\in
E(\partial(H))}x_{u}x_{v}\leq\frac{1}{2}\Big{(}1-\frac{1}{|S|}\Big{)},$ (3.6)
where the last inequality follows from Motzkin–Straus Theorem [11]. On the
other hand, set
$z_{v}=\begin{cases}1/|S|,&v\in S,\\\ 0,&\text{otherwise}.\end{cases}$
We immediately have
$\lambda^{(1)}(G)\geq 3\sum_{e\in E(H)}\bm{z}(e)=\sum_{v\in
V(H)}\bigg{(}z_{v}\cdot\sum_{f\in
L_{H}(v)}\bm{z}(f)\bigg{)}=\frac{|S|-1}{2|S|^{2}},$
where the last equality follows from the fact that $d_{H}(v)=(|S|-1)/2$ for
$v\in V(H)$. Combining with (3.6) we get
$\lambda^{(1)}(G)=\frac{|S|-1}{2|S|^{2}}.$
Clearly, $(|S|-1)/|S|^{2}$ attains its maximum at $|S|=3$ when $|S|\geq 3$.
Hence, we see $\lambda^{(1)}(G)\leq 1/9$. Finally, noting that
$\lambda^{(1)}(G)$ is at least the Lagrangian of an edge $K_{3}^{(3)}$, i.e.,
$\lambda^{(1)}(G)\geq\lambda^{(1)}(K_{3}^{(3)})=\frac{1}{9},$
we obtain $\lambda^{(1)}(G)=1/9$, as desired. ∎
###### Remark 3.1.
For an $r$-graph $G$ on $n$ vertices, it is well-known that
$\lambda^{(1)}(G)/r$ is the Lagrangian of $G$. In [16], Yan and Peng present a
tight upper bound on $\lambda^{(1)}(G)$ for $F_{5}$-free $3$-graphs, see [16]
for details.
## References
* [1] B. Bollobás, Three-graphs without two triples whose symmetric difference is contained in a third, Discrete Math. 8 (1974) 21–24.
* [2] A. Bretto, Hypergraph Theory: An Introduction, Springer, 2013.
* [3] J. Cooper, A. Dutle, Spectra of uniform hypergraphs, Linear Algebra Appl. 436 (2012) 3268–3299.
* [4] M.N. Ellingham, L. Lu, Z. Wang, Maximum spectral radius of outerplanar $3$-uniform hypergraphs, J. Graph Theory 100 (4) (2022) 671–685.
* [5] P. Frankl, V. Rödl, Hypergraphs do not jump, Combinatorica 4 (1984) 149–159.
* [6] Z. Füredi, Turán type problems, in Surveys in Combinatorics, Cambridge University Press, Cambridge, 1991, pp. 253–300.
* [7] G. Gao, A. Chang, Y. Hou, Spectral radius on linear $r$-graphs without expanded $K_{r+1}$, SIAM J. Discrete Math. 36 (2) (2022) 1000–1011.
* [8] P. Keevash, D. Mubayi, Stability theorems for cancellative hypergraphs, J. Combin. Theory Ser. B 92 (2004) 163–175.
* [9] P. Keevash, Hypergraph Turán problems, in Surveys in Combinatorics, Cambridge University Press, Cambridge, 2011, pp. 83–139.
* [10] P. Keevash, J. Lenz, D. Mubayi, Spectral extremal problems for hypergraphs, SIAM J. Discrete Math. 28 (4) (2014) 1838–1854.
* [11] T. Motzkin, E. Straus, Maxima for graphs and a new proof of a theorem of Turán, Canad. J. Math. 17 (1965) 533–540.
* [12] D. Mubayi, J. Verstraëte, A survey of Turán problems for expansions, in Recent Trends in Combinatorics, IMA Vol. Math. Appl. 159, Springer, 2016, pp. 117–143.
* [13] V. Nikiforov, Some new results in extremal graph theory, in Surveys in Combinatorics, Cambridge University Press, Cambridge, 2011, pp. 141–181.
* [14] V. Nikiforov, Analytic methods for uniform hypergraphs, Linear Algebra Appl. 457 (2014) 455–535.
* [15] V. Nikiforov, Some extremal problems for hereditary properties of graphs, Electron. J. Combin. 21 (2014) P1.17.
* [16] Z. Yan, Y. Peng, $\lambda$-perfect hypergraphs and Lagrangian densities of hypergraph cycles, Discrete Math. 342 (2019) 2048–2059.
# A Phoneme-informed Neural Network Model for Note-level Singing Transcription
###### Abstract
Note-level automatic music transcription is one of the most representative
music information retrieval (MIR) tasks and has been studied for various
instruments to understand music. However, due to the lack of high-quality
labeled data, transcription of many instruments is still a challenging task.
In particular, in the case of singing, it is difficult to find accurate notes
due to its expressiveness in pitch, timbre, and dynamics. In this paper, we
propose a method of finding note onsets of singing voice more accurately by
leveraging the linguistic characteristics of singing, which are not seen in
other instruments. The proposed model uses mel-scaled spectrogram and phonetic
posteriorgram (PPG), a frame-wise likelihood of phoneme, as an input of the
onset detection network while PPG is generated by the pre-trained network with
singing and speech data. To verify how linguistic features affect onset
detection, we compare the evaluation results through the dataset with
different languages and divide onset types for detailed analysis. Our approach
substantially improves the performance of singing transcription and therefore
emphasizes the importance of linguistic features in singing analysis.
Index Terms— singing transcription, onset detection, phoneme classification,
music information retrieval
## 1 Introduction
Note-level singing transcription is a music information retrieval (MIR) task
that predicts attributes of note events (i.e., onset time, offset time, and
pitch value) from audio recordings of singing voice. Although this task has
been studied for a long time, the performance of singing transcription is
generally inferior to that of other musical instruments such as polyphonic
piano music [1, 2]. The lack of large-scale labeled datasets is one of the
major technical barriers. In addition, singing voice has highly diverse
expressiveness in terms of pitch, timbre, dynamics, as well as phonation of
lyrics. For example, singing techniques such as vibrato, bending, and
portamento make it difficult to find note boundaries and note-level pitches.
This variability makes even manual note transcription by human experts
difficult [3]. This in turn has resulted in the lack of high-quality labeled
datasets.
Another important characteristic of singing voice which is well distinguished
from other instruments is that it conveys linguistic information through
lyrics, and this influences note segmentation. Given that singing notes can be
syllabic (i.e., one syllable of text is set to one note of music) or
melismatic (i.e., one syllable is sung over multiple notes), the relationship
between the change of syllables and the change of notes is intricate. This
produces certain kinds of note patterns in singing voice that are not seen in
any other instrument. Therefore, we need to consider such linguistic characteristics in
automatic singing transcription models.
Fig. 1: An example of singing voice: mel-spectrogram (top), piano roll with
onsets and pitches of notes (middle), and phonetic posteriorgram (PPG)
(bottom) from singing (phonemes with probability under 0.5 in this example
were omitted).
In this paper, we propose a neural network model that incorporates linguistic
information into the input to improve note-level singing transcription for
singing voice. Similar to earlier research, we use log-scaled mel-spectrogram
as a primary input. In addition to that, we take phonetic posteriorgram (PPG)
from a pre-trained phoneme classifier as the second input. As shown in Figure
1, PPG shows a pattern distinct from the ones of mel-spectrogram, and it can
be noted that the transition pattern of PPG can better describe the onset
event at 1.2 and 2.8 second. We propose a two-branch neural network model
based on a convolutional recurrent neural network (CRNN) backbone to represent
both of the input features effectively. In the experiment, we conduct an
ablation study to examine the effectiveness of model design, mel-spectrogram,
and PPG. Also, we compare the effects of mel-spectrogram and PPG on transition
and re-onset, the two types of challenging onset events in singing
transcription. Finally, we demonstrate that our proposed model outperforms a
few state-of-the-art note-level singing transcription models, especially in
terms of onset detection.
Fig. 2: The proposed model architecture
## 2 Related Works
Traditional studies mainly used various types of spectral difference for onset
detection of audio signals [4]. The spectral difference is particularly
successful at finding percussive onsets but it performs poorly on expressive
instruments that have soft onsets. Deep neural networks have been actively
applied to singing voice as well. Nishikimi _et al_. [5] suggested an
attention-based encoder-decoder network with long short-term memory (LSTM)
modules. Fu _et al_. [6] proposed a hierarchical structure of note change
states to segment singing notes and used multi-channel features to increase
the performance. Hsu et al. [7] suggested a semi-supervised automatic singing transcription (AST) framework.
More recently, [8] proposed the object detection-based approach to
significantly improve the performance of singing voice onset/offset detection.
While the majority of them relied on note onset and offset information from
melody labels, one recent study attempted to use phoneme information as part of
input features for note segmentation [9]. However, the performance was not
convincing. In this work, we present a neural network architecture to make an
effective use of the phoneme information.
| Model | Feature | ISMIR2014 | SSVD v2.0 | CSD-refined | ISMIR2014 | SSVD v2.0 |
|---|---|---|---|---|---|---|
| (a) Single CRNN | $X$ | 0.8244 / 0.7751 | 0.8956 / 0.8983 | 0.9797 / 0.9719 | 0.8812 / 0.7524 | 0.8866 / 0.8007 |
| (b) Dual CRNNs + one RNN | $X,X$ | 0.9133 / 0.8513 | 0.9486 / 0.9566 | 0.9888 / 0.9838 | 0.9004 / 0.7636 | 0.8988 / 0.8089 |
| (c) Single CRNN | $\hat{P}$ | 0.8655 / 0.7776 | 0.9223 / 0.9105 | 0.9890 / 0.9660 | 0.9048 / 0.7685 | 0.9063 / 0.8296 |
| (d) Dual CRNNs + one RNN | $\hat{P},\hat{P}$ | 0.9094 / 0.8310 | 0.9342 / 0.9470 | 0.9907 / 0.9638 | 0.9090 / 0.7733 | 0.9142 / 0.8336 |
| (e) Dual CNNs + one RNN | $X,\hat{P}$ | 0.9024 / 0.8349 | 0.9439 / 0.9420 | 0.9877 / 0.9791 | 0.9016 / 0.7852 | 0.9098 / 0.8340 |
| (f) Dual CNNs + two RNNs | $X,\hat{P}$ | 0.9230 / 0.8538 | 0.9496 / 0.9531 | 0.9914 / 0.9839 | 0.9150 / 0.7804 | 0.9199 / 0.8328 |
| (g) Dual CRNNs + one RNN | $X,\hat{P}$ | 0.9305 / 0.8576 | 0.9569 / 0.9692 | 0.9923 / 0.9864 | 0.9145 / 0.7723 | 0.9166 / 0.8257 |

Table 1: Onset/offset detection results (each cell shows COn / COff F1-scores)
from various neural network architectures with two input features. The first
three evaluation columns are for models trained on SSVD v2.0, the last two for
models trained on CSD-refined. $X$ and $\hat{P}$ denote mel-spectrogram and
PPG, respectively. (g) corresponds to the neural network architecture in
Figure 2.
## 3 Proposed Method
### 3.1 Model Architecture
Our proposed model architecture consists of two branch networks and a single
RNN with a dense layer as illustrated in Figure 2. One branch network takes
log-scaled mel-spectrogram $X$ and the other branch network takes phonetic
posteriorgram (PPG) $\hat{P}$ from a pretrained phoneme classifier. Both of
the branches are CRNNs whose CNN part is a modified version of
_ConvNet_ proposed in [10], which is commonly used in the piano transcription
task [1, 11]. To obtain a wider receptive field along time, we replaced the
first convolution layer with a dilated convolution with dilation 2 on the
time-frame axis. To predict the note events, we combined the two branch networks by
concatenating the outputs and connecting them to an additional RNN layer and a
dense layer. The output layer is represented with a 3-dimensional sigmoid
vector where each element detects onset, offset, and activation as binary
states. The activation indicates whether the note is on or off at each frame.
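A compact PyTorch sketch of this architecture is given below. It is our own reconstruction from the description here and the hyperparameters in Sec. 4.3, not the authors' released code; in particular, the number of PPG channels (`n_phones=40`, i.e., 39 phonemes plus blank) and the exact pooling layout are assumptions.

```python
import torch
import torch.nn as nn

class ConvStack(nn.Module):
    """Simplified stand-in for the modified ConvNet branch (48/48/96 conv
    channels, 768-d dense output); a sketch, not the authors' code."""
    def __init__(self, n_bins, out_dim=768):
        super().__init__()
        self.net = nn.Sequential(
            # first conv dilated by 2 on the time axis, as in Sec. 3.1
            nn.Conv2d(1, 48, kernel_size=3, padding=(2, 1), dilation=(2, 1)),
            nn.ReLU(),
            nn.Conv2d(48, 48, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),
            nn.Conv2d(48, 96, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 2)),
        )
        self.fc = nn.Linear(96 * (n_bins // 4), out_dim)

    def forward(self, x):                  # x: (batch, time, n_bins)
        h = self.net(x.unsqueeze(1))       # (batch, 96, time, n_bins // 4)
        h = h.permute(0, 2, 1, 3).flatten(2)
        return self.fc(h)                  # (batch, time, out_dim)

class Branch(nn.Module):
    """One CRNN branch: ConvStack followed by a bidirectional LSTM."""
    def __init__(self, n_bins, hidden=768):
        super().__init__()
        self.conv = ConvStack(n_bins, hidden)
        self.rnn = nn.LSTM(hidden, hidden // 2, batch_first=True,
                           bidirectional=True)

    def forward(self, x):
        return self.rnn(self.conv(x))[0]   # (batch, time, hidden)

class PhonemeInformedTranscriber(nn.Module):
    """Dual CRNNs + one RNN (model (g) in Table 1): mel and PPG branches,
    concatenation, a BiLSTM, and a 3-way sigmoid head for
    onset / offset / activation."""
    def __init__(self, n_mels=80, n_phones=40, hidden=768):
        super().__init__()
        self.mel_branch = Branch(n_mels, hidden)
        self.ppg_branch = Branch(n_phones, hidden)  # n_phones=40 assumed
        self.rnn = nn.LSTM(2 * hidden, hidden // 2, batch_first=True,
                           bidirectional=True)
        self.head = nn.Linear(hidden, 3)

    def forward(self, mel, ppg):
        h = torch.cat([self.mel_branch(mel), self.ppg_branch(ppg)], dim=-1)
        h, _ = self.rnn(h)
        return torch.sigmoid(self.head(h))  # (batch, time, 3)

model = PhonemeInformedTranscriber()
out = model(torch.randn(2, 100, 80), torch.rand(2, 100, 40))
print(out.shape)  # torch.Size([2, 100, 3])
```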
### 3.2 Framewise Phoneme Classifier
We extracted the phonetic information using a phoneme classifier which returns
the output as a PPG. We implemented it using a single CRNN network with a
dense layer. We used the original _ConvNet_ architecture for the CNN part. We
tried two loss functions to train the phoneme classifier network. One is the
framewise cross entropy loss, which is possible when we have time-aligned
phoneme labels. Since it is difficult to obtain time-aligned phoneme labels in
frame-level especially for singing voice, we also used the connectionist
temporal classification (CTC) loss function [12] which can handle the
alignment between the predicted phoneme sequence ($\hat{p}$) and the ground
truth phoneme sequence ($S$) which have unequal lengths. The CTC algorithm
predicts phoneme sequences with inserted blank labels along the possible
prediction paths $\mathcal{B}$. Since the CTC loss function is optimized for
predicting the entire sequence, the prediction pattern tends to be spiky and
sparse and thus it does not find the boundaries of phonemes well [12, 13]. To
solve this problem, we used two bidirectional LSTM layers and a single dense
layer that reconstruct the input log-scaled mel-spectrogram
($\hat{X}$). This was proposed to enhance the time alignment when the CTC loss
is used [14]. For the reconstruction loss ($\mathcal{L}_{\text{recon}}$), we
normalized the log-scaled mel-spectrogram from $-1$ to $1$ ($\tilde{X}$) and
applied the $\tanh$ function for the activation and used the $L_{2}$ loss
function. These loss functions are defined as:
$\displaystyle\mathcal{L}_{\text{CTC}}=-\log\sum_{\hat{p},\,\mathcal{B}(\hat{p})=p}\prod_{t=0}^{T-1}\mathbb{P}(\hat{p}_{t}|X)\,,\qquad\mathcal{L}_{\text{recon}}=\|\hat{X}-\tilde{X}\|^{2}\,,\qquad\mathcal{L}_{\text{PPG}}=\mathcal{L}_{\text{CTC}}+\mathcal{L}_{\text{recon}}\,,$ (1)
where $T$ is the total number of time steps, $p$ is the ground truth phoneme
sequence and $\mathbb{P}(\hat{p}_{t}|X)$ is the PPG at time $t$.
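A minimal sketch of this combined objective, assuming PyTorch's `ctc_loss` conventions (log-probabilities of shape `(T, batch, n_phones + 1)` with blank index 0); the variable names are ours, not the authors':

```python
import torch
import torch.nn.functional as F

def ppg_training_loss(log_probs, targets, input_lengths, target_lengths,
                      x_hat, x_tilde):
    """Sketch of Eq. (1): CTC loss on the phoneme posteriors plus the L2
    reconstruction loss on the [-1, 1]-normalized mel-spectrogram."""
    l_ctc = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                       blank=0)
    # x_hat: tanh-activated reconstruction; x_tilde: normalized target
    l_recon = F.mse_loss(x_hat, x_tilde)
    return l_ctc + l_recon

# toy shapes: 50 frames, batch 2, 39 phonemes + blank, 10-phoneme targets
log_probs = torch.randn(50, 2, 40).log_softmax(-1)
targets = torch.randint(1, 40, (2, 10))
loss = ppg_training_loss(log_probs, targets,
                         torch.full((2,), 50, dtype=torch.long),
                         torch.full((2,), 10, dtype=torch.long),
                         torch.zeros(2, 50, 80), torch.zeros(2, 50, 80))
print(loss)
```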
### 3.3 Label Smoothing
Unlike other instruments, synthesized or auto-aligned onset/offset labels are
hardly available in the case of the singing datasets [15]. In addition, since
singing onsets are temporally soft, locating the exact onset positions of
singing from a waveform or mel-spectrogram is
by no means straightforward. Such softness of the onset is one of the factors
that makes the onset of singing voices more challenging to train. Previous
frame-wise onset detection studies [6, 7] extended the duration of the onset
label to solve this problem.
Following these previous studies, we also used a smoothing method to increase
the length of the onset and offset label. Specifically, we smoothed the 1-D
one-hot onset label sequence $y_{\text{on}}:=y_{\text{on}}[n]$ ($n$ denotes
the time index) and the offset label sequence
$y_{\text{off}}:=y_{\text{off}}[n]$ through the linear convolution with a
scaled triangular window function $w_{\text{tri}}[n]$ to improve the precision
simultaneously. The scale factor of the triangular function $N$ stands for the
number of frames with nonzero values. To make the center of the label to $1$
after the smoothing, we only used the odd numbers for the scale factor $N$.
The convolution process is represented as
$\displaystyle w_{\text{tri}}[n]=\begin{cases}1-\left|\frac{n}{(N+1)/2}\right|&\text{if $|n|\leq\frac{N+1}{2}$}\\\ 0&\text{otherwise,}\end{cases}\qquad y_{\text{on\\_s}}[n]=y_{\text{on}}[n]\ast w_{\text{tri}}[n],\qquad y_{\text{off\\_s}}[n]=y_{\text{off}}[n]\ast w_{\text{tri}}[n]$ (2)
where the operation $\ast$ represents the linear convolution and $n$ is the
frame index.
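A short NumPy sketch of this smoothing, assuming a frame-wise one-hot label sequence (the helper name is ours):

```python
import numpy as np

def smooth_labels(y, N=5):
    """Sketch of Eq. (2): convolve a one-hot onset/offset label sequence
    with a triangular window whose N nonzero values keep the label center
    at 1 (N odd). With 20 ms hops, N=5 extends the label to 100 ms as in
    Sec. 4.3."""
    assert N % 2 == 1
    n = np.arange(-(N // 2), N // 2 + 1)
    w_tri = 1.0 - np.abs(n / ((N + 1) / 2))
    return np.convolve(y, w_tri, mode="same")

y_on = np.zeros(20)
y_on[10] = 1.0
print(smooth_labels(y_on))  # triangular bump over frames 8..12, peak 1.0
```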
### 3.4 Note Decoding
To find the positions of onsets from the prediction output, we set a constant
threshold and set the frame with the maximal value above the threshold as the
position of onset. When finding the offset of a note, we first find the offset
candidates between the current onset time and the next onset time. The offset
candidate is either the highest peak of the offset prediction or the time
frame that the activation prediction goes lower than 0.5. If multiple offset
candidates exist, we set the offset to the latest offset candidate. If no
offset candidate is found, the offset of the note is set to the time frame of
the next onset. The threshold of onset and offset is set to 0.2. In order to
determine the threshold, we evaluated the validation set using a threshold
ranging from 0.1 to 0.9 in increments of 0.1 to identify the optimal
threshold.
For note-level singing transcription, we estimated the note-level pitch from
frame-wise F0s of the note segment to find the pitch of the note, following
[6]. We extracted F0s with the PYIN algorithm [16], which is one of the most
accurate pitch trackers. To compress the F0 contour to the note-level pitch,
we used the weighted median algorithm, which finds the 50% percentile in the
ordered elements with given weights. In this experiment, we use the normalized
Hann window function with the same length of the note segment frames as the
weight of the weighted median to reduce the influence of the F0 near the
boundaries, which are the most expressive part. Since the sum of all weight
values should be one, the Hann window function is normalized by dividing by
the sum of the window elements.
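The pitch estimation step can be sketched as follows (our own helper; it assumes note segments of at least a few frames, since the Hann window vanishes at its endpoints):

```python
import numpy as np

def note_pitch(f0_segment):
    """Sketch of the note-level pitch estimate: the weighted median of the
    frame-wise F0s, weighted by a normalized Hann window so frames near
    the note boundaries count less (Sec. 3.4)."""
    w = np.hanning(len(f0_segment))
    w = w / w.sum()                       # weights must sum to one
    order = np.argsort(f0_segment)
    cdf = np.cumsum(w[order])
    # first ordered element whose cumulative weight reaches 0.5
    return f0_segment[order][np.searchsorted(cdf, 0.5)]

print(note_pitch(np.array([218.0, 220.0, 221.0, 220.0, 219.0])))  # 220.0
```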
## 4 Experiments
### 4.1 Datasets
We used SSVD v2.0 as the primary dataset [8]. It contains multiple sight-
singing recordings, consisting of 67 singing audio files for the train and
validation set, and 127 audio files for the test set. The human labeled
annotations include onset, offset, and averaged note pitch. To use both
phoneme and note labels given the audio, we also used the 50 songs in Korean
from the CSD dataset [17], which have both note and phoneme labels of a female
professional singer. Since the original note annotations of CSD were targeted
at singing voice synthesis, we found that they need some refinement for the note
transcription task. Thus, we re-annotated 50 songs of CSD for our experiment,
following the rule suggested by [3]. The re-annotated label of CSD can be
found on our GitHub page (https://github.com/seyong92/CSD_reannotation). The
refined CSD is split into 35, 5, and 10 songs for the train, validation, and
test sets, respectively.
To train the phoneme classifier, we used TIMIT [18] which contains English
speech with time-aligned phoneme labels for the model with SSVD v2.0. TIMIT
contains 5.4 hours of audio of English speech. While training the phoneme
classifier network, we reduced the phoneme types to 39 following the CMU
pronouncing dictionary [19]. For the model with CSD, we used the unaligned
phoneme label in CSD to train.
To compare the transcription performance of the proposed model with previous
work, we also used the ISMIR2014 [3] dataset, which contains 38 songs sung by
both adults and children, as a test set.
### 4.2 Evaluation Metrics
We evaluated the models with the mir_eval library [20] for onset/offset
detection and note-level transcription. We used the metrics proposed in [3]:
F1-measure of COn (correct onset), COff (correct offset), COnOff (correct
onset and offset), COnP (correct onset and pitch), and COnPOff (Correct onset,
offset and pitch). We used the default parameters of mir_eval, which sets the
onset tolerance to 50 ms, the offset tolerance to larger value between 50 ms
and 0.2 of note duration, and the 50 cents for the pitch tolerance. Also, we
report the results when the onset/off thresholds are 100 ms considering the
softness of singing onsets.
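For illustration, the COnPOff-style scores can be computed with mir_eval's transcription module roughly as follows (a hedged sketch with toy data; the tolerances match the defaults described above):

```python
import numpy as np
import mir_eval

# toy reference / estimate: (onset, offset) intervals in seconds, pitches in Hz
ref_int = np.array([[0.00, 0.50], [0.60, 1.00]])
ref_hz = np.array([220.0, 246.9])
est_int = np.array([[0.02, 0.48], [0.61, 1.02]])
est_hz = np.array([220.0, 247.0])

# 50 ms onset tolerance, 50 cents pitch tolerance, offset tolerance
# max(50 ms, 0.2 * note duration); offset_ratio=None would instead
# ignore offsets (a COnP-style score).
p, r, f, _ = mir_eval.transcription.precision_recall_f1_overlap(
    ref_int, ref_hz, est_int, est_hz,
    onset_tolerance=0.05, pitch_tolerance=50.0,
    offset_ratio=0.2, offset_min_tolerance=0.05)
print(p, r, f)
```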
### 4.3 Training Details
We computed 80 bin mel-spectrogram $X$ with 320 samples in hop size (20 ms)
and 1024 samples in FFT size after resampling audio files to 16 kHz. For the
modified _ConvNet_ module, we set 48/48/96 nodes to the convolutional layers
and 768 nodes to the dense layer. We used 768 nodes in all bidirectional LSTM
layers and set the last FC layer in the note onset/activation detector to have
two separate nodes for onset and activation detection, respectively. For the
label smoothing, we used a scale factor of 5 to extend the label length to 100
ms, which shows the best results in our experiment.
To train the note onset/offset detection network, we used the AdamW optimizer
[21] with a batch size of 8 and a learning rate of 1e-6. We reduced the
learning rate with a reducing factor of 0.98 for every 1000 steps. While
training, we used the random audio segment with 5 seconds. The validation set
was evaluated for every 500 steps and we stopped training when there is no
advance in the model for 10 validation steps. To train the phoneme classifier,
we used the Adam optimizer with a batch size of 16 and a learning rate of
2e-4. We reduced the learning rate with a reducing factor of 0.98 for every
900 steps. We validated the model with every 500 steps for the phoneme
classifier and trained the model while there is no advance in the model for 5
validation steps.
Fig. 3: Transition and re-onset recall of the models in the ablation study on ISMIR2014. The red triangle is the model with mel-spectrogram, the blue square is the model with PPG, and the green circle is the model with both features.

| Model | COn (50ms) P / R / F | COn (100ms) P / R / F | COff (50ms) P / R / F | COff (100ms) P / R / F |
|---|---|---|---|---|
| TONY [22] | 0.7068 / 0.6326 / 0.6645 | 0.8402 / 0.7486 / 0.7877 | 0.7862 / 0.6981 / 0.7358 | 0.8405 / 0.7471 / 0.7870 |
| Omnizart [7, 23] | 0.7797 / 0.8229 / 0.7951 | 0.8667 / 0.9153 / 0.8843 | 0.7698 / 0.8132 / 0.7852 | 0.8394 / 0.8842 / 0.8554 |
| MusicYOLO (retrained) [8] | 0.9427 / 0.8970 / 0.9176 | 0.9711 / 0.9247 / 0.9456 | 0.8924 / 0.8504 / 0.8693 | 0.9476 / 0.9024 / 0.9227 |
| Proposed | 0.9448 / 0.9188 / 0.9305 | 0.9652 / 0.9387 / 0.9506 | 0.8701 / 0.8473 / 0.8576 | 0.9429 / 0.9176 / 0.9290 |
Table 2: Onset/offset detection results on ISMIR2014 (each cell shows P / R / F). Both MusicYOLO and
the proposed model were trained with SSVD v2.0. Omnizart is a pretrained note
transcription model package (not with SSVD v2.0). Tony is a free, open-source
application for pitch and note transcription.
## 5 Results and Discussions
### 5.1 Ablation Study
We conducted an ablation study to see the effect of input features and model
architectures. The proposed model shown in Figure 2 corresponds to "Dual CRNNs
+ one RNN" in (g). We first compare it to a single CRNN model with only one
type of features (either mel spectrogram in (a) or PPG in (c)). Considering
that the model architecture can affect the performance, we also compared the
proposed model to the same "Dual CRNNs + one RNN" but with one type of input
features for both inputs (either mel spectrogram in (b) or PPG in (d)). Given
the proposed model, we also removed the RNN module in each CRNN branch in (e),
and then stacked another RNN module on top of (e) in (f).
Table 1 shows the onset/offset detection results of all compared models. Single
CRNNs with only one input feature in (a) and (c) have significantly lower
accuracy than the proposed model in (g). The gap is relatively smaller when the
model was trained with CSD. Interestingly, the single CRNN model with PPG
consistently outperformed the one with mel spectrogram. The results from the
same model architecture with different input features in (b), (d), and (g)
shows that using both mel-spectrogram and PPG is more effective than using
either one of them. However, the gaps are less significant than those in the
comparison with single CRNN in (a) and (c). This indicates that model
architecture is also important to improve the performance. Likewise, the
results in (e), (f), and (g) show that the design choice of neural network
affects the performance. Since CSD is a small dataset, the proposed model has
a tendency to overfit it. Overall, the proposed model in (g) shows the best
performance.
We further investigated the effect of the input features by looking into the
recall accuracy for two special types of onsets: re-onset and transition. They
are note onsets that occur 20 ms or less after the offset of the previous
note. The difference between the two types is whether the pitch changes
(transition) or not (re-onset). The re-onset usually occurs when the syllable
in lyrics or energy changes while continuing the same pitch. Note that, since
our model does not predict the onset types, only recall accuracy can be
computed. As shown in Figure 3, the models with mel-spectrogram (in red) tend
to detect more transitions, indicating that it is more sensitive to pitch
change. On the other hand, the models with PPG (in blue) tend to detect more
re-onsets, showing that it captures phonetic changes well. Lastly, the models
with both features have more balanced accuracy in both transition and re-
onset. The demo examples, more analysis, and pre-trained models are available
on the companion website (https://seyong92.github.io/phoneme-informed-transcription-blog/).
### 5.2 Comparison with Prior Work
Table 2 shows the comparison with prior work on the ISMIR2014 dataset, which
has been widely used for singing voice onset/offset detection (or note
segmentation). For fair comparison, we retrained a recent state-of-the-art
model [8] with the same dataset we used for the proposed model. Our proposed
model outperforms the state-of-the-art model in onset F-score in both
tolerances while it is slightly worse in offset F-score in 50ms tolerance. The
publicly available note transcription software (TONY) and model package
(Omnizart) have significantly lower accuracy than the two models. Finally, to
see the performance for singing note transcription including pitch
information, we measured COnP and COnPOff on ISMIR2014 and SSVD v2.0 in Table
3. The results show that the proposed model achieves consistently better
performances than TONY and Omnizart.
| Model | ISMIR2014 COnP | ISMIR2014 COnPOff | SSVD v2.0 COnP | SSVD v2.0 COnPOff |
|---|---|---|---|---|
| Tony [22] | 0.6009 | 0.4621 | 0.7311 | 0.6794 |
| Omnizart [7, 23] | 0.6174 | 0.4992 | 0.6047 | 0.5151 |
| Proposed | 0.8975 | 0.7728 | 0.8558 | 0.8303 |
Table 3: Note transcription results on ISMIR2014 and SSVD v2.0. The proposed
model was trained with SSVD v2.0.
## 6 Conclusion
We presented a neural network architecture for note-level singing
transcription that takes advantage of PPG on top of mel-spectrogram. Through
the ablation study, we examined various architectures along with the two input
features, showing that the additional phonetic information is effective in
singing onset/offset detection. Also, we showed that the proposed model
outperforms the compared models on ISMIR2014 and SSVD v2.0. For future work,
we plan to explore models that effectively handle weak supervision from noisy
melody and lyrics labels on a large-scaled dataset [24].
## References
* [1] Curtis Hawthorne, Erich Elsen, Jialin Song, Adam Roberts, Ian Simon, Colin Raffel, Jesse Engel, Sageev Oore, and Douglas Eck, “Onsets and frames: dual-objective piano transcription,” in Proc. of the 19th Int. Society for Music Information Retrieval Conf., Paris, France, 2018, pp. 50–57.
* [2] Qiuqiang Kong, Bochen Li, Xuchen Song, Yuan Wan, and Yuxuan Wang, “High-resolution piano transcription with pedals by regressing onset and offset times,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 3707–3717, October 2021.
* [3] Emilio Molina, Ana M. Barbancho, Lorenzo J. Tardón, and Isabel Barbancho, “Evaluation framework for automatic singing transcription,” in Proc. of the 15th Int. Society for Music Information Retrieval Conf., Taipei, Taiwan, 2014, pp. 567–572.
* [4] Simon Dixon, “Onset detection revisited,” in Proc. of the 9th Int. Conf. on Digital Audio Effects, Montréal, Canada, 2006, pp. 133–137.
* [5] Ryo Nishikimi, Eita Nakamura, Satoru Fukayama, Masataka Goto, and Kazuyoshi Yoshii, “Automatic singing transcription based on encoder-decoder recurrent neural networks with a weakly-supervised attention mechanism,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Brighton, UK, 2019, pp. 161–165.
* [6] Zih-Sing Fu and Li Su, “Hierarchical classification networks for singing voice segmentation and transcription,” in Proc. of the 20th Int. Society for Music Information Retrieval, Delft, The Netherlands, 2019, pp. 900–907.
* [7] Jui-Yang Hsu and Li Su, “VOCANO: A note transcription framework for sining voice in polyphonic music,” in Proc. of the 22nd Int. Society for Music Information Retrieval, Online, 2021, pp. 293–300.
* [8] Xianke Wang, Wei Xu, Weiming Yang, and Wenqing Cheng, “Musicyolo: A sight-singing onset/offset detection framework based on object detection instead of spectrum frames,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Singapore, Singapore, 2022, pp. 396–400.
* [9] Yukun Li, Emir Demirel, Polina Proutskova, and Simon Dixon, “Phoneme-informed note segmentation of monophonic vocal music,” in Proc. of the 2nd Workshop on NLP for Music and Spoken Audio (NLP4MusA), Online, 2021, pp. 17–21.
* [10] Rainer Kelz, Matthias Dorfer, Filip Korzeniowski, Sebastian Böck, Andreas Arzt, and Gerhard Widmer, “On the potential of simple framewise approaches to piano transcription,” in Proc. of the 17th Int. Society for Music Information Retrieval Conf., Linz, Austria, 2016, pp. 475–481.
* [11] Taegyun Kwon, Dasaem Jeong, and Juhan Nam, “Polyphonic piano transcription using autoregressive multi-state note model,” in Proc. of the 21st Int. Society for Music Information Retrieval Conf., Montréal, Canada, 2020, pp. 454–461.
* [12] Alex Graves, Santiago Fernández, Faustino Gomez, and Jürgen Schmidhuber, “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in Proc. of the 23rd International Conference on Machine Learning, Pittsburgh, PA, 2006, pp. 369–376.
* [13] Haşim Sak, Andrew Senior, Kanishka Rao, Ozan İrsoy, Alex Graves, Françoise Beaufays, and Johan Schalkwyk, “Learning acoustic frame labeling for speech recognition with recurrent neural networks,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, South Brisbane, QLD, Australia, 2015, pp. 4280–4284.
* [14] Yann Teytaut and Axel Roebel, “Phoneme-to-audio alignment with recurrent neural networks for speaking and singing voice,” in Proc. Interspeech 2021, Brno, Czechia, 2021, pp. 61–65.
* [15] Curtis Hawthorne, Andriy Stasyuk, Adam Roberts, Ian Simon, Cheng-Zhi Anna Huang, Sander Dieleman, Erich Elsen, Jesse Engel, and Douglas Eck, “Enabling factorized piano music modeling and generation with the maestro dataset,” in The Int. Conf. on Learning Representations, New Orleans, LA, USA, 2019.
* [16] Matthias Mauch and Simon Dixon, “PYIN: A fundamental frequency estimator using probabilistic threshold distributions,” in IEEE Int. Conf. on Acoustics, Speech, and Signal Processing, Florence, Italy, 2014, pp. 659–663.
* [17] Soonbeom Choi, Wonil Kim, Saebyul Park, Sangeon Yong, and Juhan Nam, “Children’s song dataset for singing voice research,” in ISMIR Late Breaking and Demo Papers, Montréal, Canada, 2020.
* [18] John S. Garofolo, Lori F. Lamel, William M. Fisher, Jonathan G. Fiscus, David S. Pallett, Nancy L. Dahlgren, and Victor Zue, “TIMIT acoustic-phonetic continuous speech corpus,” LDC93S1, Philadelphia: Linguistic Data Consortium, 1993.
* [19] “The cmu pronouncing dictionary,” http://www.speech.cs.cmu.edu/cgi-bin/cmudict, accessed 2022-10-22.
* [20] Colin Raffel, Brian McFee, Eric J. Humphrey, Justin Salamon, Oriol Nieto, Dawen Liang, and Daniel P. W. Ellis, “mir_eval: A transparent implementation of common MIR metrics,” in Proc. of the 15th Int. Society for Music Information Retrieval, Taipei, Taiwan, 2014, pp. 367–372.
* [21] Ilya Loshchilov and Frank Hutter, “Decoupled weight decay regularization,” in The Int. Conf. on Learning Representations, New Orleans, LA, USA, 2019.
* [22] Matthias Mauch, Chris Cannam, Rachel Bittner, George Fazekas, Justin Salamon, Jiajie Dai, Juan Bello, and Simon Dixon, “Computer-aided melody note transcription using the tony software: Accuracy and efficiency,” in Proc. of Sound and Music Computing, Maynooth, Ireland, 2015.
* [23] Yu-Te Wu, Yin-Jyun Luo, Tsung-Ping Chen, I-Chieh Wei, Jui-Yang Hsu, Yi-Chin Chuang, and Li Su, “Omnizart: A general toolbox for automatic music transcription,” Journal of Open Source Software, vol. 6, no. 68, p. 3391, Dec. 2021.
* [24] Gabriel Meseguer-Brocal, Alice Cohen-Hadria, and Geoffroy Peeters, “DALI: A large dataset of synchronized audio, lyrics, and notes, automatically created using teacher-student machine learning paradigm,” in Proc. of the 19th Int. Society for Music Information Retrieval, Paris, France, 2018, pp. 431–437.
# Learning Constraints and Descriptive Segmentation for Subevent Detection
Haoyu Wang1, Hongming Zhang2, Muhao Chen3 & Dan Roth1
1Department of Computer and Information Science, UPenn
2Department of Computer Science and Engineering, HKUST
3Department of Computer Science, USC
This work was done when the author was visiting the University of
Pennsylvania.
###### Abstract
Event mentions in text correspond to real-world events of varying degrees of
granularity. The task of subevent detection aims to resolve this granularity
issue, recognizing the membership of multi-granular events in event complexes.
Since knowing the span of descriptive contexts of event complexes helps infer
the membership of events, we propose the task of _event-based text
segmentation_ (EventSeg) as an auxiliary task to improve the learning for
subevent detection. To bridge the two tasks together, we propose an approach
to learning and enforcing constraints that capture dependencies between
subevent detection and EventSeg prediction, as well as guiding the model to
make globally consistent inference. Specifically, we adopt Rectifier Networks
for constraint learning and then convert the learned constraints to a
regularization term in the loss function of the neural model. Experimental
results show that the proposed method outperforms baseline methods by 2.3% and
2.5% on benchmark datasets for subevent detection, HiEve and IC, respectively,
while achieving decent performance on EventSeg prediction (our code is
publicly available at http://cogcomp.org/page/publication_view/950).
## 1 Introduction
Since real-world events are frequently conveyed in human languages,
understanding their linguistic counterparts, i.e. event mentions in text, is
of vital importance to natural language understanding (NLU). One key challenge
to understanding event mentions is that they refer to real-world events with
varied granularity Glavaš et al. (2014) and form _event complexes_ Wang et al.
(2020). For example, when speaking of a coarse-grained event “publishing a
paper”, it can involve a complex of more fine-grained events such as “writing
the paper,” “passing the peer review,” and “presenting at the conference.”
Naturally, understanding events requires resolving the granularity of events
and infer their memberships, which corresponds to the task of subevent
detection (a.k.a. event hierarchy extraction). Practically, subevent detection
is a key component of event-centric NLU Chen et al. (2021), and is beneficial
to various applications, such as schema induction Zhang et al. (2020); Li et
al. (2020a), task-oriented dialogue agents Andreas et al. (2020),
summarization Chen et al. (2019); Zhao et al. (2020), and risk detection Pohl
et al. (2012).
Figure 1: An example of Parent-Child relations and EventSegs from the HiEve
dataset Glavaš et al. (2014). The blue and yellow segments denote the textual
spans of event complexes “posted” and “scandal” respectively. Curved arrows
denote Parent-Child relations within a text segment, whereas the dotted arrows
denote cross-segment Parent-Child relations.
As a significant step towards inducing event complexes (graphs that recognize
the relationship of multi-granular events) in documents, subevent detection
has started to receive attention recently Wang et al. (2020); Han et al.
(2021). It is natural to perceive that a document may contain several
different event complexes, which often span different descriptive
contexts that form relatively independent text segments. Consider the example
in Figure 1, where the two membership relations in the event complex (graph
consisting of “scandal (e7),” “charges (e6),” “ousting (e8),” and relations)
are both within the segment marked in yellow that describes the event complex.
As can be seen in the paragraph, though we cannot deny the existence of cross-
segment subevent relations (dotted arrows), events related by membership are
much more likely to co-occur within a text segment. This correlation
has been overlooked by existing data-driven methods Zhou et al. (2020); Yao et
al. (2020), which formulate subevent detection as pairwise relation
extraction. On the other hand, while prior studies have demonstrated the
benefits of incorporating logical constraints among event memberships and
other relations (such as co-reference) Glavaš and Šnajder (2014); Wang et al.
(2020), the constraints between the memberships and event co-occurrences in
text segments remain uncertain. Hence, another challenge is how to effectively
learn and enforce hard-to-articulate constraints, as in the case of subevent
detection and segmentation of text.
Our _first_ contribution is to improve subevent detection based on an
auxiliary task of EventSeg prediction. By EventSeg prediction, we seek to
segment a document into descriptive contexts of different event complexes.
Evidently, with EventSeg information, it would be relatively easy to infer the
memberships of events in the same descriptive context. Using annotations for
subevent detection and EventSeg prediction, we aim to adopt a neural model to
jointly learn these two tasks along with the (soft) logical constraints that
bridge their labels together. In this way, we incorporate linear discourse
structure of segments into membership relation extraction, avoiding
complicated feature engineering in the previous work Aldawsari and Finlayson
(2019). From the learning perspective, adding EventSeg prediction as an
auxiliary task seeks to provide effective incidental supervision signals Roth
(2017) to the subevent detection task. This is especially important in the
current scenario where annotated learning resources for subevents are
typically limited Hovy et al. (2013); Glavaš et al. (2014); O’Gorman et al.
(2016).
To capture the logical dependency between subevent structure and EventSeg, our
_second_ contribution is an approach to automatically learning and enforcing
logical constraints. Motivated by Pan et al. (2020), we use Rectifier Networks
to learn constraints in the form of linear inequalities, and then convert the
constraints to a regularization term that can be incorporated into the loss
function of the neural model. This allows any hard-to-articulate constraints
to be automatically captured for interrelated tasks, and efficiently guides
the model to make globally consistent inference. By learning and enforcing
task-specific constraints for subevent relations, the proposed method achieves
comparable results with SOTA subevent detection methods on the HiEve and IC
datasets. Moreover, by jointly learning with EventSeg prediction, the proposed
method surpasses previous methods on subevent detection by a relative 2.3% and
2.5% in $F_{1}$ on HiEve and IC, while achieving decent results on EventSeg
prediction.
## 2 Related Work
We discuss three lines of relevant research.
Subevent Detection. Several approaches to extracting membership relations have
been proposed, which mainly fall into two categories: statistical learning
methods and data-driven methods. Statistical learning methods Glavaš et al.
(2014); Glavaš and Šnajder (2014); Araki et al. (2014); Aldawsari and
Finlayson (2019) collect a variety of features before feeding into classifiers
for pairwise decision. Nevertheless, the features often require costly human
effort to obtain and are often dataset-specific. Data-driven methods, on the
other hand, automatically characterize events with neural language models like
BERT Devlin et al. (2019), and can simultaneously incorporate various signals
such as event time duration Zhou et al. (2020), joint constraints with event
temporal relations Wang et al. (2020) and subevent knowledge Yao et al.
(2020). Among recent methods, only Aldawsari and Finlayson (2019) utilize
discourse features like discourse relations between elementary discourse
units, but still document-level segmentation signals are not incorporated into
the task of subevent detection. Actually, research on event-centric NLU Chen
et al. (2021) has witnessed the usage of document-level discourse relations:
different functional discourse structures around the main event in news
articles have been studied in Choubey et al. (2020). Hence, we attempt to
capture the interdependencies between subevent detection and segmentation of
text, in order to enhance the model performance for event hierarchy
extraction.
Text Segmentation. Early studies in this line have concentrated on
unsupervised text segmentation, quantifying lexical cohesion within small text
segments Choi (2000), and unsupervised Bayesian approaches have also been
successful in this task Eisenstein and Barzilay (2008); Eisenstein (2009);
Newman et al. (2012); Mota et al. (2019). Given that unsupervised algorithms
are difficult to specialize for a particular domain, Koshorek et al. (2018)
formulate the problem as a supervised learning task. Lukasik et al. (2020)
follow this idea by using transformer-based architectures with cross segment
attention to achieve state-of-the-art performance. Focusing on creating
logically coherent sub-document units, these prior works do not cover
segmentation of text regarding descriptive contexts of event complexes, which
is the focus of the auxiliary task in this work.
Learning with Constraints. In terms of enforcing declarative constraints in
neural models, early efforts Roth and Yih (2004); Glavaš and Šnajder (2014)
formulate the inference process as Integer Linear Programming (ILP) problems.
Pan et al. (2020) also employ ILP to enforce constraints learned automatically
from Rectifier Networks with strong expressiveness Pan and Srikumar (2016).
Yet the main drawback of solving an ILP problem is its inefficiency in a large
feasible solution space. Recent work on integrating neural networks with
structured outputs has emphasized the importance of the interaction between
constraints and representations Rocktäschel and Riedel (2017); Niculae et al.
(2018); Li and Srikumar (2019); Li et al. (2019, 2020b). However, there have
been no automatic and efficient ways to learn and enforce constraints that are
not limited to first-order logic, e.g., linear inequalities learned via
Rectifier Networks; this is the research focus of our paper.
## 3 Preliminaries
A document $\mathcal{D}$ consists of a collection of $m$ sentences
$\mathcal{D}=[s_{1},s_{2},\cdots,s_{m}]$, and each sentence, say $s_{k}$,
contains a sequence of tokens $s_{k}=[w_{1},w_{2},\cdots,w_{n}]$. Some tokens
in sentences belong to the set of annotated event triggers, i.e.,
$\mathcal{E}_{\mathcal{D}}=\\{e_{1},e_{2},\cdots,e_{l}\\}$. Following the
notation by Koshorek et al. (2018), a segmentation of document $\mathcal{D}$
is represented as a sequence of binary values:
$\mathcal{Q}_{\mathcal{D}}=\\{q_{1},q_{2},\cdots,q_{m-1}\\}$, where $q_{i}$
indicates whether sentence $s_{i}$ is the end of a segment.
Subevent Detection is to identify membership relations between events, given
event mentions in documents. Particularly, $\mathcal{R}$ denotes the set of
relation labels as defined in Hovy et al. (2013) and Glavaš et al. (2014)
(i.e., Parent-Child, Child-Parent, Coref, and NoRel). For a relation
$r\in\mathcal{R}$, we use a binary indicator $Y_{i,j}^{r}$ to denote whether
an event pair $(e_{i},e_{j})$ has relation $r$, and use $y_{i,j}^{r}$ to
denote the model-predicted probability that the event pair $(e_{i},e_{j})$ has
relation $r$.
EventSeg prediction aims at finding an optimal segmentation of text that
breaks the document into several groups of consecutive sentences, where each
group is a descriptive context of an event complex Wang et al. (2020).
Differing from the traditional definition of text segmentation, EventSeg
focuses on the change of event complex (which is not necessarily a change of
topic). For a pair of events $(e_{i},e_{j})$, we use a binary indicator
$Z_{i,j}$ to denote whether the two events are within the same descriptive
context of an event complex, and $z_{i,j}$ to denote the model-predicted
probability that the two events belong to the same segment. The procedure for
obtaining EventSeg labels is described in Section 5.1, with statistics in Table 1.
Connections between Two Tasks. Statistically, through an analysis of the HiEve
and IC corpora, Parent-Child and Child-Parent relations appear within the same
descriptive context of an event complex with a probability of 65.13% (see Table
1). On the other hand, the probability for each of the two non-
membership relations (i.e., Coref and NoRel) to appear within the same segment
approximately equals that of its appearance across segments. This demonstrates
that subevent relations tend to appear within the same EventSeg. Since this is
not an absolute logical constraint, we adopt an automatic way of modeling such
constraints instead of manually inducing them, which is described in the next
section.
## 4 Methods
We now present the framework for learning and enforcing constraints for the
main task of subevent detection and the auxiliary EventSeg prediction. We
start with learning the hard-to-articulate constraints (Section 4.1), followed
by details of joint learning (Section 4.2) and inference (Section 4.3) for the
two tasks.
### 4.1 Learning Constraints
From the example shown in Figure 1 we can construct an event graph $G$ with
all the events, membership relations, and EventSeg information. Figure 2 shows
a three-event subgraph of $G$. The goal of constraint learning is as follows:
given membership relations $Y_{i,j}^{r},Y_{j,k}^{r}$ and segmentation
information $Z_{i,j},Z_{j,k}$ about event pairs $(e_{i},e_{j})$ and
$(e_{j},e_{k})$, we would like to determine whether a certain assignment of
$Y_{i,k}^{r}$ and $Z_{i,k}$ is legitimate.
Feature Space for Constraints. We now define the feature space for constraint
learning. Let $\mathbf{X}_{p}=\\{Y_{p}^{r},r\in\mathcal{R}\\}\cup\\{Z_{p}\\}$
denote the set of features for an event pair $p$. Given features
$\mathbf{X}_{i,j}$ and $\mathbf{X}_{j,k}$, we would like to determine the
value of $\mathbf{X}_{i,k}$, yet the mapping from the labels of
$(e_{i},e_{j}),(e_{j},e_{k})$ to the labels of $(e_{i},e_{k})$ is a one-to-
many relationship. For instance, if $r=$ Parent-Child,
$Y_{i,j}^{r}=Y_{j,k}^{r}=1$, and $Z_{i,j}=Z_{j,k}=0$, then due to the
transitivity of Parent-Child, we should enforce $Y_{i,k}^{r}=1$. Yet we cannot
tell whether $e_{i}$ and $e_{k}$ are in the same EventSeg, i.e., both
$Z_{i,k}=1$ and $Z_{i,k}=0$ could be legitimate. In other words, we actually
want to determine the _set of possible values_ of $\mathbf{X}_{i,k}$ and thus
we need to expand the constraint features to better capture relationship
legitimacy. We employ the _power set_ of $\mathbf{X}_{i,k}$,
$\mathcal{P}(\mathbf{X}_{i,k})$, as our new features for event pair
$(e_{i},e_{k})$. And now a subgraph with three events $e_{i}$, $e_{j}$, and
$e_{k}$ can be featurized as
$\mathbf{X}=\mathbf{X}_{i,j}\cup\mathbf{X}_{j,k}\cup\mathcal{P}(\mathbf{X}_{i,k}).$
(1)
Constraint Learning with Rectifier Network. When we construct three-event
subgraphs from documents, a binary label $t$ for structure legitimacy is
created for each subgraph. Inspired by how constraints are learned for several
structured prediction tasks Pan et al. (2020), we represent constraints for a
given subgraph-label pair $(\mathbf{X},t)$ as $K$ linear inequalities (here we
assume $K$ is an upper bound on the number of rules to be learned).
Formally, $t=1$ if $\mathbf{X}$ satisfies constraints $c_{k}$ for all
$k=1,\cdots,K$. And the $k^{\text{th}}$ constraint $c_{k}$ is expressed by a
linear inequality
$\mathbf{w}_{k}\cdot\mathbf{X}+b_{k}\geq 0,$
whose weights $\mathbf{w}_{k}$ and bias $b_{k}$ are learned. Since a system of
linear inequalities is proved to be equivalent to the Rectifier Network
proposed in Pan et al. (2020), we adopt a two-layer rectifier network for
learning constraints
$p=\sigma\Big{(}1-\sum_{k=1}^{K}\operatorname{ReLU}\big{(}\mathbf{w}_{k}\cdot\mathbf{X}+b_{k}\big{)}\Big{)},$
(2)
where $p$ denotes the probability of $t=1$ and $\sigma(\cdot)$ denotes the
sigmoid function. We train the parameters $\mathbf{w}_{k}$’s and $b_{k}$’s of
the rectifier network in a supervised setting. The positive examples are
induced from subgraph structures that appear in the training corpus, while the
negative examples are randomly chosen from the remaining possibilities that do not
exist in the training corpus.
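The following is a minimal PyTorch sketch of the two-layer rectifier network in Equation 2, assuming binary constraint features of dimension $d$ and the hyperparameters reported later in Section 5.6.2 ($K=10$, Adam with learning rate 0.001); the feature and label batches below are random placeholders, not corpus data.

```python
import torch
import torch.nn as nn

class RectifierConstraintNet(nn.Module):
    """Two-layer rectifier network of Eq. (2): K learned inequalities
    w_k . X + b_k >= 0, combined into p = sigmoid(1 - sum_k ReLU(.))."""
    def __init__(self, d: int, K: int):
        super().__init__()
        self.linear = nn.Linear(d, K)  # rows are w_k, bias entries are b_k

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        # Probability that the subgraph encoded by X is legitimate (t = 1).
        return torch.sigmoid(1.0 - torch.relu(self.linear(X)).sum(dim=-1))

net = RectifierConstraintNet(d=20, K=10)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
X = torch.randint(0, 2, (32, 20)).float()  # placeholder constraint features
t = torch.randint(0, 2, (32,)).float()     # placeholder legitimacy labels
loss = nn.functional.binary_cross_entropy(net(X), t)
opt.zero_grad()
loss.backward()
opt.step()
```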
Figure 2: A legitimate structure for a three-event subgraph obtained from the
example shown in Figure 1. The constraint features for the subgraph can be
expressed by
$\mathbf{X}=\mathbf{X}_{7,6}\cup\mathbf{X}_{6,2}\cup\mathcal{P}(\mathbf{X}_{7,2})$,
and the label $t$ for this structure is 1.
### 4.2 Joint Task Learning
After learning the constraints using Rectifier Networks, we introduce how to
jointly model membership relations and EventSeg with neural networks and how
to integrate the learned constraints into the model. The model architecture is
shown in Figure 3.
Figure 3: An overview of our approach. The model takes three pairs of events
at a time in training to enforce constraints over three-event subgraphs (an
example can be found in Figure 2). Event pair representations are obtained
from RoBERTa where the context of two events are taken into consideration.
Soft logical constraints learned in Section 4.1 are converted to a
regularization term in the loss function for subgraph structure legitimacy.
Local Classifier. To characterize event pairs in documents, we employ a neural
encoder, which obtains contextualized representations for event triggers from
the pre-trained transformer-based language model RoBERTa Liu et al. (2019). As
the context of event pairs, the sentences where two event mentions appear are
concatenated using [CLS] and [SEP]. We then calculate the element-wise average
of subword-level contextual representations as the representation for each
event trigger. To obtain event pair representation for $(e_{i},e_{j})$, we
concatenate the two contextual representations, together with their element-
wise Hadamard product and subtraction as in Wang et al. (2020). The event pair
representation is then sent to a multi-layer perceptron (MLP) with
$|\mathcal{R}|$ outputs for estimation of the confidence score $y_{i,j}^{r}$
for each relation $r$. To include EventSeg as an auxiliary task, the model also
predicts whether two events belong to the same segment using another separate
MLP with a single-value output $z_{i,j}$. In accordance with the learned
constraints in Section 4.1, the model takes three pairs of events at a time.
The annotation loss in Figure 3 is a linear combination of a four-class cross-
entropy loss $L_{A,sub}$ for subevent detection and a binary cross-entropy
loss $L_{A,seg}$ for EventSeg.
Incorporating Subgraph Constraints. The $K$ constraints learned in Section 4.1
are encoded into the weights $\mathbf{w}_{k}$ and bias $b_{k}$,
$k=1,\cdots,K$. Since the input $\mathbf{X}$ is considered valid if it
satisfies all $K$ constraints, we obtain the predicted probability $p$ of
$\mathbf{X}$ being valid from Equation 2. To add the constraints as a
regularization term in the loss function of the neural model, we convert $p$
into the negative log space Li et al. (2019), which is the same as the cross-
entropy loss. Thus the loss corresponding to the learned constraints is
$L_{cons}=-\log\Big(\sigma\big(1-\sum_{k=1}^{K}\operatorname{ReLU}(\mathbf{w}_{k}\cdot\mathbf{X}+b_{k})\big)\Big).$
The loss function of the neural model is then
$L=\lambda_{1}L_{A,sub}+\lambda_{2}L_{A,seg}+\lambda_{3}L_{cons},$ (3)
where the $\lambda$’s are non-negative coefficients to control the influence
of each loss term. With the loss function in Equation 3, we train the model in
a supervised way to fine-tune RoBERTa.
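A sketch of the combined loss in Equation 3 could look as follows, assuming the model exposes four-way relation logits and a single same-segment logit per event pair, and that the constraint parameters $\mathbf{w}_{k}$, $b_{k}$ from Section 4.1 are kept fixed; how the soft constraint features $\mathbf{X}$ are assembled from the model's predictions for the three pairs is abstracted away, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def total_loss(y_logits, y_gold, z_logit, z_gold, X, w, b,
               lambda1=1.0, lambda2=1.0, lambda3=1.0):
    # Annotation losses: 4-class cross-entropy for subevent relations
    # (L_{A,sub}) and binary cross-entropy for EventSeg (L_{A,seg}).
    l_sub = F.cross_entropy(y_logits, y_gold)
    l_seg = F.binary_cross_entropy_with_logits(z_logit, z_gold)
    # Constraint regularizer L_cons: negative log of the rectifier-network
    # probability that the three-event subgraph features X are legitimate.
    p = torch.sigmoid(1.0 - torch.relu(X @ w.T + b).sum(dim=-1))
    l_cons = -torch.log(p.clamp_min(1e-8)).mean()
    return lambda1 * l_sub + lambda2 * l_seg + lambda3 * l_cons
```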
### 4.3 Inference
At inference time, to extract relations in the subevent detection task, we
input a pair of events into the model and compare the predicted probability
for each relation, leaving the other two input pairs blank. For EventSeg
prediction, we let the model predict $z_{i,i+1}$ for each pair of adjacent
events $(e_{i},e_{i+1})$ that appear in different sentences. If the model
predicts that the two events do not belong to the same segment
($z_{i,i+1}=0$), there is a segment break between $e_{i}$ and $e_{i+1}$. When there
are intermediate sentences between the two adjacent event mentions, we treat
the sentence that contains $e_{i}$ as the end of a previous segment. In this
way, we provide an approach to solving two tasks together via automatically
learning and enforcing constraints in the neural model. We provide in-depth
experimentation for the proposed method in the next section.
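As an illustration, the EventSeg inference rule above can be sketched as follows, using the Section 3 convention that $z_{i,i+1}=1$ means the two events share a segment; inputs and names are hypothetical.

```python
def eventseg_breaks(event_sentences, z_pred):
    """event_sentences[i]: sentence index of event e_i (text order);
    z_pred[i]: predicted same-segment indicator for (e_i, e_{i+1})."""
    breaks = set()
    for i in range(len(event_sentences) - 1):
        # Only adjacent events in different sentences can mark a break;
        # z = 0 means the pair does not share a descriptive context.
        if event_sentences[i] != event_sentences[i + 1] and z_pred[i] == 0:
            # The sentence containing e_i ends the previous segment.
            breaks.add(event_sentences[i])
    return sorted(breaks)

# Events in sentences 0, 0, 2, 5; the model separates the 2nd and 3rd events.
print(eventseg_breaks([0, 0, 2, 5], [1, 0, 1]))  # -> [0]
```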
## 5 Experiments
Here we describe the experiments on subevent detection with EventSeg
prediction as an auxiliary task. We first introduce the corpora used (Section
5.1), followed by evaluation for subevent detection and an ablation study for
illustrating the importance of each model component (Section 5.2-Section 5.4).
We also provide a case study on EventSeg prediction (Section 5.5) and an
analysis of the constraints learned in the model (Section 5.6).
### 5.1 Datasets
Relations | HiEve Within | HiEve Across | IC Within | IC Across
---|---|---|---|---
Parent-Child | 1,123 | 679 | 1,698 | 550
Child-Parent | 1,067 | 779 | 1,475 | 863
Coref | 322 | 436 | 1,476 | 877
NoRel | 32,029 | 31,726 | 40,072 | 41,815
Table 1: Statistics of the HiEve and IC dataset. Numbers in column “Within”
denote the number of relations appearing within the same descriptive context
of event complex, whereas numbers under “Across” denote those across different
segments.
HiEve The HiEve corpus Glavaš et al. (2014) contains 100 news articles. Within
each article, annotations are given for both subevent membership and
coreference relations. Using the same measurement of inter-annotator agreement
(IAA) as event temporal relations in UzZaman and Allen (2011), the HiEve
dataset has an IAA of 0.69 F1.
Intelligence Community (IC) The IC corpus Hovy et al. (2013) also contains 100
news articles annotated with membership relations. The articles report
violence events such as attack, war, etc. We discard those relations involving
implicit events annotated in IC, and calculate transitive closure for both
subevent relations and co-reference to get annotations for all event pairs in
text order as it is done for HiEve Glavaš et al. (2014).
Labeling EventSeg We explain how to segment the document using annotations for
subevent relations. First, we use the annotated subevent relations (Parent-
Child and Child-Parent only) to construct a directed acyclic event graph for
each document. Due to the property of subevent relations, each connected
component in the graph is actually a tree with one root node, which forms an
event complex. If the graph constructed from a document has only one connected
component, we remove the root node to separate the event graph into multiple
event complexes. Since each event complex has a textual span in the
document, we obtain several descriptive contexts that may or may not overlap
with each other. For those documents with non-overlapping descriptive
contexts, their segmentations are therefore obtained. In cases where two
descriptive contexts of event complexes overlap with each other, if there
exists such an event whose removal results in non-overlapping contexts, then
we segment the contexts assuming this event is not considered. Otherwise, we
merge the contexts into one segment. Through this event-based text
segmentation, on average we obtain 3.99 and 4.29 EventSegs in the HiEve and IC
corpora, respectively.
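A simplified sketch of this labeling heuristic is given below, assuming networkx is available; it keeps only the component-span and merge steps, omitting the root-removal and single-event-removal refinements described above, and it ignores events with no membership edges.

```python
import networkx as nx

def label_eventseg(parent_child_edges, sent_of_event):
    """parent_child_edges: (parent, child) pairs; sent_of_event: event ->
    sentence index. Returns sentence indices after which a segment ends."""
    g = nx.Graph(parent_child_edges)             # undirected for components
    spans = []
    for comp in nx.connected_components(g):      # each component ~ one complex
        sents = [sent_of_event[e] for e in comp]
        spans.append([min(sents), max(sents)])   # textual span of the complex
    spans.sort()
    merged = [spans[0]]
    for lo, hi in spans[1:]:                     # merge overlapping spans
        if lo <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], hi)
        else:
            merged.append([lo, hi])
    return [hi for _, hi in merged[:-1]]         # break after each span but last

# Two complexes spanning sentences 0-3 and 5-8 -> one break after sentence 3.
print(label_eventseg([("e1", "e2"), ("e1", "e3"), ("e4", "e5")],
                     {"e1": 0, "e2": 2, "e3": 3, "e4": 5, "e5": 8}))  # [3]
```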
We summarize the data statistics in Table 1.
### 5.2 Baselines and Evaluation Protocols
On IC dataset, we compare with two baseline approaches. Araki et al. (2014)
propose a logistic regression model along with a voting algorithm for parent
event detection. Wang et al. (2020) use a data-driven model that incorporates
handcrafted constraints with event temporal attributes to extract event-event
relations. On HiEve, we compare with the transformer-based language model
TacoLM Zhou et al. (2020), which is fine-tuned on a temporal common sense
corpus, and the method proposed by Wang et al. (2020), which also serves as the
second baseline for IC. (Despite carefully following the details described in
Aldawsari and Finlayson (2019) and communicating with the authors, we were not
able to reproduce their results; therefore, we choose to compare with other
methods.)
We use the same evaluation metric on HiEve as previous methods Zhou et al.
(2020), leaving 20% of the documents out for testing. (To make predictions on
event complexes, we keep all negative NoRel instances in our experiments
instead of strictly following Zhou et al. (2020) and Wang et al. (2020), where
negative instances are down-sampled with a probability of 0.4.) The $F_{1}$
scores of Parent-Child and Child-Parent and their micro-average are
reported. In accordance with HiEve, the IC dataset is also evaluated with
$F_{1}$ scores of membership relations instead of BLANC Araki et al. (2014),
while the other settings remain the same as in previous works.
### 5.3 Experimental Setup
We fine-tune the pre-trained 1024-dimensional RoBERTa Liu et al. (2019) to
obtain contextual representations of event triggers in a supervised way, given
labels for membership relations and EventSeg. Additionally, we employ 18-
dimensional one-hot vectors for part-of-speech tags of tokens in documents to
include explicit syntactic features in the model. For each MLP we set the
dimension to the average of the input and output neurons, following Chen et
al. (2018). The parameters of the model are optimized using AMSGrad Reddi et
al. (2018), with the learning rate set to $10^{-6}$. The training process is
limited to 40 epochs since it is sufficient for convergence.
### 5.4 Results
We report the results for subevent detection on two benchmark datasets, HiEve
and IC, in Table 2. Among the baseline methods, Wang et al. (2020) has the
best results in terms of $F_{1}$ on both datasets. They integrate event
temporal relation extraction, common sense knowledge and handcrafted logical
constraints into their approach. In contrast, our proposed method does not
require constraints induced by domain experts, but still outperforms their
$F_{1}$ score by 2.3 - 2.5%. We attribute this superiority to the use of
connections between subevent relations and the linear discourse structure of
segments. Thanks to the strong expressiveness of Rectifier Networks, we
utilize these connections via the learning of linear constraints, thus
incorporating incidental supervision signal from EventSeg. Furthermore, the
event pair representation in our model is obtained from broader contexts than
the local sentence-level contexts for events in Wang et al. (2020). The new
representation not only contains more information on events but naturally
provides necessary clues for determining whether there is a break for
EventSeg.
Corpus | Model | PC $F_{1}$ | CP $F_{1}$ | Avg. $F_{1}$
---|---|---|---|---
IC | Araki et al. (2014) | - | - | 0.262
IC | Wang et al. (2020) | 0.421 | 0.495 | 0.458
IC | Our model | 0.446 | 0.516 | 0.481
HiEve | Zhou et al. (2020) | 0.485 | 0.494 | 0.489
HiEve | Wang et al. (2020) | 0.472 | 0.524 | 0.497
HiEve | Our model | 0.534 | 0.510 | 0.522
Table 2: Experimental results for subevent detection on IC and HiEve corpus.
PC, CP and Avg. denote Parent-Child, Child-Parent and their micro-average,
respectively. $F_{1}$ scores for PC and CP are not reported in Araki et al.
(2014).
We further perform an ablation analysis to aid the understanding of the model
components and report our findings in Table 3. Without any constraints,
integrating EventSeg prediction as an auxiliary task brings along an absolute
gain of 0.2% and 0.6% in $F_{1}$ on HiEve and IC respectively over the vanilla
single-task model with RoBERTa fine-tuning. This indicates that EventSeg
information is beneficial to the extraction of membership relations. When
membership constraints are added via the regularization term into the loss
function, the model’s performance on subevent detection is significantly
improved by 2.1% in $F_{1}$ on HiEve dataset. Incorporating constraints
involving two tasks further enhances the model performance by 0.5% - 1.1%.
This indicates that the global consistency ensured within and across EventSegs
is important for enhancing the comprehension for subevent memberships.
Model | HiEve $P$ | HiEve $R$ | HiEve $F_{1}$ | IC $P$ | IC $R$ | IC $F_{1}$
---|---|---|---|---|---|---
Single-task Training | 43.9 | 56.6 | 49.4 | 44.5 | 46.9 | 45.8
Joint Training | 45.7 | 54.2 | 49.6 | 39.9 | 56.5 | 46.4
\+ Membership Constraints | 55.6 | 48.5 | 51.7 | 50.1 | 45.8 | 47.0
\+ Membership + EventSeg | 51.9 | 53.6 | 52.2 | 39.6 | 64.0 | 48.1
Table 3: Ablation study results for subevent detection. The results on both
datasets are the micro-average of Parent-Child and Child-Parent in terms of
precision, recall, and $F_{1}$. “+ Membership Constraints” denotes adding
automatically learned constraints for membership relations upon the joint
training model. The row of “+ Membership + EventSeg” shows the results of the
complete model.
### 5.5 Case Study for EventSeg Prediction
Here we provide an analysis of model performance on the task of EventSeg
prediction. Though EventSeg prediction is somewhat different from text
segmentation in concept, we can use methods for text segmentation as baselines
for EventSeg prediction. We train a recent BERT-based model Lukasik et al.
(2020) for text segmentation based on annotations for EventSeg in the HiEve
and IC corpora and compare our method with this baseline. In Table 4 we show
the performances of the baseline model and ours for EventSeg prediction in
terms of $F_{1}$ on HiEve and IC. Since our solution for EventSeg prediction
is essentially similar to the cross-segment BERT model in terms of
representations of segments, our performance is on par with the baseline
model.
Model | HiEve | IC
---|---|---
Cross-segment BERT Lukasik et al. (2020) | 55.2 | 58.3
Our model | 56.8 | 57.4
Table 4: EventSeg prediction performance in terms of $F_{1}$ on the HiEve and
IC corpus.
### 5.6 Analysis on Constraint Learning
We further provide an in-depth qualitative analysis of the different types of
logical constraints captured by constraint learning.
#### 5.6.1 Types of Learned Constraints
We expect that both task-specific constraints (membership relations only) in
previous works Glavaš and Šnajder (2014); Wang et al. (2020) and cross-task
constraints can be automatically captured in our framework. Accordingly, we
separately analyze these two constraints.
Task-specific Constraints. Since we are using three-event subgraphs for
constraint learning, transitivity constraints for membership relations like
$Y_{i,j}^{r}+Y_{j,k}^{r}-Y_{i,k}^{r}\leq 1,\quad r\in\\{\textsc{Parent-Child},\textsc{Child-Parent},\textsc{Coref}\\},$
can evidently be learned; constraints that typically involve two events, e.g.,
symmetry constraints for membership relations like
$Y_{i,j}^{r}=Y_{j,i}^{\bar{r}},\quad r\in\\{\textsc{Parent-Child},\textsc{Child-Parent}\\},$
can also be learned by assigning the third event $e_{k}$ to the same event as
$e_{i}$ and treating the relation of $(e_{i},e_{k})$ as Coref.
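As a small illustration, the transitivity inequality above can be checked over all binary assignments; only the assignment that chains two membership relations without the implied third one is ruled out. This is a sketch with hypothetical names.

```python
from itertools import product

def satisfies_transitivity(y_ij, y_jk, y_ik):
    # The linear inequality Y_ij^r + Y_jk^r - Y_ik^r <= 1 displayed above.
    return y_ij + y_jk - y_ik <= 1

for y_ij, y_jk, y_ik in product([0, 1], repeat=3):
    # Only (1, 1, 0) violates: two chained Parent-Child relations force
    # a Parent-Child relation between e_i and e_k.
    ok = satisfies_transitivity(y_ij, y_jk, y_ik)
    print((y_ij, y_jk, y_ik), "legitimate" if ok else "violates")
```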
Cross-task Constraints. Here we provide an analysis of cross-task constraints
for both membership relations and EventSeg information learned in the model.
We give an example constraint in the form of linear inequality learned from
HiEve
$\displaystyle\begin{split}&0.13x_{0}+0.19x_{1}+0.27x_{2}+0.08x_{3}-0.18x_{4}\\\
+&0.09x_{5}+0.13x_{6}+0.25x_{7}+0.04x_{8}-0.18x_{9}\\\
+&\cdots+0.02x_{18}+0.07x_{19}+\cdots+0.05\geq 0,\end{split}$
where $x_{1}$ and $x_{6}$ denote the variables for $Y_{i,j}^{r}=1$ and
$Y_{j,k}^{r}=1$ ($r=$ Child-Parent) respectively, and they both have positive
coefficients. If we look at expected labels for
$\mathcal{P}(\mathbf{X}_{i,k})$, we can see that $x_{18}$ and $x_{19}$ which
denote the variables for $Y_{i,k}^{r}=1,Z_{i,k}=0$ and
$Y_{i,k}^{r}=1,Z_{i,k}=1$ have coefficients of 0.02 and 0.07, respectively.
The two positive coefficients for $x_{18}$ and $x_{19}$ indicate that (a)
$(e_{i},e_{k})$ may have a Child-Parent relation, and (b) the
probability of $(e_{i},e_{k})$ being in the same EventSeg is greater than that
of the two events being in different EventSegs.
#### 5.6.2 Qualitative Analysis
We set $K$ to 10 since we observe that fewer constraints decrease the learning
accuracy, while increasing $K$ has no noticeable influence. We optimize the
parameters using Adam with a learning
rate of 0.001 and the training process is limited to 1,000 epochs. We show the
performance of constraint learning in Table 5. Since the constraints for
membership relations should be declarative hard constraints like symmetry and
transitivity constraints in Section 5.6.1, the accuracy of constraint learning
is equal or close to 100%. Yet, those hard-to-articulate constraints that
incorporate EventSeg information are more difficult to learn, and thus the
Rectifier Network has a less satisfying performance in terms of accuracy on
the test set of HiEve and IC (96.44% and 98.01%).
Constraints | HiEve | IC
---|---|---
Membership | 99.13 | 100.00
Membership + EventSeg | 96.44 | 98.01
Table 5: Constraint learning performance in terms of accuracy on test set.
“Membership” denotes the constraints involving membership relations only,
while “Membership + EventSeg” denotes full constraints.
## 6 Conclusion
In this work we propose an automatic and efficient way of learning and
enforcing constraints for subevent detection. By noticing the connections
between subevent detection and EventSeg, we adopt EventSeg prediction as an
auxiliary task which provides effective incidental supervision signals.
Through learning and enforcing constraints that can express hard-to-articulate
constraints, the logical rules for both tasks are captured to regularize the
model towards consistent inference. The proposed approach outperforms SOTA
data-driven methods on benchmark datasets and provides comparable results with
recent text segmentation methods on EventSeg prediction. This demonstrates the
effectiveness of the framework on subevent detection and the potential of
solving other structured prediction tasks in NLP.
## Ethical Considerations
This work does not present any direct societal consequence. The proposed
method aims at supporting high-quality extraction of event complexes from
documents with the awareness of discourse structures and automated constraint
learning. We believe this study leads to intellectual merits of developing
robust event-centric information extraction technologies. It also has broad
impacts, since constraints and dependencies can be broadly investigated for
label structures in various natural language classification tasks. The
eventually acquired knowledge, on the other hand, can potentially benefit
various downstream NLU and NLG tasks.
As with any information extraction method, the real-world open-source articles
from which information is extracted may include societal biases. Extracting event
complexes from articles with such biases may potentially propagate the bias
into acquired knowledge representation. While not specifically addressed in
this work, the ability to incorporate logical constraints and discourse
consistency can be a way to mitigate societal biases.
## Acknowledgement
We appreciate the anonymous reviewers for their insightful comments.
This research is supported by the Office of the Director of National
Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA),
via IARPA Contract No. 2019-19051600006 under the BETTER Program, and by
contract FA8750-19-2-1004 with the US Defense Advanced Research Projects
Agency (DARPA). The views expressed are those of the authors and do not
reflect the official policy or position of the Department of Defense or the
U.S. Government.
## References
* Aldawsari and Finlayson (2019) Mohammed Aldawsari and Mark Finlayson. 2019. Detecting subevents using discourse and narrative features. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 4780–4790, Florence, Italy. Association for Computational Linguistics.
* Andreas et al. (2020) Jacob Andreas, John Bufe, David Burkett, Charles Chen, Josh Clausman, Jean Crawford, Kate Crim, Jordan DeLoach, Leah Dorner, Jason Eisner, et al. 2020. Task-oriented dialogue as dataflow synthesis. _Transactions of the Association for Computational Linguistics_ , 8:556–571.
* Araki et al. (2014) Jun Araki, Zhengzhong Liu, Eduard Hovy, and Teruko Mitamura. 2014. Detecting subevent structure for event coreference resolution. In _Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14)_ , Reykjavik, Iceland. European Language Resources Association (ELRA).
* Chen et al. (2018) Muhao Chen, Changping Meng, Gang Huang, and Carlo Zaniolo. 2018. Neural article pair modeling for wikipedia sub-article matching. In _Joint European Conference on Machine Learning and Knowledge Discovery in Databases_ , pages 3–19. Springer.
* Chen et al. (2021) Muhao Chen, Hongming Zhang, Qiang Ning, Manling Li, Heng Ji, Kathleen McKeown, and Dan Roth. 2021. Event-centric natural language understanding. In _Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics – Tutorials_.
* Chen et al. (2019) Xiuying Chen, Zhangming Chan, Shen Gao, Meng-Hsuan Yu, Dongyan Zhao, and Rui Yan. 2019. Learning towards abstractive timeline summarization. In _Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19_ , pages 4939–4945. International Joint Conferences on Artificial Intelligence Organization.
* Choi (2000) Freddy Y. Y. Choi. 2000. Advances in domain independent linear text segmentation. In _1st Meeting of the North American Chapter of the Association for Computational Linguistics_.
* Choubey et al. (2020) Prafulla Kumar Choubey, Aaron Lee, Ruihong Huang, and Lu Wang. 2020. Discourse as a function of event: Profiling discourse structure in news articles around the main event. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 5374–5386, Online. Association for Computational Linguistics.
* Devlin et al. (2019) Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In _Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)_ , pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
* Eisenstein (2009) Jacob Eisenstein. 2009. Hierarchical text segmentation from multi-scale lexical cohesion. In _Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics_ , pages 353–361, Boulder, Colorado. Association for Computational Linguistics.
* Eisenstein and Barzilay (2008) Jacob Eisenstein and Regina Barzilay. 2008. Bayesian unsupervised topic segmentation. In _Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing_ , pages 334–343, Honolulu, Hawaii. Association for Computational Linguistics.
* Glavaš and Šnajder (2014) Goran Glavaš and Jan Šnajder. 2014. Constructing coherent event hierarchies from news stories. In _Proceedings of TextGraphs-9: the workshop on Graph-based Methods for Natural Language Processing_ , pages 34–38, Doha, Qatar. Association for Computational Linguistics.
* Glavaš et al. (2014) Goran Glavaš, Jan Šnajder, Marie-Francine Moens, and Parisa Kordjamshidi. 2014. HiEve: A corpus for extracting event hierarchies from news stories. In _Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC’14)_ , pages 3678–3683, Reykjavik, Iceland. European Language Resources Association (ELRA).
* Han et al. (2021) Rujun Han, I Hsu, Jiao Sun, Julia Baylon, Qiang Ning, Dan Roth, Nanyun Peng, et al. 2021. Ester: A machine reading comprehension dataset for event semantic relation reasoning. _arXiv preprint arXiv:2104.08350_.
* Hovy et al. (2013) Eduard Hovy, Teruko Mitamura, Felisa Verdejo, Jun Araki, and Andrew Philpot. 2013\. Events are not simple: Identity, non-identity, and quasi-identity. In _Workshop on Events: Definition, Detection, Coreference, and Representation_ , pages 21–28, Atlanta, Georgia. Association for Computational Linguistics.
* Koshorek et al. (2018) Omri Koshorek, Adir Cohen, Noam Mor, Michael Rotman, and Jonathan Berant. 2018. Text segmentation as a supervised learning task. In _Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)_ , pages 469–473, New Orleans, Louisiana. Association for Computational Linguistics.
* Li et al. (2020a) Manling Li, Qi Zeng, Ying Lin, Kyunghyun Cho, Heng Ji, Jonathan May, Nathanael Chambers, and Clare Voss. 2020a. Connecting the dots: Event graph schema induction with path language modeling. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 684–695, Online. Association for Computational Linguistics.
* Li et al. (2019) Tao Li, Vivek Gupta, Maitrey Mehta, and Vivek Srikumar. 2019. A logic-driven framework for consistency of neural models. In _Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)_ , pages 3924–3935, Hong Kong, China. Association for Computational Linguistics.
* Li et al. (2020b) Tao Li, Parth Anand Jawale, Martha Palmer, and Vivek Srikumar. 2020b. Structured tuning for semantic role labeling. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 8402–8412, Online. Association for Computational Linguistics.
* Li and Srikumar (2019) Tao Li and Vivek Srikumar. 2019. Augmenting neural networks with first-order logic. In _Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics_ , pages 292–302, Florence, Italy. Association for Computational Linguistics.
* Liu et al. (2019) Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach.
* Lukasik et al. (2020) Michal Lukasik, Boris Dadachev, Kishore Papineni, and Gonçalo Simões. 2020\. Text segmentation by cross segment attention. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 4707–4716, Online. Association for Computational Linguistics.
* Mota et al. (2019) Pedro Mota, Maxine Eskenazi, and Luísa Coheur. 2019. BeamSeg: A joint model for multi-document segmentation and topic identification. In _Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)_ , pages 582–592, Hong Kong, China. Association for Computational Linguistics.
* Newman et al. (2012) David Newman, Nagendra Koilada, Jey Han Lau, and Timothy Baldwin. 2012. Bayesian text segmentation for index term identification and keyphrase extraction. In _Proceedings of COLING 2012_ , pages 2077–2092, Mumbai, India. The COLING 2012 Organizing Committee.
* Niculae et al. (2018) Vlad Niculae, Andre Martins, Mathieu Blondel, and Claire Cardie. 2018. SparseMAP: Differentiable sparse structured inference. In _Proceedings of the 35th International Conference on Machine Learning_ , volume 80 of _Proceedings of Machine Learning Research_ , pages 3799–3808. PMLR.
* O’Gorman et al. (2016) Tim O’Gorman, Kristin Wright-Bettner, and Martha Palmer. 2016. Richer event description: Integrating event coreference with temporal, causal and bridging annotation. In _Proceedings of the 2nd Workshop on Computing News Storylines (CNS 2016)_ , pages 47–56, Austin, Texas. Association for Computational Linguistics.
* Pan et al. (2020) Xingyuan Pan, Maitrey Mehta, and Vivek Srikumar. 2020. Learning constraints for structured prediction using rectifier networks. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 4843–4858, Online. Association for Computational Linguistics.
* Pan and Srikumar (2016) Xingyuan Pan and Vivek Srikumar. 2016. Expressiveness of rectifier networks. In _Proceedings of The 33rd International Conference on Machine Learning_ , volume 48 of _Proceedings of Machine Learning Research_ , pages 2427–2435, New York, New York, USA. PMLR.
* Pohl et al. (2012) Daniela Pohl, Abdelhamid Bouchachia, and Hermann Hellwagner. 2012. Automatic sub-event detection in emergency management using social media. In _Proceedings of the 21st international conference on world wide web_ , pages 683–686.
* Reddi et al. (2018) Sashank J Reddi, Satyen Kale, and Sanjiv Kumar. 2018. On the convergence of adam and beyond. In _International Conference on Learning Representations (ICLR)_.
* Rocktäschel and Riedel (2017) Tim Rocktäschel and Sebastian Riedel. 2017. End-to-end differentiable proving. In _Advances in Neural Information Processing Systems_ , volume 30. Curran Associates, Inc.
* Roth (2017) Dan Roth. 2017. Incidental Supervision: Moving beyond Supervised Learning. In _Proc. of the Conference on Artificial Intelligence (AAAI)_.
* Roth and Yih (2004) Dan Roth and Scott Yih. 2004. A Linear Programming Formulation for Global Inference in Natural Language Tasks. In _Proc. of the Conference on Computational Natural Language Learning (CoNLL)_ , pages 1–8. Association for Computational Linguistics.
* UzZaman and Allen (2011) Naushad UzZaman and James Allen. 2011. Temporal evaluation. In _Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies_ , pages 351–356, Portland, Oregon, USA. Association for Computational Linguistics.
* Wang et al. (2020) Haoyu Wang, Muhao Chen, Hongming Zhang, and Dan Roth. 2020. Joint constrained learning for event-event relation extraction. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 696–706, Online. Association for Computational Linguistics.
* Yao et al. (2020) Wenlin Yao, Zeyu Dai, Maitreyi Ramaswamy, Bonan Min, and Ruihong Huang. 2020. Weakly Supervised Subevent Knowledge Acquisition. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 5345–5356, Online. Association for Computational Linguistics.
* Zhang et al. (2020) Hongming Zhang, Muhao Chen, Haoyu Wang, Yangqiu Song, and Dan Roth. 2020. Analogous process structure induction for sub-event sequence prediction. In _Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)_ , pages 1541–1550, Online. Association for Computational Linguistics.
* Zhao et al. (2020) Lulu Zhao, Weiran Xu, and Jun Guo. 2020. Improving abstractive dialogue summarization with graph structures and topic words. In _Proceedings of the 28th International Conference on Computational Linguistics_ , pages 437–449, Barcelona, Spain (Online). International Committee on Computational Linguistics.
* Zhou et al. (2020) Ben Zhou, Qiang Ning, Daniel Khashabi, and Dan Roth. 2020. Temporal common sense acquisition with minimal supervision. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ , pages 7579–7589, Online. Association for Computational Linguistics.
Corresponding author: John Drew Wilson.
# Beyond one-axis twisting: Simultaneous spin-momentum squeezing
John Drew Wilson JILA and Department of Physics, University of Colorado, 440
UCB, Boulder, CO 80309, USA Simon B. Jäger JILA and Department of Physics,
University of Colorado, 440 UCB, Boulder, CO 80309, USA Physics Department
and Research Center OPTIMAS, Technische Universität Kaiserslautern, D-67663,
Kaiserslautern, Germany Jarrod T. Reilly JILA and Department of Physics,
University of Colorado, 440 UCB, Boulder, CO 80309, USA Athreya Shankar
Institute for Theoretical Physics, University of Innsbruck, Innsbruck, Austria
Institute for Quantum Optics and Quantum Information of the Austrian Academy
of Sciences, Innsbruck, Austria Maria Luisa Chiofalo Dipartimento di Fisica
“Enrico Fermi”, Università di Pisa, and INFN
Largo Bruno Pontecorvo, 3 I-56127 Pisa (Italy) Murray J. Holland JILA and
Department of Physics, University of Colorado, 440 UCB, Boulder, CO 80309, USA
Dipartimento di Fisica “Enrico Fermi”, Università di Pisa, and INFN
Largo Bruno Pontecorvo, 3 I-56127 Pisa (Italy)
###### Abstract
The creation and manipulation of quantum entanglement is central to improving
precision measurements. A principal method of generating entanglement for use
in atom interferometry is the process of spin squeezing whereupon the states
become more sensitive to $SU(2)$ rotations. One possibility to generate this
entanglement is provided by one-axis twisting (OAT), where a many-particle
entangled state of one degree of freedom is generated by a non-linear
Hamiltonian. We introduce a novel method which goes beyond OAT to create
squeezing and entanglement across two distinct degrees of freedom. We present
our work in the specific physical context of a system consisting of collective
atomic energy levels and discrete collective momentum states, but also
consider other possible realizations. Our system uses a nonlinear Hamiltonian
to generate dynamics in $SU(4)$, thereby creating the opportunity for dynamics
not possible in typical $SU(2)$ one-axis twisting. This leads to three axes
undergoing twisting due to the two degrees of freedom and their entanglement,
with the resulting potential for a richer context of quantum entanglement.
The states prepared in this system are potentially more versatile for use in
multi-parameter or auxiliary measurement schemes than those prepared by
standard spin squeezing.
## I Introduction
The creation and manipulation of quantum entanglement is central to developing
powerful quantum technologies [1, 2]. In particular, precision measurements
can greatly benefit from exploiting quantum entanglements [3, 4] because non-
classical states may be engineered for greater sensitivity to a parameter of
interest compared to their classical counterparts [5, 6]. This field has seen
rapid progress on several frontiers [7] including, but not limited to,
experimental demonstration of atomic clock timekeeping below the shot noise
limit [8], extensions of quantum error correction into quantum metrology
schemes [9], and machine learning and optimization for complex state
preparation [10, 11, 12] and measurement schemes [13]. Through this rapid
progress, there is the possibility that we will soon use quantum mechanical
devices to probe new fundamental physics via tabletop experiments [14, 15].
Many state-of-the-art atom interferometry schemes rely on the process of spin
squeezing [16, 17], where a set of quantum spins are correlated to prepare a
non-classical state that is sensitive to $SU(2)$ rotations at a precision
below the standard quantum limit (SQL) [18] of $\Delta\phi^{2}\propto 1/N$,
where $\Delta\phi^{2}$ is the mean square error and $N$ is the number of
particles used in the measurement. One candidate for generating this
entanglement is one-axis twisting (OAT), whereupon many particles become
entangled in a single degree of freedom under a non-linear Hamiltonian [19,
20]. Through entangling processes such as OAT, the SQL may be surpassed and a
limit in precision of $\Delta\phi^{2}\propto 1/N^{2}$ is achievable. This
limit is a result of a Heisenberg uncertainty-like principle between the
operator generating the unitary and the parameter one is measuring. This limit
is aptly named Heisenberg limited scaling (HLS) [21] and is the ultimate limit
for metrological systems [22].
Schemes using OAT provide below SQL improvements for single parameter
measurements, such as the angle a dipole sweeps under rotation generated by a
magnetic field. These improvements are realized by sacrificing the variance of
quantum fluctuations in one direction in exchange for a reduction in the
variance of fluctuations in the direction we wish to measure. This hints at a
natural extension of OAT; one where multiple degrees of freedom are entangled
and squeezed to provide below SQL improvements for multiple parameters
simultaneously.
In this paper, we introduce a novel method for squeezing and entangling two
distinct degrees of freedom: the internal energy levels of an atomic ensemble
and the collective atomic momentum. As a Gedanken experiment, we consider a
collimated packet of atoms passing through a cavity. The cavity mediated
emission and absorption of photons induces a twisting of the collective
internal and momentum degrees of freedom, while also rapidly creating
entanglement between these two degrees of freedom. The states prepared by this
system could have the potential for multiparameter sensing and estimation [23]
below the SQL, squeezed state Bragg interferometry [24], or single parameter
estimation benefiting from auxiliary measurements. By analyzing the Quantum
Fisher Information Matrix (QFIM) of the system, we find that the maximum
metrological gain in each individual degree of freedom scales
proportionally to HLS. Here, we focus on the squeezing and correlation of the
collective atomic internal energy state and momentum, but we emphasize that
the general process could be realized with any system having the same structure in
its couplings and interactions. To this point, we discuss possible platforms
which might be made to generate similar forms of entanglement in the
conclusion of this paper.
The structure of this paper is as follows. In Section II, we cast the
Hamiltonian into a form that illustrates the entanglement generating process:
atomic emission and absorption of photons and the resulting momentum recoil.
From this form, we show that some features may be intuitively understood as a
generalization of the OAT Hamiltonian, while other important features have no
analog in OAT. In Section III, we explore the structure of the system and
Hamiltonian using an underlying Lie algebra, and use these to simplify the
subsequent analysis of the dynamics. In Section IV, we use the quantum Fisher
information matrix (QFIM) to discuss the results of a numerical simulation of
the time dynamics. Lastly, in Section V we show schematically two
interferometry protocols that benefit from the form of entanglement generated
by this scheme.
## II Derivation of the Hamiltonian and System Dynamics
We consider the Gedanken experiment depicted in Fig. 1(a), where a collimated
packet of atoms passes through the center of the beam waist of a linear
optical cavity, similar to a pulsed version of the setup proposed in [25].
Each atom has a mass $m$, and two relevant internal energy levels labeled the
excited and ground states $\ket{e}$ and $\ket{g}$, respectively. These energy
levels are separated by the transition energy $\hbar\omega_{a}$. We assume
that the cavity supports a single optical mode with corresponding frequency
$\omega_{c}$, which is far detuned from the atomic transition by an amount
$\Delta=\omega_{a}-\omega_{c}$. The interaction strength between the cavity
photons and the $j$th atom is taken to be
$g(x_{j})=\frac{g}{2}\cos(k\hat{x}_{j})$. Furthermore, we assume $N$ atoms
enter the cavity with uniform velocity, and spend a time $t$ inside the light-
atom interaction volume. During this interaction time, the Hamiltonian is
then:
$\hat{H}=\sum_{j=1}^{N}\left(\frac{\hat{p}_{j}^{2}}{2m}+\frac{\hbar\omega_{a}}{2}\hat{\sigma}_{j}^{z}\right)+\hbar\omega_{c}\hat{a}_{c}^{\dagger}\hat{a}_{c}+\frac{\hbar g}{2}\sum_{j=1}^{N}\cos(k\hat{x}_{j})\left(\hat{a}_{c}\hat{\sigma}^{+}_{j}+\hat{a}_{c}^{\dagger}\hat{\sigma}^{-}_{j}\right),$ (1)
where $\hat{\sigma}^{z}_{j}=\ket{e}_{j}\bra{e}_{j}-\ket{g}_{j}\bra{g}_{j}$,
$\hat{\sigma}^{+}_{j}=(\hat{\sigma}^{-}_{j})^{\dagger}=\ket{e}_{j}\bra{g}_{j}$
are Pauli matrices for the $j^{\text{th}}$ atom, $\hat{p}_{j}$ ($\hat{x}_{j}$)
is the transverse momentum (position) operator for the $j^{\text{th}}$ atom
parallel to the cavity axis, and $\hat{a}_{c}^{\dagger}$ ($\hat{a}_{c})$ is
the photon creation (annihilation) operator of the cavity mode.
The two relevant processes at play are the exchange of photons between
different atoms and the atom’s recoil due to the emission and absorption of
photons. To simplify our study of these dynamics, we first take the
interaction picture with
$\hat{H}_{0}=\sum_{j=1}^{N}\hbar\omega_{a}\hat{\sigma}^{z}_{j}/2+\hbar\omega_{a}\hat{a}_{c}^{\dagger}\hat{a}_{c}$.
We assume the cavity is in the dispersive regime
$|\Delta|\gg\sqrt{N}g,\kappa$, where $\kappa$ is the cavity decay rate, such
that we can adiabatically eliminate the cavity degrees of freedom over a
coarse-grained timescale [26]. The resultant Hamiltonian becomes
$\hat{H}=\sum_{j=1}^{N}\frac{\hat{p}_{j}^{2}}{2m}+\frac{\hbar g^{2}}{4\Delta}\sum_{i,j=1}^{N}\cos(k\hat{x}_{i})\cos(k\hat{x}_{j})\hat{\sigma}^{+}_{i}\hat{\sigma}^{-}_{j}.$ (2)
The photon exchange has now been abstracted to an excitation exchange between
different atoms and a resultant change in momentum. We note that the operators
$\sum_{j=1}^{N}\cos(k\hat{x}_{j})\hat{\sigma}^{\pm}_{j}$ cause a change in an
atom’s momentum by $\pm\hbar k$ upon trading an excitation, as $\exp(\pm
ik\hat{x}_{j})$ are the momentum shift operators. Therefore, if the atomic
ensemble is prepared such that the atoms are in motional states differing in
their momentum by integer multiples of $\hbar k$, the atoms will never leave
this manifold under purely Hamiltonian evolution. We consider atoms in a
superposition of motional states of the form $\ket{n}_{j}\equiv\ket{n\hbar
k/2}_{j}$ for odd integers $n$. Preparation of such a state could be
accomplished with a diffraction grating [27] or via Kapitza-Dirac pulses and a
trapping potential [28].
Lastly, we assume that $\hbar Ng^{2}/(4\Delta)\ll(\hbar k)^{2}/m$, such that
the lowest two momentum states are far detuned from the rest of the quadratic
kinetic energy spectrum, as shown in Fig. 1(b). Therefore, if the atoms start
in the $\ket{\pm 1}_{j}$ states, they will remain in the subspace spanned by these
two states. Under these conditions, the total kinetic energy remains fixed at
$N(\hbar k)^{2}/(8m)$. As a result, we can ignore the constant kinetic energy.
In this regime, the momentum now has a spin-$1/2$ algebraic structure and so
the atom’s momentum is effectively mapped onto a two-level system. We define
$\hat{s}_{j}^{+}=(\hat{s}_{j}^{-})^{\dagger}=\ket{+1}_{j}\bra{-1}_{j},$ and
$\hat{s}_{j}^{z}=\ket{+1}_{j}\bra{+1}_{j}-\ket{-1}_{j}\bra{-1}_{j}$ such that
we can cast the translation operator
$\cos(k\hat{x}_{j})=[\exp(ik\hat{x}_{j})+\exp(-ik\hat{x}_{j})]/2$ in terms of
spin raising and lowering operators. We note that
$e^{+ik\hat{x}_{j}}=(e^{-ik\hat{x}_{j}})^{\dagger}=\hat{s}_{j}^{+}$ in this
regime and therefore
$2\cos(k\hat{x}_{j})=(\hat{s}_{j}^{+}+\hat{s}_{j}^{-})\equiv\hat{s}^{x}_{j}$,
thus we can rewrite our Hamiltonian in terms of these operators. Our
simplified Hamiltonian therefore becomes
$\hat{H}=\chi\sum_{i,j=1}^{N}\hat{s}_{i}^{x}\hat{s}_{j}^{x}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{-},$ (3)
with $\chi=\hbar g^{2}/(16\Delta)$. This non-linear Hamiltonian dictates how
the atoms are to be entangled via cavity mediated interactions.
From Eq. 3, we see that if the atoms enter the cavity in the same momentum
state, with all atoms in the state $(\ket{+1}_{j}+\ket{-1}_{j})/\sqrt{2}$,
then the dynamics are generated by
$\hat{H}\approx\sum_{i,j=1}^{N}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{-}\propto(\hat{J}^{z})^{2}$,
where $\hat{J}^{z}=\sum_{j}^{N}\hat{\sigma}_{j}^{z}/2$, and one-axis twisting
is recovered. This is because the momentum flip operator, $\hat{s}^{x}_{j}$,
affects an atom in the state $(\ket{+1}_{j}+\ket{-1}_{j})/\sqrt{2}$ trivially.
Physically, this is the case that all the atoms are in the same equal
superposition of the $\pm\hbar k/2$ momentum states, so the recoil from emission
and absorption of light doesn’t affect the collective momentum, but the atom’s
internal degree of freedom remains free to evolve. With a starting state such
as $\ket{+}^{\otimes N}=(1/\sqrt{2})^{N}(\ket{e}+\ket{g})^{\otimes N}$ for the
internal atomic energies, the Hamiltonian induces standard OAT behavior,
leading to an effective spin squeezing. This starting state and behavior is
shown in Fig. 1(c), where the red arrows on the left Bloch sphere represent
the action of $(\hat{J}^{z})^{2}$.
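This reduction can be verified directly for small atom numbers. The following minimal Python sketch is our own illustration, not part of the original analysis; it assumes only numpy, builds Eq. (3) in the full $4^{N}$-dimensional space of internal and momentum two-level systems, and checks that, acting on atoms prepared in the $\hat{s}^{x}=+1$ momentum state, it agrees with the collective-emission form $\chi\sum_{i,j}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{-}$.

```python
import numpy as np

# Single-atom operators; each atom carries internal (2d) x momentum (2d) = 4d.
sp = np.array([[0, 1], [0, 0]], dtype=complex)  # sigma^+ in the {|e>,|g>} basis
sm = sp.T.conj()                                # sigma^-
sx = np.array([[0, 1], [1, 0]], dtype=complex)  # s^x (momentum flip)
I2 = np.eye(2, dtype=complex)

def atom_op(op_int, op_mom, j, N):
    """Embed (op_int tensor op_mom) acting on atom j into the N-atom space."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, np.kron(op_int, op_mom) if k == j else np.eye(4))
    return out

def H_eq3(N, chi=1.0):
    """Eq. (3): H = chi * sum_{i,j} s^x_i s^x_j sigma^+_i sigma^-_j."""
    H = np.zeros((4**N, 4**N), dtype=complex)
    for i in range(N):
        for j in range(N):
            H += chi * atom_op(sp, sx, i, N) @ atom_op(sm, sx, j, N)
    return H

def H_oat(N, chi=1.0):
    """Collective-emission form chi * sum_{i,j} sigma^+_i sigma^-_j."""
    H = np.zeros((4**N, 4**N), dtype=complex)
    for i in range(N):
        for j in range(N):
            H += chi * atom_op(sp, I2, i, N) @ atom_op(sm, I2, j, N)
    return H

# All atoms in the s^x = +1 momentum state (|+1> + |-1>)/sqrt(2),
# with random internal states.
N = 3
rng = np.random.default_rng(0)
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
psi = np.array([1.0 + 0j])
for _ in range(N):
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi = np.kron(psi, np.kron(v / np.linalg.norm(v), plus_x))

print(np.allclose(H_eq3(N) @ psi, H_oat(N) @ psi))  # True
```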
We may also consider the case that the internal degrees of freedom don’t
affect the dynamics. This case is not physical, but rather provides an
important intuition for the behavior in the system. Here, we take
$\hat{H}\approx\chi\sum_{i,j=1}^{N}\hat{s}_{i}^{x}\hat{s}_{j}^{x}=4\chi(\hat{K}^{x})^{2}$,
where $\hat{K}^{x}=\sum_{j}^{N}\hat{s}_{j}^{x}/2$. While this is not
necessarily physical, it sheds light on the approximate behavior of the atomic
momentum: we expect the momentum states to experience OAT-like behavior
through the non-linear rotation under $(\hat{K}^{x})^{2}$. With a starting
state of $\ket{+1}^{\otimes N}$ for the momentum degrees of freedom, we would
expect to see operators orthogonal to $\hat{K}^{x}$, such as
$\hat{K}^{z}=\sum\hat{s}^{z}_{j}/2$, to undergo a twisting-like behavior. This
starting state and approximate behavior is shown in Fig. 1(c), where the red
arrow on the right Bloch sphere represents the action of $(\hat{K}^{x})^{2}$.
For the full Hamiltonian we expect the state $\ket{\psi}_{0}=\ket{+}^{\otimes
N}\otimes\ket{+1}^{\otimes N}$ to experience the corresponding spin twisting-
like behavior in both degrees of freedom, and to lead to interesting
entanglement between the two. In the subsequent sections, we demonstrate
mathematically that this state breaks an important symmetry typically found in
OAT, and then we numerically show this leads to entanglement that has
potential for metrological advantage.
Figure 1: (a) Schematic of the proposed setup. Here, the momentum
perpendicular to the cavity controls the interaction time. The initial
momentum along the cavity axis selects the manifold of momentum states that
the cavity couples to. (b) The spectrum of the kinetic energy versus the
spectrum of momentum states. Here, we note the $\pm 3\hbar k/2$ states are far
from the $\pm\hbar k/2$ states, thus demonstrating that the lowest manifold of
$4$ states can be considered isolated from the rest of the quadratic spectrum.
(c) The two Bloch spheres for the collective two-level system. This picture is
only valid when there is no entanglement between the two degrees of freedom,
but it still provides a useful picture of the approximate behavior of the
system. The blue cloud is the starting state, while the green dashed line
represents the approximate distribution of the final state. The final state
may not be fully represented on these Bloch spheres due to entanglement
breaking the initial $SU(2)\otimes SU(2)$ symmetry needed to represent states
on two collective Bloch spheres. (d) The four-level system, and black arrows
representing each of the three unique $\mathfrak{su}(2)$ algebras acting on
the system.
## III The Operator Algebras
In full, it is not immediately obvious how dynamics evolve under Eq. (3). The
$\hat{s}^{x}_{j}$ operators complicate the Hamiltonian compared to the usual
OAT Hamiltonian, preventing us from using methods typically used to solve OAT
models. However, we can use the symmetries of the system to recast the
Hamiltonian such that it is a member of an $\mathfrak{su}(4)$ algebra, yielding
a clear picture of the full dynamics and allowing for efficient numerical
simulation.
The operators appearing in the Hamiltonian are all Pauli operators which
correspond to a single atom’s internal or momentum observable. For the
$j^{th}$ atom’s internal state, the operators
$\\{\hat{\sigma}^{x}_{j},\hat{\sigma}^{y}_{j},\hat{\sigma}^{z}_{j}\\}$ fully
describe any possible observable, where
$\hat{\sigma}^{x}_{j}=\sigma^{+}_{j}+\sigma^{-}_{j}$ and
$\hat{\sigma}^{y}_{j}=i(\sigma^{-}_{j}-\sigma^{+}_{j})$. Similarly, its
momentum state is fully described by
$\\{\hat{s}^{x}_{j},\hat{s}^{y}_{j},\hat{s}^{z}_{j}\\}$, where
$\hat{s}^{y}_{j}=i(\hat{s}_{j}^{-}-\hat{s}_{j}^{+})$ is needed for the
momentum operators to close under commutation. The total system is then
described, in part, by the collective atomic and momentum operators,
$\hat{J}^{i}=\sum_{j}^{N}\hat{\sigma}_{j}^{i}/2$ and
$\hat{K}^{i}=\sum_{j}^{N}\hat{s}_{j}^{i}/2$ for $i=x,y,z$ respectively. These
collective atomic and momentum operators each form an $\mathfrak{su}(2)$
algebra: $\mathfrak{J}=\\{\hat{J}^{z},\hat{J}^{\pm}\\}$ and
$\mathfrak{K}=\\{\hat{K}^{z},\hat{K}^{\pm}\\}$. These two algebras allow us to
fully describe any state which is separable in the two degrees of freedom,
such as the state $\ket{\psi_{0}}$ which is represented on two composite Bloch
spheres in Fig. 1(c) in blue. Importantly, we note that the momentum operator
$\hat{K}^{z}$ corresponds to the observable for the center of mass momentum,
$\hat{P}_{\rm COM}=\hbar k\hat{K}^{z}$, which is intuitively the difference
between the number of atoms moving in the $+1$ and $-1$ eigenstates.
We can further simplify our analysis by mapping particles into the Schwinger
boson representation [29]. Here we use the simultaneous eigenstates of
$\hat{J}^{z}$ and $\hat{K}^{z}$ as the basis for the new representation, but
in general this could be done via the procedure shown in [30]. First, we
define
$\ket{\alpha,\beta,\gamma,\delta}=\mathcal{S}\left(\ket{e,+1}^{\otimes\alpha}\ket{g,-1}^{\otimes\beta}\ket{e,-1}^{\otimes\gamma}\ket{g,+1}^{\otimes\delta}\right),$ (4)
where $\alpha+\beta+\gamma+\delta=N$ is the total number of atoms and
$\mathcal{S}$ is the symmetrization operator. Note that the symmetrizer is
defined with the normalization factor, shown explicitly in Appendix A.1, so
this representation is normalized. We can represent all the relevant operators
in this formalism as well by associating the annihilation (creation) operators
$\hat{a},\hat{b},\hat{c},\hat{d}$
($\hat{a}^{\dagger},\hat{b}^{\dagger},\hat{c}^{\dagger},\hat{d}^{\dagger}$) to
each of the four modes, such that
$\hat{a}\ket{\alpha,\beta,\gamma,\delta}=\sqrt{\alpha}\ket{\alpha-1,\beta,\gamma,\delta}$
and similarly for the other three modes as shown in Appendix A.2. Now, the
number of atoms in the excited state is simply $\alpha+\gamma$ for states of the
form in Eq. (4). Therefore, we define the excited-state number operator
$\hat{n}_{e}=\hat{a}^{\dagger}\hat{a}+\hat{c}^{\dagger}\hat{c}$, which satisfies
$\hat{n}_{e}\ket{\alpha,\beta,\gamma,\delta}=(\alpha+\gamma)\ket{\alpha,\beta,\gamma,\delta}$.
By the same process, we can recover the ground state number operator to be
$\hat{n}_{g}=\hat{b}^{\dagger}\hat{b}+\hat{d}^{\dagger}\hat{d}$, the $+1$
momentum state number operator to be
$\hat{n}_{+1}=\hat{a}^{\dagger}\hat{a}+\hat{d}^{\dagger}\hat{d}$, and the $-1$
momentum state number operator to be
$\hat{n}_{-1}=\hat{b}^{\dagger}\hat{b}+\hat{c}^{\dagger}\hat{c}$. Our
collective atomic and momentum operators are simple to represent in the form
$\hat{J}^{z}=\frac{1}{2}(\hat{n}_{e}-\hat{n}_{g}),\qquad\hat{K}^{z}=\frac{1}{2}(\hat{n}_{+1}-\hat{n}_{-1}),$ (5)
and
$\hat{J}^{-}=\hat{a}\hat{d}^{\dagger}+\hat{c}\hat{b}^{\dagger}=(\hat{J}^{+})^{\dagger},\hat{K}^{-}=\hat{a}\hat{c}^{\dagger}+\hat{d}\hat{b}^{\dagger}=(\hat{K}^{+})^{\dagger}$.
Moreover, the Hamiltonian is also simply represented,
$\hat{H}=\chi(\hat{a}^{\dagger}\hat{b}+\hat{c}^{\dagger}\hat{d})(\hat{a}\hat{b}^{\dagger}+\hat{c}\hat{d}^{\dagger}).$ (6)
This is intuitively what should be expected because, for example,
$\hat{a}\hat{b}^{\dagger}$ is collective emission where a single atom goes
from the excited, +1 motional state to a ground, -1 motional state. The other
terms can be similarly understood.
Lastly, we introduce the raising and lowering operators
$\hat{E}^{+}=\hat{a}^{\dagger}\hat{b}+\hat{c}^{\dagger}\hat{d}=(\hat{E}^{-})^{\dagger}$,
and we notice that $[\hat{E}^{+},\hat{E}^{-}]=2\hat{J}^{z}$ and
$[\hat{J}^{z},\hat{E}^{\pm}]=\pm\hat{E}^{\pm}.$ Thus, we see that the set
$\mathfrak{E}=\\{\hat{J}^{z},\hat{E}^{\pm}\\}$ forms a third closed
$\mathfrak{su}(2)$ algebra on the system which succinctly represents the
entanglement generating processes due to absorption and emission. The three
sub-algebras $\mathfrak{J},\mathfrak{K}$ and $\mathfrak{E}$ taken together are
members of a complete $\mathfrak{su}(4)$ algebra, which generates an $SU(4)$
group that efficiently describes the dynamics of this system. The action of
three sub-algebras is represented schematically in Fig. 1(d) for a single
atom. In summary, within the full $\mathfrak{su}(4)$ describing our dynamics,
we find that there exist three $SU(2)$ subgroups, each generated by
$\mathfrak{J},\mathfrak{K}$, or $\mathfrak{E}$, which matches the general
structure for $SU(4)$ [31]. Thus, the system can be considered as a collection
of hybridized angular momenta.
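These commutation relations are easy to confirm numerically. The sketch below is our illustration (assuming the QuTiP library, not code from the original work); it builds the four Schwinger modes with a finite Fock cutoff and checks $[\hat{E}^{+},\hat{E}^{-}]=2\hat{J}^{z}$ on a state far from the cutoff, where truncation effects vanish.

```python
from qutip import destroy, qeye, tensor, basis

nmax = 6  # Fock cutoff per mode; truncation only corrupts the top Fock layer

def mode(k):
    """Annihilation operator for Schwinger mode k in the four-mode space."""
    ops = [qeye(nmax)] * 4
    ops[k] = destroy(nmax)
    return tensor(ops)

a, b, c, d = (mode(k) for k in range(4))
Ep = a.dag() * b + c.dag() * d                      # E^+
Em = Ep.dag()                                       # E^-
Jz = 0.5 * (a.dag()*a + c.dag()*c - b.dag()*b - d.dag()*d)

# Test state |alpha, beta, gamma, delta> = |1, 2, 1, 0>, well below the cutoff.
psi = tensor(basis(nmax, 1), basis(nmax, 2), basis(nmax, 1), basis(nmax, 0))
residual = ((Ep * Em - Em * Ep) - 2 * Jz) * psi
print(residual.norm() < 1e-12)  # True: [E^+, E^-] = 2 J^z
```

The analogous checks for $\mathfrak{J}$ and $\mathfrak{K}$ go through identically with the corresponding raising and lowering operators.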
We can take advantage of the commutation structure in $\mathfrak{E}$ to
simplify the Hamiltonian even further,
$\hat{H}=\chi\hat{E}^{+}\hat{E}^{-}=\chi(\hat{E}^{2}-(\hat{J}^{z})^{2}+\hat{J}^{z}),$ (7)
where $\hat{E}^{2}=\hat{E}^{+}\hat{E}^{-}+(\hat{J}^{z})^{2}-\hat{J}^{z}$ is
the quadratic Casimir operator [32] for $\mathfrak{E}$. Now, Eq. (7) looks
like the familiar form of an OAT Hamiltonian, except for the important
difference that $\hat{K}^{y}$, $\hat{K}^{z}$ don’t commute with $\hat{E}^{2}$.
This means there exist states which are eigenstates of $\hat{K}^{z}$ that
evolve non-trivially under the operator $\hat{E}^{2}$, such as the starting
state discussed at the end of Section II. Furthermore, we can observe that the
operator $\hat{E}^{2}$ has shells corresponding to each of its eigenvalues,
similar to the shells typically defining eigenvalues for total angular
momentum observables. The starting state, $\ket{\psi_{0}}$ creates a
superposition over these shells and, with $\hat{E}^{2}$ contributing non-
trivially to the dynamics, each of the three pseudo-angular momentum subgroups
experiences a twisting under this Hamiltonian.
## IV Analysis of the Dynamics and Entanglement Generation
Now we use the Schwinger boson representation introduced in Section III to
numerically simulate the system and explore the dynamics. For these
simulations we assume the cavity decay at rate $\kappa$ and other dissipative
processes, such as spontaneous emission at rate $\gamma$, are negligible. This
assumption is valid in the limit that the timescale considered for unitary
dynamics, $t$, is much smaller than the relevant inverse decay rates. Further
analysis of the effects of decoherence is left to future work, but we attempt
to make explicit note of when one would expect decoherence to become non-
negligible, and the relevant bounds in these cases.
To simulate the system, we use the four annihilation/creation operators found
in the previous section, and model the atomic system as a system of four
harmonic oscillators. The Hilbert space of four harmonic oscillators has a
dimensionality of $(N+1)^{4}$ containing all states with atom numbers between
$0$ and $4N$ atoms. We may use either of the conditions that
$\hat{n}_{e}+\hat{n}_{g}=N$ or $\hat{n}_{+1}+\hat{n}_{-1}=N$ to project onto
only the states with $N$ atoms. This corresponds to restricting to the states
which may be reached by $SU(4)$ action, and the standard argument of placing
$N$ indistinguishable atoms in four distinguishable modes shows that the basis
contains $(N+1)(N+2)(N+3)/6$ states for $N$ atoms. This now matches the
dimensionality of the basis states with an $SU(4)$ symmetry, given in Ref.
[33], and is numerically more efficient than the initial $(N+1)^{4}$ scaling.
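As a quick consistency check (our sketch, plain Python), one can enumerate the occupations $(\alpha,\beta,\gamma,\delta)$ with $\alpha+\beta+\gamma+\delta=N$ and confirm the $(N+1)(N+2)(N+3)/6$ count:

```python
from itertools import product

def n_atom_basis(N):
    """All Schwinger occupations (alpha, beta, gamma, delta) summing to N."""
    return [(al, be, ga, N - al - be - ga)
            for al, be, ga in product(range(N + 1), repeat=3)
            if al + be + ga <= N]

for N in (2, 4, 10, 20):
    assert len(n_atom_basis(N)) == (N + 1) * (N + 2) * (N + 3) // 6
print("dimension formula confirmed")
```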
We use the starting state discussed in Section II,
$\ket{\psi}_{0}=\ket{+}^{\otimes N}\otimes\ket{+1}^{\otimes N}$. As noted in
the end of Section III, $\ket{\psi}_{0}$ is not an eigenstate of
$\hat{E}^{2}$. From the discussion of this state and the picture in Fig. 1(c),
we expect this initial state to lead to twisting-like behavior and
entanglement generation between the two degrees of freedom. The intuitive
picture to understand this behavior is the following. When an atom emits
light, its internal degree of freedom becomes entangled to that of the atom
which absorbs the emitted light. At the same time, both these atom’s momentum
states must switch, causing their external degrees of freedom to become
entangled similar to their internal ones.
To diagnose the amount of entanglement and potential metrological use, we
consider the case that one wants to prepare states which will be used to
estimate some phase, $\phi_{j}$, encoded by unitary evolution under some
operator, $\hat{G}^{j}$, so that the state evolves according to
$\exp(-i\phi_{j}\hat{G}^{j})$. Specifically we consider the cases that
$\hat{G}^{j}$ is in either $\mathfrak{J}$ or $\mathfrak{K}$, and choose the
indices $i,j$ so that if $i,j=1,2,3$ then
$\hat{G}^{i},\hat{G}^{j}=\hat{J}^{x},\hat{J}^{y},\hat{J}^{z}$ and if
$i,j=4,5,6$ then
$\hat{G}^{i},\hat{G}^{j}=\hat{K}^{x},\hat{K}^{y},\hat{K}^{z}$. In this
scenario the QFIM serves as both an entanglement measure [34] and a measure of
the potential metrological use of a state in quantum metrology [35]. We use
the form of the QFIM given in Ref. [36] for pure states, since in the
present proof of concept we do not address decoherence. Under this condition,
the matrix elements are given by
$\mathcal{F}^{ij}=4\left(\Big\langle\frac{\\{\hat{G}^{i},\hat{G}^{j}\\}}{2}\Big\rangle-\langle\hat{G}^{i}\rangle\langle\hat{G}^{j}\rangle\right),$ (8)
where
$\\{\hat{G}^{i},\hat{G}^{j}\\}=\hat{G}^{i}\hat{G}^{j}+\hat{G}^{j}\hat{G}^{i}$
is the anti-commutator. For $i=j$, Eq. (8) returns the fourfold variance,
which captures the amount of squeezing and entanglement present. The condition
for an entanglement witness to squeezing is $\mathcal{F}^{ii}/N^{2}>1/N$,
which is equivalent to the condition given in Ref. [34]. If
$\mathcal{F}^{ii}/N^{2}$ approaches a constant as $N$ grows, then the
sufficient condition for entanglement is clearly met and the system’s
potential metrological use proportional to HLS is demonstrated. Meanwhile for
$i\neq j$, Eq. (8) returns the fourfold covariance, thereby capturing the
amount of quantum correlations between these observables. We observe that
$[\hat{J}^{i},\hat{K}^{j}]=0$ for all
$\hat{J}^{i}\in\mathfrak{J},\hat{K}^{j}\in\mathfrak{K}$. As a result the
covariance of two operators on the internal state and the momentum, such as
$\text{cov}(\hat{J}^{x},\hat{K}^{z})$, is non-zero only for pure states which
are entangled. The off-diagonal elements of the QFIM with $i\in\\{1,2,3\\}$
and $j\in\\{4,5,6\\}$ therefore represent the covariance between the atomic
and momentum operators, and act as an entanglement witness of quantum
correlations between the two degrees of freedom. Thus, we use the sufficient
condition that $\mathcal{F}^{ij}\neq 0$ as an entanglement witness for the two
degrees of freedom as a pure state bipartite system. This is a modified
version of the condition given in Ref. [37].
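To make the preceding discussion concrete, the following Python sketch (ours, assuming the QuTiP library; the operator names follow the definitions above, and it is an illustration rather than the authors' code) builds the Hamiltonian of Eq. (6), prepares $\ket{\psi_{0}}=\ket{+}^{\otimes N}\otimes\ket{+1}^{\otimes N}$ as $(\hat{a}^{\dagger}+\hat{d}^{\dagger})^{N}\ket{0}$ up to normalization, evolves it, and evaluates Eq. (8) for one diagonal element and one cross covariance.

```python
import numpy as np
from qutip import destroy, qeye, tensor, basis, expect

def build_system(N, chi=1.0):
    """Four Schwinger modes with cutoff N+1; returns (H, psi0, Jx, Kz)."""
    nmax = N + 1
    def mode(k):
        ops = [qeye(nmax)] * 4
        ops[k] = destroy(nmax)
        return tensor(ops)
    a, b, c, d = (mode(k) for k in range(4))
    H = chi * (a.dag() * b + c.dag() * d) * (a * b.dag() + c * d.dag())  # Eq. (6)
    vac = tensor(*([basis(nmax, 0)] * 4))
    psi0 = ((a.dag() + d.dag()) ** N * vac).unit()  # |+>^{(x)N} (x) |+1>^{(x)N}
    Jm = a * d.dag() + c * b.dag()                  # J^- as defined above
    Jx = 0.5 * (Jm + Jm.dag())
    Kz = 0.5 * (a.dag() * a + d.dag() * d - b.dag() * b - c.dag() * c)
    return H, psi0, Jx, Kz

def qfim_el(Gi, Gj, psi):
    """Pure-state QFIM element, Eq. (8)."""
    sym = 0.5 * expect(Gi * Gj + Gj * Gi, psi)
    return (4 * (sym - expect(Gi, psi) * expect(Gj, psi))).real

N = 4
H, psi0, Jx, Kz = build_system(N)
psi_t = (-1j * H * (np.pi / 4)).expm() * psi0   # evolve to chi*t = pi/4
print(qfim_el(Jx, Jx, psi_t) / N**2)   # diagonal element, cf. Fig. 2
print(qfim_el(Jx, Kz, psi_t))          # covariance witness, cf. Fig. 4
```

For small $N$ this runs in seconds, and the normalized diagonal element can be compared against the behavior shown in Fig. 2.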
In Fig. 2, we show the quantity $\mathcal{F}^{ii}/N^{2}$ for the four
operators of interest, and for four different numbers of atoms, $N$, each as a
function of interaction time with the cavity, $t$. We observe that
$\mathcal{F}^{ii}/N^{2}$ increases sharply before leveling off to a constant
value over time. Because $\mathcal{F}^{ii}/N^{2}>1/N$, the entanglement
witness condition is satisfied for each case. This condition is met in a short
interaction time, demonstrating that entanglement is quickly generated in both
the collective internal and momentum modes. Therefore we see that along with
the spin squeezing in the internal atomic degrees of freedom, this platform
also leads to an effective squeezing of the momentum degrees of freedom.
Figure 2: Four of the six diagonal elements of the QFIM, for four different
atom numbers. The operators $\hat{J}^{z}$ and $\hat{K}^{x}$ are left out
because they commute with the Hamiltonian and are therefore conserved
quantities. We see that as the number of atoms grows, the behavior of the
diagonal QFIM elements converge. For atom numbers of $N\approx 50$ or more, a
plateau with respect to time appears, centered around $\chi t=\pi/4$. This is
similar to the behavior found in OAT where the QFIM for $\hat{J}^{x}$ and
$\hat{J}^{y}$ reach a plateau [5] centered around the same time. As $N$ grows,
the plateau exists almost everywhere in time. Here we only show even atom
numbers, $N$, but we note that for odd atom numbers the behavior is the same
except at $\chi t=\pi/2$, where the concavity is opposite from what's
shown here.
To quantify the potential metrological use of this system, we fix the time at
$\chi t=\pi/4$ and show how the diagonal element of QFIM for $\hat{J}^{x}$ and
$\hat{K}^{z}$ scales with atom number. The results are shown in Fig. 3.
Achieving an interaction time scale of $\chi t\sim 1$ would require a very large
cavity-atom coupling, such that $\chi\gg\gamma$. The same is true for any
other decoherence rate one might consider. As a result, this timescale may be
physically inaccessible with traditional cavities, but is theoretically
interesting nonetheless. These long timescales form the equivalent of the
“oversqueezed” timescales in standard OAT. We specifically choose the time
$\chi t=\pi/4$ because it is the center of the plateau in the QFIM’s diagonal
elements.
We see that both the atomic and momentum degrees of freedom scale
proportionally to $N^{2}$, i.e. with HLS. Similar behavior exists in OAT,
where one finds a plateau in the variance of the antisqueezed quadrature for
times between $1/\sqrt{N}\lesssim\chi^{\prime}t\lesssim\pi/2-1/\sqrt{N}$,
where $\chi^{\prime}$ is an appropriately defined frequency. However, in OAT
this plateau is restricted to just the spin degree of freedom [5]. Our scheme
provides a squeezing mechanism for momentum degrees of freedom, creating the
possibility that spin-squeezing techniques used in Ramsey interferometry [38]
might be generalized to Bragg interferometry or that the two might be
performed simultaneously.
Figure 3: The diagonal elements of the QFIM corresponding to $\hat{J}^{x}$
and $\hat{K}^{z}$ shown as a function of atom number, $N$. We fit
$4(\Delta\hat{J}^{x})^{2}$ and $4(\Delta\hat{K}^{z})^{2}$ with second-order polynomials
$F_{J}(N)$ and $F_{K}(N)$ respectively. We fit for $N\geq 4$, because for
$N=2$ and $N=3$ the system has anomalous behavior for small atom numbers. We
find that $4(\Delta\hat{J}^{x})^{2}$ is fit by $F_{J}(N)\approx
0.366N^{2}+0.793N-2.662$, and $4(\Delta\hat{K}^{z})^{2}$ is fit by the function
$F_{K}(N)\approx 0.356N^{2}+0.599N+1.466$. Both of these demonstrate the HLS.
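A quadratic fit of the simulated QFIM against atom number, in the spirit of the fits quoted above, could be produced as follows (our sketch; it reuses the `build_system` and `qfim_el` helpers from the Section IV sketch, and the atom numbers chosen here are illustrative):

```python
import numpy as np
# Reuses build_system and qfim_el from the Section IV sketch.
Ns = [4, 5, 6, 7]
F = []
for N in Ns:
    H, psi0, Jx, _ = build_system(N)
    psi_t = (-1j * H * (np.pi / 4)).expm() * psi0   # chi t = pi/4
    F.append(qfim_el(Jx, Jx, psi_t))
# The leading coefficient estimates the N^2 (HLS) prefactor.
print(np.polyfit(Ns, F, 2))
```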
Now, we study the behavior of the entanglement between the degrees of freedom,
which has no analog in OAT. We study the entanglement via the fourfold
covariance between the two operators $\hat{J}^{x}$ and $\hat{K}^{z}$,
corresponding to an off-diagonal element of the QFIM. In Fig. 4(a), we see
that the system moves through a highly correlated state, with a high
covariance between the two degrees of freedom, before it approaches an
uncorrelated state for a moment in time at $\chi t=\pi/2$. At an interaction
time of $\chi t=\pi$, the system returns to its original state. In Fig. 4(b),
we see that for interaction times of $\chi t\approx\pi/4$ the correlations
only scale linearly with $N$. Therefore, interaction times which reach this
plateau prepare a system which is capable of quantum sensing for two
parameters at the Heisenberg limit, with relatively little error introduced by
the simultaneous measurement of the two parameters. This motivates the first
half of the next section, where a schematic representation of two parameter
interferometry is shown.
The time at which the system is maximally correlated is labeled
$t_{\text{max}}$, and we find $\chi t_{\text{max}}$ decreases with the number of
atoms such that $\chi t_{\text{max}}\approx N^{\nu}$, where $\nu\approx-2/5$
is found from fitting. At this time, the maximum correlation scales
proportionally to $N^{2}$, which is on the order of the squeezing for the two
degrees of freedom.
To achieve an interaction time with these large correlations, one needs $\chi
t\sim N^{-2/5}$. Compared to the single-particle emission rate, one has the
requirement $\chi\gg N^{-2/5}\gamma$, which can be achieved for sufficiently
large $N$. Therefore we expect single-particle decoherence to be negligible on
these timescales. In this regime, we instead expect that collective
decoherence processes, such as collective spontaneous emission mediated by the
cavity, would limit the amount of achievable entanglement. After adiabatic
elimination of the cavity, the collective decoherence rate is due to light
being incoherently scattered into the cavity and lost. This rate can be
estimated as $N\chi\kappa/\Delta\propto Ng^{2}\kappa/\Delta^{2}$. Therefore
one may reduce it by increasing the cavity-atom detuning, $\Delta$, at the
expense of reducing $\chi$. However, interaction times of $\chi t\sim
N^{-2/5}$ may still be possible in cavities with low photon loss rate
$\kappa$. We also note that a similar timescale of $\chi t\sim N^{-2/3}$ is
needed for production of optimally squeezed states in standard OAT [5, 39], so
it could be possible to achieve an interaction time on the order needed to see
strong correlations.
This short timescale with highly correlated degrees of freedom motivates the
second half of our next section, where a schematic representation of single
parameter interferometry is shown. The parameter is estimated via an
interaction with one degree of freedom, and an auxiliary measurement on the
other degree of freedom.
Figure 4: Plots of $\mathcal{F}^{ij}=4\mathrm{cov}(\hat{J}^{x},\hat{K}^{z})$.
Left - The off-diagonal element of the QFIM, $\mathcal{F}^{ij}$, normalized by $N^{2}$
for four different values of $N$. We see the covariance between
$\hat{J}^{x},\hat{K}^{z}$ grows rapidly before decaying for longer time
scales, then in a collapse-revival-like effect at $\chi t\approx\pi$ the
operators become correlated again before approaching the starting state. Right
- The same off-diagonal element of the QFIM at two different times: $\chi
t=\pi/4$ when the correlations are decreasing, and $\chi t=N^{-2/5}$ when the
correlations are largest. We find that $\mathcal{F}^{ij}|_{\chi
t=\pi/4}\approx 4.103\cdot 10^{-3}N^{2}+0.926N$, and $\mathcal{F}^{ij}|_{\chi
t=N^{-2/5}}\approx 0.1782N^{2}-0.02721N$.
## V Interferometer Schemes
To demonstrate a possible metrological use, we numerically explore two
interferometry schemes. The first uses the system to detect two relative
phases: one encoded in the atom’s internal degree of freedom, and a second
encoded in the momentum degree of freedom. The second scheme uses this system
to detect a single parameter via auxiliary measurements. The version of the
auxiliary measurement scheme presented here is the case that the collective
internal degree of freedom accumulates phase and the momentum degree of
freedom is measured. However, this process would work similarly if the roles
were reversed.
For both schemes, we choose a new interaction picture for the Hamiltonian such
that $\hat{J}^{z}$ is removed from Eq. (7). This has no effect on the physics
described above, besides keeping the atomic ensemble polarized in
$\hat{J}^{x}$ instead of precessing about $\hat{J}^{z}$. This matches what is
often done in OAT, and the process is shown in more depth in Appendix B.
[Quantum circuit: (a) $\ket{+}^{\otimes N}$: $e^{-i\hat{H}\tau_{2}}\rightarrow e^{-i\theta_{\text{opt}}\hat{J}^{x}}\rightarrow e^{-i\phi_{3}\hat{J}^{z}}\rightarrow$ measure $\hat{J}^{x}$; $\ket{+1}^{\otimes N}$: $e^{-i\hat{H}\tau_{2}}\rightarrow e^{-i\phi_{5}\hat{K}^{y}}\rightarrow$ measure $\hat{K}^{z}$. (b) $\ket{+}^{\otimes N}$: $e^{-i\hat{H}\tau_{1}}\rightarrow e^{-i\phi_{1}\hat{J}^{x}}\rightarrow e^{i\hat{H}\tau_{1}}$; $\ket{+1}^{\otimes N}$: $e^{-i\hat{H}\tau_{1}}\rightarrow e^{i\hat{H}\tau_{1}}\rightarrow$ measure $\hat{K}^{z}$.]
Figure 5: A quantum circuit schematic of the two schemes. The two tracks
represent the actions affecting either degree of freedom, with the top track
representing the internal states of the atoms, and the bottom track
representing the momentum. (a) The two parameter scheme. The interaction time
for this two parameter scheme, $\tau_{2}$, is fixed at $\chi\tau_{2}=\pi/4$ to
demonstrate metrological use on the plateau found in Section IV. (b) The
auxiliary measurement scheme. Here, $\chi\tau_{1}=N^{-2/5}$ is chosen such
that the ensembles are maximally correlated. The time-reversed unitary could
be achieved by changing the detuning on the cavity.
We start with the two parameter scheme. The relevant schematic representation
is shown in Fig. 5(a). Here, we first pass the atomic ensemble through the
cavity for an interaction time $\chi\tau_{2}=\pi/4$ to prepare the probe
state. We chose this time to show the metrological use for times near the
plateau, when correlations between the degrees of freedom are decreasing with
respect to interaction time. However, this multiparameter scheme could be used
for any interaction time, albeit with slight differences due to varying
correlation strengths. After the state preparation, a rotation generated by
$\hat{J}^{x}$ is performed so that the maximum fluctuation is in
$\hat{J}^{z}$, where the angle $\theta_{\text{opt}}$ is found numerically. For
the momentum degree of freedom, it was found that the state is already
prepared such that the maximal fluctuations are along $\hat{K}^{y}$ at this
time. The signal is encoded in the system by the unitary
$\hat{V}=\exp(-i\phi_{3}\hat{J}^{z}-i\phi_{5}\hat{K}^{y}),$ (9)
where we assume for numerical convenience the phases $\phi_{3},\phi_{5}$ are
small, at $\phi_{3}=\phi_{5}=\pi/16$. However, we found that these results
hold for larger phases as well as for two phases which aren’t equal. After the
unitary, we measure the observables $\hat{J}^{x}$ and $\hat{K}^{z}$ and carry
out phase estimation for both phases simultaneously. To estimate the phase, we
simulate a Bayesian inferencing scheme [21] for two parameters and with a flat
prior, and to find the asymptotic behavior of this Bayesian inference, we
numerically calculate the Classical Fisher Information (CFI) as a function of
atom number. The exact algorithm for sampling and updating a probability
distribution, as well as the explicit form of the CFI are shown in Appendix C.
Using the CFI, we have a useful set of inequalities from the Cramér-Rao Bound
[40] (CRB):
$\sigma_{i}^{2}\geq\frac{1}{MI(\hat{G}^{i})}\geq\frac{1}{M\mathcal{F}^{ii}},$ (10)
where $i=3,5$ corresponds to either $\phi_{3},$ or $\phi_{5}$,
$\sigma_{i}^{2}$ is the variance of the probability distribution, $M$ is the
number of measurements, $I(\hat{G}^{i})$ is the CFI for a parameter encoded by
the operator $\hat{G}^{i}=\hat{J}^{z},\hat{K}^{y}$, and $\mathcal{F}^{ii}$ is
the diagonal element of the QFIM for the corresponding operator. The first
inequality is the classical CRB, and the second inequality is the quantum CRB.
By inverting this bound we find the following: $\mathcal{F}^{ii}\geq
I(\hat{G}^{i})\geq\frac{1}{M\sigma_{i}^{2}}$, so we can tell how close our
resultant probability distribution from Bayesian inferencing is to saturating
the CRB. In Fig. 6, we see the results of this analysis for $M=5000$
measurements. This measurement scheme saturates the classical CRB for both
parameters, and reaches a value of about $80\%$ of the quantum CRB. Moreover,
it does this simultaneously for both parameters.
We also note that, while not shown, as $\phi_{3},\phi_{5}$ tend towards zero
the CFI exactly saturates the quantum CRB, but Bayesian inferencing takes
asymptotically more measurements to saturate the classical CRB. This result
was found numerically, but it can be intuitively explained by the formation of
narrow, ring-like $Q$ functions on the collective Bloch spheres of the
internal and external degrees of freedom. Those rings form along the
$J^{x}$-$J^{z}$ plane and along the $K^{y}$-$K^{z}$ plane which makes them
sensitive to any rotation which results in leaving the corresponding planes.
For rotations of these planes around the $J^{z}$ and $K^{y}$ axes one can then
efficiently read out the applied phase by measuring $J^{x}$ and $K^{z}$,
respectively. With this picture in mind, we would expect the optimal
measurement for any value of $(\phi_{3},\phi_{5})$ is
$(\hat{J}^{x}\cos(\phi_{3})+\hat{J}^{y}\sin(\phi_{3}))\otimes(\hat{K}^{z}\cos(\phi_{5})+\hat{K}^{x}\sin(\phi_{5}))$,
such that the measurement is always oriented the same way relative to the
plane the state lies in. Numerically, we find that this measurement in fact always saturates
the quantum CRB. However, using this measurement requires knowledge of
$(\phi_{3},\phi_{5})$.
Figure 6: Left - A plot of the standard deviation corresponding to the final
result of Bayesian inferencing for estimating the phases $\phi_{3}$ and
$\phi_{5}$ with $M=5000$ measurements, and $\phi_{3}=\phi_{5}=\pi/16$. Right -
A plot of the quantities $\frac{1}{M\sigma_{i}^{2}}$ for
$\sigma_{i}=\sigma_{J},\sigma_{K}$, the CFI $I(\hat{G}^{i})$ for
$\hat{G}^{i}=\hat{J}^{z},\hat{K}^{y}$ corresponding to these measurements, and
the diagonal elements of the QFIM for these measurements. Note that because of
the rotation generated by $\hat{J}^{x}$ prior to the interferometry,
$\mathcal{F}^{33}=4(\Delta\hat{J}^{z})^{2}$ now scales with HLS. We see that
the quantities $\frac{1}{M\sigma_{i}^{2}}=I(\hat{G}^{i})$ saturate the
classical CRB from the left half of Eq. (10), and nearly saturate the quantum
CRB. By fitting the diagonal QFIM elements and $\frac{1}{M\sigma_{i}^{2}}$ we
find the CFI scales as $I(\hat{J}^{z})\approx 0.3184N^{2}+0.9162N$,
$I(\hat{K}^{y})\approx 0.2022N^{2}+1.454N$, while
$\mathcal{F}^{33}=4(\Delta\hat{J}^{z})^{2}\approx 0.3815N^{2}+0.1577N$,
$\mathcal{F}^{55}=4(\Delta\hat{K}^{y})^{2}\approx 0.2512N^{2}+0.8727N$. This
indicates this measurement scheme scales at about $80\%$ of the theoretical
maximum.
Now, we turn our attention to the auxiliary measurement scheme, shown in Fig
5(b). Here, the atomic ensemble first passes through the cavity for a time of
$\chi\tau_{1}=N^{-2/5}$, so that the observables $\hat{J}^{x}$ and
$\hat{K}^{z}$ are well correlated. Then, the phase is encoded on either the
internal degree of freedom or the momentum. By changing the detuning on the
cavity, the unitary may be reversed and a measurement on the non-interacting
degree of freedom may be used to determine the phase. We simulate this scheme
using a phase encoded on the atomic degree of freedom and a momentum
measurement. To diagnose the metrological use, we consider the fidelity
between the $\ket{+1}^{\otimes N}$ momentum state and the final momentum
state. This is the same as measuring whether $\langle\hat{P}_{\rm COM}\rangle$ is equal
to $+N\hbar k/2$ or not. We consider this measurement outcome because for
values of $\phi_{1}$ near zero, a $\hat{K}^{z}$ measurement outcome of $+N/2$
is the most likely outcome, and for $\phi_{1}=0$, it will be the only outcome.
As a result, this fidelity forms an effective probability distribution of
$\phi_{1}$ for just this one measurement outcome of $\hat{K}^{z}$, and groups
together the rest of the possible measurement outcomes. In Appendix D we show
that this effective probability distribution provides a lower bound for the
CFI. The standard deviation of this distribution may be used to calculate a
lower bound for the CFI of this measurement scheme. The standard deviation of
this fidelity is shown in Fig. 7 (a), while the inverted form of the standard
deviation from equation Eq. (10) is compared to the relevant QFIM diagonal
element and shown in Fig. 7 (b). Using the fidelity to represent only one of
the possible measurement outcomes, the uncertainty scales at
$1/\sigma_{Fid}^{2}\approx 0.1699N^{2}$ and from this we see that these
auxiliary measurements allow us to predict the real phase with an uncertainty
that scales with at least $59\%$ of the quantum CRB. This demonstrates that
the auxiliary measurement, while not optimal compared to a direct measurement,
still recovers a large amount of information about the degree of freedom not
being directly measured.
Figure 7: Left - The standard deviation of the final state fidelity,
$\sigma_{Fid}$ with the $\ket{+1}^{\otimes N}$ momentum state. This is found
by fitting the central peak with a Gaussian and offset. Right - The quantities
$1/\sigma_{Fid}^{2}$ and the QFIM element corresponding to rotations about
$\hat{J}^{x}$. We see that $1/\sigma_{Fid}^{2}\approx 0.1699N^{2}+0.1069N$ and
$\mathcal{F}^{ii}|_{\chi t=N^{-2/5}}\approx 0.2874N^{2}-0.0577N$, showing that
this auxiliary measurement reaches about $0.6$ of the quantum CRB.
## VI Conclusion
In this work, we have introduced a novel method which individually squeezes
and entangles two degrees of freedom, and showed there exists a non-trivial
interplay between the atomic internal and momentum degrees of freedom. We have
demonstrated that these extra degrees of freedom might create the opportunity
for multi-parameter metrology at the Heisenberg limit in either degree of
freedom, or for novel metrology schemes which benefit from the entangled
degrees of freedom. The multiparameter and auxiliary schemes shown in the final
section have potential to be the basis for practical tools in matter wave
interferometry. This form of entanglement generation and manipulation
represents a possible new frontier for atom interferometry.
Future work could include adding decoherence in a numerical exploration [41],
and explorations of the existence of multipartite entanglement [42] that may be
realized by this system. We also note that the physical system explored here
might pose experimental challenges. Namely, the regime requiring
$\Delta\gg\sqrt{N}g$ leads to the parameter $\chi$ being small, thereby
requiring long interaction times which are hard to achieve in atomic beam-
cavity experiments. To explore the effects of the small $\chi$ and long
interaction times compared to the decoherence time, one could simulate this
system with full beam dynamics. It would also be interesting to explore the
use of a moving optical lattice [43] to select the atomic transverse momentum,
and trap the atoms in the cavity longer. We are also interested in the
possibility of using the auxiliary measurement scheme for much shorter
interaction times than shown here, $\chi t\ll 1$, such that the degrees of
freedom only become weakly correlated and measurements on one degree of
freedom only perturbatively affect the other degree of freedom. This could
allow for measurements which only extract a small amount of information, but
don’t destroy the quantum state of the other degree of freedom.
Lastly, we point out that the above discussion is centered on realizing Eq. 3,
however the principles discussed here may be relevant to different platforms.
Specifically, we believe coherently controlling a two-component Bose-Einstein
condensate [44, 45] in order to select for interactions, and engineering an
optical lattice to induce spin-momentum couplings in a Bose-Einstein condensate [46]
might lead to similar Lie algebraic structure and allow for controlled
generation of metrologically useful entanglement. The use of a two component
BEC might have the added benefit of relaxing the condition on small $\chi$
that we have here [45].
## Acknowledgments
The authors thank John Cooper, Liang-Ying Chih, and Christopher Wilson for
stimulating discussions. This research was supported by the NSF PFC Grant No.
1734006 and the NSF Q-SEnSE Grant No. OMA 2016244. M.H. acknowledges support
from a Visiting Fellowship from the University of Pisa, Italy. M. L. C.
acknowledges support from the MIT-UNIPI program.
## References
* Preskill [2018] J. Preskill, Quantum 2, 79 (2018).
* Cronin _et al._ [2009a] A. D. Cronin, J. Schmiedmayer, and D. E. Pritchard, Rev. Mod. Phys. 81, 1051 (2009a).
* Tse and et al. [2019] M. Tse and et al., Phys. Rev. Lett. 123, 231107 (2019).
* Hardman _et al._ [2016] K. S. Hardman, P. J. Everitt, G. D. McDonald, P. Manju, P. B. Wigley, M. A. Sooriyabandara, C. C. N. Kuhn, J. E. Debs, J. D. Close, and N. P. Robins, Phys. Rev. Lett. 117, 138501 (2016).
* Pezzè _et al._ [2018] L. Pezzè, A. Smerzi, M. K. Oberthaler, R. Schmied, and P. Treutlein, Rev. Mod. Phys. 90, 035005 (2018).
* Lucchesi and Chiofalo [2019] L. Lucchesi and M. L. Chiofalo, Phys. Rev. Lett. 123, 060406 (2019).
* Giovannetti _et al._ [2011] V. Giovannetti, S. Lloyd, and L. Maccone, Nature Photonics 5 (2011), 10.1038/nphoton.2011.35.
* Pedrozo-Peñafiel _et al._ [2020] E. Pedrozo-Peñafiel, S. Colombo, C. Shu, A. F. Adiyatullin, Z. Li, E. Mendez, B. Braverman, A. Kawasaki, D. Akamatsu, Y. Xiao, and V. Vuletic̀, Nature 588, 414–418 (2020).
* Kessler _et al._ [2014a] E. M. Kessler, I. Lovchinsky, A. O. Sushkov, and M. D. Lukin, Phys. Rev. Lett. 112, 150802 (2014a).
* Chih and Holland [2021] L.-Y. Chih and M. Holland, Phys. Rev. Research 3, 033279 (2021).
* Kaubruegger _et al._ [2021] R. Kaubruegger, D. V. Vasilyev, M. Schulte, K. Hammerer, and P. Zoller, Phys. Rev. X 11, 041045 (2021).
* Marciniak _et al._ [2022] C. D. Marciniak, T. Feldker, I. Pogorelov, R. Kaubruegger, D. V. Vasilyev, R. van Bijnen, P. Schindler, P. Zoller, R. Blatt, and T. Monz, Nature 603, 604 (2022).
* Nolan _et al._ [2021] S. Nolan, A. Smerzi, and L. Pezzè, npj Quantum Information (2021), 10.1038/s41534-021-00497-w.
* Bothwell _et al._ [2022] T. Bothwell, C. J. Kennedy, A. Aeppli, D. Kedar, J. M. Robinson, E. Oelker, A. Staron, and J. Ye, Nature 602, 420 (2022).
* Safronova _et al._ [2018] M. S. Safronova, D. Budker, D. DeMille, D. F. J. Kimball, A. Derevianko, and C. W. Clark, Rev. Mod. Phys. 90, 025008 (2018).
* Wineland _et al._ [1994] D. J. Wineland, J. J. Bollinger, W. M. Itano, and D. J. Heinzen, Phys. Rev. A 50, 67 (1994).
* Gil _et al._ [2014] L. I. R. Gil, R. Mukherjee, E. M. Bridge, M. P. A. Jones, and T. Pohl, Phys. Rev. Lett. 112, 103601 (2014).
* Ma _et al._ [2011] J. Ma, X. Wang, C. Sun, and F. Nori, Physics Reports 509, 89 (2011).
* Kitagawa and Ueda [1993] M. Kitagawa and M. Ueda, Phys. Rev. A 47, 5138 (1993).
* Wineland _et al._ [1992] D. J. Wineland, J. J. Bollinger, W. M. Itano, F. L. Moore, and D. J. Heinzen, Phys. Rev. A 46, R6797 (1992).
* Holland and Burnett [1993] M. J. Holland and K. Burnett, Phys. Rev. Lett. 71, 1355 (1993).
* Degen _et al._ [2017] C. L. Degen, F. Reinhard, and P. Cappellaro, Rev. Mod. Phys. 89, 035002 (2017).
* Gessner _et al._ [2020] M. Gessner, A. Smerzi, and L. Pezzè, Nature Communications 11 (2020), 10.1038/s41467-020-17471-3.
* Shankar _et al._ [2019] A. Shankar, L. Salvi, M. L. Chiofalo, N. Poli, and M. J. Holland, Quantum Science and Technology 4, 045010 (2019).
* Liu _et al._ [2020a] H. Liu, S. B. Jäger, X. Yu, S. Touzard, A. Shankar, M. J. Holland, and T. L. Nicholson, Phys. Rev. Lett. 125, 253602 (2020a).
* Jäger _et al._ [2022] S. B. Jäger, T. Schmit, G. Morigi, M. J. Holland, and R. Betzholz, “Lindblad master equations for quantum systems coupled to dissipative bosonic modes,” (2022).
* Cronin _et al._ [2009b] A. D. Cronin, J. Schmiedmayer, and D. E. Pritchard, Rev. Mod. Phys. 81, 1051 (2009b).
* Li _et al._ [2014] W. Li, T. He, and A. Smerzi, Phys. Rev. Lett. 113, 023003 (2014).
* Schwinger [1952] J. Schwinger, _On Angular Momentum_, Tech. Rep. (US Atomic Energy Commission, 1952).
* Mathur _et al._ [2010] M. Mathur, I. Raychowdhury, and R. Anishetty, Journal of Mathematical Physics 51, 093504 (2010).
* Yukawa and Nemoto [2016] E. Yukawa and K. Nemoto, Journal of Physics A: Mathematical and Theoretical 49, 255301 (2016).
* Roček [1991] M. Roček, Physics Letters B 255, 554 (1991).
* Xu _et al._ [2013] M. Xu, D. A. Tieri, and M. J. Holland, Phys. Rev. A 87, 062101 (2013).
* Pezzé and Smerzi [2009] L. Pezzé and A. Smerzi, Phys. Rev. Lett. 102, 100401 (2009).
* Liu _et al._ [2020b] J. Liu, H. Yuan, X.-M. Lu, and X. Wang, J. Phys. A: Math. Theor. 53 (2020b).
* Meyer [2021] J. J. Meyer, Quantum 5, 539 (2021).
* Abascal and Björk [2007] I. S. Abascal and G. Björk, Phys. Rev. A 75, 062317 (2007).
* Kessler _et al._ [2014b] E. M. Kessler, P. Kómár, M. Bishof, L. Jiang, A. S. Sørensen, J. Ye, and M. D. Lukin, Phys. Rev. Lett. 112, 190403 (2014b).
* Lewis-Swan _et al._ [2018] R. J. Lewis-Swan, M. A. Norcia, J. R. K. Cline, J. K. Thompson, and A. M. Rey, Phys. Rev. Lett. 121, 070403 (2018).
* Braunstein and Caves [1994] S. L. Braunstein and C. M. Caves, Phys. Rev. Lett. 72, 3439 (1994).
* Lepori _et al._ [2021] L. Lepori, A. Trombettoni, D. Giuliano, J. Kombe, J. Y. Malo, A. J. Daley, A. Smerzi, and M. L. Chiofalo, arXiv preprint arXiv:2108.03605 (2021).
* Ren _et al._ [2021] Z. Ren, W. Li, A. Smerzi, and M. Gessner, Phys. Rev. Lett. 126, 080502 (2021).
* Browaeys _et al._ [2005] A. Browaeys, H. Häffner, C. McKenzie, S. L. Rolston, K. Helmerson, and W. D. Phillips, Phys. Rev. A 72, 053605 (2005).
* Kroeze _et al._ [2018] R. M. Kroeze, Y. Guo, V. D. Vaidya, J. Keeling, and B. L. Lev, Phys. Rev. Lett. 121, 163601 (2018).
* Chen _et al._ [2020] L. Chen, Y. Zhang, and H. Pu, Phys. Rev. A 102, 023317 (2020).
* Khamehchi _et al._ [2016] M. Khamehchi, C. Qu, M. Mossman, C. Zhang, and P. Engels, Nature communications 7, 1 (2016).
## Appendix A Schwinger Boson Representation
### A.1 Normalization Coefficient
The symmetrizer in Eq. (4) is defined with the normalization factor,
$1/\mathcal{M}_{\alpha,\beta,\gamma,\delta}$, such that
$\mathcal{M}_{\alpha,\beta,\gamma,\delta}=\sqrt{\frac{N!}{\alpha!\beta!\gamma!\delta!}},$ (11)
so that the bosonic state representation is normalized. In fact, we can see
that $\mathcal{M}_{\alpha,\beta,\gamma,\delta}^{2}$ is just a multinomial
coefficient so this normalization makes our bosonic modes match a
straightforward second quantization of the system’s degrees of freedom.
### A.2 Creation and Annihilation Operators
For completeness, we present the remaining three annihilation operators not shown
in the paper:
$\hat{b}\ket{\alpha,\beta,\gamma,\delta}=\sqrt{\beta}\ket{\alpha,\beta-1,\gamma,\delta},$ (12)
$\hat{c}\ket{\alpha,\beta,\gamma,\delta}=\sqrt{\gamma}\ket{\alpha,\beta,\gamma-1,\delta},$ (13)
$\hat{d}\ket{\alpha,\beta,\gamma,\delta}=\sqrt{\delta}\ket{\alpha,\beta,\gamma,\delta-1}.$ (14)
## Appendix B Interaction Picture for the Simplified Hamiltonian
Starting with the Hamiltonian Eq. (7), we can choose a different interaction
picture, such that
$\hat{H}-\chi\hat{J}^{z}=\chi(\hat{E}^{2}-(\hat{J}^{z})^{2}+\hat{J}^{z})-\chi\hat{J}^{z}=\chi(\hat{E}^{2}-(\hat{J}^{z})^{2}).$ (15)
This is equivalent to choosing
$\hat{H}_{0}=\sum_{j=1}^{N}\hbar\omega_{a}\hat{\sigma}^{z}_{j}/2+\chi\sum_{j=1}^{N}\hat{\sigma}^{z}_{j}/2+\hbar\omega_{a}\hat{a}_{c}^{\dagger}\hat{a}_{c}$
for our transformation into the interaction picture. This leads to an extra
phase on the Pauli raising operator for the $j^{th}$ atom, so
$\hat{\sigma}^{+}_{j}(t)=e^{-i\chi t}\hat{\sigma}^{+}_{j}$ in the
interaction picture. However, this phase cancels after the adiabatic
elimination of the cavity mode. Thus, we may effectively ignore the
$\chi\hat{J}^{z}$ term appearing in the Hamiltonian. Our Hamiltonian is then
$\hat{H}=\chi\sum_{i,j=1}^{N}\hat{s}_{i}^{x}\hat{s}_{j}^{x}\hat{\sigma}_{i}^{+}\hat{\sigma}_{j}^{-}-\frac{\chi}{2}\sum_{j=1}^{N}\hat{\sigma}^{z}_{j}$ (16)
$=\chi(\hat{a}^{\dagger}\hat{b}+\hat{c}^{\dagger}\hat{d})(\hat{a}\hat{b}^{\dagger}+\hat{c}\hat{d}^{\dagger})-\frac{\chi}{2}(\hat{a}^{\dagger}\hat{a}+\hat{c}^{\dagger}\hat{c}-\hat{b}^{\dagger}\hat{b}-\hat{d}^{\dagger}\hat{d})$
$=\chi(\hat{a}^{\dagger}\hat{a}\hat{b}^{\dagger}\hat{b}+\hat{c}^{\dagger}\hat{c}\hat{d}^{\dagger}\hat{d}+\hat{a}^{\dagger}\hat{b}\hat{c}\hat{d}^{\dagger}+\hat{a}\hat{b}^{\dagger}\hat{c}^{\dagger}\hat{d}+\hat{a}^{\dagger}\hat{a}+\hat{c}^{\dagger}\hat{c}-\frac{1}{2}\hat{n}_{e}+\frac{1}{2}\hat{n}_{g})$
$=\chi(\hat{a}^{\dagger}\hat{a}\hat{b}^{\dagger}\hat{b}+\hat{c}^{\dagger}\hat{c}\hat{d}^{\dagger}\hat{d}+\hat{a}^{\dagger}\hat{b}\hat{c}\hat{d}^{\dagger}+\hat{a}\hat{b}^{\dagger}\hat{c}^{\dagger}\hat{d}+\frac{1}{2}\hat{n}_{e}+\frac{1}{2}\hat{n}_{g})$
$=\chi(\hat{E}^{2}-(\hat{J}^{z})^{2}).$
In the last line we have used the fact that
$\frac{1}{2}\hat{n}_{e}+\frac{1}{2}\hat{n}_{g}=N/2$ together with the
definition of $\hat{E}^{2}$; such a constant term only contributes a global phase.
## Appendix C Bayesian Inferencing Algorithm
In Section V, we use Bayes theorem to carry out Bayesian inferencing. We aim
to construct a probability distribution $P(\vec{\phi}|\vec{\epsilon})$, where
$\vec{\phi}=(\phi_{3},\phi_{5})$ and $\vec{\epsilon}$ is a measurement log
derived from a weighted random sampling of possible measurement outcomes.
Here, we use the fact that
$\hat{J}^{x}=\sum_{i}\lambda^{x}_{i}\hat{\Pi}^{x}_{i}$ and
$\hat{K}^{z}=\sum_{j}\lambda^{z}_{j}\hat{\Pi}^{z}_{j}$, for eigenvalues
$\lambda^{x}_{i},\lambda^{z}_{j}$ and projective operators
$\hat{\Pi}^{x}_{i},\hat{\Pi}^{z}_{j}$. Both sets of projective operators form
a complete positive operator valued measure on the set of states.
We simulate a measurement by choosing an outcome, $\epsilon_{i,j}$,
corresponding to finding eigenvalue $\lambda^{x}_{i}$ for a $\hat{J}^{x}$
measurement and $\lambda^{z}_{j}$ for a $\hat{K}^{z}$ measurement. This
outcome is chosen at random by sampling a list of all possible outcomes with
weights
$P(\epsilon_{i,j})=\bra{\psi}\hat{V}^{\dagger}\hat{\Pi}^{x}_{i}\hat{\Pi}^{z}_{j}\hat{V}\ket{\psi}$,
where $\hat{V}$ is given in Eq. (9) and
$\ket{\psi}=\exp(-it\hat{H})\ket{\psi_{0}}$. Through this process we generate
the measurement log, $\vec{\epsilon}$.
We start with a flat prior distribution, $P(\vec{\phi})=(2\pi)^{-2}$, and
update our probability distribution with each measurement outcome according to
$P_{m+1}(\vec{\phi}|\epsilon_{i,j})=P(\epsilon_{i,j}|\vec{\phi})\frac{P_{m}(\vec{\phi})}{P(\epsilon_{i,j})},$ (17)
where
$P(\epsilon_{i,j}|\vec{\phi})=\bra{\psi}\hat{V}_{\text{est}}^{\dagger}(\vec{\phi})\hat{\Pi}^{x}_{i}\hat{\Pi}^{z}_{j}\hat{V}_{\text{est}}(\vec{\phi})\ket{\psi}$
with $\hat{V}_{\text{est}}(\vec{\phi})$ being a numerical reconstruction of
the unitary, $P(\epsilon_{i,j})$ is the probability of the measurement outcome
integrated over all values of $\phi_{3},\phi_{5}$, $P_{m}(\vec{\phi})$ is the
probability distribution from the first $m$ measurements, and
$P_{m+1}(\vec{\phi}|\vec{\epsilon})$ is the updated probability distribution.
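The update rule of Eq. (17) can be prototyped on a grid. The sketch below is ours; the binary-outcome likelihood is a hypothetical stand-in, not the projector-based $P(\epsilon_{i,j}|\vec{\phi})$ computed in the actual simulation, but it runs the same flat-prior update for two phases:

```python
import numpy as np

def likelihood(eps, phi3, phi5):
    """Hypothetical stand-in for P(eps | phi): two independent binary outcomes."""
    return (0.5 * (1 + eps[0] * np.cos(phi3))
            * 0.5 * (1 + eps[1] * np.cos(phi5)))

grid = np.linspace(-np.pi, np.pi, 201)
P3, P5 = np.meshgrid(grid, grid, indexing="ij")
post = np.ones_like(P3)
post /= post.sum()                      # flat prior

rng = np.random.default_rng(1)
phi_true = (np.pi / 16, np.pi / 16)
for _ in range(500):
    # Simulated measurement log: draw each binary outcome from the true phases.
    eps = tuple(int(rng.choice([-1, 1], p=[(1 - np.cos(p)) / 2,
                                           (1 + np.cos(p)) / 2]))
                for p in phi_true)
    post *= likelihood(eps, P3, P5)     # Eq. (17): multiply by P(eps|phi)...
    post /= post.sum()                  # ...and divide by P(eps) (normalization)

i, j = np.unravel_index(post.argmax(), post.shape)
# Posterior mode; note this toy model is even in phi, so it cannot resolve sign.
print(grid[i], grid[j])
```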
We can predict the asymptotic behavior of Bayesian analysis from the classical
Fisher information (CFI). The CFI can be explicitly calculated:
$I(\hat{G}^{i})=\sum_{j}\left(\frac{d}{d\phi_{i}}\ln(P(\epsilon_{j}|\phi_{i}))\right)^{2}P(\epsilon_{j}|\phi_{i}),$ (18)
where $\epsilon_{j}$ represents the $j$th measurement outcome, and
$P(\epsilon_{j}|\phi_{i})$ is the same probability distribution, but
marginalized over any independent variables besides $\phi_{i}$. For example,
if $i=3$ such that $\hat{G}^{3}=\hat{J}^{z}$, we have that
$P(\epsilon_{j}|\phi_{3})=\Tr_{J}\left[\hat{\Pi}^{x}_{j}\,\Tr_{K}(\hat{V}_{\text{est}}(\vec{\phi})\ket{\psi}\bra{\psi}\hat{V}_{\text{est}}^{\dagger}(\vec{\phi}))\right],$ (19)
where $\Tr_{J}$ and $\Tr_{K}$ are the traces over the atomic internal degree
of freedom, and momentum degree of freedom respectively. The CFI we consider
is only dependent on a single degree of freedom because we only use it in a
comparison to a diagonal element of the QFIM.
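For the toy binary likelihood used in the sketch above, Eq. (18) can be evaluated with a finite-difference derivative (our illustration; for this stand-in model the analytic CFI is exactly 1 per measurement for generic $\phi$):

```python
import numpy as np

def p(eps, phi):
    """Toy binary outcome probability from the sketch above."""
    return 0.5 * (1 + eps * np.cos(phi))

def cfi(phi, h=1e-6):
    """Eq. (18) with a central finite-difference derivative."""
    total = 0.0
    for eps in (-1, 1):
        dp = (p(eps, phi + h) - p(eps, phi - h)) / (2 * h)
        total += dp**2 / p(eps, phi)
    return total

print(cfi(np.pi / 16))  # ~1.0: the analytic CFI of this binary model
```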
## Appendix D Fidelity as a Lower Bound of the CRB
The trace over one degree of freedom in this system is very hard to calculate,
even numerically, because correlations between the atomic energy levels and
momentum states occur on an atom-by-atom basis, whereas large simulations are
only feasible using the second-quantization picture we show in this paper.
This posed challenges for calculating the scaling behavior of the auxiliary
scheme, where one wants to measure one degree of freedom but not the other.
Here we briefly show that the fidelity between an eigenstate of an observable
and the state one wishes to measure serves as a suitable lower bound on the
actual set of measurements, without needing to take the trace of a degree of
freedom.
One may analytically calculate the CFI with respect to a $\hat{J}^{x}$
rotation and a full $\hat{K}^{z}$ measurement as follows,
$I(\hat{J}^{x})=\sum_{j=-N/2}^{+N/2}p_{j}\left(\frac{\partial}{\partial\phi_{1}}\log(p_{j})\right)^{2},$ (20)
where $p_{j}$ represents the probability of the $j^{\text{th}}$ measurement
outcome of $\hat{K}^{z}$, for example
$p_{N/2}=\bra{+1}^{\otimes N}\text{Tr}_{J}(\ket{\psi}\bra{\psi})\ket{+1}^{\otimes N}$, where
$\text{Tr}_{J}$ means we first trace over the atomic degrees of freedom. We
also have that
$p_{j}\left(\frac{\partial}{\partial\phi_{1}}\log(p_{j})\right)^{2}\geq 0$ for
all outcomes $j$, so we can observe that
$p_{N/2}\left(\frac{\partial}{\partial\phi_{1}}\log(p_{N/2})\right)^{2}+p_{j\neq N/2}\left(\frac{\partial}{\partial\phi_{1}}\log(p_{j\neq N/2})\right)^{2}\leq\sum_{j=-N/2}^{+N/2}p_{j}\left(\frac{\partial}{\partial\phi_{1}}\log(p_{j})\right)^{2},$ (21)
where $p_{j\neq N/2}=1-p_{N/2}$ is the probability of not measuring
$\hat{K}^{z}=+N/2$. These two probabilities, $p_{N/2}$ and $p_{j\neq N/2}$ can
be calculated without the use of a trace via the fidelity. This is because one
may observe that under the time evolution of the Hamiltonian, the only atomic
states entangled to the $\ket{+1}^{\otimes N}$ momentum state are the ones in the initial
atomic configuration, $\ket{+}^{\otimes N}$. Otherwise, momentum flips occur in pairs
from the $\hat{s}^{x}_{i}\hat{s}^{x}_{j}$ term in the Hamiltonian. Therefore,
the CFI of this single probability distribution, $p_{N/2}$, serves as a lower
bound for the CFI by construction, because this would be the same as measuring
if $\hat{K}^{z}$ is $+N/2$ or not.
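For concreteness, this binary-outcome bound can be evaluated from a model of
$p_{N/2}(\phi)$ alone. The following is a minimal Python sketch, assuming an
illustrative interference-fringe model $p(\phi)=\cos^{2}(N\phi/2)$ and a
finite-difference derivative; the model and function names are hypothetical
stand-ins, not the probabilities computed in the main text.

```python
import numpy as np

def binary_cfi(p_func, phi, dphi=1e-5):
    """CFI of the two-outcome distribution {p, 1 - p} at phi, with
    dp/dphi computed by a central finite difference."""
    p = p_func(phi)
    dp = (p_func(phi + dphi) - p_func(phi - dphi)) / (2.0 * dphi)
    # I = sum_j p_j (d log p_j / dphi)^2 = dp^2/p + dp^2/(1 - p)
    return dp**2 / p + dp**2 / (1.0 - p)

# Hypothetical stand-in for p_{N/2}(phi): an N-fold interference fringe.
N = 10
p_model = lambda phi: np.cos(N * phi / 2.0) ** 2

# Lower-bounds the CFI of the full K^z measurement at this phase.
print(binary_cfi(p_model, 0.3))
```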
|
# Marginal Treatment Effects and Monotonicity
Henrik Sigstad BI Norwegian Business School, Department of Economics (e-mail:
henrik.sigstad@bi.no). Thanks to Magne Mogstad and Vitor Possebom.
###### Abstract
How robust are analyses based on marginal treatment effects (MTE) to
violations of Imbens & Angrist, (1994) monotonicity? In this note, I present
weaker forms of monotonicity under which popular MTE-based estimands still
identify the parameters of interest.
## 1 Introduction
Marginal treatment effects (MTE), introduced by Björklund & Moffitt, (1987)
and generalized by Heckman & Vytlacil, (1999, 2005), provide a unified way of
estimating various treatment effects with continuous instruments. For
instance, MTE analysis can be used to identify the average treatment effect,
the average treatment effect on the treated and the untreated, and other
policy-relevant treatment effects. In contrast, with a continuous instrument,
two-stage least squares identifies a convex combination of treatment effects
that is not necessarily of policy interest (Heckman & Vytlacil, 2007b). MTE
analysis, however, relies on Imbens & Angrist, (1994) monotonicity—often a
strong assumption. For instance, in the context of judge IV designs, Imbens &
Angrist, (1994) monotonicity requires that each judge is weakly stricter than
more lenient judges _in each case_. Thus, if Judge A is stricter than Judge B
in one case, Judge A cannot be more lenient than Judge B in another case. As
shown in Sigstad, (2024), this assumption is frequently violated among
judges.
It is thus important to understand how MTE-based treatment effect estimates
are affected by monotonicity violations. In this note, I derive necessary and
sufficient monotonicity conditions for MTE-based estimates of popular
treatment effects to identify the parameters of interest. Fortunately, it
turns out that even when Imbens-Angrist monotonicity is violated, MTE-based
estimates of these parameters might still be consistent. I first consider MTE-
based estimates of LATE—the average treatment effect for agents affected by
the instrument. The necessary and sufficient condition for MTE analysis to
identify LATE is that monotonicity holds between the two most extreme
instrument values. For instance, in the random-judge design, this condition
requires that the strictest judge is always stricter than the most lenient
judge. As shown in Sigstad, (2024), this condition is much more plausible in
random-judge designs than Imbens & Angrist, (1994) monotonicity. Thus, even
though conventional MTE analysis assumes Imbens-Angrist monotonicity, MTE-
based LATE estimates can be highly robust to plausible levels of monotonicity
violations.
Next, I consider estimates of the average treatment effect on the treated
(ATT) and the untreated (ATUT) for the complier population. MTE-based ATT
(ATUT) estimates are consistent as long as Imbens-Angrist monotonicity holds
for all pairs of instrument values involving the lowest (highest) instrument
value. For instance, in the random-judge design, these conditions require that
the most lenient (stringent) judge is most lenient (stringent) in all cases.
These conditions are more demanding than the condition required to estimate
LATE. Estimates of ATT and ATUT are thus more sensitive to monotonicity
violations.
I also consider MTE-based estimates of the average treatment effect (ATE),
which require extrapolation beyond the observed instrument values. As long as
this extrapolation is well specified, MTE-based ATE estimates are consistent
without any monotonicity assumption. Such estimates are equivalent to the
Arnold et al., (2021) approach to estimating average treatment effects.
Finally, I consider the use of MTEs to assess heterogeneous treatment effects
by treatment propensity. As long as attention is limited to aggregate
properties of the MTE curve, this practice also requires only mild
monotonicity assumptions.
While these analyses show that MTE-based estimators are relatively robust to
monotonicity violations, the intermediate step of estimating marginal
treatment effects is not a meaningful exercise when monotonicity is violated.
Instead, I propose to directly estimate the relevant treatment parameters
without first estimating an “MTE curve”. I show that whenever MTE analysis
identifies LATE, LATE is identified by a standard Wald estimand: the
difference in average outcomes between agents receiving the highest instrument
value and agents receiving the lowest instrument value divided by the
difference in treatment propensities. Similar results are obtained for the
average treatment effects on the treated and on the untreated for the complier
population. There are several reasons to prefer such a direct estimation of
treatment parameters over MTE-based estimation when monotonicity is violated.
First, the direct estimation is more honest and clarifies the necessary
identification assumptions. Second, the target parameters can easily be
estimated non-parametrically, without requiring a fully continuous instrument.
Finally, by targeting a specific parameter rather than the full MTE curve, the
parameter can be more precisely estimated.
## 2 Marginal Treatment Effects and Monotonicity
Fix a probability space with the outcome corresponding to a randomly drawn
agent. Define the following random variables: A binary treatment
$D\in\left\\{0,1\right\\}$, an outcome $Y\in\mathbb{R}$, and a continuous
instrument $Z\in\mathbb{R}$ with support $\left[\underline{z},\bar{z}\right]$.
To capture the idea that different agents might be induced into treatment in
different ways by the instrument, define a _response type_ as a mapping
$s:\left[\underline{z},\bar{z}\right]\rightarrow\left\\{0,1\right\\}$ from
instrument values to treatments.111Response types were introduced by Heckman &
Pinto, (2018). Denote by the random variable $S$ the response type of the
randomly drawn agent. If $S=s$ for agent $i$, then $s\left(z\right)=1$
indicates that agent $i$ will receive the treatment if $Z$ is set to $z$.
Denote by $\mathcal{S}$ the set of all response types in the population.
Define $Y\left(0\right)$ and $Y\left(1\right)$ as the _potential outcomes_
when $D$ is set to $0$ and $1$, respectively. Denote by the random variable
$\beta\equiv Y\left(1\right)-Y\left(0\right)$ the treatment effect of agent
$i$. Let $p\left(z\right)\equiv\operatorname{E}\left[S\left(z\right)\right]$
be the share of agents receiving treatment at $Z=z$. I assume the following
throughout
###### Assumption 1.
(Exogeneity and Exclusion).
$\left\\{Y\left(0\right),Y\left(1\right),S\right\\}\perp Z$
###### Assumption 2.
(First stage). The propensity $p\left(z\right)$ is a non-trivial function of
$z$.
To simplify the notation, assume (without loss of generality) that the
instrument values are labeled such that
###### Assumption 3.
$p\left(z\right)=z$.
Imbens & Angrist, (1994) monotonicity is then defined by
###### Definition 1 (Imbens-Angrist Monotonicity).
For all $z,z^{\prime}\in\mathbb{R}$ and $s\in\mathcal{S}$
$z\geq z^{\prime}\Rightarrow s\left(z\right)\geq s\left(z^{\prime}\right)$
Marginal treatment effects were introduced by Björklund & Moffitt, (1987) and
generalized by Heckman & Vytlacil, (1999).222See Heckman & Vytlacil,
(2007b). In applied work (_e.g._ , Arnold et al., 2018; Bhuller et al.,
2020), marginal treatment effect analysis relies on a generalized Roy, (1951)
selection model
$D=\mathbf{1}\left[Z>U\right]$
where $U$ is a random variable.333See Heckman & Vytlacil, (2007a, b). To
simplify the exposition, I disregard covariates. The agent receives treatment
if the instrument is above the agent’s unobserved _resistance to treatment_
$U$. This model implicitly assumes Imbens-Angrist monotonicity.444Consider two
instrument values $z_{1}\geq z_{2}$. Then $D\left(z_{1}\right)\geq
D\left(z_{2}\right)$ for all agents. A _marginal treatment effect_ is then
defined as the average treatment effect for agents with a given treatment
propensity:
$\operatorname{MTE}\left(u\right)=\operatorname{E}\left[\beta\mid U=u\right]$
Marginal treatment effects can then be identified using the local instrumental
variable approach (Heckman & Vytlacil, 1999, 2005):
$\operatorname{LIV}\left(u\right)\equiv\frac{d\operatorname{E}\left[Y\mid
Z=u\right]}{du}$
Assume this derivative exists. Under Imbens-Angrist monotonicity, we have
$\operatorname{LIV}\left(u\right)=\operatorname{MTE}\left(u\right)$. But
$\operatorname{LIV}\left(u\right)$ is defined even when Imbens-Angrist
monotonicity does not hold.
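To make the local-instrumental-variable estimand concrete, the following is a
minimal Python sketch that computes $\operatorname{LIV}(u)$ as the slope of a
local linear regression of $Y$ on $Z$ in simulated data. The data-generating
process (a Roy model with $\operatorname{MTE}(u)=1+2u$), the rectangular
kernel, and the bandwidth are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated Roy model: D = 1[Z > U], so Imbens-Angrist monotonicity
# holds by construction and LIV(u) = MTE(u) = 1 + 2u in this toy DGP.
n = 50_000
Z = rng.uniform(0.0, 1.0, n)                 # instrument, normalized so p(z) = z
U = rng.uniform(0.0, 1.0, n)                 # resistance to treatment
D = (Z > U).astype(float)
beta = 1.0 + 2.0 * U + rng.normal(0.0, 1.0, n)  # heterogeneous effects
Y = rng.normal(0.0, 1.0, n) + D * beta

def liv(u, h=0.05):
    """Estimate LIV(u) = d E[Y | Z = u] / du as the slope of a local
    linear fit of Y on (Z - u) within a bandwidth-h window."""
    w = np.abs(Z - u) < h
    return np.polyfit(Z[w] - u, Y[w], 1)[0]

for u in (0.3, 0.5, 0.7):
    print(u, liv(u))  # should be close to 1 + 2u
```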
The applied literature uses MTE analysis for two purposes. First, to estimate
meaningful treatment parameters such as LATE and ATE (_e.g._ , Arnold et al.,
2018; Bhuller et al., 2020). Second, to assess heterogeneous treatment
effects based on the treatment propensity $U$ by directly inspecting
$\operatorname{LIV}\left(u\right)$ (_e.g._ , Doyle Jr, 2007; Maestas et al.,
2013; French & Song, 2014).
### 2.1 Using MTE to Identify Meaningful Treatment Parameters
Heckman & Vytlacil, (1999, 2005) showed that many popular treatment
parameters—including the average treatment effect (ATE)—can be identified by a
weighted average of marginal treatment effects. MTE analysis can thus be used
to identify more meaningful treatment parameters than the weighted average of
individual treatment effects produced by 2SLS. Identifying ATEs using MTE
analysis, however, requires full support of $Z$ in $\left[0,1\right]$ or
extrapolation beyond the support of $Z$. Since $Z$ typically does not have
full support in practice, the literature instead normally seeks to estimate
the local average treatment effect (LATE) for agents receiving treatment when
$Z=\bar{z}$ and not when $Z=\underline{z}$—agents with
$D\left(\underline{z}\right)<D\left(\bar{z}\right)$. This parameter differs
from the 2SLS estimand by placing equal weight on all compliers. The
literature (_e.g._ , Bhuller et al., 2020) has also considered the average
treatment effect on the treated and the average treatment effect on the
untreated for the same population. Since these treatment effects are
“local”—defined on the complier population—I label them LATT and LATUT,
respectively. Formally, define
$\displaystyle\operatorname{LATE}$ $\displaystyle\equiv$
$\displaystyle\operatorname{E}\left[\beta\mid
S\left(\underline{z}\right)=0,S\left(\bar{z}\right)=1\right]$
$\displaystyle\operatorname{LATT}$ $\displaystyle\equiv$
$\displaystyle\operatorname{E}\left[\beta\mid
S\left(\underline{z}\right)=0,D=1\right]$ $\displaystyle\operatorname{LATUT}$
$\displaystyle\equiv$ $\displaystyle\operatorname{E}\left[\beta\mid
S\left(\bar{z}\right)=1,D=0\right]$
The $\operatorname{LATE}$ parameter is the local average treatment effect for
agents receiving treatment under the highest instrument value but not under
the lowest instrument value. The $\operatorname{LATT}$ and
$\operatorname{LATUT}$ parameters are the (local) average treatment effects on
the treated and the untreated for a similar complier population.555The
$\operatorname{LATT}$ and $\operatorname{LATUT}$ complier population includes
all agents except never-takers and always-takers. The $\operatorname{LATE}$
complier population additionally excludes, for instance, response types
receiving treatment under some intermediate instrument values but not under
the highest or the lowest instrument value. I do not see a way to identify a
local average treatment effect that also covers such compliers. As shown by Heckman &
Vytlacil, (1999, 2005), these parameters are identified under Imbens-Angrist
monotonicity by:
$\displaystyle\tilde{\operatorname{LATE}}$ $\displaystyle\equiv$
$\displaystyle\frac{1}{\bar{z}-\underline{z}}\int_{\underline{z}}^{\bar{z}}\operatorname{LIV}\left(u\right)du$
$\displaystyle\tilde{\operatorname{LATT}}$ $\displaystyle\equiv$
$\displaystyle\frac{1}{\operatorname{E}\left[Z\right]-\underline{z}}\int_{\underline{z}}^{\bar{z}}\Pr\left[Z>u\right]\operatorname{LIV}\left(u\right)du$
$\displaystyle\tilde{\operatorname{LATUT}}$ $\displaystyle\equiv$
$\displaystyle\frac{1}{\bar{z}-\operatorname{E}\left[Z\right]}\int_{\underline{z}}^{\bar{z}}\Pr\left[Z<u\right]\operatorname{LIV}\left(u\right)du$
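Given an estimated LIV curve and the distribution of $Z$, these estimands
reduce to one-dimensional quadratures. A minimal sketch, assuming (purely for
illustration) a toy curve $\operatorname{LIV}(u)=1+2u$ and
$Z\sim\text{Uniform}\left[0,1\right]$:

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule on a grid (kept explicit for clarity)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

# Toy inputs: an "estimated" LIV curve and the instrument distribution.
u = np.linspace(0.0, 1.0, 2001)
liv = 1.0 + 2.0 * u              # illustrative stand-in for an estimated LIV curve
z_lo, z_hi, EZ = 0.0, 1.0, 0.5   # support and mean of Z ~ Uniform[0, 1]
P_gt = 1.0 - u                   # Pr[Z > u]
P_lt = u                         # Pr[Z < u]

LATE = trap(liv, u) / (z_hi - z_lo)
LATT = trap(P_gt * liv, u) / (EZ - z_lo)
LATUT = trap(P_lt * liv, u) / (z_hi - EZ)
print(LATE, LATT, LATUT)         # 2.0, 5/3, 7/3 for this toy curve
```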
When Imbens-Angrist monotonicity is violated, however, this method might lead
to wrong conclusions. How sensitive are these estimands to monotonicity
violations? Fortunately, it turns out that MTE analysis might still identify
$\operatorname{LATE}$, $\operatorname{LATT}$, and $\operatorname{LATUT}$ even
when Imbens-Angrist monotonicity is violated.
Formally, let $\mathcal{G}$ be the set of all possible joint distributions of
$\left(Y\left(1\right),Y\left(0\right),S\right)$. To allow for arbitrary
heterogeneous effects, we do not want to place any restrictions on this joint
distribution. The necessary and sufficient conditions for
$\tilde{\operatorname{LATE}}$, $\tilde{\operatorname{LATT}}$,
$\tilde{\operatorname{LATUT}}$ to be consistent under arbitrary heterogeneous
effects are the following much weaker monotonicity conditions:
###### Theorem 1.
(Identification results).
i)
$\tilde{\operatorname{LATE}}=\operatorname{LATE}$ for all $g\in\mathcal{G}$ if
and only if $s\left(\bar{z}\right)\geq s\left(\underline{z}\right)$ for all
$s\in\mathcal{S}$.
ii)
$\tilde{\operatorname{LATT}}=\operatorname{LATT}$ for all $g\in\mathcal{G}$ if
and only if $s\left(z\right)\geq s\left(\underline{z}\right)$ for all
$s\in\mathcal{S}$ and $z$.
iii)
$\tilde{\operatorname{LATUT}}=\operatorname{LATUT}$ for all $g\in\mathcal{G}$
if and only if $s\left(\bar{z}\right)\geq s\left(z\right)$ for all
$s\in\mathcal{S}$ and $z$.
Thus, for MTE analysis to identify $\operatorname{LATE}$, it is sufficient
that monotonicity holds between the lowest and the highest instrument value.
This condition is substantially weaker than Imbens-Angrist monotonicity,
especially when there are many possible instrument values. Similarly,
$\operatorname{LATT}$ is identified by MTE analysis whenever monotonicity
holds for all instrument value pairs that involve the lowest instrument value.
This condition is stronger than the $\operatorname{LATE}$ condition but still
considerably weaker than Imbens-Angrist monotonicity—monotonicity is allowed
to be violated for all instrument value pairs that do not include the lowest
instrument value. For MTE analysis to identify $\operatorname{LATUT}$, on the
other hand, monotonicity must hold for all instrument value pairs that involve
the _highest_ instrument value.
### 2.2 Estimating ATE by Extrapolating the MTE curve
When $f\left(u\right)\equiv\operatorname{E}\left[Y\mid Z=u\right]$ is
estimated parametrically, one might seek to extrapolate beyond the support of
$Z$ to estimate the average treatment effect,
$\operatorname{ATE}\equiv\operatorname{E}\left[\beta\right]$. In particular,
let $\hat{f}:\left[0,1\right]\rightarrow\mathbb{R}$ be an extrapolation of $f$
that covers the full interval $\left[0,1\right]$. The corresponding MTE curve
is defined by
$\hat{\operatorname{LIV}}\left(u\right)\equiv\hat{f}^{\prime}\left(u\right)$.
One can then estimate $\operatorname{ATE}$ by
$\tilde{\operatorname{ATE}}\equiv\int_{0}^{1}\hat{\operatorname{LIV}}\left(u\right)du$
How do monotonicity violations influence such analysis? By the fundamental
theorem of calculus,
$\tilde{\operatorname{ATE}}=\hat{f}\left(1\right)-\hat{f}\left(0\right)$. If
the extrapolation is well specified, $\hat{f}\left(1\right)$ can be thought of
as the average outcome for agents in the hypothetical case of receiving
$Z=1$.666In the context of the judge IV design, it would correspond to the
average outcomes for defendants randomly assigned a hypothetical supremely
stringent judge that always incarcerates. In that case,
$\hat{f}\left(1\right)=\operatorname{E}\left[Y\left(1\right)\right]$.
Similarly, $\hat{f}\left(0\right)$ can be thought of as the average outcome
for agents had they been assigned $Z=0$, which gives
$\hat{f}\left(0\right)=\operatorname{E}\left[Y\left(0\right)\right]$. We thus
get that
$\tilde{\operatorname{ATE}}=\operatorname{E}\left[Y\left(1\right)-Y\left(0\right)\right]=\operatorname{ATE}$
if the extrapolation is well specified—$\hat{f}$ is able to identify the
average outcome for agents in the hypothetical cases of receiving $Z=1$ and
$Z=0$. Formally
###### Proposition 1.
$\tilde{\operatorname{ATE}}=\operatorname{ATE}$ if
$\hat{f}\left(1\right)=\operatorname{E}\left[Y\left(1\right)\right]$ and
$\hat{f}\left(0\right)=\operatorname{E}\left[Y\left(0\right)\right]$.
The MTE-based estimator of $\operatorname{ATE}$ is equivalent to the estimator
proposed by Arnold et al., (2021), who extrapolate towards a supremely
lenient judge to estimate the ATE of pre-trial release on pre-trial misconduct
in a judge IV setting. As pointed out by Arnold et al., (2021), this approach
does not require any monotonicity assumptions. Thus, monotonicity violations
do not affect the validity of this approach.
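A minimal sketch of this extrapolation logic in simulated data, where (as an
illustrative assumption) $f$ is exactly quadratic so that a quadratic fit is
well specified:

```python
import numpy as np

rng = np.random.default_rng(1)

# Z has limited support [0.2, 0.8]: estimating ATE requires extrapolating
# f(u) = E[Y | Z = u] to the full interval [0, 1].
n = 100_000
Z = rng.uniform(0.2, 0.8, n)
U = rng.uniform(0.0, 1.0, n)
D = (Z > U).astype(float)
Y = rng.normal(0.0, 1.0, n) + D * (1.0 + 2.0 * U + rng.normal(0.0, 1.0, n))

# In this DGP, f(u) = u + u^2, so a quadratic specification is correct;
# in applications, the choice of specification is the key assumption.
f_hat = np.poly1d(np.polyfit(Z, Y, 2))
ATE_tilde = f_hat(1.0) - f_hat(0.0)  # equals the integral of LIV-hat over [0, 1]
print(ATE_tilde)                     # true ATE = E[1 + 2U] = 2
```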
### 2.3 Using MTE to Analyze Heterogeneous Effects
The literature also uses the MTE framework to assess heterogeneous treatment
effects based on the treatment propensity $U$ by directly inspecting
$\operatorname{LIV}\left(u\right)$—the “MTE curve” (_e.g._ , Doyle Jr, 2007;
Maestas et al., 2013; French & Song, 2014). But
$\operatorname{LIV}\left(u\right)$ is difficult to interpret when Imbens-
Angrist monotonicity is violated. To see this, it is instructive to consider
$\operatorname{LIV}\left(u\right)$ as the limit of a standard Wald estimand:
$\operatorname{LIV}\left(u\right)=\lim_{v\rightarrow
u}\tilde{\operatorname{LATE}}_{u,v}$
where
$\tilde{\operatorname{LATE}}_{u,v}\equiv\frac{\operatorname{E}\left[Y\mid
Z=u\right]-\operatorname{E}\left[Y\mid Z=v\right]}{u-v}$
Under Imbens-Angrist monotonicity, $\tilde{\operatorname{LATE}}_{u,v}$
identifies
$\operatorname{LATE}_{u,v}\equiv\operatorname{E}\left[\beta\mid
D\left(u\right)>D\left(v\right)\right]$
the local average treatment effect for agents receiving treatment at $Z=u$
but not at $Z=v$. As $v$ approaches $u$, however, Imbens-Angrist
monotonicity between $v$ and $u$ might be unlikely.777In the context of
judges, Imbens-Angrist monotonicity is less likely for judge pairs with
similar stringencies than for judge pairs with more different stringencies
(Sigstad, 2024). Individual points of the MTE curve are then hard to
interpret. But looking at more aggregate properties of the MTE curve could
still be meaningful. For instance, the average of
$\operatorname{LIV}\left(u\right)$ across a range
$u\in\left[\underline{u},\bar{u}\right]$ identifies LATE for agents receiving
treatment at $Z=\bar{u}$ but not at $Z=\underline{u}$ when monotonicity holds
between these instrument values:
###### Proposition 2.
$\operatorname{E}\left[\operatorname{LIV}\left(U\right)\mid\underline{u}\leq
U\leq\bar{u}\right]=\operatorname{LATE}_{\underline{u},\bar{u}}$
for all $g\in\mathcal{G}$ if and only if $s\left(\bar{u}\right)\geq
s\left(\underline{u}\right)$ for all $s\in\mathcal{S}$.
As $\bar{u}$ and $\underline{u}$ become more distant, monotonicity between
these two values typically becomes more plausible.888This is at least true for
the random-judge design (Sigstad, 2024).
### 2.4 Identifying Meaningful Treatment Effects without MTE
While MTE analysis gives correct results under weaker assumptions than Imbens-
Angrist monotonicity, the “MTE curve” $\operatorname{LIV}\left(u\right)$ is
not a meaningful object when monotonicity is violated. A more honest approach
is to estimate aggregate treatment effects directly, without first estimating
an MTE curve. The following results show how LATE, LATT, and LATUT can be
directly identified without first estimating
$\operatorname{LIV}\left(u\right)$.
###### Theorem 2.
(Identifying meaningful treatment effects without MTE analysis).
i)
$\operatorname{LATE}=\frac{\operatorname{E}\left[Y\mid
Z=\bar{z}\right]-\operatorname{E}\left[Y\mid
Z=\underline{z}\right]}{\bar{z}-\underline{z}}$ if $s\left(\bar{z}\right)\geq
s\left(\underline{z}\right)$ for all $s\in\mathcal{S}$.
ii)
$\operatorname{LATT}=\frac{\operatorname{E}\left[Y\right]-\operatorname{E}\left[Y\mid
Z=\underline{z}\right]}{\operatorname{E}\left[Z\right]-\underline{z}}$ if
$s\left(z\right)\geq s\left(\underline{z}\right)$ for all $s\in\mathcal{S}$
and $z$.
iii)
$\operatorname{LATUT}=\frac{\operatorname{E}\left[Y\mid
Z=\bar{z}\right]-\operatorname{E}\left[Y\right]}{\bar{z}-\operatorname{E}\left[Z\right]}$
if $s\left(\bar{z}\right)\geq s\left(z\right)$ for all $s\in\mathcal{S}$ and
$z$.
iv)
$\operatorname{LATE}_{z_{1},z_{2}}=\frac{\operatorname{E}\left[Y\mid
Z=z_{1}\right]-\operatorname{E}\left[Y\mid Z=z_{2}\right]}{z_{1}-z_{2}}$ if
$s\left(z_{1}\right)\geq s\left(z_{2}\right)$ for all $s\in\mathcal{S}$.
In particular, $\operatorname{LATE}$ is identified by the standard Wald
estimand of the effect of receiving the highest instrument value compared to
receiving the lowest instrument value.999A similar estimand is discussed by
Frölich, (2007) (Theorem 8). Furthermore, $\operatorname{LATT}$ and
$\operatorname{LATUT}$ are identified by the difference between the mean
outcome and the expected outcomes for agents receiving the lowest and highest
instrument values, respectively. The only parameters that need to be estimated
are thus $\bar{z}$, $\underline{z}$, $\operatorname{E}\left[Y\mid
Z=\underline{z}\right]$ and $\operatorname{E}\left[Y\mid Z=\bar{z}\right]$.
There are two advantages of this approach. First, there is no need to estimate
a full MTE curve. Estimating an MTE curve is difficult in practice due to data
limitations and typically requires parametric assumptions. When the aim is to
estimate only $\operatorname{E}\left[Y\mid Z=\underline{z}\right]$ and
$\operatorname{E}\left[Y\mid Z=\bar{z}\right]$ instead of the full MTE curve,
one can do this non-parametrically.101010For instance, one can directly
estimate $\operatorname{E}\left[Y\mid Z=\bar{z}\right]$ using the sample
analog, or one can estimate it using a local linear regression (with, _e.g._ ,
a triangular kernel) on a sample of the highest instrument values. Note that
such LATE estimates (obtained either through MTE analysis or the Wald
approach) essentially ignore agents receiving medium instrument values. These
estimates will thus typically be less precise than 2SLS estimates which
exploit all instrument values. Also, note that in finite samples, the sample
analog of $\bar{z}-\underline{z}$ will be upward biased. For instance, even if
all instrument values have the same treatment propensity
($\bar{z}=\underline{z}$), the sample analog of $\bar{z}-\underline{z}$ will
still be positive. To avoid this bias, one could use a split-sample approach:
Estimate which instrument values are associated with the highest and lowest
treatment propensities in one sample and estimate the treatment propensities
associated with these instrument values in another sample. I leave a thorough
discussion of inference to future research. Second, the results above are
valid also for discrete instruments when MTE analysis is not
applicable.111111For instance, applying MTE analysis in the judge IV setting
formally requires a continuum of judges.
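The direct approach of Theorem 2 i) reduces to a handful of sample means. A
minimal sketch with a discrete instrument, as in the judge setting (the judge
propensities and data-generating process below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

# Discrete instrument (e.g. judges) with treatment propensities p(z) = z;
# MTE analysis would not formally apply, but the Wald estimand does.
judges = np.linspace(0.2, 0.8, 13)
n = 200_000
Z = rng.choice(judges, n)
U = rng.uniform(0.0, 1.0, n)
D = (Z > U).astype(float)
Y = rng.normal(0.0, 1.0, n) + D * (1.0 + 2.0 * U + rng.normal(0.0, 1.0, n))

z_lo, z_hi = judges.min(), judges.max()
EY_hi, EY_lo = Y[Z == z_hi].mean(), Y[Z == z_lo].mean()
p_hi, p_lo = D[Z == z_hi].mean(), D[Z == z_lo].mean()

LATE_hat = (EY_hi - EY_lo) / (p_hi - p_lo)
print(LATE_hat)  # true LATE = E[1 + 2U | 0.2 < U < 0.8] = 2
```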
## 3 Conclusions
Marginal treatment effects can be used to estimate more meaningful treatment
parameters than two-stage least squares but require Imbens-Angrist
monotonicity. In this note, I have presented conditions under which MTE-based
estimates still identify the parameters of interest when Imbens-Angrist
monotonicity is violated. I also showed how the same parameters can be
identified without relying on the MTE framework. I leave questions of
estimation for future work.
## References
* Arnold et al., (2018) Arnold, David, Dobbie, Will, & Yang, Crystal S. 2018\. Racial Bias in Bail Decisions. The Quarterly Journal of Economics, 133(4), 1885–1932.
* Arnold et al., (2021) Arnold, David, Dobbie, Will, & Hull, Peter. 2021. Measuring racial discrimination in algorithms. Pages 49–54 of: AEA Papers and Proceedings, vol. 111.
* Bhuller et al., (2020) Bhuller, Manudeep, Dahl, Gordon B, Løken, Katrine V, & Mogstad, Magne. 2020. Incarceration, recidivism, and employment. Journal of Political Economy, 128(4), 1269–1324.
* Björklund & Moffitt, (1987) Björklund, Anders, & Moffitt, Robert. 1987. The estimation of wage gains and welfare gains in self-selection models. The Review of Economics and Statistics, 42–49.
* Doyle Jr, (2007) Doyle Jr, Joseph J. 2007. Child protection and child outcomes: Measuring the effects of foster care. American Economic Review, 97(5), 1583–1610.
* French & Song, (2014) French, Eric, & Song, Jae. 2014. The effect of disability insurance receipt on labor supply. American Economic Journal: Economic Policy, 6(2), 291–337.
* Frölich, (2007) Frölich, Markus. 2007. Nonparametric IV estimation of local average treatment effects with covariates. Journal of Econometrics, 139(1), 35–75.
* Heckman & Pinto, (2018) Heckman, James J, & Pinto, Rodrigo. 2018. Unordered monotonicity. Econometrica, 86(1), 1–35.
* Heckman & Vytlacil, (2005) Heckman, James J, & Vytlacil, Edward. 2005. Structural equations, treatment effects, and econometric policy evaluation 1. Econometrica, 73(3), 669–738.
* Heckman & Vytlacil, (1999) Heckman, James J, & Vytlacil, Edward J. 1999. Local instrumental variables and latent variable models for identifying and bounding treatment effects. Proceedings of the national Academy of Sciences, 96(8), 4730–4734.
* Heckman & Vytlacil, (2007a) Heckman, James J, & Vytlacil, Edward J. 2007a. Econometric evaluation of social programs, part I: Causal models, structural models and econometric policy evaluation. Handbook of econometrics, 6, 4779–4874.
* Heckman & Vytlacil, (2007b) Heckman, James J, & Vytlacil, Edward J. 2007b. Econometric evaluation of social programs, part II: Using the marginal treatment effect to organize alternative econometric estimators to evaluate social programs, and to forecast their effects in new environments. Handbook of econometrics, 6, 4875–5143.
* Imbens & Angrist, (1994) Imbens, Guido W., & Angrist, Joshua D. 1994\. Identification and Estimation of Local Average Treatment Effects. Econometrica, 62(2), 467–475.
* Maestas et al., (2013) Maestas, Nicole, Mullen, Kathleen J, & Strand, Alexander. 2013. Does disability insurance receipt discourage work? Using examiner assignment to estimate causal effects of SSDI receipt. American economic review, 103(5), 1797–1829.
* Roy, (1951) Roy, Andrew Donald. 1951. Some thoughts on the distribution of earnings. Oxford economic papers, 3(2), 135–146.
* Sigstad, (2024) Sigstad, Henrik. 2024. Monotonicity among judges: Evidence from judicial panels and consequences for judge IV designs. Available at SSRN 4534809.
## Appendix A Proofs
###### Proof.
(Theorem 1.) _Part i)._
$\displaystyle\tilde{\operatorname{LATE}}$ $\displaystyle=$
$\displaystyle\frac{1}{\bar{z}-\underline{z}}\int_{\underline{z}}^{\bar{z}}\frac{d\operatorname{E}\left[Y\mid
Z=u\right]}{du}du$ $\displaystyle=$
$\displaystyle\frac{1}{\bar{z}-\underline{z}}\left(\operatorname{E}\left[Y\mid
Z=\bar{z}\right]-\operatorname{E}\left[Y\mid Z=\underline{z}\right]\right)$
$\displaystyle=$
$\displaystyle\frac{1}{\bar{z}-\underline{z}}\left(\operatorname{E}\left[DY\left(1\right)+\left(1-D\right)Y\left(0\right)\mid
Z=\bar{z}\right]-\operatorname{E}\left[DY\left(1\right)+\left(1-D\right)Y\left(0\right)\mid
Z=\underline{z}\right]\right)$ $\displaystyle=$
$\displaystyle\frac{\Pr\left[D\left(\bar{z}\right)>D\left(\underline{z}\right)\right]}{\bar{z}-\underline{z}}\operatorname{E}\left[Y\left(1\right)-Y\left(0\right)\mid
D\left(\bar{z}\right)>D\left(\underline{z}\right)\right]$ $\displaystyle-$
$\displaystyle\frac{\Pr\left[D\left(\bar{z}\right)<D\left(\underline{z}\right)\right]}{\bar{z}-\underline{z}}\operatorname{E}\left[Y\left(1\right)-Y\left(0\right)\mid
D\left(\bar{z}\right)<D\left(\underline{z}\right)\right]$
The first equality invokes the fundamental theorem of calculus and the fourth
equality uses Assumption 1. This expression equals $\operatorname{LATE}$ for
all $g\in\mathcal{G}$ if and only if
$\Pr\left[D\left(\bar{z}\right)<D\left(\underline{z}\right)\right]=0$.
_Part ii)._ Let $f\left(u\right)$ denote the density of $Z$ at $u$. Then
$\displaystyle\tilde{\operatorname{LATT}}$ $\displaystyle=$
$\displaystyle\frac{1}{\operatorname{E}\left[Z\right]-\underline{z}}\int_{\underline{z}}^{\bar{z}}\Pr\left[Z>u\right]\frac{d\operatorname{E}\left[Y\mid
Z=u\right]}{du}du$ $\displaystyle=$
$\displaystyle\frac{1}{\operatorname{E}\left[Z\right]-\underline{z}}\int_{\underline{z}}^{\bar{z}}\left(\operatorname{E}\left[Y\mid
Z=u\right]-\operatorname{E}\left[Y\mid
Z=\underline{z}\right]\right)f\left(u\right)du$ $\displaystyle=$
$\displaystyle\frac{1}{\operatorname{E}\left[Z\right]-\underline{z}}\left(\operatorname{E}\left[Y\right]-\operatorname{E}\left[Y\mid
Z=\underline{z}\right]\right)$ $\displaystyle=$
$\displaystyle\frac{1}{\operatorname{E}\left[Z\right]-\underline{z}}\operatorname{E}\left[\operatorname{E}\left[Y\mid
S\right]-\operatorname{E}\left[Y\mid Z=\underline{z},S\right]\right]$
$\displaystyle=$
$\displaystyle\frac{1}{\operatorname{E}\left[Z\right]-\underline{z}}\operatorname{E}\left[\operatorname{E}\left[DY\left(1\right)+\left(1-D\right)Y\left(0\right)\mid
S\right]-\operatorname{E}\left[Y\mid Z=\underline{z},S\right]\right]$
$\displaystyle=$
$\displaystyle\frac{1}{\operatorname{E}\left[Z\right]-\underline{z}}\operatorname{E}\left[\operatorname{E}\left[D\mid
S\right]\operatorname{E}\left[Y\left(1\right)\mid
S\right]+\left(1-\operatorname{E}\left[D\mid
S\right]\right)\operatorname{E}\left[Y\left(0\right)\mid
S\right]-\operatorname{E}\left[Y\mid Z=\underline{z},S\right]\right]$
$\displaystyle=$
$\displaystyle\frac{1}{\operatorname{E}\left[Z\right]-\underline{z}}\operatorname{E}\left[\operatorname{E}\left[D\mid
S\right]\operatorname{E}\left[Y\left(1\right)-Y\left(0\right)\mid
S\right]+\operatorname{E}\left[Y\left(0\right)\mid
S\right]-\operatorname{E}\left[Y\mid Z=\underline{z},S\right]\right]$
The second equality uses that both integrals represent the area between the
curve $\operatorname{E}\left[Y\mid Z=u\right]$ and
$\operatorname{E}\left[Y\mid Z=\underline{z}\right]$ (weighted by the density
$f$). The fourth equality uses the law of iterated expectations, and the sixth
equality invokes Assumption 1. For this to equal
$\displaystyle\operatorname{LATT}$ $\displaystyle=$
$\displaystyle\operatorname{E}\left[Y\left(1\right)-Y\left(0\right)\mid
D\left(\underline{z}\right)<D\left(\bar{z}\right),D=1\right]$ $\displaystyle=$
$\displaystyle\frac{1}{\operatorname{E}\left[D\mid
D\left(\underline{z}\right)<D\left(\bar{z}\right)\right]}\operatorname{E}\left[D\left(Y\left(1\right)-Y\left(0\right)\right)\mid
D\left(\underline{z}\right)<D\left(\bar{z}\right)\right]$
for all $g\in\mathcal{G}$, we need that for each $s\in\mathcal{S}$, either
$s\left(\underline{z}\right)=0$ or $\operatorname{E}\left[D\mid
S=s\right]=1$.121212If $\operatorname{E}\left[D\mid S=s\right]<1$ and
$s\left(\underline{z}\right)=1$ for a response type $s\in\mathcal{S}$,
$\tilde{\operatorname{LATT}}$ will—unlike $\operatorname{LATT}$—place a
negative weight on $\operatorname{E}\left[Y\left(1\right)-Y\left(0\right)\mid
S=s\right]$. It is straightforward to check that the expressions for
$\tilde{\operatorname{LATT}}$ and $\operatorname{LATT}$ coincide when either
$s\left(\underline{z}\right)=0$ or $\operatorname{E}\left[D\mid S=s\right]=1$
for all $s\in\mathcal{S}$. In other words, we need $D\left(z\right)\geq
D\left(\underline{z}\right)$ for all $z$. The proof of part iii) is analogous
to part ii). ∎
###### Proof.
(Theorem 2.) This follows from the proof of Theorem 1. ∎
###### Proof.
(Proposition 2.)
$\displaystyle\operatorname{E}\left[\operatorname{LIV}\left(U\right)\mid\underline{u}\leq
U\leq\bar{u}\right]$ $\displaystyle=$
$\displaystyle\frac{1}{\bar{u}-\underline{u}}\int_{\underline{u}}^{\bar{u}}\operatorname{LIV}\left(u\right)du$
$\displaystyle=$ $\displaystyle\frac{\operatorname{E}\left[Y\mid
Z=\bar{u}\right]-\operatorname{E}\left[Y\mid
Z=\underline{u}\right]}{\bar{u}-\underline{u}}$
The latter Wald estimand identifies
$\operatorname{LATE}_{\underline{u},\bar{u}}$ for all $g\in\mathcal{G}$ if and
only if there are no “defiers”,
$\Pr\left[D\left(\underline{u}\right)>D\left(\bar{u}\right)\right]=0$. ∎
|
# Motion of a sphere in a viscous density stratified fluid
Arun Kumar Varanasi, Ganesh Subramanian <EMAIL_ADDRESS> Engineering
Mechanics Unit, Jawaharlal Nehru Center for Advanced Scientific Research,
Bangalore-560064, India
###### Abstract
We examine the translation of a sphere in a stably stratified ambient in the
limit of small Reynolds ($Re\ll 1$) and viscous Richardson numbers ($Ri_{v}\ll
1$); here, $Re=\frac{\rho Ua}{\mu}$ and $Ri_{v}=\frac{\gamma a^{3}g}{\mu U}$
with $a$ being the sphere radius, $U$ the translation speed, $\rho$ and $\mu$
the density and viscosity of the stratified ambient, $g$ the acceleration due
to gravity, and $\gamma$ the density gradient (assumed constant)
characterizing the ambient stratification. In contrast to most earlier
efforts, our study considers the convection dominant limit corresponding to
$Pe=\frac{Ua}{D}\gg 1$, $D$ being the diffusivity of the stratifying agent. We
characterize in detail the velocity and density fields around the particle in
what we term the Stokes stratification regime, defined by $Re\ll
Ri_{v}^{\frac{1}{3}}\ll 1$, and corresponding to the dominance of buoyancy
over inertial forces. Buoyancy forces associated with the perturbed
stratification fundamentally alter the viscously dominated fluid motion at
large distances. At distances of order the stratification screening length,
that scales as $aRi_{v}^{-\frac{1}{3}}$, the motion transforms from the
familiar fore-aft symmetric Stokesian form to a fore-aft asymmetric pattern of
recirculating cells with primarily horizontal motion within, except in the
vicinity of the rear stagnation streamline. At larger distances, the motion is
vanishingly small except within (a) an axisymmetric horizontal wake whose
vertical extent grows as $O(r_{t}^{\frac{2}{5}})$, $r_{t}$ being the distance
in the plane perpendicular to translation and (b) a buoyant reverse jet behind
the particle that narrows as the inverse square root of distance downstream.
As a result, for $Pe=\infty$, the motion close to the rear stagnation
streamline starts off pointing in the direction of translation in the inner
Stokesian region, decaying as the inverse of the downstream distance; the
motion reverses beyond a distance of $1.15aRi_{v}^{-\frac{1}{3}}$, with the
eventual reverse flow in the far-field buoyant jet again decaying as the
inverse of the distance downstream. For large but finite $Pe$, the narrowing
jet is smeared out beyond a distance of
$O(aRi_{v}^{-\frac{1}{6}}Pe^{\frac{1}{2}})$, leading to an exponential decay
in the aforementioned reverse flow.
###### keywords:
Stratified flows
## 1 Introduction
The phenomenon of particles moving in a density-stratified environment is a
common occurrence in nature since both the atmosphere and the oceans are, on
average, stably stratified. Considering the oceans, for instance, there exist
examples of both active (aquatic swimming organisms) and passive (so-called
marine snow) particles moving through the stratified pycnocline (Magnaudet &
Mercier, 2020), the former often as part of a diurnal migration pattern that
has been termed the largest migration on earth (Martin et al., 2020). This
work was originally motivated by a rather provocative suggestion (Katija &
Dabiri, 2009; Subramanian, 2010) of the aforementioned migratory pattern
leading to an additional biogenic contribution to the mixing of the ocean
waters; this, in addition to the two well known mechanisms of winds and tides
(Munk, 1966). In contrast to the latter two, the energy input in the proposed
biogenic contribution occurs at the smallest scales, since the vast majority
of the aquatic biomass is concentrated at these scales (the zooplankton or
copepods involved in the migration range in size from tens of microns to a few
millimeters) (Kunze et al., 2006; Visser, 2007). As evident from the arguments
put forth in Katija & Dabiri (2009), the validity of the biogenic mixing
hypothesis is rooted in the ability of a single small active organism, or a
passive particle, to drag along a large amount of fluid during its migration,
thereby contributing to the (vertical) mixing of the ocean waters on larger
scales. Interestingly, in a homogeneous fluid medium and for any finite
Reynolds number, a passive particle can drag an arbitrarily large volume of
fluid, over sufficiently long times, on account of the slowly decaying
velocity field in its viscous wake (Eames et al., 2003; Chisholm & Khair,
2017). However, as pointed out by Subramanian (2010), the oceans being stably
stratified, this dragging motion incurs a potential energy penalty on large
enough length scales. The limit of a vanishing stratification (corresponding
to a homogeneous fluid medium) is therefore a singular one; in that, a small
but finite stratification is expected to render the volume dragged by the
particle, the so-called drift volume (Darwin, 1953; Lighthill, 1956), finite.
The above description makes it clear that, at the heart of the validity of the
biogenic mixing hypothesis, is the nature of fluid motion induced by an active
or passive particle in a stably stratified medium. This study examines the
latter problem, that of a small passive particle translating in a stably
stratified medium, where ‘small’ refers to the dominance of viscous forces.
Consideration of a passive particle is not overly restrictive since even
active swimmers, moving along the vertical, attain neutral buoyancy only at a
certain instant in time (corresponding to a depth at which the ambient and
swimmer densities equal each other). At all other times, such swimmers exert a
net force on the ambient. Despite the near-field being dominated by the fluid
motion induced by the slip velocity on the swimmer surface (Doostmohammadi et
al., 2012), one expects the net force to invariably play a dominant role in
the far-field. With this in mind, we consider a passive sphere translating
along the direction of stratification at small Reynolds ($Re$) and viscous
Richardson ($Ri_{v}$) numbers, the translation assumed to be the result of a
density difference. $Ri_{v}=\frac{\gamma a^{3}g}{\mu U}$ measures the relative
importance of viscous and buoyancy forces, and is therefore the key
dimensionless parameter for motion of small particles in a stratified ambient;
here, $\gamma=-\frac{d\rho}{dz}$ is the constant density gradient in the
ambient($\gamma>0$ for stable stratification), $a$ the sphere radius, $g$ the
acceleration due to gravity, $\mu$ the fluid viscosity and $U$ the speed of
translation. Note that $Ri_{v}=\frac{Re}{Fr^{2}}$, where $Fr=\frac{U}{Na}$ is
the Froude number that is the usual measure of the importance of
stratification in the inviscid limit, $N=\sqrt{-\frac{g\gamma}{\rho}}$ here
being the Brunt-Vaisala frequency (Turner, 1979). In a significant departure
from most earlier efforts (discussed below), and keeping in mind the oceanic
scenario, we consider the Peclet number, defined as $Pe=\frac{Ua}{D}$, $D$
being the diffusivity of the stratifying agent (salt in the oceanic case) to
be large.
As mentioned above, earlier efforts, particularly the ones devoted to analysis
of the fluid motion around a moving particle or swimmer, have mostly been
restricted to small $Pe$; an exception is the very recent effort of Shaik &
Ardekani (2020b), and we discuss this in section 4. Motivated by the need to
understand laminar jets in a stratified ambient, List (1971) was the first to
characterize the analog of a Stokeslet (the limiting scenario of a small
translating particle, for $Re=0$, approximated as a point force) in a linearly
stratified fluid, and for small $Pe$. The author considered both vertical and
horizontal Stokeslet orientations in two and three dimensions; for the
vertical orientation, relevant to the problem analyzed here, the motion,
although fore-aft symmetric, was shown to decay away much more rapidly than the
$O(\frac{1}{r})$ decay characteristic of a Stokeslet in a three-dimensional
homogeneous ambient. The resulting weak far-field motion, shown in the paper
only for the two-dimensional case, was in the form of toroidal recirculation
cells stacked along the direction of stratification. ‘Far-field’ here refers
to (in units of $a$) length scales of $O(Ri_{v}Pe)^{-\frac{1}{4}}$, the
stratification screening length for $Pe\ll 1$; as will be seen below, the
number of such cells is finite. Much later, Ardekani & Stocker (2010)
considered the same problem, but for both passive and active particles modeled
as point force and force-dipole singularities, respectively. The density and
velocity fields were obtained numerically using a fast Fourier transform
technique, the singularities being termed ‘stratlets’. More recently, Fouxon &
Leshansky (2014) examined the role of turbulence, within the Boussinesq
framework, in disrupting the stratification-induced signatures on the flow
field around passive particles and active swimmers. As part of their analysis,
the authors derived an asymptotic expression for the far-field flow in the
absence of turbulence, and that exhibited a rapid algebraic decay, consistent
with the findings of the aforementioned studies. Wagner et al. (2014) examined
the mixing efficiencies associated with the flow induced by micro-swimmers,
for small $Pe$, finding them to be negligibly small. Very recently, Mercier et
al. (2020) and Dandekar et al. (2020) have analyzed the drag and torque acting
on anisotropic disk-shaped particles (and the resulting orientation dynamics)
sedimenting in a stratified medium. The experiments reported in Mercier et al.
(2020) pertain to finite $Re$ and $Ri_{v}$, and highlight the existence of an
edgewise-settling regime for sufficiently large $Ri_{v}$ or small $Fr$ (in
this regard, also see Doostmohammadi & Ardekani (2014); Mrokowska (2018,
2020a, 2020b)); in contrast to the broadside-on settling regime known for
small to moderate Re in a homogeneous ambient (Cox, 1965; Dabade et al., 2015;
Anand et al., 2020). The theoretical effort of Dandekar et al. (2020)
evaluates the hydrodynamic force and torque on an arbitrarily shaped body in a
linearly stratified ambient for arbitrary $Pe$, and finds a hydrodynamic
torque, arising from the ambient stratification, for chiral particles. The
role of stratification in the orientation dynamics of achiral particles, such
as the ones used in Mercier et al. (2020), has been analyzed in Varanasi et
al. (2021). In the present context, we only note that, although the
aforementioned recent studies also pertain to the large-$Pe$ limit, the fluid
motion was not examined in detail.
As seen above, a number of efforts in the literature have analyzed the fluid
motion around both passive particles and active swimmers primarily in the
small $Pe$ regime. However, the motion of a typical particle or small-sized
swimmer (zooplankton) in the oceanic ambient, relevant to the biogenic mixing
hypothesis, pertains to large $Pe$; for instance, a zooplankton of size $0.1$
$mm$ moving at a speed of $1$ $mm/s$ in a typical oceanic stratification of
$\gamma=1.67\times 10^{-3}\frac{kg}{m^{4}}$, yields $Re=0.116$,
$Ri_{v}=1.84\times 10^{-8}$ and $Pe=100$. Note that the large $Pe$ regime
pertains generically to cases where salt is the stratifying agent, for
particles larger than a few microns, the aforementioned oceanic ambient only
being one such instance. The first theoretical effort in this regime is that
of Zvirin & Chadwick (1975) who calculated the drag enhancement in what we
term the Stokes stratification regime below, and is defined by $Re\ll
Ri_{v}^{\frac{1}{3}}\ll 1$. The calculation was restricted to determining the
drag enhancement arising from buoyancy effects in the outer region, on scales
of $O(Ri_{v}^{-\frac{1}{3}})$, corresponding to the stratification screening
length (note that this is the screening length for large $Pe$, in contrast to
the $O(Ri_{v}Pe)^{-\frac{1}{4}}$ screening length above, for small $Pe$, that
was obtained by List (1971) and Ardekani & Stocker (2010)). Similar to
Childress’s determination of the drag correction for the axial motion of a
sphere in a rotating fluid (Childress, 1964), and Saffman’s calculation of the
inertial lift (Saffman, 1965), the analysis was done in Fourier space, with
the correction to the Stokes drag coming out to be $O(Ri_{v}^{\frac{1}{3}})$,
the inverse of the aforementioned screening length. More recently, Zhang et
al. (2019), by using detailed numerical calculations and an ingenious
splitting procedure, showed that the enhancement in drag at low Reynolds
numbers comes from the induced baroclinic torque and the resulting change in
the flow structure. Moreover, the enhancement in drag was found to be
proportional to $Ri_{v}^{\frac{1}{3}}$, in agreement with the theoretical
result above. These results, however, do not agree with the observations of
Yick et al. (2009) who obtained a scaling closer to $Ri_{v}^{\frac{1}{2}}$,
the mismatch likely due to additional non-Boussinesq contributions arising
from heavily deformed iso-pycnals close to the sphere. A recent effort of
Mehaddi et al. (2018) has extended the sphere drag calculation to include the
effects of weak inertia.
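For orientation, the dimensionless groups quoted above for the oceanic example
may be reproduced from representative seawater properties; the viscosity,
reference density and salt diffusivity used in the sketch below are assumed
textbook values (not taken from the text), so small discrepancies in $Re$ and
$Ri_{v}$ are expected.

```python
# Dimensionless groups for the zooplankton example quoted in the text.
a = 1e-4          # particle size, m (0.1 mm)
U = 1e-3          # translation speed, m/s
gamma = 1.67e-3   # ambient density gradient, kg/m^4
g = 9.81          # gravitational acceleration, m/s^2
rho = 1.0e3       # reference density, kg/m^3 (assumed)
mu = 1.0e-3       # dynamic viscosity, Pa s (assumed)
D = 1.0e-9        # salt diffusivity, m^2/s (assumed)

Re = rho * U * a / mu                 # ~0.1    (text: 0.116)
Ri_v = gamma * a**3 * g / (mu * U)    # ~1.6e-8 (text: 1.84e-8)
Pe = U * a / D                        # 100

# Large-Pe stratification screening length, a * Ri_v^(-1/3): a few cm.
l_screen = a * Ri_v ** (-1.0 / 3.0)
print(Re, Ri_v, Pe, l_screen)
```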
The primary motivation for our calculation is to eventually determine the
drift volume in a stably stratified ambient, and thereby, estimate the
importance of the biogenic mixing contribution. Now, as mentioned above, the
infinite-time drift volume is divergent, for any finite Re, in a homogeneous
ambient (Eames et al., 2003; Chisholm & Khair, 2017; Subramanian, 2010), this
divergence arising from the slow $O(\frac{1}{r})$ decay of velocity field
within the far-field wake, $r$ being the distance downstream. For $Re=0$, the
velocity field decays as $O(\frac{1}{r})$ at large distances regardless of the
direction, and as a result, the drift volume diverges for any finite time.
This implies that the finiteness of the drift volume, for a weakly stratified
ambient pertaining to the aforementioned Stokes stratification regime, must
arise from the transition of the far-field fluid motion from an
$O(\frac{1}{r})$ Stokesian decay to a more rapid decay beyond the
$O(Ri_{v}^{-\frac{1}{3}})$ stratification screening length. Thus, for small
$Re$, and unlike the drag problem considered in Zvirin & Chadwick (1975), one
expects the dominant contribution to the drift volume to arise from the fluid
motion far from the sphere, or in other words, the outer region. It is with
this in mind that the analysis here is restricted to the linearized equations
in the far-field. One may nevertheless question the relevance of this
linearization, given that the motion in the outer region is indirectly
influenced by the heavily deformed iso-pycnals, close to the sphere, for large
$Pe$. However, these deformed iso-pycnals contribute to a localized buoyant
envelope around the sphere, and at large distances, one may regard the
combination of the envelope and the sphere as an effective point force, albeit
of a different magnitude, as far as the outer region is concerned; the
linearity of the outer-region equations implies that the nature of fluid
motion is independent of the magnitude of the force. More detailed scaling
arguments pertaining to the velocity and density fields in the inner region
(length scales of order the particle size) are given in the conclusions
section.
The remainder of the paper is organized as follows. In the next section, we
present the quasi-steady governing equations for the fluid motion under the
Boussinesq approximation and a scaling analysis to determine the screening
lengths arising from the effects of inertia and stratification, for both small
and large $Pe$. Next, the linearized equations in the outer region are solved
using a Fourier transform approach (Saffman, 1965; Childress, 1964), and the
velocity and density field are written as Fourier integrals, in the
aforementioned small and large-Pe limits, and in the so-called Stokes
stratification regime, when buoyancy forces are dominant over inertial ones;
this translates to $Re\ll(Ri_{v}Pe)^{1/4}$ for small $Pe$, and $Re\ll
Ri_{v}^{1/3}$ for large $Pe$. In section 3, we contrast the streamline
patterns and iso-pycnals obtained from a numerical evaluation of the Fourier
integrals for $Pe=0$ and $Pe\gg 1$; the numerical results are also compared to
analytical approximations valid for distances much greater than the respective
screening lengths. In the concluding section 4, we summarize our work, and
follow this up with scaling arguments pertaining to the inner region dynamics
and drift volume.
## 2 The disturbance fields in a stable linearly stratified ambient
We consider a sphere of radius $a$ moving vertically with speed $U$ in an
unbounded stably stratified fluid with a linear stratification profile
$\frac{d\rho}{dz}=-\gamma$, with $\gamma>0$. Using $a$, $U$ and $\gamma a$ for
the length, velocity and density scales, respectively, the non-dimensional
continuity equation, the Navier-Stokes equations and the convection-diffusion
equation for the velocity($\mathbf{u}$) and density disturbance($\rho_{f}$)
fields, in a sphere-fixed reference frame, are as follows:
$\nabla\cdot\mathbf{u}=0,$ (1) $Re[\mathbf{u}\cdot\nabla\mathbf{u}]=-\nabla
p+\nabla^{2}\mathbf{u}-Ri_{v}\rho_{f}\mathbf{1_{z}},$ (2)
$1-w+\mathbf{u}\cdot\mathbf{\nabla}\rho_{f}=\frac{1}{Pe}\nabla^{2}\rho_{f},$
(3)
$\displaystyle\mathbf{u}=0,\quad\mathbf{n}\cdot\nabla{\rho_{f}}=0\quad\mbox{
at }\quad r=|\mathbf{x}|=1,$ (4)
$\displaystyle\mathbf{u}\rightarrow\mathbf{1_{z}},\quad\rho_{f}\rightarrow
0\quad\mbox{ as }\quad r=|\mathbf{x}|\rightarrow\infty,$ (5)
where $r$ is the non-dimensional distance from the sphere and $w$ in (3) is
the vertical velocity component. The total density in the aforementioned
reference frame is given by $\rho(z)=\rho_{0}+t-z+\rho_{f}$, and the term
involving $1-w$ in (3) denotes the convection of the base-state stratification
(along the vertical coordinate) by the perturbation velocity field. Note that
the Boussinesq approximation has been used above to neglect the density
disturbance in the convective terms of the equations of motion, so $Re$ in
(2) is based on an appropriate reference density. Further, in taking
$\rho_{f}$ in particular to be independent of time, we have assumed a quasi-
steady state to be achieved for long times. This assumption is examined in
section $4$ for both the inner ($r\sim O(a)$) and outer regions ($r\geq
O(Ri_{v}^{-\frac{1}{3}})$).
As is well known, although we examine the limit $Re,Ri_{v}\ll 1$, the inertial
and stratification terms in (2) cannot be neglected. This is because the
resulting Stokes equations are not a uniformly valid approximation, and the
aforementioned terms become comparable to the leading order viscous terms at
sufficiently large distances. As discussed in the introduction, the large
length scales above are precisely the ones that control the drift volume that
in turn underlies the biogenic mixing hypothesis. For a homogeneous fluid, the
length scale (in units of $a$) at which inertial forces first become
comparable to viscous forces is $Re^{-1}$, referred to here as the inertial
screening length. Obtaining a similar estimate for the buoyancy forces
requires one to obtain the far-field behavior of the density field which in
turn depends on whether $Pe$ is large or small.
For $Pe\rightarrow 0$, the density perturbation on length scales of $O(a)$
arises from the no-flux boundary condition on the surface of the particle, and
decays as $O(\frac{1}{r^{2}})$ at large distances. The convective correction
to the density field satisfies $\frac{1}{Pe}\nabla^{2}\rho_{f}\sim(1-w)$;
using $(1-w)\sim O(\frac{1}{r})$ for the Stokeslet field leads to
$\rho_{f}\sim Pe\;r$. The buoyancy forces arising from the convective
perturbation are $O(Ri_{v}Pe\>r)$, and grow linearly with distance. Equating
them to the decaying viscous forces of $O(\frac{1}{r^{3}})$ leads to the
small-$Pe$ stratification screening length
$l_{c}\sim(Ri_{v}Pe)^{-\frac{1}{4}}$. The equations governing the disturbance
fields on scales of order the aforementioned screening length may be obtained
by using the expansions:
$\mathbf{u}=\mathbf{1_{z}}+(Ri_{v}Pe)^{1/4}\mathbf{\bar{u}}$,
$p=p_{\infty}+(Ri_{v}Pe)^{\frac{1}{2}}\bar{p}$ and
$\rho_{f}=Pe(Ri_{v}Pe)^{-\frac{1}{4}}\bar{\rho_{f}}$. Note that the velocity,
pressure and density disturbance vary as $\frac{1}{r}$, $\frac{1}{r^{2}}$ and
$Pe\>r$, respectively, in the inner Stokesian region far away from the
particle, leading to the scalings in the above expansions. The outer region
equations for $\mathbf{\bar{u}}$, $\bar{p}$ and $\bar{\rho}_{f}$ are given by
$\bar{\nabla}\cdot\mathbf{\bar{u}}=0,$ (6)
$-\alpha_{0}\frac{\partial\mathbf{\bar{u}}}{\partial\bar{z}}=-\bar{\nabla}\bar{p}+\bar{\nabla}^{2}\mathbf{\bar{u}}-[\bar{\rho_{f}}+6\pi\delta(\mathbf{\bar{r}})]\mathbf{1_{z}},$
(7)
$-\mathbf{1_{z}}\cdot\mathbf{\bar{u}}+\beta_{0}\frac{\partial\bar{\rho_{f}}}{\partial\bar{z}}=\bar{\nabla}^{2}\bar{\rho_{f}}.$
(8)
Here, $\alpha_{0}$ and $\beta_{0}$ are given by $\frac{Re}{(Ri_{v}Pe)^{1/4}}$
and $\frac{Pe}{(Ri_{v}Pe)^{1/4}}$, respectively, and denote the ratios of the
low-$Pe$ stratification screening length to the inertial ($Re^{-1}$) and
convective ($Pe^{-1}$) screening lengths. Note that the boundary condition on
the particle surface has now been replaced by a point force on the RHS of (7).
For $Pe,Re\ll(Ri_{v}Pe)^{\frac{1}{4}}$, one may ignore the terms proportional
to $\alpha_{0}$ and $\beta_{0}$. One then finds the velocity and density
disturbance fields as the following Fourier integrals:
$\mathbf{\bar{u}}(\mathbf{\bar{r}})=\frac{-3}{4\pi^{2}}\int\frac{k^{4}(\mathbf{1_{z}}-\frac{k_{3}\mathbf{k}}{k^{2}})}{k^{6}+k_{t}^{2}}e^{i\mathbf{k}.\mathbf{\bar{r}}}d\mathbf{k},$
(9)
${\bar{\rho_{f}}}(\mathbf{\bar{r}})=\frac{-3}{4\pi^{2}}\int\frac{k_{t}^{2}}{k^{6}+k_{t}^{2}}e^{i\mathbf{k}.\mathbf{\bar{r}}}d\mathbf{k},$
(10)
where $k_{t}=(k^{2}-{k_{3}}^{2})^{\frac{1}{2}}$ is the magnitude of the
wavevector projected onto the plane perpendicular to the translation
direction. The above diffusion dominant limit has been considered previously
(see (List, 1971; Ardekani & Stocker, 2010; Fouxon & Leshansky, 2014)), as
indicated in the introduction, and we have included this case only for
purposes of contrasting with the results obtained below in the convection
dominant limit.
For $Pe\rightarrow\infty$, one neglects the diffusion term in (3) and thus
$\mathbf{u}\cdot\nabla\rho_{f}\sim(1-w)$. Again, using $(1-w)\sim
O(\frac{1}{r})$, one has $\rho_{f}\sim O(1)$, so the buoyancy forcing term in
(2) is $O(Ri_{v})$. Equating this to the $O(l_{c}^{-3})$ viscous term, one
obtains the large-$Pe$ stratification screening length to be $l_{c}\sim
Ri_{v}^{-\frac{1}{3}}$, as originally shown by Zvirin & Chadwick (1975).
Again, keeping in mind the Stokesian scalings in the inner region, the
disturbance fields in the outer region may be expanded in the form:
$\mathbf{u}=\mathbf{1_{z}}+Ri_{v}^{\frac{1}{3}}\mathbf{\tilde{u}}$,
$p=p_{\infty}+Ri_{v}^{\frac{2}{3}}\tilde{p}$ and $\rho_{f}=\tilde{\rho_{f}}$,
and one obtains the following equations for $\mathbf{\tilde{u}}$, $\tilde{p}$
and $\tilde{\rho}_{f}$:
$\tilde{\nabla}\cdot\mathbf{\tilde{u}}=0,$ (11)
$-\alpha_{\infty}\frac{\partial\mathbf{\tilde{u}}}{\partial\tilde{z}}=-\tilde{\nabla}\tilde{p}+\tilde{\nabla}^{2}\mathbf{\tilde{u}}-[\tilde{\rho_{f}}+6\pi\delta(\mathbf{\tilde{r}})]\mathbf{1_{z}},$
(12)
$-\mathbf{1_{z}}\cdot\mathbf{\tilde{u}}+\frac{\partial\tilde{\rho_{f}}}{\partial\tilde{z}}=\beta_{\infty}\tilde{\nabla}^{2}\tilde{\rho_{f}}.$
(13)
Here, $\alpha_{\infty}=\frac{Re}{Ri_{v}^{1/3}}$ is the large-$Pe$ analog of
$\alpha_{0}$, with $\beta_{\infty}^{-1}=\frac{Pe}{Ri_{v}^{1/3}}$ being the
corresponding analog of $\beta_{0}$ above. In the Stokes stratification
regime, corresponding to $Re\ll Ri_{v}^{\frac{1}{3}}$, one can set
$\alpha_{\infty}$ in (12) to zero. Although our primary focus is on the limit
$\beta_{\infty}\rightarrow 0\,(Pe\rightarrow\infty)$, retaining a small but
finite $\beta_{\infty}$ turns out to be important for numerical convergence of
the Fourier integrals below. As will also be seen below, the structure of the
velocity and density fields, almost everywhere in the domain, is independent
of $\beta_{\infty}$ provided the latter is small; in section 3.2.2, however,
it is shown that a small but finite $\beta_{\infty}$ crucially affects the
structure of the fields right behind the translating sphere. Again, Fourier
transforming, one obtains the velocity and density fields as the following
integrals:
$\mathbf{\tilde{u}}(\mathbf{\tilde{r}})=\frac{-3}{4\pi^{2}}\int\frac{(ik_{3}+\beta_{\infty}k^{2})k^{2}(\mathbf{1_{z}}-\frac{k_{3}\mathbf{k}}{k^{2}})}{(ik_{3}+\beta_{\infty}k^{2})k^{4}+k_{t}^{2}}e^{i\mathbf{k}.\mathbf{\tilde{r}}}d\mathbf{k},$
(14)
${\tilde{\rho_{f}}}(\mathbf{\tilde{r}})=\frac{-3}{4\pi^{2}}\int\frac{k_{t}^{2}}{(ik_{3}+\beta_{\infty}k^{2})k^{4}+k_{t}^{2}}e^{i\mathbf{k}.\mathbf{\tilde{r}}}d\mathbf{k}.$
(15)
## 3 Results and Discussion
Herein, we analyze the axial velocity and density disturbance fields, and the
resulting streamline and iso-pycnal patterns in both the diffusion and
convection dominant limits by using a combination of numerics (Gauss-Legendre
quadrature integration) and far-field asymptotics. As already mentioned in the
introduction, the results in both limits are for the case of buoyancy forces
being dominant (the Stokes stratification regime), corresponding to
$\alpha_{0},\alpha_{\infty}\ll 1$. The role of weak inertial effects is
discussed via scaling arguments towards the end of this section.
### 3.1 Diffusion-dominant limit ($Pe\ll 1$)
List (1971) used residue theory to enable the reduction of the velocity and
density fields to one-dimensional integrals for both the two- and three-dimensional cases. We use a different method where the disturbance fields are
reduced to two-dimensional integrals; importantly, and unlike List (1971),
this method is applicable in both the diffusion and convection dominant
limits. The Fourier integrals for the velocity and density disturbance fields,
given by (9) and (10), are expressed in a spherical coordinate system with its
polar axis aligned with the translation direction. The integral over the
azimuthal angle ($\phi$) can be carried out analytically, and the resulting
two dimensional integrals for the fields are given by:
$\bar{u}_{z}=\frac{-3}{2\pi}\int_{0}^{\infty}dk\int_{0}^{\pi}d\theta\frac{k^{4}\sin^{3}\theta
J_{0}(k\bar{r}_{t}\sin\theta)e^{ik\bar{z}\cos\theta}}{(k^{4}+\sin^{2}\theta)},$
(16)
$\bar{\rho}_{f}=\frac{-3}{2\pi}\int_{0}^{\infty}dk\int_{0}^{\pi}d\theta\frac{k^{2}\sin^{3}\theta
J_{0}(k\bar{r}_{t}\sin\theta)e^{ik\bar{z}\cos\theta}}{(k^{4}+\sin^{2}\theta)},$
(17)
where $J_{0}(x)$ is the zeroth order Bessel function of the first kind. Note
that since the problem is axisymmetric, the fields are written as functions of
($\bar{r}_{t},\bar{z}$), with $\bar{r}_{t}$ and $\bar{z}$ being the distances
orthogonal to and along the direction of translation. Excluding the
complex exponential, the Fourier integrand for the density disturbance field
in (17) decays as $\frac{1}{k^{5/2}}$ for large $k$, while that for the axial
velocity in (16) only decays as $\frac{1}{k^{1/2}}$; the latter slow decay
reflects the $1/r$-decay in physical space (for small $r$ corresponding to the
inner region) of the Stokeslet. As a result, an accurate evaluation of (16)
relies essentially on cancellation induced by the complex Fourier exponential.
In order to facilitate numerical evaluation, we therefore separate out the
Stokeslet contribution, writing the axial velocity integral above as:
$\bar{u}_{z}=\frac{-3(2+\frac{\bar{r}_{t}^{2}}{\bar{z}^{2}})}{4\lvert\bar{z}\rvert(1+\frac{\bar{r}_{t}^{2}}{\bar{z}^{2}})^{\frac{3}{2}}}+\frac{3}{2\pi}\int_{0}^{\infty}dk\int_{0}^{\pi}d\theta\frac{\sin^{5}\theta J_{0}(k\bar{r}_{t}\sin\theta)\cos(k\bar{z}\cos\theta)}{(k^{4}+\sin^{2}\theta)},$
(18)
where the Fourier integrand in (18) now decays as
$\frac{1}{k^{9/2}}$ for large $k$, and we have replaced the complex
exponential by the cosine on account of symmetry (an analogous replacement
applies to (17)). The Stokes streamfunction characterizing the axisymmetric
flow field may be found from the axial velocity by using the relation
$\bar{u}_{z}=\frac{1}{\bar{r}_{t}}\frac{\partial\bar{\psi}}{\partial\bar{r}_{t}}$
and is given by:
$\bar{\psi}_{s}=\frac{-3\bar{r}_{t}^{2}}{4(\bar{r}_{t}^{2}+\bar{z}^{2})^{\frac{1}{2}}}+\frac{3\bar{r}_{t}}{2\pi}\int_{0}^{\infty}dk\int_{0}^{\pi}d\theta\frac{\sin^{4}\theta J_{1}(k\bar{r}_{t}\sin\theta)\cos(k\bar{z}\cos\theta)}{k(k^{4}+\sin^{2}\theta)}.$
(19)
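The azimuthal reduction leading to (16)-(19) rests on the identity $\int_{0}^{2\pi}e^{ix\cos\phi}d\phi=2\pi J_{0}(x)$ (and its $J_{1}$ analog). As a quick sanity check, the following Python snippet (ours, not part of the original analysis) verifies the identity numerically:

```python
# Numerical check of the azimuthal identity behind (16)-(19):
# int_0^{2*pi} exp(i*x*cos(phi)) dphi = 2*pi*J0(x).
import numpy as np
from scipy.special import j0

phi = np.linspace(0.0, 2.0 * np.pi, 20001)
for x in [0.5, 2.0, 7.3]:
    lhs = np.trapz(np.exp(1j * x * np.cos(phi)), phi)
    print(x, lhs.real, 2.0 * np.pi * j0(x))  # imaginary part vanishes by symmetry
```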
The density disturbance and axial velocity fields, and the Stokes
streamfunction, given by (17), (18) and (19), respectively, are evaluated
using Gaussian quadrature. The instantaneous streamline pattern and iso-
pycnals, in a reference frame with a far-field quiescent ambient, are shown in
figure 1. Both the disturbance velocity and density fields are seen to be
fore-aft symmetric, as is evident from the cosine in (18) and (19). As
originally found by List (1971) and Ardekani & Stocker (2010), buoyancy forces
suppress the long-ranged vertical motion associated with the Stokeslet at
large distances, leading to the development of recirculating cells aligned
with the direction of stratification, and wherein the motion is predominantly
horizontal. Interestingly and perhaps surprisingly (if one’s intuition is
based on the cellular disturbance flow fields set up by internal gravity waves in
an unbounded stratified ambient), the far-field analysis in the next
subsection shows the number of such cells to be finite, likely on account of
the neglect of inertial/convection effects.
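As an illustration of the quadrature evaluation just described, the following sketch evaluates (18) on a tensor-product Gauss-Legendre grid; the truncation $k_{max}$ and the resolutions are illustrative choices of ours, not the values used for the figures:

```python
# A sketch (not the authors' production code) of the Gauss-Legendre
# evaluation of (18); the integrand decays as k^{-9/2} once the Stokeslet
# is separated out, so a modest truncation k_max suffices.
import numpy as np
from scipy.special import j0

def u_z_bar(rt, z, n_k=400, n_theta=200, k_max=60.0):
    xk, wk = np.polynomial.legendre.leggauss(n_k)
    k = 0.5 * k_max * (xk + 1.0)        # map [-1,1] -> [0, k_max]
    wk = 0.5 * k_max * wk
    xt, wt = np.polynomial.legendre.leggauss(n_theta)
    th = 0.5 * np.pi * (xt + 1.0)       # map [-1,1] -> [0, pi]
    wt = 0.5 * np.pi * wt
    K, T = np.meshgrid(k, th, indexing="ij")
    integrand = (np.sin(T)**5 * j0(K * rt * np.sin(T))
                 * np.cos(K * z * np.cos(T))) / (K**4 + np.sin(T)**2)
    integral = np.einsum("i,j,ij->", wk, wt, integrand)
    stokeslet = -3.0 * (2.0 + rt**2 / z**2) / (
        4.0 * abs(z) * (1.0 + rt**2 / z**2) ** 1.5)
    return stokeslet + 3.0 / (2.0 * np.pi) * integral

print(u_z_bar(1.0, 2.0))  # axial disturbance velocity at (r_t, z) = (1, 2)
```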
Figure 1: (a) Streamlines and (b) Iso-pycnals for a translating sphere in a
linearly stratified fluid in the diffusion dominant limit ($Pe=0$); in the
point-particle approximation used, the sphere is at the origin and moving
vertically downward.
#### 3.1.1 Far-field analysis
At large distances, as already mentioned, one expects the motion to be largely
in the horizontal direction. As a consequence, one expects the characteristic
length scale in the vertical direction to be much smaller than that along the
horizontal - this is already evident from the rather small aspect ratios of
the recirculating cells in figure 1. Thus, the Fourier integrals in (9) and
(10), for length scales large compared to $O(Ri_{v}Pe)^{-1/4}$, may be
simplified using $k_{3}\gg k_{t}$, leading to:
$\mathbf{\bar{u}}(\mathbf{\bar{r}})=-\frac{3}{4\pi^{2}}\int\frac{k^{4}(\mathbf{1_{z}}-\frac{k_{3}\mathbf{k}}{k^{2}})}{(k_{3}^{6}+k_{t}^{2})}e^{i\mathbf{k}.\mathbf{\bar{r}}}d\mathbf{k},$
(20)
${\bar{\rho_{f}}}(\mathbf{\bar{r}})=-\frac{3}{4\pi^{2}}\int\frac{k_{t}^{2}}{(k_{3}^{6}+k_{t}^{2})}e^{i\mathbf{k}.\mathbf{\bar{r}}}d\mathbf{k},$
(21)
which may, via contour integration in the complex-$k_{3}$ plane, be reduced to
one-dimensional integrals written in terms of the similarity variable
$\eta=\frac{\bar{z}}{\bar{r}_{t}^{\frac{1}{3}}}$; see Appendix A for details.
These integrals are only functions of $\lvert\eta\rvert$, and are given by:
$\bar{u}_{z}=\frac{-9i}{\bar{r}_{t}^{3}}\int_{0}^{\infty}p^{8}J_{0}(p^{3})\left(lq_{1}^{2}e^{iq_{1}p\lvert\eta\rvert}+mq_{2}^{2}e^{iq_{2}p\lvert\eta\rvert}+nq_{3}^{2}e^{iq_{3}p\lvert\eta\rvert}\right)dp,$
(22)
$\bar{\rho}_{f}=\frac{-9i}{\bar{r}_{t}^{\frac{7}{3}}}\int_{0}^{\infty}p^{6}J_{0}(p^{3})\left(le^{iq_{1}p\lvert\eta\rvert}+me^{iq_{2}p\lvert\eta\rvert}+ne^{iq_{3}p\lvert\eta\rvert}\right)dp,$
(23)
$\bar{u}_{{r_{t}}}=\frac{-9\operatorname{sgn}{(\eta)}}{\bar{r}_{t}^{\frac{7}{3}}}\int_{0}^{\infty}p^{6}J_{1}(p^{3})\left(lq_{1}^{3}e^{iq_{1}p\lvert\eta\rvert}+mq_{2}^{3}e^{iq_{2}p\lvert\eta\rvert}+nq_{3}^{3}e^{iq_{3}p\lvert\eta\rvert}\right)dp.$
(24)
The above self-similar forms point to the existence of a thin axisymmetric
wake bracketing the horizontal plane containing the settling sphere, in the
far-field, whose vertical extent grows as
$z\propto(Ri_{v}Pe)^{-\frac{1}{6}}{r}_{t}^{\frac{1}{3}}$, where $z$ and
$r_{t}$ are now in units of $a$; the disturbance fields are negligibly small
outside the wake. Even within the wake, it can be seen from (22-24) that the
velocity field decays more rapidly than the $O(1/r)$ Stokeslet, reinforcing
the fact that buoyancy forces screen the originally long-ranged Stokesian
fields. Further, the velocity and density fields in the diffusion-dominant
limit are fore-aft symmetric, as can be seen from the above expressions, and
as is evident from figure 1. The one-dimensional integrals in (22-24) are
readily evaluated by numerical integration, and furthermore, the large-$\eta$
asymptotes, obtained by using the small-argument asymptote for the Bessel
function in the integrands, are given by
$\bar{u}_{z}\approx\frac{181440}{\bar{r}_{t}^{3}\lvert\eta\rvert^{9}}$,
$\bar{u}_{{r_{t}}}\approx\operatorname{sgn}{(\eta)}\frac{816480}{\bar{r}_{t}^{7/3}\lvert\eta\rvert^{10}}$,
and
$\bar{\rho}_{f}\approx\frac{-3240}{\bar{r}_{t}^{7/3}\lvert\eta\rvert^{7}}$.
The comparison between the one-dimensional profiles of the axial velocity
field, obtained from the exact calculations above (that led to the streamline
pattern in figure 1), and those obtained from the far-field self-similar
approximation given by (22), is shown in figure 2 for various $\bar{r}_{t}$’s.
Based on the self-similar form given by (22), the figures plot
$\bar{r}_{t}^{3}\bar{u}_{z}$ as a function of $|\eta|$, as a result of which
the far-field approximation shown in the figures remains invariant to a change
in $\bar{r}_{t}$. In the log-log plots shown, the zero-crossings of the axial
velocity (which roughly correlate to the boundaries between recirculating
cells) appear as sharp dips (to negative infinity). While there exist
significant differences between the numerical and far-field predictions for
$\bar{r}_{t}$’s of order unity, the agreement improves with increasing
$\bar{r}_{t}$, and there is near-quantitative agreement for the largest
$\bar{r}_{t}\,(=25)$. Importantly, the number of zero crossings (eight) in the
exact field appears independent of $\bar{r}_{t}$, and is the same as that in
the far-field approximation; note that the streamline pattern in figure 1
includes only three of the eight zero crossings for $\bar{r}_{t}=25$. The
finite number of zero crossings seen in figure 2, as mentioned above, points
to a finite number of recirculating cells in the outer region. Finally, for
$\lvert\eta\rvert$’s greater than that corresponding to the final zero crossing,
the axial velocity profiles conform to the algebraic asymptote given above,
viz.
$\bar{r}_{t}^{3}\bar{u}_{z}\approx\frac{181440}{\lvert\eta\rvert^{9}}$,
shown as the dashed orange line in figure 2. A scenario analogous to that
described above prevails for the density disturbance field.
Figure 2: The axial velocity profiles in the diffusion-dominant limit
($Pe=0$): comparison between the exact numerical profiles and the far-field
approximation (given by (22)) for various $\bar{r}_{t}$’s; in each of the
plots, the large-$\eta$ analytical asymptote is shown as a dashed orange line.
As one approaches the translation axis, that is, for $\bar{r}_{t}\rightarrow
0$, $\eta$ becomes asymptotically large for any finite $\bar{z}$, and only the
large-$\eta$ asymptotes are of relevance. On substituting for $\eta$, the
large-$\eta$ asymptotes for the axial velocity and density fields above are
seen to be independent of $\bar{r}_{t}$, being functions of only $|\bar{z}|$,
suggesting that these asymptotes remain valid far-field (large $|\bar{z}|$)
approximations even along the translation axis (the stagnation streamline).
The radial velocity is, of course, zero along the stagnation streamline, with
the large-$\eta$ approximation given above being $O(\bar{r}_{t})$ for small
$\bar{r}_{t}$. In figure 3, we compare the exact axial velocity field for
$\bar{r}_{t}=0$, again obtained numerically, with the large-$\eta$ asymptote
that is now proportional to $\bar{z}^{-9}$. Although the locations of the
(seven) zero-crossings of the exact profile can no longer be predicted, the
far-field algebraic decay nevertheless conforms to the asymptote above. It is
worth noting that the large-$\bar{z}$ asymptote may also be obtained by directly
setting $\bar{r}_{t}=0$ in the exact expression for the axial velocity,
giving:
$u_{z}=\frac{-1}{4\pi z}+\frac{1}{2\pi^{2}}\int_{0}^{\pi/2}d\theta\int_{0}^{\infty}dk\;\frac{\sin^{5}\theta\cos(kz\cos\theta)}{k^{4}+\sin^{2}\theta},$
which in turn may be reduced to the following one-dimensional integral using
residue integration:
$u_{z}=\frac{-1}{4\pi
z}+\frac{1}{4\pi}\int_{0}^{\pi/2}d\theta\;e^{-z\cos\theta\sqrt{\frac{\sin\theta}{2}}}\cos(z\cos\theta\sqrt{\frac{\sin\theta}{2}}-\frac{\pi}{4})\sin^{7/2}\theta,$
(25)
a reduction only possible for $\bar{r}_{t}=0$. For large $\bar{z}$, the
dominant contributions to the above integral arise from the neighborhood of
the zeroes of $\cos\theta\sqrt{\frac{\sin\theta}{2}}$, that is, $\theta=0$
and $\theta=\frac{\pi}{2}$. The contribution from $\theta=\pi/2$ exactly
cancels the Stokeslet contribution (the first term in (25)). The second order
contribution from $\theta=\pi/2$, and the leading order contribution from
$\theta=0$, together, lead to the large-$\bar{z}$ asymptote above, which was
originally given in Fouxon & Leshansky (2014).
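The on-axis expression (25) is simple enough to evaluate directly; the sketch below (ours) does so with adaptive quadrature and estimates the decay exponent from two large-$\bar{z}$ samples. Since (25) is written in a normalization that differs from (16)-(18), only the exponent (expected to approach $-9$ beyond the final zero crossing), and not the prefactor, is examined; the sampling points chosen here are illustrative and may need adjustment:

```python
# Hedged sketch: evaluate (25) and estimate the large-z decay exponent.
import numpy as np
from scipy.integrate import quad

def u_z_axis(z):
    def f(th):
        s = np.cos(th) * np.sqrt(np.sin(th) / 2.0)
        return np.exp(-z * s) * np.cos(z * s - np.pi / 4.0) * np.sin(th) ** 3.5
    val, _ = quad(f, 0.0, np.pi / 2.0, limit=200)
    return -1.0 / (4.0 * np.pi * z) + val / (4.0 * np.pi)

z1, z2 = 40.0, 80.0  # intended to lie beyond the zero crossings
slope = np.log(abs(u_z_axis(z2)) / abs(u_z_axis(z1))) / np.log(z2 / z1)
print(slope)  # should approach -9 in the algebraic-decay regime
```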
Figure 3: Axial velocity, as a function of $\bar{z}$, for $\bar{r}_{t}=0$ (the
translation axis), in the diffusion dominant limit ($Pe=0$); the
large-$\bar{z}$ asymptote is shown as a dashed orange line.
### 3.2 Convection dominant limit ($Pe\gg 1$)
The Fourier integrals in the convection dominant limit are given by (14) and
(15), and their simplification is analogous to the diffusion dominant case
above. In a spherical coordinate system aligned with the translation
direction, and after integration over the azimuthal angle, the residual two-
dimensional integrals for the disturbance fields are given by:
$\tilde{u}_{z}=\frac{-3(2+\frac{\tilde{r}_{t}^{2}}{\tilde{z}^{2}})}{4\lvert\tilde{z}\rvert(1+\frac{\tilde{r}_{t}^{2}}{\tilde{z}^{2}})^{\frac{3}{2}}}+\frac{3}{2\pi}\int_{0}^{\infty}dk\int_{0}^{\pi}d\theta\frac{\sin^{5}\theta J_{0}(k\tilde{r}_{t}\sin\theta)e^{ik\tilde{z}\cos\theta}}{(ik^{3}\cos\theta+\beta_{\infty}k^{4}+\sin^{2}\theta)},$
(26)
$\tilde{\rho_{f}}=\frac{-3}{2\pi}\int_{0}^{\infty}dk\int_{0}^{\pi}d\theta\frac{k^{2}\sin^{3}\theta
J_{0}(k\tilde{r}_{t}\sin\theta)e^{ik\tilde{z}\cos\theta}}{(ik^{3}\cos\theta+\beta_{\infty}k^{4}+\sin^{2}\theta)},$
(27)
where, as for the diffusion-dominant case, we have separated out the Stokeslet
contribution in (26) in the interests of numerical convergence. The Stokes
streamfunction can be derived from the axial velocity as before and is given
by
$\tilde{\psi}_{s}=\frac{-3\tilde{r}_{t}^{2}}{4(\tilde{r}_{t}^{2}+\tilde{z}^{2})^{\frac{1}{2}}}+\frac{3\tilde{r}_{t}}{2\pi}\int_{0}^{\infty}dk\int_{0}^{\pi}d\theta\frac{\sin^{4}\theta J_{1}(k\tilde{r}_{t}\sin\theta)e^{ik\tilde{z}\cos\theta}}{k(ik^{3}\cos\theta+\beta_{\infty}k^{4}+\sin^{2}\theta)}.$
(28)
Note from (26) and (27) that, although our interest is in the limit
$\beta_{\infty}=0$, corresponding to convection effects being infinitely
dominant, we have nevertheless retained the terms proportional to
$\beta_{\infty}$ in the Fourier integrands. This is because, on one hand,
numerical convergence in the convection-dominant limit is considerably more
difficult; a small but finite $\beta_{\infty}$ aids convergence of the
quadrature integration especially at large distances from the sphere, and over
most of the domain, as is evident from figure 4 where we compare the
numerically evaluated axial velocity profiles for $\beta_{\infty}=0$ and
$10^{-5}$ for a varying number of quadrature points. The detailed explanation of
the nature of this profile appears later, but it may nevertheless be seen that
the $\beta_{\infty}=0$ profile deviates from the true profile, asymptoting to
a spurious plateau beyond a certain $\tilde{z}$. There is only a modest effect
of quadrature resolution on this threshold $\tilde{z}$, and as a result, for
$\beta_{\infty}=0$, the eventual algebraic decay regime remains numerically
inaccessible regardless of the number of quadrature points. On the other hand,
and more importantly, the structure of both the velocity and density fields
behind the translating sphere, in the vicinity of the rear stagnation
streamline, depends crucially on $\beta_{\infty}$ being non-zero; the density
field in particular is logarithmically singular on the rear stagnation
streamline for $\beta_{\infty}=0$.
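A minimal sketch of the comparison underlying figure 4 is given below; it evaluates (26) on the rear stagnation streamline ($\tilde{r}_{t}=0$, where $J_{0}=1$) for $\beta_{\infty}=0$ and $10^{-5}$. The resolution parameters are illustrative choices of ours; considerably higher resolutions would be needed to faithfully reproduce the figures:

```python
# Sketch of the beta_inf comparison on the rear stagnation streamline:
# the beta_inf = 0 quadrature result asymptotes to a spurious plateau at
# large z, while a small finite beta_inf restores convergence.
import numpy as np

def u_z_axis_convective(z, beta, n_k=2000, n_theta=400, k_max=80.0):
    xk, wk = np.polynomial.legendre.leggauss(n_k)
    k = 0.5 * k_max * (xk + 1.0); wk = 0.5 * k_max * wk
    xt, wt = np.polynomial.legendre.leggauss(n_theta)
    th = 0.5 * np.pi * (xt + 1.0); wt = 0.5 * np.pi * wt
    K, T = np.meshgrid(k, th, indexing="ij")
    den = 1j * K**3 * np.cos(T) + beta * K**4 + np.sin(T)**2
    num = np.sin(T)**5 * np.exp(1j * K * z * np.cos(T))  # J0 = 1 for r_t = 0
    integral = np.einsum("i,j,ij->", wk, wt, num / den).real
    return -3.0 / (2.0 * abs(z)) + 3.0 / (2.0 * np.pi) * integral

for z in [10.0, 30.0, 100.0]:
    print(z, u_z_axis_convective(z, 0.0), u_z_axis_convective(z, 1e-5))
```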
Figure 4: The comparison given highlights the importance of weak diffusive
effects (small but finite $\beta_{\infty}$) in obtaining an accurate
representation of the disturbance fields in the convection-dominant limit. The
profiles for $\beta_{\infty}=0$ asymptote to a spurious plateau regardless of
$N$; here, $N$ represents the number of quadrature points used for numerical
integration.
Figure 5 shows the streamline pattern and the isopycnal contours for the
smallest $\beta_{\infty}\,(=10^{-5})$ accessed in our calculations. The
limited spatial extent here, in comparison to figure 1, is on account of the
numerical difficulties involved in calculating the far-field isopycnals; the
streamline pattern alone, over a larger spatial extent, appears below in
figure 11a. Nevertheless, the profound asymmetry of both the streamline and
iso-pycnals patterns is readily evident. This asymmetry may be anticipated
from the integral expressions in (26-28) where, unlike the
$Pe=0\,(\beta_{\infty}=\infty)$ limit, one may no longer replace the complex
exponential by a cosine. Apart from the different shapes and numbers of the
recirculating cells in front of and behind the translating sphere, evident
from the figure, there is also the appearance of a radially localized but
vertically extended structure, in the streamline pattern, in the rear. As will
be seen, this corresponds to a buoyant reverse jet that develops behind the
particle with decreasing $\beta_{\infty}$. The far-field analysis below points
both to a stratification-induced wake in the convection-dominant limit, with a
structure that is insensitive to $\beta_{\infty}$ (for $\beta_{\infty}\ll 1$),
and to the buoyant reverse jet mentioned above, whose structural features depend
essentially on $\beta_{\infty}$; these are analyzed in separate subsections.
Figure 5: (a) Streamlines and (b) Iso-pycnals for a translating sphere in a
linearly stratified fluid, in the convection dominant limit
($\beta_{\infty}=10^{-5}$), in the Stokes stratification regime ($Re\ll
Ri_{v}^{\frac{1}{3}}$); in the point-particle approximation used, the sphere
is at the origin and moving vertically downward.
#### 3.2.1 Far-field wake analysis
Similar to the diffusion-dominant case analyzed in section 3.1.1, the expected
dominance of horizontal motion for distances large compared to
$Ri_{v}^{-\frac{1}{3}}$ points to the assumption $k_{3}\gg k_{t}$ being
applicable to the Fourier integrals in (14) and (15), when characterizing
fluid motion in a far-field wake region. The original Fourier integrals, in
this limit, reduce to:
$\mathbf{\tilde{u}}(\mathbf{\tilde{r}})=\frac{-3}{4\pi^{2}}\int\frac{ik_{3}k_{t}^{2}}{(ik_{3}^{5}+k_{t}^{2})}e^{i\mathbf{k}.\mathbf{\tilde{r}}}d\mathbf{k},$
(29)
${\tilde{\rho_{f}}}(\mathbf{\tilde{r}})=\frac{-3}{4\pi^{2}}\int\frac{k_{t}^{2}}{ik_{3}^{5}+k_{t}^{2}}e^{i\mathbf{k}.\mathbf{\tilde{r}}}d\mathbf{k},$
(30)
where we have set $\beta_{\infty}=0$ which, as will be seen, is justified
everywhere in the domain except in the vicinity of the rear stagnation
streamline. The integrals in (29) and (30) may be reduced to the following
one-dimensional integrals, written in terms of the similarity variable
$\eta=\frac{\tilde{z}}{\tilde{r_{t}}^{\frac{2}{5}}}$, via contour integration
(see Appendix B for details):
$\displaystyle\tilde{u}_{z}$
$\displaystyle=-\frac{15i}{2\tilde{r_{t}}^{14/5}}\int_{0}^{\infty}m^{6}J_{0}[m^{5/2}][Q_{1}q_{1}e^{iq_{1}m\eta}+Q_{2}q_{2}e^{iq_{2}m\eta}+Q_{3}q_{3}e^{iq_{3}m\eta}]dm\textrm{
}(\textrm{for }\tilde{\eta}>0),$
$\displaystyle=\frac{15i}{2\tilde{r_{t}}^{14/5}}\int_{0}^{\infty}m^{6}J_{0}[m^{5/2}][Q_{4}q_{4}e^{iq_{4}m\eta}+Q_{5}q_{5}e^{iq_{5}m\eta}]dm\textrm{
}(\textrm{for }\tilde{\eta}<0),$ (31) $\displaystyle\tilde{\rho_{f}}$
$\displaystyle=-\frac{15}{2\tilde{r}_{t}^{12/5}}\int_{0}^{\infty}m^{5}J_{0}[m^{5/2}][Q_{1}e^{iq_{1}m\eta}+Q_{2}e^{iq_{2}m\eta}+Q_{3}e^{iq_{3}m\eta}]dm\textrm{
}(\textrm{for }\tilde{\eta}>0),$
$\displaystyle=\frac{15}{2\tilde{r_{t}}^{12/5}}\int_{0}^{\infty}m^{5}J_{0}[m^{5/2}][Q_{4}e^{iq_{4}m\eta}+Q_{5}e^{iq_{5}m\eta}]dm\textrm{
}(\textrm{for }\tilde{\eta}<0),$ (32) $\displaystyle\tilde{u}_{r_{t}}$
$\displaystyle=-\frac{15}{2\tilde{r_{t}}^{11/5}}\int_{0}^{\infty}m^{9/2}J_{1}[m^{5/2}][Q_{1}q_{1}^{2}e^{iq_{1}m\eta}+Q_{2}q_{2}^{2}e^{iq_{2}m\eta}+Q_{3}q_{3}^{2}e^{iq_{3}m\eta}]dm\textrm{
}(\textrm{for }\tilde{\eta}>0),$
$\displaystyle=\frac{15}{2\tilde{r_{t}}^{11/5}}\int_{0}^{\infty}m^{9/2}J_{1}[m^{5/2}][Q_{4}q_{4}^{2}e^{iq_{4}m\eta}+Q_{5}q_{5}^{2}e^{iq_{5}m\eta}]dm\textrm{
}(\textrm{for }\tilde{\eta}<0).$ (33)
Here, the $Q_{n}$’s and $q_{n}$’s ($n=1,2,3,4,5$) are complex-valued constants
given in Appendix B. The fore-aft asymmetry implies that one has different
asymptotic approximations depending on the sign of $\tilde{\eta}$ (or
$\tilde{z}$). Nevertheless, the above self-similar forms point to a far-field
wake, that includes the horizontal plane containing the settling sphere, and
whose vertical extent grows as $z\propto
Ri_{v}^{-\frac{1}{5}}{r}_{t}^{\frac{2}{5}}$, with $z$ and $r_{t}$ being
measured in units of $a$. The axial and radial velocity profiles, and the
density disturbance profiles, obtained from a numerical evaluation of the one-
dimensional integrals above, are shown both on the linear and logarithmic
scales in figure 6. The logarithmic plot shows that while there are still only
a finite number of zero crossings, similar to $Pe=0$, they differ in number
for negative and positive $\tilde{\eta}$, with fewer zero crossings for
negative $\tilde{\eta}$. This implies fewer recirculating cells below the
settling sphere, and is consistent with the streamline pattern in figure 5.
Similar to the diffusion-dominant limit, one may obtain the large-$\eta$
asymptotic forms from (31)-(33), which govern the eventual algebraic decay
of the disturbance fields beyond the final zero crossing; these are given by
$[\frac{3240}{r_{t}^{\frac{14}{5}}\tilde{\eta}^{7}},-\frac{2160}{r_{t}^{\frac{14}{5}}\tilde{\eta}^{7}}]$
for the axial velocity,
$[\frac{11340}{r_{t}^{\frac{11}{5}}\tilde{\eta}^{8}},-\frac{7560}{r_{t}^{\frac{11}{5}}\tilde{\eta}^{8}}]$
for the radial velocity, and
$[-\frac{540}{r_{t}^{\frac{12}{5}}\tilde{\eta}^{6}},\frac{360}{r_{t}^{\frac{12}{5}}\tilde{\eta}^{6}}]$
for the density disturbance, with the first and second members of each ordered
pair corresponding to positive and negative $\tilde{\eta}$, respectively.
These asymptotes, and the approximate profiles based on the one-dimensional
integrals above, will be compared to the exact numerically evaluated
disturbance fields below.
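The 3/2 split of the exponentials in (31)-(33) reflects the pole structure of the denominator $ik_{3}^{5}+k_{t}^{2}$ in (29)-(30): the $k_{3}$-poles satisfy $k_{3}^{5}=ik_{t}^{2}$, and three of the five roots lie in the upper half-plane (contributing for $\tilde{\eta}>0$), two in the lower (for $\tilde{\eta}<0$). The snippet below is ours; the precise constants $q_{n}$ and residue weights $Q_{n}$ are those of Appendix B, which we do not reproduce. It illustrates the split for $k_{t}=1$:

```python
# Pole split behind the contour integration leading to (31)-(33):
# roots of q^5 = i (i.e., k3^5 = i*kt^2 with kt = 1).
import numpy as np

q = np.exp(1j * (np.pi / 2 + 2 * np.pi * np.arange(5)) / 5)
assert np.allclose(q**5, 1j)
print("upper half-plane (eta > 0):", q[q.imag > 0])   # three roots
print("lower half-plane (eta < 0):", q[q.imag < 0])   # two roots
```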
The structure of the far-field wake may also be characterized in terms of the
$\tilde{\eta}$-moments of the disturbance fields above. The motion being
largely horizontal, it is the moments of the radial velocity field that are
the most important. A calculation using the far-field approximation above
(equation (33)) shows that the zeroth and first moments of the radial
velocity field in the wake vanish, and that the second moment, given by
$\int_{-\infty}^{\infty}\tilde{\eta}^{2}\tilde{u}_{r_{t}}d\tilde{\eta}=6$, is
the first non-trivial member of the moment hierarchy (interestingly, this may
also be seen from direct neglect of the viscous term $ik_{3}k^{4}$ in the
original Fourier integral (14), and additionally setting $\beta_{\infty}=0$;
the radial velocity may now be obtained in terms of generalized functions as
$\tilde{u}_{r_{t}}=\frac{3}{\tilde{r}_{t}}\delta^{\prime\prime}(z)$, which
yields the same value for the second moment). The moment-based
characterization above offers an interesting contrast to the known solution
for the motion induced by a sphere settling through a linearly stratified
ambient in the linearized inviscid approximation, when stratification forces
are (infinitely) dominant. As shown in Vladimirov & Il'in (1991), the motion
is strictly horizontal and restricted to an infinite horizontal slab whose
upper and lower planes bound the sphere. Within this slab, the fluid moves
radially inward (outward) in the rear (front) half of the translating sphere.
The nature of this motion is easily understood from the changing size of the
sphere cross-section in a given horizontal plane, and the requirement of
satisfying the impenetrability condition at the sphere surface. In two
dimensions (that is, a settling cylinder), the horizontal velocity field is a
constant, while in three dimensions (a settling sphere), the motion would have
a $1/r_{t}$-dependence consistent with incompressibility. Such a motion
has a dipolar character with a non-trivial first moment for the
radial velocity. In contrast, as already seen, the structure of the far-field
wake above does not exhibit the aforementioned structure. This is because
although the Stokeslet in the inner region has a radial component consistent
with the symmetry of the linearized inviscid solution above (directed inward
behind the sphere and outward in front of it), the force associated with the
Stokeslet is screened by the buoyancy forces induced by the density
perturbation, in a surrounding volume with a dimension of
$O(Ri_{v}^{-\frac{1}{3}})$. As a result, the wake velocity field on length
scales much larger than $O(Ri_{v}^{-\frac{1}{3}})$, has the symmetry
pertaining to a force-dipole consisting of the original Stokeslet and an
effective upward force arising from the aforementioned volumetric distribution
of induced buoyancy forces.
Figure 6: The axial velocity, the radial velocity and density disturbance
profiles, within the far-field wake region, in the convection dominant limit
($Pe\gg 1$) pertaining to the Stokes stratification regime: $(a)$ the
disturbance fields on a linear scale; the absolute value of the disturbance
fields on a logarithmic scale for (b) negative $\tilde{\eta}$ and $(c)$ for
positive $\tilde{\eta}$; here, $n=14/5$, $11/5$ and $12/5$ for $u_{z}$,
$u_{r_{t}}$ and $\rho_{f}$, respectively. The aforementioned wake includes the
plane of the settling sphere, and grows in vertical extent as $z\propto
Ri_{v}^{-\frac{1}{5}}{r}_{t}^{\frac{2}{5}}$.
#### 3.2.2 Far-field jet analysis
As for the diffusion-dominant case, the large-$\eta$ asymptotes for the axial
velocity and density disturbance fields in the convection-dominant limit,
given above, are seen to be independent of $\tilde{r}_{t}$, with the radial
disturbance field being $O(\tilde{r}_{t})$ for $\tilde{r}_{t}\rightarrow 0$. Thus,
one expects the large-$\eta$ asymptotes to continue to remain valid at
sufficiently large distances (large $\tilde{z}$) along the stagnation
streamline ($\tilde{z}=0$). This remains true for the front stagnation
streamline, with $\tilde{u}_{z}=-\frac{2160}{\tilde{z}^{7}}$ and
$\rho_{f}=\frac{360}{\tilde{z}^{6}}$ for large negative $\tilde{z}$. Although
we don’t go into any detail, these far-field asymptotes may also be derived
directly from the exact expressions via residue integration, as seen in (25)
for $Pe=0$.
The wake approximation in the earlier subsection, and therefore, the
large-$\eta$ approximations derived from it, are no longer valid in the
vicinity of the rear stagnation streamline. The pronounced asymmetry in the
streamline pattern in figure 5, and the predominantly vertical motion behind
the sphere, are already indicative of the breakdown of the wake
approximation. The neighborhood of the rear stagnation streamline, at large
distances, corresponds to large positive $\tilde{z}$ and small
$\tilde{r}_{t}$, which in Fourier space is equivalent to $k_{3}\ll k_{t}$,
the opposite of the wake-approximation developed above. This reduces the
original Fourier integrals to the following approximate forms:
$\mathbf{\tilde{u}}(\mathbf{\tilde{r}})=-\frac{6\pi
i}{8\pi^{3}}\int\frac{k_{3}k_{t}^{2}(\mathbf{1_{z}}-\frac{k_{3}\mathbf{k}}{k^{2}})}{(ik_{3}k_{t}^{4}+k_{t}^{2}+\beta_{\infty}k_{t}^{6})}e^{i\mathbf{k}.\mathbf{\tilde{r}}}d\mathbf{k},$
(34)
${\tilde{\rho_{f}}}(\mathbf{\tilde{r}})=-\frac{6\pi}{8\pi^{3}}\int\frac{k_{t}^{2}}{(ik_{3}k_{t}^{4}+k_{t}^{2}+\beta_{\infty}k_{t}^{6})}e^{i\mathbf{k}.\mathbf{\tilde{r}}}d\mathbf{k},$
(35)
where, unlike the wake-approximation above, we retain the $O(\beta_{\infty})$
terms in the integrands, in anticipation of the fact that the reverse jet we
find below has a structure that crucially depends on $\beta_{\infty}$ even in
the limit $\beta_{\infty}\ll 1$. The integrals in (34-35) can be further
simplified by contour integration in the complex-$k_{3}$ plane. From the
denominator of the integrand in (34-35) one notes that the only pole lies in
the upper half of the complex plane, being given by
$k_{3}=i\frac{\beta_{\infty}k_{t}^{4}+1}{k_{t}^{2}}$. This pole contributes
only for positive $z$, when one closes the contour via a semi-circle (of an
infinite radius) in the upper half of the plane. Performing the integral over
the azimuthal angle, and accounting for the contribution of the aforementioned
pole in the $k_{3}$-integration, the axial velocity and density disturbance
fields can be reduced to the following one-dimensional integrals:
$\displaystyle\tilde{u}_{z}=3\int_{0}^{\infty}\frac{J_{0}(k_{t}\tilde{r}_{t})e^{-\tilde{z}(\beta_{\infty}k_{t}^{2}+\frac{1}{k_{t}^{2}})}}{k_{t}^{3}}dk_{t},$
(36)
$\displaystyle\tilde{\rho}_{f}=-3\int_{0}^{\infty}\frac{J_{0}(k_{t}\tilde{r}_{t})e^{-\tilde{z}(\beta_{\infty}k_{t}^{2}+\frac{1}{k_{t}^{2}})}}{k_{t}}dk_{t}.$
(37)
For $\tilde{r}_{t}=0$, the integrals in (36) and (37) may be evaluated
analytically, giving:
$\displaystyle\tilde{u}_{z}=3\sqrt{\beta_{\infty}}K_{1}[2\sqrt{\beta_{\infty}}\tilde{z}],$
(38) $\displaystyle\tilde{\rho}_{f}=-3K_{0}[2\sqrt{\beta_{\infty}}\tilde{z}].$
(39)
Here, $K_{0}$ and $K_{1}$ are zeroth and first order modified Bessel functions
of the second kind, respectively. The crucial role of weak diffusion on the
jet structure, as characterized by (38) and (39), may now be seen. Rather
remarkably, on using the small-argument asymptote $K_{1}(z)\approx 1/z$ in the
limit $\beta_{\infty}\rightarrow 0$, (38) is found to be independent of
$\beta_{\infty}$ at leading order, reducing to
$\tilde{u}_{z}\approx\frac{3}{2\tilde{z}}$. This implies that the axial
velocity, although pointing in the reverse direction (that is, directed
opposite to the translating sphere), still decays as $O(1/z)$, analogous to a
Stokeslet, on length scales much larger than $O(Ri_{v}^{-\frac{1}{3}})$! In
contrast, on using the small-argument form $K_{0}(z)\approx-\ln z$, the density
disturbance given by (39) is seen to be logarithmically singular for
$\beta_{\infty}\rightarrow 0$ for any positive $\tilde{z}$, pointing to a
logarithmic singularity all along the rear stagnation streamline for
$Pe=\infty$. The far-field behavior in this jet region changes fundamentally
for any small but finite $\beta_{\infty}$. Now, there exists a second
screening length across which the buoyant jet transitions from the
$\frac{1}{z}$ decay above to a much faster exponential one, this arising from
the exponentially decaying forms of the large-argument asymptotes of the
modified Bessel functions above; likewise, the density disturbance transitions
from the logarithmic form above, again to a far-field exponential decay. From
(38) and (39), this second screening length is seen to be
$O(\beta_{\infty}^{-\frac{1}{2}})$, in units of $Ri_{v}^{-\frac{1}{3}}$, or
$O(Ri_{v}^{-\frac{1}{6}}Pe^{\frac{1}{2}})$ in units of $a$. The radial extent
of the jet region may be seen from the earlier expressions (34) and (35).
Setting $\beta_{\infty}=0$, one notes that $\tilde{z}\sim O(k_{t}^{2})\sim
O(\tilde{r}_{t}^{-2})$ for the argument of the exponential integrand to be of
order unity. Thus, the reverse-Stokeslet behavior above is valid in a region
with a radial extent $\tilde{r}_{t}\propto\tilde{z}^{-\frac{1}{2}}$ for
$\beta_{\infty}=0$, suggesting that the buoyant jet narrows as
$O(\tilde{z}^{-\frac{1}{2}})$, with increasing downstream distance, until the
effects of diffusion become important. As shown above, the diffusive smearing
of the jet, and the transition to an exponentially decaying reverse flow,
occurs across a second screening length of $O(\beta_{\infty}^{-\frac{1}{2}})$
when the jet has a width of $O(\beta_{\infty}^{\frac{1}{4}})$, both in units
of $Ri_{v}^{-\frac{1}{3}}$. Although the existence of a rearward jet is well
known for moderate Reynolds numbers from earlier computations (see Hanazaki
et al., 2009), its appearance has been primarily attributed to inertial
effects (see, for instance, Eames & Hunt, 1997). The existence of such a jet,
as predicted above in the Stokes stratification regime, therefore comes as a
surprise. It is also worth emphasizing that, unlike the usual case of the
laminar (or turbulent) wake or jet, the buoyant jet above conserves neither
momentum nor mass flux; the absence of a net mass flux implies that the
existence of a jet region doesn’t affect drift volume estimates (see section
4.3).
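Since (38) and (39) follow from (36) and (37) via the standard integral $\int_{0}^{\infty}t^{\nu-1}e^{-at-b/t}dt=2(b/a)^{\nu/2}K_{\nu}(2\sqrt{ab})$, a direct numerical verification is straightforward; the following check (ours, with illustrative parameter values) confirms the closed forms:

```python
# Numerical check that the on-axis jet integrals (36)-(37) reduce to the
# modified-Bessel closed forms (38)-(39).
import numpy as np
from scipy.integrate import quad
from scipy.special import kv

beta, z = 1e-3, 5.0
arg = 2.0 * np.sqrt(beta) * z
uz, _ = quad(lambda k: np.exp(-z * (beta * k**2 + 1.0 / k**2)) / k**3,
             0.0, np.inf, limit=400)
rho, _ = quad(lambda k: np.exp(-z * (beta * k**2 + 1.0 / k**2)) / k,
              0.0, np.inf, limit=400)
print(3.0 * uz, 3.0 * np.sqrt(beta) * kv(1, arg))   # (36) vs (38)
print(-3.0 * rho, -3.0 * kv(0, arg))                # (37) vs (39)
```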
Figures 7 and 8 show plots of the axial velocity and density disturbance
fields evaluated numerically at points along the stagnation streamline, based
on (26) and (27), with $\tilde{r}_{t}=0$. In figure 7, the right hand side plot
shows the transition of the axial velocity field, for negative $\tilde{z}$,
from an $O(1/\tilde{z})$ Stokeslet decay in the inner region, to the more
rapid $O(1/\tilde{z}^{7})$ decay of the large-$\eta$ asymptote derived earlier
(see section 3.2.1), on length scales greater than the (primary) screening
length. Note that this transition is accompanied by a reversal in direction,
as evident from the sharp dip around $|\tilde{z}|\approx 8.85$ in the aforementioned
logarithmic plot. Thus, the axial flow in the neighborhood of the front
stagnation streamline, and at distances larger than the screening length,
points towards the sphere. Importantly, the axial velocity profiles are
virtually coincident for $\beta_{\infty}\leq 10^{-2}$, implying that the flow
pattern in the vicinity of the front stagnation streamline converges to a
limiting form for $Pe\rightarrow\infty$ that is characterized by the primary
screening length of $O(Ri_{v}^{-\frac{1}{3}})$. In contrast, the plot on the
left hand side, for positive $\tilde{z}$, shows a transition from the inner
region Stokeslet decay to an eventual exponential decay at the largest
distances, with this transition being postponed to progressively larger
$\tilde{z}$ with decreasing $\beta_{\infty}$. For the smallest
$\beta_{\infty}$'s $(=10^{-4}$ and $10^{-5})$, one can see the
emergence of an intermediate asymptotic regime, corresponding to
$1\ll\tilde{z}\ll\beta_{\infty}^{-\frac{1}{2}}$, where the velocity conforms
to the reverse-Stokeslet behavior predicted above. Note that both the
Stokeslet and reverse-Stokeslet behavior appear as the same asymptote (the
black dot-dashed line), since the plot is for the absolute value of the
velocity field on a logarithmic scale, and the indication of the reversal in
direction is again the intervening sharp dip corresponding to
$\tilde{z}\approx 1.15$ $(z\approx 1.15\,Ri_{v}^{-\frac{1}{3}})$. The inset in
this plot shows that the axial velocity profiles collapse onto a universal
exponential behavior, when the ordinate and abscissa are rescaled with
$\beta_{\infty}^{\frac{1}{2}}$ and $\beta_{\infty}^{-\frac{1}{2}}$,
respectively, the latter corresponding to the axial distance being scaled by
the secondary screening length. This collapse is consistent with (38) above;
although, since the distance corresponding to the reversal in direction scales
with the primary screening length, the dips of the curves in the inset plot
are no longer coincident for varying $\beta_{\infty}$.
The plots in figure 8 again highlight the contrast between the density
disturbance fields along the front and rear stagnation streamlines. The plot
on the right hand side, for negative $\tilde{z}$, shows that the density
disturbance converges to a limiting form for $\beta_{\infty}\leq 10^{-2}$,
with an $O(1/\tilde{z}^{6})$ far-field decay, consistent with the large-$\eta$
asymptote obtained in section 3.2.1; although the numerics break down beyond
a critical $|\tilde{z}|$ that is a function of the number of quadrature points
used. In contrast, the left hand side plot shows that the density disturbance
transitions from a near-field plateau to a far-field exponential decay, with
this plateau increasing in magnitude logarithmically with decreasing
$\beta_{\infty}$, consistent with (39), precluding a collapse of the density
profiles for small $\beta_{\infty}$. The inset in this figure plots the
density profiles as a function of the rescaled abscissa,
$\beta_{\infty}^{\frac{1}{2}}\tilde{z}$, so as to highlight their collapse
onto a common curve (the modified Bessel function asymptote given by (39)).
The individual curves deviate from this common asymptote on account of the
near-field plateauing behavior, with this deviation occurring at a
progressively smaller distance with decreasing $\beta_{\infty}$; note that for
$\beta_{\infty}\rightarrow 0$, the said plateau regime becomes vanishingly
small, while the exponential decay is pushed off to infinitely large distances
(in units of the primary screening length), so the density field becomes
logarithmically singular all along the rear stagnation streamline.
Figure 7: The axial velocity field plotted along the stagnation streamline for
both positive (the LHS plot) and negative $\tilde{z}$ (the RHS plot), and for
different small $\beta_{\infty}$. In the LHS plot, both the Stokeslet and
reverse-Stokeslet asymptotes appear as the black dot-dashed line; the inset
shows the collapse of the far-field profiles onto a common curve, when plotted
as a function of $\beta_{\infty}^{\frac{1}{2}}\tilde{z}$, consistent with
(38). The RHS plot shows the transition from the near-field Stokeslet decay
(blue dash-dotted line) to the far-field decay given by
$-\frac{2160}{\tilde{z}^{7}}$ (the black dash-dotted line).
Figure 8: The density disturbance field plotted along the stagnation
streamline for both positive (LHS plot) and negative $\tilde{z}$ (the RHS
plot), and for different small $\beta_{\infty}$. The inset plot in the LHS
figure shows the collapse of the density disturbance profiles onto a common
far-field asymptote, given by (39), when plotted as a function of
$\beta_{\infty}^{\frac{1}{2}}\tilde{z}$. The RHS plot shows the
small-$\beta_{\infty}$ density profiles converging to a common limiting form
given by $\frac{360}{\tilde{z}^{6}}$; although in agreement with the far-field
asymptote, the numerical approximations (with $N=150,000$) break down for
large axial distances, with this breakdown being delayed the most for
$\beta_{\infty}=10^{-2}$.
#### 3.2.3 Comparison of numerical profiles with the far-field
approximations: Transition from the jet to the wake regimes
Having characterized the far-field approximations for the disturbance fields
in both the buoyant jet (section 3.2.2) and wake (section 3.2.1) regions, we
now compare the exact results for the axial velocity, obtained from a
numerical evaluation of (26) with $\beta_{\infty}=10^{-5}$, with these
approximations. The comparison is shown in figures 9 and 10 for negative and
positive $\tilde{\eta}$, respectively. Motivated by the self-similar one-
dimensional integral approximation given by (31), both figures plot
$\tilde{r}_{t}^{\frac{14}{5}}|\tilde{u}_{z}|$ as a function of $\tilde{\eta}$.
Only the wake-similarity profile (31) is relevant for negative
$\tilde{\eta}$, and is shown alongside the exact numerical profiles in figure
9 for different $\tilde{r}_{t}$, together with its large-$\eta$ asymptotic form
given by $2160/\tilde{\eta}^{7}$. The comparison here is similar to the
diffusion dominant case, the agreement being poor for small to order unity
$\tilde{r}_{t}$, with the number of zero crossings also being in disagreement,
but improving with increasing $\tilde{r}_{t}$. There is good agreement for
$\tilde{r}_{t}=6$, and almost a perfect match between the analytical and
numerical profiles for $\tilde{r}_{t}=25$.
The comparison for positive $\tilde{\eta}$ is more elaborate since one now has
both far-field wake and jet approximations in different regions of the half-
space. One expects the axial velocity profile to transition from a jet-like
profile to a wake-like one as one moves away from the rear stagnation
streamline, that is, for a fixed $\tilde{z}$ and with increasing
$\tilde{r}_{t}$. This is seen in figure 10 where the numerically determined
axial velocity profiles are shown for six different $\tilde{r}_{t}$'s ranging
from $0.05$ to $25$, together with the far-field wake and jet approximations
developed in the earlier two subsections. For the smallest
$\tilde{r}_{t}\,(=0.05)$, the exact calculation matches the far-field jet
approximation for $\tilde{z}$ greater than about $10$; for the chosen
$\beta_{\infty}$, this jet approximation is virtually identical (in magnitude)
to a Stokeslet decay over the range of $\tilde{\eta}$ examined. For the
aforementioned $\tilde{r}_{t}$, similar to figure 7, the numerical profile has a
zero-crossing at a smaller $\tilde{z}\approx 1.15$, and continues to diverge
at smaller $\tilde{z}$, in accordance with the expected Stokeslet behavior in
the inner region, with there being the beginning of a plateau at the smallest
$\tilde{z}$'s. For $\tilde{r}_{t}=0.25$, the plateauing tendency for small
$\tilde{z}$ is more readily evident, with there still being a good agreement
with the jet approximation for large $\tilde{z}$. The plateauing behavior
arises for any finite $\bar{r}_{t}$ since the disturbance velocity field is
now finite in the plane $\tilde{z}=0$; the continued divergence down to
$\tilde{z}=0$ only occurs along the rear stagnation streamline (see figure 7).
For $\tilde{r}_{t}$ values greater than unity, the exact profile starts to
agree better with the wake approximation, and for $\tilde{r}_{t}=25$ this
agreement is near-perfect, with the jet approximation being virtually
irrelevant. Although not shown, an analogous scenario prevails for the density
disturbance profiles.
From figures 7 and 10, one sees that although the axial velocity profile
exhibits only a single zero crossing along the rear stagnation streamline
(corresponding to the Stokeslet-reverse-Stokeslet transition for
$\beta_{\infty}=0$), the jet-approximation for any non-zero $\tilde{r}_{t}$
(the expression (36)) appears to exhibit a denumerably infinite number of zero
crossings, as evident from the plots in the latter figure for $\tilde{r}_{t}=6$
and $\tilde{r}_{t}=25$. The infinitely many zero-crossings suggest an infinite
number of recirculating cells in the region $\tilde{z}\gg\tilde{r}_{t}$,
$\tilde{z},\tilde{r}_{t}\gg 1$. Note that this conclusion is not necessarily
in conflict with the wake approximation, which has only a finite number of
zero-crossings, since the latter approximation is restricted to the region
$\tilde{z}\ll\tilde{r}_{t}$. Thus, although the self-similar profiles in the
wake predict an eventual algebraic decay, in reality, this decay might not
extend to indefinitely large distances, but instead with increasing
$\tilde{z}$, one will again have zero-crossings in the region
$\tilde{z}>\tilde{r}_{t}$. As of now, this is difficult to verify, given the
near-impossibility of accurate numerical evaluation at such large distances.
Nevertheless, and although not evident from figures 5 and 6, the implication
of the above argument is that the flow-field in the convection-dominant limit
exhibits an infinite number of recirculating cells (unlike the diffusion-
dominant limit).
Figure 9: The comparison, for negative $\tilde{z}$, between the numerically
evaluated axial velocity profile, and the far-field wake-approximation given
by (31), in the convection dominant limit, and in the Stokes stratification
regime ($Re\ll Ri_{v}^{\frac{1}{3}}$); the exact profile is obtained from a
numerical integration of (26) with $\beta_{\infty}=10^{-5}$.
Figure 10: The comparison, for positive $\tilde{z}$, between the numerically
evaluated axial velocity profile, and both the far-field jet and wake-
approximations given by (38) and (31), respectively. The profiles pertain
to the convection dominant limit and the Stokes stratification regime ($Re\ll
Ri_{v}^{\frac{1}{3}}$); the numerical profile is obtained from an integration
with $\beta_{\infty}=10^{-5}$.
Finally, figures 11 and 12 show the streamline and iso-pycnal patterns,
respectively, for $\beta_{\infty}$ varying over the range $10^{-5}-10$. The
departure of both patterns from fore-aft symmetry, with decreasing
$\beta_{\infty}$, is evident, with the buoyant jet clearly evident in the
streamline patterns for $\beta_{\infty}\leq 10^{-2}$. The spatial extent of
all the streamline patterns shown corresponds to $|\tilde{z}|,|\tilde{r}_{t}|\leq
20$, with these intervals measured in units of $Ri_{v}^{-\frac{1}{3}}$. For
$\beta_{\infty}=10^{-5}$, this implies that the streamline pattern includes
the first two zero crossings that appear in the large-$\tilde{r}_{t}$ axial
velocity profile in figure 10, while including both the zero crossings that
appear in the profiles in figure 9. Note that the length scale characterizing
the pattern changes from $Ri_{v}^{-\frac{1}{3}}$ to
$(Ri_{v}Pe)^{-\frac{1}{4}}$ with increasing $\beta_{\infty}$. In units of
$Ri_{v}^{-\frac{1}{3}}$, this corresponds to the characteristic length scale
increasing as $\beta_{\infty}^{\frac{1}{4}}$. Thus, for the same range in
$\tilde{z}$ and $\tilde{r}_{t}$, one samples a proportionately smaller region
of the streamline pattern with increasing $\beta_{\infty}$. This reduction in
the spatial extent is evident from a comparison of the streamline pattern for
$\beta_{\infty}=10$ to the one in figure 1. As seen in figure 12, the iso-
pycnals become heavily compressed and distorted for the smallest
$\beta_{\infty}$’s, in a manner consistent with the density disturbance having
a logarithmic singularity along the rear stagnation streamline
($\tilde{r}_{t}=0,\tilde{z}>0$) and, as a result, numerically resolving the iso-
pycnals becomes very difficult; this difficulty is reflected in the range of
accessible $\tilde{r}_{t}$ and $\tilde{z}$ in figure 12 progressively
decreasing with decreasing $\beta_{\infty}$ (this isn’t an issue for the
streamline patterns, given that the axial velocity remains finite along the
rear stagnation streamline even for $\beta_{\infty}=0$).
Figure 11: Streamline patterns, pertaining to the Stokes-stratification regime
(defined by the stratification screening length being the smallest of all
relevant scales), for various $\beta_{\infty}$. The first plot for
$\beta_{\infty}=10$ is in the diffusion-dominant limit and nearly fore-aft
symmetric; the plot for $\beta_{\infty}=10^{-5}$ shows the buoyant reverse jet
in the rear.
Figure 12: Isopycnals pertaining to the Stokes-stratification regime for
various $\beta_{\infty}$. The first plot for $\beta_{\infty}=10$ is in the
diffusion-dominant limit and nearly fore-aft symmetric; the plots for the
smallest $\beta_{\infty}$’s are suggestive of a developing singularity along
the rear stagnation streamline.
### 3.3 Effects of weak inertia or convection
In our calculations thus far, we have completely neglected the role of
inertia. This is equivalent to assuming the inertial screening length (of
$O(Re^{-1})$) to be much larger than the relevant stratification screening
length, the latter being $(Ri_{v}Pe)^{-\frac{1}{4}}$ for $Pe\ll 1$ and
$O(Ri_{v}^{-\frac{1}{3}})$ for $Pe\gg 1$, with this ordering of the screening
lengths corresponding to the Stokes stratification regime. With regard to the
calculations above, this is equivalent to setting $\alpha_{0}=0$ in (7) and
$\alpha_{\infty}=0$ in (12), for small and large $Pe$, respectively. While the
detailed calculation of the flow field in the presence of competing effects of
inertia and buoyancy is beyond the scope of the present manuscript, the effect
of weak inertia on the larger-scale structure of the velocity field may
nevertheless be inferred via simple scaling arguments.
We begin with the diffusion-dominant case, corresponding to $Pe\ll 1$ where,
for small but finite $\alpha_{0}$, the denominator of the Fourier integrals
for the disturbance fields, obtained from Fourier transforming (6)-(8), takes
the form
$\alpha_{0}\beta_{0}k_{3}^{2}k^{2}+\mathrm{i}k_{3}(\alpha_{0}+\beta_{0})k^{4}+k^{6}+k_{t}^{2}$,
with $k$ here being scaled by $(Ri_{v}Pe)^{\frac{1}{4}}$. Note that the term
proportional to $\beta_{0}k_{3}k^{4}$ denotes effects arising from the (weak)
convection of the density disturbance field, and is typically associated with
a screening length of $O(Pe^{-1})$ (Leal, 2007); the fore-aft asymmetry in the
far-field arising from this term alone was already seen in the streamline and
iso-pycnal patterns corresponding to the largest $\beta_{\infty}$’s in figures
11 and 12. Assuming buoyancy forces to first become important with increasing
distance from the settling sphere, we now know from section 3.1 that the
dominant motion is restricted to an axisymmetric wake on distances greater
than $O(Ri_{v}Pe)^{-\frac{1}{4}}$, and is primarily horizontal. Thus, in order
to examine inertia-induced transitions in the wake-scaling at larger
distances, one may set $k_{3}\gg k_{t}$, whence the aforementioned Fourier-
space expression takes the form
$\alpha_{0}\beta_{0}k_{3}^{4}+i(\alpha_{0}+\beta_{0})k_{3}^{5}+k_{3}^{6}+k_{t}^{2}$.
For $\alpha_{0}=\beta_{0}=0$, one obtains the balance $k_{3}^{6}\approx
k_{t}^{2}$, and the vertical extent of the aforesaid wake growing as
$z\propto(Ri_{v}Pe)^{-\frac{1}{6}}r_{t}^{\frac{1}{3}}$ (in units of $a$), as
shown in section 3.1. For $\alpha_{0},\beta_{0}$ small but finite, the
neglected terms invariably become important, and balance buoyancy forces
(instead of viscosity) on larger lengthscales, corresponding to smaller $k$’s.
For $\alpha_{0}\ll\beta_{0}$ (or $Re\ll Pe$), one obtains the balance
$\beta_{0}k_{3}^{5}\approx k_{t}^{2}$ beyond a radial length scale of
$O(Ri_{v}^{\frac{1}{2}}Pe^{-\frac{5}{2}})$; this balance is the same as that
in section 3.2.1, and therefore, implies a wake that grows as $z\propto
Ri_{v}^{-\frac{1}{5}}r_{t}^{\frac{2}{5}}$. Thus, even for $Pe\ll 1$, one
obtains the large-$Pe$ wake-scaling derived in section 3.2.1, but only beyond
the aforementioned secondary screening length. Finally, on the largest scales,
the leading order balance is between inertial and buoyancy forces, and takes
the form $\alpha_{0}\beta_{0}k_{3}^{4}\approx k_{t}^{2}$, leading to a growth
of $z\propto
Re^{\frac{1}{4}}Ri_{v}^{-\frac{1}{4}}r_{t}^{\frac{1}{2}}$
beyond a radial scale of $Re^{-\frac{5}{2}}Ri_{v}^{\frac{1}{2}}$ that may be
termed a tertiary screening length, again for $Re\ll Pe$. Thus, in the
diffusion-dominant limit, weak convection (small but finite $Pe$) and inertia
effects (small but finite $Re$) alter the far-field wake-scaling, causing it
to grow progressively faster beyond the screening lengths obtained above.
Although the difference in the growth exponents is marginal ($1/3\rightarrow
2/5\rightarrow 1/2$), one expects a more significant alteration of the wake
structure; the change in structure accompanying the first transition in growth
exponent ($1/3\rightarrow 2/5$) involves a departure from fore-aft symmetry,
and the details may already be inferred from sections 3.1.1 and 3.2.1.
Provided the stratification screening length, $(Ri_{v}Pe)^{-\frac{1}{4}}$,
remains the smallest of the three possible primary screening lengths viz.
$(Ri_{v}Pe)^{-\frac{1}{4}}$, $Re^{-1}$ and $Pe^{-1}$, an assumption that
defines the Stokes stratification regime for small $Pe$, the screening lengths
derived above remain well ordered under the assumption $Re\ll Pe$. If we allow
for convection and inertial effects to be small but of comparable importance,
so that $\alpha_{0}/\beta_{0}\sim O(1)$ (or $Re\sim Pe$), then the growth
exponents found above remain the same, but the secondary and tertiary
screening lengths are now given by $\max(Re,Pe)^{-3}(Ri_{v}Pe)^{\frac{1}{2}}$
and $(\min(Re,Pe)^{5}\max(Re,Pe))^{-\frac{1}{2}}(Ri_{v}Pe)^{\frac{1}{2}}$. A
schematic of the different wake-scaling regimes in the diffusion-dominant
limit is given in figure 13; fluid motion outside the wake remains negligibly
small.
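The screening lengths quoted above follow from simple exponent bookkeeping; as one example, the sketch below (ours) recovers the secondary screening length $O(Ri_{v}^{\frac{1}{2}}Pe^{-\frac{5}{2}})$ from the handover $k_{3}\sim\beta_{0}$ between the $k_{3}^{6}\approx k_{t}^{2}$ and $\beta_{0}k_{3}^{5}\approx k_{t}^{2}$ balances:

```python
# Exponent bookkeeping for the secondary screening length in the
# diffusion-dominant limit: the handover k3 ~ beta0 gives kt ~ beta0^3,
# i.e. a radial scale beta0**(-3) in units of (Ri_v*Pe)**(-1/4).
import sympy as sp

Pe, Riv = sp.symbols("Pe Ri_v", positive=True)
beta0 = Pe / (Riv * Pe) ** sp.Rational(1, 4)
r_t = beta0 ** (-3) * (Riv * Pe) ** sp.Rational(-1, 4)  # in units of a
print(sp.simplify(sp.expand_power_base(r_t)))  # -> Ri_v**(1/2)/Pe**(5/2)
```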
The analysis of inertial effects in the convection dominant limit, corresponding to
$Pe\gg 1$, is based on the same expression as that in the preceding paragraph,
except that $k$ is now scaled with $Ri_{v}^{\frac{1}{3}}$, and accordingly,
one has the form
$-\alpha_{\infty}k_{3}^{2}k^{2}+i\alpha_{\infty}\beta_{\infty}k_{3}k^{4}+\mathrm{i}k_{3}k^{4}+\beta_{\infty}k^{6}+k_{t}^{2}$.
Outside of the buoyant jet, one may neglect $\beta_{\infty}k^{6}$ and use
$k_{3}\gg k_{t}$ implying the dominance of horizontal motion. Setting
$\alpha_{\infty}=\beta_{\infty}=0$ then leads to the balance $k_{3}^{5}\approx
k_{t}^{2}$ which, as already seen in section 3.2.1, and again in the analysis
of the diffusion-dominant case above, yields the wake scaling $z\approx
Ri_{v}^{-\frac{1}{5}}r_{t}^{\frac{2}{5}}$. At length scales larger than a
radial threshold of $O(Re^{-\frac{5}{2}}Ri_{v}^{\frac{1}{2}})$, the balance is
between the inertial and buoyancy forces, leading to the same square-root
scaling $z\propto r_{t}^{\frac{1}{2}}$ seen above. Thus, the only difference
with regard to the wake-scalings, in relation to the diffusion-dominant limit
analyzed above, is that the initial scaling regime $z\propto
r_{t}^{\frac{1}{3}}$ is now absent, and one directly transitions to the
$z\propto r_{t}^{\frac{2}{5}}$ scaling regime at distances much greater than
the stratification screening length of $O(Ri_{v}^{-\frac{1}{3}})$. As already
mentioned in section 3.2.2, a novel feature in the large-$Pe$ regime is the
emergence of a buoyant jet that is smeared out by diffusion beyond a length
scale of $O(Ri_{v}^{-\frac{1}{6}}Pe^{\frac{1}{2}})$. Again, provided the
stratification screening length of $O(Ri_{v}^{-\frac{1}{3}})$ remains the
smallest of the primary screening lengths, the diffusion-screening length for
the jet is asymptotically smaller than the secondary screening length, of
$O(Re^{-\frac{5}{2}}Ri_{v}^{\frac{1}{2}})$ above. A schematic of the various
scaling regimes, in the convection-dominant limit, is given in figure 14.
Figure 13: Schematic of the different wake-growth regimes in the diffusion-
dominant limit ($Pe\ll 1$).
Figure 14: Schematic of various regions in the convection dominant limit for
non-zero Reynolds and Peclet numbers in the Stokes-stratification regime.
While the discussion in this manuscript has been restricted to the Stokes
stratification regime, we briefly mention the screening lengths relevant to
the inertia-stratification regime that, for large $Pe$, is defined by
$Ri_{v}^{\frac{1}{3}}\ll Re$, or $\alpha_{\infty}\gg 1$; the inertial
screening length of $O(Re^{-1})$ is now the primary screening length. For
$Re\ll 1$, the fore-aft symmetric flow field in the inner Stokesian region
first transitions, on scales of $O(Re^{-1})$, to a far-field structure
consisting of an $O(1/r^{2})$ source flow everywhere except for a viscous wake
behind the translating sphere that acts as a directed sink (Batchelor, 1967;
Subramanian, 2010). In terms of the Fourier-space expression given in the
preceding paragraph, the viscous wake corresponds to the balance
$\mathrm{i}k_{3}k^{4}\sim\alpha_{\infty}k_{3}^{2}k^{2}$, leading to the
familiar scaling $r_{t}\sim(z/Re)^{\frac{1}{2}}$ for the wake growth in
physical space. This source-wake structure is expected to be modified by
buoyancy forces when $k_{t}^{2}$ becomes comparable to the terms in the
aforementioned balance. This happens for $k\sim
O(\alpha_{\infty}^{-\frac{1}{2}})$, which gives a secondary screening length
of $O(Re^{\frac{1}{2}}Ri_{v}^{-\frac{1}{2}})$ in the inertia-stratification
regime (Zhang et al., 2019). The structure of the flow field on these length
scales is currently under investigation.
## 4 Conclusions
### 4.1 Summary of main results
We have analyzed in detail both the disturbance velocity and density fields
induced by a sphere translating in a linearly stratified ambient fluid
otherwise at rest. The analysis pertains to the Stokes stratification regime
when buoyancy forces are dominant over inertial ones, so the transition from
the well known Stokesian behavior, in the inner region, first occurs across a
screening length determined by a balance between viscous and buoyancy forces.
While we analyze the fluid motion in the diffusion-dominant limit (section 3.1), this scenario has also been the focus of earlier work (List, 1971; Ardekani & Stocker, 2010), and our main focus is therefore on the convection-dominant limit ($Pe\gg 1$), when the screening length is
$Ri_{v}^{-\frac{1}{3}}$. In the latter limit, and within the Stokes
stratification regime defined by $Re\ll Ri_{v}^{\frac{1}{3}}\ll 1$, we show
through both numerical integration (section 3.2) and asymptotic analysis
(section 3.2.1), that the far-field fluid motion consists of an axisymmetric
wake surrounding the sphere whose vertical extent grows as $z\propto
Ri_{v}^{-\frac{2}{15}}r_{t}^{\frac{2}{5}}$, and wherein the fluid motion is
predominantly horizontal; an analog of this wake also exists in the diffusion-dominant limit, in which case it grows as $z\propto(Ri_{v}Pe)^{-\frac{1}{6}}r_{t}^{\frac{1}{3}}$; $z$ and $r_{t}$ here being scaled by $a$. Although not obvious from the figures in earlier sections, the amplitude of fluid motion at a given non-dimensional distance
(measured in units of the relevant screening length) is significantly greater
for $Pe\gg 1$. In sharp contrast to the diffusion dominant limit, we have
shown (section 3.2.2) that there also exists a buoyant reverse jet in the
vicinity of the rear stagnation streamline for $Pe\gg 1$. Unlike the usual
laminar or turbulent jets which broaden with increasing distance downstream on
account of the momentum flux being conserved, the buoyant jet region above
narrows down with increasing distance downstream as $r_{t}\propto
Ri_{v}^{-\frac{1}{6}}z^{-\frac{1}{2}}$, with a velocity field that, although
oppositely directed, decays in the same manner as a Stokeslet for $Pe=\infty$;
the jet is screened by diffusion beyond a length scale of
$O(Ri_{v}^{-\frac{1}{6}}Pe^{\frac{1}{2}})$ for large but finite $Pe$. The
recent effort of Shaik & Ardekani (2020b) has investigated the flow pattern
due to a particle settling in the aforementioned convection dominant limit,
based on a numerical fast Fourier transform technique. Although the primary
emphasis was on calculating the drift volume, their examination of the fluid
motion shows the existence of a strong reverse flow along the rear stagnation
streamline, consistent with our findings. Finally, in section 3.3, we comment
briefly on the role of weak inertial (and convection) effects on the structure
of the fluid motion beyond the primary buoyancy-induced screening length.
The fore-aft asymmetry of the large-$Pe$ disturbance velocity field found here
has implications for pair-interactions. A vertically oriented particle-pair
will experience a repulsive interaction for sufficiently large separations (on
scales of $O(Ri_{v}^{-\frac{1}{3}})$). This is in contrast to the Stokesian
scenario where the particle-pair separation remains invariant with time, a
fact that may be established using reversibility arguments, and may be seen
explicitly from the fore-aft symmetry of the Stokesian velocity field; note
that the fore-aft symmetry of the $Pe=0$ velocity field, obtained in section
3.1, implies that the particle-pair separation, in a stratified fluid, is
conserved to leading order in the diffusion dominant limit. For $Pe\gg 1$, the
aforementioned repulsive pair-interaction is initially controlled by the
greater magnitude of the velocity field along the front stagnation streamline; this is because the zero-crossing along the front stagnation streamline ($\approx
8.85Ri_{v}^{-\frac{1}{3}}$) occurs at a greater distance than that on the rear
stagnation streamline ($\approx 1.15Ri_{v}^{-\frac{1}{3}}$). However, for
distances a little beyond approximately $2Ri_{v}^{-\frac{1}{3}}$, the more
rapid $O(1/z^{7})$ decay of the disturbance velocity in front of the particle
implies that the repulsion is controlled by the slowly decaying $O(1/z)$
disturbance along the rear stagnation streamline. Succinctly put, the rear
particle pushes the one in front for smaller separations, while the opposite
is true at larger separations. The range of repulsion is limited to a length
scale of $O(Ri_{v}^{-\frac{1}{6}}Pe^{\frac{1}{2}})$ by the effects of
diffusion. This repulsive behavior is the opposite of the drafting behavior
known for homogeneous fluids at finite $Re$.
### 4.2 Discussion: the inner-region scaling estimates
It was indicated in the introduction that the validity of a linearized approximation is not obvious at large $Pe$, given that the ambient iso-pycnals
in the inner region are severely distorted by the sphere velocity field. An
examination of the density disturbance in the inner region for large $Pe$
should help identify possible restrictions on the results obtained in the
manuscript, and a few comments in this regard are in order. We begin with the
simpler case of small $Pe$ when the density perturbation around the sphere, on
length scales of $O(a)$ (the inner region), remains finite at all times. The
no-flux condition on the sphere surface causes the ambient iso-pycnals to
tilt, so as to meet the sphere in a normal orientation. This tilting effect is
significant in a region of $O(a^{3})$, implying that the associated density
perturbation is $O(\gamma a)$. The resulting baroclinically induced vorticity
drives a flow of $O(\gamma a^{3}g/\mu)$, or $O(Ri_{v})$ in non-dimensional
terms (scaled by $U$; see Varanasi et al. (2021)). For $Ri_{v}\ll 1$, this
weak flow may evidently be neglected compared to the primary Stokesian field.
On larger length scales, convection of the base-state stratification by the
perturbation Stokeslet field leads to a density perturbation that grows as
$O(Pe\,r)$ in the inner region. The buoyancy forcing due to this density
perturbation becomes comparable to viscous forces on length scales of
$O[(Ri_{v}Pe)^{-\frac{1}{4}}]$, the small-$Pe$ stratification screening length first identified by List (1971) and Ardekani & Stocker
(2010). Importantly, for small $Pe$, the Stokesian flow remains a valid
leading order approximation in the inner region for all times.
For large $Pe$, the density perturbation in the inner region can become much
larger than the nominal $O(\gamma a)$ estimate above. This may be seen by
considering the limiting case of $Pe=\infty$, when the iso-pycnals are
affinely convected by the sphere velocity field. The sphere, as it settles
through the stably stratified medium, entrains positively buoyant fluid in a
quasi-spherical annular region that extends behind it in a narrow wake that
lengthens with time. The amplitude of the density perturbation near the sphere
increases linearly with time as $O(\gamma Ut)$, leading to a buoyancy forcing
per unit volume of $O(\gamma Utg)$. Clearly, for large enough times, this
buoyancy forcing will become comparable to the viscous terms even in the inner
region, and for $Ri_{v}\ll 1$. Since the viscous terms in the equations of
motion are $O(\frac{\mu U}{a^{2}})$ in the inner region, the threshold time at
which buoyancy forces are of a comparable magnitude is $O(\frac{\mu}{\gamma
a^{2}g})$, or $O(\frac{a}{U}Ri_{v}^{-1})$. This is therefore the time at which
the flow in the inner region must deviate from the leading Stokesian
approximation on account of buoyancy forces; as mentioned in the introduction,
it is still possible for the structure of the fluid motion to remain similar
to that detailed in this manuscript, but for a buoyancy-induced
renormalization of the force exerted by the particle, although only a detailed
examination of the inner region would confirm this. Moving to the outer
region, in the Stokes stratification regime, the time scale associated with
the development of the flow field in this region may be estimated as the time
required for momentum to diffuse to a distance of $O(aRi_{v}^{-1/3})$, which
is $O(\frac{a^{2}}{\nu}Ri_{v}^{-2/3})$. The ratio of this latter time to the
time scale estimated above, for the inner region to depart from a homogeneous
Stokesian evolution, is $O(ReRi_{v}^{1/3})$, and therefore, asymptotically
small for $Re$, $Ri_{v}\ll 1$. Thus, there is an asymptotically long interval
of time corresponding to $\frac{a^{2}}{\nu}Ri_{v}^{-2/3}\ll
t\ll\frac{a}{U}Ri_{v}^{-1}$, where one has a quasi-steady response in the
outer region, with the motion in the inner region still governed by the Stokes
equations at leading order. The findings with regard to the nature of the
fluid motion, detailed in section 3.2, are certainly valid in this time
period. Note that for any finite $Pe$, however large, the distortion of the
isopycnals will not continue indefinitely. Instead, there will eventually be a
steady state boundary layer, of thickness $O(aPe^{-\frac{1}{3}})$, as far as
the density gradient is concerned (although not for the density itself which
will continue to increase with time for an assumed constant $U$).
Scaling arguments similar to those in the preceding paragraph may also be used
to assess the possibility of observing quasi-steady dynamics on scales beyond
the primary screening length, and thereby, examine the relevance of the wake-
scaling regimes sketched in section 3.3; see figures 13 and 14. Focusing on
the Stokes stratification regime for large $Pe$, the arguments in section 3.3
pointed to a secondary screening length of
$O(aRe^{-\frac{5}{2}}Ri_{v}^{\frac{1}{2}})$ across which the dominant balance
shifted from one between buoyancy and viscous forces to one between buoyancy
and inertial forces. Given that the inertial forces enter the dominant
balance, the time scale for a quasi-steady wake to be established on the
aforementioned secondary screening length may be estimated as
$\frac{aRe^{-\frac{5}{2}}Ri_{v}^{\frac{1}{2}}}{U}$. The ratio of this time
scale to $\frac{aRi_{v}^{-1}}{U}$ gives us
$Re^{-\frac{5}{2}}Ri_{v}^{\frac{3}{2}}$, with this ratio needing to be much
less than unity in order for a quasi-steady analysis of the fluid motion to
hold; this yields $Re\gg Ri_{v}^{\frac{3}{5}}$. Combining this with the
primary criterion for the large-$Pe$ Stokes stratification regime gives
$Ri_{v}^{\frac{3}{5}}\ll Re\ll Ri_{v}^{\frac{1}{3}}$ for the dynamics in both
the primary and secondary outer regions to be quasi-steady, in the time that
the inner region has a Stokesian character.
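A minimal numerical restatement of these criteria (our check, with arbitrary sample parameter values) verifies the quasi-steady window for a few choices of $Re$ at fixed $Ri_{v}$:

```python
# Sketch (ours): check the quasi-steady criteria derived above for sample values.
# t_outer/t_buoy = Re * Ri_v^(1/3)  (outer-region momentum-diffusion time over
# the buoyancy-departure time of the inner region), and the secondary-region
# condition Re^(-5/2) * Ri_v^(3/2) << 1, i.e. Re >> Ri_v^(3/5).

Ri_v = 1e-6
for Re in (1e-4, 1e-3, 3e-3):
    outer_over_buoy = Re * Ri_v ** (1 / 3)
    secondary_ratio = Re ** (-2.5) * Ri_v ** 1.5
    in_window = Ri_v ** 0.6 < Re < Ri_v ** (1 / 3)
    print(f"Re={Re:.0e}: t_outer/t_buoy={outer_over_buoy:.1e}, "
          f"secondary ratio={secondary_ratio:.1e}, "
          f"Ri_v^(3/5) << Re << Ri_v^(1/3): {in_window}")
```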
### 4.3 Discussion: the drift volume scaling estimates
We now turn to the drift volume estimate for a sphere settling in a density-
stratified fluid which, as mentioned in the introduction, was one of the
motivations for the analysis in this paper. The rapid algebraic decay of the
far-field velocity disturbance, induced by buoyancy forces, implies that the
drift volume (${\mathcal{D}}$) will be finite in presence of an ambient
stratification, as originally argued by Subramanian (2010). More precise
estimates for ${\mathcal{D}}$ as a function of $Ri_{v}$ and $Re$, in the
Stokes and inertia-stratification regimes, are obtained below. For the
homogeneous Stokesian scenario, the $O(1/r)$ decay of the disturbance field
implies a divergent drift volume for any finite time. As originally shown by
Eames et al. (2003), it therefore becomes necessary to define a partial drift
volume (${\mathcal{D}}_{p}$) where, in contrast to Darwin (1953), one only
considers an initial material plane of a finite spatial extent. In a recent
effort, Chisholm & Khair (2017) have shown that, at leading order in $a/h$,
${\mathcal{D}}_{p}\sim ah^{2}\sinh^{-1}(Ut/h)$, $t$ and $h$ here being the
time and radius of the aforementioned material plane, respectively; the
$h$-scaling clearly points to the finite-time divergence of
${\mathcal{D}}\,(=\lim_{h\rightarrow\infty}{\mathcal{D}}_{p}$) in the
Stokesian limit. In the limits $Ut/h\ll 1$ and $Ut/h\gg 1$, the authors find
${\mathcal{D}}_{p}$ to be $O(ahUt)$ and $O[ah^{2}\ln(Ut/h)]$, respectively.
These scalings may be readily obtained without a detailed calculation: for
$Ut\ll h$, the flux through the original plane is independent of time, and due
to the $U$-component of the Stokesian field, in the transverse plane
containing the sphere. This component is $3Ua/(4r_{t})$, and the flux through
a circular section of radius $h$ is therefore given by
$\textstyle\int_{0}^{h}3U(a/4r_{t})2\pi r_{t}dr_{t}\approx(3\pi/2)Uah$,
implying ${\mathcal{D}}_{p}\approx(3\pi/2)Uaht$; here, the lower limit of the
integral is taken to be $0$ since the leading contribution comes from $r_{t}$
of $O(h)$ (this is also the reason why a Stokeslet approximation suffices for
the leading order estimate). In the long-time limit of interest, when the
distance of the material plane from the sphere is much larger than its radial
extent, the flux is primarily due to the velocity $u_{z}\approx 3Ua/(2z)$ along
the rear stagnation streamline. The drift displacement due to this disturbance
velocity field may be estimated as
$\textstyle\int^{t}dt\,u_{z}=\textstyle\int^{Ut}(dz^{\prime}/U)u_{z}=\textstyle\int^{Ut}dz^{\prime}(3a/2z^{\prime})\sim(3a/2)\ln(Ut)$,
and is logarithmically divergent in time. A subtle point here is with regard
to the argument of the logarithm; the approximate estimate above gives a
dimensional argument for the logarithm, and one needs an additional length
with respect to which $Ut$ in the logarithm is measured. Although an obvious
choice would be $a$, the correct choice is $h$ (as also evident from the exact
result above), and this is because the onset of the logarithmic divergence is
dependent on the transverse radial location of the fluid element. The
decreasing magnitude of the disturbance field implies that it takes a
progressively longer time for an element, further off from the translation
axis, to be displaced through a distance of $O(a)$; evidently, the logarithmic
divergence in time can only begin after the drift displacement has attained a
magnitude of $O(a)$. For an element at a transverse distance of $O(h)$, the
scales that contribute dominantly to ${\mathcal{D}}_{p}$, this time is
$O(h/U)$, implying that the argument of the logarithm, in the expression for
the drift displacement above, should be $t/(h/U)$; multiplication by $\pi
h^{2}$ gives the estimate
${\mathcal{D}}_{p}\approx\frac{3\pi}{2}ah^{2}\ln(Ut/h)$.
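Both limiting estimates are easily confirmed numerically; the sketch below (our check, with unit values for $U$ and $a$ and an arbitrary plane radius $h$) evaluates the short-time flux integral and the long-time drift-displacement integral:

```python
# Sketch (ours): numerical confirmation of the two partial-drift-volume
# estimates above, using unit U and a and an arbitrary plane radius h.
import numpy as np
from scipy.integrate import quad

U, a, h = 1.0, 1.0, 100.0

# Short-time limit: flux of u_z = 3Ua/(4 r_t) through a disc of radius h.
flux, _ = quad(lambda rt: 3 * U * a / (4 * rt) * 2 * np.pi * rt, 0.0, h)
print(flux, 1.5 * np.pi * U * a * h)        # both (3*pi/2) U a h

# Long-time limit: drift from u_z ~ 3Ua/(2z) along the rear stagnation
# streamline, with the logarithm cut off below z ~ h as argued above.
t = 1e4
drift, _ = quad(lambda z: 3 * a / (2 * z), h, U * t)
print(drift, 1.5 * a * np.log(U * t / h))   # both (3a/2) ln(Ut/h)
```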
In the Stokes-stratification regime, one expects the dominant contribution to
the drift volume to come from the range $h\sim l_{c}$, $l_{c}$ being the
relevant stratification screening length; $l_{c}\sim
O[(Ri_{v}Pe)^{-\frac{1}{4}}]$ for $Pe\ll 1$, and $O(Ri_{v}^{-\frac{1}{3}})$
for $Pe\gg 1$. However, for elements at these distances (from the translation
axis), the drift displacement attains a magnitude of $O(a)$ only in the
$O(l_{c}/U)$ time taken for the sphere to translate through a screening
length. Since the velocity field decays faster for larger separations, there
cannot be the analog of the aforementioned logarithmic-in-time behavior, for
larger times, that occurred in the homogeneous case. This implies that
${\mathcal{D}}$ for the Stokes drift displacement can be obtained from the
aforementioned long-time estimate for the Stokesian case by replacing $h$ with
$l_{c}$, but removing the logarithm. One therefore obtains ${\mathcal{D}}\sim
O[a^{3}(Ri_{v}Pe)^{-\frac{1}{2}}]$ and $O(a^{3}Ri_{v}^{-\frac{2}{3}})$, for
small and large $Pe$, in the Stokes stratification regime, the latter estimate
being relevant to the oceanic scenario (Katija & Dabiri, 2009; Subramanian,
2010); both estimates diverge in the limit of a homogeneous ambient
($Ri_{v}\rightarrow 0$), as must be the case. The numerical pre-factors in
these estimates would require a detailed calculation of the drift
displacements on length scales of order the stratification screening length.
Note that fluid elements that start off at distances of $h\ll l_{c}$ from the
translation axis will suffer drift displacements of $O(a\ln
Ri_{v}^{-\frac{1}{3}})$), and one therefore expects higher-order terms involving logarithms in a small-$Ri_{v}$ expansion of ${\mathcal{D}}$ in the
limit $Re\ll Ri_{v}^{\frac{1}{3}}\ll 1$. Recent efforts by Shaik & Ardekani
(2020a) and Shaik & Ardekani (2020b) have obtained ${\mathcal{D}}_{p}$
numerically, in both the small and large $Pe$ limits. Consistent with the
results of Chisholm & Khair (2017), ${\mathcal{D}}_{p}$ exhibits an $O(h^{2})$
scaling with the radial extent of the material plane under consideration.
Importantly, however, the scaling arguments above imply that this algebraic
divergence must be cut off once $h\sim O(l_{c})$. A more detailed examination of pathlines, together with a drift-volume calculation supporting the scaling arguments in this paragraph, will be reported in a separate communication.
In the inertia-stratification regime ($Ri_{v}\ll Re\ll 1$), discussed briefly
towards the end of section 3.3, the disturbance velocity field attains the
familiar source-sink structure on length scales larger than the primary
(inertial) screening length of $O(aRe^{-1})$ (Batchelor, 1967). It is well
known that the presence of a viscous wake leads to ${\mathcal{D}}$ diverging
linearly in time for the homogeneous scenario (Subramanian, 2010; Chisholm &
Khair, 2017). This divergence is readily seen from the constant flux through a
fixed plane driven by the viscous wake. This flux is given by
$u_{z}(r_{t}^{wake})^{2}$, where $r_{t}^{wake}\sim(az/Re)^{\frac{1}{2}}$, and
is $O(Ua^{2}/Re)$, leading to ${\mathcal{D}}\sim(Ua^{2}/Re)t$ for the
homogeneous case. For the stratified case, and for $Pe\gg 1$, this viscous
wake only persists until the secondary screening length of $O[a(Re/Ri_{v})^{\frac{1}{2}}]$ obtained in section 3.3, and therefore the linear divergence above will be cut off for $t\sim O[\frac{a}{U}(Re/Ri_{v})^{\frac{1}{2}}]$, when stratification forces screen
the wake velocity field, and one obtains ${\mathcal{D}}\sim
a^{3}(ReRi_{v})^{-\frac{1}{2}}$ in the inertia-stratification regime. Note
that this scaling is consistent with the scaling obtained above in the Stokes-
stratification regime, in that it reduces to $O(Ri_{v}^{-2/3})$ for
$Re=Ri_{v}^{1/3}$. In summary, for a fixed $Ri_{v}\ll 1$, ${\mathcal{D}}$
starts off being $O(a^{3}Ri_{v}^{-\frac{2}{3}})$ until an $Re$ of $O(Ri_{v}^{\frac{1}{3}})$, decreasing thereafter as
$O(a^{3}Re^{-\frac{1}{2}}Ri_{v}^{-\frac{1}{2}})$ for $Re\gg
Ri_{v}^{\frac{1}{3}}$.
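The composite behavior of ${\mathcal{D}}$ across the two regimes may be summarized by a simple piecewise scaling function; the sketch below (ours, with numerical prefactors omitted) verifies that the two estimates match at $Re=Ri_{v}^{1/3}$:

```python
# Sketch (ours): piecewise scaling of the drift volume D(Re; Ri_v) summarized
# above, with numerical prefactors omitted; a is the sphere radius.
def drift_volume_scaling(Re, Ri_v, a=1.0):
    if Re < Ri_v ** (1 / 3):                  # Stokes-stratification regime
        return a ** 3 * Ri_v ** (-2 / 3)
    return a ** 3 * (Re * Ri_v) ** (-0.5)     # inertia-stratification regime

Ri_v = 1e-6
Re_c = Ri_v ** (1 / 3)
print(drift_volume_scaling(0.99 * Re_c, Ri_v),   # ~ Ri_v^(-2/3) = 1e4
      drift_volume_scaling(1.01 * Re_c, Ri_v))   # matches at the crossover
```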
## Acknowledgements
Numerical computations reported here are carried out using the Param Yukti
facility provided under the National Supercomputing Mission, and the Nalanda-2
computational cluster available with JNCASR. The authors thank the institute
for providing these facilities.
## Appendix A The far-field wake velocity and density fields in the
diffusion-dominant ($Pe=0$) limit
Herein, we start with the expressions (20) and (21), where the approximation
of nearly horizontal motion has already been made. In a cylindrical coordinate
system aligned with the translation direction, and after carrying out the
$\phi$ integration, the expressions for the axial and transverse velocities,
and the density disturbance, reduce to:
$\bar{u}_{z}=\frac{-3}{2\pi}\int_{-\infty}^{\infty}dk_{3}\int_{0}^{\infty}dk_{t}\frac{k_{3}^{2}k_{t}^{3}J_{0}(k_{t}\bar{r}_{t})e^{ik_{3}\bar{z}}}{(k_{3}^{6}+k_{t}^{2})},$
(40)
$\bar{u}_{r_{t}}=\frac{3i}{2\pi}\int_{-\infty}^{\infty}dk_{3}\int_{0}^{\infty}dk_{t}\frac{k_{3}^{3}k_{t}^{2}J_{1}(k_{t}\bar{r}_{t})e^{ik_{3}\bar{z}}}{(k_{3}^{6}+k_{t}^{2})},$
(41)
$\bar{\rho}_{f_{1}}=\frac{-3}{2\pi}\int_{-\infty}^{\infty}dk_{3}\int_{0}^{\infty}dk_{t}\frac{k_{t}^{3}J_{0}(k_{t}\bar{r}_{t})e^{ik_{3}\bar{z}}}{(k_{3}^{6}+k_{t}^{2})}$
(42)
Next, one uses contour integration to evaluate the $k_{3}$-integral.
Contributions arise from the existence of six poles in the complex-$k_{3}$
plane, with these poles being symmetrically disposed about the real
$k_{3}$-axis, consistent with the fore-aft symmetry of the disturbance fields.
The contour integration yields the following one-dimensional integrals:
$\bar{u}_{z}=-3i\int_{0}^{\infty}k_{t}^{2}J_{0}(k_{t}\bar{r}_{t})\left(lq_{1}^{2}e^{iq_{1}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}+mq_{2}^{2}e^{iq_{2}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}+nq_{3}^{2}e^{iq_{3}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}\right)dk_{t},$
(43)
$\bar{u}_{r_{t}}=-3\operatorname{sgn}(\bar{z})\int_{0}^{\infty}k_{t}^{\frac{4}{3}}J_{1}(k_{t}\bar{r}_{t})\left(lq_{1}^{3}e^{iq_{1}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}+mq_{2}^{3}e^{iq_{2}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}+nq_{3}^{3}e^{iq_{3}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}\right)dk_{t},$
(44)
$\bar{\rho}_{f_{1}}=-3i\int_{0}^{\infty}k_{t}^{\frac{4}{3}}J_{0}(k_{t}\bar{r}_{t})\left(le^{iq_{1}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}+me^{iq_{2}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}+ne^{iq_{3}k_{t}^{\frac{1}{3}}\lvert\bar{z}\rvert}\right)dk_{t}.$
(45)
Here $q_{1}$, $q_{2}$, $q_{3}$, $l$, $m$ and $n$ are complex-valued constants
given by:
$\displaystyle[q_{1},q_{2},q_{3},q_{4},q_{5},q_{6}]=[e^{\frac{\pi
i}{6}},e^{\frac{\pi i}{2}},e^{\frac{5\pi i}{6}},e^{\frac{7\pi
i}{6}},e^{\frac{9\pi i}{6}},e^{\frac{11\pi i}{6}}],$ $\displaystyle
l=\frac{1}{(q_{1}-q_{2})(q_{1}-q_{3})(q_{1}-q_{4})(q_{1}-q_{5})(q_{1}-q_{6})},$
$\displaystyle
m=\frac{1}{(q_{2}-q_{1})(q_{2}-q_{3})(q_{2}-q_{4})(q_{2}-q_{5})(q_{2}-q_{6})},$
$\displaystyle
n=\frac{1}{(q_{3}-q_{1})(q_{3}-q_{2})(q_{3}-q_{4})(q_{3}-q_{5})(q_{3}-q_{6})}.$
Setting $k_{t}\bar{r}_{t}=p$ as the integration variable, and using $\eta=\frac{\bar{z}}{\bar{r}_{t}^{\frac{1}{3}}}$, with some simplification, yields (22), (23) and (24).
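The constants above are readily checked numerically; the short sketch below (ours) constructs the six roots of $q^{6}=-1$ and verifies $l$, $m$ and $n$ against the residue identity $1/\prod_{k\neq j}(q_{j}-q_{k})=1/(6q_{j}^{5})$ for the poles of $(q^{6}+1)^{-1}$:

```python
# Sketch (ours): numerical check of the constants q_j and l, m, n above.
import numpy as np

q = np.exp(1j * np.pi * np.arange(1, 12, 2) / 6)  # q_1..q_6, roots of q^6 = -1
assert np.allclose(q ** 6, -1.0)

def coeff(j):
    # 1 / prod_{k != j} (q_j - q_k), as in the definitions of l, m, n
    return 1.0 / np.prod(q[j] - np.delete(q, j))

l, m, n = coeff(0), coeff(1), coeff(2)
for j, c in enumerate((l, m, n)):
    # residue identity: prod_{k != j}(q_j - q_k) = d/dq (q^6 + 1) at q_j = 6 q_j^5
    assert np.isclose(c, 1.0 / (6 * q[j] ** 5))
print(l, m, n)
```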
## Appendix B The far-field wake velocity and density fields in the
convection-dominant ($Pe\rightarrow\infty$ or $\beta_{\infty}\rightarrow 0$)
limit
Herein, we start with the expressions appropriate for the far-field wake,
given by (20) and (21), where the approximation of nearly horizontal motion
has already been made. In a cylindrical coordinate system aligned with the
translation direction, and after carrying out the $\phi$ integration, the
expressions for the axial and transverse velocities, and the density
disturbance, reduce to:
$\bar{u}_{z}=\frac{-3i}{2\pi}\int_{-\infty}^{\infty}dk_{3}\int_{0}^{\infty}dk_{t}\frac{k_{3}k_{t}^{3}J_{0}(k_{t}\bar{r}_{t})e^{ik_{3}\bar{z}}}{(ik_{3}^{5}+k_{t}^{2})}$
(46)
$\bar{u}_{r_{t}}=\frac{3i}{2\pi}\int_{-\infty}^{\infty}dk_{3}\int_{0}^{\infty}dk_{t}\frac{k_{3}^{2}k_{t}^{2}J_{1}(k_{t}\bar{r}_{t})e^{ik_{3}\bar{z}}}{(ik_{3}^{5}+k_{t}^{2})}$
(47)
$\bar{\rho}_{f_{1}}=\frac{-3}{2\pi}\int_{-\infty}^{\infty}dk_{3}\int_{0}^{\infty}dk_{t}\frac{k_{t}^{3}J_{0}(k_{t}\bar{r}_{t})e^{ik_{3}\bar{z}}}{(ik_{3}^{5}+k_{t}^{2})}$
(48)
As for the diffusion dominant case analyzed in appendix A, the next step is to
evaluate the $k_{3}$-integral using contour integration. There now exist five
poles in the complex-$k_{3}$ plane with two poles in the lower half and three
poles in the upper half of the complex plane; the differing number of poles in
the two halves of the plane translates to fore-aft asymmetry of the axial
velocity and density disturbance fields. The residue integration then yields
the following one-dimensional integrals for positive and negative $\tilde{z}$:
$\displaystyle\tilde{u}_{z}$
$\displaystyle=-3i\int_{0}^{\infty}k_{t}^{\frac{9}{5}}J_{0}[k_{t}r_{t}][Q_{1}q_{1}e^{iq_{1}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{2}q_{2}e^{iq_{2}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{3}q_{3}e^{iq_{3}k_{t}^{\frac{2}{5}}\tilde{z}}]dk_{t}\textrm{
}(\textrm{for }\tilde{z}>0)$
$\displaystyle=3i\int_{0}^{\infty}k_{t}^{\frac{9}{5}}J_{0}[k_{t}r_{t}][Q_{4}q_{4}e^{iq_{4}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{5}q_{5}e^{iq_{5}k_{t}^{\frac{2}{5}}\tilde{z}}]dk_{t}\textrm{
}(\textrm{for }\tilde{z}<0)$ (49) $\displaystyle\tilde{u}_{r_{t}}$
$\displaystyle=-3\int_{0}^{\infty}k_{t}^{\frac{6}{5}}J_{1}[k_{t}r_{t}][Q_{1}q_{1}^{2}e^{iq_{1}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{2}q_{2}^{2}e^{iq_{2}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{3}q_{3}^{2}e^{iq_{3}k_{t}^{\frac{2}{5}}\tilde{z}}]dk_{t}\textrm{
}(\textrm{for }\tilde{z}>0)$
$\displaystyle=3\int_{0}^{\infty}k_{t}^{\frac{6}{5}}J_{1}[k_{t}r_{t}][Q_{4}q_{4}^{2}e^{iq_{4}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{5}q_{5}^{2}e^{iq_{5}k_{t}^{\frac{2}{5}}\tilde{z}}]dk_{t}\textrm{
}(\textrm{for }\tilde{z}<0)$ (50) $\displaystyle\tilde{\rho}_{f}$
$\displaystyle=-3\int_{0}^{\infty}k_{t}^{\frac{7}{5}}J_{0}[k_{t}r_{t}][Q_{1}e^{iq_{1}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{2}e^{iq_{2}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{3}e^{iq_{3}k_{t}^{\frac{2}{5}}\tilde{z}}]dk_{t}\textrm{
}(\textrm{for }\tilde{z}>0)$
$\displaystyle=3\int_{0}^{\infty}k_{t}^{\frac{7}{5}}J_{0}[k_{t}r_{t}][Q_{4}e^{iq_{4}k_{t}^{\frac{2}{5}}\tilde{z}}+Q_{5}e^{iq_{5}k_{t}^{\frac{2}{5}}\tilde{z}}]dk_{t}\textrm{
}(\textrm{for }\tilde{z}<0)$ (51)
Here $q_{1}$, $q_{2}$, $q_{3}$, $q_{4}$, $q_{5}$, $Q_{1}$, $Q_{2}$, $Q_{3}$,
$Q_{4}$ and $Q_{5}$ are complex-valued constants given by:
$\displaystyle[q_{1},q_{2},q_{3},q_{4},q_{5}]=[e^{\frac{\pi
i}{10}},e^{\frac{\pi i}{2}},e^{\frac{9\pi i}{10}},e^{-\frac{7\pi
i}{10}},e^{-\frac{3\pi i}{10}}],$ $\displaystyle
Q_{1}=\frac{1}{(q_{1}-q_{2})(q_{1}-q_{3})(q_{1}-q_{4})(q_{1}-q_{5})},$
$\displaystyle
Q_{2}=\frac{1}{(q_{2}-q_{1})(q_{2}-q_{3})(q_{2}-q_{4})(q_{2}-q_{5})},$
$\displaystyle
Q_{3}=\frac{1}{(q_{3}-q_{1})(q_{3}-q_{2})(q_{3}-q_{4})(q_{3}-q_{5})},$
$\displaystyle
Q_{4}=\frac{1}{(q_{4}-q_{1})(q_{4}-q_{2})(q_{4}-q_{3})(q_{4}-q_{5})},$
$\displaystyle
Q_{5}=\frac{1}{(q_{5}-q_{1})(q_{5}-q_{2})(q_{5}-q_{3})(q_{5}-q_{4})}.$
Setting $k_{t}\tilde{r}_{t}=p$ as the integration variable, and using $\eta=\frac{\tilde{z}}{\tilde{r}_{t}^{\frac{2}{5}}}$, yields the similarity forms given in section 3.2.1.
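As in appendix A, the constants may be verified numerically; the sketch below (ours) checks that the listed $q_{j}$ satisfy $q^{5}=i$ (the poles of $(\mathrm{i}k_{3}^{5}+k_{t}^{2})^{-1}$, scaled by $k_{t}^{2/5}$), verifies the $Q_{j}$ against the residue identity for $(q^{5}-i)^{-1}$, and confirms the three-up/two-down pole split responsible for the fore-aft asymmetry:

```python
# Sketch (ours): numerical check of the constants q_j and Q_j above.
import numpy as np

angles = np.array([1, 5, 9, -7, -3]) * np.pi / 10
q = np.exp(1j * angles)                 # q_1..q_5 as listed above
assert np.allclose(q ** 5, 1j)          # roots of q^5 = i

def Q(j):
    # 1 / prod_{k != j} (q_j - q_k), as in the definitions of Q_1..Q_5
    return 1.0 / np.prod(q[j] - np.delete(q, j))

for j in range(5):
    # residue identity: prod_{k != j}(q_j - q_k) = d/dq (q^5 - i) at q_j = 5 q_j^4
    assert np.isclose(Q(j), 1.0 / (5 * q[j] ** 4))
print("poles in upper half-plane:", int(np.sum(q.imag > 0)),
      "| lower half-plane:", int(np.sum(q.imag < 0)))   # 3 and 2
```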
## References
* Anand et al. (2020) Anand, Prateek, Ray, Samriddhi Sankar & Subramanian, Ganesh 2020 Orientation dynamics of sedimenting anisotropic particles in turbulence. Phys. Rev. Lett. 125, 034501.
* Ardekani & Stocker (2010) Ardekani, A. M. & Stocker, R. 2010 Stratlets: Low Reynolds Number Point-Force Solutions in a Stratified Fluid. Physical Review Letters 105 (8), 084502.
* Batchelor (1967) Batchelor, G. K. 1967 An Introduction to Fluid Dynamics. Cambridge Mathematical Library. Cambridge University Press.
* Childress (1964) Childress, Stephen 1964 The slow motion of a sphere in a rotating, viscous fluid. Journal of Fluid Mechanics 20 (2), 305–314.
* Chisholm & Khair (2017) Chisholm, Nicholas G. & Khair, Aditya S. 2017 Drift volume in viscous flows. Physical Review Fluids 2, 064101.
* Cox (1965) Cox, R. G. 1965 The steady motion of a particle of arbitrary shape at small reynolds numbers. Journal of Fluid Mechanics 23 (4), 625–643.
* Dabade et al. (2015) Dabade, Vivekanand, Marath, Navaneeth K. & Subramanian, Ganesh 2015 Effects of inertia and viscoelasticity on sedimenting anisotropic particles. Journal of Fluid Mechanics 778, 133–188.
* Dandekar et al. (2020) Dandekar, Rajat, Shaik, Vaseem A. & Ardekani, Arezoo M. 2020 Motion of an arbitrarily shaped particle in a density stratified fluid. Journal of Fluid Mechanics 890, A16.
* Darwin (1953) Darwin, Charles 1953 Note on hydrodynamics. Mathematical Proceedings of the Cambridge Philosophical Society 49 (2), 342–354.
* Doostmohammadi & Ardekani (2014) Doostmohammadi, A. & Ardekani, A. M. 2014 Reorientation of elongated particles at density interfaces. Phys. Rev. E 90, 033013.
* Doostmohammadi et al. (2012) Doostmohammadi, Amin, Stocker, Roman & Ardekani, Arezoo M. 2012 Low-reynolds-number swimming at pycnoclines. Proceedings of the National Academy of Sciences 109 (10), 3856–3861.
* Eames et al. (2003) Eames, I., Gobby, D. & Dalziel, S. B. 2003 Fluid displacement by stokes flow past a spherical droplet. Journal of Fluid Mechanics 485, 67–85.
* Eames & Hunt (1997) Eames, I. & Hunt, J. C. R. 1997 Inviscid flow around bodies moving in weak density gradients without buoyancy effects. Journal of Fluid Mechanics 353, 331–355.
* Fouxon & Leshansky (2014) Fouxon, Itzhak & Leshansky, Alexander 2014 Convective stability of turbulent Boussinesq flow in the dissipative range and flow around small particles. Physical Review E 90, 053002.
* Hanazaki et al. (2009) Hanazaki, H., Kashimoto, K. & Okamura, T. 2009 Jets generated by a sphere moving vertically in a stratified fluid. Journal of Fluid Mechanics 638, 173–197.
* Katija & Dabiri (2009) Katija, Kakani & Dabiri, John O. 2009 A viscosity-enhanced mechanism for biogenic ocean mixing. Nature 460 (7255), 624–626.
* Kunze et al. (2006) Kunze, Eric, Dower, John F., Beveridge, Ian, Dewey, Richard & Bartlett, Kevin P. 2006 Observations of biologically generated turbulence in a coastal inlet. Science 313 (5794), 1768–1770.
* Leal (2007) Leal, L. Gary 2007 Advanced Transport Phenomena: Fluid Mechanics and Convective Transport Processes. Cambridge Series in Chemical Engineering. Cambridge University Press.
* Lighthill (1956) Lighthill, M. J. 1956 Drift. Journal of Fluid Mechanics 1 (1), 31–53.
* List (1971) List, E. J. 1971 Laminar momentum jets in a stratified fluid. Journal of Fluid Mechanics 45 (3), 561–574.
* Magnaudet & Mercier (2020) Magnaudet, Jacques & Mercier, Matthieu J. 2020 Particles, Drops, and Bubbles Moving Across Sharp Interfaces and Stratified Layers. Annual Review of Fluid Mechanics 52 (1).
* Martin et al. (2020) Martin, Adrian, Boyd, Philip, Buesseler, Ken, Cetinic, Ivona, Claustre, Herve, Giering, Sari, Henson, Stephanie, Irigoien, Xabier, Kriest, Iris, Memery, Laurent, Robinson, Carol, Saba, Grace, Sanders, Richard, Siegel, David, Villa-Alfageme, María & Guidi, Lionel 2020 The oceans’ twilight zone must be studied now, before it is too late. Nature 580 (7801), 26–28.
* Mehaddi et al. (2018) Mehaddi, R., Candelier, F. & Mehlig, B. 2018 Inertial drag on a sphere settling in a stratified fluid. Journal of Fluid Mechanics 855, 1074–1087.
* Mercier et al. (2020) Mercier, M. J., Wang, S., Péméja, J., Ern, P. & Ardekani, A. M. 2020 Settling disks in a linearly stratified fluid. Journal of Fluid Mechanics 885, A2.
* Mrokowska (2018) Mrokowska, M. M. 2018 Stratification-induced reorientation of disk settling through ambient density transition. Scientific Reports 8, 412.
* Mrokowska (2020a) Mrokowska, M. M. 2020a Dynamics of thin disk settling in two-layered fluid with density transition. Acta Geophys 68, 1145–1160.
* Mrokowska (2020b) Mrokowska, M. M. 2020b Influence of pycnocline on settling behaviour of non-spherical particle and wake evolution. Scientific Reports 10, 20595.
* Munk (1966) Munk, Walter H. 1966 Abyssal recipes. Deep Sea Research and Oceanographic Abstracts 13 (4), 707–730.
* Saffman (1965) Saffman, P. G. 1965 The lift on a small sphere in a slow shear flow. Journal of Fluid Mechanics 22 (2), 385–400.
* Shaik & Ardekani (2020a) Shaik, Vaseem A. & Ardekani, Arezoo M. 2020a Drag, deformation, and drift volume associated with a drop rising in a density stratified fluid. Phys. Rev. Fluids 5, 013604.
* Shaik & Ardekani (2020b) Shaik, Vaseem A. & Ardekani, Arezoo M. 2020b Far-field flow and drift due to particles and organisms in density-stratified fluids. Phys. Rev. E 102, 063106.
* Subramanian (2010) Subramanian, G. 2010 Viscosity-enhanced bio-mixing of the oceans. Current Science 98, 1103.
* Turner (1979) Turner, J.S. 1979 Buoyancy Effects in Fluids. Cambridge Monographs on Mechanics. Cambridge University Press.
* Varanasi et al. (2021) Varanasi, Arun Kumar, Marath, Navaneeth K. & Subramanian, Ganesh 2021 The rotation of a sedimenting anisotropic particle in a stratified fluid. Journal of Fluid Mechanics (submitted).
* Visser (2007) Visser, André W. 2007 Biomixing of the oceans? Science 316 (5826), 838–839.
* Vladimirov & Il'in (1991) Vladimirov, V.A. & Il'in, K.I. 1991 Slow motions of a solid in a continuously stratified fluid. J. Appl. Mech. Tech. Phys. 32, 194–200.
* Wagner et al. (2014) Wagner, Gregory L., Young, William R. & Lauga, Eric 2014 Mixing by microorganisms in stratified fluids. Journal of Marine Research 72 (2), 47–72.
* Yick et al. (2009) Yick, King Yeung, Torres, Carlos R., Peacock, Thomas & Stocker, Roman 2009 Enhanced drag of a sphere settling in a stratified fluid at small reynolds numbers. Journal of Fluid Mechanics 632, 49–68.
* Zhang et al. (2019) Zhang, Jie, Mercier, Matthieu J. & Magnaudet, Jacques 2019 Core mechanisms of drag enhancement on bodies settling in a stratified fluid. Journal of Fluid Mechanics 875, 622–656.
* Zvirin & Chadwick (1975) Zvirin, Y. & Chadwick, R. S. 1975 Settling of an axially symmetric body in a viscous stratified fluid. International Journal of Multiphase Flow 1, 743–752.
# A theory of neural emulators
Catalin C. Mitelut
Forum Basiliense, University of Basel
Foresight Institute
<EMAIL_ADDRESS>
###### Abstract
A central goal in neuroscience is to provide explanations for how animal
nervous systems can generate actions and cognitive states such as
consciousness while artificial intelligence (AI) and machine learning (ML)
seek to provide models that are increasingly better at prediction. Despite
many decades of research we have made limited progress on providing
neuroscience explanations yet there is an increased use of AI and ML methods
in neuroscience for prediction of behavior and even cognitive states. Here we
propose neural emulators as circuit- and scale-independent predictive models of biological brain activity, and emulator theory (ET) as an alternative research paradigm in neuroscience. ET proposes that
predictive models trained solely on neural dynamics and behaviors can generate
systems functionally indistinguishable from their sources. That is, compared
to the biological organisms which they model, emulators may achieve
indistinguishable behavior and cognitive states - including consciousness -
without any mechanistic explanations. We posit ET via several conjectures,
discuss the nature of endogenous and exogenous activation of neural circuits,
and discuss neural causality of phenomenal states. ET provides the conceptual
and empirical framework for prediction-based models of neural dynamics and
behavior without explicit representations of idiosyncratically evolved nervous
systems.
## 1 Introduction
A central goal of neuroscience research is to provide explanatory models
marr1982vision ; churchland1992computational underlying brain function from
molecules and genetics to memory encoding, decision making and cognition
shen2022emergence . After more than a century of extensive research in anatomy
and physiology we understand anatomical connectivity better - but have made
limited progress in understanding function, especially at the whole-organism
scale, including how multi-scale dynamical interactions within the central
nervous system (CNS) give rise to our behaviors and phenomenal experiences
roland2023far . Our limited progress may be due to avoiding difficult
paradigms involving behavior krakauer2017neuroscience ;
humphries2017neuroscience , being disconnected from psychology
beste2021disconnected , or lacking large datasets and powerful models
markram2013seven . Recent controversies including possibly pursuing the wrong
approach in Alzheimer’s research for decades piller2022blots raise the
question whether seeking to provide completely mechanistic models of brain
function will ever succeed at the whole organisms level, help solve complex
diseases or explain capacities such as consciousness.
In parallel, over the past several decades deep machine learning (deep ML) and
artificial intelligence (AI) methods relying on black-box neural networks
(NNs) have created increasingly powerful predictive models that can achieve
superhuman game performance silver2016mastering , and improved language
translation popel2020transforming and language generation radford2019language
, among many other feats lecun2015deep ; alzubaidi2021review . While the value
of explanatory vs. predictive models has been debated in statistics for
decades Breiman2001 , it is also being debated in newer fields, e.g.
neuroscience bowers2023deep ; lin2023scientific and psychology
yarkoni2017choosing . Some neuroscience researchers have begun implementing
NNs for predictive modeling of neural systems richards2019deep ;
kietzmann2019deep showing they are as good or better than explanatory or
mechanistic models cichy2019deep including for decoding of spatial location
tampuu2019efficient , latent dynamics zhou2020learning or even inferring
single trial population dynamics pandarinath2018inferring .
While it is still uncertain how much predictive models will contribute to our understanding of the brain, large-scale neural datasets are increasingly common, and AI and ML methods are likely required to understand them. In fact,
the extraordinary predictive power of Large-Language-Models (LLMs) Openai2023
in the last few years provides some evidence that prediction-only models offer
some utility for scientific and knowledge dissemination. Additionally, given
their success some have even speculated whether future LLMs could experience
conscious states butlin2023consciousness . These and other debates have
contributed to an already existing interdisciplinary research field on
"machine" or "artificial" consciousness GAMEZ2008 . Deciding on the usefulness
of modern AI in neuroscience as well as their conceptual properties is
challenging as we need to resolve conceptual, computational and experimental
problems involving questions from AI, philosophy and neuroscience.
Scope of work: emulators as a novel type and use-case for predictive models.
Here we provide a novel framework for how predictive models can contribute to
neuroscience and whether such models could generate internal states such as
consciousness. We propose the concept of neural emulators (or "emulators" for
short): predictive (joint) models of animal behavior and whole-organism neural
dynamics generated from different spatio-temporal scaled data (Fig 1). In
their simplest formulation, emulators predict behavior and neural dynamics
based on historical neural and behavioral states and can be used to study the
informational content and structure of neural activity in specific areas,
across areas and with respect to behavior. In their most complete form,
emulators learn to model the (nearly) complete causal structure of neural-
behavior interactions and can generate outputs that are indistinguishable from
the behaviors and phenomenal states of the organisms they model.
Figure 1: Circuit- and scale-agnostic neural emulators. (a). Recording from a
rodent brain (light-blue) based on parcellation (cubes) of increasing
granularity (red-hue diamonds). (b). Neural time series from parcellation in
(a) (light blue) and behavior time series (green). (c). Emulators learn joint
probability of time-series in (b). (d). Behavior output of emulators is
increasingly similar to biological behavior as parcellation granularity
increases.
Contribution of work: emulators and emulator theory. We define emulators and
propose emulator theory (ET) as a theoretical framework for measuring and
replicating the capacities of biological organisms solely via predictive
models that do not require explicit mechanisms of how neural (sub)systems
interact. ET argues that neural dynamics and causality can be simulated or
artificially generated and proposes that emulators can capture all capacities
of biological brains including possibly conscious experiences. The scope of
our work and contributions are as follows:
1. 1.
provide a definition of neural emulators, describe the axioms and conjectures
of emulator theory (ET) and provide a description for constructing neural
emulators from neurophysiology datasets (Section 2 and Appendix C).
2. 2.
propose a theory of neural causality that explains how models trained solely on prediction - such as emulators - can generate indistinguishable behavior and first-person phenomenal states of biological organisms (Section 3 and Appendices B and D).
Relation to previous work. Our work is related to several research paradigms
including: machine consciousness, phenomenology, philosophy and physics of
causality, and deep ML methods for modeling time-series data. We discuss our
work in relation to previous research at length in Appendix E. Briefly, in
comparison to previous work, we propose a novel interpretation for predictive
models of neural activity that is independent of architecture or mechanistic
explanations of brain function.
## 2 A framework for neural emulators
Here we define the central components of emulator theory (ET) - a proposal for
how and why artificial (i.e. computational) systems trained to predict the
behavioral and cognitive states of animals can become increasingly accurate
copies, or "emulators", of such organisms. In particular, ET proposes that
models trained on sufficiently large datasets solely to predict behavior and
phenomenal states have the capacity to generate such states - even without
access to explanatory mechanisms.111We henceforth omit the terms ”sufficiently
large” datasets and discuss further in Appendix A. ET proposes this can occur
because both behaviors and phenomenal states are causal-path independent: i.e.
the neural states supporting behavior and consciousness do not depend on a
specific neuronal pathway - only on specific neuronal dynamics.
Below we provide a framing of ET relative to computational neuroscience
approaches, describe the causal-path independence conjecture and provide a
practical approach to constructing emulators.
### 2.1 Common assumptions in explanatory models of behavior and neural
dynamics
A central goal for computational and behavior neuroscience research is to
identify explanatory mechanisms for how the nervous systems of animals can
generate behaviors and cognition. We generalize these approaches as seeking
functional descriptions of (i) behaviors and (ii) neural dynamics (the distinction between the reportability and the presence of conscious states is a broad topic in philosophy of mind and psychology, e.g. "phenomenal" versus "access" consciousness naccache2018 ; here we use "reportable" states only as a simplified training label for our models, though this is not central to our argument and other labels would suffice):
$F_{t}^{b}(x_{i,t})=\mathrm{Observed\ behavior}$ (1)
$F_{t}^{c}(x_{i,t})=\mathrm{(Reportable)\ state\ of\ consciousness}$ (2)
with:
$x_{i,t}=Q(x_{i,t_{-1}},s_{t_{-1}})$ (3)
where $F_{t}^{b}$ and $F_{t}^{c}$ describe how behavior and conscious states,
respectively, are generated from $x_{i,t}$ which are measurements of neural
component $i$ (most often the spiking of a single neuron) at time $t$, and $Q$
describes the evolution of the neural components from endogenous (i.e.
$x_{i,t_{-1}}$) and exogenous inputs (or stimuli) $s_{t_{-1}}$. Thus, for
example, we want to show how neural activity in motor cortex
churchland2012neural and supporting areas SVOBODA201833 generates observable
body movements, even at the single trial level pandarinath2018inferring , by
describing how neural states (e.g. spiking of populations of neurons) generate
observables such as location of the arm or hand of an animal or the location
of a rodent in an environment.
This framework is common in computational and theoretical neuroscience (though
it is not the only one) and makes several assumptions:
1. 1.
$Q$, $F_{t}^{b}$ and $F_{t}^{c}$ have a closed form expression that we can
eventually discover.
2. 2.
$Q$, $F_{t}^{b}$ and $F_{t}^{c}$ will describe necessary causal mechanisms
without which neither behaviors nor cognitive states can occur.
3. 3.
Identifying $Q$, $F_{t}^{b}$ and $F_{t}^{c}$ requires single neuron or lower
spatial scale neural data.
### 2.2 Emulators are mechanism- and spatial scale-independent predictive
models of behavior and neural dynamics
ET proposes to eliminate all three assumptions from building useful models of
the brain. In particular, ET propose that both behaviors and cognitive states
can be accurately predicted by models that do not represent or instantiate
$Q$, $F_{t}^{b}$ or $F_{t}^{c}$ and without an explicit spatio-temporal scale
of neural activity. Thus ET proposes that at a specific neural data
granularity $g$ we can build emulator $E$ to predict behaviors and conscious
states:
$E_{t}^{b}(w_{j,t,g})=\mathrm{Observed\ behavior}$ (4)
$E_{t}^{c}(w_{j,t,g})=\mathrm{(Reportable)\ state\ of\ consciousness}$ (5)
$w_{j,t,g}=R(w_{j,t_{-1},g},s_{t_{-1}})$ (6)
where $w_{j,t,g}$ is the state of parcel $j$ of a system sampled at
granularity $g$ and at time $t$ that evolves dynamically and $E_{t}^{b}$,
$E_{t}^{c}$ and $R$ are transformations learned from "big" data. Thus, for
example, emulators can be described simply by nested (black-box) neural-
network (NN) models:
$\mathrm{NN}_{2}(\mathrm{NN}_{1}(w_{j,t,g}))=\mathrm{Observed\ behavior}$ (7)
$\mathrm{NN_{3}}(\mathrm{NN_{1}}(w_{j,t,g}))=\mathrm{(Reportable)\ state\ of\
consciousness}$ (8)
where NN1, NN2 and NN3 stand for $R$, ${E_{t}^{b}}$, and ${E_{t}^{c}}$,
respectively.
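As an illustration only (this is our sketch, not an implementation from the paper; the GRU recurrence, layer sizes and all names are our assumptions), the nested formulation above can be written as a recurrent dynamics model with two readout heads:

```python
# Minimal sketch (ours) of equations (6)-(8): NN1 plays the role of R (parcel
# dynamics at a chosen granularity g), NN2 the behavior readout E^b, NN3 the
# reportable-state readout E^c. All architectures and sizes are illustrative.
import torch
import torch.nn as nn

class Emulator(nn.Module):
    def __init__(self, n_parcels=128, n_stim=8, n_behavior=4, hidden=256):
        super().__init__()
        self.dynamics = nn.GRUCell(n_parcels + n_stim, n_parcels)        # NN1 ~ R
        self.behavior = nn.Sequential(nn.Linear(n_parcels, hidden), nn.ReLU(),
                                      nn.Linear(hidden, n_behavior))     # NN2 ~ E^b
        self.report = nn.Sequential(nn.Linear(n_parcels, hidden), nn.ReLU(),
                                    nn.Linear(hidden, 1))                # NN3 ~ E^c

    def forward(self, w, stimuli):
        # w: (batch, n_parcels) parcel state; stimuli: (batch, T, n_stim)
        behaviors, reports = [], []
        for t in range(stimuli.shape[1]):
            w = self.dynamics(torch.cat([w, stimuli[:, t]], dim=-1), w)  # eq. (6)
            behaviors.append(self.behavior(w))                           # eq. (7)
            reports.append(self.report(w))                               # eq. (8)
        return w, torch.stack(behaviors, 1), torch.stack(reports, 1)
```

Training such a model would jointly regress the predicted parcel states and behaviors onto simultaneously recorded data (and the report head onto a reportable-state label), with capacity assessed on held-out recordings.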
In this framework, emulators are predictive models that are imperfect - i.e.
have imperfect prediction - but can improve their accuracy with increased
granularity of neural data and size of training datasets. This is similar to
many, if not most, ML approaches in data modeling. However, there are at least
two major differences. First, in the large dataset and high granularity limit
ET claims that emulators can generate phenomenal states even without an
explanation of how phenomenal states are caused. Second, and more simply, ET
claims that despite the extreme complexity of the brain, datasets generated in
practical experiments (e.g. finite-time laboratory neuroscience experiments)
are sufficient for building accurate emulators. In the remainder of this
section we address these two issues.
### 2.3 Emulators (must) model causality of phenomenal states
Emulators are more than (black-box) models that seek to predict behaviors or
conscious states - they are models that generate such outputs while modeling
the neural dynamics causing the outputs. In this sense, unlike general NNs fit
to data, emulators implicitly capture the idiosyncratically evolved neural
causal pathways by jointly modeling neural dynamics and observables.
But shouldn’t our models (predictive or mechanistic ones) first explain in
mechanistic terms the causal pathways for generating behaviors or conscious
experience before being able to generate them? In our view, this requirement
may be too conservative as it suggests a priori that (human) understanding of
neural causality must precede generation of neural causality. In our view, the
question of how we can capture the causal interactions of a neural system to
recreate its behavior and internal states - is an empirical question, and one
that ET in part seeks to address.
### 2.4 Exogenous generation of behavior and phenomenal states is already
possible
In fact, our mechanistic understanding of neural causality already lags behind
the ability to artificially generate behaviors or even conscious states in
biological organisms. In particular, we have known for almost a century how to
artificially generate conscious states via direct, i.e. exogenous, electrical
stimulation (DES). For example, in awake human neuro-surgery, activations of
neural tissue by direct (i.e. exogenous) electrical stimulation can lead to
individuals experiencing specific conscious content: emotions and novel visual
imagery lai2020acute , speech generation collee2023localization , or even
disrupting short term memory of the task itself ng2021disrupting . An even
more remarkable finding in the neuroscience of agency is that even self-
control states such as experiencing a desire or "will to act" can be elicited
by exogenous electrical stimulation fried1991functional . In mice, using light
optogenetic activation deisseroth2015optogenetics we can identify and then
artificially activate visual system neurons to generate perceptions leading to
behavior marshel2019cortical ; CARRILLOREID2019447 . We can even "tag" memory
engrams representing positive moods and reactivate them at a later time to
elicit mood-change like behaviors in mice ramirez2015activating .
Even though it is very likely that both DES and optogenetic activations engage
specific - rather than random - neural circuits and states, these are
nonetheless remarkable experimental findings. That is, there is no obvious reason why evolution should have allowed organism-level behaviors or phenomenal states to be generated simply by exogenous input - rather, the
opposite seemed more possible given the extraordinary complexity of the brain.
Namely, that highly specific neurons and circuits must be engaged in a
particular fashion to generate behaviors and phenomenal states. In our view,
the existence of coarse-grained exogenous generation of phenomenal states (and
behavior) is a largely under-explored research approach to understanding
behavior and consciousness causality. More specific to ET, these studies
suggest that - at the very least - we may not require precise mechanistic
models of neural dynamics all the way to the synapse (or lower level) to
generate models that exhibit behavior and phenomenal states.
### 2.5 Causal path independence conjecture
More importantly, these findings suggest that neural states that support the
generation of actions or conscious states may be reachable by multiple paths.
We formulate this into a central conjecture of ET:
1. C1:
Path-independent neural causality conjecture (PINC). Behaviors and conscious states do not depend on endogenous (i.e. within-system) causality; they can be generated (or reached) by completely exogenous pathways.
Here, "exogenous" means a perturbation that is external to the brain such as a
patch clamp or optogenetic excitation. In practical terms, $C1$ proposes that
sequences of neural activation $x_{i,t_{n}}$ ($n$ time steps) that are
sufficient for conscious experience or behaviors - can be generated (i.e.
caused) by exogenous sequential activation of such neural states. Thus, $C1$
implies that any mechanism required for consciousness, for example the
"ignition and broadcast within a neuronal global workspace" (GWT), "local
recurrent or re-entrant cortical processing" (recurrence), or "multiple …
representations … available to a central system" (multiple
drafts) seth2022theories - can be activated by exogenous pathways, as there is
no dependence of conscious states emerging from specific causal paths, but
only on dynamical activation.
We view PINC as central to understanding how causality can be emulated in artificial systems (such as computational models or emulators), which can in
turn generate behaviors and conscious states in biological and even artificial
organisms. We discuss PINC in more detail in Section 3.
Figure 2: Granularity-based neural emulators. Relationship between the
capacity (or accuracy) of a behavior (blue curve) and neural (red curve)
emulator vs the granularity of the neural data used for training with
hypothesized requirements for perfect behavior models (dashed blue line) and
conscious states (dashed red line). Proposed experiment of recording sparsely
sampled brain-wide LFP (magenta arrow) and putative emulator capacity from
such datasets (magenta dots).
### 2.6 Scalable emulators: a research paradigm
We now return to the question of whether we can collect sufficient datasets
for training mechanism-agnostic emulators. Central to this question is how to
build emulators from empirical data. We find it instructive to start by
defining an ideal emulator as an abstract tool to help frame the practical
emulators. An ideal emulator is a model that is generated from very low (or
arbitrarily small) spatial-scale granularity parcels and very large (or
arbitrarily large) datasets. We point out that ideal emulators seek to model
the behavior of biological system but only to the level required to predict
the behaviors and phenomenal states and may not need to model synapse, or
protein-level interactions in nervous systems. In this sense, ideal emulators
model (only) the sufficient conditions for the generation of such observables.
We discuss ideal emulators in more detail in Appendix C.
In contrast to ideal emulators we propose building scalable emulators (or
simply "emulators"). Scalable emulators can be created by training predictive
models from (practical size) neural dynamics datasets of varying spatial and
temporal resolution or "granularity" (Fig 2). Emulators can thus be generated
using neural activity recorded at different granularity scales such as: single
neuron spikes, local-field-potential (LFP), or functional magnetic resonance
imaging (fMRI). Emulators will thus be trained on (simultaneously recorded)
neural dynamics and behaviors of biological organisms - to predict behavior
and neural dynamics. Does this mean that any type of data, e.g. fMRI, is sufficient to generate perfect or nearly
perfect emulators?
Not necessarily. We define the capacity of an emulator simply as the accuracy
of predicted behaviors and neural dynamics relative to the source (e.g. on
test or hold out data). ET proposes that - much like any other ML model -
emulator capacity or accuracy will increase with dataset size (e.g. number of
time points and number of neural areas) - and also with increased granularity
of the recordings (e.g. LFP models should do better than fMRI ones). While we
do not expect that current types of fMRI data alone are sufficient to achieve perfect emulators, ET implies that, given sufficiently large fMRI datasets, there is no theoretical prohibition on this.
We close by providing a comparison between the dataset sizes used to train
LLMs and what is feasibly possible to generate using relatively modest
empirical paradigms in rodent neuroethology (Table 1). Treating each neural parcel sample as a single token, at a 1000 Hz sampling rate, we can in principle generate datasets comparable in size to the (estimated) GPT-4 training set with just over a dozen rodents recorded continuously for 24 months.
Type of model | Recording time | No. of neural parcels | No. of params | No. of training tokens
---|---|---|---|---
Llama 2 | n/a | n/a | 7-70 billion | 5 trillion
GPT 4 | n/a | n/a | 1.76 trillion | O(100) trillion (est)
1 Rodent | 30 days | 32 | n/a | O(100) billion
15 Rodents | 2 years | 128 | n/a | O(100) trillion
Table 1: Comparison between training datasets for LLMs and emulators.
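The token counts in the last two rows of Table 1 follow from simple arithmetic; as a check (ours, one token per parcel per sample at 1000 Hz):

```python
# Sketch (ours): token counts behind Table 1, treating each parcel sample at
# 1000 Hz as one token.
def tokens(n_animals, n_parcels, days, hz=1000):
    return n_animals * n_parcels * hz * 86_400 * days

print(f"{tokens(1, 32, 30):.1e}")      # ~8.3e10  -> O(100) billion
print(f"{tokens(15, 128, 730):.1e}")   # ~1.2e14  -> O(100) trillion
```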
### 2.7 ET: Summary and Conclusion
We proposed emulators as scale-dependent predictive models of neural dynamics
and behaviors of biological organisms. We offered a conceptual theoretical
framework, i.e. ET, which suggests that sufficiently accurate models, i.e.
emulators, may generate artificial systems with the same capacities of the
biological organisms they model. We argued this occurs because accurate
emulators must instantiate models of neural dynamics that are similar to the
biological nervous systems they model - essentially emulating neural causality
of such biological nervous systems. We proposed a practical research framework
- scalable emulators that focuses on modeling of simultaneously recorded
neural and behavior data to generate increasingly accurate predictive models
as a function of the granularity or spatio-temporal scale of the recordings.
One advantage of ET framework for modeling nervous systems is that we can
balance the reductionist desire to decompose all neural states into the
smallest components with principled pragmatic-driven goals. Thus, we can
remain agnostic on both mechanisms as well as spatio-temporal scale of
recordings required to adequately model biological neural networks. In
contrast to mechanistic models sought in neuroscience, ET states we don’t need
to understand the role of all neural components across all scales if we can
identify scales that enable us to generate sufficiently accurate models.
Because there are redundancies in biological neural networks (e.g. not all
neurons or areas are required for specific capacities), we may directly pursue
model accuracy as a primary goal rather than mechanistic explanation of all
components.
In the next section we expand our discussion of the central ET conjecture -
$C1$ \- to provide a conceptual background for why neural states are causal-
path independent and why this further supports the exogenous emulation of
causality proposed by ET. We discuss additional topics on the philosophy of
causality in the context of ET in Appendix B.
## 3 A theory of path-independent neural causality
The central conjecture of ET, i.e. path-independent neural causality (PINC), proposes that we can generate behaviors and conscious states using exogenous pathways as opposed to endogenous (or "natural") ones. We reviewed
DES and optogenetic studies in humans and mice, respectively, and argued this
provides some evidence for this. However, in our view, to fully establish
emulators as exogenous models of causality we must do more than merely argue
that some neural states can be generated exogenously. In particular, we must
show that emulators can learn to activate biological - or artificial -
circuits using models that capture and replicate causality.
In its simplest form, PINC highlights limits on the empirical measurement, or
"measurability", of the biological dynamical systems that generate phenomenal
states. For example, if we could somehow activate any neural state ("down to
the synapses") by external means, it may not be possible to measure a
difference between such neural states arising naturally versus exogenously.
This is because the fundamental quantities we measure are behavioral or
internal-state "readouts" - and if those are nearly identical then we cannot
in principle differentiate between their causes.
A second, related, challenge that PINC raises concerns "identicality": i.e. we
may have to rethink what makes for a perfect or "ideal" model of a neural
system or biological organism. The contribution of chaos and noise to neural
dynamics has been studied for decades in neuroscience, even at the single-
neuron level Mainen1995 and in drift-diffusion models of decision making,
where the precise timing of a decision is generally determined by noise
Ratcliff2001 ; Ratcliff2008 . These studies (many non-mechanistic) propose
broad boundaries on what constitutes a sufficiently accurate model of an
observed neural system - or its behavioral output. Thus, PINC raises the
question of what constitutes a perfect model - even one built from mechanistic
knowledge - of a biological organism.
Below we build our intuition for PINC using thought experiments at various
levels of abstraction, each attached to a hypothesis. We start with the input
source indistinguishability (ISI) hypothesis, which states that from a first-
person perspective it is impossible to distinguish whether the behavioral
output or conscious state of an organism was generated by endogenous or
exogenous pathways. Second, the path-independent causality (PIC) hypothesis
restates $C1$ (see Section 2) in the framework of causal models and highlights
the lack of causal-path requirements for neural system dynamics. Lastly, the
model system divergence (MSD) hypothesis proposes that, because of noise and
chaotic dynamics, there can be no a priori thresholds for distinguishing
between the output of a dynamical system driven by a (sufficiently accurate)
emulator and a system driven by a ground-truth mechanistic model.
### 3.1 ISI: all neuronal states can be reached exogenously
We begin with a model of a 2-neuron circuit receiving input from sensory and
internal state systems (i.e. "native" input) and generating downstream output
(Fig 3a). From an external perspective (i.e. the general perspective of an
experimental neuroscientist), this system can be completely described by the
spike trains of all the nodes (Fig 3b). Having access to such spike trains, we
can imagine removing (or ablating) the native connections to this circuit and
directly or "artificially" simulating the inputs (e.g. using patch clamps)
(Fig 3c). Such an artificially driven system would have spike rasters
identical to the original system (Fig 3d); we assume determinism here, without
loss of generality. More importantly, as our artificially driven system is
functionally identical to our native one, from a first-person perspective
there is no experiment that we can do to determine whether the neural states
are generated by native or artificial inputs.
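A minimal sketch of the determinism assumption behind this argument, using a
toy leaky integrate-and-fire neuron; all parameters and the input trace are
illustrative:

```python
# Sketch of the ISI thought experiment: a deterministic leaky integrate-and-fire
# neuron produces identical spike trains whether its input arrives "natively"
# or is replayed by an external system. Parameters are illustrative.
import numpy as np

def lif_spikes(input_current: np.ndarray, dt=1e-3, tau=0.02, v_th=1.0) -> np.ndarray:
    v, spikes = 0.0, np.zeros_like(input_current, dtype=bool)
    for t, i_t in enumerate(input_current):
        v += dt / tau * (-v + i_t)       # leaky integration of input
        if v >= v_th:
            spikes[t], v = True, 0.0     # spike and reset
    return spikes

rng = np.random.default_rng(1)
native_input = rng.uniform(0.0, 3.0, size=2_000)   # "native" synaptic drive
recorded = native_input.copy()                     # an external system records it
assert np.array_equal(lif_spikes(native_input), lif_spikes(recorded))
print("native and replayed inputs produce identical spike rasters")
```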
Figure 3: Input source indistinguishability (ISI). (a) A two-neuron local
network receiving "native" sensory and internal-state input and generating
downstream output. (b) Spike rasters for the network in (a). (c) Same as (a)
but inputs to the network are simulated by an external system. (d) The
simulated system in (c) produces spike rasters identical to those in (b). (e)
Left: single neuron receiving axonal inputs (red) from various sources and
outputting a spike pattern (green); Right: proposed spike rasters for (e). (f)
Left: same as (e) but axonal inputs are simulated by an external system;
Right: raster identical to (e) achieved by the simulated system in (f). (g)
Left: a self-driving recurrent neural network; Right: proposed spike rasters
for (g). (h) Left: same as (g) but with recurrent inputs removed and simulated
inputs added; Right: spike raster identical to (g) achieved by the simulated
system in (h).
This thought experiment can be modified ad infinitum. For example, we can
consider smaller spatial scales by modeling synapse activations (Fig 3e,f),
and we can consider complex recurrent neural networks by increasing the
complexity of the connectivity (Fig 3g,h). In all cases, if we can adequately
model the neural dynamics and artificially generate them, then from a first-
person perspective it is not possible to determine how causality was achieved.
Does this argument extend to arbitrary scales, e.g. the whole brain - and what
are the implications for cognition or consciousness? With respect to
viability, single-neuron activation is increasingly achievable tong2023single
. At the phenomenal level, ISI hypothesizes that all neural states, including
those underpinning conscious states, can be achieved by artificial activation
of neurons even in complex recurrent networks. More specifically, ISI proposes
that irrespective of the necessary neural states for consciousness - such
states can be artificially generated with a sufficiently powerful model of
neural activity and precise activation (see the discussion above on DES).
### 3.2 PIC: all neural states are causal path agnostic
Conscious states are thought to unfold on the scale of hundreds of
milliseconds to seconds northoff2020neural ; kent2019duration and most
neuroscience theories of consciousness require complex interactions across
many neural systems where temporal dynamics matter seth2022theories . Given
that we may not be able to distinguish the source of neural inputs - does this
mean that dynamics (e.g. order of activation or recurrence) do not matter for
complex states such as cognition or consciousness?
The path-independent causality (PIC) hypothesis builds on ISI and proposes
that while neural dynamics may matter for the generation of conscious states,
the causal paths do not. That is, from a first-person perspective it is
impossible to distinguish the causal pathway by which the necessary and
sufficient neuronal states for consciousness are achieved. Thus, at best, only
temporal dynamics matter, and causal pathways can be simulated with sufficient
manipulation. How is this possible?
To show this we study a causal model of conscious state generation (Fig 4). In
particular, we study the causal generation of neural states S1 and S2 that are
sufficient for conscious state C. In this framework, the direct causes (i.e.
parents) of C are S1 and S2, which are in turn caused by their own parents (P1
and P2; Fig 4a). Thus, given S1 and S2, C is independent of the parents:
$C \perp\!\!\!\perp (P_{1},P_{2}) \mid (S_{1},S_{2})$ (9)
In this causal framework, because C is caused solely by S1 and S2, the causes
of S1 and S2 can be arbitrarily set or determined (Fig 4b). (For clarity: to
address objections that "conscious" states require many other interacting
parts, including states such as P1 and P2, to be activated, we simply add more
exogenous causal paths to represent their activations as well.) Thus, if we
replace endogenous input (i.e. parents P1 and P2) with exogenous input,
conscious state $C$ is still achieved. We note this description generalizes to
all possible neural states, from simple two-neuron systems to large
recursively activated networks. For example, in HOT, S1 can represent lower-
order representations and S2 higher-order ones; in GWT, S1 can represent
globalization of information and S2 attention being brought to those states.
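This conditional-independence structure is easy to render as a toy structural
causal model; everything below (the functional forms, the threshold) is an
illustrative assumption, not a model of any real circuit:

```python
# Sketch of PIC as a structural causal model: C depends only on S1, S2 (Eq. 9).
# Setting S1, S2 exogenously (an intervention cutting the P -> S edges) yields
# exactly the same C as the endogenous pathway. All functions are toys.
import numpy as np

rng = np.random.default_rng(2)

def sample_C(s1, s2):
    return (s1 + s2 > 1.0).astype(float)       # C is a function of S1, S2 only

# Endogenous pathway: P1, P2 cause S1, S2.
p1, p2 = rng.normal(size=100_000), rng.normal(size=100_000)
s1, s2 = np.tanh(p1) + 0.7, np.tanh(p2) + 0.7
c_endo = sample_C(s1, s2)

# Exogenous pathway: do(S1 = s1, S2 = s2) with the same state values,
# irrespective of how those values were produced.
c_exo = sample_C(s1.copy(), s2.copy())

print(np.mean(c_endo), np.mean(c_exo))         # identical: C is path-agnostic
assert np.array_equal(c_endo, c_exo)
```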
Figure 4: Path-independent causality (PIC) hypothesis. (a) Causal graph of two
necessary and sufficient biophysical neural states, S1 and S2, required for
phenomenal conscious state C. In the native condition, neural states S1 and S2
are caused by endogenous prior states P1 and P2 (e.g. sensory and internal-
state processing). (b) Same as (a) but neural states S1 and S2 are generated
via exogenous (e.g. artificially generated) inputs, resulting in the same
phenomenal conscious state C irrespective of the causal path of activation of
the necessary and sufficient biophysical neural states.
### 3.3 MSD: emulator outputs may be indistinguishable from mechanistic
models and biological organism outputs
Our last hypothesis - MSD - states that there are no a priori methods for
distinguishing between sufficiently accurate predictive models (i.e.
emulators) and models built from a complete mechanistic theory of nervous
system function. Fundamentally, this hypothesis suggests that we must define
empirical tests and thresholds for what constitutes a sufficiently accurate
model of a biological neural system. We discuss this in more detail in
Appendix D.
## 4 Limitations
Our proposal has some limitations. In particular, ET does not describe the
content of conscious states or the necessary and sufficient conditions for
generating them. ET primarily states that such states can be achieved via
artificial activations - or ultimately within completely artificial systems
that model biological neural networks in sufficient detail. Additionally, ET
proposes that experimentally viable emulators are possible, but this is an
empirical question that needs to be investigated. Similarly, we did not
discuss in detail the size of the datasets required, nor the neural-area
specificity of those datasets. We envision that neural data from some areas,
such as frontal and sensory systems (known to be involved in conscious state
and behavior generation), may provide more information for our models. In this
sense, building emulators in the near term (i.e. before technologies to record
from the whole brain are available) should benefit from neuroanatomical
targeting.
Lastly, we did not discuss the neural architectures required for the
generation of conscious states. ET is agnostic to the role of architectures
(or even anatomical modularity) in generating behavior and conscious states.
In particular, ET does not make any commitments to the requirement of specific
neural circuits - it only indirectly supports whatever artificial NN
architectures are required to model causality. Moreover, by the universal
approximation theorem, given sufficiently expressive NNs and large enough
datasets, ET suggests that architectures do not matter. In this context, ET
may be interpreted to suggest that biological organisms developed specific
neural architectures due to biophysical and energy constraints that are not
necessarily required for artificial or synthetic NNs.
## 5 Concluding remarks
Here we proposed emulators as predictive models of organism behavior and
cognitive states trained solely on historical neural dynamics and behavioral
states. We offered ET as a framework for understanding how emulators can be
built and what they can achieve, and we discussed their benefits and
limitations. ET also points to the need to develop increasingly nuanced
empirical tests to determine whether models of complex systems exhibit the
capacities of the originals, including internal phenomenal experiences. In our
view, once models of biological neural networks achieve high precision, it is
not only possible that they experience conscious states but likely that they
do so.
## References
* (1) David Marr. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. The MIT Press, Cambridge, MA, 1982. Foreword by Shimon Ullman, Afterword by Tomaso A. Poggio.
* (2) Patricia Smith Churchland and Terrence J Sejnowski. The computational brain. MIT press, Cambridge, 1992.
* (3) Y. Shen, A. Luchetti, G. Fernandes, et al. The emergence of molecular systems neuroscience. Molecular Brain, 15:7, 2022.
* (4) Per E. Roland. How far neuroscience is from understanding brains. Frontiers in Systems Neuroscience, 17:1147896, 2023.
* (5) JW Krakauer, AA Ghazanfar, A Gomez-Marin, MA MacIver, and D Poeppel. Neuroscience needs behavior: Correcting a reductionist bias. Neuron, 93(3):480–490, Feb 2017.
* (6) Mark Humphries. Is neuroscience doomed to go round in circles? on the disturbing lack of direction in neuroscience. Medium, May 2017.
* (7) Christian Beste. Disconnected psychology and neuroscience—implications for scientific progress, replicability and the role of publishing. Communications Biology, 4:1099, 2021.
* (8) Henry Markram. Seven challenges for neuroscience. Functional Neurology, 28(3):145–151, Jul-Sep 2013.
* (9) Charles Piller. Blots on a field? Science, 377(6604):358–363, Jul 2022.
* (10) David Silver, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, Sander Dieleman, Dominik Grewe, John Nham, Nal Kalchbrenner, Ilya Sutskever, Timothy Lillicrap, Madeleine Leach, Koray Kavukcuoglu, Thore Graepel, and Demis Hassabis. Mastering the game of go with deep neural networks and tree search. Nature, 529:484–489, Jan 2016.
* (11) Martin Popel, Marketa Tomkova, Jakub Tomek, Łukasz Kaiser, Jakob Uszkoreit, Ondřej Bojar, and Zdeněk Žabokrtský. Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals. Nature Communications, 11, Sep 2020.
* (12) Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
* (13) Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521:436–444, 2015.
* (14) Laith Alzubaidi, Jinglan Zhang, Amjad J. Humaidi, Ayad Al-Dujaili, Ye Duan, Omran Al-Shamma, J. Santamaría, Mohammed A. Fadhel, Muthana Al-Amidie, and Laith Farhan. Review of deep learning: concepts, cnn architectures, challenges, applications, future directions. Journal of Big Data, 8(53), 2021.
* (15) Leo Breiman. Statistical Modeling: The Two Cultures (with comments and a rejoinder by the author). Statistical Science, 16(3):199 – 231, 2001.
* (16) Joshua S Bowers, Gautam Malhotra, Marko Dujmović, et al. Deep problems with neural network models of human vision. Behavioral and Brain Sciences, 46:e385, 2023.
* (17) Hui Lin. The scientific value of explanation and prediction. Behavioral and Brain Sciences, 46:e399, 2023.
* (18) Tal Yarkoni and Jacob Westfall. Choosing prediction over explanation in psychology: Lessons from machine learning. Perspectives on Psychological Science, 12(6):1100–1122, 2017.
* (19) Blake A. Richards, Timothy P. Lillicrap, Philippe Beaudoin, Yoshua Bengio, Rafal Bogacz, Anders Christensen, Claudia Clopath, Rui Ponte Costa, Archy de Berker, Surya Ganguli, Colleen J. Gillon, Danijar Hafner, Adam Kepecs, Nikolaus Kriegeskorte, Peter Latham, Geoffrey W. Lindsay, Kenneth D. Miller, Richard Naud, Christopher C. Pack, Panayiota Poirazi, Pieter Roelfsema, Joao Sacramento, Andrew Saxe, Benjamin Scellier, Anna Christina Schapiro, Walter Senn, Greg Wayne, Daniel Yamins, Friedemann Zenke, Joel Zylberberg, Dominic Therien, and Konrad P. Kording. A deep learning framework for neuroscience. Nat Neurosci, 22(11):1761–1770, Nov 2019.
* (20) Tim Kietzmann, Patrick McClure, and Nikolaus Kriegeskorte. Deep neural networks in computational neuroscience. Oxford Research Encyclopedia of Neuroscience, January 2019.
* (21) Radoslaw M. Cichy and Daniel Kaiser. Deep neural networks as scientific models. Trends Cogn Sci, 23(4):305–317, Apr 2019.
* (22) A Tampuu, T Matiisen, HF Ólafsdóttir, C Barry, and R Vicente. Efficient neural decoding of self-location with a deep recurrent network. PLoS Comput Biol, 15(2):e1006822, Feb 2019.
* (23) Ding Zhou and Xue-Xin Wei. Learning identifiable and interpretable latent models of high-dimensional neural activity using pi-vae. In Proceedings of the 34th International Conference on Neural Information Processing Systems (NeurIPS ’20), pages 7234–7247, December 2020.
* (24) Chethan Pandarinath, Daniel J. O’Shea, Jasmine Collins, Rafal Jozefowicz, Sergey D. Stavisky, Jonathan C. Kao, Eric M. Trautmann, Matthew T. Kaufman, Stephen I. Ryu, Leigh R. Hochberg, Jaimie M. Henderson, Krishna V. Shenoy, L. F. Abbott, and David Sussillo. Inferring single-trial neural population dynamics using sequential auto-encoders. Nature Methods, 15(9):805–815, 2018.
* (25) OpenAI. GPT-4 technical report. 2023.
* (26) Patrick Butlin, Robert Long, Eric Elmoznino, Yoshua Bengio, Jonathan Birch, Axel Constant, George Deane, Stephen M. Fleming, Chris Frith, Xu Ji, Ryota Kanai, Colin Klein, Grace Lindsay, Matthias Michel, Liad Mudrik, Megan A. K. Peters, Eric Schwitzgebel, Jonathan Simon, and Rufin VanRullen. Consciousness in artificial intelligence: Insights from the science of consciousness. arXiv preprint arXiv:2308.08708, 2023.
* (27) David Gamez. Progress in machine consciousness. Consciousness and Cognition, 17(3):887–910, 2008.
* (28) Lionel Naccache. Why and how access consciousness can account for phenomenal consciousness. Phil. Trans. R. Soc. B, 373(20170357), 2018.
* (29) MM Churchland, JP Cunningham, MT Kaufman, JD Foster, P Nuyujukian, SI Ryu, and KV Shenoy. Neural population dynamics during reaching. Nature, 487(7405):51–56, 2012.
* (30) Karel Svoboda and Nuo Li. Neural mechanisms of movement planning: motor cortex and beyond. Current Opinion in Neurobiology, 49:33–41, 2018. Neurobiology of Behavior.
* (31) G. Lai, JP. Langevin, RJ. Koek, SE. Krahl, AA. Bari, and JWY. Chen. Acute effects and the dreamy state evoked by deep brain electrical stimulation of the amygdala: Associations of the amygdala in human dreaming, consciousness, emotions, and creativity. Frontiers in Human Neuroscience, 14:61, Feb 2020.
* (32) E Collée, A Vincent, E Visch-Brink, E De Witte, C Dirven, and D Satoer. Localization patterns of speech and language errors during awake brain surgery: a systematic review. Neurosurgical Review, 46(1):38, Jan 2023.
* (33) S. Ng, G. Herbet, and AL. et al. Lemaitre. Disrupting self-evaluative processing with electrostimulation mapping during awake brain surgery. Scientific Reports, 11:9386, 2021.
* (34) Itzhak Fried, Amiram Katz, Gregory McCarthy, Karen J Sass, Philip Williamson, Susan S Spencer, and Dennis D Spencer. Functional organization of human supplementary motor cortex studied by electrical stimulation. Journal of Neuroscience, 11(11):3656–3666, Nov 1991.
* (35) Karl Deisseroth. Optogenetics: 10 years of microbial opsins in neuroscience. Nature Neuroscience, 18(8):1213–1225, 2015.
* (36) James H Marshel, Yongsoo S Kim, Timothy A Machado, Sean Quirin, Brendan Benson, Jonathan Kadmon, Chandramouli Raja, Artur Chibukhchyan, Charu Ramakrishnan, Masafumi Inoue, Justin C Shane, Daniel J McKnight, Shin Yoshizawa, Hideaki E Kato, Surya Ganguli, and Karl Deisseroth. Cortical layer-specific critical dynamics triggering perception. Science, 365(6453):eaaw5202, Aug 2019.
* (37) Luis Carrillo-Reid, Shuting Han, Weijian Yang, Alejandro Akrouh, and Rafael Yuste. Controlling visually guided behavior by holographic recalling of cortical ensembles. Cell, 178(2):447–457.e5, 2019.
* (38) Steve Ramirez, Xiaojing Liu, Christopher MacDonald, et al. Activating positive memory engrams suppresses depression-like behaviour. Nature, 522:335–339, 2015.
* (39) Anil K Seth and Tim Bayne. Theories of consciousness. Nature Reviews Neuroscience, 23:439–452, 2022.
* (40) Zachary F. Mainen and Terrence J. Sejnowski. Reliability of spike timing in neocortical neurons. Science, 268(5216):1503–1506, Jun 1995.
* (41) Roger Ratcliff. Putting noise into neurophysiological models of simple decision making. Nature Neuroscience, 4(4):336, 2001.
* (42) Roger Ratcliff and Gail McKoon. The diffusion decision model: theory and data for two-choice decision tasks. Neural Computation, 20(4):873–922, Apr 2008.
* (43) Ling Tong, Shuo Han, Yusheng Xue, Mengjiao Chen, Fei Chen, Wei Ke, Yousheng Shu, Nan Ding, Joerg Bewersdorf, Zhong-Wei Zhou, Peng Yuan, and Jaime Grutzendler. Single cell in vivo optogenetic stimulation by two-photon excitation fluorescence transfer. iScience, 26(10):107857, Sep 2023.
* (44) Georg Northoff and Victor Lamme. Neural signs and mechanisms of consciousness: is there a potential convergence of theories of consciousness in sight? Neuroscience & Biobehavioral Reviews, 118:568–587, 2020.
* (45) L. Kent. Duration perception versus perception duration: a proposed model for the consciously experienced moment. Timing & Time Perception, 7:1–14, 2019.
* (46) David Fair. Causation and the flow of energy. Erkenntnis, 14(3):219–250, 1979.
* (47) Wesley C. Salmon. Scientific Explanation and the Causal Structure of the World. Princeton University Press, 1984.
* (48) Phil Dowe. Physical Causation. Cambridge University Press, New York, 2000.
* (49) Peter Fazekas, Balazs Gyenis, Gábor Hofer-Szabó, and Gergely Kertesz. A dynamical systems approach to causation. Synthese, 198(11):6065–6087, 2021.
* (50) Alexandre Pouget, Peter Dayan, and Richard S Zemel. Inference and computation with population codes. Annual Review of Neuroscience, 26:381–410, 2003. Epub 2003 Apr 10.
* (51) Bruno B Averbeck, Peter E Latham, and Alexandre Pouget. Neural correlations, population coding and computation. Nature Reviews Neuroscience, 7(5):358–366, May 2006.
* (52) Surya Vyas, Matthew D Golub, David Sussillo, and Krishna V Shenoy. Computation through neural population dynamics. Annual Review of Neuroscience, 43:249–275, Jul 2020.
* (53) Ofer Mazor and Gilles Laurent. Transient dynamics versus fixed points in odor representations by locust antennal lobe projection neurons. Neuron, 48(4):661–673, Nov 2005.
* (54) Russell J Gardner, Eilif Hermansen, Marius Pachitariu, et al. Toroidal topology of population activity in grid cells. Nature, 602:123–128, 2022.
* (55) Mark D Humphries. Strong and weak principles of neural dimension reduction. 2020.
* (56) Patricia Kitcher. Marr’s computational theory of vision. Philosophy of Science, 55(1):1–24, 1988.
* (57) Kurt Hornik, Maxwell Stinchcombe, and Halbert White. Multilayer feedforward networks are universal approximators. Neural Networks, 2(5):359–366, 1989.
* (58) Balázs Csanád Csáji. Approximation with artificial neural networks. Master’s thesis, Eötvös Loránd University, Faculty of Sciences, Hungary, 2001. MSc thesis.
* (59) David J. Chalmers. The singularity: A philosophical analysis. Journal of Consciousness Studies, 17(9-10):9–10, 2010.
* (60) Ben Goertzel. Human-level artificial general intelligence and the possibility of a technological singularity: A reaction to ray kurzweil’s the singularity is near, and mcdermott’s critique of kurzweil. Artificial Intelligence, 171(18):1161–1173, 2007. Special Review Issue.
* (61) Ben Goertzel and Matthew Ikle’. Mind uploading, introduction. International Journal of Machine Consciousness, 04(01):1–3, 2012.
* (62) S Makin. The four biggest challenges in brain simulation. Nature Outlook, July 2019.
* (63) Sim Bamford. A framework for approaches to transfer of a mind’s substrate. International Journal of Machine Consciousness, 4(01):23–34, 2012.
* (64) Olle Häggström. Aspects of mind uploading. Proceedings, 1(3), 2017.
* (65) Massimo Pigliucci. Mind Uploading: A Philosophical Counter-Analysis, chapter 7, pages 119–130. John Wiley and Sons, Ltd, 2014.
* (66) J.H. Moor. Testing robots for qualia. In H.R. Otto and J.A. Tuedio, editors, Perspectives on Mind. D. Reidel Publishing Company, Dordrecht/ Boston/ Lancaster/ Tokyo, 1988.
* (67) Jesse J. Prinz. Level-headed mysterianism and artificial experience. In Owen Holland, editor, Machine Consciousness. Imprint Academic, Exeter, 2003.
* (68) John R. Searle. Minds, brains and programs. Behavioral and Brain Sciences, 3:417–457, 1980.
## Appendix / supplemental material
## Appendix A Terminology
Below we provide descriptions and working definitions of some of the terms in
our work.
Sufficiently accurate emulator or model. "Sufficiently accurate" is an
empirically derivable term. It refers to whether a model behaves (or generates
output) so close to the biological system that we cannot distinguish the
differences. As we point out in the main text, a central implication of ET is
that we will need increasingly nuanced tests for behavior and conscious states
and, more critically, we may need to set these thresholds empirically - i.e.
without a clear theory.
Perfect models. Similar to "sufficiently accurate", we reserve this term to
mean a synthetic or computational model that is indistinguishable from the
organism or neural system it is seeking to model.
Sufficiently large or big datasets. This term refers to the dataset size
required to generate a sufficiently accurate emulator. As explained in the
main text, ET proposes that even fMRI data might be good enough to generate
accurate emulators, but it is likely that significantly more data (and perhaps
lower-noise fMRI datasets) will be required. This would be in contrast to
using data from single neuronal activations for emulators, which in principle
contain more information and less noise and could be smaller in size.
Phenomenal or conscious states. We use this term to refer to internal, first-
person observations that are generally not accessible from the third-person
perspective. We note that we did not distinguish between phenomenal and access
consciousness, and that we referred to phenomenal states as a type of
"reportable" state. While some have argued that such states may in principle
be experienced but not reported, reportability is central to our work, which
only requires some way to measure conscious states - usually from a third-
person perspective.
Coarse-grained or exogenous activation. These terms refer to the lack of
precision in direct electrical stimulation or optogenetic activation. Such
exogenous neural activations generally target many neurons and neural
circuits. Even in cases where a single neuron is activated, such inputs are
still artificial relative to an endogenous activation of the neuron, which
generally involves dozens and possibly hundreds of inputs from presynaptic
neurons, and increasingly more from those cells’ own parental inputs.
Neural state. This term usually refers to the simultaneous activity of the
entire neural system we study. In systems neuroscience, a neural state is also
called a population vector or population state and can represent the activity
of each recorded neuron (e.g. being active or not) within 1-millisecond or
larger time windows (depending on the analysis carried out).
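As an illustration of this working definition, a population state matrix can
be built from spike times by binning at 1 ms; the spike times below are
fabricated:

```python
# Illustrative construction of "neural states" as population vectors: binary
# activity of each recorded neuron within 1 ms bins.
import numpy as np

def population_states(spike_times_s: list, duration_s: float,
                      bin_s: float = 0.001) -> np.ndarray:
    n_bins = int(np.ceil(duration_s / bin_s))
    states = np.zeros((n_bins, len(spike_times_s)), dtype=np.uint8)
    for j, times in enumerate(spike_times_s):
        states[(times // bin_s).astype(int), j] = 1   # mark active bins
    return states                                     # shape: (time bins, neurons)

rng = np.random.default_rng(3)
spikes = [np.sort(rng.uniform(0, 1.0, size=rng.integers(5, 50))) for _ in range(8)]
S = population_states(spikes, duration_s=1.0)
print(S.shape, S.sum(axis=0))   # 1000 x 8 state matrix; spike counts per neuron
```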
Behaviors or actions. We use this term to refer to the behavior of an animal,
such as a mouse moving its body or its limbs in space.
Emulator. A predictive (joint) model of animal behavior and whole-organism
neural dynamics generated from data at different spatio-temporal scales. We
generally intend the term emulator to refer to both the model and its physical
instantiation (i.e. operation) on a physical computer system.
Emulator theory. A theoretical framework for measuring and replicating the
capacities of biological organisms solely via predictive models that do not
require explicit mechanisms of how neural (sub)systems interact.
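For concreteness, a hypothetical minimal interface for an emulator in the
above sense might look as follows; the names and signatures are ours, for
illustration only, not a standard API:

```python
# Hypothetical minimal emulator interface: a joint predictive model over
# neural states and behavior. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Protocol, Tuple
import numpy as np

class Emulator(Protocol):
    def predict(self, neural_history: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
        """Map past neural states (time x parcels) to (predicted next neural
        state, predicted behavior readout)."""
        ...

@dataclass
class LinearEmulator:
    W_neural: np.ndarray      # (parcels, parcels): next-state map
    W_behavior: np.ndarray    # (parcels, behavior dims): behavior readout

    def predict(self, neural_history: np.ndarray) -> Tuple[np.ndarray, np.ndarray]:
        last = neural_history[-1]                 # most recent neural state
        return last @ self.W_neural, last @ self.W_behavior
```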
## Appendix B The physics of neural causality
While ET may not be required to explain how to generate consciousness, it may
be required to describe how neural causality works in biophysical organisms
(e.g. mammals) and whether such causality can be present in artificial ones.
Neuroscience does not directly address the philosophy or physics of neural
state causality and only provides high-level candidate theories of causality
(nearly all correlation-based) (e.g. seth2022theories ). Arguably, there is an
implicit reductionist assumption in all these theories: that consciousness is
caused by specific pathways and emerges from the activity of lower-level or
more fundamental systems.
Here we seek to address the question of what causality may look like in
"carbon-based" organisms and whether our best physical or philosophical
theories of causality preclude artificial organisms from similar causal
phenomena. We begin by briefly commenting on the science and physics of
causality - a field without a clear consensus, especially with respect to
complex physical systems. We then discuss causality in emulations and the
sufficient conditions for the recreation of all source capacities in an
emulated system.
### B.1 How are effects "caused" in complex dynamical (nervous) systems?
The philosophy and physics of causality is still a developing field, with the
most common proposals for what "causality" means focusing on the transference
or transmission of physical properties from one part to another, or on the
conservation of quantities across interactions Fair1979-FAICAT ;
Salmon1984-SALSEA ; Dowe2000-DOWPC-2 . In these frameworks, we can
conceptualize a neuron as causing another neuron to spike by passing on
some physical properties (e.g. a sufficient volume of glutamate molecules, or
a sufficiently large membrane current). However, many higher-order capacities
(e.g. perception or cognition) simply cannot be reduced to cause-effect
explanations between single neurons, due to the infeasibility of tracking all
the physics, but also because the cumulative effects of many neurons lead
to an emergent property (i.e. cognition) that is dynamical and almost always
independent of any individual neuron. Some conceptual proposals for
"causality by dynamics" have been put forward fazekas2021dynamical . In these
dynamical-systems explanations, "[c]ause and effect states are … regions of
the state space, and the causal claim asserts a connection between these two
regions". More specifically, "what makes a causal claim true is how the
physical states it picks out are related by time evolution…". In simpler
terms, in a complex dynamical system an effect is caused if the time evolution
of the causal state is likely, or "sufficiently" likely, to lead to the effect
state. Importantly, this definition is permissive enough to bridge the
biophysical-cognitive divide.
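A toy rendering of this "causality by dynamics" reading - a causal claim holds
if time evolution from the cause region of state space sufficiently often
reaches the effect region - might look as follows; the dynamics and the two
regions are arbitrary choices of ours:

```python
# Toy "causality by dynamics": estimate, by Monte Carlo, how often states
# sampled from a "cause" region evolve into an "effect" region. The drift
# and the regions are arbitrary illustrative choices.
import numpy as np

def step(x, noise_rng, dt=0.01, sigma=0.05):
    # Damped drift toward x = 1 with small stochastic perturbations.
    return x + dt * (1.0 - x) + sigma * np.sqrt(dt) * noise_rng.standard_normal(x.shape)

rng = np.random.default_rng(4)
x = rng.uniform(0.0, 0.2, size=10_000)          # sample the "cause" region
for _ in range(500):                            # evolve forward in time
    x = step(x, rng)

in_effect_region = (x > 0.8) & (x < 1.2)        # the "effect" region
print(f"P(effect | cause) ~= {in_effect_region.mean():.3f}")
```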
Computational neuroscientists have already identified how the dynamics of
populations of neurons are at least supportive of, if not central to, brain
function pouget2003inference ; averbeck2006neural ; vyas2020computation . In
this "computation by population" paradigm, “a neural population constitutes a
dynamical system that, through its temporal evolution, performs a computation
to, for example, generate and control movement". There have been several
successful models using population-level dynamical trajectories (e.g.
mazor2005transient ; churchland2012neural ; gardner2022toroidal ). For
example, in mazor2005transient , when presented with different odors, the
individual principal neurons (PNs) of the locust’s olfactory system exhibit a
brief transient ON signal that is quite similar across all neurons and odors.
However, when pooling the neurons together into a “population vector”,
principal-component-analysis (PCA) visualizations show that odor processing
has a substantially different dynamical trajectory at the population level for
the different odors, including fixed points and within-odor trial variance
well below the inter-odor variance. And while population dynamics are usually
visualized using dimensionality-reduction tools such as PCA, there are
suggestions that such tools are more than interpretation aids and “show us how
neural circuits actually operate and compute” humphries2020strong . This
implies that in some sense computation is achieved by populations, not just
individual neurons.
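A sketch of this kind of population-trajectory analysis, with synthetic "odor
responses" standing in for real recordings:

```python
# Sketch of population-trajectory analysis as in the locust-odor example:
# project simulated population vectors onto their first principal components.
# The "odor responses" here are synthetic, for illustration only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
T, n_neurons = 200, 50
t = np.linspace(0, 1, T)[:, None]
trajectories = {}
for odor, phase in (("odor A", 0.0), ("odor B", np.pi / 2)):
    latent = np.hstack([np.sin(2 * np.pi * t + phase), np.cos(2 * np.pi * t + phase)])
    rates = latent @ rng.standard_normal((2, n_neurons))      # embed 2D dynamics
    trajectories[odor] = rates + 0.1 * rng.standard_normal((T, n_neurons))

pca = PCA(n_components=2).fit(np.vstack(list(trajectories.values())))
for odor, rates in trajectories.items():
    pc = pca.transform(rates)                 # population trajectory in PC space
    print(odor, "trajectory start/end in PC1-2:", pc[0].round(2), pc[-1].round(2))
```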
Given this, we draw two conclusions. First, we lack a complete theory of
physical causality, especially when it comes to complex dynamical systems.
Second, and more central to our question, in a complex dynamical system such
as the brain, the (i) physical states and (ii) their dynamical evolution are
the only things we can measure, and they must account for the emergence of
macro-states such as consciousness from micro-states.
### B.2 Can software systems (emulators or simulators) cause or experience
cognition?
The above discussion suggests that if we can recreate both the neural states
and their dynamical changes in a model or a "simulation", we are essentially
recreating all the components of the original system and there is nothing
else to explain about such systems. While such simulations could solve the
same tasks as the source systems, would such simulations "experience"
cognitive states?
In our view, there is nothing in physics prohibiting complete emulation or
simulation, including the internal conscious states, of self-organizing
systems. We can formulate this claim around David Marr’s well-known
computational theory of vision marr1982vision , containing three levels:
computational, algorithmic and implementational (Fig 5). The distinctions are
that "[t]he Computational theory tells us what function is being computed, for
example, the square root function; the algorithmic level provides a means of
carrying out the computation, for example, Newton’s method of approximating
square roots" whereas at "the implementation level… the same algorithm can be
implemented in different hardware" Kitcher1988 . In this space, an emulator is
a computational system that captures both the input-output functions of a
source system and possibly its algorithmic, circuit-mechanistic and
biophysical-substrate implementations (Fig 5). Thus emulators seek a -
potentially - biophysical copy of an original source system, while simulators
are largely interested in matching only the computational level (i.e. the
input-output function) of the system studied.
We propose this as a working definition, amenable to change, of how to
distinguish between software vs. hardware-software models of biophysical
systems. Our central claim, however, is that in the infinite - or
"sufficiently granular" - limit, emulators will contain all the components of
the biophysical systems they are seeking to model and must necessarily also
have all their capacities, including internal conscious states. More specific
to this section, emulators of dynamical systems cannot be distinguished from
the dynamical systems themselves. The central weakness or unknown, however, is
that we do not know at what level of brain emulation we get consciousness.
Figure 5: Different depths of models. Relative to David Marr’s marr1982vision
three levels of modeling (green rectangle), emulators model all levels of a
biophysical system, while simulators model only behavior.
### B.3 Conclusion: rethinking the need for infinite reductionism in the
causality of non-physical states
In this section we highlighted the limited conceptual agreement on the physics
of causality and its implications for the limits of emulating complex
biophysical organisms. Our framework suggests that a biophysical emulator
could generate neural and behavioral dynamics indistinguishable from a source,
e.g. optimal motor responses and planning.
## Appendix C Ideal emulators
In the main text we proposed ideal emulators as systems that - in the
infinite-granularity and infinite-data-size limit - can perfectly model all
the necessary and sufficient components of a biological organism that generate
behavior and conscious states. (We note that we generally mean ideal neural
emulators, but drop the term "neural" for simplicity.) As an example, an ideal
emulator can be thought of as a complete biophysical copy of an animal’s
nervous system - from atoms to synapses to neuron connections - that can be
reconstructed based on access to "big" data. However, as we are interested in
predicting the behavior and conscious states of biological organisms - not
synaptic, molecular or atomic interactions - an ideal emulator will focus
primarily on learning to model these two types of output. The existence of
ideal emulators is essentially a restatement of the universal approximation
theorem HORNIK1989359 ; csaji2001approximation , but one that links neural
states to behavioral and phenomenal states. Within such a framework, the
central axiom of ET is:
1. A1:
Ideal emulators are equivalent to their sources. There is no empirical test
that can distinguish between the behavioral output or conscious states
experienced by a biological organism and those generated by an ideal emulator
of such an organism.
We first note that $A1$ is a statement about how we measure conscious states -
in an emulator or a biological organism - rather than how conscious states
emerge in a biological organism. That is, $A1$ states that consciousness must
be present (by definition) in an ideal emulator which is trained to predict
the behavior and conscious states of a biological organism - even if those are
never explicitly represented or even known by a human model designer.
Within the "levels of analysis" framework provided by David Marr
marr1982vision an ideal emulator is a system that can model all components of
an organisms including the subsystem (e.g. vision) level computations, their
algorithms, and circuit-level and even biophysical implementations (such as
intracellular mechanisms and dynamics)(Fig 5).
In contrast, we define a simulator as a system that models the behavioral
outputs of a biological system without any specific "lower-level"
constraints. (We provide a discussion of zombie consciousness in the appendix.)
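As a toy nod to the universal approximation theorem invoked earlier in this
appendix, a small MLP can be fit to an arbitrary smooth function; this
illustrates expressivity only and says nothing about consciousness or about
what an ideal emulator would actually require:

```python
# Toy universal-approximation illustration: a small MLP fit to an arbitrary
# smooth target function. The function and network sizes are arbitrary choices.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(6)
x = rng.uniform(-3, 3, size=(2_000, 1))
y = np.sin(2 * x[:, 0]) + 0.3 * x[:, 0] ** 2          # target function

mlp = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2_000,
                   random_state=0).fit(x, y)
x_test = np.linspace(-3, 3, 7).reshape(-1, 1)
for xi, yi in zip(x_test[:, 0], mlp.predict(x_test)):
    print(f"f({xi:+.1f}) ~= {yi:+.3f}  (true {np.sin(2 * xi) + 0.3 * xi ** 2:+.3f})")
```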
## Appendix D MSD: model system divergence hypothesis
Figure 6: Emulators learn joint distributions and activate neuronal
populations. (a) Six-node recurrent neural network. (b) Proposed spike trains
for the network in (a). (c) Emulators learn the joint probability distribution
from empirical observations. (d) Emulators can now drive the neural activity
in a similar neural network that has no endogenous connections. (e) Proposed
spike trains obtained from (d) are similar to those in the endogenous case in
(b). (f) The evolution of two identical dynamical systems driven by ground-
truth mechanistic models (NN1 and NN2) is as divergent (or different) as the
evolution of an emulator-based dynamical system (ENN) from the two models
(inset: putative distributions of path differences over time).
The model system divergence (MSD) hypothesis is essentially a statement about
the inherent noise and chaotic dynamics of the neural systems we seek to
model.
Given a neural network (e.g. an RNN) (Fig 6a) that generates spiking time
series (Fig 6b), a neural emulator is a model that learns the joint
probability of the neural time series (i.e. spike rasters here) and,
additionally, behavioral outputs (Fig 6c). (We note that organism-level
behavioral outputs are in principle encoded in the neural states of the
organism, so this secondary requirement is somewhat redundant.) As argued in
the main text, such a model can artificially drive the original biological
network (Fig 6d) to generate similar (or identical) spiking time series (Fig
6e). Moreover, given the inherent noise and chaos of biological neural
networks, two systems driven by NNs with identical connectivity weights and
starting states that evolve on a constrained manifold can nonetheless diverge
substantially (Fig 6f - green and red curves). MSD states that it is not
possible to statistically distinguish (by empirical measurements alone)
between a dynamical system built on ground-truth mechanistic models of a
biological system and a (sufficiently accurate) emulator-driven NN (Fig 6f;
inset histograms).
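A sketch of this divergence claim, using a noisy chaotic map as a stand-in for
neural dynamics; none of the choices below are claims about real circuits:

```python
# Sketch of MSD: two copies of the *same* "ground-truth" system, started from
# the same state but with independent tiny noise, diverge from each other as
# much as either diverges from an "emulator-driven" copy. The dynamics are a
# toy chaotic (logistic) map, chosen purely for illustration.
import numpy as np

def run(x0: float, noise: np.ndarray, r: float = 3.9) -> np.ndarray:
    xs, x = [], x0
    for eps in noise:
        x = np.clip(r * x * (1 - x) + eps, 0.0, 1.0)   # noisy logistic map
        xs.append(x)
    return np.array(xs)

rng = np.random.default_rng(7)
steps, sigma = 200, 1e-6
nn1 = run(0.3, sigma * rng.standard_normal(steps))     # ground-truth copy 1
nn2 = run(0.3, sigma * rng.standard_normal(steps))     # ground-truth copy 2
enn = run(0.3, sigma * rng.standard_normal(steps))     # "emulator-driven" copy

print("mean |NN1 - NN2|:", np.abs(nn1 - nn2).mean().round(3))
print("mean |NN1 - ENN|:", np.abs(nn1 - enn).mean().round(3))
```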
MSD thus specifically highlights that relying on the "precise" or accurate
dynamics of neural system models may not be enough to distinguish ground-
truth mechanistic models of nervous systems from increasingly accurate
emulators (or any other class of models). In particular, as emulators become
increasingly precise in capturing the dynamics of a biological organism,
there may be no clear or discrete moment when complex capacities such as
"conscious" states begin to be generated.
## Appendix E Relation to previous work
Given the various interdisciplinary notions discussed in our work, we chose to
focus mostly (and only at a high level) on the literature dealing with the
philosophy and phenomenology of conscious states and with topics related to
machine consciousness.
A central theme in artificial intelligence that is linked to understanding
brain function is whether we can copy, emulate, or upload brains or minds to
digital systems. In this field there are increasing discussions around human
minds being copied to machines Chalmers2010-CHATSA or "mind-uploading"
GOERTZEL2007 ; Goertzel2012 . Even setting aside the limited scientific
progress on this front, there are significant and unresolved conceptual
challenges Makin2019 , including the lack of frameworks Bamford2012-BAMAFF and
the lack of clarity around whether digital copies will experience
consciousness or have a personal identity Haggstrom2017 ; Pigliucci2014 . In
parallel, while we still do not know the necessary or sufficient conditions
for conscious states in biological organisms, there are suggestions of a
scattershot approach butlin2023consciousness to increase the probability of
achieving them.
Progress in machine consciousness has also been framed around core principles
GAMEZ2008 , including creating machines whose behavior and cognitive
characteristics are associated with consciousness, that have architectures
similar to conscious systems (i.e. humans), or that actually experience
phenomenal conscious states. For the latter, the belief is that the "hard"
problem of consciousness needs to be solved first Chalmers2010-CHATSA .
Others, like Moor1988 and Prinz2003 , suggest that it is not possible to know
whether artificial systems can be conscious because we cannot isolate or
separate the factors for consciousness. In contrast, ET proposes that we do
not have to solve the hard problem to create conscious machines, nor do the
machines need similar architectures, as long as their outward behavior matches
that of the source organisms - and we achieve such behaviors via modeling
neural dynamics.
Lastly, Searle1980 , in the famous "Chinese room" experiment, suggests that an
artificial system that does "mindless" translation by rules (or lookup tables)
does not thereby understand a language, and therefore cannot have phenomenal
states. ET proposes that if the artificial system produces the "mindless"
translation by modeling the causal components of a real (e.g. biological)
organism that can translate, then there is an equivalence between the
biological and the artificial system, and it is difficult to distinguish
between the two systems - aside from some molecular components of their
makeup.
## NeurIPS Paper Checklist
1. 1.
Claims
2. Question: Do the main claims made in the abstract and introduction accurately reflect the paper’s contributions and scope?
3. Answer: [Yes]
4. Justification: The Abstract and Introduction frame the contribution of the paper relative to existing research. We clearly define the scope and contribution in the Introduction and in a brief discussion of previous work (with a broader discussion in the Appendix).
6. 2.
Limitations
7. Question: Does the paper discuss the limitations of the work performed by the authors?
8. Answer: [Yes]
9. Justification: The paper discusses implications and limitations of our work in a separate section. As our work is conceptual and theoretical in nature, we provide theoretical limitations and conceptual challenges in building the types of systems and models we propose.
11. 3.
Theory Assumptions and Proofs
12. Question: For each theoretical result, does the paper provide the full set of assumptions and a complete (and correct) proof?
13. Answer: [Yes]
14. Justification: We provide our assumptions for our theoretical proposals in detail in each of the sections. In particular, emulators assume the ability to record specific types of neural dynamics and have sufficiently large datasets. We have also collected some of these terms in an Appendix specific on terminology.
16. 4.
Experimental Result Reproducibility
17. Question: Does the paper fully disclose all the information needed to reproduce the main experimental results of the paper to the extent that it affects the main claims and/or conclusions of the paper (regardless of whether the code and data are provided or not)?
18. Answer: [N/A]
19. Justification: We did not run any specific experiments.
21. 5.
Open access to data and code
22. Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material?
23. Answer: [N/A]
24. Justification: We did not run any experiments at this time.
26. 6.
Experimental Setting/Details
27. Question: Does the paper specify all the training and test details (e.g., data splits, hyperparameters, how they were chosen, type of optimizer, etc.) necessary to understand the results?
28. Answer: [N/A]
29. Justification: We did not run any experiments.
31. 7.
Experiment Statistical Significance
32. Question: Does the paper report error bars suitably and correctly defined or other appropriate information about the statistical significance of the experiments?
33. Answer: [N/A]
34. Justification: We did not run any experiments.
36. 8.
Experiments Compute Resources
37. Question: For each experiment, does the paper provide sufficient information on the computer resources (type of compute workers, memory, time of execution) needed to reproduce the experiments?
38. Answer: [N/A]
39. Justification: We did not run any experiments.
41. 9.
Code Of Ethics
42. Question: Does the research conducted in the paper conform, in every respect, with the NeurIPS Code of Ethics https://neurips.cc/public/EthicsGuidelines?
43. Answer: [Yes]
44. Justification: Our paper conforms with the ethics guidelines.
46. 10.
Broader Impacts
47. Question: Does the paper discuss both potential positive societal impacts and negative societal impacts of the work performed?
48. Answer: [N/A]
49. Justification: Our work does not have any immediate societal implications.
50. Guidelines:
* •
The answer NA means that there is no societal impact of the work performed.
* •
If the authors answer NA or No, they should explain why their work has no
societal impact or why the paper does not address societal impact.
* •
Examples of negative societal impacts include potential malicious or
unintended uses (e.g., disinformation, generating fake profiles,
surveillance), fairness considerations (e.g., deployment of technologies that
could make decisions that unfairly impact specific groups), privacy
considerations, and security considerations.
* •
The conference expects that many papers will be foundational research and not
tied to particular applications, let alone deployments. However, if there is a
direct path to any negative applications, the authors should point it out. For
example, it is legitimate to point out that an improvement in the quality of
generative models could be used to generate deepfakes for disinformation. On
the other hand, it is not necessary to point out that a generic algorithm for
optimizing neural networks could enable people to train models that generate
deepfakes faster.
* •
The authors should consider possible harms that could arise when the
technology is being used as intended and functioning correctly, harms that
could arise when the technology is being used as intended but gives incorrect
results, and harms following from (intentional or unintentional) misuse of the
technology.
* •
If there are negative societal impacts, the authors could also discuss
possible mitigation strategies (e.g., gated release of models, providing
defenses in addition to attacks, mechanisms for monitoring misuse, mechanisms
to monitor how a system learns from feedback over time, improving the
efficiency and accessibility of ML).
11. Safeguards
Question: Does the paper describe safeguards that have been put in place for responsible release of data or models that have a high risk for misuse (e.g., pretrained language models, image generators, or scraped datasets)?
Answer: [N/A]
Justification: Our work does not pose any immediate risks.
Guidelines:
* •
The answer NA means that the paper poses no such risks.
* •
Released models that have a high risk for misuse or dual-use should be
released with necessary safeguards to allow for controlled use of the model,
for example by requiring that users adhere to usage guidelines or restrictions
to access the model or implementing safety filters.
* •
Datasets that have been scraped from the Internet could pose safety risks. The
authors should describe how they avoided releasing unsafe images.
* •
We recognize that providing effective safeguards is challenging, and many
papers do not require this, but we encourage authors to take this into account
and make a best faith effort.
12. Licenses for existing assets
Question: Are the creators or original owners of assets (e.g., code, data, models), used in the paper, properly credited and are the license and terms of use explicitly mentioned and properly respected?
Answer: [N/A]
Justification: The paper does not use existing assets, so no licenses are required.
Guidelines:
* •
The answer NA means that the paper does not use existing assets.
* •
The authors should cite the original paper that produced the code package or
dataset.
* •
The authors should state which version of the asset is used and, if possible,
include a URL.
* •
The name of the license (e.g., CC-BY 4.0) should be included for each asset.
* •
For scraped data from a particular source (e.g., website), the copyright and
terms of service of that source should be provided.
* •
If assets are released, the license, copyright information, and terms of use
in the package should be provided. For popular datasets,
paperswithcode.com/datasets has curated licenses for some datasets. Their
licensing guide can help determine the license of a dataset.
* •
For existing datasets that are re-packaged, both the original license and the
license of the derived asset (if it has changed) should be provided.
* •
If this information is not available online, the authors are encouraged to
reach out to the asset’s creators.
13. New Assets
Question: Are new assets introduced in the paper well documented and is the documentation provided alongside the assets?
Answer: [N/A]
Justification: Our work does not introduce any new assets.
Guidelines:
* •
The answer NA means that the paper does not release new assets.
* •
Researchers should communicate the details of the dataset/code/model as part
of their submissions via structured templates. This includes details about
training, license, limitations, etc.
* •
The paper should discuss whether and how consent was obtained from people
whose asset is used.
* •
At submission time, remember to anonymize your assets (if applicable). You can
either create an anonymized URL or include an anonymized zip file.
14. Crowdsourcing and Research with Human Subjects
Question: For crowdsourcing experiments and research with human subjects, does the paper include the full text of instructions given to participants and screenshots, if applicable, as well as details about compensation (if any)?
Answer: [N/A]
Justification: Our work did not involve crowdsourcing or research with human subjects.
Guidelines:
* •
The answer NA means that the paper does not involve crowdsourcing nor research
with human subjects.
* •
Including this information in the supplemental material is fine, but if the
main contribution of the paper involves human subjects, then as much detail as
possible should be included in the main paper.
* •
According to the NeurIPS Code of Ethics, workers involved in data collection,
curation, or other labor should be paid at least the minimum wage in the
country of the data collector.
15. Institutional Review Board (IRB) Approvals or Equivalent for Research with Human Subjects
Question: Does the paper describe potential risks incurred by study participants, whether such risks were disclosed to the subjects, and whether Institutional Review Board (IRB) approvals (or an equivalent approval/review based on the requirements of your country or institution) were obtained?
Answer: [N/A]
Justification: Our work does not involve crowdsourcing or research with human subjects.
Guidelines:
* •
The answer NA means that the paper does not involve crowdsourcing nor research
with human subjects.
* •
Depending on the country in which research is conducted, IRB approval (or
equivalent) may be required for any human subjects research. If you obtained
IRB approval, you should clearly state this in the paper.
* •
We recognize that the procedures for this may vary significantly between
institutions and locations, and we expect authors to adhere to the NeurIPS
Code of Ethics and the guidelines for their institution.
* •
For initial submissions, do not include any information that would break
anonymity (if applicable), such as the institution conducting the review.